Category Archives: B2B 3.0

Have We Reached B2B 3.0 Yet? Part 3: B2B 3.0, A Definition

As per Part I, over seven years ago, Sourcing Innovation published Introducing B2B 3.0 and Simplicity for All, which is available as a free download, to help educate you on the next generation of B2B and prepare you for what comes next. The expectation was that, by now, we would be awash in B2B 3.0 (Business to Business 3.0), which was simply defined as the first generation of technology that actually puts business users on the same footing as consumers, but are we?

In Parts I and II we discussed the history of B2B 1.0 and B2B 2.0 and concluded that neither was enough. B2B 1.0 launched the internet era, but proved that connectivity, and even basic functionality, is useless without content (that helps buyers find what they need and lets sellers show what they provide) and community (that brings the right parties together). B2B 2.0 brought the internet era to the mid-sized business, but ultimately proved that creating private networks and marketplaces didn't add anything because, while redundancy in data centres is good, network redundancy is bad and only increases costs, not value.

That’s why we need B2B 3.0 but is it? First we need to discuss B2C 3.0.

B2C 3.0, which was kicked off by sites like Froogle (Google Product Search), PriceGrabber, and PriceWatch, allowed consumers to search and browse product listings from multiple sites. TechRepublic, CraigsList, and ComputerShopper provided the community for these consumers to discuss providers and products and find what they wanted at the price they wanted. And C2C 3.0 sites like MySpace, FaceBook, and Twitter connected more users than ever before.

B2B 3.0 is the business equivalent. It's the next generation of B2B that adds content, community, and open connectivity to B2B platforms. More specifically: open connectivity that is free for all to access; open community that allows all buyers and sellers to come together, through dynamically created virtual networks on an open, shared, secure, encryption-supporting API, to conduct business as needed; and the depth of content required to support complex direct purchases. It's what B2B 2.0 should have been, but without the unnecessary redundancy, and the unnecessary cost, that came with it.

B2B 3.0 is an open platform enabled by:

  • web services
    like Google Maps that allow supply chains to be plotted
  • intelligent agents
    that can automatically place re-orders and identify market data of interest to the buyer or supplier
  • meta-search
    that works over multiple catalogs, on multiple sites, accessed using multiple EDI, (c)XML, or other standard protocols
  • real-time collaboration
    such as instant messaging, (visual) VoIP, screen sharing, and collaborative document authoring
  • semantic technology
    that can identify news stories and reports of interest
  • mashups
    to normalize data from hundreds (or thousands) of file and data formats into a common taxonomy (see the sketch after this list)
  • analytics
    that can process, and make sense of, all of the information streams and present meaningful information and actionable insight
  • workflow
    as a good process is an effective and efficient process

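To make the meta-search and mashup bullets a little more concrete, here is a minimal Python sketch of what federated catalog search with normalization into a common taxonomy might look like on such a platform. The endpoints, field names, and JSON shapes are hypothetical placeholders, not any vendor's real API; a production platform would speak cXML/EDI over the open API and handle authentication, paging, and unit-of-measure conversion.

    # A minimal, illustrative sketch of meta-search with mashup-style normalization.
    # Every endpoint, field name, and JSON shape below is a hypothetical placeholder.
    import concurrent.futures
    import requests

    # Each supplier exposes its catalog differently; map its fields to a common taxonomy.
    SUPPLIER_FIELD_MAPS = {
        "https://catalog.supplier-a.example/search": {"sku": "item_no", "price": "unit_cost", "uom": "uom"},
        "https://catalog.supplier-b.example/items": {"sku": "sku", "price": "price", "uom": "unit"},
    }

    def search_one(endpoint, field_map, query):
        """Query one supplier catalog and normalize each hit to the shared taxonomy."""
        resp = requests.get(endpoint, params={"q": query}, timeout=10)
        resp.raise_for_status()
        return [
            {
                "supplier": endpoint,
                "sku": item.get(field_map["sku"]),
                "price": float(item.get(field_map["price"], 0)),
                "uom": item.get(field_map["uom"], "EA"),
            }
            for item in resp.json().get("items", [])
        ]

    def meta_search(query):
        """Fan the query out to every catalog in parallel and merge the normalized results."""
        results = []
        with concurrent.futures.ThreadPoolExecutor() as pool:
            futures = [pool.submit(search_one, ep, fm, query)
                       for ep, fm in SUPPLIER_FIELD_MAPS.items()]
            for f in concurrent.futures.as_completed(futures):
                results.extend(f.result())
        return sorted(results, key=lambda r: r["price"])

    # e.g. meta_search("14 AWG copper wire") would return one price-sorted list across all catalogs
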
But are we there yet? To be continued …

Have We Reached B2B 3.0 Yet? Part 2: B2B 2.0, A History Lesson, Continued

As per Part I, over seven years ago, Sourcing Innovation published Introducing B2B 3.0 and Simplicity for All, which is available as a free download, to help educate you on the next generation of B2B and prepare you for what comes next. The expectation was that, by now, we would be awash in B2B 3.0 (Business to Business 3.0), which was simply defined as the first generation of technology that actually puts business users on the same footing as consumers, but are we?

SI would like to jump right in and answer that question, but first we have to discuss B2B 2.0 and get our terminology straight before we can discuss B2B 3.0.

B2B 2.0: The “Marketplace” era

In the early naughts, thanks in part to efforts by large B2C and C2C (Consumer-to-Consumer) players like Amazon and eBay, who made great strides in bringing security, trust, and quality to on-line platforms, e-Commerce became a major part of the consumer world. The growth of online business in some industries was so explosive that, almost overnight, small stores and chains started suffering and going out of business. Why pay $20 for a CD that an online store would sell for $14 and ship free if you bought 4 of them?

The end result was that businesses saw the potential of the web to host large, on-line marketplaces that addressed the content and community requirements, and a large number of B2B marketplaces and private networks sprang into existence. This included dozens of general purpose marketplaces, such as Ariba, Enporion (now GEP), Quadrem (now Ariba), and TPN Register (acquired by GXS, now OpenText GXS), which sprang onto the scene alongside dozens of vertical-specific marketplaces like Aeroxchange, ChemConnect (gone), eSourceApparel (gone), and GNX (merged with WWRE, now Global Sources). The technology was more advanced than B2B 1.0's, but it only offered basic e-Procurement features, such as catalog management, request-for-bid, simple reverse auctions, and supplier directories. B2B 2.0 expanded the market for e-Procurement as these marketplaces spurred a flurry of new entrants (such as Emptoris, Ketera [now Deem], and SciQuest) and allowed mid-tier buyers and suppliers to get in the game. And even though dynamic content was limited, and search was primitive, B2B 2.0 was made out to be a good thing.

But in the end, the gains didn’t negate the losses. Even though the marketplaces and private networks initially thrived, the high access fees became even more prohibitive as suppliers had to be on multiple networks to service their buyers and buyers had to be on multiple networks if they wanted to discover new suppliers. Again, only the e-Procurement vendors won.

Lesson learned? Private Networks are redundant with the BIG Network, the ONE Network: the Internet. And network redundancy (unlike machine redundancy in data centres) is bad, especially when everyone is on the same internet, supporting the same internet protocol stack, and able to connect with the same open protocols.

Have We Reached B2B 3.0 Yet? Part 1: B2B 1.0, A History Lesson

Over seven years ago, Sourcing Innovation published Introducing B2B 3.0 and Simplicity for All, which is available as a free download, to help educate you on the next generation of B2B and prepare you for what comes next. The expectation was that, by now, we would be awash in B2B 3.0 (Business to Business 3.0), which was simply defined as the first generation of technology that actually puts business users on the same footing as consumers, but are we?

SI would like to jump right in and answer that question, but first we have to discuss B2B 1.0 and B2B 2.0 to get our terminology straight.

B2B 1.0: The “Free Network” era

In the early nineties, a time when our current Hindsight would have been useful, the Internet burst onto the scene. Almost immediately, entrepreneurs saw the potential of the Internet to grow consumer-based businesses of all types, and B2C 1.0 was born. And although it was primitive by today's standards, it took mail order to a whole new level. It wasn't long before big business took note and decided that the internet would benefit them too, allowing new customers to find them and place orders, and suppliers to participate in reverse auctions that allowed them to serve more customers at a lower price point. B2B 1.0 arrived.

B2B 1.0 was largely powered by the "free" connectivity of the internet, as opposed to the costly EDI (Electronic Data Interchange) alternatives that ran over private networks that had to be maintained by the business. However, since bandwidth was still quite expensive (it cost thousands of dollars a month for a dedicated 1.544 Mbps T1 line, as opposed to the $100 a month you can now pay for a 100 Mbps cable modem), and since network infrastructure technology was still quite expensive (it could cost almost $10K for a multi-port enterprise router and switch), B2B 1.0 was still limited to large organizations, which nonetheless saw significant savings potential. (Considering that first generation reverse auctions often saved millions, what's $100K for infrastructure?)

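To put the bandwidth gap in perspective (using an illustrative T1 price of $3,000 a month, since actual contracts varied): $3,000 / 1.544 Mbps works out to roughly $1,940 per Mbps per month, versus $100 / 100 Mbps = $1 per Mbps per month for today's cable connection, a difference of more than three orders of magnitude in the unit cost of bandwidth.
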
However, while “big buyers” won big, suppliers lost bigger as they ended up having to

  • maintain costly internet connectivity and infrastructure, which was sometimes considerably more expensive in their rural factory locations than in dense urban business centres
  • support the different EDI and data standards required by different buyers, greatly increasing their IT support costs, and
  • maintain different catalog versions for each buyer, with different pricing, buyer SKUs, etc., further increasing their IT support costs.

And these suppliers were the lucky ones. Some suppliers didn’t get to participate at all.

In short, suppliers lost. Lucky buyers broke even. And only the first-generation enterprise e-Procurement vendors, who laughed all the way to the bank, won.

Lesson learned? Functionality, and even connectivity, is useless without content and community.

Decisions Should be Data-Derived – But They Should Not Be Big Data Driven!

In our recent post on how it's nice to see CNN run a piece that says big data is big trouble, we noted that big data is big danger because more data does not automatically translate into better decisions. Better data translates into better decisions. And often that better data comes in the form of a small set of focussed data. For example, if one is trying to determine the right set of features to include in the next version of a product, the best data points are those that represent the desires of your best current customers who are most likely to buy the product. This is especially true if the most profitable market segment is enterprise business customers that buy thousands of licenses or units. If you only have a few dozen of these customers, those few dozen data points are more relevant than the thousands of data points you'd get from a mass-market survey, which would likely include hundreds of data points from customers who are only vaguely interested in your product (and who would likely never buy it).

Data does matter. But only the right data matters. That's why it's only companies in the top third of their industry in the use of data-driven decision making that are 5% more productive and 6% more profitable than their competitors (as per an introduction to data-driven decisions). If it were just a matter of lots of data, then all companies would be more productive and half would be noticeably more profitable than their peers.

So how do you know if the data is good? Ask the right questions. In the HBR piece, the author lists six key questions that should be asked before acting on any data:

  1. What is the data source?
  2. How well does the data sample represent the population?
  3. Does the data distribution include outliers? Do they affect the results?
  4. What assumptions are behind the analysis? Are there conditions that would render the assumptions and model invalid?
  5. What were the reasons behind selecting the data and approach?
  6. How likely is it that independent variables are actually causing changes in the dependent variable?

And the answers that are received should be relevant to the problem at hand. For example, if we go back to our software / hand-held device example, the answers received should be along the lines of:

  1. Business Customer Surveys
  2. Over 70% of the organization’s largest accounts are represented
  3. Some small customers are included as well, but they are less than 10% of respondents and do not affect the results
  4. The assumption is that the largest accounts provide the most relevant data. Major account satisfaction is currently good and the data can be relied on, so there are no conditions that would invalidate the assumption or the model.
  5. Large corporate customers represent over 60% of the company’s profit, so focussing on their needs first was the rationale.
  6. The surveys were designed to minimize the impact of independent variables, so the likelihood is low.

In this situation, you know the data is good, the approach is good, and the assumptions are relatively sound, so you can likely count on the results. And, more importantly, the organization should act on them because the strong correlation in the data supports a causal hypothesis (if you add the indicated features, then the current customer base will buy the next version) and the benefits outweigh the risk (as a sufficient sales volume will cover the R&D costs).

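For those who want to operationalize checks like questions 2 and 3, here is a minimal Python sketch; the account names, scores, and thresholds are made up purely for illustration and are not from the survey described above:

    # Illustrative checks for question 2 (representativeness) and question 3 (outlier impact).
    # All accounts, scores, and thresholds below are hypothetical.
    import statistics

    largest_accounts = {"ACME", "Globex", "Initech", "Umbrella", "Hooli",
                        "Stark", "Wayne", "Wonka", "Tyrell", "Cyberdyne"}
    respondents = {"ACME", "Globex", "Initech", "Umbrella", "Hooli",
                   "Stark", "Wayne", "Wonka", "SmallCo1", "SmallCo2"}
    feature_scores = [8, 9, 7, 8, 9, 8, 7, 2, 9, 8]  # 1-10 desirability ratings for a proposed feature

    # Question 2: how well does the sample represent the population we care about?
    coverage = len(largest_accounts & respondents) / len(largest_accounts)
    print(f"Largest-account coverage: {coverage:.0%}")  # e.g. aim for well over 70%

    # Question 3: are there outliers, and do they change the answer?
    mean_all = statistics.mean(feature_scores)
    sigma = statistics.stdev(feature_scores)
    inliers = [s for s in feature_scores if abs(s - mean_all) <= 2 * sigma]
    print(f"Mean rating: {mean_all:.2f} with outliers, {statistics.mean(inliers):.2f} without")
    # If the two means diverge materially, the outliers are affecting the results.

If the coverage check fails, or the trimmed and untrimmed means tell different stories, that is a prompt to go back to questions 1, 4, and 5 before acting on the numbers.
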
And, just like the HBR article says, you don’t even have to like math to make the right decision. (Although there’s no reason not to like math.)

It’s Nice To See CNN Run a Piece that Says Big Data is Big Trouble

the doctor doesn’t like the phrase “big data” or the “big data” craze. First of all, as he has said time and time again, we’ve always had more data than we could process on a single machine or cluster and more data than we could process in the time we want to process it in. Secondly, and most importantly, just like the cloud is filled with hail, big data is filled with big disasters waiting to happen.

As the author of the article on the big dangers of 'big data' astutely points out, there are limits to the analytic power of big data and quantification that circumscribe big data's capacity to drive progress. Why? First of all, as the author also points out, bad use of data can be worse than no data at all. As an example, he cites a 2014 New York Times piece on Yahoo and its Chief Executive, which demonstrated the unintended consequences of trying to increase employee drive and weed out the chaff by way of scorecard-based quarterly performance reviews that limited how many people on a team could get top ratings. Instead of promoting talent and bringing talented people together, the reviews split them up because, if you were surrounded by underperformers, you were sure to get the top score, but if you were surrounded by equals, you weren't.

This is just one example of the unintended consequences of trying to be too data driven. Another example is using average call time in a customer support centre versus number of calls to close a ticket as a measure of call centre agent performance. If an agent is measured on how long she spends on the phone on average, she is going to try to take shortcuts to solve a customer’s problem instead of getting to the root cause. For example, if your Windows PC keeps locking up every few days and a re-boot fixes it, you will be told to proactively reboot every 24 hours just to get you off the phone. But that doesn’t necessarily fix the problem or guarantee that you will not have another lock-up (if the lock-up is a certain combination of programs opened at the same time that refuse to share a peripheral device, for example). As a result, the customer will end up calling back. Or, if she can’t solve your problem, you will be switched to another agent who “knows the system better”. That’s poor customer support, and all because you’re keeping track of the average time of every call and computing averages by rep and department.

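To make the metric-design point concrete, here is a toy Python sketch (the agents, tickets, and call durations are invented) showing how the same call log can rank the agents in opposite order depending on which of the two measures you pick:

    # Toy illustration: the same call log scored two ways ranks agents oppositely.
    # Agents, tickets, and durations are made up.
    from collections import defaultdict
    from statistics import mean

    # (agent, ticket_id, call_minutes): agent A rushes callers off the phone,
    # agent B takes longer but resolves the root cause on the first call.
    calls = [
        ("A", "T1", 4), ("A", "T1", 5), ("A", "T1", 6),  # three short calls to close T1
        ("A", "T2", 5), ("A", "T2", 4),
        ("B", "T3", 12),                                  # one long call closes T3
        ("B", "T4", 14),
    ]

    durations = defaultdict(list)
    tickets = defaultdict(set)
    for agent, ticket, minutes in calls:
        durations[agent].append(minutes)
        tickets[agent].add(ticket)

    for agent in sorted(durations):
        avg_call = mean(durations[agent])
        calls_per_ticket = len(durations[agent]) / len(tickets[agent])
        print(f"Agent {agent}: avg call {avg_call:.1f} min, {calls_per_ticket:.1f} calls per resolved ticket")

    # Agent A "wins" on average call time (4.8 min vs 13.0) but needs 2.5 calls per ticket;
    # Agent B "loses" on call time but resolves every ticket in a single call.
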
Big data will let us compute more accurate economic forecasts, demand trends, process averages, and so on, but, as the author keenly points out, many important questions are simply not amenable to quantitative analysis, and never will be. Where your child should go to college, how to punish criminals, and whether to fund the human genome project are just a few examples. Even more relevant are product design queries. 34% of users want feature A, 58% want feature B, and 72% want feature C, but how many want features A and B, or A and C, or B and C, or all three? And how many will be put off if the product also contains a feature they don't want, is too confusing due to too many frivolous features, or doesn't have the all-important feature D that you didn't ask about, but now has to have because your competitor offers it?

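To see why the aggregate percentages alone can't answer those joint-preference questions, consider a toy Python illustration. The 50 respondents below are invented, constructed only so that the marginals match the hypothetical 34% / 58% / 72% figures above:

    # Toy illustration: aggregate percentages (34% want A, 58% want B, 72% want C)
    # cannot tell you how many want A and B, or all three; you need respondent-level data.
    from itertools import combinations

    # Each respondent's set of desired features (50 invented survey responses).
    responses = (
        [{"A", "B", "C"}] * 10 +   # 10 want everything
        [{"A", "C"}] * 5 +
        [{"A", "B"}] * 2 +
        [{"B", "C"}] * 12 +
        [{"B"}] * 5 +
        [{"C"}] * 9 +
        [set()] * 7
    )

    n = len(responses)
    for feature in "ABC":
        share = sum(feature in r for r in responses) / n
        print(f"want {feature}: {share:.0%}")

    for combo in list(combinations("ABC", 2)) + [("A", "B", "C")]:
        share = sum(set(combo) <= r for r in responses) / n
        print(f"want {' and '.join(combo)}: {share:.0%}")

    # With the same 34% / 58% / 72% marginals, a different set of responses could put the
    # "want all three" share anywhere from 0% to 34%, which is exactly why the aggregate
    # percentages alone cannot answer the question.
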
And, even more important, McKinsey, which in 2011 claimed that we are on the cusp of a tremendous wave of innovation, productivity and growth … all driven by big data, recently had to admit that there is no empirical evidence of a link between data intensity … and productivity in specific sectors. In other words, despite all of the effort put into big data projects over the last few years, none have conclusively yielded results beyond what would have been achieved without big data.

And, most importantly, as someone who has studied chaotic dynamical systems theory, the doctor can firmly attest to the fact that the author is completely correct when he says understanding the complexity of social systems means understanding that conclusive answers to causal questions in social systems will always remain elusive. We may be able to tease out strong correlations, but correlation is not causation. (And if you forget this, you better go back and take another read through Pinky and the Brain’s lesson on statistics.)