Category Archives: B2B 3.0

Have We Reached B2B 3.0 Yet? Part 2: B2B 2.0, A History Lesson, Continued

As per Part 1, over seven years ago Sourcing Innovation published Introducing B2B 3.0 and Simplicity for All, which is available as a free download, to help educate you on the next generation of B2B and prepare you for what comes next. The expectation was that, by now, we would be awash in B2B 3.0 (Business-to-Business 3.0), which was simply defined as the first generation of technology that actually puts business users on the same footing as consumers. But are we?

SI would like to jump right in and answer that question, but first we have to discuss B2B 2.0 and get our terminology straight before we can discuss B2B 3.0.

B2B 2.0: The “Marketplace” era

In the early aughts, thanks in part to efforts by large B2C and C2C (Consumer-to-Consumer) players like Amazon and eBay, who made great strides in bringing security, trust, and quality to online platforms, e-Commerce became a major part of the consumer world. The growth of online business in some industries was so explosive that, almost overnight, small stores and chains started suffering and going out of business. Why pay $20 for a CD that an online store would sell for $14, and ship free if you bought 4 of them?

The end result was that businesses saw the potential of the web to host large online marketplaces that addressed the content and community requirements, and a large number of B2B marketplaces and private networks sprang into existence. This included dozens of general-purpose marketplaces, including the likes of Ariba, Enporion (now GEP), Quadrem (now Ariba), and TPN Register (acquired by GXS, now OpenText GXS), which sprang onto the scene alongside dozens of vertical-specific marketplaces like Aeroxchange, ChemConnect (gone), eSourceApparel (gone), and GNX (merged with WWRE, now Global Sources). The technology was more advanced than 1.0, but it only offered basic e-Procurement features, such as catalog management, request-for-bid, simple reverse auctions, and supplier directories. B2B 2.0 expanded the marketplace for e-Procurement, as these marketplaces spurred a flurry of new market entrants (such as Emptoris, Ketera [now Deem], and SciQuest) and allowed mid-tier buyers and suppliers to get in the game. And even though dynamic content was limited and search was primitive, B2B 2.0 was made out to be a good thing.

But in the end, the gains didn’t negate the losses. Even though the marketplaces and private networks initially thrived, the high access fees became even more prohibitive as suppliers had to be on multiple networks to service their buyers and buyers had to be on multiple networks if they wanted to discover new suppliers. Again, only the e-Procurement vendors won.

Lesson learned? Private networks are redundant with the BIG network, the ONE network, the Internet. Network redundancy (not machine redundancy in data centres) is bad, especially when everyone is on the same Internet, supporting the same Internet protocol stack, and able to connect with the same open protocols.

Have We Reached B2B 3.0 Yet? Part 1: B2B 1.0, A History Lesson

Over seven years ago, Sourcing Innovation published Introducing B2B 3.0 and Simplicity for All, which is available as a free download, to help educate you on the next generation of B2B and prepare you for what comes next. The expectation was that, by now, we would be awash in B2B 3.0 (Business-to-Business 3.0), which was simply defined as the first generation of technology that actually puts business users on the same footing as consumers. But are we?

SI would like to jump right in and answer that question, but first we have to discuss B2B 1.0 and B2B 2.0 to get our terminology straight.

B2B 1.0: The “Free Network” era

In the early nineties, a time when our current hindsight would have been useful, the Internet burst onto the scene. Almost immediately, entrepreneurs saw the potential of the Internet to grow consumer-based businesses of all types, and B2C 1.0 was born. And although it was primitive by today's standards, it took mail order to a whole new level. It wasn't long before big business took note and decided that the internet would benefit them too, allowing new customers to find them and place orders, and letting suppliers participate in reverse auctions so they could serve more customers at a lower price point. B2B 1.0 arrived.

B2B 1.0 was largely powered by the "free" connectivity of the internet, as opposed to the costly EDI (Electronic Data Interchange) alternatives that ran over private networks that had to be maintained by the business. However, since bandwidth was still quite expensive (it cost thousands of dollars a month for a dedicated 1.544 Mbps T1 line, as opposed to the $100 a month you can now pay for a 100 Mbps cable modem), and since network infrastructure technology was still quite expensive (it could cost almost $10K for a multi-port enterprise router and switch), B2B 1.0 was still limited to large organizations, who nonetheless saw significant savings potential. (Considering that first-generation reverse auctions often saved Millions, what's $100K for infrastructure?)

However, while “big buyers” won big, suppliers lost bigger as they ended up having to

  • maintain expensive internet connectivity and infrastructure, which was sometimes considerably more expensive in their rural factory locations than in dense urban business centres;
  • support the different EDI and data standards required by different buyers, greatly increasing their IT support costs; and
  • maintain different catalog versions for each buyer, with different pricing, buyer SKUs, etc., further increasing their IT support costs.

And these suppliers were the lucky ones. Some suppliers didn’t get to participate at all.

In short, suppliers lost. Lucky buyers broke even. And only the first-generation enterprise e-Procurement vendors, who laughed all the way to the bank, won.

Lesson learned? Functionality, and even connectivity, is useless without content and community.

Decisions Should be Data-Derived – But They Should Not Be Big Data Driven!

In our recent post on how it's nice to see CNN run a piece that says big data is big trouble, we noted that big data is big danger because more data does not automatically translate into better decisions. Better data translates into better decisions. And often that better data comes in the form of a small set of focussed data. For example, if one is trying to determine the right set of features to include in the next version of a product, the best data points are those that represent the desires of your best current customers, who are most likely to buy the product. This is especially true if the most profitable market segment is enterprise business customers that buy thousands of licenses or units. If you only have a few dozen of these customers, those few dozen data points are more relevant than the thousands of data points you'd get from a mass-market survey, which would likely include hundreds of data points from customers who are only vaguely interested in your product (and who would likely never buy it).
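To make the point concrete, here is a minimal sketch of the idea, with made-up respondent counts and purchase volumes (nothing here is real data): weighting each survey response by the respondent's expected purchase volume tells a very different story than counting raw votes.

```python
# Hypothetical survey responses: (segment, expected_units, wants_feature).
# A few dozen enterprise accounts each buy thousands of units; thousands
# of mass-market respondents would each buy at most one.
responses = (
    [("enterprise", 5000, True)] * 20 + [("enterprise", 5000, False)] * 4
    + [("mass", 1, True)] * 200 + [("mass", 1, False)] * 1800
)

def raw_vote_share(rs):
    """Fraction of respondents who want the feature, one vote each."""
    return sum(1 for _, _, wants in rs if wants) / len(rs)

def volume_weighted_share(rs):
    """Fraction of expected unit volume behind the feature."""
    total = sum(units for _, units, _ in rs)
    return sum(units for _, units, wants in rs if wants) / total

print(f"raw vote share:        {raw_vote_share(responses):.1%}")   # ~11%
print(f"volume-weighted share: {volume_weighted_share(responses):.1%}")  # ~82%
```

On a raw head count the feature looks like a niche request, but weighted by who will actually buy, it is clearly worth building — the few dozen enterprise data points dominate.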

Data does matter. But only the right data matters. That's why only companies in the top third of their industry in the use of data-driven decision making are 5% more productive and 6% more profitable than their competitors (as per an introduction to data-driven decisions). If it were just a matter of lots of data, then all companies would be more productive and half would be noticeably more profitable than their peers.

So how do you know if the data is good? Ask the right questions. In the HBR piece, the author lists six key questions that should be asked before acting on any data:

  1. What is the data source?
  2. How well does the data sample represent the population?
  3. Does the data distribution include outliers? Do they affect the results?
  4. What assumptions are behind the analysis? Are there conditions that would render the assumptions and model invalid?
  5. What were the reasons behind selecting the data and approach?
  6. How likely is it that independent variables are actually causing changes in the dependent variable?

And the answers that are received should be relevant to the problem at hand. For example, if we go back to our product feature example, the answers received should be along the lines of:

  1. Business Customer Surveys
  2. Over 70% of the organization’s largest accounts are represented
  3. Some small customers are included as well, but they are less than 10% of respondents and do not affect the results
  4. The assumptions are that the largest accounts provide the most relevant data. Currently, major account satisfaction is good and the data can be relied on, so there are no current conditions that would invalidate the assumptions.
  5. Large corporate customers represent over 60% of the company’s profit, so focussing on their needs first was the rationale.
  6. The surveys were designed to minimize the impact of independent variables, so the likelihood is low.

In this situation, you know the data is good, the approach is good, and the assumptions are relatively sound, so you can likely count on the results. And, more importantly, the organization should act on them, because it's likely that any frequent correlation in the data supports a causal hypothesis (if you add the indicated features, then the current customer base will buy the next version) and the benefits outweigh the risks (as a sufficient sales volume will cover the R&D costs).

And, just like the HBR article says, you don’t even have to like math to make the right decision. (Although there’s no reason not to like math.)

It’s Nice To See CNN Run a Piece that Says Big Data is Big Trouble

the doctor doesn't like the phrase "big data" or the "big data" craze. First of all, as he has said time and time again, we've always had more data than we could process on a single machine or cluster, and more data than we could process in the time we wanted to process it. Secondly, and most importantly, just like the cloud is filled with hail, big data is filled with big disasters waiting to happen.

As the author of the article on the big dangers of 'big data' astutely points out, there are limits to the analytic power of big data and quantification that circumscribe big data's capacity to drive progress. Why? First of all, as the author also points out, bad use of data can be worse than no data at all. As an example, he cites a 2014 New York Times piece on Yahoo and its Chief Executive, which demonstrated the unintended consequences of trying to increase employee drive and weed out the chaff by way of scorecard-based quarterly performance reviews that limited how many people on a team could get top ratings. Instead of promoting talent and drawing talented people together, it split them up because, if you were surrounded by underperformers, you were sure to get the top score, but if you were surrounded by equals, you weren't.

This is just one example of the unintended consequences of trying to be too data driven. Another example is using average call time in a customer support centre, rather than the number of calls it takes to close a ticket, as a measure of call centre agent performance. If an agent is measured on how long she spends on the phone on average, she is going to take shortcuts to solve a customer's problem instead of getting to the root cause. For example, if your Windows PC keeps locking up every few days and a reboot fixes it, you will be told to proactively reboot every 24 hours, just to get you off the phone. But that doesn't necessarily fix the problem or guarantee that you will not have another lock-up (if the lock-up is caused by a certain combination of programs, opened at the same time, that refuse to share a peripheral device, for example). As a result, the customer will end up calling back. Or, if she can't solve your problem, you will be switched to another agent who "knows the system better". That's poor customer support, and all because you're keeping track of the time of every call and computing averages by rep and department.
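The two metrics can rank the same agents in opposite order. Here is a minimal sketch with an invented call log (agent names, ticket IDs, and durations are all hypothetical) comparing an agent who rushes callers off the phone against one who fixes root causes:

```python
from collections import defaultdict

# Hypothetical call log: (agent, ticket_id, minutes on the call).
# The same ticket_id appearing more than once means the customer called back.
calls = [
    ("quick_fix", "T1", 4), ("quick_fix", "T1", 5), ("quick_fix", "T1", 6),
    ("quick_fix", "T2", 3), ("quick_fix", "T2", 4),
    ("root_cause", "T3", 12),
    ("root_cause", "T4", 10),
]

def avg_call_time(agent):
    """Average minutes per call -- the metric that rewards shortcuts."""
    times = [m for a, _, m in calls if a == agent]
    return sum(times) / len(times)

def calls_per_ticket(agent):
    """Average calls needed to close a ticket -- rewards real fixes."""
    per_ticket = defaultdict(int)
    for a, ticket, _ in calls:
        if a == agent:
            per_ticket[ticket] += 1
    return sum(per_ticket.values()) / len(per_ticket)

# On average call time, quick_fix looks better (4.4 vs 11 minutes),
# but on calls per ticket, root_cause wins (1.0 vs 2.5 calls).
print(avg_call_time("quick_fix"), avg_call_time("root_cause"))
print(calls_per_ticket("quick_fix"), calls_per_ticket("root_cause"))
```

Note that in this toy log both agents burn the same total minutes per ticket; the "fast" agent just spreads the time (and the customer's frustration) across repeat calls.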

Big data will let us compute more accurate economic forecasts, demand trends, process averages, and so on, but, as the author keenly points out, many important questions are simply not amenable to quantitative analysis, and never will be. Where your child should go to college, how to punish criminals, and whether to fund the human genome project are just a few examples. Even more relevant are product design queries. 34% of users want feature A, 58% want feature B, and 72% want feature C, but how many want features A and B, or A and C, or B and C, or all three? And how many will be put off if the product also contains a feature they don't want, is too confusing due to too many frivolous features, or doesn't have the all-important feature D that you didn't ask about, but now have to have because your competitor does?
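The reason those percentages don't answer the question is that marginal percentages don't pin down the joint distribution. A minimal sketch, using two invented populations of 100 respondents (all counts are illustrative, reusing the 34%/58% figures for features A and B):

```python
# counts[(wants_a, wants_b)] = number of respondents in a population of 100.
# Both populations have identical marginals: 34% want A, 58% want B.
high_overlap = {(True, True): 34, (True, False): 0,
                (False, True): 24, (False, False): 42}
low_overlap  = {(True, True): 0,  (True, False): 34,
                (False, True): 58, (False, False): 8}

def marginal_a(counts):
    """How many respondents want feature A, regardless of B."""
    return sum(n for (a, _), n in counts.items() if a)

def marginal_b(counts):
    """How many respondents want feature B, regardless of A."""
    return sum(n for (_, b), n in counts.items() if b)

def want_both(counts):
    """How many respondents want A and B together."""
    return counts[(True, True)]

for name, pop in [("high overlap", high_overlap), ("low overlap", low_overlap)]:
    print(name, marginal_a(pop), marginal_b(pop), want_both(pop))
```

Both populations report exactly 34% for A and 58% for B, yet in one, all 34 A-fans also want B, and in the other, none do. Unless the survey captured the joint preferences, the separate percentages cannot tell you which product to build.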

And, even more important, McKinsey, which in 2011 claimed that we are on the cusp of a tremendous wave of innovation, productivity and growth … all driven by big data, recently had to admit that there is no empirical evidence of a link between data intensity … and productivity in specific sectors. In other words, despite all of the effort put into big data projects over the last few years, none have conclusively yielded results beyond what would have been achieved without big data.

And, most importantly, as someone who has studied chaotic dynamical systems theory, the doctor can firmly attest to the fact that the author is completely correct when he says understanding the complexity of social systems means understanding that conclusive answers to causal questions in social systems will always remain elusive. We may be able to tease out strong correlations, but correlation is not causation. (And if you forget this, you better go back and take another read through Pinky and the Brain’s lesson on statistics.)

Procurement Trend 04. Control Tower Model / Omni Channel Approach

Only one anti-trend remains. Once we finish this post, we complete our formidable burden, and hope that the sour taste in our mouths will soon depart now that we have shown those fictionally-focussed futurists in fine detail that the snake-oil trends they have been selling have no worth. We want to abash them for their apathy, but we will leave it up to LOLCat to decide their fate. While LOLCat thinks on it, he would like to point out to these Rip van Winkles that, when it comes to sleeping through life, No One Out-sleeps a Cat!

So why do these analyst catfish keep churning out the same lousy predictions year after year? Besides the fact that light rarely penetrates down to where they are, it's probably because they look around, see the laggard organizations still struggling with the best way to organize their operations, and assume they can still sell last decade's playbook in this decade's marketplace. Thus, if most organizations are still fighting to get beyond the decentralized model, then the control tower model sounds quite futurish. Plus, we have the situation where:

  • different strokes benefit different folks
    as different models work well in different circumstances
  • integrated channels result in integrated data feeds
    and more data results in better decisions
  • regional differences not only provide opportunities,
    but can hinder success with the wrong model/approach

So what does this mean?

Understand the Primary Models

There are three traditional models of Supply Management: decentralized, centralized, and center-led. In the decentralized model, there is a Supply Management team in each organizational unit responsible for purchasing for that unit. This model has advantages, primarily deep knowledge of the local supply market and needs, and deep disadvantages, primarily the inability to exploit organization-wide spend. In the centralized model, all spend is centralized through one Supply Management team. This model has its own set of advantages and disadvantages, many of them diametrically opposite to those of the decentralized model. In the center-led model, there is a central Supply Management team which defines the categories, identifies the best sourcing methods, executes the contracts, and guides each department on how to procure against the contracts. It is supposed to combine the best features of each model.

Understand where Each Model Fits

Each model has its uses. In an organization where most buys don't cross organizational units (with respect to product needs or supply base), decentralized can work. In an organization with primarily indirect spend that is common across the organization, and a strongly overlapping supply base, a centralized model is best. In an organization with a mix of common and uncommon categories and suppliers, a center-led model, where some spend is centralized and some spend is left up to the individual organizational units, is often the way to go.

Understand Center-Led vs. Center of Excellence vs. Control Tower

They are all similar, but they are not the same. Center-led is where a central organization centralizes some spend but leaves other spend up to the individual departments. A Center of Excellence may do the same thing, but it centralizes sourcing knowledge and best practices and, where appropriate, works with and guides the organizational units on decentralized spend to make sure they always apply best practices and get the best results. A Control Tower is a next generation Center of Excellence that not only manages both centralized and decentralized spend, but continually re-evaluates centralization and sourcing strategy and adapts the model with the market to generate the maximum impact for the organization.

Pick the Model that is Right for Your Organization

Arguably, the Control Tower model is best in theory, but pick the model that best fits your organizational needs based on where it is with respect to Supply Maturity.