Daily Archives: January 25, 2011

Do You Know Just How Risky Your Supply Chain Is?

Or that even seemingly unrelated natural disasters can put it on hold? Or that it’s not just unpredictable disasters, like the eruption of Eyjafjallajokull, that can bring things to a halt? Even completely predictable events like floods, which occur fairly regularly in most regions over the course of decades, can have devastating effects well beyond the coastal areas.

Consider the recent flooding in Australia’s Queensland state. It did more than just make coastal areas unusable. It also resulted in significantly decreased coal output. BHP, the world’s biggest producer, mines most of its coking coal in Queensland’s Bowen Basin at three of the world’s largest coking coal mines — Goonyella Riverside, Blackwater and Peak Downs. As a result of the flooding, all three mines were temporarily out of commission, and mining is still constrained. This has caused BHP’s coal production to fall 30%. All because of a little extra water. This is not something that would come to mind if you asked an average organization about its supply chain risks.

So do you know just how risky your supply chain is? If not, maybe it’s time you did an assessment.

Information … Information … Information

Yesterday’s post discussed the lack of realistic starting points for an average organization that wants to merge onto the value-focused path, and its need for information. The post then discussed e-RFX applications and why they are not always the answer: most are not configured to collect more than a moderate amount of data, and the information required to make the right decision might require a large amount of data to be collected.

For example, consider the information required to make the right decision in a global freight bid where the company has over 5,000 lanes across five continents that are currently being serviced, in part, by almost 500 carriers. Not only will there be a need to collect up to 1,000,000 LTL and TL bids to know what the lowest rates are, but there will be a need to collect data on capabilities (refrigerated, freezer, hazardous material, etc.), capacities, and serviced lanes. And then, once all of the information has been collected, past performance, guaranteed service levels, and (commitments to) sustainability (such as biofuels and hybrid vehicles) will have to be considered in addition to costs and on-time-delivery capabilities. And if multiple carriers are almost equal, long-term viability, strategic partnerships, and/or commitment to social responsibility might also need to be considered.
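To make the scale concrete, here is a minimal sketch in Python of what a single lane-bid record and a capability filter might look like. The names are hypothetical, not the schema of any particular e-RFX tool; multiply the idea by 5,000 lanes and 500 carriers to see the data problem.

```python
# Hypothetical lane-bid record and capability filter (illustrative only,
# not the data model of any real e-Sourcing product).
from dataclasses import dataclass, field

@dataclass
class LaneBid:
    lane_id: str                # e.g. "SYD-MEL"
    carrier: str
    mode: str                   # "LTL" or "TL"
    rate: float                 # quoted rate for the lane
    capabilities: set = field(default_factory=set)  # e.g. {"refrigerated"}

def qualified_bids(bids, lane_id, required):
    """Return bids on a lane whose carrier offers every required capability."""
    return [b for b in bids if b.lane_id == lane_id and required <= b.capabilities]

bids = [
    LaneBid("SYD-MEL", "CarrierA", "TL", 1800.0, {"refrigerated"}),
    LaneBid("SYD-MEL", "CarrierB", "TL", 1650.0, set()),
]
print([b.carrier for b in qualified_bids(bids, "SYD-MEL", {"refrigerated"})])
# ['CarrierA']  (CarrierB is cheaper but lacks the required capability)
```

Even this toy filter shows why lowest rate alone is not enough: the cheapest bid drops out as soon as one capability requirement is applied.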

All in all, this represents a significant amount of data that needs to be collected, analyzed, and distilled into useful information — data that is not even going to be collected if a firm is still using a first-generation e-Sourcing platform. This is because:

  1. Traditional RFX tools, which are now a commodity (as every provider and their dog has one — trust me), are not built to collect that much information.
  2. Most of the RFX tools that can handle that much information, typically by way of Excel import and export, are not designed with supplier usability in mind. No supplier is going to quote 5,000 lanes at multiple LTL and TL levels if it only services 3,000 of them, and 2,000 of those can be broken into 20 cross-regional groups where every lane in a group is priced the same per mile.
  3. Of the few tools that allow for generic pricing and (typically) single-dimensional overrides, most weren’t designed with the ability to easily define multiple levels of overrides or the OLAP-like navigation that’s really needed to quickly zoom in on the relevant data items (which need to be viewed or altered).
  4. And while most of the better RFX tools allow a user to define as many RFIs, RFPs, and RFQs as the user desires, these generally have to be crammed into rigid workflows that may or may not fit the scenario at hand.
  5. Plus, while most of the tools can push data out into an auction or a SIM tool (that is the foundation for SPM and/or SRM), most don’t allow data to be pulled back in, since the first generation e-Sourcing model was a linear RFX -> Auction -> Decision Optimization -> Award -> Contract Management -> SPM flow.
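To illustrate the generic-pricing-with-overrides idea from point 3, here is a toy sketch with made-up names (not any vendor's implementation): a carrier quotes one per-mile base rate, optionally overrides it for a cross-regional group, and optionally overrides that for a specific lane, with the most specific level winning.

```python
# Toy multi-level rate resolution: lane override beats group override beats
# the generic per-mile quote. Names and structure are hypothetical.
def resolve_rate(lane, miles, base_per_mile,
                 group_rates=None, lane_rates=None, lane_to_group=None):
    """Return the quoted price for a lane, checking lane, then group, then base."""
    lane_rates = lane_rates or {}
    group_rates = group_rates or {}
    lane_to_group = lane_to_group or {}
    if lane in lane_rates:                 # level 3: lane-specific flat price
        return lane_rates[lane]
    group = lane_to_group.get(lane)
    if group in group_rates:               # level 2: cross-regional group rate
        return group_rates[group] * miles
    return base_per_mile * miles           # level 1: generic per-mile quote

price = resolve_rate("BNE-SYD", 575, base_per_mile=2.10,
                     group_rates={"east-coast": 1.95},
                     lane_to_group={"BNE-SYD": "east-coast"})
print(price)  # 575 miles at the east-coast group rate, not the base rate
```

With this structure a carrier servicing 3,000 lanes might only need to enter a handful of group rates plus a few exceptions, instead of 3,000 individual quotes — which is exactly the usability gap points 2 and 3 describe.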

And then, once you get past all that, you still have to analyze the data to distill the information required to make a good award decision. Because even the best strategic sourcing decision optimization on the market will fail if it’s not provided with the right data AND the right constraints (or, depending on your choice of terminology, rules). The right constraints can only be derived by a knowledgeable individual who has the right information at her disposal.
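As a toy illustration of why constraints matter as much as data, consider awarding each lane to the cheapest bid subject to a rule capping how many lanes any one carrier may win. This is a naive greedy sketch with made-up names, not how a real decision optimization engine works, but it shows how a single constraint changes the award.

```python
# Naive greedy award under one constraint (illustrative only; real decision
# optimization solves this globally, not lane by lane).
def award_lanes(bids, max_lanes_per_carrier):
    """bids: {lane: [(carrier, cost), ...]}. Award cheapest allowed bid per lane."""
    awards, won = {}, {}
    for lane, offers in bids.items():
        for carrier, cost in sorted(offers, key=lambda o: o[1]):
            if won.get(carrier, 0) < max_lanes_per_carrier:
                awards[lane] = (carrier, cost)
                won[carrier] = won.get(carrier, 0) + 1
                break
    return awards

bids = {
    "L1": [("A", 100), ("B", 120)],
    "L2": [("A", 110), ("B", 115)],
    "L3": [("A", 105), ("B", 130)],
}
# Unconstrained, A wins all three lanes; with a cap of 2, one lane must go to B.
print(award_lanes(bids, max_lanes_per_carrier=2))
```

Note that the constraint itself (why cap a carrier at two lanes? risk? capacity? leverage?) is a business judgment — which is precisely why the right constraints can only come from a knowledgeable individual with the right information.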

So how do you get the right information? You take your sourcing to the next level. So what does this Next Generation Sourcing look like? Stay tuned.