Category Archives: Technology

SourceMap: Striving to Bring Supply Chain Visibility to the Masses

SourceMap is a supply chain mapping tool designed to help an organization map its end-to-end supply chain and gain critical insight into its performance, costs, sustainability, and risk. Especially risk. Most companies don’t understand the risks hidden in their supply chain — the sole-source parts, the over-dependence on high-risk geographies, or the ability of a single port strike to knock out multiple shipping lanes. (Nor do most companies understand the cost of risk, which is discussed in detail in Sourcing Innovation’s upcoming white paper, Playing With Fire, but that’s a discussion for another post.)

SourceMap, born as a research project at the MIT Media Lab to publish and measure the environmental footprint of all the products on earth, was launched as a public platform for supply chain mapping in 2009 that allowed individuals to see every aspect of a product’s life — the good and the bad. Then, in 2011, it partnered with the MIT Centre for Transportation and Logistics to pursue opportunities in automating supply chain visualization and risk management. Shortly after, the 2011 Tohoku tsunami hit, wiping out over 45,000 buildings, damaging over 144,000 more, and shutting down all of Japan’s ports (including 15 located in the disaster zone). All told, it caused over $300 billion in damage and sent shockwaves through global supply chains. Companies were scrambling to understand the impact on their supply chains, SourceMap was approached, incorporation followed, and the private sector solution was born.

Hands-down, SourceMap is the best supply chain visualization to hit the scene since Resilinc, which, in Sourcing Innovation’s view, is still the leader in Supply Chain Risk Management solutions; but if all an organization needs is visibility and Supply Chain Visualization, SourceMap is now a leading contender in that arena. SourceMap can use an organization’s ERP data, public data sources, and survey data from the organization’s suppliers, the suppliers’ suppliers, and so on down to the raw material suppliers, to create a complete point-to-point map of the supply chain that the organization can use to trace its products from source to sink on a (Google Earth) map and visually see what is happening. This is a very powerful capability that gives an organization insights into its supply chain it never had before. And just like an organization is typically shocked the first time it runs a spend analysis (we spend that much with who?!?), it is typically just as shocked when it generates a map and sees that a number of distributors and tier 1 suppliers are sourcing from, or outsourcing a significant portion of their spend to, the same tier 2 supplier — simply pushing the single-source point of failure the organization is trying to avoid one step further down the supply chain.
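To make that shared tier 2 discovery concrete, here is a minimal sketch of the idea in Python (this is an illustration only, not SourceMap’s implementation, and the supplier names and relationships are invented): walk the multi-tier supplier data and flag any tier 2 supplier that multiple tier 1 suppliers depend on.

from collections import defaultdict

# Hypothetical multi-tier relationships: who each supplier buys from.
# In practice this would be assembled from ERP data, public data sources,
# and supplier surveys, keyed on common location/identity data points.
supply_links = {
    "Distributor A": ["Tier1 X", "Tier1 Y"],
    "Tier1 X": ["Tier2 Shared Foundry", "Tier2 Plastics Co"],
    "Tier1 Y": ["Tier2 Shared Foundry"],
    "Tier1 Z": ["Tier2 Shared Foundry", "Tier2 Metals Co"],
}

# Count how many tier 1 suppliers depend on each tier 2 supplier.
tier2_dependents = defaultdict(set)
for buyer, suppliers in supply_links.items():
    if buyer.startswith("Tier1"):
        for tier2 in suppliers:
            tier2_dependents[tier2].add(buyer)

# Any tier 2 supplier feeding multiple tier 1s is a hidden single point of
# failure -- the risk was not avoided, just pushed one tier down the chain.
for tier2, dependents in tier2_dependents.items():
    if len(dependents) > 1:
        print(f"{tier2} is shared by {len(dependents)} tier 1 suppliers: {sorted(dependents)}")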

And the SourceMap solution, which only needs common location data points, can quickly import and combine whatever data sources an organization can get its hands on, and SourceMap can often create a starting supply chain map for an organization in less than an hour. It won’t be complete or perfect, but it allows the organization to quickly drill into the supply chain and see where data, and focus, is needed.

SourceMap is quickly becoming the new supply chain visibility solution to watch, and for a real in-depth analysis, Sourcing Innovation recommends the write-up that the doctor and the prophet collaborated on over on Spend Matters Pro (membership required), which provides four pages of deep insight into the solution.

Why You Need Mass Adoption Of An Optimization-Backed Sourcing Platform

Last week, in our post on why Higher Adoption is Where the True Value of Optimization Lies, we emphasized the importance of having not just optimization, but an optimization-backed sourcing platform that can be used by the most junior of buyers. We focussed on the efficiency, time savings, and value such a platform would bring, but didn’t give you any hard numbers. While hard numbers are hard to come by, SI expects that the savings that hit the bottom line from such a platform will increase by at least 150% over using stand-alone optimization, and will more than likely double what an organization would see if it just used a regular strategic sourcing platform without optimization. We know that 2.5X is not a very impressive number when vendors go around talking about 10X ROI, but the ROI that vendors promise is relative to the cost of the platform, not relative to the organization’s bottom line, and the bottom line is what really counts.

The reality is that, at the end of the day, after COGS, depreciation, taxes, etc. are factored in, a good Procurement organization might only add 2% to the bottom line. This doesn’t sound that impressive, unless the organization is a 10B organization, where 2% is 200M, in which case it’s knock-your-socks-off impressive. Now imagine if that same Procurement organization could increase its straight-to-the-bottom-line savings by 160% and show a bottom line impact of 5.2%. That’s another 320M in annual savings, for a total of 520M! That’s buy-everyone-on-the-Sourcing-team-a-custom-made-Jaguar savings, because no other initiative is going to add that much to the bottom line.

But you don’t have to be a 10B organization to see the impact. Imagine you are a mid-size organization with only 100M in annual spend. Instead of seeing an average year-over-year impact of 2M, you’d see 5.2M. If a fully burdened FTE is 200K and you have a small Procurement department of 5 people managing your spend, the department’s ROI goes from 2X to 5.2X in a single year, and that is quite significant.
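As a quick sanity check on those numbers, here is the arithmetic in a few lines of Python (the 2% and 5.2% rates come from the worked example further down this post; the FTE cost and department size are the assumptions just given):

# Bottom-line impact for the two example organizations above.
baseline_rate = 0.02    # good Procurement organization without the platform
platform_rate = 0.052   # with an optimization-backed sourcing platform

# The 10B organization
spend = 10_000_000_000
print(f"{spend * baseline_rate:,.0f}")                    # 200,000,000
print(f"{spend * platform_rate:,.0f}")                    # 520,000,000
print(f"{spend * (platform_rate - baseline_rate):,.0f}")  # 320,000,000 extra

# The mid-size organization: 100M in spend, 5 FTEs at 200K fully burdened
spend = 100_000_000
dept_cost = 5 * 200_000
print(f"{spend * baseline_rate / dept_cost:.1f}x ROI")    # 2.0x
print(f"{spend * platform_rate / dept_cost:.1f}x ROI")    # 5.2x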

So where are these, quite conservative, numbers coming from?

  • A Best In Class Organization has 80% of spend under management (Hackett, Gartner, etc.)
  • A Best in Class Organization will strategically source approximately 1/3 of that spend annually (due to resource restrictions) (Crowd Wisdom approximation used by many vendors)
  • A Best In Class Organization with stand-alone or hard-to-use optimization capability will only put the top third of complex, strategic, or high-volume spend through the optimization (Generous crowd wisdom approximation based upon SI’s interaction with optimization vendors)

As a result, (at most) one-third of one-third of four-fifths of spend gets optimized on an annual basis, which means only about 9% of spend gets optimized using strategic sourcing decision optimization to the full extent of its capability.

However, if the organization has an optimization-backed sourcing platform that is configured for one-click evaluations and automatic weighted auction awards for low-cost / standard categories,

  • 98% of spend can be under management (as it can flow through the platform as easily as it can flow through an auction or spot-buy RFP),
  • one half of that spend can be sourced annually due to efficiency gains,
  • and all of this spend will be subject to optimization.

This means that about one half of organizational spend, or roughly 48%, can get at least partially optimized on an annual basis. In other words, an organization can subject more than five times as much of its spend to optimization each year.
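Both coverage figures fall straight out of the bullet points above; a quick back-of-the-envelope check in Python:

# Traditional: stand-alone or hard-to-use optimization
traditional = 0.80 * (1 / 3) * (1 / 3)   # 80% under management, 1/3 sourced per year, top 1/3 optimized
print(f"{traditional:.1%}")              # 8.9%, i.e. about 9%

# Optimization-backed sourcing platform used by every buyer
platform = 0.98 * 0.5                    # 98% under management, half sourced each year, all optimized
print(f"{platform:.1%}")                 # 49.0%, i.e. roughly the 48% above
print(f"{platform / traditional:.1f}x")  # ~5.5x as much spend optimized annually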

The net result is that an organization that adopts an optimization-backed sourcing platform that can be used by every buyer will see at least 150% more savings hit the bottom line every year. Why?

If we look at the numbers:

  • the average return from Procurement at a world class organization is 4.7% (Hackett Group)
  • the average return on tail spend (which is never strategically sourced) is 7.1% (Hackett Group)
  • the average return from SSDO on a strategically sourced category where the full power of the solution is enabled is 12% (Aberdeen)

This leads to the following (where we assume 20% of spend is “tail spend”):

Traditional:
 9% using SSDO        @ 12.0% savings ≈ 1.1% savings
18% using SS          @  4.7% savings ≈ 0.8% savings
TOTAL ≈ 1.9% savings (call it 2%)

SSDO Platform:
38% using SSDO        @ 12.0% savings ≈ 4.5% savings
10% tail spend w/SSDO @  7.1% savings ≈ 0.7% savings
TOTAL ≈ 5.2% savings
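For anyone who wants to check the arithmetic, here is the same model in a few lines of Python (the rates are the Hackett and Aberdeen figures listed above; since the table rounds its line items, the unrounded totals land near 1.9% and 5.3%, which tells the same story):

# Savings rates cited above (Hackett Group and Aberdeen)
ssdo_rate = 0.12    # SSDO on a strategically sourced category, fully enabled
ss_rate = 0.047     # average Procurement return at a world class organization
tail_rate = 0.071   # average return on tail spend

# Traditional: 9% of spend through SSDO, 18% strategically sourced without it
traditional = 0.09 * ssdo_rate + 0.18 * ss_rate
print(f"{traditional:.2%}")                                # about 1.9% of total spend

# Platform: 38% strategic spend plus 10% tail spend, all through SSDO
platform = 0.38 * ssdo_rate + 0.10 * tail_rate
print(f"{platform:.2%}")                                   # about 5.3% of total spend

print(f"{platform / traditional - 1:.0%} more savings")    # well over 150% more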

Now, mileage will vary among organizations, but this example should make it pretty easy to see that optimization is a huge value driver that will have a significant impact on your bottom line when it is widely deployed.

So if you want to know what to look for in an optimization-backed sourcing platform, download Optimization: Higher Adoption is Where True Value Lies (registration required) today and find out what you need to take optimization from a success to a smashing success in your organization.

Technology Sustentation 80: The Cloud

As SI said in our post on technology damnation 80, software was good. Hosted ASP was better. True multi-tenant SaaS was better still. But the “cloud” is, more often than not, the one step back that follows the two steps forward.

The cloud is not a white fluffy cloud full of daydreams; it is a gathering storm cloud that could soon erupt and flood your entire operation while the hail it dispenses pummels you to a bloody pulp.

As per our damnation post, if you are not careful, you could:

  • lose your mail,
  • lose your data,
  • lose your platform, and
  • lose your customers as well as
  • lose your supply chain visibility,
  • lose your revenue stream, and
  • lose all the cash in your bank account

And you could be permanently lost at sea when the floods carry you away.

Unless, of course, you take precautions. What kind of precautions? Every kind of precaution you can take. But at a minimum:

  1. Make sure that your providers’ platforms are designed in such a way that not only is there no data cross-pollination, but that there is no access cross-pollination. This may require that the provider not only create a new instance for each client, but run it on a new virtual machine. (The database can be on one server, as long as it’s encrypted and the encryption for each client uses a unique key, so that if a hacker gets through to the database through another client’s poor security configuration, and gets all the data for that client, your data can’t be decrypted; a minimal sketch of this per-client key idea appears at the end of this post.)
  2. Make sure that the provider supports encryption across all of your data, not just parts of it, and that it is up to date (and up to snuff). Even data that might be considered inconsequential can be enough to be damaging if enough bits of it are pieced together.
  3. Make sure the provider does near-real time incremental, replicated, distributed, off-site back-ups to make sure that, in the case of hardware failure (or FBI/NSA server seizure), your data is not lost.
  4. Make sure the provider has multiple real-world data centres that the platform can be run on in case one (or more) data centres become unavailable.
  5. Make sure the provider has a distributed fault-tolerant up-time monitoring solution that can detect if an application instance becomes unavailable and restore the most recent back-up to a different data centre and do the necessary re-routings in (near) real time.

In other words, security, fault-tolerance, and distributed processing and back-up are critical. Without them, you’ll be hacked, your system will go down, and you may not get it (or even your data) back.
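To make the per-client key point in item 1 concrete, here is a minimal sketch in Python (using the third-party cryptography package and Fernet keys; it illustrates the idea only, and is not any particular provider’s implementation):

from cryptography.fernet import Fernet, InvalidToken

# Each client (tenant) gets its own encryption key, kept outside the shared database.
tenant_keys = {
    "client_a": Fernet.generate_key(),
    "client_b": Fernet.generate_key(),
}

def encrypt_for(tenant: str, plaintext: bytes) -> bytes:
    """Encrypt a record with the tenant's own key before it hits the shared database."""
    return Fernet(tenant_keys[tenant]).encrypt(plaintext)

def decrypt_for(tenant: str, ciphertext: bytes) -> bytes:
    """Decrypt a record; only the key of the tenant that owns it will work."""
    return Fernet(tenant_keys[tenant]).decrypt(ciphertext)

# Client A's data sits encrypted in the shared database.
record = encrypt_for("client_a", b"client A's confidential award data")
print(decrypt_for("client_a", record))   # client A can read its own data back

# An attacker who gets in through client B's weak configuration only holds
# client B's key, so client A's records remain opaque ciphertext.
try:
    Fernet(tenant_keys["client_b"]).decrypt(record)
except InvalidToken:
    print("client B's key cannot decrypt client A's data")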

Technology Sustentation 75: Mobile Movement (Madness)

The mobile movement, as we pointed out in technology damnation 75, is as much of a curse as it is a blessing. As we noted in our post:

  • you will be expected to work anywhere, anytime;
  • data entry will be painful as small screens, and smaller keyboards made for real mice, will be the norm (and you can thank Apple and their new mini 4″ iPhone); and
  • task time will triple as small, limited-power processors chug, chug, chug trying to deal with media-heavy websites and bloated data transfer protocols, despite the fact that
  • suppliers and customers will expect a whole new level of relationship management

So what can you do?

  • define your relationship management processes and protocols and make sure new suppliers and customers know, day one, what they can expect and the level, and kind, of service you will provide
  • limit the functionality your applications support on a mobile device to what is actually needed
  • make sure mobile applications and devices support scanning/sensor reading as much as possible (bar codes, QR codes, RFID chips, etc.); manual data entry should be web-based OCR (image, upload for server processing, user override, save); etc.
  • make sure support channels are well defined so that only people who are working or on call get contacted when requests come in — don’t automatically route a non-critical support call to the primary rep at 3 am when a secondary support rep is on call half a world away where it’s 3 pm (VOIP is a wonderful thing); a simple routing sketch follows this list
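To illustrate that last point, here is a quick sketch of time-zone-aware routing for non-critical requests in Python (the reps, time zones, and working hours are invented for illustration; a real implementation would pull them from an on-call schedule):

from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# Hypothetical support reps with their home time zones (primary listed first).
REPS = [
    {"name": "Primary Rep", "tz": "America/New_York"},
    {"name": "Secondary Rep", "tz": "Asia/Singapore"},
]

def is_awake(rep: dict, now_utc: datetime, start_hour: int = 8, end_hour: int = 20) -> bool:
    """True if it is reasonable working (or on-call) hours in the rep's local time."""
    local = now_utc.astimezone(ZoneInfo(rep["tz"]))
    return start_hour <= local.hour < end_hour

def route_non_critical(now_utc: datetime) -> str:
    """Send a non-critical request to the first rep for whom it isn't the middle of the night."""
    for rep in REPS:
        if is_awake(rep, now_utc):
            return rep["name"]
    return REPS[0]["name"]   # nobody is in working hours: fall back to the primary rep

# 3 am in New York is mid-afternoon in Singapore, so this goes to the secondary rep.
print(route_non_critical(datetime(2024, 1, 15, 8, 0, tzinfo=timezone.utc)))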

We’re stuck with these devices whether we like ’em or not, so let’s make sure we design for them appropriately and set work-life boundaries properly; otherwise, we’ll all be asking:

Can I Play With Madness?
