Category Archives: Supply Chain

Why Your Standard Sourcing Solution Doesn’t Work For Direct

Too many of you have been there. You sign that seven-figure deal for that end-to-end Source-to-Pay suite, spend another seven figures and 18 months integrating it with the ERP, PLM, AP, BI, and existing Legal CR solutions, and then try to source your first NPI project natively only to … fail. Why is that?

They just weren’t built for direct.

And it’s not just something you can add in later. If the platform wasn’t designed from the ground up for direct sourcing, there’s zero chance it will ever do a decent job at it. (And, FYI, the majority of the S2P suites the big analyst firms are drooling over in their annual quadrants and waves started out as simple indirect Sourcing or Procurement tools.) People who don’t understand the nature of software don’t get this, but software has to be constructed like a building. You might hear vendors and techies throw around “MVC”, which stands for Model-View-Controller, when they talk about how new and well-architected their solution is, but that just means the solution is built in a maintainable, web-friendly way for what, and only what, it was initially designed to do.

It all comes down to the data model and the software architecture of the controller, and neither can be a black box. The data model has to be designed from the ground up to support bill of materials and direct sourcing and procurement data requirements. The controller has to provide the infrastructure to support the complexity of the application that is required. For those who don’t understand software, I like to put it this way. If you pour the foundation for a two-story house, and buy wooden beams for all of your structure and supports, you can’t build a 10-story apartment building. You need a foundation for an apartment building and steel and concrete supports. (Even though you can theoretically build a 10-story structure on a two-story foundation if you have the right steel and supports, it won’t be stable. The slightest tremor on the Richter scale [which might not even be detectable by a human] or a strong wind will send it crashing down.) You need both. And just as you can’t replace the foundation under a standing building or swap out its entire support structure in real life, you can’t do it in code. You have to rebuild, usually from scratch.
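To make the foundation argument concrete, here’s a minimal sketch (in Python, with purely hypothetical class and field names) of why an indirect data model can’t simply be patched for direct: the indirect world is a flat catalog, while direct requires a recursive bill of materials baked into the foundation.

```python
from dataclasses import dataclass, field

# Typical indirect foundation: a flat catalog item. There is nowhere to
# hang a bill of materials, engineering revisions, or approved sources.
@dataclass
class IndirectItem:
    sku: str
    description: str
    unit_price: float
    category: str  # e.g., a UNSPSC code

# A direct foundation must be recursive from day one: any part can be an
# assembly of other parts, with quantities per parent, revisions, and a
# constrained set of approved (not freely substitutable) sources.
@dataclass
class Part:
    part_number: str
    revision: str
    approved_sources: list[str] = field(default_factory=list)
    bom: list[tuple["Part", float]] = field(default_factory=list)  # (child, qty per parent)
```

Retrofitting the second model onto a platform built around the first touches every query, every screen, and every workflow — which is exactly the rebuild-from-scratch problem described above.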

So why weren’t they built for direct? Well, there are a number of reasons (besides the fact that they wanted to get a product to market fast and/or just weren’t smart enough to build a direct sourcing solution). They include:

  1. direct material sourcing is hard
  2. substitution is not guaranteed
  3. demand aggregation is not straightforward (see the sketch after this list)
  4. delivery time guarantees and on-time arrival are significantly more important
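To see why points 2 and 3 are hard, consider what demand aggregation actually requires in direct: rolling finished-good forecasts down through every level of every BOM, without blindly merging “similar” parts, since substitution is an engineering decision, not a string match. A minimal sketch, reusing the hypothetical Part model from the earlier sketch:

```python
from collections import defaultdict

def explode_demand(part: Part, qty: float, totals: dict) -> None:
    """Recursively roll demand for a part down through its bill of materials."""
    totals[part.part_number] += qty
    for child, qty_per_parent in part.bom:
        explode_demand(child, qty * qty_per_parent, totals)

def aggregate_demand(plan: list) -> dict:
    """Aggregate component demand across all finished goods in the plan.
    Note what this does NOT do: it never merges two 'similar' part numbers,
    because substitution is not guaranteed in direct sourcing."""
    totals = defaultdict(float)
    for finished_good, forecast_qty in plan:
        explode_demand(finished_good, forecast_qty, totals)
    return dict(totals)
```

An indirect tool that aggregates demand by summing line items in a category will never produce these numbers, because the component demand only exists after the BOM explosion.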

To understand these, and learn about the rest of the reasons the majority of sourcing solutions were not built for direct, dive into Standard Sourcing Technology Solutions Don’t Work for Direct – Part One and Standard Sourcing Technology Solutions Don’t Work for Direct – Part 2 over on Supply Chain Matters.

One Supply Chain Misconception That Should Be Cleared Up Now

This was originally posted on May 14, 2024. It’s being reposted because this definitely needs to be cleared up before the new year (due to the constant proliferation of AI, which is, when all is said and done, just another technology).

Not that long ago, Inbound Logistics ran a similarly titled article quoting a large number of CXOs who made some really good observations on common misconceptions (and you should check out the article in full, as a number of the respondents made some very good points on those observations).

The misconceptions included statements that supply chains should:

  • reduce cost and/or track the most important metric of cost savings
  • accept negotiations as a zero-sum game
  • model supply chains as linear (progression from raw materials to finished goods)
  • … and made up of planning, buying, transportation, and warehousing silos
  • … and each step is independent of the one that precedes and follows it
  • accept they will continue to be male dominated
  • become more resilient by shifting production out of unfriendly countries to friendly countries
  • expect major delays in transportation
  • … even though traditional networks are the best, even for last-mile delivery
  • accept truck driver shortage as a systemic issue
  • accept the blame when anything in them goes wrong
  • only involve supply chain experts
  • run on complex / resource intensive processes
  • … and only be optimal in big companies
  • … which can be optimized one aspect at a time
  • press pause on innovation or redesign or growth in a down market
  • be unique to a company and pose unique challenges only to that company
  • not be sustainable as that is still cost-prohibitive
  • see disruption as an aberration
  • return to (the new) normal
  • use technology to fix everything
  • digitalize as people will become less important with increasing automation and AI in the supply chain

And these are all very good points, as these are all common misconceptions that the doctor hears far too often (and if you go through enough of the Sourcing Innovation archives, it should become clear why), but none of them is the biggest, although the last one gets pretty close.

 

THE BIGGEST SUPPLY CHAIN MISCONCEPTION

We Can Use Technology to Do That!

the doctor DOES NOT care what “THAT” is: you cannot use technology to do “THAT” 100% of the time in a completely automated way. Never, ever, ever, regardless of what the technology is. No technology is perfect, and every technology invented to date is governed by a set of parameters that define the state it can operate effectively in. When that state is invalidated, because one or more assumptions or requirements cannot be met, it fails. And a HUMAN has to take over.

Even though really advanced EDI/XML/e-Doc/PDF invoice processing can automate the processing of the more-or-less 85% of invoices that come in complete and error-free, and automate the completion and correction of the next 10% to 13%, the last 2% to 5% will have to be corrected by a human (and sometimes even negotiated with the supplier). And this is technology we’ve been working on for over three decades! So you can just imagine the typical automation rates you can expect from newer technology that hasn’t had as much development.
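To put those rates in perspective, a back-of-the-envelope calculation (the function is hypothetical; the rates are the ones quoted above):

```python
def invoices_needing_humans(monthly_invoices: int,
                            straight_through: float = 0.85,
                            auto_corrected: float = 0.13) -> int:
    """How many invoices per month still require a human, given the share
    processed error-free and the share auto-completed/corrected."""
    human_share = 1.0 - straight_through - auto_corrected
    return round(monthly_invoices * human_share)

# Even at the optimistic end of the ranges above, 100,000 invoices a month
# still leaves ~2,000 that a human must correct (or negotiate).
print(invoices_needing_humans(100_000))  # -> 2000
```

And that’s the mature technology; for the newer stuff, assume the human share is a multiple of that. Especially when you consider the next biggest misconception.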

Don’t Abuse Lean and Mean — The Four Horsemen of the Shipocalypse Don’t Need Any Help!

If you are in Procurement or Logistics, you know that the time of cheap, fast, and reliable, which we had for almost two decades, is now long gone and likely never to return. That is because the four horsemen have turned their attention to global trade … specifically, global logistics … and have brought:

  • war: the conflict in the Red Sea, one of the two most important waterways in the world, has made most transport through it almost impossible
  • famine: the droughts in Panama, home to the other of the two most important waterways in the world, have reduced the canal’s capacity by at least 1/3 for at least 1/3 of the year
  • pestilence: plague has returned, taking down the necessary workers (and closing the necessary ports) with it
  • death: corporate greed and union response have stepped in here to bring certain death to global supply chains if things don’t change:
    • oil prices: the more they go up, the more unaffordable our dirty ocean freight becomes
    • limited capacity: greedy corporations scrapped ships during the pandemic for insurance claims, sometimes ships that hadn’t even made a single voyage … and now that they’ve learned they can raise prices to up to 10X pre-pandemic rates for a single container during peak season, and the richer (luxury good) companies will still pay those rates, they have no incentive to bring capacity back
    • union demands: inflation has been rampant, workers have been impacted, and they want their pre-pandemic buying power … and, as I’ve noted before, labour unrest and strikes are now among the biggest risks in your global supply chain

As a result, the last thing you want to do is help the horsemen bring your supply chain to a halt, but that’s exactly what you keep doing day in and day out as you keep pursuing, and applying, lean, mean, and JIT (just-in-time) where it doesn’t belong.

As noted by the author of a recent LinkedIn article on how you have (less than) two weeks to stave off supply chain chaos, we’re at the point where a one-day stop in any part of the supply chain takes one week to recover from, a one-week stop takes one month to recover from, and a one-month stop totally f*cks us for a year! (Since the effects are not linear but exponential!) And it’s all your fault.
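A simple way to see why recovery times balloon like this: when you run lean, there’s almost no spare capacity left to work off the backlog a stop creates. A minimal sketch (the 87.5% utilization figure is an assumption chosen to reproduce the one-day-to-one-week figure; real multi-tier cascades make it worse, hence the exponential):

```python
def recovery_days(stop_days: float, utilization: float = 0.875) -> float:
    """Days to clear the backlog created by a full stop, assuming the only
    capacity available to work it off is the headroom (1 - utilization)."""
    backlog = stop_days * utilization   # demand that went unserved
    headroom = 1.0 - utilization        # spare capacity per day
    return backlog / headroom

print(recovery_days(1))   # -> 7.0  (one-day stop, one week to recover)
print(recovery_days(7))   # -> 49.0 (and cascades across tiers stretch it further)
```

Every point of utilization you squeeze out in the name of lean shrinks the headroom denominator, and the recovery time explodes.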

Lean and mean was supposed to be about efficiency in manufacturing and the elimination of waste, not slashing inventory to dangerous levels, not slashing capacity to dangerous levels, and it was certainly NOT meant to be used by idiot MBAs (which, in this case, stands for Masters of Business Annihilation) with no concept of what the corporation does, running global corporations off of spreadsheets alone!

So stop applying it to inventory and capacity! Thank you.

Have You Brought Your Supply Chain Planning Out of the Middle Ages?

Back in the 1930s, the dark ages of computing began, starting with the Telex messaging network in 1933. Beginning as an R&D project in 1926, it became an operational teleprinter service, operated by the German Reichspost (under the Third Reich — remember we said “dark ages”). With a speed of 50 baud, or about 66 words per minute, it was initially used for the distribution of military messages, but eventually became a world-wide network of both official and commercial text messaging that survived into the 2000s in some countries. A few years later, in 1937, Bell Labs’ George Stibitz built the “Model K” adder, the first proof of concept for the application of Boolean logic to computer design. Two years later, the Bell Labs CNC (Complex Number Calculator) was completed. In 1941, the Z3, using 2,300 relays, was constructed; it could perform floating-point binary arithmetic with a 22-bit word length and execute aerodynamic calculations. Then, in 1942, the ABC (Atanasoff-Berry Computer) was completed, and was seen by John Mauchly, who went on to co-invent the ENIAC, the first general-purpose computer, completed in 1945.

Three years later, in 1948, Frederic Williams, Tom Kilburn, and Geoff Tootill developed the Small-Scale Experimental Machine (SSEM), the first digital, electronic, stored-program computer to run a computer program, consisting of a mere 17 instructions! A year later, we saw the modem, which allowed computers to communicate through ordinary phone lines. Originally developed for transmitting radar signals, the modem was adapted for computer use four years later, in 1953. That same year (1949) also saw the EDSAC, the first practical stored-program computer to provide a regular computing service.

A year later, in 1950, we saw the introduction of magnetic drum storage, which could store 1 million bits, a previously unimagined amount of data (and roughly twice what Gates allegedly once said anyone would ever need), though nothing by today’s standards. Then, in 1951, the US Census Bureau got the Univac 1, and the end of the dark ages was in sight. Then, in 1952, only two years after the magnetic drum, IBM introduced high-speed magnetic tape, which could store 2 million digits per tape! In 1953, Grimsdale and Webb built a 48-bit prototype transistorized computer that used 92 transistors and 550 diodes. Later that same year, MIT created magnetic core memory. Almost everything was in place for the invention of a computer that didn’t take a whole room. In 1956, MIT researchers began experimenting with direct keyboard input to computers (which, up to then, could only be programmed using punch cards or paper tape). A prototype of a minicomputer, the LGP-30, was created at Caltech that same year. A year later, in 1957, FORTRAN, one of the first third-generation computing languages, was developed. Early magnetic disk drives were invented in 1959. And 1960 saw the introduction of the DEC PDP-1, one of the first general-purpose minicomputers. A decade later saw the first IBM computer to use semiconductor memory. And one year later, in 1971, we saw one of the first memory chips, the Intel 1103, and the first microprocessor, the Intel 4004.

Two years later, NPL and Cyclades started experimenting with internetworking with the European Informatics Network (EIN), and Xerox PARC began linking Ethernets with other networks using its PUP protocol. And the Micral, based on the Intel 8008 microprocessor, one of the earliest non-kit personal computers, was released. The next year, in 1974, the Xerox PARC Alto was released, and the end of the dark ages was finally at hand. In 1976, we saw the Apple I, and in 1981 we saw the first IBM PC, and the middle ages began as computing was now within reach of the masses.

By 1981, before the middle ages began, we already had GAIN Systems (1971), SAP (1972), Oracle (1977), and Dassault Systemes (1981), four (4) of the top fourteen (14) supply chain planning companies according to Gartner in their 2024 Supply Chain Planning Magic Quadrant (Challengers, Leaders, and Dassault Systemes). In the 1980s, we saw the formation of Kinaxis (1984), Blue Yonder (1985), and OMP (1985). Then, in the 1990s, we saw Arkieva (1993), Logility (1996), and John Galt Solutions (1996). That means ten (10) of the top fourteen (14) supply chain planning solution companies were founded before the middle ages ended in 1999 (and the age of enlightenment began).

Tim Berners-Lee invented the World Wide Web in 1989, the first browser appeared in 1990, the first cable internet service appeared in 1995, Google appeared in 1998, and Salesforce, considered to be one of the first SaaS solutions built from scratch, launched in 1999. At the same time, we reached an early majority of internet users in North America, ending the middle ages and starting the age of enlightenment, as global connectivity was now available to the average person (at least in a first world country).

Only e2Open (2000), RELEX Solutions (2005), Anaplan (2006), and o9 Solutions (2009) were founded in the age of enlightenment (but not the modern age). In the age of enlightenment, we left behind on-premise and early single client-server applications and began to build SaaS applications using a modern SaaS MVC architecture where requests came in, were directed to the machine running the software, which computed the answers and sent them back. This allowed for rather fault-tolerant software: if hardware failed, the instance could be moved; if an instance failed, it could just be redeployed with backup data. It was true enlightenment. However, not all companies adopted multi-tenant SaaS from day one; only a few providers did in the early days. (So even if your SCP company began in the age of enlightenment, it may not be built on a modern multi-tenant cloud-native true SaaS architecture.) This was largely because there were no real frameworks to build and deploy such solutions on (and Salesforce literally had to build their own.)

However, in 2008, Google launched its cloud, and in 2010, one year after the last of the top 14 supply chain planning companies was founded, Microsoft launched Azure. The age of enlightenment came to an end and the modern age began, as there were now multiple cloud-based infrastructures available to support cloud-native true multi-tenant SaaS applications (no DC operational knowledge required), making it easy for any true SaaS provider to develop these solutions from the ground up.

In other words, not one company behind a Gartner-recognized top supply chain planning solution was founded in the modern age. (Moreover, if you look at the niche players, only one of the six was founded in the age of enlightenment; the rest are also from the middle ages.)

So why is this important?

  • If the SCP platform core was architected back in the day of client-server, and the provider did not rearchitect it for true multi-tenancy, even if the vendor wrapped this core in a VM (Virtual Machine), put it in a Docker container, and put it in the cloud, it’s still a client-server application at the core. This means it has all the limits of client-server applications. One client per server. No scalability (beyond however many cores and how much memory the server can support).
  • If the platform core was architected such that each module, which runs in its own VM, requires a complete copy of the data to function, that’s a lot of data replication required to run the platform, especially if it has 12 separate modules. This can greatly exacerbate the storage requirements, and thus the cost.
  • But that’s not the big problem. The big problem is that models constructed on a traditional client-server architecture were designed to run only one scenario at a time, and can only do so if a complete copy of the data is available. So if you want to run multiple models, multiple scenarios for a model, or both, you need multiple copies of the module, each with their own data set for each model scenario you want to run. This not only exacerbates data requirements, but compute requirements as well, as the quick calculation below shows. (This is why many providers limit how many models you can have and scenarios you can run, as their cloud compute costs skyrocket due to the inefficiency in design and data storage requirements.)

    And while there is no such thing as a truly optimal supply chain plan, since you never know all the variables in advance, there are near-optimal, fault-tolerant plans that, with enough scenarios, can be identified (by building up a picture of what happens at different demand levels, supply levels, transportation times, etc.), letting you select the one that balances cost savings, quality, expected delivery time, and risk at levels you are comfortable with.
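The replication blow-up called out above is easy to quantify. A toy calculation with assumed figures (the dataset size and the module, model, and scenario counts are all illustrative):

```python
def replicated_storage_gb(dataset_gb: float, modules: int,
                          models: int, scenarios_per_model: int) -> float:
    """Storage needed when every module and every model scenario requires
    its own full copy of the data, as in the architecture described above."""
    return dataset_gb * modules * models * scenarios_per_model

# A 50 GB dataset, 12 modules, 5 models, 20 scenarios per model:
print(replicated_storage_gb(50, 12, 5, 20))  # -> 60,000 GB of copies
# A shared-data multi-tenant design stores the 50 GB once.
```

The compute bill scales the same way, which is exactly why vendors cap your models and scenarios.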

That’s the crux of it. If you can’t run enough scenarios across enough models to build up a picture of what happens across different possibilities, you can’t come up with a plan that can withstand typical perturbations, and definitely can’t come up with a plan that can be rapidly put into place to deal with a major demand fluctuation, supply fluctuation, or an unexpected supply chain event.
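What does “enough scenarios” look like? In miniature: sample the uncertain variables, score every candidate plan across the samples, and pick the plan whose tail risk you can live with. A hypothetical sketch (the cost model, distributions, and plan fields are all illustrative, not any vendor’s method):

```python
import random

def plan_cost(plan: dict, demand: float, lead_time_days: float) -> float:
    """Toy cost model: base cost, plus a stock-out penalty when realized
    demand exceeds the plan's coverage, plus an expedite cost per transit day."""
    shortfall = max(0.0, demand - plan["coverage"])
    return plan["base_cost"] + 25.0 * shortfall + plan["expedite_rate"] * lead_time_days

def pick_robust_plan(plans: list, runs: int = 10_000) -> dict:
    """Score each plan across sampled futures; return the one with the most
    tolerable tail (95th-percentile) cost, not just the best average."""
    scored = []
    for plan in plans:
        costs = sorted(
            plan_cost(plan,
                      demand=random.gauss(1_000, 150),                  # uncertain demand
                      lead_time_days=max(0.0, random.gauss(30, 10)))    # uncertain transport
            for _ in range(runs)
        )
        scored.append((costs[int(0.95 * runs)], plan))
    return min(scored, key=lambda s: s[0])[1]

# Two hypothetical plans: cheap and lean vs. buffered and resilient.
plans = [
    {"base_cost": 10_000, "coverage":   950, "expedite_rate": 50.0},
    {"base_cost": 12_000, "coverage": 1_200, "expedite_rate": 10.0},
]
print(pick_robust_plan(plans))  # the buffered plan usually wins on tail risk
```

A middle-ages architecture that needs a fresh data copy per scenario makes those 10,000 runs per plan prohibitively expensive; a modern one makes them routine.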

So if you want to create a supply chain plan that can enable supply chain success, make sure you’ve brought your supply chain planning out of the middle ages (through the age of enlightenment) and into the modern age. And we mean you. If you chose a vendor a decade ago and are resisting migration to a newer solution, including one offered by that vendor, because you spent years, and millions, tightly integrating it with your ERP solution, then you’re likely running on a single-tenant SaaS architecture at best, and a nicely packaged client-server architecture otherwise. You need to upgrade … and you should do it now! (We won’t advise you here, as we don’t know all of the vendors in the SCP quadrant well enough, but we know some, including those that have recently acquired newer, age of enlightenment and even modern age solutions, and know that some still have old tech on old stacks that they are maintaining because of their install base. For your own good, don’t be the company stalling progress!)

The More Things Change …

… the more they stay the same … and the more relevant the past, and the education it provides, becomes.

Ten years ago today, the doctor asked are you doing it wrong?

Ten years later, the question is just as valid now as it was then. Because if you were doing it right, your supply chains wouldn’t be in such disarray.

Ten years ago we noted that, if you’ve been following the media, you know that we have reached a point where most major business publications are now putting focus on Supply Chain as your top risk and your top opportunity and that they have been preaching the following solutions to not only tame the risk but increase the opportunity.

1. Comprehensive Category Management

Nothing has changed here. One consulting firm is literally sending the same email newsletters they were sending a decade ago on the topic because it’s still relevant, and most firms are still doing it wrong.

As the doctor noted a decade ago, spot buying individual categories at market lows or even running reverse auctions at opportune times is not category management, not in the least — nor is running your buys through a “magic” or “delightful” intake-to-procure platform (better called “faketake”, as a colleague of mine likes to point out). As was said before, Category Management isn’t just about grouping all seemingly related items and running an event, it’s grouping items that have related characteristics that allow the items to be sourced effectively under the same strategy — which could even be early renegotiation with an incumbent who might give you a great deal to keep you from going back to market. It’s taking a holistic strategic approach, not just mapping to UNSPSC or some out-of-the-box 2-level taxonomy and running with it. And not doing it is what’s resulting in stock-outs and cost overruns. Because now, it’s not just price, it’s quality and supply assurance. Especially supply assurance. Which brings us to …

2. Supply Chain Risk Monitoring

Not much has changed here, even though the technology now exists for it to change at the majority of multi-national companies. A decade ago, we noted that natural and man-made disasters devastate supply chains when they result in raw material or product unavailability for weeks or months. When a company doesn’t understand its dependence on a single source, or the risks that single source is subject to, it can figuratively get caught with its pants down, to say the least. That still holds true today.

A month ago we also noted that most leading companies in the Risk Management arena are now tracking and monitoring their tier 1 supply base for not only missed deliveries, but late shipment dates, and inquiring immediately when something is late shipping. However, by the time a shipment is late, it’s often too late to go to another source if the reason for the lateness is the lack of an important raw material. Multi-tier monitoring is key, but most Procurement departments are only now exploring supplier risk management in their supplier management module / application, which is tier 1 — even though we now have a number of great solutions that can monitor to at least tier 3, if not down to the source of each raw material in your supply chain. Considering that any good supplier information management solution will allow you to push in risk, compliance, performance, and visibility data, there’s no reason not to be monitoring your critical supply chains.
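In miniature, the multi-tier idea looks like this: walk the supply network below tier 1 and flag every node that bottlenecks on a single source. A minimal sketch (the data structure and supplier names are purely illustrative, not any particular vendor’s):

```python
# Each node maps to the suppliers it depends on (tier 1, tier 2, ...).
network: dict[str, list[str]] = {
    "us":      ["tier1_a", "tier1_b"],
    "tier1_a": ["tier2_x"],             # single-sourced below tier 1!
    "tier1_b": ["tier2_x", "tier2_y"],
    "tier2_x": [],
    "tier2_y": [],
}

def single_source_risks(network: dict, root: str = "us") -> set:
    """Flag every node below the root that depends on exactly one supplier."""
    risks, seen, stack = set(), set(), [root]
    while stack:
        node = stack.pop()
        if node in seen:
            continue
        seen.add(node)
        deps = network.get(node, [])
        if node != root and len(deps) == 1:
            risks.add(node)
        stack.extend(deps)
    return risks

print(single_source_risks(network))  # -> {'tier1_a'}
# Note that tier2_x also feeds BOTH tier 1 suppliers -- a shared upstream
# dependency that tier-1-only monitoring will never see.
```

Especially now that we can easily handle: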

3. Big Data

What used to be the biggest buzzword-du-jour (before all this useless Gen-AI, desired only by Dr. Evil himself), Big Data is still desirable, but only to the extent that you actually have valid, verified data. Considering that the algorithms that actually work predict demand, acquisition cost, projected sales, etc. based on trends, unverified non-demand data, or cost and price data for the wrong product, is NOT going to be of any help.
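The point in miniature: validate before you project. A hypothetical sketch that discards obviously invalid observations before fitting a naive linear trend (a real tool would keep the time index of dropped points; this toy re-indexes for brevity):

```python
def validated_trend(history: list) -> float:
    """Least-squares slope (demand change per period) over validated data.
    Garbage in, garbage out: None and negative readings are discarded."""
    clean = [x for x in history if x is not None and x >= 0]
    if len(clean) < 2:
        raise ValueError("not enough valid data to project a trend")
    n = len(clean)
    mean_t = (n - 1) / 2
    mean_x = sum(clean) / n
    num = sum((t - mean_t) * (x - mean_x) for t, x in enumerate(clean))
    den = sum((t - mean_t) ** 2 for t in range(n))
    return num / den

print(validated_trend([100, 104, None, 109, -1, 115]))  # bad points dropped -> 5.0
```

Run the same fit over the raw series, bad points and all, and the projection is worthless — which is the whole point.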

Get a real data analysis tool, validate the data at your disposal, and use it to your advantage, no more, no less.