Category Archives: Technology

The More Things Change …

… the more they stay the same … and the more relevant the past, and the lessons it teaches, become.

Ten years ago today, the doctor asked: are you doing it wrong?

Ten years later, the question is just as valid now as it was then. Because if you were doing it right, your supply chains wouldn’t be in such disarray.

Ten years ago we noted that, if you had been following the media, you knew we had reached a point where most major business publications were putting the focus on Supply Chain as your top risk and your top opportunity, and that they were preaching the following solutions to not only tame the risk but increase the opportunity.

1. Comprehensive Category Management

Nothing has changed here. One consulting firm is literally sending the same email newsletters they were sending a decade ago on the topic because it’s still relevant, and most firms are still doing it wrong.

As the doctor noted a decade ago, spot buying individual categories at market lows, or even running reverse auctions at opportune times, is not category management, not in the least — nor is running your buys through a “magic” or “delightful” intake-to-procure platform (better called “faketake,” as a colleague of mine will point out). As was said before, Category Management isn’t just about grouping all seemingly related items and running an event; it’s about grouping items with related characteristics that allow them to be sourced effectively under the same strategy — which could even be early renegotiation with an incumbent who might give you a great deal to keep you from going back to market. It’s taking a holistic strategic approach, not just mapping to UNSPSC or some out-of-the-box two-level taxonomy and running with it. And not doing it is what’s resulting in stock-outs and cost overruns. Because now it’s not just price; it’s quality and supply assurance. Especially supply assurance. Which brings us to …

2. Supply Chain Risk Monitoring

Not much has changed here, even though the technology now exists for it to change at the majority of multi-national companies. A decade ago, we noted that natural and man-made disasters devastate supply chains when they result in raw material or product unavailability for weeks or months. When a company doesn’t understand its dependence on a single source, or the risks that single source is subject to, it can, figuratively, get caught with its pants down, to say the least. That still holds true today.
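To make “dependence on a single source” concrete, here is a minimal sketch (the supplier names, the graph, and the check itself are all hypothetical) of the kind of analysis a multi-tier monitoring tool performs: walk the supply graph below a finished good and flag every node that is captive to a single upstream source.

```python
# Hypothetical multi-tier supplier map: each supplier lists its upstream sources.
# In a real deployment this data would come from a supply chain risk platform.
supply_graph = {
    "FinishedGood": ["Tier1-A"],
    "Tier1-A": ["Tier2-A", "Tier2-B"],
    "Tier2-A": ["Tier3-RawLithium"],   # single-sourced raw material
    "Tier2-B": ["Tier3-RawCopper"],
    "Tier3-RawLithium": [],
    "Tier3-RawCopper": [],
}

def single_source_risks(graph, root):
    """Return every (node, sole_source) pair at or below `root` that
    depends on exactly one upstream source."""
    flagged, stack = [], [root]
    while stack:
        node = stack.pop()
        upstream = graph.get(node, [])
        if len(upstream) == 1:
            flagged.append((node, upstream[0]))  # captive to one source
        stack.extend(upstream)
    return flagged

print(single_source_risks(supply_graph, "FinishedGood"))
```

The point of the sketch: the exposure at Tier2-A only becomes visible when you look below tier 1, which is exactly why tier-1-only monitoring leaves you blind.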

A month ago we also noted that most leading companies in the Risk Management arena are now tracking and monitoring their tier 1 supply base not only for missed deliveries, but for late shipment dates, inquiring immediately when something ships late. However, by the time a shipment is late, it’s often too late to go to another source if the reason for the lateness is the lack of an important raw material. Multi-tier monitoring is key, but most Procurement departments are only now exploring supplier risk management in their supplier management module / application, which covers tier 1 — even though we now have a number of great solutions that can monitor to at least tier 3, if not down to the source of each raw material in your supply chain. Considering that any good supplier information management solution will allow you to push in risk, compliance, performance, and visibility data, there’s no reason not to be monitoring your critical supply chains. Especially now that we can easily handle:

3. Big Data

What used to be the biggest buzzword-du-jour (before all this useless Gen-AI, desired only by Dr. Evil himself), Big Data is still desirable, but only to the extent you actually have valid, verified data. Considering that the algorithms that actually work predict demand, acquisition cost, projected sales, etc. based on trends, unverified data that isn’t actually demand, cost, or price data (or is data for the wrong product) is NOT going to be of any help.

Get a real data analysis tool, validate the data at your disposal, and use it to your advantage, no more, no less.
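As a toy illustration of “validate the data at your disposal” (the SKUs, records, and naive trend model below are all hypothetical): filter out anything that isn’t verified demand data for the product you are actually forecasting before you project anything from the trend.

```python
# Hypothetical demand records; a real pipeline would pull these from a spend cube.
records = [
    {"sku": "A-100", "month": "2024-01", "qty": 120},
    {"sku": "A-100", "month": "2024-02", "qty": 130},
    {"sku": "A-100", "month": "2024-03", "qty": -5},   # invalid: negative demand
    {"sku": "B-200", "month": "2024-03", "qty": 90},   # wrong product for this forecast
    {"sku": "A-100", "month": "2024-04", "qty": 140},
]

def validated_quantities(rows, sku):
    """Keep only verified rows for the product being forecast."""
    return [r["qty"] for r in rows if r["sku"] == sku and r["qty"] >= 0]

def trend_forecast(qtys):
    """Naive trend: last value plus the average month-over-month change."""
    deltas = [b - a for a, b in zip(qtys, qtys[1:])]
    return qtys[-1] + sum(deltas) / len(deltas)

clean = validated_quantities(records, "A-100")   # [120, 130, 140]
print(trend_forecast(clean))                     # 150.0
```

Feed the unfiltered list in and the negative row and the wrong-product row poison the trend, which is the whole point: garbage in, garbage forecast out.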

e-Procurement Implementation Success Goes Well Beyond The Basics

the doctor was quite disappointed with this article over on the World Bank Blogs on 10 success factors for implementing [an] e-Procurement System, because all of these “factors” were generic success factors for the implementation of any technical system. Let’s look at them at a high level (and direct you to the article for a description of what the requirements are if they aren’t immediately clear to you):

Governance Principles
all projects need to be managed and governed, so this is pretty much a “d’uh!”
Transparency on Legal and Regulatory Frameworks
any platform that processes any personal, payment, or classified data HAS to adhere to Legal and Regulatory frameworks of ALL countries the corporation operates in, so this is obvious for any platform that requires it
Strategy Ownership and Sustainability
it’s classic project management: no owner, and everything goes to cr@p
Implementation and Integration Challenges
preparing for this is just a given
Technical Infrastructure and SaaS-based Systems
all technology implementations need to integrate with the current infrastructure and SaaS systems that contain the necessary data, so this is pretty much a “d’uh!”
Training & Capacity Building
well, you need the capacity and the training regardless of the system being implemented
Engage Stakeholders Actively
without stakeholder support, it will be hard to get the resources for a timely, successful implementation of any technology
Align with International Standards
technology should always align with any regulatory standards in place
Clear Communication and Change Management
necessary for the success of ANY project, not just a technology project, so this is pretty much a super “D’UH!”
Data Security and Privacy
if the data is personal, payment, classified, trade secret, etc. etc. etc., then security and privacy are of more concern than the tech, so, another “d’uh!”

e-Procurement success goes beyond the basics. There are too many six, seven, and, for some multinationals locked into 5-year contracts, eight-figure acquisitions that have failed to deliver on the promises made. This is because the selection, implementation, and utilization of such systems is harder to get right than that of most back-office tech.


In our recent article on The Key to Procurement Software Selection Success: Affordable RFPs!, we noted that selecting the right vendor was paramount to success, and that a critical requirement in this selection process was a GOOD RFP.

Furthermore, that RFP needed to specify, among a host of requirements:

  • typical use cases
  • target processes
  • globalization requirements
  • data migration requirements
  • integration requirements

Why are we calling these out? Because these define the key factors for implementation success!


Key Factors are thus:

Primary Components / Modules
… that are needed to support the critical use cases and target processes, and that need to be implemented and demonstrated first
Test Cases
… that must be passed, in priority order, to ensure the use cases and target processes can be accomplished — including multi-lingual test cases that support not only the customer organization’s requirements but the suppliers’ requirements
Data Migration Requirements
spelled out in detail, as well as cut-over requirements
Cross-System Bi-Directional Integration Requirements
spelled out in minute detail, not just push to the ERP … and considerably more than just a high-level holistic strategy … when it comes to tech, the devil truly is in the details, and chaos emerges when you overlook even one


A system not utilized is a failed system, even if the implementation and integration goes as well as can be reasonably expected. Utilization is critical, especially early on, or widespread adoption will never be reached. This is why it’s paramount that the functionality required for the critical use cases be implemented and tested first so that utilization of key capabilities can begin as soon as possible, leading to adoption.

In other words, the basic checklist for technology implementation is nowhere near enough for the successful implementation of procurement technology — that success requires going deep.

The Key to Procurement Software Selection Success: Affordable RFPs!

Modern supply chains are risky. Very risky. Nothing made this fragility clearer than COVID, when the world essentially broke down due to an illogical (to the point of insanity, thank you McKinsey) over-reliance on outsourcing, especially to China. (There’s a reason that SI has been promoting near-sourcing, home-shoring, and home-sourcing for over sixteen years — because this breakdown was inevitable; the only unknown was whether geopolitical instability/war, a massive natural disaster, or a pandemic would be the first card to topple in the house of cards.)

Despite the best laid plans, and all the precautions you can implement, something will inevitably go wrong. Very wrong. And the disturbance will cost you greatly. That’s why you buy supply chain insurance, which, depending on exposure, limits of dependency, and regionalization, will cost you between 1% and 10% of the policy value (maximum claim amount). If we take 5% as an average (which is not unreasonable), that says for every $1,000,000 of at-risk inventory you need to insure (to prevent devastating loss), you are paying $50,000.
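The arithmetic, for clarity (the 5% rate is the assumed average from above):

```python
def annual_premium(insured_value, rate):
    """Premium = rate x policy value (maximum claim amount)."""
    return insured_value * rate

# 5% average rate on $1,000,000 of at-risk inventory
print(annual_premium(1_000_000, 0.05))  # 50000.0
```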

But do you know what’s just as risky as your supply chain? The investment in the technology that you use to power your supply chain. Therefore, you should do everything you can to ensure you get it right! The best way to do this is to create a good, proper RFP to help home in on software vendors with appropriate solutions that should be able to fill your need, while ensuring they have the minimum globalization, size, and services you will need to consider giving them an award.

But, as per previous articles, including our last article on why THERE ARE NO FREE RFPs!, you’re probably not capable of doing this on your own. This is because a proper RFP requires

  • understanding your current Procurement Maturity
    (and while you may understand what you’re doing, it’s doubtful you understand how you are faring against the market or best-in-class)
  • understanding your current processes (based on this) vs. your target processes (based on where you should get to within a reasonable time-frame, taking into account that The Hackett Group, based on their book of numbers, discovered that it was typically an eight-year journey to best in class for large global enterprises)
  • understanding how these translate into use cases that must be supported by technology
  • understanding what technological capabilities will be required to get you there and …
  • what additional capabilities would be beneficial to simplify your tasks, identify additional value, or help your team progress in Procurement maturity over time and …
  • understanding which types of solutions / modules on the market contain the bulk of those capabilities so you know which segment of vendors to send the RFP to
  • understanding if the backbone solutions in place are worth keeping or if they should be replaced instead of augmented (i.e. would the solution with the missing capabilities completely subsume these solutions [rendering them unnecessary], like simple RFPs in a Sourcing Suite or catalogs in a Procurement suite, or would they still be needed, like an ERP backbone)
  • understanding the globalization needs not just of the company, but the (potential) suppliers
  • understanding the services that will be required for installation, migration, and integration
  • understanding any unique requirements of the organization that will need to be addressed by a vendor (to ensure they can meet them) before negotiations can begin

and if you don’t know

  • what the state of the market is, or what best in class is
  • how your processes should be transformed to advance up the maturity curve
  • how to define the appropriate use cases
  • … and the key technology capabilities that will be required
  • … and which optional capabilities will be true value add
  • how to identify solution/module types based on these capabilities
  • which solutions you have that you should keep, and which you should replace
  • the full breadth of globalization needs across the extended enterprise
  • the full breadth of services that will be required
  • which of your organizational requirements are truly unique and need to be spelled out

then you CANNOT write a good RFP. So you really, really should pay an expert, independent advisor (or a consultancy that does not have any preferred provider partnerships) to do the appropriate Procurement and platform maturity assessments and write the RFP that you need.

Especially since this can usually be done for less than 10%, if not 5%, of the 5-year cost of the investment. (Face it, you’re going to be locked into at least three years no matter what you buy, usually five, and even if not, it’s going to be too costly to switch out even the worst solution in less than five years.) For example, as per previous Sourcing Innovation posts on how much you should pay for a starting platform, as a mid-market you would be looking at about 250K/year in license fees for a good suite across the board (120K for a starter, but that wouldn’t have all the modules or advanced capabilities where you need them), plus implementation, migration, and integration that will run you anywhere from 125K to 500K (or more) up front. Assume 250K, and this gives you a five-year baseline cost of 1.5M. 10% of that is 150K, and you can definitely get the help you need for that — and it’s a SMALL price to pay to make sure you get the acquisition of this make-or-break technology right (it can deliver a 3X to 5X+ ROI done right, and cost you Millions done wrong). (And if you’re a larger enterprise, you’d be looking at 3M to 6M for a suite for 5 years, which gives you a budget that even the Big X would be interested in, but for which they SHOULD NOT be considered, as they are all preferred implementation partners for at least one of the major suites.)
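The mid-market arithmetic above, worked out (the license fee and up-front services figures are the assumed averages from the text):

```python
def five_year_baseline(annual_license, upfront_services, years=5):
    """Baseline cost of ownership: license fees over the term plus up-front
    implementation, migration, and integration services."""
    return years * annual_license + upfront_services

baseline = five_year_baseline(250_000, 250_000)
rfp_budget = 0.10 * baseline
print(baseline, rfp_budget)  # 1500000 150000.0
```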

So if you want true success, big savings (10% for the appropriate strategic sourcing/procurement technologies), and real ROI (3X to 5X or more), put those “FREE” RFPs in the trash where they belong and find the right expert to help you create the right Affordable RFP that will ensure the successful selection that your organization needs.

Have all the Big X fallen for Gen-AI? Or is this their new insidious plan to hook you for life?

McKinsey. Accenture. Cap Gemini. KPMG. Deloitte. Kearney. BCG. etc. etc. etc.

Every single one is putting “Gen-AI” adoption in its top 10 (or top 5) strategic imperatives for Procurement and its future, claiming it’s essential for analytics (gasp) and automation (WTF?!?). One of these firms even announced it is going to train 80,000 f6ckw@ds on this bullcr@p.

It’s absolutely insane. First of all, there are almost no valid uses for Gen-AI in business (unless, of course, your corporation is owned by Dr. Evil), and even fewer valid uses for Gen-AI in Procurement.

Secondly, the “Gen” in “Gen-AI” stands for “Generative,” which literally means MAKE STUFF UP. It DOES NOT analyze anything. Furthermore, automation is about predictability and consistency, and Gen-AI gives you neither! How the heck could you automate anything? You CANNOT! Automation requires a completely different AI technology built on classical (and predictable) machine learning (where you can accurately calculate confidences and break/stop when the confidence falls below a threshold).
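A minimal sketch of what confidence-gated automation looks like (the matching rule, threshold, and field names are all hypothetical; a real system would use a calibrated classical-ML model, not a three-field comparison): the automation only proceeds when the confidence clears the threshold, and stops for human review otherwise, which is exactly the break/stop behaviour Gen-AI cannot give you.

```python
AUTO_APPROVE_THRESHOLD = 0.95   # hypothetical cutoff; tuned per process in practice

def match_confidence(invoice, purchase_order):
    """Toy confidence score: fraction of key fields that match exactly."""
    fields = ["supplier", "amount", "currency"]
    matches = sum(invoice[f] == purchase_order[f] for f in fields)
    return matches / len(fields)

def route(invoice, purchase_order):
    """Automate only above the threshold; otherwise stop and escalate."""
    conf = match_confidence(invoice, purchase_order)
    return "auto-approve" if conf >= AUTO_APPROVE_THRESHOLD else "human-review"

po  = {"supplier": "Acme", "amount": 900, "currency": "USD"}
inv = {"supplier": "Acme", "amount": 900, "currency": "USD"}
bad = {"supplier": "Acme", "amount": 950, "currency": "USD"}
print(route(inv, po), route(bad, po))  # auto-approve human-review
```

The deterministic scoring function is the point: the same inputs always yield the same confidence and the same routing decision, which is what makes the process auditable and automatable.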

Which raises the question: are they complete idiots who have completely fallen for the marketing bullcr@p? Or is this their new insidious plan to get you on a never-ending work order? After all, when it inevitably fails a few days after implementation, they have their excuses ready to go (the same excuses being given by the companies spending tens of millions on marketing), which are the same excuses that have been given to us since Neural Nets were invented: “it just needs more content for training,” “it just needs better prompting,” “it just needs more integration with your internal data sources” … rinse, lather, and repeat … ad infinitum. And, every year, it will get a few percentage points better, but if it gets only 2% better (relative) per year, and the best Gen-AI instance now scores (slightly) less than 34% on the SOTA scale, it will be (at least) 9 (NINE) years before it reaches 40% accuracy. In comparison, if you had an intern who only performed a task acceptably 40% of the time, how long would he last? Maybe 3 weeks. But the Big X know that once you sink seven (7) figures on a license, implementation, integration, and custom training, you’re hooked, and you will keep pumping in six to seven figures a year even though you should have dropped the smelly rotten hot potato the minute you saw the demo.
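For the curious, the nine-year figure above holds if the 2% yearly gain is taken as relative (compounding) improvement on the current score rather than additive percentage points; a quick check:

```python
import math

def years_to_reach(current, target, annual_relative_gain):
    """Years of compounding relative improvement needed to move a score
    from `current` to `target`: smallest n with current*(1+g)^n >= target."""
    return math.ceil(math.log(target / current) / math.log(1 + annual_relative_gain))

print(years_to_reach(34.0, 40.0, 0.02))  # 9
```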

So, maybe they aren’t stupid when it comes to Gen-AI. Maybe they are just evil because it’s their biggest opportunity to hook you for life since McKinsey convinced you that you should outsource for “labour arbitrage” and “currency exchange” (and not materials / products you can’t get / make at home) and other bullsh!t arguments that no society in the history of the world EVER outsourced for. (EVER!) Because if you install this bullcr@p and get to the point of “sunk cost”, you will continue to sink money into it. And they know it.

(Yet another reason you should be very, very careful about selecting a Big X for something that is NOT their forte.)

Remember that AI, and Gen-AI in particular, is a fallacy.

The Gen AI Fallacy

For going on 7 (seven) decades, AI cult members have been telling us that if they just had more computing power, they’d solve the problem of AI. For going on 7 (seven) decades, they haven’t.

They won’t as long as we don’t fundamentally understand intelligence, the brain, or what is needed to make a computer brain.

Computing will continue to get exponentially more powerful, but it’s not just a matter of more powerful computing. The first AI program had a single core to run on; today’s AI programs have 10,000-core super clusters. The first AI programmer had only his salary and elbow grease to code and train the model; today’s AI companies have hundreds of employees and Billions in funding, and have spent 200M to train a single model … which, upon release to the public, told us we should all eat one rock per day. (Which shouldn’t be unexpected, as the number of cores we have today powering a single model is still less than the number of neurons in a pond snail.)

Similarly, the “models” will get “better”, relatively speaking (just like deep neural nets got better over time), but if they are not 100% reliable, they can never be used in critical applications, especially when you can’t even reliably predict confidence. (Or, even worse, you can’t even have confidence the result won’t be 100% fabrication.)

When the focus was narrow machine learning and focused applications, and we accepted the limitations we had, progress was slow, but it was there, it was steady, and the capabilities and solutions improved yearly.

Now the average “enterprise” solution is decreasing in quality and application, which is going to erase decades of building trust in the cloud and reliable AI.

And that’s the fallacy. Adding more cores and more data just accelerates the capacity for error, not improvement.

Even a smart Google Engineer said so. (Source)