The optimization era is finally beginning!

In a recent article, Koray Köse states that the EU just killed global supply chain optimization.

When, actually, they just ushered in the real optimization era.

If you are a true multi-national, as Koray has said, you have to pick 2 of the 3 options available since you cannot simultaneously satisfy the US CHIPS Act, EU IAA origin/low-carbon requirements, and Chinese local content rules. So you have to decide which 2 options are the most valuable to you (based on costs and revenue opportunity in each market). That’s an expected profit optimization based on predicted sale prices, with localized supply chain optimizations for the cost computation.

So you have to run 3 different sets of scenarios against different assumptions and Pareto efficiencies — and as humans we just can’t do that, and today’s AI can’t do that either (despite the over-hyped claim to the contrary). You need optimization to pick/justify your options, and then ongoing optimization to keep costs, and revenue, in line with prediction as global events force you to reroute regionalized and localized supply chains, substitute materials due to shortages, etc.
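To make the shape of the decision concrete, here is a minimal sketch of the first step, enumerating the three 2-of-3 compliance combinations and ranking them by expected profit. All the market names, revenues, and compliance costs below are hypothetical placeholders for illustration, not real figures, and a real model would layer on the scenario assumptions and Pareto trade-offs described above:

```python
from itertools import combinations

# Hypothetical per-market figures (illustrative only): expected revenue
# if you can serve the market, and the extra supply chain cost of
# complying with that market's rules.
markets = {
    "US_CHIPS": {"revenue": 120.0, "compliance_cost": 35.0},
    "EU_IAA":   {"revenue": 100.0, "compliance_cost": 25.0},
    "CN_LOCAL": {"revenue": 90.0,  "compliance_cost": 40.0},
}

def expected_profit(combo):
    """Expected profit from serving exactly this pair of markets."""
    return sum(markets[m]["revenue"] - markets[m]["compliance_cost"]
               for m in combo)

# Enumerate all 2-of-3 combinations and rank them by expected profit.
scenarios = sorted(
    (expected_profit(c), c) for c in combinations(markets, 2)
)
best_profit, best_combo = scenarios[-1]
print(best_combo, best_profit)
```

In practice each `expected_profit` call would itself be the output of a full localized supply chain optimization run under multiple scenario assumptions, which is exactly why humans can’t do this by inspection.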

What was killed was the simple concept of global optimization that was relatively easy to do without optimization (and what passed as optimization for the past 25 years). Up until now, the reality was that, if you had even a few constraints, and the ability to do simple math, you could quickly eliminate the most expensive suppliers and the suppliers that couldn’t meet your constraints, and then, using your constraints, cherry pick the lowest or second-lowest cost supplier/distributor, and come up with a solution that was within 1% to 2% of theoretical optimal, but that was actually more optimal in practice as it was more stable and easier to maintain.
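That “good enough” approach can be captured in a few lines. This is a sketch with made-up supplier data and constraint thresholds (none of it from a real sourcing event): filter out the suppliers that violate your constraints, then cherry-pick the cheapest that remains:

```python
# Hypothetical supplier records (illustrative only).
suppliers = [
    {"name": "A", "unit_cost": 9.50,  "lead_time_days": 30, "capacity": 5000},
    {"name": "B", "unit_cost": 8.75,  "lead_time_days": 60, "capacity": 8000},
    {"name": "C", "unit_cost": 11.20, "lead_time_days": 20, "capacity": 3000},
    {"name": "D", "unit_cost": 9.10,  "lead_time_days": 25, "capacity": 6000},
]

# The "few constraints" the simple approach relied on.
MAX_LEAD_TIME = 45   # days
MIN_CAPACITY = 4000  # units

# Eliminate suppliers that can't meet the constraints.
feasible = [s for s in suppliers
            if s["lead_time_days"] <= MAX_LEAD_TIME
            and s["capacity"] >= MIN_CAPACITY]

# Cherry-pick the lowest-cost feasible supplier.
winner = min(feasible, key=lambda s: s["unit_cost"])
print(winner["name"])  # "D" in this made-up data
```

Note that the globally cheapest supplier (B) is eliminated on lead time, and the greedy pick still lands within a couple of percent of the unconstrained minimum, which is exactly the point made above.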

Optimization is only needed when you need to make choices that can’t be made without considering multiple sub-cases, regionalizations, and localizations — and this is exactly what this messed up world has given us!

It becomes even more important if you are a true multi-national with business in, and government commitments in, the US, EU, and China. You have to adhere to all of the rules globally, but you can’t with any one product formulation, so you have to create at least 2 different products, figuring out which 2 of the 3 combinations are easiest AND cheapest to make, where to make them, and how to supply them to the countries you serve (noting there will be one country each mix cannot be imported into). This requires a host of scenarios to be run before a selection is made, and a host of models to be continually run during production and distribution to ensure everything stays aligned with changing market conditions.

So while the classic optimization vendors who can’t do anything more than minimally constrained global optimization are now dead, it’s finally opened up the era of real optimization. The question is, what vendors are going to step up to fill the void?

While Your Supply Chains Are Impacted by War, They Are Not At War!

And just because autonomous AI has become a standard tool of the current conflicts, that doesn’t mean that autonomous AI should be a standard tool in your supply chains. AI, defined properly, most definitely should, but not autonomous AI. And even then, only with human oversight!

This rant is inspired by THE PROPHET who tells us that The War in Iran is an AI War. Your Procurement and Supply Chain War Should Be as Well. And, despite parts of it appearing in LinkedIn comments, it is being expanded and reposted now to emphasize our previous article (on Friday) that essentially stated YOU SHOULD NEVER TRUST YOUR AI.

First of all, procurement and supply chain management isn’t a war. It’s a tense conflict between buyer needs and supplier leverage, but not a war.

Secondly, the fact that “AI never stops for a coffee break or to complain about leave not being granted” is not, on its own, a valid justification for using it.

Because, by the same token, it also doesn’t care if a strike accidentally hits a school and murders hundreds of innocent children. (Al Jazeera, BBC, and Haaretz)

Nor does it care if multiple civilians get killed in a drone strike just to relieve a human soldier of a guilty conscience as they didn’t order the killing of the target and make the decision that resulted in civilian deaths. (NPR, The Guardian, The Times of Israel)

Given that AI has no ethics and no real intelligence to evaluate a situation beyond data it is provided and the question it is asked, is it really good enough to plan an operation on its own? I’d say it is not. (And also that it was applied without a full understanding of its weak points and how to use it properly.) (And if you want a great post about how critical human command decisions are, check out Michael Salehi’s post and how the right decision always requires judgement, experience, and accountability — which an AI does not have.)

This is why Anthropic wants some safeguards, why you should too, and why you should be just as careful about where and how you use it in your supply chain. There are two realities with AI:

Properly applied augmented intelligence is a gift from heaven.

If you take the augmented intelligence approach, it can process all the data, give you recommendations, give you a synopsis of the reasoning, and allow you to dig into that reasoning, ask questions about risk and indirect ramifications, and explore the broader picture when you need to.

AI is not human, not ethical, not flawless, and not responsible.

You still need to review the synopsis, dig in when something appears to be off (and even if it’s just an uneasy feeling — your “intuition” can often be just as valid as the AI output), and verify the decision. And often these tools will allow what would take weeks to be done in minutes. But sometimes you’ll find there isn’t enough data, and you won’t be able to act confidently right away.

Now, THE PROPHET didn’t like my response, and countered with a number of questions, which I gladly answered and will repeat here because two of those questions missed the point, and including them helps illustrate what the real questions are.

“Would you take action?”: Yes!
(I don’t care if you agree or disagree with my viewpoint, or THE PROPHET‘s viewpoint, as this is not the point.)

“Would you use all tools available?”: YES!
(Again, I don’t care if you agree or disagree with my viewpoint, or THE PROPHET‘s viewpoint, as this is not the point either.)

“Would you trust the tools blindly?”: No!

“Would you rush them into deployment without proper field testing and safeguards?”: NO!

That’s the point. All the hype and promises are resulting in an implicit trust of AI when it should be “Trust … But Verify!“. It’s often the omission of just one extra step, usually just a few minutes of extra human review, that makes the difference between success and accuracy on the one hand and failure and widespread destruction on the other. And this is true both in war and in business decisions that impact your supply chain.

This is why I continue to so strongly caution against the use of “autonomous AI” when it is largely built on systems that are flawed at the core, where hallucinations are part of the core function, and one subtle change in a prompt or query can result in a completely different output.

The reality is that, while you need modern tech platforms, constant intelligence monitoring, and pre-defined mitigation strategies just to survive, you usually don’t need AI. (Or at least not the “AI” they are selling … which, as you guessed, isn’t “AI” at all.)

What you do need to do is prepare for AI, which involves:

  • getting your data under control
  • building an infrastructure for connectivity, process, and data integration
  • updating your processes for modern environments
  • training your talent accordingly

You will then find that you have

  • put data at the core of not just category strategy, but overall operations
  • expanded your definition of risk to include price, partners, and related information flows
  • identified where automation fits; where optimization, analytics, and machine learning fit; and where “AI” doesn’t actually add any additional value
  • figured out that the best model is employees backed by Augmented Intelligence, supported by agents with escalating automation privileges (still restricted in critical situations) as they learn from those humans
  • developed a much better understanding of multi-tier exposure
  • begun the process of transitioning to a new, alert, organizational state where you are continually monitoring, optimizing, and re-planning your supply chain in response to emerging disruptive threats … and, as Koray Köse (who we may have to start calling The Oracle due to the insightful nature of his posts) points out, this is where you need to be

… and this is everything THE PROPHET says you need. Most importantly, all of this just might be accomplished without any modern AI (and definitely no BS AI Employees) at all!

The Office Of Procurement

In a dark office basement
Stale air all around
Dank smell of old HVAC
Rising up from the ground
Up the stairs in the distance
I see a shimmering light
My eyes grow heavy and my sight grows dim
I have to stop for the night

Then she stands in the doorway
I hear the mission bell
And I am thinking to myself
This could be heaven or this could be hell
Then she lights up a candle
And she shows me the way
There are voices in the sub-basement
I think I hear them say

Welcome to the Office of Procurement
Such a lovely place (such a lovely place)
Where we fall from grace
Plenty of room in the Office of Procurement
Any time of year (any time of year)
You can find me here

My mind is cost-savings-twisted
Can’t buy the Mercedes-Benz, uh
Sales got a lot of pretty, pretty toys
I work weekends
How they spend without limits
It just blows my mind
While they spend in excess
I have to count the dimes!

So I called up the Vendor
“Can you spare me some yield”
He said, “We haven’t had those margins here since
Woodstock claimed the field”
And still those voices are calling from far away
Wake you up in the middle of the night
Just to hear them say

Welcome to the Office of Procurement
Such a lovely place (such a lovely place)
Where we fall from grace
They’re livin’ it up at the Office of Procurement
What a nice surprise (what a nice surprise)
Bring your alibis

LED Lights on the ceiling
The Prosecco on ice
And they say, “We are all just prisoners here
Of our own device”
And in the master’s chambers
We gather for the feast
We stab it with our steely knives
But we just can’t kill the beast

Last thing I remember
I was running for the door
I had to find the staircase back
To the place I was before
“Relax”, said the Big Boss
“We are programmed to receive
You can check out any time you like
But you can never leave”

What’s Wrong With 22% of Organizations? Why Do They Trust AI?

In a recent Horses for Sources piece on The HFS AI Trust Curve: AI isn’t failing … leadership is, the byline is that 78% of organizations do not trust their AI.

What the h3ll? 100% of organizations should not trust their AI when

  1. only 6% of organizations are seeing success (MIT, McKinsey) and
  2. there is no true Artificial Intelligence.

As a result, AI should NOT be trusted!

However, there is AI that should be deployed: properly designed adaptive robotic automation, Machine Learning, and appropriately gated and guard-railed AI that sends exceptions to humans when the rules don’t cover the situation, when the gaps are beyond what should be dealt with automatically with no approved precedents, or when the only resolution you can trust is a human one. While it might not be 100% perfect, it can still be applied with confidence because the guardrails will ensure no significant failures.
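The gating pattern described above can be sketched as a thin wrapper. The class name, rule check, and confidence threshold below are assumptions for illustration, not any vendor’s actual design: the automated path only executes when the rules cover the case and confidence is high, and everything else lands in a human review queue:

```python
from dataclasses import dataclass, field

@dataclass
class GatedAutomation:
    """Rule-gated automation that escalates uncovered cases to humans."""
    confidence_threshold: float = 0.95
    escalations: list = field(default_factory=list)

    def rules_cover(self, case):
        # Placeholder rule check (an assumption): only invoice matches
        # under a limit are considered covered by approved precedents.
        return case["type"] == "invoice_match" and case["amount"] <= 10_000

    def handle(self, case):
        if self.rules_cover(case) and case["confidence"] >= self.confidence_threshold:
            return f"auto-executed {case['id']}"
        self.escalations.append(case)  # a human deals with the exception
        return f"escalated {case['id']} to human review"

gate = GatedAutomation()
print(gate.handle({"id": 1, "type": "invoice_match", "amount": 420,
                   "confidence": 0.99}))
print(gate.handle({"id": 2, "type": "contract_change", "amount": 50_000,
                   "confidence": 0.99}))
```

The design choice that matters is that escalation, not execution, is the default: the system has to positively prove a case is covered before it is allowed to act on its own.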

In other words, while I don’t agree that Agentic AI should be embraced to make decisions, because IBM had it right back in 1979:

a computer can never be held accountable, therefore a computer must never make a management decision
 

I do agree that the vast majority of back office tasks are just bit pushing and can be appropriately defined with flexible, parameterized rules, with machine learning that learns the tolerances over time. This means that agentic AI should be widely applied throughout the back office, and that organizations that don’t embrace this level of AI are going to fall behind. But the trust in technology should not extend to decision making. Just decision execution.

And if 78% of organizations don’t trust their agentic systems to execute decisions, then that is a problem — they are going to fall behind, they won’t embrace SaaS (Software-as-a-Service) where it makes sense, their overhead costs will stay high in a tight economy, and they’ll get crushed by competitors who can stay lean and actually sell.

In other words, despite HFS’ implications, organizations should NEVER trust Agentic AI to make decisions, but they absolutely need to trust the AI to execute the decision. If they don’t, they’re in trouble.

Part of the problem might be the framing of the last step of the current HFS Enterprise Adoption Journey.

Stage 1: Can the AI Model Work?
This is where you start. You have to find a viable model.

Stage 2: Do we Believe the Inputs?
This is where you progress to. You need valid inputs.

Stage 3: Will People Act on it?
This is the next step. If you don’t have organizational readiness, the initiative has failed before it begins.

Stage 4: Is the AI allowed to influence outcomes?
Since there is no such thing as Artificial Intelligence, and a computer should never make a decision, the AI should never be allowed to influence outcomes. It should INFORM outcomes. It’s a slight difference, but an important one. Moreover, it doesn’t really affect how the AI should be implemented. You’re still implementing with the goal that the AI will eventually automate at least 99% of all instances of the task(s) it is designed to execute; the only difference is that you are deciding what to do with an exception and training the AI to execute your decisions, not being trained by it to accept anything it recommends as gospel.

This minor change creates the trust matrix you adopt, and puts you on the path to proper Agentic AI automation that will allow your workforce to be up to 10X as productive. Augmented Intelligence, be it in-house or through SaaS, is the true future. The tech is there for many tasks now, and you don’t have to wait for a promise that won’t materialize within our lifetime.

Now is NOT A Great Time to Buy (Part 3)

Standalone “Intake to Nowhere”, “Classic Onboarding and Supplier Management”, “Predictive” Analytics, “Contract” AI, “Agentic” AI or Classic Mega-Suites … until 2029

Yesterday we reminded you that while you need intake and orchestration, you need supplier intelligence, you need predictive analytics, you need AI-based contract analytics, and you need “Agentic” AI that executes (but does not make) decisions, you should not buy them standalone, at least not now, and you definitely shouldn’t buy a classic mega-suite.

While all of the solutions we have tackled so far are currently over-priced, Agentic AI, which is the new hype, is the most over-priced offering of them all, especially with the consistent over-promising by these new generation vendors that are promising BS “AI Employees” while delivering task automation that is about as reliable as a chocolate teapot where consistent, dependable execution is concerned. Now, some of these vendors will figure out that you need constrained, double guard-railed, multi-agent systems with human monitoring and exception intervention, eventually deliver reliable augmented intelligence systems that make an average employee super human, and be worth it. But most of these vendors will simply try to out-prompt each other through custom clod and chat, j’ai pété wrappers, cr@p out at about 80% to 95% reliability depending on the task, never be trustworthy, and never be worth it. Since these just started to hit the big time, with ridiculous over-funding, in the past year or two, it will be three more years before the dust truly settles and 2029 before you want to make any long-term bets.

Plus, if you know the real history of AI, which is probably older than your grandfather FYI (with the first algorithm to be awarded the title developed 70 years ago), you know that it’s usually close to two decades before a new algorithm is mature enough, and understood well enough, with real, solid, mathematical measures of reliability, for mass, unmonitored, industrial use. And it’s typically at least a decade before it’s ready for leaders to apply in industry for monitored, targeted use. The first LLMs hit the scene in 2018. That means 2029 is also the year they will finally start to be reliable for a certain (but small) set of tasks in a certain (but small) set of domains. They will still hallucinate more than an LSD-loving dead-head, but by then we’ll have much better detection methodologies and confidence measurements and will actually be able to trust the results that get through the multi-layered security gates we’ll finally be able to build with more understanding.

And yes, as we’ve said twice already, you need this tech. But buying “best of breed” will only “bleed your cash in the best way possible” with little measurable return.

But don’t return to a “classic mega-suite” either. These are now more over-priced than ever. First of all, as we’ve discussed many times on this site, unless you are a Fortune 1000/Global 3000 multi-national with extensive, and complex, source-to-pay needs, you don’t need to pay millions of dollars a year for a suite when an 80% mid-market solution at 250K a year will do the trick. (See our piece on how much you should outlay for ADVANCED Source to Pay.)

Not only do most organizations only have a few categories where advanced technologies are needed, and usually only in one or two of the modules the mega-suite sells, but most of their categories are so straightforward that even BoB mid-market solutions present not just an 80%+, but a 90%+, solution. Plus, modern ARPA and appropriately focussed Agentic solutions are allowing mid-sized organizations to cobble together “good enough” solutions from low-cost 80% point solutions for 10% of the cost of a mega suite that gets them started on their journey, allowing them to upgrade to better solutions as they need, and only as they need.

This is putting severe cost pressure on the mega-suites, which are going to have to admit that most of their solutions, workflows, and UIs are over a decade old and not worth the premium they once charged. For organizations that truly need these solutions, if you wait a year or two, the suites from vendors that aren’t aggressively updating their solutions (because they were purchased by PE firms at too high a valuation and are thus being forced to cost-cut to meet ridiculous sales targets) will soon be priced at what they’re worth, and you’ll get an annual license for less than half of what they are charging today, with all the functionality you need to boot!

So, at the end of the day, while you need a solid Procurement solution that comes with a modern intake front-end, has orchestration at the core, provides you supplier intelligence, integrates the analytics you need, helps you with your contracts and their processes (to the extent you actually need that help), and allows for adaptive robotic process automation for all your well defined tasks (and provides the data foundation for “Agentic” AI if you have valid applications where such technology will actually bring value), you don’t need to overpay for it. And you definitely don’t need to pay the double to quadruple price tags that current mega-suites are charging.

But if you can find what you need, at a fair price tag, and you buy that, you buy real value that will appreciate with time because it will do what you need it to do, at a fair price, and that’s the only way you save time and money with ProcureTech. Getting what you need, when you need it, at a fair price point. You know, classic Procurement!

Remember that.