And just because autonomous AI has become a standard tool of the current conflicts, that doesn’t mean that autonomous AI should be a standard tool in your supply chains. AI, defined properly, most definitely should, but not autonomous AI. And even then, only with human oversight!
This rant is inspired by THE PROPHET who tells us that The War in Iran is an AI War. Your Procurement and Supply Chain War Should Be as Well. And, despite parts of it appearing in LinkedIn comments, it is being expanded and reposted now to emphasize our previous article (on Friday) that essentially stated YOU SHOULD NEVER TRUST YOUR AI.
First of all, procurement and supply chain management isn’t a war. It’s a tense conflict between buyer needs and supplier leverage, but not a war.
Secondly, the fact that “AI never stops for a coffee break or to complain about leave not being granted” is not, on its own, a valid justification for using it.
Because, by the same token, it also doesn’t care if a strike accidentally hits a school and murders hundreds of innocent children. (Al Jazeera, BBC, and Haaretz)
Nor does it care if multiple civilians get killed in a drone strike just to spare a human soldier a guilty conscience, since the soldier neither ordered the killing of the target nor made the decision that resulted in civilian deaths. (NPR, The Guardian, The Times of Israel)
Given that AI has no ethics and no real intelligence to evaluate a situation beyond the data it is provided and the question it is asked, is it really good enough to plan an operation on its own? I’d say it is not. (And also that it was applied without a full understanding of its weak points and of how to use it properly.) (And if you want a great post about how critical human command decisions are, check out Michael Salehi‘s post on how the right decision always requires judgement, experience, and accountability, which an AI does not have.)
This is why Anthropic wants some safeguards, why you should too, and why you should be just as careful about where and how you use it in your supply chain. There are two realities with AI:
Properly applied augmented intelligence is a gift from heaven.
If you take the augmented intelligence approach, it can process all the data, give you recommendations, give you a synopsis of the reasoning, and allow you to dig into that reasoning, ask questions about risk and indirect ramifications, and explore the broader picture when you need to.
AI is not human, not ethical, not flawless, and not responsible.
You still need to review the synopsis, dig in when something appears to be off (and even if it’s just an uneasy feeling — your “intuition” can often be just as valid as the AI output), and verify the decision. And often these tools will allow what would take weeks to be done in minutes. But sometimes you’ll find there isn’t enough data, and you won’t be able to act confidently right away.
Now, THE PROPHET didn’t like my response, and countered with a number of questions, which I gladly answered and will repeat here because two of those questions missed the point, and including them helps illustrate what the real questions are.
“Would you take action?”: Yes!
(I don’t care if you agree or disagree with my viewpoint, or THE PROPHET‘s viewpoint, as this is not the point.)
“Would you use all tools available?”: YES!
(Again, I don’t care if you agree or disagree with my viewpoint, or THE PROPHET‘s viewpoint, as this is not the point either.)
“Would you trust the tools blindly?“: No!
“Would you rush them into deployment without proper field testing and safeguards?”: NO!
That’s the point. All the hype and promises are creating implicit trust in AI when the rule should be “Trust … But Verify!“ The omission of just one extra step, often no more than a few minutes of human review, is the difference between success and accuracy versus failure and widespread destruction. And this is true both in war and in business decisions that impact your supply chain.
This is why I continue to so strongly caution against the use of “autonomous AI” when it is largely built on systems that are flawed at the core, where hallucinations are part of the core function, and one subtle change in a prompt or query can result in a completely different output.
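The prompt-sensitivity problem above is easy to demonstrate, and one common mitigation is a consistency check: ask the same question in semantically equivalent wordings and escalate to a human when the answers diverge. This is a hedged sketch; `flaky_model` is a toy stand-in for whatever model call you actually use, and the `agree` comparison is an assumption you would tune.

```python
def consistency_check(ask_model, prompts, agree):
    """Query the model with semantically equivalent prompts; if answers diverge,
    flag for human review instead of trusting any single output."""
    answers = [ask_model(p) for p in prompts]
    baseline = answers[0]
    if all(agree(baseline, a) for a in answers[1:]):
        return baseline, True      # consistent: still verify, but lower risk
    return answers, False          # divergent: escalate to a human

# Toy stand-in model that is sensitive to wording, as LLMs can be.
def flaky_model(prompt: str) -> str:
    return "approve" if "renew" in prompt else "reject"

prompts = [
    "Should we renew the contract with supplier A?",
    "Should we extend our agreement with supplier A?",
]
result, consistent = consistency_check(flaky_model, prompts, lambda a, b: a == b)
print("consistent" if consistent else f"divergent, escalate: {result}")
```

Here the two phrasings mean the same thing, yet the toy model answers differently, which is exactly the behavior that should trigger human review rather than autonomous action.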
The reality is that, while you need modern tech platforms, constant intelligence monitoring, and pre-defined mitigation strategies just to survive, you usually don’t need AI. (Or at least not the “AI” they are selling … which, as you guessed, isn’t “AI” at all.)
What you do need to do is prepare for AI, which involves:
- getting your data under control
- building an infrastructure for connectivity, process, and data integration
- updating your processes for modern environments
- training your talent accordingly
If you do, you will find that you have:
- put data at the core of not just category strategy, but overall operations
- expanded your definition of risk to include price, partners, and related information flows
- identified where automation fits; where optimization, analytics, and machine learning fits; and where “AI” doesn’t actually add any additional value
- figured out that the best model is employees backed by augmented intelligence, working with agents whose automation privileges escalate as they learn from those humans but remain restricted in critical situations
- developed a much better understanding of multi-tier exposure
- begun the process of transitioning to a new, alert organizational state in which you are continually monitoring, optimizing, and re-planning your supply chain in response to emerging disruptive threats … and, as Koray Köse (whom we may have to start calling The Oracle, given the insightful nature of his posts) points out, this is where you need to be
… and this is everything THE PROPHET says you need. Most importantly, all of this just might be accomplished without any modern AI (and definitely no BS AI Employees) at all!
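The "escalating, but restricted" privilege model mentioned in the list above can be made concrete. The sketch below assumes illustrative tiers and action names of my own invention: an agent earns autonomy for routine actions as its human-verified track record grows, while critical actions stay human-gated no matter what.

```python
# Hard floor: these actions always require a human, regardless of track record.
CRITICAL_ACTIONS = {"terminate_supplier", "sign_contract", "payment_over_limit"}

class SupplyChainAgent:
    """Agent whose automation privileges escalate with verified performance."""

    def __init__(self) -> None:
        self.verified_successes = 0   # decisions a human reviewed and confirmed

    def record_verified_success(self) -> None:
        self.verified_successes += 1

    def autonomy_level(self) -> int:
        # Privileges escalate with the verified track record (illustrative tiers).
        if self.verified_successes >= 100:
            return 2   # may auto-execute routine actions
        if self.verified_successes >= 20:
            return 1   # may draft actions for one-click human approval
        return 0       # recommend only

    def may_auto_execute(self, action: str) -> bool:
        if action in CRITICAL_ACTIONS:
            return False              # critical actions are never automated
        return self.autonomy_level() >= 2

agent = SupplyChainAgent()
for _ in range(150):
    agent.record_verified_success()
print(agent.may_auto_execute("reorder_safety_stock"))   # routine action, earned autonomy
print(agent.may_auto_execute("terminate_supplier"))     # critical action, always human-gated
```

The key design choice is that the restriction on critical actions is checked before the autonomy level, so no amount of earned trust can route around the human.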
