
Sam Altman is definitely the P.T. Barnum of our age …

But to repeat claims (as per this Futurism article) that he’s the Bernie Madoff or Sam Bankman-Fried of our age, without proving he has the IQ he says he has (which some of us don’t believe he has), or providing evidence that he’s the world’s biggest sociopath (which is what you’d have to be to knowingly defraud major investment funds of hundreds of billions of dollars, funds that likely hold the retirement savings of hundreds of millions of people), seems just a little unfair. After all, if Sam can barely code and misunderstands basic machine learning concepts, which I totally believe (as that would seem to be a fundamental requirement for believing AI actually works and is actually capable of intelligence in its current form), that would seem to indicate his IQ is on the low side and that he thus genuinely believes his AI works and is actually intelligent.

If this is the case, then even though all of his investors will most likely eventually lose Billions (and likely Tens of Billions, and maybe Hundreds of Billions) of dollars on “AI” that will never work, it’s not fraud because he might actually be dumb enough to believe every word of what he’s selling. Fraud, like many major US crimes, requires intent (and, in Sam’s case, would require understanding what his firm’s offering actually does vs. what he seems to believe it does).

18 U.S. Code § 1341 starts off with “Whoever, having devised or INTENDing to devise any scheme or artifice to defraud, or for obtaining money or property by means of false or fraudulent pretenses …”. He didn’t devise the scheme of raising venture capital and private equity, so that doesn’t apply. If he believes his garbage actually delivers intelligence (even though it doesn’t), and will work better with bigger models and better data centres funded by the money he’s trying to raise (even though it won’t), he’s not intending to defraud either. Which means that he’s not a Madoff (who devised a Ponzi scheme with the intent to defraud) or a Sam Bankman-Fried (who willfully misused crypto funds to prop up his hedge fund and pay personal debts).

He’s just a showman peddling his digital puppet theatre, blissfully unaware of how bad it is, and if you’re dumb enough to fall for it, that’s on you, not him!

If you’re looking for real fraud, maybe look to your federal government?

PS: I never thought I’d feel the need to defend an individual who I see as one of the biggest scourges of the digital age! But when there are a lot of individuals out there actively defrauding consumers with knowledge and intent every single day and getting away scot-free, without any effort whatsoever to even formally recognize the fraud, that was a really unfair byline.

STOP USING AI FOR WRITING NOW! (BEFORE IT’S TOO LATE FOR YOU!)

If you think AI writing is becoming excellent, that’s the SIGN you should STOP using AI for writing immediately.

The reasons you think this are multifold, and none of them are good.

1) AI is too agreeable (and sycophantic)

(Source)

2) This increases your dependence on it

(Source)

3) Which leads not only to cognitive decline

(Source)

4) but cognitive surrender

(Source)

And as for the “I know how to use it, I’m in control” argument, that’s all BS. It’s an illusion because frequent use of AI BREAKS THE MOST RATIONAL OF THINKERS!

(Source)

You might think you’re guiding it, but it’s brainwashing you to accept without question the same derivative cr@p it always spits out because that’s all it can do. Remember, just because the LLM has seen enough tokens to generate grammatically proper English 99.999% of the time, that doesn’t mean there’s any logic or meaning to what it generates!

And yes, the doctor saw all of this coming, as he understood early on exactly what LLMs were and were not. That’s why SI has had a formal NO AI policy (as well as a NO AI BS policy) for a long time now (and never used AI)!

(You have to remember that, as humans in North America, there is a relatively significant chance we will end up in a nursing home at some point in our lives, with some estimates now putting that chance over 50%, and an even greater chance that when we end up there, it will be [partially] due to mental decline, dementia, and similar conditions. We’re also suffering population stagnation, if not decline, in most western countries. As a result, it’s in our best interest to do everything we can to keep our mental faculties about us for as long as we can, because there are barely enough health care workers to care for those who already need care as it is. Think seriously about what’s going to happen if, en masse, society goes all-in on technology that is essentially turning us into drooling mindless idiots and greatly increasing the chances we become unable to care for ourselves immediately upon entering retirement …)

Today is the One Day Procurement Doesn’t Have to Worry About Purchasing Software and Services …

… because vendors try to make a fool of them every day of the year.

As per our recent 3-part series on Now is NOT a Good Time To Buy (1, 2, and 3), vendors across the board are trying to overcharge you on a daily basis, by as much as 900% of the software’s actual value.

Services vendors are constantly trying to push you towards the most expensive offerings (the ones that give them the greatest kickbacks, sorry, partner commissions), upsell you on as many modules (that you don’t need) as possible, drag out the implementation (which they know will be easy to do because you’re not organizationally ready to support one, because they didn’t prepare you, because you didn’t ask), insist on extraneous integrations using custom connectors, and then upsell you on training for an overly complex system you weren’t ready for.

Then there are GPOs claiming they can save you dollars you can’t save on your own, and that you should hand over a whole host of categories to them for an annual six-figure access fee and a slice of every transaction, even though you could do just as well on your own by managing the larger categories they want you to hand over (once you factor in the transaction fees and the amortized GPO access fee) and handing the rest over to low-cost Amazon Business.

Then there are the marketplaces you need to use for your tail spend that try to convince you to pay preferred access fees, priority order processing fees, expedited shipping fees (for carriers they control), etc. All extra costs for non-priority goods and MRO.

And, of course, when the salesperson at the supplier thinks you don’t have any other immediate options, they come up with fees you’ve never heard of in order to guarantee that order.

In other words, every vendor and supplier is trying to make a fool out of you every day. There’s nothing else they can try to pull on April Fools’ Day that they haven’t already. Multiple times.

Which means we’re the one profession that doesn’t have to worry about being an April fool, as we deal with tricksters, cons, and frauds every other day of the year.

While Your Supply Chains Are Impacted by War, They Are Not at War!

And just because autonomous AI has become a standard tool of the current conflicts, that doesn’t mean autonomous AI should be a standard tool in your supply chains. Properly defined AI most definitely should be, but not autonomous AI. And even then, only with human oversight!

This rant is inspired by THE PROPHET who tells us that The War in Iran is an AI War. Your Procurement and Supply Chain War Should Be as Well. And, despite parts of it appearing in LinkedIn comments, it is being expanded and reposted now to emphasize our previous article (on Friday) that essentially stated YOU SHOULD NEVER TRUST YOUR AI.

First of all, procurement and supply chain management isn’t a war. It’s a tense conflict between buyer needs and supplier leverage, but not a war.

Secondly, the fact that “AI never stops for a coffee break or to complain about leave not being granted” is not, on its own, a valid justification for using it.

Because, by the same token, it also doesn’t care if a strike accidentally hits a school and murders hundreds of innocent children. (Al Jazeera, BBC, and Haaretz)

Nor does it care if multiple civilians get killed in a drone strike just to relieve a human soldier of a guilty conscience, since the soldier didn’t order the killing of the target or make the decision that resulted in civilian deaths. (NPR, The Guardian, The Times of Israel)

Given that AI has no ethics and no real intelligence to evaluate a situation beyond the data it is provided and the question it is asked, is it really good enough to plan an operation on its own? I’d say it is not. (And also that it was applied without a full understanding of its weak points and how to use it properly.) (And if you want a great post about how critical human command decisions are, check out Michael Salehi’s post on how the right decision always requires judgement, experience, and accountability, which an AI does not have.)

This is why Anthropic wants some safeguards, why you should too, and why you should be just as careful about where and how you use it in your supply chain. There are two realities with AI:

Properly applied augmented intelligence is a gift from heaven.

If you take the augmented intelligence approach, it can process all the data, give you recommendations, give you a synopsis of the reasoning, and allow you to dig into that reasoning, ask questions about risk and indirect ramifications, and explore the broader picture when you need to.

AI is not human, not ethical, not flawless, and not responsible.

You still need to review the synopsis, dig in when something appears to be off (and even if it’s just an uneasy feeling — your “intuition” can often be just as valid as the AI output), and verify the decision. And often these tools will allow what would take weeks to be done in minutes. But sometimes you’ll find there isn’t enough data, and you won’t be able to act confidently right away.

Now, THE PROPHET didn’t like my response, and countered with a number of questions, which I gladly answered and will repeat here because two of those questions missed the point, and including them helps illustrate what the real questions are.

“Would you take action?”: Yes!
(I don’t care if you agree or disagree with my viewpoint, or THE PROPHET’s viewpoint, as this is not the point.)

“Would you use all tools available?”: YES!
(Again, I don’t care if you agree or disagree with my viewpoint, or THE PROPHET’s viewpoint, as this is not the point either.)

“Would you trust the tools blindly?”: No!

“Would you rush them into deployment without proper field testing and safeguards?”: NO!

That’s the point. All the hype and promises are resulting in an implicit trust of AI when it should be “Trust … But Verify!”. It’s the omission of just one extra step, often just a few minutes of extra human review, that makes the difference between success and accuracy vs. failure and widespread destruction. And this is true both in war and in business decisions that impact your supply chain.

This is why I continue to so strongly caution against the use of “autonomous AI” when it is largely built on systems that are flawed at the core, where hallucinations are part of the core function, and one subtle change in a prompt or query can result in a completely different output.

The reality is that, while you need modern tech platforms, constant intelligence monitoring, and pre-defined mitigation strategies just to survive, you usually don’t need AI. (Or at least not the “AI” they are selling … which, as you guessed, isn’t “AI” at all.)

What you do need to do is prepare for AI, which involves:

  • getting your data under control
  • building an infrastructure for connectivity, process, and data integration
  • updating your processes for modern environments
  • training your talent accordingly

If you do that, you will find that you have

  • put data at the core of not just category strategy, but overall operations
  • expanded your definition of risk to include price, partners, and related information flows
  • identified where automation fits; where optimization, analytics, and machine learning fit; and where “AI” doesn’t actually add any additional value
  • figured out that the best arrangement is Employees backed by Augmented Intelligence, supported by agents whose automation privileges escalate as they learn from those humans (but remain restricted in critical situations)
  • developed a much better understanding of multi-tier exposure
  • begun the process of transitioning to a new, alert organizational state where you are continually monitoring, optimizing, and re-planning your supply chain in response to emerging disruptive threats … and, as Koray Köse (who we may have to start calling The Oracle due to the insightful nature of his posts) points out, this is where you need to be

… and this is everything THE PROPHET says you need. Most importantly, all of this just might be accomplished without any modern AI (and definitely no BS AI Employees) at all!

What’s Wrong With 22% of Organizations? Why Do They Trust AI?

In a recent Horses for Sources piece on The HFS AI Trust Curve: AI isn’t failing … leadership is, the byline is that 78% of organizations do not trust their AI.

What the h3ll? 100% of organizations should not trust their AI when

  1. only 6% of organizations are seeing success (MIT, McKinsey) and
  2. there is no true Artificial Intelligence.

As a result, AI should NOT be trusted!

However, properly designed adaptive robotic automation, Machine Learning, and appropriately gated and guard-railed AI, which send exceptions to humans when the rules don’t cover the situation, when the gaps are beyond what should be handled automatically without approved precedents, or when the only resolution you can trust is a human one, are the kind of AI that should be deployed. While this kind of AI might not be 100% perfect, it can still be applied with confidence because the guardrails will ensure no significant failures.
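To make that concrete, here’s a minimal sketch, in Python, of what gated and guard-railed automation with human escalation might look like for something as mundane as invoice approval. The thresholds and field names are hypothetical, purely for illustration, and not any particular vendor’s API:

    from dataclasses import dataclass
    from typing import Optional

    # Hypothetical guardrails for illustration only; in a real deployment these
    # would be set by the business and/or tuned (within approved bounds) over time.
    MAX_AUTO_APPROVE_AMOUNT = 10_000.00   # hard ceiling: never auto-approve above this
    MATCH_TOLERANCE = 0.02                # allowed invoice-to-PO variance (2%)

    @dataclass
    class Invoice:
        supplier_id: str
        amount: float
        po_number: Optional[str] = None
        po_amount: Optional[float] = None

    def route_invoice(invoice: Invoice) -> str:
        """Execute the rules; anything the rules don't clearly cover goes to a human."""
        # Gate 1: no approved precedent (no PO on file) -> human decision
        if invoice.po_number is None or invoice.po_amount is None:
            return "escalate_to_human"
        # Gate 2: above the hard ceiling -> human decision, always
        if invoice.amount > MAX_AUTO_APPROVE_AMOUNT:
            return "escalate_to_human"
        # Gate 3: variance outside tolerance -> the rules don't cover it -> human
        variance = abs(invoice.amount - invoice.po_amount) / invoice.po_amount
        if variance > MATCH_TOLERANCE:
            return "escalate_to_human"
        # Everything the rules do cover gets executed automatically, with confidence
        return "auto_approve"

    # Example: a 1% variance is within tolerance; a missing PO is not
    print(route_invoice(Invoice("ACME", 1_010.00, "PO-123", 1_000.00)))  # auto_approve
    print(route_invoice(Invoice("ACME", 500.00)))                        # escalate_to_human

The specific numbers don’t matter (they’re made up); what matters is that the automation only ever acts inside guardrails a human approved, and everything else lands in a human queue.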

In other words, while I don’t agree that Agentic AI should be embraced to make decisions, because IBM had it right back in 1979:

a computer can never be held accountable, therefore a computer must never make a management decision

I do agree that the vast majority of back-office tasks are just bit pushing and can be appropriately defined with flexible, parameterized rules, backed by machine learning that learns the tolerances over time. This means that agentic AI should be widely applied throughout the back office, and that organizations that don’t embrace this level of AI are going to fall behind. But the trust in the technology should not extend to decision making. Just decision execution.

And if 78% of organizations don’t trust their agentic systems to execute decisions, then that is a problem: they are going to fall behind, they won’t embrace SaS (Software as Services) where it makes sense, their overhead costs will stay high in a tight economy, and they’ll get crushed by the competition, which will be more competitive and actually able to sell in a tight economy.

In other words, despite HFS’ implications, organizations should NEVER trust Agentic AI to make decisions, but they absolutely need to trust the AI to execute the decision. If they don’t, they’re in trouble.

Part of the problem might be the framing of the last step of the current HFS Enterprise Adoption Journey.

Stage 1: Can the AI Model Work?
This is where you start. You have to find a viable model.

Stage 2: Do we Believe the Inputs?
This is where you progress to. You need valid inputs.

Stage 3: Will People Act on it?
This is the next step. If you don’t have organizational readiness, the initiative has failed before it begins.

Stage 4: Is the AI allowed to influence outcomes?
Since there is no such thing as Artificial Intelligence, and a computer should never make a decision, the AI should never be allowed to influence outcomes. It should INFORM outcomes. It’s a slight difference, but an important one. Moreover, it doesn’t really affect how the AI should be implemented. You’re still implementing with the goal that the AI will eventually automate at least 99% of all instances of the task(s) it is designed to execute; the only difference is that you are deciding what to do with each exception and training the AI to execute your decisions, not being trained by it to accept whatever it recommends as gospel.
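To illustrate the INFORM (not influence) distinction with a minimal sketch, again in Python with hypothetical names (this is not HFS’s framework or any vendor’s API): the AI surfaces a recommendation and its rationale, a human decides the exception, and it is the human’s decision that gets executed and recorded as the precedent the automation learns from.

    from dataclasses import dataclass, field
    from typing import Callable, List, Tuple

    @dataclass
    class Recommendation:
        case_id: str
        action: str       # what the "AI" suggests
        rationale: str    # the synopsis a human can drill into

    @dataclass
    class PrecedentLog:
        # (case_id, action actually taken): the training data for future automation
        records: List[Tuple[str, str]] = field(default_factory=list)

    def resolve_exception(rec: Recommendation,
                          log: PrecedentLog,
                          human_review: Callable[[Recommendation], str]) -> str:
        """The AI informs; the human decides; the human's decision becomes the precedent."""
        decision = human_review(rec)                  # human sees the recommendation and rationale
        log.records.append((rec.case_id, decision))   # the system learns to execute THIS, not its own guess
        return decision

    # Example: the AI recommends rejecting, the human chooses a different resolution,
    # and that choice (not the recommendation) is what gets recorded and executed.
    log = PrecedentLog()
    rec = Recommendation("INV-0042", "reject", "3-way match failed: 12% quantity variance")
    print(resolve_exception(rec, log, human_review=lambda r: "hold_for_supplier_response"))
    print(log.records)   # [('INV-0042', 'hold_for_supplier_response')]

Framed this way, the 99% automation goal is unchanged; what changes is whose judgement the system is being trained to reproduce.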

This minor change creates the trust matrix you adopt, and puts you on the path to proper Agentic AI automation that will allow your workforce to be up to 10X as productive. Augmented Intelligence, be it in-house or through SaS, is the true future. The tech is there for many tasks now, and you don’t have to wait for a promise that won’t materialize within our lifetime.