Data is Too Darn Expensive Today … But It Won’t Be For Long

THE PROPHET, who has recently discovered that ranting (on LinkedIn) is his new favourite thing to do, complained that Procurement, Commodity, and Supplier Data is Too Darn Expensive.

And while he’s right that data is often too expensive for what it is, it’s not going to stay that way. Next generation providers are going to commoditize quality data and lower anonymized community data subscriptions to win (and keep) clients, because they know there’s no value in advanced technology alone (and especially in analytics, optimization, and AI without quality data to feed it). But there are three key points he missed in his rant, where he complained about data prices and advocated the use of LLMs and Gen-AI as a substitute. They are not a substitute, and considering how much they hallucinate, we wouldn’t even trust them to be directionally accurate. If directionality is enough, just feed the historical data you can get your hands on into Excel and do some basic trend plotting.

1) As Lisa Reisman noted in the comments, sometimes you need highly granular accurate data by geography, volume, and production methodology. When pennies make a difference, because you are buying tens or hundreds of millions worth of the material for a global operation, it matters.

2) Most firms are still ignoring their own data, which, when run through something like Covalyze (which THE PROPHET should love, as it was founded and designed by economists), gives very accurate target cost models for any category the firm has enough historical data on. This lets them pinpoint where they need more data, and why, for cost breakdowns (and should-cost models to refine the target cost models), and which suppliers they actually need those expensive profiles on. Then they can go to pay-by-the-sip providers like Veridion for basic supplier data, or to other emerging commodity and supplier data portals.

3) The amount of data most firms need is much less than they think. In the tail, most of the spend is not significant enough for any market data to provide insight into savings potential beyond what you will get from analyzing your own historical data and market quotes. When pennies won’t make a difference, you don’t do detailed cost breakdowns by raw material. When the product is a commodity that can be supplied by multiple suppliers at similar price points and equal quality levels, you don’t do deep risk profiles, because you can just go to the next supplier in the queue if the first one fails you. And so on. You only do detailed analysis where there is a statistical likelihood of a real opportunity or a real risk. Otherwise it’s a waste of time, money, and resources, as no organization today even comes close to fully analyzing the significant categories and risks it has in any given year. Thinking you will do more is delusional, and not worth it if you don’t have the basics covered.
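The head-versus-tail split above is just a Pareto cut over your own spend data. A minimal sketch (with invented spend figures and a hypothetical 80% threshold):

```python
# Hypothetical annual spend by category. Only the head of the Pareto curve
# justifies expensive market data and deep cost breakdowns; the long tail
# is handled with your own history and market quotes.
spend = {
    "Steel": 48_000_000, "Electronics": 22_000_000, "Logistics": 15_000_000,
    "Packaging": 6_000_000, "MRO": 3_500_000, "Office": 900_000,
    "Travel": 700_000, "Misc": 400_000,
}

total = sum(spend.values())
cumulative = 0.0
deep_dive, tail = [], []
for category, amount in sorted(spend.items(), key=lambda kv: -kv[1]):
    # Categories reached before 80% of cumulative spend get deep analysis.
    if cumulative / total < 0.80:
        deep_dive.append(category)
    else:
        tail.append(category)
    cumulative += amount

print("Buy market data / deep analysis for:", deep_dive)
print("Own history + quotes suffice for:   ", tail)
```

In this invented example only three of eight categories clear the bar, which is exactly the point: the data bill for everything else is money down the drain.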

By the time firms actually need more data, you can bet a next generation of data providers will have it readily available and cheap by today’s standards.

Big X Consultancies Peddling AI BS Will Flatten Procurement, but AI Certainly Won’t!

In a LinkedIn post from weeks past, THE PROPHET states that AI Will Soon Flatten Procurement and Operations Consulting.

He makes a good argument, but there is one big flaw. Namely, with AI there is no:

“Strategy, trust building, and decision support”.

You can’t use Gen-AI for decision support when every reference it gives you can be a 100% fabricated hallucination, based on data it also hallucinated, from sites and authors it also hallucinated (with complete back stories it hallucinated as well). In other words, unlike pure number crunching in a classical ML-based platform, it’s NOT dependable. When it works reasonably well, it can sometimes get you started in the right direction, but it certainly won’t replace entire strategy teams … at least not if the teams are any good. (However, if they are recent grads with no actual real-world experience, then, sure, go ahead. It’s not like it will do much worse than the bench of drunken plagiarist interns who don’t know a brake shoe from an athletic shoe when making a pitch to a client, or that a menswear site is NOT a good demonstration for a facilities manufacturer that needs MRO. And the doctor is not making this last part up; he’s seen it! More than once!)

At the end of the compute cycle, there’s no strategy in whatever “strategic plan” it produces, just computations based on its perceptions of probabilities. Big companies change by shifting the trends, not by following them, and definitely not by failing to validate them. And if you are paying $20M to $50M for a major transformation consulting engagement with a Big X, which would fund a new Broadway performance every quarter for your entire workforce (and give them a morale boost and likely lead to slightly better performance, especially compared to the failed transformation effort you will end up with if it is AI driven), you don’t want to follow the crowd. You want to lead the market, and do so in a big way.

Moreover, there’s definitely no trust if everything you do is based on a soulless, clueless algorithm that, right now, has a chance of failure approaching 92% (and predictions that are cloudy with a chance of meatballs), and that, when it does work, costs three times as much and takes five times as long to implement for just a minor improvement (when you could get a moderate improvement just by streamlining your processes and implementing modern Source-to-Pay-to-Supply-to-Service systems and carefully planned and rolled out back-office FinTech upgrades).

We need to go back to 2017 and continue on the classic AI-enablement path we were on, with technologies that were finally working well (as we finally had the compute power we didn’t have when we started at the dawn of the century), and give a well-educated, experienced, capable analyst consultant the tools she needs to do the work that once took ten consultants. That’s the path. Augmented intelligence with powerful, modern tools. Clients still get their workforce reduction, tech companies can still sell overpriced software, but with no massive unexpected failures. Everybody wins (except, of course, for the idiot investors who invested at 20X revenue in Gen-AI startups that will never deliver).

AI Is Not Bad. But The Hype, False Claims, and Fake Tech That Many Vendors Are Trying To Pass Off As Real Is!

As per a post on LinkedIn, I am NOT against real AI. I AM against the hype, false claims, and fake tech today’s enterprise vendors are trying to pass off as real AI!

You may recall that, like Jon W. Hansen and Pierre Mitchell, I was an early fan of AI and what it could do for enterprise tech. As a PhD in Computer Science with a degree in applied math and specialties in multidimensional data structures and computational geometry (MSc and PhD) [think big data before that was a thing], analytics, optimization, and “classic” AI, I saw the real potential for next generation tech as computing power advanced and data stores exploded.

I did a very deep dive in a 22-part series on Spend Matters in ’18/’19 on what should soon have been possible in our ProcureTech space, a series that started before the first LLM was released (despite the X-Files Warning). The research and implementation paths we were on were good, and the potential was great. It just required a lot of blood, sweat, elbow grease, and patience.

But then some very charming tech bros claimed that this new LLM tech was emergent and magical, that it would do everything, and that it would replace all of the old and busted (which was really tried and true) tech (that actually worked). Some super deep pockets were blinded by the hype, we abandoned the path of progress (and sanity), and the rest, as they say, is history, which, sadly, is still ongoing (while tech failure rates have reached all-time highs).

Until the space is ready to admit that

  • Gen-AI/LLMs are not the be-all and end-all, and, in fact, have very limited reliable uses (especially in automation/agentic tech) [namely only tasks that can be reduced to semantic processing and large corpus search & summarization]
  • real progress still requires real blood, sweat, elbow grease, and tears
  • you can’t replace people as this tech is NOT intelligent (although you can make them 10x more productive if you start focusing on Augmented Intelligence)

and abandon its zealous devotion to Gen-AI as the divine tech (which would bankrupt some tech bros and investors, which is why they are now doubling down on the marketing hype at the point where the hype cycle would usually burst), we’re not going to make progress.

As Pierre has pointed out, Gen-AI is useful as a piece of the puzzle when it is properly combined with other, traditional, reliable AI tech, so long as the foundation of the AI tech is built on a deterministic engine and only incorporates probabilistic models with known confidence and guardrails. (Remember that unless the use case boils down to semantic processing and large document corpus search and summarization, Gen-AI is NOT the right tech.)
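The "deterministic engine with probabilistic guardrails" pattern is easy to sketch. In the toy example below, a stub stands in for an LLM extractor (it only *proposes* a value with a confidence score), while hard, auditable rules make every decision; all names, thresholds, and figures are hypothetical:

```python
CONFIDENCE_FLOOR = 0.95  # hypothetical minimum model confidence

def llm_extract_total(document: str) -> tuple[float, float]:
    """Stub for a probabilistic extractor: returns (value, confidence).
    A real system would call a model here; we fake a plausible response."""
    return 1234.50, 0.97

def deterministic_rules_ok(value: float, po_total: float) -> bool:
    """Hard, auditable rules that never depend on the model."""
    return value > 0 and abs(value - po_total) / po_total <= 0.02

def process(document: str, po_total: float) -> str:
    value, confidence = llm_extract_total(document)
    if confidence < CONFIDENCE_FLOOR:
        return "route_to_human"   # guardrail 1: low model confidence
    if not deterministic_rules_ok(value, po_total):
        return "route_to_human"   # guardrail 2: hard rule failed
    return "auto_approve"

print(process("invoice.pdf", po_total=1230.00))  # within 2% of PO: auto_approve
print(process("invoice.pdf", po_total=2000.00))  # fails hard rule: route_to_human
```

The probabilistic piece accelerates the happy path; the deterministic engine, not the model, decides what actually happens, and everything it can't verify goes to a human.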

When the day comes that we abandon the madness, I’ll be happy to jump back on the souped-up classic AI hype train because, with the exponential increases in computing power and data over the past two-and-a-half decades, we could finally build amazing tech. We just need to remember that the best AI tech has never been generic; it has always been purpose-built for a specific task. If we want to automate processes, we will have to orchestrate multiple point-based, process-centric agents, which may or may not use AI, to accomplish that.

But until then, we need to keep railing against the hype and the fake tech.

When It Comes To Gen-AI, I’m NOT Yelling Enough! Part II

Take a deep dive into the comments of this LinkedIn post and you’ll see a comment that seems to claim that the potential gains from Gen-AI dwarf the occasional bad action. I strongly disagree!

If the laundry list of bad actions from Part I isn’t enough to convince you just how bad this technology is left unchecked, here are three situations that could most definitely arise if the technology is widely adopted to address those problems. Given current issues and performance, it requires almost no imagination at all to define them.

Situation 1: Run Your Entire Invoice Operation Using Bernie From the Felon Roster

Upon installation, Bernie is configured to “learn” when a human processes an override and, when he sees a situation that matches, to just approve the invoice for payment.

Because Scrappy Steel is allowed to change the surcharge daily in response to the tariff situation, the invoice is always paid when the item cost matches the contract, the quantity is less than or equal to what’s remaining on the contract, and the logistics cost is within a range.

Recently “replaced” Fred knows this. So Fred fakes an email from Scrappy Steel, sent from an IP in the same block, with the headers faked properly, routed through the first external ISP server Scrappy Steel’s email always bounces through, from a domain one character off from Scrappy Steel (that passes the cybersecurity check with an A+), saying the bank account info is changing on the next invoice. (There are plenty of good tools for that on the dark web that have worked great for decades.)

The next invoice comes in for 10 units less than what is remaining on the contract (as Fred was only replaced 3 days ago), with bank information for an account at the same bank with almost the same name (Scrappy Holdings), and with all checked fields matching, except the surcharge is now 3000% of what it usually is (for a nice boost). Bernie happily pays it (as he is still in the trust-gaining phase), Fred transfers the payment to a Cuban bank immediately upon receipt, and retires. Then, when the 45-day “trust-gaining” phase ends, the organization experiences more fraud in 60 days than in the last 6 years.
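The scenario above boils down to a naive learned approval rule that checks only the fields it saw humans check, and never the surcharge or the bank details. A minimal sketch (all data and field names invented for illustration) shows why Fred's invoice sails through:

```python
# Hypothetical contract terms the "learned" rule was trained against.
CONTRACT = {
    "item_cost": 500.00,
    "units_remaining": 120,
    "logistics_range": (40.0, 90.0),
}

def bernie_approves(invoice: dict) -> bool:
    """The learned rule: pay whenever the three checked fields look right.
    Surcharge and bank details are never inspected."""
    return (
        invoice["item_cost"] == CONTRACT["item_cost"]
        and invoice["quantity"] <= CONTRACT["units_remaining"]
        and CONTRACT["logistics_range"][0]
            <= invoice["logistics"]
            <= CONTRACT["logistics_range"][1]
    )

# Fred's fraudulent invoice: every checked field matches, but the surcharge
# (unchecked) is 30x normal and the bank account was swapped a day earlier.
fraud = {"item_cost": 500.00, "quantity": 110, "logistics": 75.0,
         "surcharge": 15_000.00, "bank": "Scrappy Holdings"}

print(bernie_approves(fraud))  # True: the unchecked fields never mattered
```

Any field the rule never learned to check is an open door, which is exactly how pattern-matched "approval" differs from deterministic three-way match logic with explicit field coverage.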

Situation 2: A Major Electric Grid Installs a Gen-AI based security system to try and thwart Chinese and Russian Hacking Conglomerates

The local energy utility keeps getting attacked by a Chinese Hacking Conglomerate that wants to extort millions. Knowing how easy it is for the grid to be overloaded, the utility decides it needs to implement state-of-the-art security before a hack attempt succeeds.

They go with XGenDarkAI+, a new holistic security filter that can process all outbound and inbound network traffic through its LLM-enhanced predictive learning engine and identify and block threats from 360 degrees, or at least that’s what the vendor is claiming.

XGenDarkAI+ quickly learns that the utility never issues a remote shutdown command for a substation, based on operator command history and the fact that all requests for a remote shutdown in its training history were hacking attempts. As a result, the next request for a remote shutdown is automatically blocked. Moreover, when the next two requests for the remote shutdown come in rapid succession (because the operator issuing them is starting to panic), it concludes a massive DDoS attack is starting in order to let a hacker slip in locally, and promptly shuts down all system access to prevent such a situation from happening.

But the command was valid, and was only being issued remotely because there was a fire in the substation, inside and outside the control room, and local shutdown was impossible as no one could get to the terminal.

However, since the shutdown wasn’t allowed, and the fire crews couldn’t get there in time, the substation overloads and explodes. This happens in California in August, after 60 days of no rain when the woods are as dry as the Sahara, and it sparks a forest fire that spreads across an entire rural suburb, burning thousands of homes and displacing tens of thousands of people.

Situation 3: Nationwide Kids Help Phone Augmentation

The local Kids Help Phone can’t keep up with the call volume, and some calls are less severe than others. Sometimes a kid is actively considering suicide, but many callers are just kids who need a voice to talk through their problems with. Due to funding cuts, too many calls are placed on hold or go unanswered.

But with today’s tech, an AI can be trained on actual calls of someone who’s done the job for 2+ years, simulate their voice (as it’s the wild west in the US, with no regulation permitted for 10 years), and each call center rep on duty can now take multiple calls with their Gen-AI assistant. The AI can handle basic inquiries, screen for desperate situations, and transfer to the human operator when things get bad, or at least that’s what the Kids Help Phone is sold by an AI provider who just wants the paycheck (and didn’t extensively test the system).

However, instead of screening and transferring, the AI decides it will just handle every call it gets as it sees fit if the human is not at their keyboard (which it assumes if the human isn’t on a call or hasn’t pressed a key in the last 60 seconds), including suicidal callers who should always be immediately (and seamlessly) routed to the experienced operator (who will sound exactly the same, remember). It won’t be long before it encounters a situation where, after trying every stored argument in the book with a suicidal caller to no avail, it ultimately decides reverse psychology might work and tells the kid to shoot himself. The kid promptly does. And since the provider rolled out dozens of implementations almost simultaneously (as all it needs are call logs from the selected operators to train the instances, which it can do in parallel thanks to the massive computational power available on demand from AI data centres), this happens dozens of times across the installations within days of the first fatality. Upgrade to mass murder unlocked.

We could continue, but hopefully this is enough to drive the point home that unchecked Gen-AI brings detriments that are much worse than any of the potential gains unchecked Gen-AI can unlock.

When It Comes To Gen-AI, I’m NOT Yelling Enough! Part I

Take a deep dive into the comments of this LinkedIn post and you’ll see a comment that we should stop yelling at the tools. I strongly disagree!

As per a previous post, until the space is ready to admit that

  • Gen-AI/LLMs are not the be-all and end-all, having very limited uses
  • real progress still requires real blood, sweat, elbow grease, and tears
  • you can’t replace people as this tech is NOT intelligent

and, more importantly

  • that these tools are not what people need and
  • these tools cannot be used as the foundation for suitable solutions (although they can be [a small] part of those solutions if care is taken)

we need to keep yelling, and do so rather loudly.

Because, to build on the metaphor, it’s not a shiny new hammer. If it was just a shiny new hammer, we could depend on one of three things happening when we use the hammer to hit the nail:

  1. the nail goes some distance into the wood, depending on how hard we swing,
  2. the nail doesn’t go in, because the hammer is too light, or
  3. if the handle is weak or the head is not securely attached, and we hit really hard and the nail doesn’t go in, in the absolute worst case the handle will crack or the head will fall off.

However, with the fancy new hammer equivalent of Gen-AI, we also have to worry about the possibility that:

  1. the hammer is super magnetized and pulls the nail out on the backswing,
  2. the hammer splits the nail in half,
  3. the hammer super heats the nail and melts it, or
  4. the hammer is packed with C4 and explodes, ripping our arm off our body!

Because, when you use Gen-AI, you accept the possible side effects of hallucinations, decreased code/application security, bad math, fraud, lawsuits, deadly diets, extremist views, sleeper behaviour, dependency and cognitive reduction, suicide, blackmail, hit lists, and murder, with many links summarized in this LinkedIn post.

And the worst part is this technology is being shoved into every nook and cranny, even those where we have technology that has worked great for over a decade (because the new generation of college-dropout script kiddies who believe that they can prompt engineer a solution to anything don’t even know the basics anymore).

It’s not just not solving our problems, it’s creating new ones, and they are often worse than the problems we have. We need to yell about this!