Category Archives: AI

Advanced Procurement Today — No Gen-AI Needed!

Back in late 2018 and early 2019, before the GENizah Artificial Idiocy craze began, the doctor wrote a sequence of AI series (totalling 22 articles) on Spend Matters on AI in X Today, Tomorrow, and The Day After Tomorrow for Procurement, Sourcing, Sourcing Optimization, Supplier Discovery, and Supplier Management. All of which was implemented, about to be implemented, or capable of being implemented, and most definitely not doable with Gen-AI.

To make it abundantly clear that you don’t need Gen-AI for any advanced enterprise back-office (fin)tech, and that, in fact, you should never even consider it for advanced tech in these categories (because it cannot reason, cannot guarantee consistency, and confidence in the quality of its outputs can’t even be measured), we’re going to talk about all the advanced features enabled by Assisted and Augmented Intelligence that were in (or about to be in) development five years ago and are now available in leading best-of-breed systems. And we’re continuing with Procurement.

Unlike prior series, we’re identifying the sound ML/AI technologies that are, or can be, used to implement the advanced capabilities that are currently found, or will soon be found, in Source-to-Pay technologies that are truly AI-enhanced. (Which, FYI, may not match one-to-one with what the doctor chronicled five years ago because, like time, tech marches on.)

Today we continue with AI-Enhanced Procurement that was in development “yesterday,” when we wrote our first series five years ago, but is now available in mature best-of-breed platforms for your Procurement success. (This article roughly corresponds with AI in Procurement Tomorrow Part I, AI in Procurement Tomorrow Part II, and AI in Procurement Tomorrow Part III, published in November 2018 on Spend Matters.)

TODAY

OVERSPEND PREVENTION

By integrating trend analysis on demand and price, the platform can easily predict the date the budget will be exhausted, and if that’s before the end of the year, it can proactively pause an order for budgetary review EVEN IF it would be automatically approved in a last-gen system because it was still within budget and from an approved supplier.
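A minimal sketch of how such a check could work (the function names, the linear-trend choice, and all thresholds are ours, purely illustrative, and not any particular vendor’s implementation):

```python
def projected_exhaustion_day(daily_spend_history, budget_remaining):
    """Project the number of days until the remaining budget is exhausted,
    using a simple least-squares trend on daily spend (needs 2+ days of
    history).  Returns None if the budget survives the projection horizon."""
    n = len(daily_spend_history)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(daily_spend_history) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, daily_spend_history)) / \
            sum((x - mean_x) ** 2 for x in xs)
    # Project tomorrow's run rate and walk forward until the budget runs out.
    day, spend_rate, remaining = 0, mean_y + slope * n, budget_remaining
    while remaining > 0 and day < 3650:
        remaining -= max(spend_rate, 0.0)
        spend_rate += slope
        day += 1
    return day if remaining <= 0 else None

def should_pause_order(order_amount, budget_remaining, daily_spend_history,
                       days_left_in_year):
    """Pause for budgetary review if the projected exhaustion date lands
    before the fiscal year end, even though the order itself is in budget."""
    if order_amount > budget_remaining:
        return True
    days = projected_exhaustion_day(daily_spend_history,
                                    budget_remaining - order_amount)
    return days is not None and days < days_left_in_year
```

With rising daily spend of 100, 110, 120, 130, 140 and 5,000 of budget left, the trend projects exhaustion in 20 days, so an order 90 days before year-end gets paused even though it is “within budget.”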

POLICY IDENTIFICATION and ENFORCEMENT

One reason fake-take (better known as intake) solutions are so popular, besides the fact they make tail spend procurement easy (which we’ll discuss in more detail in our next part), is that they make it easy to identify and follow organizational procurement policies, especially since they will even guide a user through the correct process once the product / service need is identified.

At the end of the day, this is just guided buying with integrated access rules (who can request / buy something), budget rules (what budgets do they have or have access to), approval rules (who needs to approve and when), as summarized in, and extracted from, policy handbooks (which can be done with traditional semantic processing and human verification).

AUTOMATIC INVISIBLE BUYING

In last-gen platforms you had to define the items you wanted on auto-reorder, define specific rules for each, and manually maintain this list, and the associated rules, on an ongoing basis. But, at the end of the day, MRO is MRO is MRO and commodity stock is commodity stock is commodity stock, and there’s no reason you shouldn’t be able to turn an entire category over to the platform. After all, if you’re ordering an item regularly, as we described in Yesterday’s Smart Automatic Reordering, you have enough data to compute demand trends, price trends, delivery times, and EOQs (economic order quantities). As long as everything is within a threshold of predictability, the system should just re-order for you. And if something appears to be going off the rails, it should pause automatic re-ordering and alert a buyer to examine the situation and either do a manual re-order (which could include accepting the system suggestion), change the rules or thresholds for automatic reorders, or redefine the category / reassign the product or service.
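For example, the classic EOQ formula (EOQ = √(2DS/H), for annual demand D, fixed order cost S, and per-unit annual holding cost H) plus a simple predictability check is enough to sketch the decision. Everything below (the names, the coefficient-of-variation threshold) is an illustrative assumption, not a production design:

```python
import math
import statistics

def economic_order_quantity(annual_demand, order_cost, holding_cost_per_unit):
    """Classic EOQ formula: sqrt(2DS/H)."""
    return math.sqrt(2 * annual_demand * order_cost / holding_cost_per_unit)

def auto_reorder_decision(weekly_demand_history, on_hand, lead_time_weeks,
                          order_cost, holding_cost_per_unit, cv_threshold=0.25):
    """Re-order automatically only while demand stays within a threshold of
    predictability (coefficient of variation); otherwise pause and alert a
    buyer, exactly as described above."""
    mean = statistics.mean(weekly_demand_history)
    cv = statistics.stdev(weekly_demand_history) / mean if mean else float("inf")
    if cv > cv_threshold:
        return {"action": "alert_buyer",
                "reason": f"demand CV {cv:.2f} exceeds threshold"}
    reorder_point = mean * lead_time_weeks  # expected demand over the lead time
    if on_hand <= reorder_point:
        qty = economic_order_quantity(mean * 52, order_cost, holding_cost_per_unit)
        return {"action": "reorder", "quantity": round(qty)}
    return {"action": "none"}
```

Steady weekly demand around 100 units triggers a quiet re-order at the EOQ; wildly erratic demand trips the predictability threshold and escalates to a buyer instead.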

AUTOMATIC OPPORTUNITY IDENTIFICATION

As noted in “AI In Procurement Tomorrow: Part II”, a high-performing organization tackles at most 1/3 of spend strategically on an annual basis, due to lack of manpower and time. The fact of the matter is that, unless you have a true best-of-breed spend analysis system and the experience to use it efficiently and effectively (as well as sufficiently cleansed and complete data to work on), it’s a significant effort just to do the spend analysis required to identify and fully qualify the market opportunity and shape it into an appropriate market event.

But there’s no reason that the platform couldn’t encode all of the standard analytic workflows used by best-practice consultants, identify the top products/services/categories with the most spend not under contract/management, look at the spend variability, look at current market prices and trends, look at average historical community savings data (from community, consultancy, and GPO intelligence), and evaluate and rank opportunities. And the best platforms do. (Are the rankings 100% accurate? No — no platform has complete market data or complete knowledge of every variance to a market situation, but 90% is more than enough, as that will free the buyers up to keep up with market dynamics and do real exploratory analysis that is not easily automated.)
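A toy version of such a ranking (the scoring formula and field names are our own simplification of the workflow described above, not any platform’s actual model):

```python
def rank_opportunities(categories, top_n=3):
    """Rank sourcing opportunities by a simple composite score:
    spend not under management, weighted by historical community savings
    rates and discounted by spend variability (less predictable = riskier).
    `categories` is a list of dicts with keys:
      name, annual_spend, pct_under_management, avg_savings_rate, spend_cv
    """
    def score(c):
        unmanaged = c["annual_spend"] * (1 - c["pct_under_management"])
        expected_savings = unmanaged * c["avg_savings_rate"]
        return expected_savings / (1 + c["spend_cv"])  # penalize volatile categories
    return sorted(categories, key=score, reverse=True)[:top_n]
```

A category with a lot of unmanaged spend and a healthy community savings rate floats to the top even if its total spend is smaller than a mostly-managed one, which is exactly the prioritization a consultant would make by hand.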

SUMMARY

Now, we realize some of these descriptions, like yesterday’s, are also quite brief, but again, that’s because this is not entirely new tech: the beginnings have been around for a few years, were in development and discussed as “the future of” Procurement tech before Gen-AI hit the scene, and all of these capabilities are pretty straightforward to understand (especially with many of the fake-take and Gen-AI providers marketing these claims, even though they are not entirely realizable within their platforms). And, if you want to dive deeper, the baseline requirements for most of these capabilities were described in depth in the doctor’s November 2018 articles on Spend Matters. The primary purpose of this article, as with the last, was to explain how more sophisticated versions of traditional ML methodologies can be implemented in unison with human intelligence to create smarter Procurement applications that buyers can rely on with confidence.

Advanced Procurement Yesterday — No Gen-AI Needed!

Back in late 2018 and early 2019, before the GENizah Artificial Idiocy craze began, the doctor wrote a sequence of AI series (totalling 22 articles) on Spend Matters on AI in X Today, Tomorrow, and The Day After Tomorrow for Procurement, Sourcing, Sourcing Optimization, Supplier Discovery, and Supplier Management. All of which was implemented, about to be implemented, or capable of being implemented, and most definitely not doable with Gen-AI.

To make it abundantly clear that you don’t need Gen-AI for any advanced enterprise back-office (fin)tech, and that, in fact, you should never even consider it for advanced tech in these categories (because it cannot reason, cannot guarantee consistency, and confidence in the quality of its outputs can’t even be measured), we’re going to talk about all the advanced features enabled by Assisted and Augmented Intelligence (as we don’t really have true appercipient [cognitive] intelligence or autonomous intelligence, and we’d need at least autonomous intelligence to really call a system artificially intelligent — the doctor described the levels in a 2020 Spend Matters article on how Artificial intelligence levels show AI is not created equal. Do you know what the vendor is selling?) that have been available for years (if you looked for, and found, the right best-of-breed systems [many of which are the hidden gems in the Mega Map]). And we’re going to start with Procurement.

Unlike prior series, we’re going to mention some of the traditional, sound ML/AI technologies that are, or can be, used to implement the advanced capabilities that are currently found, or will soon be found, in Source-to-Pay technologies that are truly AI-enhanced. (Which, FYI, might not match one-to-one with what the doctor chronicled five years ago because, like time, tech marches on.)

Today we start with AI-Enhanced Procurement that was available yesterday (and, in fact, for at least the past 5 years, if you go back and read the doctor’s original series, which provides a lot more detail on each capability we’re discussing). (This article roughly corresponds with AI in Procurement Today Part I and AI in Procurement Today Part II, published in November 2018 on Spend Matters.)

YESTERDAY

TRUE AUTOMATION

Not sorry to burst the Gen-AI believers’ bubble, but true automation has existed in leading Procurement technology for almost two decades, using tried-and-true rules-based RPA that supports advanced rule construction using the full breadth of boolean logic, mathematical formulae construction, and flexible (regex, clustering, etc.) pattern matching.
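A minimal sketch of what such rules-based automation looks like in practice (the rules, record fields, and thresholds below are all hypothetical, purely to show the Boolean logic / formula / regex mix):

```python
import re

# Illustrative rules-based automation: each rule combines Boolean logic,
# a mathematical formula, and regex pattern matching over a requisition.
RULES = [
    {
        "name": "auto_approve_catalog_mro",
        "condition": lambda r: (
            r["amount"] <= r["budget_remaining"] * 0.10        # formula
            and r["supplier_status"] == "approved"              # Boolean logic
            and re.match(r"^MRO-\d{4}$", r["category_code"])    # pattern match
        ),
        "action": "auto_approve",
    },
    {
        # Catch-all: anything the specific rules don't cover gets a human.
        "name": "route_everything_else",
        "condition": lambda r: True,
        "action": "manual_review",
    },
]

def dispatch(requisition):
    """Return the action of the first rule whose condition fires."""
    for rule in RULES:
        if rule["condition"](requisition):
            return rule["action"]
```

Note there is nothing probabilistic here: every decision is reproducible and auditable, which is exactly why this style of automation has been trusted in production for two decades.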

SMART AUTO RE-ORDER

Threshold re-order points, adaptive trend analysis (based on sales data for quantity, and on expected delivery times and economic order quantities for interval and volume determination), and contract/preferred supplier rules can handle this better than most stock clerks for MRO / commodity stock items.

GUIDED BUYING

All you need to do this amazingly well is RPA; rules based on contract, preferred-supplier, and budget status; and semantically aware keyword/phrase matching, plus, if you want an NLI (Natural Language Interface), traditional semantic processing to extract the keywords/phrases that are the appropriate nouns (and items of interest).

SMART (ADAPTIVE) AUTOMATIC APPROVALS

This is just RPA using a rules-based workflow, thresholds, and exception-based decision pattern analysis. The thresholds can be adjusted within a range based on approvals, and the platform can infer the thresholds/rules actually being applied by the approver, using pattern identification (based on significant-factor analysis or fingerprinting) across exceptions, to suggest the necessary rule modifications.
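One way this inference could be sketched (a deliberately simplified stand-in for significant-factor analysis; the function name and the evidence threshold are assumptions of ours):

```python
def infer_threshold(approval_history, current_threshold, min_support=5):
    """Infer the amount threshold an approver is actually applying by
    looking at manual decisions on requests above the current auto-approve
    threshold.  If enough over-threshold requests were approved anyway,
    suggest raising the threshold, capped just below the smallest rejection.
    approval_history: list of (amount, approved: bool) manual decisions."""
    over = [(amt, ok) for amt, ok in approval_history if amt > current_threshold]
    approved = [amt for amt, ok in over if ok]
    rejected = [amt for amt, ok in over if not ok]
    if len(approved) < min_support:
        return current_threshold  # not enough evidence to change anything
    ceiling = min(rejected) if rejected else max(approved)
    return max(current_threshold, min(max(approved), ceiling))
```

Crucially, the output is a *suggested* rule modification for a human to accept, as described above, not a silent change.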

ERROR PREVENTION

This just requires valid pattern definition, context-based range analysis, and outlier detection (using clustering, curve fitting, or trend analysis). Anything that can’t be done with the right mix of these methods can’t be done reliably.
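For instance, a context-based range check can be as simple as a z-score outlier test (a deliberately minimal stand-in for the richer clustering / curve-fitting methods mentioned above):

```python
import statistics

def is_outlier(value, history, z_threshold=3.0):
    """Flag a value (e.g. a keyed-in unit price) as a probable entry error
    if it sits more than z_threshold standard deviations from the
    historical mean for that context."""
    mean = statistics.mean(history)
    sd = statistics.stdev(history)
    if sd == 0:
        return value != mean  # any deviation from a constant history is suspect
    return abs(value - mean) / sd > z_threshold
```

A unit price of 100 against a history hovering around 10 gets flagged before the PO ever goes out; a price of 10.30 sails through.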

M-WAY MATCH

Anything you can’t do with RPA using rules-based workflow, identifier matching, and confidence-based pattern matching and suggestion SHOULD NOT BE DONE. Moreover, anything that can’t be matched with certainty should be flipped back to the supplier for correction/completion (if key identifiers were missing), possibly with a suggestion/question (e.g., does this invoice correspond to PO 123XYZ?).
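A bare-bones sketch of a 3-way match along these lines (the field names and the 2% price tolerance are illustrative assumptions):

```python
def three_way_match(po, receipt, invoice, price_tolerance=0.02):
    """Rules-based m-way (here, 3-way) match: compare the invoice against
    the purchase order and the goods receipt.  A clean match auto-posts;
    a missing key identifier flips back to the supplier with a suggested
    question; anything else routes to a buyer."""
    if not invoice.get("po_number"):
        # Key identifier missing: return it to the supplier, with a guess.
        return ("return_to_supplier",
                f"Missing PO number. Does this invoice correspond to PO {po['number']}?")
    if invoice["po_number"] != po["number"]:
        return ("manual_review", "PO number mismatch")
    if invoice["quantity"] > receipt["quantity_received"]:
        return ("manual_review", "Invoiced more than received")
    if abs(invoice["unit_price"] - po["unit_price"]) > po["unit_price"] * price_tolerance:
        return ("manual_review", "Unit price outside tolerance")
    return ("auto_post", "3-way match OK")
```

Every branch is deterministic, so the worst case is a known quantity: an invoice either posts, goes back to the supplier, or lands on a buyer’s desk with a stated reason.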

SUMMARY

Now, we realize this was very brief, but again, that’s because this is not new tech. It was available long before Gen-AI, it should be native (in the majority, if not the entirety) to any true best-of-breed Procurement platform, it is easy to understand, and it was described in detail in the doctor’s 2019 articles for those who wish to dive deeper. The whole point was to explain how traditional ML methods enable all of this, with ease; it just takes human intelligence (HI!) to define and code it.

GEN-AI IS NOT EMERGENT … AND CLAIMS THAT IT WILL “EVOLVE” TO SOLVE YOUR PROBLEMS ARE ALL FALSE!

A recent article in the CACM (Communications of the ACM) referenced a paper by Dan Carter last year that demonstrated that the claims of Wei et al. in their 2022 “Emergent Abilities of Large Language Models” were unsubstantiated, and merely misinterpretations of visual artifacts produced by plotting results on an inappropriate semi-log scale.

Now, I realize the vast majority of you without advanced degrees in mathematics and theoretical computer science won’t understand the majority of the technical details, but that’s okay, because the doctor, who has advanced degrees in both, does, can verify the mathematical accuracy of Dan’s paper, and can verify the conclusion:

LLMs — Large Language Models — the “backbone” of Gen-AI DO NOT have any emergent properties. As a result, they are no better than traditional deep learning neural networks, and are, at the present time, ACTUALLY WORSE, since the lack of deep research into these models means we cannot yet properly “train” them for repeatable behaviour or accurately “measure” their outputs with confidence.

And while our understanding of this new technology, like any new technology, will likely improve over time, the realities are thus:

  • no amount of computing power has ever hastened the development of AI technology since research began in the late 60s / early 70s (depending on what you accept as the first paper / first program); it has always taken improvements in algorithms and the underlying science to make slow, steady progress (with most technologies taking one to two DECADES to mature to the point where they are ready for widespread industrial use)
  • the technology currently takes 10 times the computing power (or more) to compute “results” that can be readily computed by existing, narrower, techniques (often with more confidence in the results)
  • the technology is NOT well suited to the majority of problems that the majority of enterprise software companies (blindly jumping on the bandwagon with no steering wheel and no brakes for fear of missing out on the hype cycle, which could cause a tech market crash unequalled by any except the dot-com bust of the early 2000s) are trying to use it for (and yes, the doctor did use the word “majority” and not “all” because, while he despises it, it does have valid uses … in creative (writing, audio, and video) applications [not business or science applications], where it has almost unequalled potential compared to traditional ML designed for math- and science-based applications)

And the market realities that no one wants to tell you about are thus:

  • former AI evangelists and some of the original INVENTORS of AI are turning against the technology (out of a realization that it will never do what they hoped it would, that its energy requirements could destroy the planet if we keep trying, and/or that maybe there are some things we should just not be meddling with at our current stage of societal and technological evolution), including Weizenbaum and Hinton
  • Brands are now turning against AI … and even Rolling Stone is writing about it
  • big tech, and companies that depend on big tech (like Pharma), are starting to turn against AI … and CIOs are starting to drop OpenAI and Microsoft Copilot because, even when the cost is as low as $30 a user, the value isn’t there (see this recent article in Business Insider)

Now, the doctor knows there are still hundreds of marketers and sales people in our space who will consistently claim that the doctor is just a naysayer and against progress and innovation and AI and modern tech and blah blah blah because they, like their companies, have gone all in on the hype cycle and don’t want their bubble burst, but the reality is that

the doctor is NOT against “AI” or modern tech. the doctor, whose complete archives are available on Sourcing Innovation back to June 2006 when he started writing about Procurement Tech, has been a major proponent of optimization, analytics, machine learning, and “AI” since the beginning — his PhD is in advanced theoretical computer science, which followed a math degree — and, after actually studying machine learning, expert systems, and AI, he used to build optimization, analytics, and “AI” systems (including the first commercial semantic social search application on the internet)

what the doctor IS against is Gen-AI and all the false claims being made by the providers about its applicability in the enterprise back office (where it has very limited uses)

because the vast majority of the population does not have the math and computer science background to understand

  1. what is real and what is not
  2. what technologies (algorithms) will work for a certain type of problem and will not
  3. whether the provider’s implementation will work for their problem (variation)
  4. whether they have enough data to make it work

and, furthermore, this includes the vast majority of the consultants at the Big X and mid-sized consultancies who graduate from Business Schools with very basic statistics and data analytics training and a crash course in “prompt engineering” who can barely use the tech, couldn’t build the tech, and definitely couldn’t evaluate the efficacy and accuracy of the underlying algorithms.

The reality is that it takes years and years of study to truly understand this tech, and years more of day-in and day-out research to make true advancement.

For those of you who keep saying “but look at how well it works” and produce 20 examples to prove it, the reality is that it’s only random chance that it works.

With just a bit of simplification, we can describe these LLMs as essentially just super-sophisticated deep neural networks: layers and layers of nodes linked together in novel configurations, with more feedback learning, and structured in a manner that gives them the ability to “produce” responses as a collection of “sub-responses” assembled from elements in their data archive, instead of just returning a fixed response. As a result, they can GENerate a reply instead of just selecting from a fixed pool. (And that’s why their natural language abilities seem far superior to traditional neural network approaches, which need a huge archive of responses to have a natural-sounding conversation: they can use “context” to compute, with high probability, the right parts of speech to string together to create a response that will sound human.)

Moreover, since these models, which are more distributed in nature, can use an order of magnitude more (computational) cores, they can process an order of magnitude more data. Thus, if there is ten to one hundred times the amount of data (and it’s good data), of course they are going to work reasonably well for expected queries at least 95% of the time (whereas a last-generation NN without significant training and tweaking might only be 90% out of the box). If you then incorporate dynamic feedback on user validation, that may even get to 99% for a class of problems, which means that it will appear to be working, and learning, 99 times out of 100 instead of 19 out of 20. But it’s NOT! It’s all probabilities. It’s all random. You’re essentially rolling the bones on every request, and doing it with less certainty about what a good, or bad, result should look like. And even if the dice come “loaded” so that they should always win the come-out roll, there are so many variables that there is never any guarantee you won’t crap out.

And for those of you saying “those odds sound good“, let me make it clear. They’re NOT.

  • those odds are only for typical, expected queries, for which the LLM has been repeatedly (and repeatedly) trained on
  • the odds for unexpected, atypical queries could be as low as 9 in 10 … which is very, very, bad when you consider how often these systems are supposed to be used

But the odds aren’t the problem. The problem is what happens when the LLM fails. Because you don’t know!

With traditional AI, you either got no response, an invalid response with low confidence, or a rare (compared to Gen-AI) invalid response with high confidence, where the responses were always from a fixed pool (if non-numeric) or fixed range (if numeric). You knew what the worst case scenario would be if something went wrong, how bad that would be, how likely that was to happen, and could even use this information to set bounds and tweak the confidence calculation on a result to minimize the chance of this ever happening in a real world scenario.

But with LLMs, you have no idea what it will return, how far off the mark the result will be, or how devastating it will be for your business when that (eventually) happens (which, as per Murphy’s law, will be after the vendor convinces you to have confidence in it and you stop watching it closely, and then, out of the blue, it decides you need 1,000 custom configurations of a high-end MacBook Pro in inventory [because 10 new sales support professionals need to produce better graphics] in a potentially recoverable case, or it decides to change your currency hedge on a new contract to that of a troubled economy (like Greece, Brazil, etc.) because of a one-day run on the trading markets in a market heading for hyperinflation and a crash [and then you will need a wheelbarrow full of money to buy a loaf of bread — and for those who think it can’t happen, STUDY YOUR HISTORY: Germany after WWI (the Weimar hyperinflation of 1923), Zimbabwe in 2007, Venezuela in 2018, etc.]). You just don’t know! Because that’s what happens when you employ technology that randomly makes stuff up based on random inputs from you don’t know who or what (and the situation gets worse when developers [who likely don’t know the first thing about AI] decide the best way to train a new AI is to use the unreliable output of the old AI).

So, if you want to progress, like the monks, leave that Genizah Artificial Idiocy where it belongs — in the genizah (the repository for discarded, damaged, or defective books and papers), and go find real technology built on real optimization, analytics, machine learning, and AI that has been properly researched, developed, tested, and verified for industrial use.

The Gen-AI Crash Can’t Come Soon Enough!

Author’s note: this first appeared as a LinkedIn post, elaborated upon in the comments of THE REVELATOR’s post it referenced.

$1 trillion rout hits Nasdaq 100 over AI jitters in worst day since 2022!

This is a headline from the Economic Times this week. And a foreshadowing of things to come.

As far as the doctor is concerned, the impending Gen-AI Cr@p market collapse can’t happen fast enough! Too many people don’t remember the 80s and how all the AI “promises, promises, that were made were the promises, promises they betrayed” …

because processing power, new languages/models/constructions, and expert mimicry is not enough!

The reality is that, until we have a fundamentally better understanding of human intelligence, or can at least assemble and properly support as many cores as humans have neurons [which, FYI, we shouldn’t even conceive of, as we couldn’t produce the energy required to power it with current technology globally] (and not the equivalent of a pond snail at best … look where that got us; its golden nugget of insight is that we should eat one rock a day), there is zero chance of a new AI “breakthrough” actually approximating anything close to intelligence.

All of the true advancements in our lifetime are going to come from human intelligence (HI!) (that creates better algorithms, models, processes, etc. and then properly, manually, embeds those enhancements in next generation tech).

Remember, they’ve been promising us true AI since the [19]70s … and they are no closer now than the great minds who created, and in their wiser years abandoned, AI (which has materialized as Artificial Idiocy), because some pursuits are still beyond the grasp of mice and men (and others shouldn’t be attempted)!

Every 3 to 5 years they promise us that the brand-new shiny tech is the Staples Big Red Easy Button, and every 3 to 5 years this brand-new shiny tech fails to deliver. Gen-AI is just the latest in a long line of over-hyped, under-performing tech whose “hype” cycle is almost over, and the next tech that is going to bring us a great market crash (which, given the ridiculous amount of money dumped into this technology, which will never be appropriate for the Enterprise, could bring about a crash that might rival the great dot-com crash of 2000 – and if you don’t remember that, you really should look it up — a lot of software providers, especially those whose solutions provided limited actual value relative to the investment made [or money wasted on “marketing” and “brand”], bit the dust).

The even sadder reality of the situation is that we don’t need the tech. In almost every business domain, there has been software which, with a bit of manpower and human intelligence, has solved the majority of our current business problems, even the most complicated global supply chain/trade problems. All we had to do was stop using the monolith technology from two-plus decades ago and take a small chance on newer, better, more powerful players who started to solve real problems with software the average Jane could use.

In Procurement, the vast majority of companies aren’t using the tech we had ?????? ???? years ago! (When the doctor built the leading strategic sourcing ???????????? solution (the first with multi line item support) and THE REVELATOR had a leading ML (machine-learning) based application for ????????? ????? ???????? ?????????????? and realization [for everyone else, think guided sourcing strategy, like Levadata does for electronics, based on market and organizational data, with execution support]).

If the average organization even had this V 1.0 technology, they’d do SO MUCH better across the board (and now we are at V 3.0 in most Procurement applications; in optimization, what I did at Iasta [acquired by Selectica, rebranded Determine, who sunset what they didn’t understand], and consulted on at other players afterward, was V2, and Trade Extensions (acquired by Coupa) gave us V3, with full supply chain support and modelling capability beyond your dreams (an understanding that may now be lost at Coupa with Arne and Fredrik (the founders) gone … but there are those of us who still understand the phenomenal vision, and realization thereof, of the great Arne Andersson)).

Also, the reality is that if anyone understood what Coupa Supply Chain Optimization [Llamasoft] or Logility [Starboard] could do in the right hands … THE REVELATOR’s parts management dreams and scenario-based Procurement guidance from the late 90s and early 00s would come true.

(And we don’t need no fake-take to make it happen! Proper catalog-enhanced true SaaS solutions have been built with integrated intake for the last decade. You just have to look beyond the same old, same old 10 to 20 vendors that Gartner and Forrester tell you about every year (pretending that the other 656 don’t exist). Vroozi [see our 2-part summary: Part I and Part II] has had this capability since day one, and once all these Gen-AI and fake-take plays come crashing down because they don’t actually enable true Procurement or have any real Procurement capability under the hood, you’re going to see a new generation of true Strategic Procurement providers rise up and offer something that every enterprise, and mid-size enterprises in particular, needs and can benefit from. And when this reckoning comes, it will humble any organization still on one of these powerless platforms. So the time to find a real platform is now!)

If You Still Don’t Believe That Gen-AI is Bad for Procurement …

Then maybe you should do the math.

It’s very expensive for what it doesn’t do. You can pay 10K a month or more just for a conversational interface to search your data or push data into your applications. For 10K a month, you can get a decent core P2P application or source-to-contract application that, well, actually does something.

It’s even more expensive to train these systems on your policies, connect them to your applications, test that basic requests generate reasonable responses, train them to guide your users to an eventual answer, and so on. This could easily cost more than a year (or three) of license fees.

But the true costs are in the utilization. Every time a user asks a question, or responds to a question posed by the Gen-AI to try and elicit the user’s intent, it takes compute time. LOTS of compute time. At least 10X the compute time of a standard search engine or keyword-based retrieval system. In some cases, 30X. (The wattage required is easily 10 to 30 times that of a traditional Google search.) So if you’re a mid-sized organization with more than 1,000 employees, a portion of your cloud computing costs, which average between 2.4 Million and 6 Million a year (according to CloudZero), is going to increase 10X to 30X. Let’s say 5% of that was basic search and inquiry: 120K to 300K. Almost inconsequential. But multiply it by 10 to 30, and you’ve just added roughly another 1 Million to almost 9 Million to your bill. Think about that.
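That arithmetic is easy to sanity-check yourself (the helper below just restates the back-of-the-envelope numbers from the paragraph above; the 5% search share and the 10X-30X multipliers are the article’s assumptions, not measured figures):

```python
def genai_search_cost_increase(annual_cloud_cost, search_share=0.05,
                               multiplier_low=10, multiplier_high=30):
    """Take the slice of the cloud bill that is basic search/inquiry and
    scale it by the extra compute Gen-AI requires.  Returns the ADDED
    annual cost as a (low, high) range (the original slice is subtracted,
    since you were already paying it)."""
    base = annual_cloud_cost * search_share
    return (base * (multiplier_low - 1), base * (multiplier_high - 1))
```

On a 2.4 Million cloud bill the added cost is roughly 1.08 Million to 3.48 Million a year; on a 6 Million bill it runs up to roughly 8.7 Million, which is where the “1 Million to almost 9 Million” range comes from.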

That “low-cost” Gen-AI “chatbot” that makes enterprise search and application interfacing “easy” (but not as easy as a well-designed workflow, FYI), which you think costs 10K a month, could, after implementation, training, and, most importantly, cloud computing costs, actually be costing you 100K a month (or even 500K). For what? A fancier Google?

As Procurement professionals, you can, and should, do the math. So even if you don’t believe the doctor when he says Gen-AI is a fallacy, then believe the math.

The math says Gen-AI is just NOT worth it.