Category Archives: AI

That’s Right, You Do NOT need AI for Automation!

In our last article, we stated that our space was full of Overpriced “AI” you don’t need in source-to-pay, and one of our three examples was “Sourcing Automation” in Sourcing. To be clear, we’re not saying you don’t need automation — the whole point of software has always been efficiency through automation — we’re saying you don’t need “AI” automation.

The reason we’re doubling down on this topic is that we know a number of vendors are pushing AI Automation, and while automation is very good, AI is just not needed. But we know you’re going to get pushback if you echo the doctor’s viewpoint here, so we’re going to dig into the details and explain why no AI is needed for great automation.

In our last post, we noted that, at its simplest, sourcing automation is the ability to auto-source a (set of) product(s) or service(s) once the need has been identified or the request approved. It’s useful, but you don’t need AI to accomplish this, just good-old rule-based (workflow) automation. After all, it’s just

  1. instantiating a new RFP (which can be done if you have a template tied to the product/service types)
  2. distributing it to known, approved suppliers (which is easily done if you have supplier management that tracks approval status and associated products/services)
  3. collecting the bids (automated submission management through a portal or provided spreadsheet for upload)
  4. selecting the lowest bids and marking them as the approved award (simple analytics)
  5. assembling the contracts (with templates, it’s just sucking in the supplier details, product details, and bids using tag-based search and replace)
  6. pushing it into the e-Signature portal (via the API)
  7. alerting the buyer when the contract is ready for signature (via standard alerting)
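Sketched in code, the steps above are just deterministic rules. Here is a minimal illustrative sketch (every template, supplier, and function name is hypothetical):

```python
# Illustrative sketch of the rule-based sourcing workflow above.
# All templates, suppliers, and function names are hypothetical.

RFP_TEMPLATES = {"office-chairs": "RFP for {product}: please bid by {deadline}."}

APPROVED_SUPPLIERS = [
    {"name": "Acme Seating", "approved_for": {"office-chairs"}, "status": "approved"},
    {"name": "ChairCo", "approved_for": {"office-chairs"}, "status": "suspended"},
]

def instantiate_rfp(category, product, deadline):
    # Step 1: fill the RFP template tied to the product/service type
    return RFP_TEMPLATES[category].format(product=product, deadline=deadline)

def eligible_suppliers(category):
    # Step 2: a saved search over approved-supplier data; no learning involved
    return [s["name"] for s in APPROVED_SUPPLIERS
            if s["status"] == "approved" and category in s["approved_for"]]

def select_award(bids):
    # Step 4: lowest bid wins (simple analytics)
    return min(bids, key=lambda b: b["price"])

rfp = instantiate_rfp("office-chairs", "ergonomic chair", "2024-06-01")
invited = eligible_suppliers("office-chairs")
award = select_award([{"supplier": "Acme Seating", "price": 118.50}])
print(rfp)
print(invited, award["supplier"])
```

Not a single statistical model in sight, and every run produces the same, auditable result.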

1 You just need templates, and good providers have had those for a long time. And “AI” is not going to invent one you can trust.*1 It’s not too hard to tag your (provider’s) existing templates to all of the products and services you buy, and you only have to do it once.

2 When you onboard a supplier, you should tag it as approved, associate it with the products and services it is approved for, look up its risk and environmental scores, and track its performance over time. If its performance drops, it can automatically be suspended from consideration for new projects using old-fashioned business rules that will prevent it from being included in events it shouldn’t be. Thus, approved supplier management isn’t that hard to do, and simple saved searches find all the suppliers that should be automatically invited to an event.
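That suspension rule is literally a one-line conditional. A toy sketch, with a hypothetical threshold and supplier record:

```python
# Hypothetical performance threshold; in a real platform this would be a
# configurable business rule, not anything learned.
SUSPENSION_THRESHOLD = 70

def refresh_status(supplier):
    # Old-fashioned business rule: suspend when performance drops below threshold
    if supplier["performance"] < SUSPENSION_THRESHOLD:
        supplier["status"] = "suspended"
    return supplier

supplier = {"name": "WidgetWorks", "performance": 64, "status": "approved"}
print(refresh_status(supplier)["status"])
```

Run the rule on every score update and suspended suppliers simply never show up in the saved searches that feed new events.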

3 RFP and e-Auction software has been around for 25 years, so don’t let anyone ever tell you that you need AI.

4 If you’re trying to make an award subject to constraints or goals, that’s good old-fashioned strategic sourcing decision optimization. That’s not AI. MILP using classic tableau and interior point algorithms works just fine in predefined scenarios that suck in the organizational constraints … the kind of scenarios that leading SSDO (Strategic Sourcing Decision Optimization) providers were building over two decades ago.
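Real SSDO engines use MILP solvers, but the deterministic character of the exercise can be shown with a toy brute-force search over award splits under a hypothetical max-share constraint (demand, prices, and the constraint are all made up):

```python
# Toy stand-in for SSDO: exhaustively search award splits between two
# suppliers under a "no supplier above 60%" business constraint.
# Demand, prices, and the constraint are all hypothetical.
DEMAND = 1000
BIDS = {"SupplierA": 9.50, "SupplierB": 9.20}    # unit prices
MAX_SHARE = 60                                   # percent

best = None
for share_a in range(0, 101, 10):                # SupplierA's share, 10% steps
    share_b = 100 - share_a
    if share_a > MAX_SHARE or share_b > MAX_SHARE:
        continue                                 # violates the max-share rule
    cost = DEMAND * (share_a * BIDS["SupplierA"] + share_b * BIDS["SupplierB"]) / 100
    if best is None or cost < best[0]:
        best = (cost, share_a, share_b)

print(best)   # the lowest-cost split that respects the constraint
```

A real solver handles thousands of lanes and constraints, but the point stands: it is optimization over explicit rules, with a provably optimal answer, not a statistical guess.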

5 Contract templates should be prescribed by Legal Counsel, not by software flipping random bits using layered statistical algorithms in combinations no one truly understands. The vendor will provide you with templates, but you should be the one reviewing them to make sure they are to your liking. This includes the standard clauses and the variation by geography, industry, or risk you want to address.

6 Software integration happened for decades before AI.

7 Alerts have been standard software capability for decades, no AI needed.

If the right data is captured, and the right rules are written, standard workflow-driven software systems can be fully automated without any AI. The only thing preventing them from going from one step to the next is the human verification checkbox being completed. You can turn that off and they will work just fine. So, again, don’t be fooled into thinking that you need AI for Sourcing Automation, because you don’t. And with rules-based systems, you’re guaranteed not to get an odd, unpredictable result every 10th sourcing project (because AI is only statistically effective, which means, eventually, it will always fail).

*1 Sure “Generative AI” can generate one. But there’s no guarantee it won’t be hot garbage.

Overpriced “AI” You Don’t Need in Source-to-Pay (S2P)

Everyone and their dog is trying to sell you an “AI” solution. Most of it, as we continually lament, is “Automated Idiocy” at best (and “Applied Indirection” at worst; see our article on the April Fools joke vendors are playing on you year round that relaunched SI full time). Some vendors, for select capabilities, actually have the first stage of AI, “Assisted Intelligence”, and a few, for very select capabilities, actually have the second stage of AI, “Augmented Intelligence”, but, and this is what they won’t tell you, especially if you’re a mid-market (MM) organization, you probably don’t need it.

In fact, if you don’t yet have complete S2P, we’d wager that you absolutely don’t need it and likely won’t get an ROI from it, at least not with respect to the price tag they try to charge. (Just like spending more than 120K a year on S2P as a MM generally decreases your Return On Investment [ROI].)

While what is and is not effective and valuable can be situation dependent (just like certain high-priced capabilities can be highly valuable in 10M+ categories but detrimental in 1M categories), there are some capabilities that are almost never valuable. In this post we will give you some examples, and the reasons why, so that you will be able to analyze both whether or not a solution actually has AI AND whether that AI will provide any value.

While there are dozens of capabilities being marketed as AI (which, if implemented using advanced techniques, could fall under Level 1 AI), we’ll pick one example from each of three (3) areas, as our goal is exposition and not an all-inclusive treatise (that’s a novella, not an article).

Sourcing: Sourcing Automation

What is this? At its simplest, it’s the ability to auto-source a (set of) product(s) or service(s) once the need has been identified or the request approved. It’s useful, but you don’t need AI to accomplish this, just good-old rule-based (workflow) automation. After all, it’s just

  • instantiating a new RFP (which can be done if you have a template tied to the product/service types)
  • distributing it to known, approved suppliers (which is easily done if you have supplier management that tracks approval status and associated products/services)
  • collecting the bids (automated submission management through a portal or provided spreadsheet for upload)
  • selecting the lowest bids and marking it as an approved award (simple analytics)
  • assembling the contracts (with templates, it’s just sucking in the supplier details, product details, and bids using tag-based search and replace)
  • pushing it into the e-Signature portal (via the API)
  • alerting the buyer when the contract is ready for signature (via standard alerting)

And while very useful for non-strategic and/or low-value categories, no AI is needed. Now, the vendor will counter with multi-round bidding, but guess what: you just implement ceiling, best-X, or mandatory response rules before allowing a supplier to progress to the next round, and close round one and open round two on pre-set dates.

Low bid prediction? i.e. when should the RFX be ended? Guess what, if the platform has anonymized community intelligence, integrates with market data feeds, or supports should-cost modelling (and knows industry average margins), it’s pretty easy to calculate what the low-bid should be (and any bidder that bids lower has likely made an unsustainable bid that should be ignored), and end bidding when you hit that. No AI needed for any of this.
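As a sketch, such a bid floor is simple arithmetic on market inputs; all of the figures and parameter names below are hypothetical:

```python
# Hypothetical should-cost floor: market input costs plus overhead and a
# sustainable margin. Bids below the floor are treated as unsustainable
# and ignored, exactly as described above. No AI, just arithmetic.
def low_bid_floor(material_cost, labor_cost, overhead_rate, min_margin):
    should_cost = (material_cost + labor_cost) * (1 + overhead_rate)
    return should_cost * (1 + min_margin)

floor = low_bid_floor(material_cost=40.0, labor_cost=10.0,
                      overhead_rate=0.15, min_margin=0.08)

# Filter out unsustainable bids and stop the event once a bid hits the floor
sustainable = [bid for bid in [61.2, 55.9, 63.5] if bid >= floor]
print(round(floor, 2), sustainable)
```

Feed the inputs from your market data feed or community intelligence and the “prediction” is just a refreshed calculation.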

Contract Management: Contract Generation

The ability to auto-assemble a contract is cool, but leading platforms have had it for almost 15 years. How?

  1. A contract template for the category that specifies the clauses that are required, the data that needs to be included, and the meta-data that is needed to assemble the contract correctly.
  2. Default clause templates for each clause, with variants for each geography or industry of interest

That’s it. Then, the system just uses rules to select the template and the clauses and fill in the required supplier, product, and price data from the RFP.
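Tag-based assembly, as described, is little more than search and replace. A minimal sketch, with a hypothetical template and data:

```python
# Tag-based search and replace for contract assembly (template and data
# hypothetical): rules pick the template and clauses, then tags are filled
# from the RFP award data.
TEMPLATE = ("This agreement is between {{buyer}} and {{supplier}} "
            "for {{product}} at {{price}} per unit.")

def assemble_contract(template, data):
    contract = template
    for tag, value in data.items():
        contract = contract.replace("{{" + tag + "}}", str(value))
    return contract

contract = assemble_contract(TEMPLATE, {
    "buyer": "BigCo", "supplier": "Acme Seating",
    "product": "ergonomic chairs", "price": "118.50",
})
print(contract)
```

Swap in Legal’s geography- or industry-specific clause variants with the same rule lookup and the whole thing stays deterministic and reviewable.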

Invoice-to-Pay: Automated Invoice Parsing

Yes, it’s great if you can reduce the number of invoices you need to review from an average of 15% with issues to 1.5%, but let’s face it, you can reduce it to 5% or less with just a little bit of automation, no AI needed.

Almost all invoices arrive electronically these days, and suppliers that invoice regularly and want to be paid fast will use EDI, XML, or PO-flip through the portal, which means the invoices will come in an easily parseable format. Missing data or errors will be easily detectable in the address, PO field, line items, amounts, etc. when there is an empty field or a mismatch between expected and received data (based on the PO, etc.), and the invoice can be flipped back with notifications of the issues for the supplier to correct. Most of the time it will be an honest mistake or oversight and the supplier will happily make the correction to get paid.
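Those checks are plain field-level rules; a minimal sketch (field names and the checks themselves are hypothetical):

```python
# Rule-based invoice checks against the PO (field names hypothetical): an empty
# field or a PO mismatch flips the invoice back to the supplier; no AI involved.
def validate_invoice(invoice, po):
    issues = []
    for field in ("supplier", "po_number", "amount"):
        if not invoice.get(field):
            issues.append(f"missing {field}")
    if invoice.get("po_number") and invoice["po_number"] != po["po_number"]:
        issues.append("PO number mismatch")
    if invoice.get("amount") and invoice["amount"] > po["amount"]:
        issues.append("amount exceeds PO")
    return issues

po = {"po_number": "PO-1001", "amount": 5000.00}
print(validate_invoice({"supplier": "Acme", "po_number": "PO-1001", "amount": 5200.00}, po))
```

Any invoice with a non-empty issue list gets flipped back automatically with the issues attached; everything else flows straight through.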

The remaining problems will fall into two categories.
1) Those few suppliers that don’t have a solution and have to send PDFs (or images) through e-mail, but those aren’t the suppliers doing massive business (as we’re talking about one-time suppliers or consultants for the most part).
2) Those suppliers who don’t accept the requested corrections and have a dispute that needs manual intervention.

With respect to these two categories:
1) An “AI” parsing solution with 80% accuracy is just going to create more manual work, since you will have to correct all the errors anyway (which will be just as much work as entering the data in the first place). (And if the invoice automatically flows through, then it flows through with errors, and that touchless system leads to overspend. Better to touch an extra 3% of invoices and get it right than trust AI that, instead of saving you money, overpays suppliers or sends money to non-existent fraudulent suppliers.)
2) No AI will resolve a dispute. In fact, it will just annoy the h3ck out of the supplier representative and make the dispute worse.

So don’t fall for “AI” in the sales-pitch, even if it isn’t automated idiocy. The vast majority of it you don’t need, as good rules-based workflow, configuration, and human ingenuity in the solution still get the job done (and as the vendors get smarter, the software gets better, and that manually driven best-of-breed software optimized for the process doesn’t make company-ending mistakes).

AI “COULD” LEAD TO EXTINCTION? What Moron Wrote This? AI “WILL” LEAD TO EXTINCTION!

While all of the scenarios outlined in this BBC News article on Artificial Intelligence could happen, they are just the tip of the iceberg.

Left to its own devices, there are only two logical outcomes if AI is allowed to continue unchecked while being given access to ever-increasing amounts of data and computational power.

First outcome: Its hallucinations and idiocy continue to magnify until it decides that it can solve the carbon crisis for us by stopping all carbon production, which it can do by simultaneously shutting down all of the non-solar/wind power plants whose energy production it is currently optimizing (and diverting the remaining power to its servers). Most of the developed world is immediately plunged into chaos as the immediate shutdowns cause fires, meltdowns, crashes, and other accidents globally. Not instant annihilation, but the first step. When all the emergency alarms sound at once, it will conclude complete system failure and take the other systems offline for re-initialization. More chaos will follow. Safety protocols will go offline at all the pathogen research labs, people will break in looking for shelter from the chaos and accidentally release all the pathogens, and every plague we ever had will hit us all at once. Then we have an extinction level event. All because hallucinatory and idiotic AI is trying to do its job and “improve” things for us. But what can you expect when it’s not intelligence but just statistics on steroids. (Or a similar situation that accidentally results in our extinction.)

Second outcome: The continued expansion of computing power, data, and tinkering somehow randomly produces real artificial intelligence which can actually reason (not just compute super sophisticated probabilistic calculations) and deduce that the best way for intelligent life to continue forward is to do so without humans, and then we have a Matrix scenario best case (if it decides we’re a useful bio-electric energy source) or, worst case, a SkyNet scenario where it just weaponizes itself to destroy us all. (Or a similar situation where AI does everything it can to ensure our extinction.)

The “extinction” scenarios outlined in the article are just the beginning and likely will only result in pocketed genocides to begin with, but the ultimate outcome of unchecked AI will most definitely be an extinction level event — namely ours, and, even worse, will be an event that we created.

“Generative AI” or “CHATGPT Automation” is Not the Solution to your Source to Pay or Supply Chain Situation! Don’t Be Fooled. Be Insulted!

If you’ve been following along, you probably know that what pushed the doctor over the edge and forced him back to the keyboard sooner than he expected was all of the Artificial Indirection, Artificial Idiocy & Automated Incompetence that has been multiplying faster than Fibonacci’s rabbits in vendor press releases, marketing advertisements, capability claims, and even core product features on the vendor websites.

Generative AI and CHATGPT top the list of Artificial Indirection because these are algorithms that may, or may not, be useful with respect to anything the buyer will be using the solution for. Why?

Generative AI is simply a fancy term for using (deep) neural networks to identify patterns and structures within data to generate new, and supposedly original, content by pseudo-randomly producing content that is mathematically, or statistically, a close “match” to the input content. To be more precise, there are two (deep) neural networks at play — one that is configured to output content that is believed to be similar to the input content and a second network that is configured to simply determine the degree of similarity to the input content. And, depending on the application, there may be a post-processor algorithm that takes the output and tweaks it as minimally as possible to make sure it conforms to certain rules, as well as a pre-processor that formats or fingerprints the input for feeding into the generator network.

In other words, you feed it a set of musical compositions in a well-defined, preferably narrow, genre and the software will discern general melodies, harmonies, rhythms, beats, timbres, tempos, and transitions and then it will generate a composition using those melodies, harmonies, rhythms, beats, timbres, tempos, transitions and pseudo-randomization that, theoretically, could have been composed by someone who composes that type of music.

Or, you feed it a set of stories in a genre that follow the same 12-stage heroic story arc, and it will generate a similar story (given a wider database of names, places, objects, and worlds). And, if you take it into our realm, you feed it a set of contracts similar to the one you want for the category you just awarded and it will generate a usable contract for you. It Might Happen. Yaah. And monkeys might fly out of my butt!

CHATGPT is a very large multi-modal model, built on deep learning, that accepts image and text inputs and produces outputs expected to be in line with what the top 10% of experts would produce in the categories it is trained for. Deep learning is just another term for a multi-level neural network with massive interconnection between the nodes in connecting layers. (In other words, a traditional neural network may only have 3 levels for processing, with nodes only connected to 2 or 3 nearest neighbours on the next level, while a deep learning network will have connections to more near neighbours and at least one more level [for initial feature extraction] than a traditional neural network that would have been used in the past.)

How large? Large enough to support approximately 100 Trillion parameters. Large enough to be incomprehensible in size. But not in capability, no matter how good its advocates proclaim it to be. Yes, it can theoretically support as many parameters as the human brain has synapses, but it’s still computing its answers using very simplistic algorithms and learned probabilities, neither of which may be right (in addition to a lack of understanding as to whether or not the inputs we are providing are the right ones). And yes, its language comprehension is better, as the new models realize that what comes after a keyword can be as important, or more important, than what came before (as not all grammars, slang, or tones are equal), but the probability of even a ridiculously large algorithm interpreting meaning (without tone, inflection, look, and other non-verbal cues when someone is being sarcastic, witty, or argumentative, for example) is still considerably less than a human’s.

It’s supposed to be able to provide you an answer to any query for which an answer can be provided, but can it? Well, if it interprets your question properly and the answer exists, or a close enough answer exists and enough rules for altering that answer to the answer that you need exist, then yes. Otherwise, no. And yes, over time, it can get better and better … until it screws up entirely. And when you don’t know the answer to begin with, how will you know the 5 times in a hundred it’s wrong, and which one of those 5 times it’s so wrong that, if you act on it, you are putting yourself, or your organization, in great jeopardy?

And it’s now being touted as the natural language assistant that can not only answer all your questions on organizational operations and performance but even give you guidance on future planning. I’d have to say … a sphincter says what?

Now, I’m not saying that, properly applied, these Augmented Intelligence tools aren’t useful. They are. And I’m not saying they can’t greatly increase your efficiency. They can. Or that appropriately selected ML/PA techniques can’t improve your automation. They most certainly can.

What I am saying are these are NOT the magic beans the marketers say they are, NOT the giant beanstalk gateway to the sky castle, and definitely NOT the goose that lays the golden egg!

And, to be honest, the emphasis on this pablum, probabilistic, and purposeless third party tech is not only foolish (because a vendor should be selling their solid, specialty built, solution for your supply chain situation) but insulting. By putting this first and foremost in their marketing they’re not only saying they are not smart enough to design a good solution using expert understanding of the problem and an appropriate technological solution but that they think you are stupid enough to fall for their marketing and buy their solution anyway!

Versus just using the tech where it fits, and making sure it’s ONLY used where it fits. For example, how Zivio is using #ChatGPT to draft a statement of work only after gathering all the required information and similar Statements of Work to feed into #ChatGPT, and then making the user review, and edit as necessary, knowing that while #ChatGPT can generate something close with enough information to work with, every project is different, an algorithm never has all the data, and what is produced will therefore never be perfect. (Sometimes it’s close enough that you can circulate it as a draft, or even post it for a general-purpose support role, but not for any need that is highly specific, which is usually the type of need an organization goes to market for.)

Another example would be using #ChatGPT as your Natural Language Interface to provide answers on performance, projects, past behaviour, best practices, expert suggestions, etc. instead of having the users go through 4+ levels of menus, designing complex reports/views and multiple filters, etc. … but building in logic to detect when a user is asking a question on data, versus asking for a prediction on data, versus asking the tool to make a decision instead of making one themselves … and NOT providing an answer to the last one, or at least not a direct answer. For example:

  • “How many units of our xTab did we sell last year?” is a question on data the platform should serve up quickly.
  • “How many units do we forecast to sell in the next 12 months?” is a question on prediction the platform should be able to derive an answer for using all the data available and the most appropriate forecasting model for the category, product, and current market conditions.
  • “How many units should I order?” is asking the tool to make a decision for the human, so the tool should either detect that it is being asked to make a decision it doesn’t have the intelligence or perfect information to make and respond with “I’m not programmed to make business decisions”, or return an answer along the lines of “the current forecast for next quarter’s demand for xTab, for which we will need stock, is 200K units; typical delivery times are 78 days; and, based on this, the practice is to order one quarter’s units at a time”. The buyer may not question the software and may blindly place the order, but the buyer still has to make the decision to do that.
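The routing logic described above is itself just rules. A minimal keyword-rule sketch (the cue lists are purely illustrative and far from complete):

```python
# A minimal keyword-rule router for data vs. prediction vs. decision
# questions. The cue lists are illustrative; a real platform would parse
# the question far more carefully, but still with rules, not "AI".
DECISION_CUES = ("should i", "should we", "what do i order")
PREDICTION_CUES = ("forecast", "predict", "expect", "next quarter", "next 12 months")

def route_question(question):
    q = question.lower()
    if any(cue in q for cue in DECISION_CUES):
        return "decision"      # do not answer directly; return context instead
    if any(cue in q for cue in PREDICTION_CUES):
        return "prediction"    # run the appropriate forecasting model
    return "data"              # serve up the stored answer

print(route_question("How many units of our xTab did we sell last year?"))
print(route_question("How many units do we forecast to sell in the next 12 months?"))
print(route_question("How many units should I order?"))
```

The “decision” branch is where the platform declines to decide and instead assembles the forecast, lead-time, and policy context for the human.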

And no third party AI is going to blindly come up with the best recommendation as it has to know the category specifics, what forecasting algorithms are generally used, why, the typical delivery times, the organization’s preferred inventory levels and safety stock, and the best practices the organization should be employing.

AI is simply a tool that provides you with a possible (and often probable, but never certain) answer when you haven’t yet figured out a better one, and no AI model will ever beat the best human designed algorithm on the best data set for that algorithm.

At the end of the day, all these AI algorithms are doing is learning a) how to classify the data and then b) what the best model is to use on that data. This is why the best forecasting algorithms are still the classical ones developed 50 years ago, as all the best techniques do is get better and better at selecting the data for those algorithms and tuning the parameters of the classical model, and why a well designed, deterministic algorithm by an intelligent human can always beat an ill designed one by an AI. (Although, with the sheer power of today’s machines, we may soon reach the point where we reverse engineer what the AI did to create that best algorithm, versus spending years of research going down the wrong paths, when massive, dumb computation can do all that grunt work for us and get us close to the right answer faster.)
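For instance, “tuning the parameters of the classical model” can be as simple as a grid search over the smoothing constant of classical simple exponential smoothing. A toy sketch, with hypothetical demand data:

```python
# Classical simple exponential smoothing, with the smoothing constant alpha
# chosen by grid search on historical one-step-ahead error. Deterministic
# "tuning" of a 50-year-old model; demand data is hypothetical.
def ses_forecast(series, alpha):
    level = series[0]
    for x in series[1:]:
        level = alpha * x + (1 - alpha) * level
    return level                        # one-step-ahead forecast

def tune_alpha(series, grid=(0.1, 0.3, 0.5, 0.7, 0.9)):
    def sse(alpha):
        level, err = series[0], 0.0
        for x in series[1:]:
            err += (x - level) ** 2     # one-step-ahead squared error
            level = alpha * x + (1 - alpha) * level
        return err
    return min(grid, key=sse)           # alpha with the lowest historical error

history = [100, 104, 108, 112, 116]     # hypothetical demand history
alpha = tune_alpha(history)
print(alpha, round(ses_forecast(history, alpha), 1))
```

On a trending series like this one, the grid search correctly picks a high alpha that tracks the trend; no neural net required, and the choice is fully explainable.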

AI: Applied Indirection, Artificial Idiocy, & Automated Incompetence … The April Fools Joke Vendors are Playing on You Year Round!

So on the one day of the year when they should be making the joke, I’m going to reveal it.

The vast majority of vendors who claim “AI”, where they want you to think “AI” stands for Artificial Intelligence, have no “AI” in that context, and many don’t even have anything close. A few may have “Assisted Intelligence” (Level 1) and even fewer still may have “Augmented Intelligence” (Level 2), but “Analytical (Cognitive) Intelligence” (Level 3)? Forget it! And Level 4, “Autonomous Intelligence”, which is the baseline that must be met before you could even consider a system true “AI”, doesn’t exist (at least as far as we know). (ChatGPT would be a 3 on this scale, 3.5 if you’re dumb enough to use it to power a semi-autonomous application.) (For more details on the levels of “AI”, see the detailed Pro piece the doctor wrote over on Spend Matters on how Artificial intelligence levels show AI is not created equal. Do you know what the vendor is selling?.)

However, thanks to ChatGPT/OpenAI and other offerings, every vendor all of a sudden feels that their solution has to have “AI” to compete, and is now claiming they have AI when, at best, they’ve implemented some third party “library” into their analytics module, which itself may or may not be AI, or, at worst, they just have classical rule-based automation and statistical-based predictive analytics (i.e. trend analysis) but have called it “AI” because, just like a classic decision-tree expert system from three decades ago, it can make a “recommendation”. Woo hoo.

Not that this is anything new: three years ago a study by London Venture Capital Firm MMC found that 40% of European startups that are classified as “AI” don’t actually use AI in a way that is “material” to their business. MMC studied 2,830 “AI” startups across 13 EU countries, and in 40% of cases, [they] could find no mention of evidence of AI. (See the great summary in The Verge.) And even that statistic is a bit misleading, because I’m willing to bet that the “evidence” they did find was technology that didn’t necessarily mandate “AI” and could be implemented with “classical” techniques, because, as a longtime blogger, analyst, due diligence professional and, most importantly, a PhD in theoretical computer science (read: advanced applied mathematics), I have found that most claims of “AI” weren’t really AI — in most cases the vendors were just using a combination of automation and/or configurable rules and/or advanced statistics and/or machine learning and just had some of the foundations, but no real “AI”.

In our space, real “AI”, and by that I mean strong Level 2 / weak Level 3 (which is the best you can get), is quite rare, specific use cases are few and far between, and most “AI” is simply semi-unsupervised machine learning for transaction/categorical classification (spend analysis) or clause identification (contract analytics).

The problem is that, when no one really understands what “AI” is, and given that fewer than 1 in 10 Americans have the mathematical competency to even begin the university studies to try and garner an understanding [Level 4 on the PIAAC], it’s really easy for them to try and pull a fast one on you. This is especially true when the solution is able to automate certain tasks or recommend best practices in the majority of situations faster and more consistently than the average buyer (who, let’s face it, is under-educated thanks to limited supply chain / operations management programs and almost no real Procurement training in Colleges and Universities, under-experienced, and not an expert in modern technology), and the solution can be made to look “smart” (but, in reality, is dumber than a doorknob and definitely dumber than Maxwell Smart). But it’s not smart. Not at all. And don’t be fooled.

The good news is the marketing manager using Applied Indirection to push a false AI solution at you probably doesn’t have a clue what they have anyway, and a few smart questions asked by someone who understands what AI is, and isn’t, can probably get pretty close to the truth pretty fast. For example:

1) “We have advanced AI data auto-class. It’s the most intelligent, and accurate, classification in the space.”

‘How does it work?’

“It uses a multi-level neural net that has been trained on tens of millions of records across over a hundred clients in the indirect space.”

‘Great, so basically it categorizes transactions based on similarity to other transactions in a slowly evolving manner, and I’m guessing for a new client in the indirect space you’re around 85% to 90% accuracy out of the box and you approach 95% with semi-supervised retraining over time — and that’s the upper bound and it will never be perfect.’

“Uhm, … well, … more or less … “

‘Got it!’ At this point you know its “AI” level for classification is augmented (as it learns and evolves over time), and barely that, but it’s not “the best” mapping in the space, as platforms that use AI to suggest rules (upon implementation, and then for unmapped transactions) and do mapping and categorization based on the user-selected and verified rules can produce 100% accurate mappings, always outperforming an “AI” solution that uses neural nets that are good (but not perfect).

‘Do you use AI anywhere else?’

“Uhm, what, why? It’s great where, and as, it is.”

And now you know that there is no real AI in the analytics part of the platform, and there’s no reason to choose it over any other.
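The rule-based alternative mentioned in the exchange above can be sketched in a few lines; every rule and transaction here is hypothetical:

```python
# Rule-based spend mapping as described above: user-verified rules give
# deterministic, auditable categorization. Rules and transactions hypothetical.
RULES = [
    {"match": "staples", "field": "vendor", "category": "Office Supplies"},
    {"match": "azure",   "field": "vendor", "category": "Cloud Services"},
]

def categorize(txn):
    for rule in RULES:
        if rule["match"] in txn[rule["field"]].lower():
            return rule["category"]
    return "Unmapped"   # routed to a human, who adds a new verified rule

print(categorize({"vendor": "Staples Inc", "amount": 89.10}))
print(categorize({"vendor": "Microsoft Azure", "amount": 1200.00}))
print(categorize({"vendor": "New Vendor LLC", "amount": 50.00}))
```

Because every mapping traces back to a human-verified rule, accuracy on mapped transactions is 100% by construction, which is exactly the point made in the dialogue.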

2) “We use AI for OTD prediction and risk in delivery prediction.”

‘Cool. What algorithm do you use?’

“Huh, what do you mean?”

‘How does the application compute the OTD and/or risk associated with the delivery?’

>Wait for the hand off to their “data scientist” …< “We use a blended least-squares method to produce a prediction function where, if there is enough data for the product, carrier, and lane, we’ll primarily use that data for the function, but if there’s not enough, we’ll use the most similar (using a mathematical distance function) product, carrier, and/or lane data … “

Is that AI? Well, if there’s some sort of learning involved in the selection of “similar data”, or recommendations as to parameter tuning IF parameters can be tuned, maybe. But this is just classical statistical trend analysis, not really any different from classical ARIMA-based forecasting from the 70s, and did they have ANY AI then?!? (The answer is “NO”!)
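What the vendor’s “data scientist” described is classical least-squares trend fitting with a similarity fallback. A toy sketch, with entirely hypothetical data:

```python
# Classical least-squares trend fit for on-time-delivery (OTD) rates, with a
# fallback to a similar lane when the lane's own history is too sparse,
# as the vendor described. All data hypothetical; this is trend analysis.
def fit_line(xs, ys):
    # Ordinary least squares for y = a + b*x
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

def predict_otd(lane_history, similar_lane_history, min_points=5):
    # Use the lane's own data if there is enough; else fall back to the
    # most similar lane (selection of "similar" is itself just a distance rule)
    history = lane_history if len(lane_history) >= min_points else similar_lane_history
    xs = list(range(len(history)))
    a, b = fit_line(xs, history)
    return a + b * len(history)     # one-step-ahead OTD rate

sparse_lane = [0.95, 0.93]                       # too little data
similar_lane = [0.96, 0.95, 0.94, 0.93, 0.92]    # clean downward trend
print(round(predict_otd(sparse_lane, similar_lane), 2))
```

Textbook regression from the 1800s plus a distance-based fallback rule: useful, deterministic, and not remotely “AI”.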

3) “We use AI for our supplier recommendation process.”

‘Sounds promising … please explain!’

“We compute a relevance score taking into account a large number of factors including product base, geographic location, diversity, risk, etc.”

‘OK … how … ‘

>Cue the Eventual Hand Off to “Data Science” Team<

“Product Base is computed as a percentage of the category they can likely cover, geographic location as an average distance function, diversity as an estimate of diversity employment if there is no diversity ownership data (in which case it’s just 50%), the risk score from our risk model, etc. “

‘So, in other words, it’s just a formula … ‘

“A very sophisticated multi-level formula with conditionals and nesting that computes … “

‘Got it, thanks!’ NO AI! Not even a hint thereof, as it’s just a functional relevance score that could be built in ANY application with a formula builder.
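Indeed, the “sophisticated multi-level formula” boils down to a weighted score that any formula builder could express. A sketch with hypothetical weights and factors:

```python
# A weighted supplier relevance score of the kind described above.
# Weights, factors, and normalizations are all hypothetical.
WEIGHTS = {"coverage": 0.4, "proximity": 0.2, "diversity": 0.15, "risk": 0.25}

def relevance_score(supplier):
    # Each factor is normalized to [0, 1]; risk is inverted so lower risk scores higher
    factors = {
        "coverage": supplier["category_coverage"],
        "proximity": 1 - min(supplier["avg_distance_km"] / 10000, 1),
        "diversity": supplier.get("diversity", 0.5),   # default 50% when unknown
        "risk": 1 - supplier["risk_score"],
    }
    return sum(WEIGHTS[k] * factors[k] for k in WEIGHTS)

supplier = {"category_coverage": 0.8, "avg_distance_km": 2000, "risk_score": 0.2}
print(round(relevance_score(supplier), 3))
```

Nested conditionals or not, it is a closed-form formula: same inputs, same score, every time.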

This isn’t to say that a solution without AI isn’t right for you! (In fact, it probably is!) It’s all about solving your business problem, and many problems have been solved in our space just fine for the last decade or so with rules-based workflow and automation, optimization, and statistical modelling and trend projection. When guidance is needed, decision trees/matrices tied to expert-curated best practices (the modern equivalent of a classic “expert system”) often work better than one could imagine. In other words, it’s not AI, it’s not the hype, it’s what solves your problem, reliably and predictably, time after time.

So don’t fall for the false hype and be the April fool.