
AI: Applied Indirection, Artificial Idiocy, & Automated Incompetence … The April Fools Joke Vendors are Playing on You Year Round!

So on the one day of the year when they should be making the joke, I’m going to reveal it.

The vast majority of vendors who claim “AI”, where they want you to think “AI” stands for Artificial Intelligence, have no “AI” in that context, and many don’t even have anything close. A few may have “Assisted Intelligence” (Level 1), and fewer still may have “Augmented Intelligence” (Level 2), but “Analytical (Cognitive) Intelligence” (Level 3)? Forget it! And Level 4, “Autonomous Intelligence”, which is the baseline that must be met before you could even consider a system true “AI”, doesn’t exist (at least as far as we know). (ChatGPT would be a 3 on this scale, 3.5 if you’re dumb enough to use it to power a semi-autonomous application.) (For more details on the levels of “AI”, see the detailed Pro piece the doctor wrote over on Spend Matters: Artificial intelligence levels show AI is not created equal. Do you know what the vendor is selling?)

However, thanks to ChatGPT/OpenAI and other offerings, every vendor all of a sudden feels that their solution has to have “AI” to compete, and is now claiming they have AI when, at best, they’ve implemented some third party “library” into their analytics module, which itself may or may not be AI, or, at worst, they just have classical rule-based automation and statistical-based predictive analytics (i.e. trend analysis) but have called it “AI” because, just like a classic decision-tree expert system from three decades ago, it can make a “recommendation”. Woo hoo.

This is nothing new: three years ago, a study by London venture capital firm MMC found that 40% of European startups classified as “AI” don’t actually use AI in a way that is “material” to their business. MMC studied 2,830 “AI” startups across 13 EU countries and, in 40% of cases, could find no mention of evidence of AI. (See the great summary in The Verge.) And even that statistic is a bit misleading, because I’m willing to bet that the “evidence” they did find was technology that didn’t necessarily mandate “AI” and could be implemented with “classical” techniques. As a longtime blogger, analyst, due diligence professional and, most importantly, a PhD in theoretical computer science (read: advanced applied mathematics), I have found that most claims of “AI” weren’t really AI: in most cases the vendors were just using a combination of automation and/or configurable rules and/or advanced statistics and/or machine learning, and had some of the foundations, but no real “AI”.

In our space, real “AI”, and by that I mean strong Level 2 / weak Level 3 (which is the best you can get), is quite rare, specific use cases are few and far between, and most “AI” is simply semi-supervised machine learning for transaction/category classification (spend analysis) or clause identification (contract analytics).
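To make that concrete, here’s a minimal sketch (in Python, using scikit-learn, with completely made-up transactions and categories) of what that “machine learning for transaction classification” typically boils down to: a statistical text classifier, nothing more.

```python
# Minimal sketch: what most "AI" spend classification reduces to --
# a statistical text classifier trained on labelled transactions.
# All data below is invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labelled transactions (description -> category)
train_descriptions = [
    "dell latitude 5530 laptop", "lenovo thinkpad x1 carbon",
    "staples copy paper 500 sheets", "bic ballpoint pens box",
    "delta airlines flight ord-lga", "marriott hotel 2 nights",
]
train_categories = [
    "IT Hardware", "IT Hardware",
    "Office Supplies", "Office Supplies",
    "Travel", "Travel",
]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(train_descriptions, train_categories)

# "Classify" new, unmapped transactions by statistical similarity to the
# training data -- no understanding, just correlation.
print(model.predict(["hp elitebook 840 notebook", "united airlines sfo-jfk"]))
# -> ['IT Hardware' 'Travel'] (probably; it's statistics, not intelligence)
```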

The problem is that, when no one really understands what “AI” is, and given that fewer than 1 in 10 Americans have the mathematical competency to even begin the university studies required to garner an understanding [Level 4 on the PIAAC], it’s really easy for them to pull a fast one on you. This is especially true when the solution can automate certain tasks, or recommend best practices in the majority of situations, faster and more consistently than the average buyer (who, let’s face it, is under-educated thanks to limited supply chain / operations management programs and almost no real Procurement training in Colleges and Universities, under-experienced, and not an expert in modern technology), and the solution can be made to look “smart” (when, in reality, it is dumber than a doorknob and definitely dumber than Maxwell Smart). But it’s not smart. Not at all. And don’t be fooled.

The good news is the marketing manager using Applied Indirection to push a false AI solution at you probably doesn’t have a clue what they have anyway, and a few smart questions asked by someone who understands what AI is, and isn’t, can probably get pretty close to the truth pretty fast. For example:

1) “We have advanced AI data auto-class. It’s the most intelligent, and accurate, classification in the space.”

‘How does it work?’

“It uses a multi-level neural net that has been trained on tens of millions of records across over a hundred clients in the indirect space.”

‘Great, so basically it categorizes transactions based on similarity to other transactions in a slowly evolving manner, and I’m guessing that for a new client in the indirect space you’re around 85% to 90% accuracy out of the box, approaching 95% with semi-supervised retraining over time, and that’s the upper bound: it will never be perfect.’

“Uhm, … well, … more or less … “

‘Got it!’ At this point you know its “AI” level for classification is augmented (as it learns and evolves over time), and barely, but it’s not “the best” mapping in the space: platforms that use AI to suggest rules (upon implementation, and then for unmapped transactions) and do mapping and categorization based on user-selected and verified rules can produce 100% accurate mappings, always outperforming an “AI” solution that uses neural nets that are good (but not perfect). (See the sketch below.)

‘Do you use AI anywhere else?’

“Uhm, what, why? It’s great where, and as, it is.”

And now you know that there is no real AI in the analytics part of the platform, and there’s no reason to choose it over any other.
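For contrast, here’s a minimal sketch (Python again, with invented rules) of the rule-based mapping approach just described: once a rule is user-verified, every transaction it matches maps the same way, every single time, and anything unmatched gets flagged for a human instead of being silently miscategorized.

```python
# Minimal sketch: rule-based categorization, the "boring" alternative.
# Rules are suggested at implementation, then user-verified; once verified,
# every matching transaction maps identically, deterministically.
RULES = [  # (substring to match, category) -- checked in order; made up
    ("laptop",   "IT Hardware"),
    ("notebook", "IT Hardware"),
    ("paper",    "Office Supplies"),
    ("flight",   "Travel"),
]

def categorize(description: str) -> str:
    d = description.lower()
    for needle, category in RULES:
        if needle in d:
            return category
    return "UNMAPPED"  # route to a human (who can verify a new rule)

print(categorize("Dell Latitude 5530 laptop"))  # IT Hardware, every time
print(categorize("Mystery supplier invoice"))   # UNMAPPED -> human review
```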

2) “We use AI for OTD (on-time delivery) prediction and delivery risk prediction.”

‘Cool. What algorithm do you use?’

“Huh, what do you mean?”

‘How does the application compute the OTD and/or the risk associated with the delivery?’

>Wait for the hand off to their “data scientist” …< “We use a blended least-squares method to produce a prediction function where, if there is enough data for the product, carrier, and lane, we’ll primarily use that data for the function, but if there’s not enough, we’ll use the most similar (using a mathematical distance function) product, carrier, and/or lane data … “

Is that AI? Well, if there’s some sort of learning involved in the selection of “similar data”, or in recommendations for parameter tuning (IF parameters can even be tuned), maybe. But otherwise this is just classical statistical trend analysis, not really any different from the ARIMA-based forecasting of the 70s, and did they have ANY AI then?!? (The answer is “NO”!)
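If you’re wondering what that “blended least-squares method” might look like under the hood, here’s a minimal sketch (Python with numpy; the threshold, features, and history are all invented) of a least-squares trend fit with a distance-based fallback to “similar” lane data. Statistics, not intelligence.

```python
# Minimal sketch: "AI" on-time-delivery prediction that is really just an
# ordinary least-squares trend fit, with a nearest-neighbour fallback when
# the (product, carrier, lane) has too little history. Numbers invented.
import numpy as np

MIN_HISTORY = 5  # hypothetical threshold for "enough data"

def predict_otd(history, features, key):
    """Predict next-period on-time-delivery rate for (product, carrier, lane)."""
    obs = history.get(key, [])
    if len(obs) < MIN_HISTORY:
        # Not enough history: borrow the "most similar" key's data, where
        # similarity is just Euclidean distance over numeric lane features.
        target = np.array(features[key])
        key = min(
            (k for k in history if len(history[k]) >= MIN_HISTORY),
            key=lambda k: np.linalg.norm(np.array(features[k]) - target),
        )
        obs = history[key]
    t = np.arange(len(obs))
    slope, intercept = np.polyfit(t, obs, 1)  # ordinary least squares
    return float(np.clip(slope * len(obs) + intercept, 0.0, 1.0))

history = {
    ("widget", "acme-freight", "chi-nyc"): [0.92, 0.90, 0.93, 0.91, 0.94, 0.95],
    ("widget", "acme-freight", "chi-bos"): [0.88, 0.87],  # too little data
}
features = {
    ("widget", "acme-freight", "chi-nyc"): [790, 2],  # e.g. miles, transfers
    ("widget", "acme-freight", "chi-bos"): [850, 2],
}
print(predict_otd(history, features, ("widget", "acme-freight", "chi-bos")))
```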

3) “We use AI for our supplier recommendation process.”

‘Sounds promising … please explain!’

“We compute a relevance score taking into account a large number of factors including product base, geographic location, diversity, risk, etc.”

‘OK … how … ‘

>Cue the Eventual Hand Off to “Data Science” Team<

“Product Base is computed as a percentage of the category they can likely cover, geographic location as an average distance function, diversity as an estimate of diversity employment if there is no diversity ownership data (in which case it’s just 50%), the risk score from our risk model, etc. “

‘So, in other words, it’s just a formula … ‘

“A very sophisticated multi-level formula with conditionals and nesting that computes … “

‘Got it, thanks!’ NO AI! Not even a hint thereof, as it’s just a functional relevance score that could be built in ANY application with a formula builder.
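And just to drive the point home, here’s a minimal sketch (Python; the weights and factor computations are hypothetical, beyond the vendor’s stated 50% diversity default) of what that “very sophisticated multi-level formula” reduces to:

```python
# Minimal sketch: the "AI" supplier relevance score, reconstructed as what
# it actually is -- a weighted formula over a handful of factor scores.
# Weights, factors, and the example supplier are all invented.
WEIGHTS = {"coverage": 0.4, "proximity": 0.2, "diversity": 0.2, "risk": 0.2}

def relevance(supplier: dict) -> float:
    coverage  = supplier["category_coverage"]           # % of category covered
    proximity = 1.0 / (1.0 + supplier["avg_distance_km"] / 1000.0)
    # Per the vendor: fall back to 50% when there is no diversity data
    diversity = supplier.get("diversity_ownership", 0.5)
    risk      = 1.0 - supplier["risk_score"]            # lower risk is better
    factors = {"coverage": coverage, "proximity": proximity,
               "diversity": diversity, "risk": risk}
    return sum(WEIGHTS[f] * v for f, v in factors.items())

print(relevance({"category_coverage": 0.8, "avg_distance_km": 500,
                 "risk_score": 0.2}))
# A number pops out. Conditionals and nesting don't make it intelligence.
```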

This isn’t to say that a solution without AI isn’t right for you! (In fact, it probably is!) It’s all about solving your business problem, and many problems in our space have been solved just fine for the last decade or so with rules-based workflow and automation, optimization, and statistical modelling and trend projection. When guidance is needed, decision trees/matrices tied to expert-curated best practices (the modern equivalent of a classic “expert system”, sketched below) often work better than one could imagine. In other words, it’s not about AI, it’s not about the hype, it’s about what solves your problem, reliably and predictably, time after time.
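For example, here’s a minimal sketch (Python, with invented thresholds and advice) of an expert-curated decision tree for sourcing guidance: dumb as a doorknob, and reliably useful anyway.

```python
# Minimal sketch: an expert-curated decision tree for sourcing guidance --
# the kind of rules-based recommendation that just works, no "AI" required.
# Thresholds and advice below are hypothetical.
def recommend_sourcing_approach(spend: float, suppliers: int,
                                strategic: bool) -> str:
    if strategic:
        return "Run a full strategic sourcing event with multi-round negotiation."
    if spend < 10_000:
        return "Buy from catalog / p-card; an event costs more than it saves."
    if suppliers < 3:
        return "Limited supply base: negotiate directly and develop alternates."
    return "Run a competitive RFQ across the qualified supplier base."

print(recommend_sourcing_approach(spend=50_000, suppliers=5, strategic=False))
# -> "Run a competitive RFQ across the qualified supplier base."
```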

So don’t fall for the false hype and be the April fool.