Category Archives: AI

Have all the Big X fallen for Gen-AI? Or is this their new insidious plan to hook you for life?

Note the Sourcing Innovation Editorial Disclaimers and note this is a very opinionated rant! Your mileage may vary! (And it’s not about any firm in particular.)

Almost every single Big X and mid-sized consulting firm is putting “Gen-AI” adoption in its top 10 (or top 5) strategic imperatives for Procurement and its future, insisting that it’s essential for analytics (gasp) and automation (WTF?!?).

It’s absolutely insane. First of all, there are almost no valid uses for Gen-AI in business (unless, of course, your corporation is owned by Dr. Evil), and even fewer valid uses for Gen-AI in Procurement.

Secondly, the “Gen” in “Gen-AI” stands for “Generative”, which literally means MAKE STUFF UP. It DOES NOT analyze anything. Furthermore, automation is about predictability and consistency, and Gen-AI gives you neither! How the heck could you automate anything with it? You CANNOT! Automation requires a completely different AI technology built on classical (and predictable) machine learning (where you can accurately calculate confidences and break/stop when the confidence falls below a threshold).
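
The break/stop pattern just described can be sketched in a few lines. This is a minimal illustration only: the classifier here is a stand-in (a hypothetical keyword rule table, not a real trained model), and the 0.90 threshold is an assumed policy value — the point is that a classical model gives you a confidence you can gate on, which Gen-AI does not.

```python
# Minimal sketch of confidence-gated automation with a classical model.
# The classifier below is a hypothetical stand-in; any classical model that
# outputs calibrated class probabilities fits the same pattern.

CONFIDENCE_THRESHOLD = 0.90  # assumed policy: below this, stop and escalate

def classify_invoice_line(description: str) -> tuple[str, float]:
    """Stand-in for a trained classifier: returns (category, confidence)."""
    rules = {"laptop": ("IT Hardware", 0.97), "stapler": ("Office Supplies", 0.95)}
    for keyword, (category, confidence) in rules.items():
        if keyword in description.lower():
            return category, confidence
    return "Unknown", 0.30  # low confidence when nothing matches

def automate_or_escalate(description: str) -> str:
    category, confidence = classify_invoice_line(description)
    if confidence >= CONFIDENCE_THRESHOLD:
        return f"auto-classified: {category}"  # safe to automate
    return "escalated to human review"         # break/stop on low confidence

print(automate_or_escalate("Dell laptop, 16GB RAM"))  # auto-classified: IT Hardware
print(automate_or_escalate("misc. widget"))           # escalated to human review
```

The design point is the second branch: the pipeline stops deterministically when the model is unsure, which is exactly what you cannot do when the model fabricates output with no reliable confidence measure.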

Which raises the question: have their marketers fallen for the Gen-AI marketing bullcr@p hook, line, and sinker? Or is this their new insidious plan to get you on a never-ending work order? After all, when it inevitably fails a few days after implementation, they have their excuses ready to go (the same excuses being given by the companies spending tens of millions marketing these models), which are the same excuses that have been given to us since Neural Nets were invented: “it just needs more content for training”, “it just needs better prompting”, “it just needs more integration with your internal data sources” … lather, rinse, repeat … ad infinitum.

And, every year it will get a few percentage points better, but if it gets only 2% better (relative) per year, and the best Gen-AI instance now is scoring (slightly) less than 34% on the SOTA scale, it will be (at least) 9 (NINE) years before it reaches 40% accuracy. In comparison, if you had an intern who only performed a task acceptably 40% of the time, how long would he last? Maybe 3 weeks.

But these Big X know that once you sink seven (7) figures into a license, implementation, integration, and custom training, you’re hooked, and you will keep pumping in six to seven figures a year even though you should have dropped the smelly rotten Gen-AI hot potato the minute you saw the demo (and asked them for a more traditional enterprise application they can deliver with guaranteed value).
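
For the skeptics, the nine-year claim checks out arithmetically, assuming “2% better per year” means a 2% *relative* (compounding) improvement on a score starting at 34%:

```python
# Compounding a 2% relative improvement per year on a 34% score:
# how many years until it crosses 40%?
score = 34.0
years = 0
while score < 40.0:
    score *= 1.02
    years += 1
print(years, round(score, 1))  # prints: 9 40.6
```

After 8 years the score is still ~39.8%; only in year 9 does it cross 40%. (If the author had meant 2 absolute percentage points per year, it would take only 3 years, so the relative reading is the one consistent with the text.)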

So, maybe they aren’t misled when it comes to Gen-AI. Maybe they are just shrewd financial managers, because it’s their biggest opportunity to hook you for life since they convinced you to outsource for “labour arbitrage” and “currency exchange” (and not for materials / products you can’t get / make at home) and other bullsh!t arguments that no society in the history of the world EVER outsourced for. (EVER!) Because if you install this bullcr@p and get to the point of “sunk cost”, you will continue to sink money into it. And they know it. Or do they?

In our view, the sad reality is that while one or two financial managers may have gone deep enough down the Gen-AI rabbit hole to figure this out, most of them likely just don’t see the downside for them or their clients.  Given all the hype the creators of these Gen-AI* models are pushing, with prolific examples only of success cases and upside, with very little education on the realities (because few of us are highlighting all of the risks of Gen-AI and failures when misapplied), maybe all they are seeing are promises that are just too good to ignore.

So, please, ignore the Gen-AI until you’ve validated a use case and instead remember When You Should Use Big X. Every solution and services provider has strengths and weaknesses. Please use them for their strengths, be successful, and increase the project success rate. (Post-Edit: As of 2024, technology project failure is at an all-time high. We don’t want to see any more of it!)

*Remember that AI, and Gen-AI in particular, is a fallacy.

The Gen AI Fallacy

For going on seven (7) decades, AI cult members have been telling us that if they just had more computing power, they’d solve the problem of AI. For going on seven (7) decades, they haven’t.

They won’t as long as we don’t fundamentally understand intelligence, the brain, or what is needed to make a computer brain.

Computing will continue to get exponentially more powerful, but it’s not just a matter of more powerful computing. The first AI program had a single core to run on. Today’s AI programs have 10,000-core superclusters. The first AI programmer had only his salary and elbow grease to code and train the model. Today’s AI companies have hundreds of employees and billions in funding, and have spent $200M to train a single model … which, upon release to the public, told us we should all eat one rock per day. (Which shouldn’t be unexpected, as the number of cores powering a single model today is still less than the number of neurons in a pond snail.)

Similarly, the “models” will get “better”, relatively speaking (just like deep neural nets got better over time), but if they are not 100% reliable, they can never be used in critical applications, especially when you can’t even reliably predict confidence. (Or, even worse, you can’t even have confidence the result won’t be 100% fabrication.)

When the focus was narrow machine learning and focussed applications, and we accepted the limitations we had, progress was slow, but it was there, it was steady, and the capabilities and solutions improved yearly.

Now the average “enterprise” solution is decreasing in quality and application, which is going to erase decades of building trust in the cloud and reliable AI.

And that’s the fallacy. Adding more cores and more data just accelerates the capacity for error, not improvement.

Even a smart Google Engineer said so. (Source)

Why Do Outsourcing and AI Go So Wrong?

In a recent post on how We Need to Hasten Onshoring and Nearshoring, Jon The Revelator was inspired to ask the following question:

even though outsourcing and AI have merit when properly implemented, why do things go so wrong?

This was after noting, in another post, that we have suffered year-by-year, decade-by-decade disappointment when 80% (and even higher) of initiatives fail to achieve the expected outcome.

Because in both cases [and this assumes the case where the organization is implementing real, classic, traditional AI for a tried-and-true use case and not modern Gen(erative) A(rtificial) I(diocy)], things have gone wrong, and sometimes terribly wrong, on a regular basis.

So, the doctor answered.

Fundamentally, there are two reasons that things consistently go wrong.

The first reason is the same reason things go so wrong when you put an accountant in charge of a major aerospace company or a lawyer in charge of a major hobby gaming company (when the first has zero understanding of aerospace engineering and the second of what games are and what fans want from them).

Like the accountant and the lawyer, they don’t understand their organizational and stakeholder/user needs!

The second major reason is that they don’t understand what these “solutions” actually do and how to properly qualify, select, and implement them. And, most importantly, what to realistically expect from them … and when.

A GPO is not a GPO is not a GPO — these Group Purchasing Organizations specialize by industry and region; and in making an impact by category and usage. They are not everything for everyone.

AI is not AI is not AI (unless it’s all Gen-AI, then it’s all bullcr@p). Until Gen-AI, the doctor was promoting ALL Advanced Sourcing Tech, including properly designed, implemented, and tested AI, because the right AI was as close to a miracle as you’ll get. (And the wrong AI will bankrupt you.) Now, any AI post 2020 is suspect to the nth degree.

Simply stated, the failures are because they all think they can press the big red easy button and throw it over the wall. But you can’t manage what you don’t understand! And until the world remembers this, these failures will continue to happen on a consistent basis.

And, as organizations continue to press that Gen-AI powered “easy” button while outsourcing more and more of their critical operations, expect to see a resurgence of the big supply chain disasters, like the ones we saw in the 90s and the 00s (including the ones which wiped out billion-dollar companies). Hard to believe that only nine years ago the doctor was worried about companies relying on outdated ERPs ending up in the supply chain disaster record books, given how many of those disasters were the result of a big-bang ERP implementation. However, the risks associated with Gen-AI make ERP risks look like training-wheel risks!

As a result, it’s all the more critical that you select the right provider and / or the right solution if you want a decent chance of success. The worst part of all this is that while there have been spectacular failures, most of the failures were not the result of selecting a bad provider or a bad solution, but the result of selecting the wrong provider or the wrong solution for you. (Remember, provider sales people are not incentivized to qualify clients for appropriateness; they are incentivized to sell. It’s your job to qualify them for you. In other words, even though there are bad providers and bad solutions out there, they are considerably fewer than there were in the days when Silicon Snake Oil was all the rage.) In the majority of failures, primarily those that weren’t spectacular, the providers were good providers with good people, but when the solution they offer is a square peg for your smaller round hole, what should be expected?

More Valid Uses for Gen-AI … this time IN Procurement!

Some of you were upset that the doctor’s last post on Valid Uses for Gen-AI wasn’t very Procurement-centric, arguing that there are valid uses for Gen-AI in Procurement and that the doctor should have focussed on, or at least included, those, because why else would almost every vendor and their dog (85%+ of them) be putting “AI” front and center on their website?!

Well, you’re right! To be completely fair, the doctor should acknowledge these valid uses, even if they are very few and very far between. So he will. Those of you following him closely will note that he mentioned some of these in his comment on LinkedIn to Sarah Scudder’s post on how “AI is a buzzword“.

AI is a lot more than a buzzword, but let’s give Gen-AI its due … in Procurement … first.

With Gen-AI you can:

1. Create a “you” chat-bot capable of responding to a number of free-form requests that can be mapped to standard types.
This is especially useful if the organization employs one or more annoying employees who always wait too long to request goods and then, after you place the order, insist on emailing you every day to ask “are they here yet”, even though you flat out told them the goods are coming by ship; that it typically takes 3 days to get them to the port, 3 to 14 days to get them on a ship, 24 days to sail them across the ocean, 3 to 7 days to get the ship into a dock, 3 to 4 days to unload the ship, and 3 to 4 days from the port; and that this adds up to a minimum delivery time of 39 days, or almost six weeks, so asking in week one just shows how stupid this employee is.

2. Similarly, you can create a “you” chatbot for RFP Question Response.
More specifically, you can create a bot that can simply regurgitate the answers to sales people who won’t read the spec and insist on emailing you on a daily basis with questions you already answered, and which they would realize if they weren’t so damn lazy and just read the full RFP.

3. Create meaningless RFPs from random “spec sheets”.
Specifically, take all those random “spec sheets” the organizational stakeholder downloaded from the internet, generate an RFP just so you can check a box, send it out, and make him happy. (Even though no good RFP ever resulted from using vendor RFP templates or spec sheets.) This is especially useless if you have a subscription with a big analyst firm that includes helping you identify the top 5 vendors you are going to invite to the RFP, where you will focus on the service, integration, implementation, and relationship aspects because the analyst firm has already qualified that the tech will meet your needs. (After all, sales, marketing, human resources, and other non-technical buyers love to be helpful in this way and don’t realize that just about every “sales automation”, “content management”, and “application system” has all of the same core features, and that you can usually make do with any one of a dozen or more low-cost “consumerized” freeware/shareware/pay-per-user SaaS subscriptions.)

4. Or, do something slightly more useful and auto-fill your RFPs with vendor-ish data.
You could use the AI to ingest ALL of a vendor’s website, marketing, and sales materials, as well as third-party summaries and reviews, and auto-fill as much of your RFP as you can before sending it to the vendor. It could then approximately score each field based on key words, to ensure that the vendor is likely capable of meeting all of your minimum requirements across the board before you ask them to fill out the RFP and, more importantly, before you spend hours, or days, reviewing their response.

5. Identify unusual or risky requests or clauses in a “ready to go” contract.
Compare the contract draft handed to you by the helpful stakeholder to the default ones in your library that were (co-)drafted by actual Procurement professionals and vetted by Legal and don’t have unusual, risky, or just plain stupid clauses. For example, an unvetted draft could have a clause that says your organization accepts all liability risk, you agree to pay before goods are even shipped, you’ll accept substitute SKUs without verification, etc. (because the helpful stakeholder just took the vendor’s suggested one-sided contract and handed it to you).

6. Automatic out-of-policy request denial.
Program it to just say “denied” for any request that doesn’t fall close to organizational norms.

7. Generate Kindergarten level summaries of standard reports for the C-Suite.
Got a C-suite full of bankers, accountants, and lawyers who don’t have a clue what the business actually does and need simplified reports translated to banker-speak and legalese? No problem!

Of course, the real question is to ask not what Gen-AI can do for you, but what you can do without Gen-AI, because the doctor would argue that you don’t need Gen-AI for any of this, and that the non-Gen-AI solutions are better and more economical!

Let’s take these valid uses one-by-one:

1. You could hire a virtual admin assistant / AP clerk in the Philippines, Thailand, or some other developing country with okay English skills to do that for 1K a month!
Furthermore, this full-time worker could also respond to other, more generic, requests as well, and do some meaningful work, such as properly transcribing hand-written invoices (or correcting OCR errors), etc. And give your employees the comfort of a real, dependable human for a fraction of the cost of that overpriced AI bullsh!t they are trying to shove down your throat.

2. Classic “AI” that works on key phrases in the hands of the admin assistant will work just as well.
It will find the most appropriate data, and then the admin can verify that the question can be answered by the paragraph(s) included in the RFP, or that the sales person actually read the RFP and is asking for a clarification on the text, or a more detailed specification. The sales person gets the desired response the first time, no time is wasted, and you haven’t p!ssed off the sales person by forcing him to interact with an artificially idiotic bot.
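
The key-phrase matching described above needs nothing more than word overlap. A minimal sketch, with illustrative RFP sections and stopword list (the section names and texts are made up for the example): score each section of the RFP against the incoming question and surface the best match for the admin to verify before replying. Nothing is generated, so nothing can be hallucinated — the answer is always verbatim RFP text.

```python
# Sketch of classic key-phrase matching for RFP question triage.
# All section names and texts below are illustrative.

STOPWORDS = {"the", "a", "an", "is", "of", "for", "what", "in", "to", "and"}

def keywords(text: str) -> set[str]:
    """Lowercased content words, punctuation stripped, stopwords removed."""
    return {w.strip("?.,").lower() for w in text.split()} - STOPWORDS

def best_section(question: str, rfp_sections: dict[str, str]) -> str:
    """Return the RFP section with the largest keyword overlap."""
    scores = {
        name: len(keywords(question) & keywords(body))
        for name, body in rfp_sections.items()
    }
    return max(scores, key=scores.get)  # admin verifies before sending

rfp = {
    "3.1 Delivery": "Delivery is required within 30 days of purchase order issue.",
    "4.2 Payment": "Payment terms are net 60 from receipt of a correct invoice.",
}
print(best_section("What are the payment terms?", rfp))  # prints: 4.2 Payment
```

In practice you would use a proper retrieval index rather than raw overlap, but the principle is the same: deterministic lookup plus human verification, not generation.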

3. When they said the best things in life are free, they weren’t referring to vendor RFPs.
In fact, those free RFPs and spec sheets will be the most expensive documents you ever handle. Every single one was designed to lock you into the vendor’s solution because every single one focussed not on what a customer needed, but the capabilities and, most importantly, features that were most unique to the vendor. So if you use those RFPs and sheets, you will end up selecting that vendor, be that vendor right, or wrong, for you. The best RFPs and spec sheets are the ones created by you, or at least an independent consultant or analyst working in your best interest. No AI can do this — only an intelligent human that can do a proper needs, platform, and gap analysis and translate that into proper requirements.

4. Okay, you need AI for this … but … traditional, now classic, AI could do that quite well.
Modern Gen-AI doesn’t do any better, and the amount of human verified documents and data you need to sufficiently train the new LLMs to be as accurate as traditional, now classic, AI, is more than all but a handful of organizations have. So you’re going to pay more (both for the tech and the compute time) to get less. Why? In what world does that make sense?

5. Okay, you need NLP at a minimum for this, but you don’t need more. And you barely need AI.
All you have to do is use classical NLP to identify clause types, do weighted comparisons to standard clauses, analyze sentence structures to gauge intent, and identify clauses that are missing, deviating from standard, or not present in standard contracts. And, as per our last use, it does this just as well without needing nearly as much data to effectively train. Leading contract analytics vendors have been doing this for over a decade.
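
The weighted-comparison step can be sketched with nothing fancier than bag-of-words cosine similarity. This is a toy illustration, not a contract analytics product: the standard clauses and the 0.5 similarity threshold are assumed values, and real systems would use richer clause typing and tuned weights.

```python
# Sketch: flag draft clauses that deviate from a standard-clause library,
# and standard clauses that are missing entirely. Illustrative data only.
import math
from collections import Counter

def cosine(a: str, b: str) -> float:
    """Bag-of-words cosine similarity between two clause texts."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    norm = math.sqrt(sum(v * v for v in va.values())) * \
           math.sqrt(sum(v * v for v in vb.values()))
    return dot / norm if norm else 0.0

STANDARD = {  # assumed standard-clause library, (co-)drafted by Procurement + Legal
    "liability": "each party's liability is capped at fees paid in the prior twelve months",
    "payment": "payment is due net sixty days after acceptance of the goods",
}

def review(draft_clauses: list[str], threshold: float = 0.5):
    """Return (deviating clauses, names of standard clauses with no match)."""
    flagged, matched = [], set()
    for clause in draft_clauses:
        name, score = max(
            ((n, cosine(clause, s)) for n, s in STANDARD.items()),
            key=lambda t: t[1],
        )
        if score >= threshold:
            matched.add(name)
        else:
            flagged.append(clause)  # unusual / deviating clause for human review
    return flagged, set(STANDARD) - matched

flagged, missing = review([
    "payment is due net sixty days after acceptance of the goods",
    "buyer accepts all liability risk and pays before shipment",
])
print(flagged, missing)
```

Here the one-sided “buyer accepts all liability risk” clause scores far below any standard clause, so it is flagged, and no acceptable liability clause is matched, so “liability” is reported missing — all with fully deterministic arithmetic and no training corpus at all.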

6. Even first generation e-Procurement platforms could encode rules for auto-approval, auto-denial, and conditional workflows.
In other words, you just need the rules-based automation that we’ve had for decades. And every e-Procurement, Catalog Management, and Tail Spend application does this.
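
A minimal sketch of that rules-based automation, with made-up policy limits (the category list and the 5,000 cap are illustrative assumptions, not anyone's real policy):

```python
# Sketch of decades-old rules-based policy automation: deterministic checks,
# no model, no probabilities, the same answer every time. Limits are illustrative.

POLICY = {"max_amount": 5000, "approved_categories": {"office", "it", "travel"}}

def evaluate(request: dict) -> str:
    """Apply policy rules in order; first violated rule denies the request."""
    if request["category"] not in POLICY["approved_categories"]:
        return "denied: category not in policy"
    if request["amount"] > POLICY["max_amount"]:
        return "denied: exceeds spend limit"
    return "approved"

print(evaluate({"category": "it", "amount": 1200}))        # prints: approved
print(evaluate({"category": "yachts", "amount": 900000}))  # prints: denied: category not in policy
```

The same request always produces the same decision, and every denial comes with the exact rule that triggered it — two properties no generative model can guarantee.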

7. Any semi-modern reporting or analytics platform allows its templates to be customized to any level of detail or summary desired.
And if you have a modern spend analysis platform, this is super easy. Furthermore, if your C-Suite is filled entirely with accountants, bankers, and lawyers who don’t understand what the business does, because they fired all the STEM professionals who understood what the business actually does, then your organization has a much bigger problem than reporting.

In other words, there isn’t a single use case where you actually need Gen-AI, as traditional approaches not only get the job done in each of these situations, but traditional approaches do it better, cheaper, and more reliably with zero chance of hallucination.

At the end of the day you want a real solution that solves a real problem. And the best way to identify such a solution is to remember that Gen-AI is really short for GENerated Artificial Idiocy. So if you want a real solution that solves a real problem, simply avoid any solution that puts AI first. This way you won’t get a “solution” that is:

  • Artificial Idiocy enabled
  • Artificial Idiocy backed
  • Artificial Idiocy enhanced
  • Artificial Idiocy driven

As Sarah Scudder noted on “AI is a buzzword“, AI is a delivery mechanism which, scientifically speaking, is a method by which the virus spreads itself. This is probably the best non-technical description of what AI is ever! And the best explanation of why you should never trust AI!

Valid Uses for Gen-AI!

the doctor has been told he’s too hard on Gen-AI. He doesn’t think he’s hard enough, but there are those who keep insisting that Gen-AI has some valid uses. And they’re right, it has some. Not the uses that you need it for, but actual uses nonetheless.

So today, in a rare moment of weakness, he’s going to acknowledge those uses. Soak it in. He may never do so again.

1. Ensure your insurance / bank only covers and lends to people you like.
One of the great things about Gen-AI is that almost all models are biased, and it’s really easy to train them to be as biased as you want. Want your health insurance to accept only young people between 25 and 40 with no family history or indicators of any illness whatsoever? No problem. Don’t want your bank to approve a loan to anyone who isn’t an all-American white Christian? No problem. Race-biased Gen-AI to the rescue!

2. Have it make up a new story for your child who constantly wants new stories every night.
Train it on thousands of kid-suitable stories and it will make up a new story every night (with a high probability that most of those stories will be safe and suitable; chances are only a few will scare them into therapy). Your kid will be happy (at least until they get scared into therapy) and your brain will get the rest it needs at night (so it can start worrying about how it’s going to pay for that therapy). Put those constant hallucinations to use. It’s your own personal Scheherazade, with just a little bit of Grimm and occasionally a bit of King (Stephen).

3. Incite the mob.
Need a mob behind you to get your cause onto the front page? Need a mob to cover your theft attempt at a corporate headquarters above a luxury department store? Maybe even help you overthrow a capitol? No sweat! Program that Gen-AI to be as hateful and inflammatory as possible and have it pump out fake-news propaganda 24/7 until you have the mob you need on your side, and there you go!

4. Scam the Scammers. (Or at least keep them busy and out of your inbox.)
Most scammers will keep trying as long as someone is responding to them (and eating up their time). Guess what AI has a lot of — GPU time. Most models have 10,000 (or more) GPUs at their disposal. That’s a lot of scammers an AI can tie up for you. (Especially if they can’t differentiate easy pickings Grandpa Joe from a very agreeable but completely broke GrandpAI Joe.)

5. Take down a rival’s network.
Simply train in some sleeper behaviour set to trigger a few months into the future, and once the competition is done with their tests and trusts it … poof … down goes their network.

And if you want to be truly evil, you can always use Gen-AI to

6. Ensure your terror campaign is as lethal as possible.
We’ve read the stories of how even recent tests of self-driving systems decided to ignore the shadows of what were actually people RIGHT in front of them and drive into those shadows at full speed. A few minor alterations and instead of avoiding people-like figures and shadows, it will be the murderous trolley that tries to kill as many as possible. And who says you have to limit it to trolleys? Use it to program bomb-bearing drones and it will seek out the densest crowd possible. And so on. And yes, we went to a very dark place, but just where do you think AI is taking us? There are currently NO bright outcomes. Ponder that before you go singing its praises.

Of course, if you just want to be a little chaotic around the house, and only take the first step down the dark path, just hook up its hallucinatory outputs to a random direction generator and use it to:

7. Power your Roomba.
Your pets will think it’s truly possessed!

So there you go — 7 valid uses of Gen-AI. You decide how many of them you want to use.