
Why Do Outsourcing and AI Go So Wrong?

In a recent post on how We Need to Hasten Onshoring and Nearshoring, Jon The Revelator was inspired to ask the following question:

even though outsourcing and AI have merit when properly implemented, why do things go so wrong?

This was after noting, in another post, that we have suffered year-by-year, decade-by-decade disappointment when 80% (and even higher) of initiatives fail to achieve the expected outcome.

Because in both cases [and this assumes the organization is implementing real, classic, traditional AI for a tried-and-true use case and not modern Gen(erative) A(rtificial) I(diocy)], things have gone wrong, and sometimes terribly wrong, on a regular basis.

So, the doctor answered.

Fundamentally, there are two reasons that things consistently go wrong.

The first reason is the same reason things go so wrong when you put an accountant in charge of a major aerospace company or a lawyer in charge of a major hobby gaming company (when the first has zero understanding of aerospace engineering and the second of what games are and what fans want from them).

Like the accountant and the lawyer, the people leading these outsourcing and AI initiatives don’t understand their organizational and stakeholder/user needs!

The second major reason is that they don’t understand what these “solutions” actually do and how to properly qualify, select, and implement them. And, most importantly, what to realistically expect from them … and when.

A GPO is not a GPO is not a GPO: these Group Purchasing Organizations specialize by industry and region, and they make their impact by category and usage. They are not everything for everyone.

AI is not AI is not AI (unless it’s all Gen-AI, then it’s all bullcr@p). Until Gen-AI, the doctor was promoting ALL Advanced Sourcing Tech, including properly designed, implemented, and tested AI, because the right AI was as close to a miracle as you’ll get. (And the wrong AI will bankrupt you.) Now, any AI post-2020 is suspect to the nth degree.

Simply stated, the failures are because they all think they can press the big red easy button and throw it over the wall. But you can’t manage what you don’t understand! And until the world remembers this, these failures will continue to happen on a consistent basis.

And, as organizations continue to press that Gen-AI powered “easy” button while outsourcing more and more of their critical operations, expect to see a resurgence of the big supply chain disasters, like the ones we saw in the 90s and the 00s (including the ones which wiped out Billion $ companies). Hard to believe that only nine years ago the doctor was worried about companies relying on outdated ERPs ending up in the supply chain disaster record books, given how many of the disasters were the result of a big-bang ERP implementation. However, the risks associated with Gen-AI make ERP risks look like training wheel risks!

As a result, it’s more critical that you select the right provider and/or the right solution if you want a decent chance of success. The worst part of all this is that while there have been spectacular failures, most of the failures were not the result of selecting a bad provider or a bad solution, but the result of selecting the wrong provider or the wrong solution for you. (Remember, provider salespeople are not incentivized to qualify clients for appropriateness; they are incentivized to sell. It’s your job to qualify them for you. In other words, even though there are bad providers and bad solutions out there, they are considerably fewer than there were in the days when Silicon Snake Oil was all the rage.) In the majority of failures, primarily those that weren’t spectacular, the providers were good providers with good people, but when the solution they offer is a square peg for your smaller round hole, what should be expected?