Why Are We Inundated By AI Slop?

And I don’t just mean the slop produced directly by AI, which we should all know by now is 100% slop, but all of the human and “expert” guidance, produced or co-produced by real people, that isn’t much better!

In one way the answer is simple: there is a considerable lack of knowledge and understanding about AI, even among the firms and practitioners who are touted as, or claim to be, “the experts”. There is a failure both to realize this and to admit it.

But let’s back up. Recently, THE REVELATOR, in response to a Gartner post (screenshots below, because Gartner has a habit of deleting posts where THE REVELATOR asks hard questions or points out major issues), asked for my “thoughts” on an infographic that referenced a two-year-old paper. A two-year-old paper that didn’t even mention a number of critical concepts that should have been discussed in reference to the AI capability and tooling breakdown the infographic presented, and all but one of those concepts should have been mentioned in any serious evaluation of AI technology at the time.

My thoughts on the matter would be obvious to anyone who has read more than a handful of my articles, but I decided to step back and assume the real question was not “is this bad” but “why does this keep happening”. Why do Gartner, and almost every other analyst and consulting firm (because it’s not just Gartner, so they shouldn’t be singled out), keep producing content that just doesn’t cut it? Content that doesn’t address the core issues, outline the challenges, discuss the plethora of failures (an 88% tech project failure rate in the last published study, with indications it could now be as high as 92% in AI), or provide any deep understanding of AI technology and how to differentiate between offerings?

The reason is two-fold. At best, the big firms have only a handful of employees with a real understanding of the technology, but they also have:

  1. 100 times as many analysts and consultants taking advice on the matter from vendors (who, as we have already told you, have lured the big analyst firms astray) and from clients who know even less, and this is the workforce powering
  2. the relentless marketing machine (powered by AI content writers) that believes it has to pump out multiple articles a day to stay relevant (even though not one of those articles contains an original thought, insight, or suggestion on how to make better use of this technology, because all AI bots can do is regurgitate someone else’s ideas and content).

The reality is that very few people understand advanced technology, especially new (or recently sexy) advanced technology. To truly understand this technology, you need the equivalent of a PhD — either years studying it in an academic environment or the equivalent number of years studying it in R&D labs or proof-of-concept implementation pilots.

A few years of “prompt engineering” an LLM, or of configuring pre-built scikit-learn models that “work the majority of the time for the use cases they tested”, doesn’t cut it. Not even close!

You need to understand the core algorithms and the fundamental mathematics that underlie them, and that’s not easy. Even classical curve fitting, nearest neighbor, clustering, regression, and knowledge graphs can be much more intricate than you think. The complexity intensifies when you migrate to multi-layer (feedback) (deep) neural networks, semantic technology built on ML(F)(D)NNs, and now LLMs. LLMs don’t just use very advanced statistical processing to map an input of a fixed type to an output in a fixed set (which can be computed with mathematical confidence); they map an arbitrary input to a generated output using layered feedback statistical calculations on parts of the input, statistically stitched together (like Frankenstein’s monster, but worse) to make parts of the output. That means hallucinations are a core feature of these platforms (as is behavior that is much, much worse). Furthermore, if you’re trying to put it all together, then unless you understand the limitations in the interplay between different algorithms and models … good luck. (And unless you understand the underlying mathematical models, their strengths, and their limitations, good luck with that too!)
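
The “statistically stitched together” point can be seen in a deliberately tiny sketch. This is not any real LLM’s architecture (real models condition on the whole context with deep networks), and every token and probability below is invented for illustration, but the autoregressive sampling loop is the same in spirit: each step asks only “what is statistically likely next”, never “is this true”.

```python
import random

# Toy bigram "language model": each token maps to candidate next tokens
# with probabilities. A stand-in for an LLM's decoder; the point is the
# loop, which samples one token at a time from a learned distribution.
BIGRAMS = {
    "the":     [("capital", 0.5), ("moon", 0.5)],
    "capital": [("of", 1.0)],
    "of":      [("France", 0.6), ("Mars", 0.4)],  # "Mars" is statistically
    "France":  [("is", 1.0)],                     # plausible, factually absurd
    "Mars":    [("is", 1.0)],
    "is":      [("Paris", 0.7), ("Olympus", 0.3)],
    "moon":    [("is", 1.0)],
}

def sample_next(token, rng):
    """Sample the next token from the model's conditional distribution."""
    r, cum = rng.random(), 0.0
    for nxt, p in BIGRAMS[token]:
        cum += p
        if r < cum:
            return nxt
    return BIGRAMS[token][-1][0]

def generate(start, n_tokens, seed):
    """Autoregressive generation: each output token is stitched onto the
    last by a purely statistical choice. Truth never enters the loop."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(n_tokens):
        if out[-1] not in BIGRAMS:
            break
        out.append(sample_next(out[-1], rng))
    return " ".join(out)

print(generate("the", 4, seed=1))
```

With nonzero probability this model emits “the capital of Mars is …”: not a bug in the sampler, but the sampler working exactly as designed, which is the structural sense in which hallucination is a core feature.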

And this isn’t easy, especially when you need to start asking questions about computability (and decidability).

To put this in perspective: I have an earned PhD in Computer Science (specializing in data structures and computational geometry, but also including study of late-90s “AI”: ML, expert systems, and neural networks). When you earn one of these degrees, if you don’t wimp out and try to stick to coding or “software engineering”, and instead take all of the logic and theory courses (cross-listed with Mathematics), then, at least when I studied, you studied the classics: fundamental algorithms, automata, P vs NP, computability, and decidability. If you do well in these advanced courses, you leave with the nagging feeling that you still don’t really understand what you studied (and were tested on), and you don’t! For example, it’s not just P vs NP; it’s P vs NP-Hard vs NP-Complete. And P isn’t always practical P, because if the running time is n^8, well, that might as well be NP-Hard for practical purposes! And categorization in NP is way harder in practice than it is in theory. And advanced algorithms often perform no better than stupid simple ones, and it takes years to “see” why. And so on.
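
To make the n^8 point concrete, a back-of-the-envelope calculation (assuming, purely for illustration, a machine performing 10^9 simple operations per second) shows that this “polynomial” algorithm stops being usable long before the input gets large:

```python
# Why an O(n^8) algorithm is "polynomial time" in theory and intractable
# in practice. Assumption (for illustration only): the machine executes
# 1e9 simple operations per second.
OPS_PER_SEC = 1e9
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

def years_for_n8(n):
    """Rough wall-clock time, in years, of an n^8-step computation."""
    return (n ** 8) / OPS_PER_SEC / SECONDS_PER_YEAR

for n in (10, 100, 1000):
    print(f"n = {n:>4}: about {years_for_n8(n):.3g} years")
```

At n = 10 the run is a fraction of a second; at n = 100 it is already months; at n = 1000 it is tens of millions of years. Polynomial, yes; practical, no.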

It takes years to get a grip on and really understand the fundamentals, which is what you need in order to grasp what you can and can’t do with advanced algorithms in the fields of optimization, predictive analytics, and AI. Each of those fields takes additional years of study, research, development, and implementation experience to understand what it can and can’t do, and to evaluate new developments from technical papers, not from marketing BS and fairy tales woven by master storytellers who would leave P.T. Barnum in awe.

Script kiddies, “prompt patsies” (they are not prompt engineers; that is utter BS), consultants, and analysts with no formal background in CS or the appropriate areas of STEM, and with limited experience beyond installing someone else’s software and making a few parametric modifications, don’t understand this. Not even close! (And they don’t even have the background to understand where their understanding is [more] limited!) Yet this is what most of the firms are asking of their consultants and analysts every day, which is why we get so much AI slop that completely misses the point.

You have too many people without the deep background and experience being told that everything they do has to be “AI” (even if they have no clue what it means) because of all of the funding being poured into it, too many more “influencers” (or should I say silicon snake oil peddlers) trying to take advantage of the confusion, not enough deep understanding, and almost no one willing to cut through the noise and say “wait a minute; the AI they are selling is not the AI you are looking for”.

That, and everyone forgetting, as happens every hype cycle, that context matters. But that’s another article, because context doesn’t matter if you don’t know what you’re doing.


The original post of joy and pain.