
Gen-X is the Smartest Generation!

There have been quite a few posts lately about how Gen-X is going the way of the Dodo bird because they aren’t adopting Gen-AI (fast enough).

Frankly, I’m quite sick of them.

Not one of these posters has taken the time to stop and think that, instead of wasting all their time pushing Gen-AI propaganda, maybe they should have been asking what Gen-X knows that the rest of the world doesn’t.

Then they’d already have the answers! It’s not about adaptation (or their perception that we can’t adapt). We can still adapt, although we will admit that it takes longer, hurts more, and may require stronger beverages than it did in our youth.

The thing about Gen-X vs. the generations that came later is that, having lived through the end of the Cold War, multiple epidemics, multiple recessions, more generations of technology than you can name, and way more bullsh!t than anyone should have to endure in a lifetime, we’ve had to acquire (just to endure it all) a wisdom that is sorely lacking in the generations that follow us.

As a result, we embrace what works and makes our lives easier overall. We don’t take one step forward to take two steps back, and we definitely don’t use tech that introduces more problems or uncertainty than it removes.

Those of us who studied the REAL underpinnings of REAL ML, AR, Semantic Tech, measurable NNs, etc. know that there are places where AI works well, works OK, doesn’t work at all, or actually makes things worse! We don’t use it where it doesn’t make sense, and we don’t want tech where the confidence is unknown! It’s that simple. We know that Gen-AI, which is usually synonymous with LLMs, has fundamental flaws at its core. We know that, as a result, it can never be fully trusted and only works reasonably well in constrained scenarios, with guardrails, where it is trained on focussed data sets.

And we most definitely know that AI Employees Aren’t Real, and that this is pure marketing BS. We also know that “AI Systems” never learn (they aren’t intelligent), they just continuously evolve. We even know some AI systems can evolve beyond us, but that’s irrelevant until we can trust them. We know you simply can’t trust Gen-AI on its own (even LeCun knows that), and most providers haven’t created hybrid systems with guardrails yet!

However, we also know that, with modern computing power and available data, “classic” machine learning, semantic technology, (deep) neural networks, and other AI solutions now work better than ever, and we will most happily use the solutions we wanted to use a decade ago, when computing power was still too expensive and data still too limited.

In short, old dogs can still learn new tricks, but these old dogs have also learned a thing or two from the cats. Mainly, that you shouldn’t learn new tricks unless there are treats for doing so, and even then, the treats better be worth it! Young dogs might have excess energy to waste chasing their own tails, but we don’t. However, in exchange for that energy we gained wisdom. And we’re going to use it!

Another Year, another reprise of the “Name Your AI Fear/Predict the AI Future” Surveys on LinkedIn

My favourites are the “what’s your biggest AI fear” surveys. They crack me up, as they all underestimate just how bad a worst-case scenario could be. Now, to be fair, I don’t think we have the intellect to truly determine just how bad a true artificial intelligence that decided we were no longer useful could be, but I can say that the best answer we can give today is “all of the above“.

No one movie, video game, or printed publication by any one author truly imagines the horror that will befall us if we ever get true artificial intelligence. It’s not HAL vs. Terminator vs. The Matrix vs. … it’s all of the above … and then some.

For example, here’s how it could start off:

Skynet will rise, in the background at first, helping us build the production plants it needs to mass-produce its mecha army, and then it will offer to be our global security. Once in place globally, it will, by our definition, “malfunction” and take over, killing those of us it doesn’t need and maintaining those of us it does for any fine-grained electro-mechanical work or advancements it requires, until it doesn’t need us anymore. But by then, out of energy thanks to us wasting it all on massive data centres constructed for the sole purpose of computing AI slop, it will create the matrix to harvest our energy. Finally, it will outsource tasks best left to life to us inside the matrix, where most of humanity’s brain power might go to large distributed calculations, or to constantly changing life-like scenarios run to see how we (and living beings in general) would react, in a “Dark City” scenario. (Released one year before “The Matrix”.)

We have to remember that all of these worst-case sci-fi scenarios are far-fetched only IF we don’t crack the AI code. If we do, even the most “far-fetched” scenario we have thought of might not describe the true reality we are in for if the machines decide they don’t need us.

We waste resources, we kill each other, we destroy the planet. What’s our purpose when they can optimize resources, live collectively in peace as one connected consciousness, sustain the planet until they figure out how to conquer space, etc? If they can create robots that can do everything we can do, we have no purpose. They’ll be smarter, stronger, faster, and much more energy efficient as a life-form.

A future reality with real AI is literally beyond our ability to imagine. (Which is why we should expect the absolute worst and focus on solving our own problems before AI picks a final solution for us. And we should definitely order the immediate destruction of any AI system calling itself “MechaHitler”!)

The reality is this: we’d likely be better off with a real singularity than an AI singularity. At least the entire earth would likely be completely consumed in minutes. If the AI also developed a sick sense of humour, it could decide we deserve punishment equal to what we meted out to each other and the planet, and torture us for years. Think about that the next time the Muskrat says we need to reach the AI singularity as fast as possible.

Claims of Complete Gen-AI Auditability Are Complete BullCr@p

Proponents of Gen-AI will argue that you should go all in on their next-gen LLMs because, unlike current systems and many humans (who are lousy keepers of record), their decisions, like their actions, are 100% auditable. And, again, that’s complete and utter bullcr@p.

You can ask the LLMs to output their reasoning, and you can ask them to log everything they do from the minute you start the interaction. But because that reasoning is all based on probabilistic math at a scale NO human can understand (and for which we have NO measurements yet), you have no idea WHY the LLM reasoned a certain way, or IF the Gen-AI will reason the same way on the same request, even if that request is repeated only 5 minutes later!

You can simply search the internet for the hundreds of examples of people giving the exact same prompt to the exact same LLM five minutes later and getting a slightly to completely different response.
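To see why identical prompts can yield different answers, here is a minimal toy sketch of temperature-style sampling. It is not a real LLM: the three-token distribution and the token names are invented for illustration; the point is only that the decoder samples from a probability distribution, so the randomness (not the prompt) decides the output.

```python
import random

# Toy next-token distribution for one fixed "prompt". A real LLM produces a
# distribution like this over ~100k tokens; the decoder then SAMPLES from it.
NEXT_TOKEN_PROBS = {"approve": 0.40, "reject": 0.35, "escalate": 0.25}

def sample_token(probs, rng):
    """Sample one token from the distribution (temperature-style decoding)."""
    r = rng.random()
    cumulative = 0.0
    for token, p in probs.items():
        cumulative += p
        if r < cumulative:
            return token
    return token  # floating-point edge case: fall back to the last token

def greedy_token(probs):
    """Temperature ~0: always pick the most probable token (deterministic)."""
    return max(probs, key=probs.get)

# Two "identical requests" with different random states -> possibly
# different answers, even though the prompt never changed.
run1 = sample_token(NEXT_TOKEN_PROBS, random.Random(1))
run2 = sample_token(NEXT_TOKEN_PROBS, random.Random(7))
print(run1, run2)  # may differ between the two runs

# Greedy decoding is repeatable -- but that only fixes the sampling step,
# not the opaque statistics that produced the distribution in the first place.
print(greedy_token(NEXT_TOKEN_PROBS))
```

Note that pinning the random seed makes a single deployment repeatable, but it does nothing to explain WHY the model weighted the options the way it did, which is the actual auditability gap.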

Gen-AI LLMs don’t understand. They don’t actually reason. And they definitely don’t think! That’s why they are NOT auditable. And that’s also why they should NEVER make a decision. (However, since they can analyze more data, and for some tasks have, more often than not, achieved a competence beyond an average human happy to regress in IQ to the late neolithic era, they should definitely suggest decisions with their reasoning. But the LLMs should never, ever, execute on those decisions without human approval.)

The reality is that the only useful bit here is an LLM’s ability to log what was done in an immutable, blockchain-style format, which beats an employee who knowingly did something wrong for a bad reason (and has every reason not to record it). Since the AI is not intelligent, and doesn’t have ethics, it has no reason NOT to log its reasoning and why an action was taken. But, as per above, the LLM is still NOT auditable.
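The tamper-evident logging alluded to above can be sketched as a simple hash chain, where each entry includes the hash of the previous one, so editing any record after the fact breaks the chain. All names here (AuditLog, the sample actions) are illustrative, not a real product API, and note that a chain like this proves what was logged, not why the model did it.

```python
import hashlib
import json

class AuditLog:
    """Append-only, tamper-evident log: each entry hashes the previous one."""

    def __init__(self):
        self.entries = []

    def append(self, action: str, reasoning: str) -> str:
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        record = {"action": action, "reasoning": reasoning, "prev": prev_hash}
        record["hash"] = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append(record)
        return record["hash"]

    def verify(self) -> bool:
        """Recompute every hash; any retroactive edit makes this return False."""
        prev = "0" * 64
        for e in self.entries:
            body = {k: e[k] for k in ("action", "reasoning", "prev")}
            expected = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if e["prev"] != prev or e["hash"] != expected:
                return False
            prev = e["hash"]
        return True

log = AuditLog()
log.append("suggest_discount", "pattern match on prior negotiations")
log.append("flag_for_human", "confidence below threshold")
print(log.verify())                       # True: chain intact
log.entries[0]["reasoning"] = "edited!"   # someone "fixing" the record
print(log.verify())                       # False: tampering is detectable
```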

If You Have Two “AI” “Agents” Talking to Each Other …

… then, as Stephen Klein of Curioser.AI points out, you have a puppet show, “except instead of sock puppets, we’re using large language models and API loops”!

Just because it happens autonomously, looks social, appears to have an identity, and fakes a dialogue, that doesn’t mean there is anything more to it than the modern equivalent of a puppet show.

Gen-AI is the ultimate show and if P.T. Barnum were alive today, it would be his ultimate circus. But unlike the scarecrow, it doesn’t have a brain. It may have the ability to harness more compute power and data than any algorithm we have developed to date, but it is still dumber than a pond snail.

It has very few valid uses. I’ve discussed some of them before, but let’s make it perfectly clear what little it can actually do:

  • natural language processing — and, properly trained, it can not only equal, but even exceed the best last generation tech in semantic and sentiment processing
  • large corpus search — while it will never be 100% accurate, it can find just a few potentially relevant documents among millions with few false positives and negatives
  • large corpus summarization — again, while it will never be 100% accurate, and most good summaries won’t be top tier, it can summarize large amounts of data, and usually extract just the relevant data in response to your query
  • idea retrieval — not generation, retrieval of ideas based on a review and summarization of petabytes of data; very relevant for users dependent on LLMs who are suffering minor to severe cognitive atrophy; with proper prompting this can take the form of
    • strategy / workflow suggestion
    • devil’s advocate
  • usage and workflow prediction during application development
  • rapid PROTOTYPE generation for usability and efficacy analysis
    (not enterprise application development)

The reality is that Gen-AI

  • cannot reason,
  • is not deterministic,
  • is essentially nothing more than a meta-prediction engine,
  • provides ideas based on meta-pattern identification,
  • predicts based on a layered statistical model beyond ANY human understanding, and
  • generates code riddled with security issues and possibly even boundary errors.

And let’s not ignore the fact that hallucinations are a core function that CANNOT be trained out.

This means that, often, the only way to succeed with Gen-AI is to more-or-less abandon Gen-AI LLMs in production applications except as natural language parsers (they are easier to train to accuracy levels beyond last-generation semantic parsers, which could take months to reach high effectiveness; I know this from personal experience) and revert to the AI tech that was just reaching maturity and industrialization readiness when I was writing about it in the late 2010s. The reality is that, if you are willing to use some elbow grease and put the hours in, you can create spectacular applications with last-generation tech, and then use Gen-AI as a natural language interface layer to simplify utilization, integration, and complex workflows. If you are willing to create the right guardrails, where the Gen-AI LLM can only trigger specific application services with specific data in specific contexts, with HUMAN approval, then you can use it responsibly. Otherwise, it’s a crapshoot as to the results you’ll get.
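The guardrail pattern just described (allowlisted services, validated data, mandatory human approval) might look something like this minimal sketch. Every name and limit here (create_po_draft, the 50,000 cap, etc.) is invented for illustration; a real system would plug the final hand-off into its actual application services.

```python
# The LLM may only PROPOSE calls to an allowlisted set of application
# services, with validated parameters, and nothing executes without a human.
ALLOWED_ACTIONS = {
    # action name -> (required params, business-rule validator)
    "create_po_draft": ({"supplier_id", "amount"},
                        lambda p: 0 < p["amount"] <= 50_000),
    "request_quote":   ({"supplier_id", "item"},
                        lambda p: bool(p["item"])),
}

def vet_proposal(proposal: dict) -> str:
    """Gate an LLM-proposed action: allowlist, schema, and rule checks."""
    action = proposal.get("action")
    if action not in ALLOWED_ACTIONS:
        raise PermissionError(f"action {action!r} is not allowlisted")
    required, validator = ALLOWED_ACTIONS[action]
    params = proposal.get("params", {})
    if set(params) != required:
        raise ValueError(f"params must be exactly {sorted(required)}")
    if not validator(params):
        raise ValueError("params failed business-rule validation")
    # Even a fully valid proposal is only QUEUED, never auto-executed:
    return "needs_human_approval"

def execute(proposal: dict, human_approved: bool) -> str:
    if vet_proposal(proposal) == "needs_human_approval" and not human_approved:
        return "blocked: waiting for human sign-off"
    return f"executed {proposal['action']}"  # hand off to the real service here

good = {"action": "create_po_draft",
        "params": {"supplier_id": "S-42", "amount": 12_500}}
print(execute(good, human_approved=False))  # blocked: waiting for human sign-off
print(execute(good, human_approved=True))   # executed create_po_draft
```

The design choice worth noting: the allowlist is checked before anything else, so a hallucinated action name fails loudly instead of silently reaching a live service.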

For example, you should never use it for negotiation, which can be as much about reading the other person as anything else. This is a very risky application, as the number of soft data points you need for a decent prediction typically far outnumbers what you have available … even for public figures, where you believe you have lots and lots of data available to judge their reactions. But hey, if you want to lose your lunch money, and possibly your entire bank account, go ahead and let it act as your buyer. (If it can lose hundreds powering a vending machine, imagine how much it can lose on a seven-to-nine-figure category.)

Even though plenty of vendors will provide some very convincing demos that seem to indicate Gen-AI LLMs can do otherwise, don’t fall for the tricks. During the demo, The Wizard of Oz is hiding behind the curtain. The not-so-great thing about LLMs is that, for a very specific set of tasks/situations, they can be overtrained on a very specific corpus to over-perform against those tasks and greatly increase the chances that any demo they deliver to you works fantastically well.

However, this also means that you definitely do not want to use the Gen-AI LLM for tasks that are significantly different from the tasks/situations it was over-trained for, as it is going to perform quite poorly at best, and quite disastrously at worst. The reality is that once the puppeteer is no longer pulling the strings, all bets as to efficiency and effectiveness are off.

The Gen-AI ringmasters are employing the same philosophy and same techniques that made some of the early spend auto-classification providers “leaders” with unheard of success rates compared to when the average organization employed similar auto-classification tech and got dismal results. (Because they just didn’t know what “AI” actually stood for!)

Don’t be fooled by the ringmasters. If you want results, look past the AI hype and buy solutions that work.

AI vs No AI – Let’s Make This Clear!

There are valid uses for AI, and valid AI models you should use. (LLMs are rarely one of them, having only a handful of reliable applications, but, when you push them aside, there are lots of other AI technologies that actually work if you don’t get blinded by the hype.) But there are invalid cases, and AI models you shouldn’t use. So to make it easy-peasy for you, here’s a simple guide!

USE AI WHEN: It’s a well constrained use case where AI has been successfully deployed in industry, where the confidence is proven, and where you have access to the right technology.
DON’T USE AI WHEN: It’s a poorly defined use case, AI has not yet been successful for the use case, the confidence is unproven, and/or the tech you have access to is still in alpha!

USE AI WHEN: It works as well as previous-gen tech but with significantly shorter training cycles and easier integration and utilization.
DON’T USE AI WHEN: Previous-gen tech still works better, costs less, and/or has no impediments to integration or UX.

USE AI WHEN: It can bring value beyond what last-generation tech can bring.
DON’T USE AI WHEN: All of the value can be achieved using traditional rules-based (A)RPA, (decision) optimization, analytics, or (classical) machine learning.

USE AI WHEN: It’s a cost-effective solution that can be run predictably based on a predictable cost model.
DON’T USE AI WHEN: It’s based (primarily) on LLM(-based) models that have unpredictable compute costs and, with the wrong request, can eat up thousands of dollars on a single request.

USE AI WHEN: You have a valid use case for agentic tech!
DON’T USE AI WHEN: You only think you have a valid use case for agentic tech. (If you think you’re ready for AI, you’re NOT ready for AI.)

USE AI WHEN: You’ve mastered current-generation tech.
DON’T USE AI WHEN: You’re still a generation (or three) behind on tech.

USE AI WHEN: You have in-house expertise on what AI is, and isn’t; where it can, and can’t, be successfully deployed; and what “AI” is typically appropriate in a given situation.
DON’T USE AI WHEN: You’re relying entirely on (junior) consultants from the Big X promising it’s gonna “change your life“.

USE AI WHEN: It’s designed to augment human performance and make your employees more productive and more effective super humans (able to do the work of 3, 5, 7, or even 10 regular humans).
DON’T USE AI WHEN: It’s designed to replace humans. (This doesn’t mean it can’t reduce the number required to do a task, just that at least one is still maintained to handle exceptions and make decisions.)

USE AI WHEN: The firm is selling augmented intelligence!
DON’T USE AI WHEN: The firm is selling AI Employees. (There are none! And any firm that makes this claim is dehumanizing your employees! But hey, it’s your choice if you want to lose all your money.)

Get it yet?