Category Archives: AI

This Should Be Obvious But Expert in the Loop …

… is Human in the Loop. Not another (AI) system in the loop, no matter how specialized that system is or how well it is trained!

The future is Augmented Intelligence, NOT Artificial Intelligence (which doesn’t exist, and won’t any time soon, until brilliant researchers come up with a few more insights that get us closer to understanding

  1. what intelligence actually is and
  2. how to model it.)

The algorithms might be getting more accurate in average use cases, but the illusion of intelligence, no matter how grand, is still NOT intelligence. (And, even worse, The Wizard of Oz has been replaced by a very poor digital facsimile.)

Done right, Augmented Intelligence will still let your organization reduce its non-value-add tactical workforce by 80% to 90%, because the right tools will make the strategic experts 3, 5, 7, or even 10 times as productive and let them oversee all the tactical work that needs to be done using an exception-based approach: every instruction that is given becomes a rule, so the system can automatically deal with the same, and similar, exceptions in a predictable and repeatable fashion should they arise again in the future.
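The exception-based approach described above can be sketched in a few lines. This is a minimal, hypothetical illustration (the class, method names, and sample exception types are all made up for this sketch): the first time an exception appears, an expert resolves it and the resolution is stored as a rule; matching exceptions are then handled automatically.

```python
# Hypothetical sketch of exception-based automation: an expert's first
# decision on an exception type becomes a rule that handles repeats.

class ExceptionRouter:
    def __init__(self):
        self.rules = {}  # exception signature -> stored resolution

    def handle(self, exception_type, payload, expert_decision=None):
        """Return (resolution, handled_automatically)."""
        if exception_type in self.rules:
            # A rule already exists: resolve automatically, no human needed.
            return self.rules[exception_type], True
        if expert_decision is None:
            raise LookupError(f"no rule for {exception_type}; expert needed")
        # The expert's instruction is recorded as a rule for next time.
        self.rules[exception_type] = expert_decision
        return expert_decision, False

router = ExceptionRouter()
# First occurrence: the expert decides, and the decision is recorded.
decision, auto = router.handle("price_mismatch", {"po": 1001}, "accept if delta < 2%")
# Second occurrence: resolved automatically from the stored rule.
decision2, auto2 = router.handle("price_mismatch", {"po": 1002})
```

A production system would of course match on richer exception signatures than a bare type string, but the loop is the same: every human instruction shrinks the set of exceptions that ever reach a human again.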

Instead of having to oversee a team of tactical grunts who just take up space (because they don’t have the education, experience, or raw capability required to make good strategic decisions, manage projects, and identify value), a strategic expert can instead focus her time on value-centric activities and on training a protege or two who possess the right mix of EQ and TQ to grow into, and take over, her expert role (when she moves on and up).

In the near future, there will be no more bodies in seats just to push bits around, because that’s what software does best. Number crunching and thunking. NOT analyzing strategically and thinking. (I admit most humans don’t do that well either, especially these days, because they are too attracted to the principle of least action and/or enjoying the cognitive decline from ChatGPT, but those willing to practice strategic thinking daily still do it way better than a machine ever will based on our current approaches to AI). [And while there might be fewer of us each year that are willing to think, there are still enough of us to get the job done if you let us select tools that work. Not necessarily AI. Tools that work.]

How You Know Your Education System Is Broken!

Only 40% of employees say they’d be fine NEVER using AI again! (As per a recent Section AI survey of 5,000 white-collar workers reported in the Wall Street Journal and highlighted in a recent post by Stephen Klein, who also noted that the majority of employees say AI saves them only 2 hours or less per week. Furthermore, he mentioned a Workday study which reported that every 10 hours “saved” by AI resulted in 4 hours lost to required error corrections, flawed-output revision, and necessary verification, which means there isn’t much savings at all. [Specifically, for an average employee to actually save 10 hours, they’d have to “save” nearly 17 hours gross, which would take them two months to achieve!])
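The Workday arithmetic above checks out in a few lines: if 4 of every 10 hours “saved” are lost again to corrections and verification, only 60% of gross savings survive, so a 10-hour net gain requires roughly 16–17 gross hours, and at the reported ~2 hours of gross savings per week, that takes about two months.

```python
# If 4 of every 10 "saved" hours are lost to corrections and
# verification, only 60% of gross savings survive as net savings.
loss_rate = 0.4
net_target = 10  # hours of real savings wanted

gross_needed = net_target / (1 - loss_rate)  # hours AI must "save" on paper
print(round(gross_needed, 1))  # ~16.7 gross hours for 10 net hours

# At the reported ~2 hours of gross savings per week:
weeks = gross_needed / 2
print(round(weeks, 1))  # ~8.3 weeks, i.e. about two months
```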

Gen-AI is failing 94% of the time. It’s causing serious cognitive apathy and decreasing our IQs far beyond what Twitter achieved on its introduction (when it reduced our collective attention spans to that of a goldfish). Its direct and indirect costs to run 8 hours a day are often more than the cost of just hiring another person (due to compute requirements 20X to 200X those of a basic Google query, and the extreme amounts of energy and water [for cooling] required on grids that are already stressed and in ecosystems where fresh water is running out).

ChatGPT. Claude. Grok. Rufus. Gemini. Meta. DeepSeek. Perplexity. Copilot. Poe. Le Chat. They’re all over-applied due to over-promises, when they all have fundamental issues (like hallucinations) that cannot be trained out (as the issues are a result of their core design and programming), limited data sets (and now that AIs are being used to generate additional training data, performance is getting worse), limited guidance, and no guardrails.

There’s always been a time and a place for proper AI, but it’s not now, it’s not everywhere the investors losing Billions on OpenAI and its competitors are telling you it is, and it’s not the “AI” they are pushing.

Every time a new advancement in tech comes along, we forget how long it takes to get from prototype to safe-for-unmonitored regular industrial and home use, be it hardware or software. With AI, it’s always been about two decades between a new algorithm being invented and a production-ready system, with known performance, limits, and guardrails, being ready for the mass market. In other words, this tech shouldn’t even be out of the research labs yet! We definitely shouldn’t have every major consultancy trying to push it as the cure-all for every problem throughout your entire enterprise. (Or new start-ups claiming they can offer you AI Employees!)

How many more examples of (silicon) snake oil do we need before we accept there is no panacea for all your ailments — be they physical, mental, or industrial — abandon this current iteration of Gen-AI, and go back to the targeted, mature solutions that were finally ready for prime time (as we finally had enough processing power, data, and research behind us to deploy them with confidence)?

And even if the technology works as much as 12% of the time (per a PwC study in which 12% of 4,454 CEOs surveyed reported both revenue gains and cost reductions), that’s not much of a validation of the technology, especially since those gains and cost reductions could have nothing to do with AI at all (and the 6% pilot-success rate from a recent McKinsey study is a much more reliable metric here).

If you want real success, find an (A)RPA solution that works like it’s AI and buy it, while you wait another decade for this technology to mature to the point where it’s reliable, guarded, and safe for mass-market adoption and widespread application. (Or wait for an AI-enabled SaaS provider to come along who will do the 24/7/365 human monitoring required for you and make its software usable and safe through that monitoring. Because all the current generation of LLM[-powered Agentic AI] tech is doing is increasing the need for human monitoring, not decreasing it.)

Without Human Smarts, There Will Be No (Usable) AI!

And I’m so happy I’m not the only one pushing this theory. Mr. Stephen Klein recently published a great post on The Age of Pretend.

In the post he notes that:

“Everyone assumes AI’s biggest bottleneck is compute. … That assumption is wrong. The real bottleneck … is architecture, specifically, a design decision made in 1945. … The real constraint: the von Neumann bottleneck. Modern computers separate memory and processing. Data has to move back and forth between them. For most software, that’s fine.
For AI, it’s catastrophic.

Some numbers the industry rarely highlights:

  • Accessing off-chip memory consumes ~200× more energy than the computation itself
  • Roughly 80% of Google TPU energy goes to electrical connections, not math
  • A 70-billion-parameter model moves ~140 GB of data just to generate one token”

LET THAT SINK IN. We old-timers remember “640K ought to be enough for anybody”! The Apollo Guidance Computer (the one installed on each Apollo Command Module and Lunar Module during the Apollo missions) had 2K words of core RAM and 36K words of ROM. Even today, unless you have an iPhone 17, your phone probably only has 128 GB of storage. That means a device whose processing power dwarfs most computers we old-timers ever owned can barely hold the data that must be moved to generate ONE token. (Now do you understand why the data center [energy] demands of your Gen-AI chat-bots are destroying the planet? Anyway, we digress …)
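The ~140 GB figure quoted above follows from simple arithmetic, assuming 2-byte (FP16/BF16) weights — an assumption on our part; quantized models move less. Every parameter has to be read from memory to produce each token:

```python
# Back-of-envelope data movement per generated token for a 70B model,
# assuming 2 bytes per parameter (FP16/BF16 weights).
params = 70e9          # 70-billion-parameter model
bytes_per_param = 2    # assumption: 16-bit weights

bytes_per_token = params * bytes_per_param
gb_per_token = bytes_per_token / 1e9
print(gb_per_token)  # 140.0 GB moved per token
```

And that is per token: generating a 500-token answer means shuttling that 140 GB across the memory bus 500 times.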

This means that (Gen-)AI has hit a wall. Computer Architecture supports massive compute at scale, massive storage at scale, but not massive transfers at scale.

So what does this mean?

Do you remember the days of RAM drives? Not only did they speed things up, they kept your machine cooler because, as Stephen noted, it takes less energy to access data in RAM than on disk.

And do you remember the fun of Assembly? (Okay, that’s sarcasm!) Once you learned to maximize register usage (i.e. re-sequencing processing so that you minimized reads from, and writes to, memory), your code got faster still (and machines stayed cooler longer, which was obvious by the lack of noisy fans spinning up).
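The register-maximizing trick above survives in high-level languages too. A minimal sketch (data and names are illustrative): a single pass that keeps running values in local variables — the high-level analogue of registers — instead of traversing the data once per statistic.

```python
data = [3, 1, 4, 1, 5, 9, 2, 6]

# Three passes: the data is read from memory three separate times.
total_3pass, lo_3pass, hi_3pass = sum(data), min(data), max(data)

# One pass: each element is read once; the running values live in
# locals, the high-level analogue of keeping hot values in registers.
total = 0
lo = hi = data[0]
for x in data:
    total += x
    if x < lo:
        lo = x
    elif x > hi:
        hi = x

assert (total, lo, hi) == (total_3pass, lo_3pass, hi_3pass)
```

For eight elements the difference is invisible; for the gigabytes-per-token traffic discussed above, cutting memory trips is the whole game.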

We’ve known about this problem for decades. (Eight decades, to be exact!) It’s too bad today’s students don’t study the basics and understand that it’s not raw processing power that determines computational speed and energy requirements, it’s data scale: whether the data fits in memory or not, whether “significant” chunks fit in onboard GPU memory or not. (And specifically: can you scale the data down enough for the efficiency you require?)

But this is still the key point in Stephen’s article:
“The next major improvements will likely come from smarter algorithms.”

We might need brute force to detect patterns we can’t (yet) see, but the only way to truly advance is to understand those patterns and code optimal, light-weight algorithms that exploit fundamental rules to allow us to process data quickly and efficiently.

Until we figure that out, you’ll never have usable AI (and definitely never REAL AI, as not only will it never be intelligent, it will never, ever, get anywhere close).

Tired of All the Fake AI Experts?

Want to know how to weed them out and make them go away?

Just ask them to define these terms, off the top of their head, on the spot, without looking anything up, using any tools, or accessing any network connected devices (and definitely no Gen-AI LLM access):

  • computability
  • decidability
  • NP-completeness
  • optimization, incl. local optimization vs. global optimization
  • clustering, with at least 3 different examples
  • curve fitting
  • Fourier transform
  • neural network
  • deep neural network
  • transformer
  • ontology
  • semantic analysis
  • sentiment analysis
  • Boolean logic and the theory of logical variables
  • automated reasoning

and if they can’t define every single term with mathematical precision, tell them to f*ck 0ff because they don’t know a damn thing!

You CAN Afford to Wait for AI. But you can’t afford to wait to

  • get your data under control
  • build an infrastructure to allow for greater connectivity between apps within your enterprise and its greater ecosystem
  • update your processes
  • acquire and train the right talent with the knowledge they need to compete in the modern world
  • get digital and implement modern, current-generation technology based on best practices, proven (A)RPA ([Adaptive] Robotic Process Automation), and last-gen “AI” tech (optimization, predictive analytics based on clustering and curve fitting, and point-based neural networks) with proven reliability and mathematically understood confidence where those apps are needed (and not a Gormless AI)
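To show how simple and well-understood that last-gen tech is, here is a closed-form least-squares line fit in plain Python: no GPU, no training run, and an answer whose error properties are mathematically characterized. (The data points are illustrative, roughly y = 2x.)

```python
# Ordinary least-squares fit of y = a + b*x, solved in closed form.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 4.0, 6.2, 7.9, 10.1]  # illustrative data, roughly y = 2x

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# Slope = covariance(x, y) / variance(x); intercept follows from the means.
b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
    sum((x - mean_x) ** 2 for x in xs)
a = mean_y - b * mean_x

print(round(b, 2), round(a, 2))  # slope ~1.99, intercept ~0.09
```

Twenty lines, exact arithmetic, and you know precisely when and why it fails — the opposite of a black-box LLM.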

The reality is that you have to operate as lean and mean as possible. And

  • without good data, you can’t make good decisions
  • without good connectivity, you’re manually re-entering data across systems or missing critical external data you need to make good decisions
  • without good processes, you are inefficient and, if not circling the drain already, about to be
  • without good talent, you are running on fumes at best, your ability to compete is at risk, and you can never improve
  • without modern tech, you are at a continual disadvantage and will continually fall behind

So you can’t wait to

  • institute Master Data Management (MDM)
  • enforce Open APIs in your solutions and acquire integration and orchestration solutions
  • review and modernize your processes where necessary
  • focus on acquiring, training, and retaining top talent
  • modernize your tech to CURRENT-generation proven tech, not experimental HYPE tech

BUT YOU CAN WAIT ON “GEN-AI”. It’s about getting the job done as efficiently and effectively as possible … with a low error rate and no significant risk! 99 times out of 100, you don’t need experimental “AI” to do that. Only the investors who spent millions/billions/trillions on unproven tech and the consultancies who need massive projects to employ bodies do … but that’s not to help you. That’s to recoup their wasted dollars. And that’s NOT your problem.