How You Know Your Education System Is Broken!

Only 40% of employees say they’d be fine NEVER using AI again! (That’s per a Section AI survey of 5,000 white-collar workers reported in the Wall Street Journal, as relayed in a recent post by Stephen Klein, who also noted that the majority of employees say AI saves them only 2 hours or less per week. He further cited a Workday study reporting that for every 10 hours “saved” by AI, 4 hours are lost to required error correction, revision of flawed output, and necessary verification, which means there aren’t much in the way of savings at all. [Specifically, for an average employee to actually net 10 hours, they’d have to save almost 17 gross hours, which would take them roughly two months to accumulate!])
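The arithmetic behind that bracketed claim is easy to check. A quick sketch, assuming (as the cited Workday figure suggests) that 4 of every 10 hours “saved” are lost again, and (per the survey figure above) roughly 2 hours of gross savings per week:

```python
# Per the cited Workday study: every 10 hours "saved" costs 4 hours
# in corrections and verification, so only 60% of gross savings stick.
NET_RATIO = 1 - 4 / 10  # 0.6

def gross_hours_needed(net_target: float) -> float:
    """Gross hours AI must 'save' to actually net net_target hours."""
    return net_target / NET_RATIO

gross = gross_hours_needed(10)  # ~16.7 gross hours to net 10
weeks = gross / 2               # at ~2 gross hours saved per week
print(f"{gross:.1f} gross hours, ~{weeks:.0f} weeks")  # ~8 weeks, i.e. about two months
```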

Gen-AI is failing 94% of the time. It’s causing serious cognitive apathy and decreasing our IQs far beyond what Twitter achieved at its introduction (when it reduced our collective attention spans to that of a goldfish). Its direct and indirect costs to run 8 hours a day are often higher than simply hiring another person (due to compute requirements 20X to 200X those of a basic Google query, and the extreme amounts of energy and water [for cooling] required on grids that are already stressed and in ecosystems where fresh water is running out).

ChatGPT. Claude. Grok. Rufus. Gemini. Meta. DeepSeek. Perplexity. Copilot. Poe. Le Chat. They’re all overapplied due to overpromises, when they all have fundamental issues (like hallucinations) that cannot be trained out (as the issues result from their core design and programming), limited data sets (and now that AIs are being used to generate additional training data, performance is getting worse), limited guidance, and no guardrails.

There’s always been a time and a place for proper AI, but it’s not now, it’s not everywhere the investors losing billions on OpenAI and its competitors are telling you it is, and it’s not the “AI” they are pushing.

Every time a new advancement in tech comes along, we forget how long it takes to get from prototype to safe for unmonitored, regular industrial and home use, be it hardware or software. With AI, it’s always been about two decades between a new algorithm being invented and a production-ready system, with known performance, limits, and guardrails, being ready for the mass market. In other words, this tech shouldn’t even be out of the research labs yet! We definitely shouldn’t have every major consultancy pushing it as the cure-all for every problem throughout your entire enterprise. (Or new start-ups claiming they can offer you AI Employees!)

How many more examples of (silicon) snake oil do we need before we accept there is no panacea for all your ailments — be they physical, mental, or industrial — abandon this current iteration of Gen-AI, and go back to the targeted, mature solutions that were finally ready for prime time (as we finally had enough processing power, data, and research behind us to deploy them with confidence)?

And even though the technology might work as much as 12% of the time (per a PwC survey in which 12% of 4,454 CEOs reported both revenue gains and cost reductions), that’s not much of a validation of the technology, especially since those gains and cost reductions could have nothing to do with AI at all (and the 6% pilot success rate from a recent McKinsey study is a much more reliable metric here).

If you want real success, find an (A)RPA solution that works, call it AI, and buy it while you wait another decade for this technology to mature to the point where it’s reliable, guarded, and safe for mass-market adoption and widespread application. (Or wait for an AI-enabled SaaS provider to come along who will do the 24/7/365 human monitoring required for you and make its software usable and safe through that monitoring. Because all the current generation of LLM[-powered Agentic AI] tech is doing is increasing the need for human monitoring, not decreasing it.)