Even the offshore interns!
And since, like Meat Loaf,
I know that I will never be politically correct
And I don’t give a damn about my lack of etiquette
I’m going to come out and say I long for the days when AI meant “Another Indian”. (In the 2000s, the politically incorrect joke when a vendor claimed they had AI, especially in spend classification, was that the AI stood for “Another Indian” in the backroom, manually doing all of the classifications the “AI” didn’t do, and redoing, over the weekend, all of the classifications the “AI” got wrong, because the vendor who took your spend database on Friday had promised to have it back by Monday.)
The solution providers of that time may have been selling you a healthy dose of silicon snake oil, but at least the spend cube they provided was mostly right and reasonably consistent (compared to one produced with Gen-AI). (The interns may not have known the first thing about your business and classified brake shoes under apparel, but they did it consistently, and it was a relatively easy fix to remap everything on the next nightly refresh.)
At the end of the day, the doctor would rather have one competent, real intern than an army of bots where you don’t know which will produce a right answer, which will produce a wrong answer, and which will produce an answer so dangerous that, if executed and acted on, it could financially bankrupt the company or effectively destroy it with the brand damage it would cause.
After all, nothing could stop me from giving that competent, intelligent intern tested playbooks, similar case studies, and real software tools that use proper methodologies and time-tested algorithms guaranteed to give a good answer (even if not necessarily the absolute best answer), along with access to internal experts who can help if the intern gets stuck. Maybe I only get a 60% or 70% solution at best, but that’s significantly better than a 20% solution, infinitely better than a 0% solution, and immeasurably better than a solution that bankrupts the business. Especially if I limit the tasks the intern is given to those that don’t have more than a moderate impact on the business (and then use that intern to free up the more senior resources for the tasks that deserve their attention).
As for all the claims that the “insane development pace” of (Gen)-AI will soon give us an army of bots where each bot is better than an intern: given that the most recent instantiation of Gen-AI released to the market, which had 200 MILLION spent on its development and training, is telling us to eat one ROCK a day (digest that! I sure can’t!), I’d say the wall has been hit, and hit hard, and until we have a real advancement in understanding intelligence and in modelling intelligence, you can forget any further GENeric improvements. (Improvements in specific applications, especially those based on more traditional machine learning, sure, but this GEN-AI cr@p, nope.)
When it comes to AI, it’s not just a matter of more compute power. That was clear to those of us who really understood AI a couple of decades ago. AI isn’t new. Researchers were discussing it in the (19)50s; ’56 saw the creation of the Logic Theorist, which was arguably the birth of Automated Reasoning; ’59 saw the founding of the MIT AI lab by McCarthy and Minsky; and ’63, in addition to seeing the publication of “Computers and Thought”, saw the announcement of “A Pattern Recognition Program That Generates, Evaluates, and Adjusts Its Own Operators”, which was arguably the first AI program (as AI needs to adjust its parameters to “learn”).
That was over SIXTY (60) years ago, and we still haven’t made any significant advances towards “AI”.
Remember that we were told in the ’70s that AI would reshape computing. Then we were told in the ’80s that the new fifth-generation computer systems they were building would give us massively parallel computing and advances in logic, and lay the foundation for true AI systems. It never happened. Then, when the cloud materialized in the ’00s, we saw a resurgence in distributed neural nets and were told AI would save the day. Guess what? It didn’t. Now we’re being told the same bullshit all over again, but the reality is that we’re no closer now than we were in the ’60s.

First of all, while computing is 10,000 times more powerful than it was six decades ago (as these large models have 10,000 cores), at the end of the day a pond snail has more active neurons, and more neuronal connections, in its brain than these models have cores. Secondly, we still don’t really understand how the brain works, so these models still don’t have any intelligence (and the pond snail is infinitely more intelligent). (So even when we reach the point where these systems are one million times bigger than they are today, which could happen this century, we still won’t have intelligence.)
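For those who want to see just how lopsided this comparison is, here is a minimal back-of-envelope sketch in Python. The 10,000-core figure and the one-million-times projection come from the paragraph above; the snail figures (roughly 20,000 neurons, on the order of a thousand connections each) are assumed ballpark numbers for illustration only, not measurements from this post.

```python
# Back-of-envelope comparison of model "cores" vs. a pond snail's brain.
# The core count and the million-fold projection come from the paragraph above;
# the snail figures below are assumed ballpark numbers for illustration only.

model_cores = 10_000              # "these large models have 10,000 cores" (figure from the text)
snail_neurons = 20_000            # assumed ballpark neuron count for a pond snail
connections_per_neuron = 1_000    # assumed order-of-magnitude connections per neuron

snail_connections = snail_neurons * connections_per_neuron

print(f"Model cores:                {model_cores:,}")
print(f"Snail neurons:              {snail_neurons:,}")
print(f"Snail neuronal connections: {snail_connections:,}")

# Even the "one million times bigger" projection only scales the substrate;
# it says nothing about whether the result understands anything.
print(f"Cores after a 1,000,000x scale-up: {model_cores * 1_000_000:,}")
```

Swap in whatever snail estimate you like; the point is that the raw substrate doesn’t come close to the complexity of even a simple brain, let alone an understanding of how one works.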
So bring back the interns, especially the ones in India. With more than four times the population of the US, statistically speaking, India has more than four times the number of smart people, and your chances of success are looking pretty good compared to using an application that tells you to eat rocks.