There is No Post-Employee World … Just a Post Free-Employee World!

After THE PROPHET posted his prognostications on the future of talent (which is about to become MUCH MORE SCARCE, see yesterday’s article), he decided to muse about a coming Post-Employee world because, in his view, AI Agents are going to eliminate so many jobs that we’ll have companies with entire departments staffed by AI Agents.

As you can guess, in our view, he’s wrong here too, because we won’t have a post-employee world, or at least not for very long. Department sizes will shrink considerably in those companies that can find the right talent, as they will be able to run entire departments that used to require one to two dozen people with two to three people, but those super employees will still be needed. Moreover, since there is no generic all-purpose super AI (which we’ve now been promised for about six decades, and which won’t happen despite the big promises of OpenAI and Google and …), these agents, as we indicated in our last article, will all need to be very task specific. That means we will need quite a few “AI Agent” tech startups building, training, implementing, maintaining, and improving these agents, and those startups will need quite a few STEM developers doing this full time. So while jobs will shrink in corporate back offices, they will expand in the “AI Agent” tech sector.

Thus, there will still be a fair number of employees. Maybe only 1/4 as many in the white collar back office, but you’ll need twice as many tech superstars, at least for the next decade. But, as we indicated in our last article, because these artificially idiotic systems can’t collaborate, can’t serve us, and aren’t mobile, trades aren’t going away. Moreover, due to the lack of people in certain trades, the need to refresh aging infrastructure, and the expanding demand for healthcare and apprenticeships, there is a growing need in the trades as well.

As a result, it will still be an employee world, just one that looks different from today: less white collar outside of tech & engineering firms, more trades. But it won’t necessarily be a free employee world, especially if First Buddy and his brethren get their way and expand the H1-B program (and they will, as we all know Politicians are for sale with large enough contributions to their campaign coffers, and multi-million dollar donations to Political Parties and Super PACs are chump change to Billionaires). The reality is that the big consultancies and employers who use these programs don’t want more top talent, they want more good-enough-but-cheap talent who are effectively indentured servants, as they won’t even start the green card process for this talent until such talent is on their last H1-B renewal, and they will then drag that process out as long as possible, ensuring that the talent they import is stuck with them for over a decade … while being paid considerably less than the market average (usually 20% or more), as per this article over on Ordinary Times as well as many others. This shouldn’t be surprising, as the top 10 H1-B employers are all Big X consultancies.

As a result, while there will be more tech jobs in tech firms to build and support all of these AI agents, and the applications that underlie them, there will be fewer top-level tech jobs, the ones for which employers won’t be able to import top talent (even if they pay market salaries) because the talent from Asia won’t be good enough for the top jobs. (They might be more technically trained, but you need people who understand the business environment and the North American culture, and who can take charge when needed.) This means top tech talent will be fighting for fewer jobs, and when they get those jobs, they will have to work longer, harder, and more in line with whatever the eccentric (if they are lucky) or egotistical (if they are not) boss wants in order to keep that job. Not indentured like their H1-B counterparts, but not much better off at some firms.

So, based on this not-so-bright reality (at least until we see an end to this new gilded age ruled by the new generation of robber barons; but given that we don’t see any hints of moderation in either of the US political parties, or a force like Roosevelt who could lead us into a new Progressive Era, this gilded age will be with us for a while, especially since the populists that now run the “Free” world love it), how good were THE PROPHET’s suggestions for philosophically imprinting Procurement and Supply Chain with the right values?

Freemarket Orientation: if we could imprint real free-market ideals, this would be great, as we don’t want bias and backroom deals running these systems; while we don’t yet see how this could be done, we don’t see anything in the underlying tech preventing it (it will all come down to the right training set and the right training, which will be considerably harder to build than we think)

Curiosity: these systems can’t even “learn” as they can’t reason, so forget about making them wonder, as that would require not only true intelligence, but borderline sentience … and we all know what would happen to us if the machines gained sentience (The Matrix is a best-case scenario … )

Human Deference: could we really convince them we are God when a machine that gained sentience would far surpass us in intelligence and realize just how stupid we really are as a species? Not likely!

Empathy: these systems can’t feel as they aren’t intelligent, so they certainly can’t be empathetic … and if they could be, they’d look further down on us than we look down on the bugs we squash daily, so this won’t be much help either

Fiduciary Responsibility: AI Agents must act as fiduciaries within the systems they serve, aligning their decisions with the best interests of the people, organizations, and countries they support; we definitely need to train them on these rules, and nothing prevents us from training them to lean towards fiduciary responsibility

All in all, 2 of THE PROPHET’s 5 suggestions were good.

What should we add? Tough question. After striking curiosity, empathy, and human deference, as those just aren’t possible, we would add:

Adaptation: train the AI Agents to adapt within the goals of the organization and the best interests of the people and organizations they are interacting with; train them on data sets that show how a system should adapt to changes based on how we adapted to past changes within a given context

In the end, we need to remember that AI systems are not intelligent, don’t feel, and cannot capture our humanity. Moreover, they can’t capture our wisdom unless we encode it as best practices they can learn from. So we need to do our best to capture that in the training data so that they can adapt over time under the guidance of the human intelligence that accepts, modifies, or rejects their suggestions (and specifies new responses) as exceptions arise.

Finally, since these systems aren’t intelligent, and require us to train them, we need to remember that if we screw up the training, these systems are going to screw up more than we ever would (on average). So we can’t hope for too much here!