What Kind of Bubble is AI?
by Cory Doctorow

… and how it’s going to pop like every other tech bubble since the first dot-com bust!
Cory doesn’t say it outright, but he makes it pretty clear that when the bubble pops, like every tech bubble that has come before, there may not be much left to salvage (especially since no one is thinking about what happens when it does pop).
So I’ll clarify:
A lot of people are going to lose a lot of money
(and while the credulous investors hyping this bandwagon as it heads for a cliff probably deserve to lose every penny, all of the pensioners in the pension funds they scammed don’t; so if you run a pension fund, please pull out of ridiculously overvalued Gen-AI NOW!)
A lot of people are going to lose their jobs
(and it’s going to be more devastating to the tech sector than this year’s Silicon Valley Bank failure combined with the recession forecasts that led to over 250K IT jobs being slashed in the USA alone)
A lot of hardware is going to suddenly go idle
and smaller cloud providers are going to go under when the big-name cloud providers suddenly drop their prices to the floor just to keep revenue coming in (leaving the Amazon, Google, and Microsoft monopolies in control of most of the servers outside of China and Russia)
The problem is, as Cory clearly lays out, when you take one step back and look at the ridiculous hype from a business/revenue lens, all of the big, exciting use cases for AI are either
a) low dollar [and low-stakes and fault-tolerant] (helping us cheat on our [home]work or generating stock-art for bottom feeders [who won’t pay an artist and don’t mind ripping off the IP from thousands of artists]) or
b) high-dollar but high-stakes and fault-intolerant (self driving cars, radiological cancer detection, worker screening and hiring, etc.)
and when you consider the data center costs of these super-sized models (as these data centers consume MORE energy than a small town), low-dollar AI applications won’t pay the bills and high-dollar AI applications cost MORE to deploy than to just do it the traditional way with an educated and capable human!
E.g., self-driving cars don’t work (Cruise needs to employ 1.5 times as many supervisors as a taxi service would employ drivers just to keep its cars, which still hit and critically injure people, relatively safe)
E.g., radiological cancer detection requires a human expert to spend the usual amount of time on a diagnosis before consulting the AI, and then, if the AI doesn’t agree, to spend that much time again
Not that we’re stopping you from jumping on the (Gen-)AI bandwagon or selling the silicon snake oil that OpenAI and Microsoft are selling. We’re just not joining you on that bandwagon, as its steering algorithm is defective and it’s heading straight for a very high cliff at a very high speed …
Merry Christmas!