For going on seven decades, AI cult members have been telling us that if they just had more computing power, they’d solve the problem of AI. For going on seven decades, they haven’t.
They won’t as long as we don’t fundamentally understand intelligence, the brain, or what is needed to make a computer brain.
Computing will continue to get exponentially more powerful, but it’s not just a matter of more powerful computing. The first AI program had a single core to run on; today’s AI programs have 10,000-core superclusters. The first AI programmer had only his salary and elbow grease to code and train the model; today’s AI companies have hundreds of employees, billions in funding, and have spent $200M to train a single model … which, upon release to the public, told us we should all eat one rock per day. (Which shouldn’t be unexpected, as the number of cores powering a single model today is still less than the number of neurons in a pond snail.)
Similarly, the “models” will get “better”, relatively speaking (just as deep neural nets got better over time), but if they are not 100% reliable, they can never be used in critical applications, especially when you can’t even reliably predict their confidence. (Or, even worse, when you can’t even be confident the result isn’t 100% fabrication.)
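(To make “predicting confidence” concrete: calibration error is the standard way to measure the gap between how confident a model says it is and how often it is actually right. The sketch below uses entirely made-up numbers, not any real model, purely to show how that gap is quantified.)

```python
# A minimal, hypothetical sketch of expected calibration error (ECE):
# the gap between a model's *stated* confidence and its actual accuracy.
# All data here is synthetic, for illustration only.
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Bin predictions by stated confidence; compare each bin's
    average confidence to its actual accuracy, weighted by bin size."""
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(confidences[mask].mean() - correct[mask].mean())
            ece += mask.mean() * gap  # weight by fraction of samples in bin
    return ece

# Toy example: a "model" that claims ~90% confidence but is right only ~60% of the time.
rng = np.random.default_rng(0)
conf = rng.uniform(0.85, 0.95, size=1000)   # what the model reports
right = rng.random(1000) < 0.60             # what actually happens
print(f"ECE: {expected_calibration_error(conf, right):.2f}")  # ~0.30: badly miscalibrated
```

A well-calibrated model would score near zero here; a gap of ~0.30 means its stated confidence tells you almost nothing about whether to trust the answer.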
When the focus was narrow machine learning and focused applications, and we accepted the limitations we had, progress was slow, but it was there, it was steady, and the capabilities and solutions improved yearly.
Now the average “enterprise” solution is declining in quality and applicability, which is going to erase decades of trust built up in the cloud and in reliable AI.
And that’s the fallacy. Adding more cores and more data just multiplies the capacity for error, not improvement.
Even a smart Google engineer said so. (Source)