The MIT Technology Review recently published a great article, "The Dark Secret at the Heart of AI", which notes that decisions made by an AI based on deep learning cannot be explained by that AI and, more importantly, that even the engineers who build these apps cannot fully explain their behaviour.
The reality is that AI based on deep learning uses artificial neural networks with hidden layers. These networks are collections of nodes that identify patterns using probabilistic equations whose weights change over time as similar patterns are recognized again and again. Moreover, these systems are usually trained on very large data sets (far larger than a human can comprehend) and then programmed with the ability to keep training themselves as new data is fed in over time, leading to systems that have evolved with little or no human intervention and that have, effectively, programmed themselves.
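To make that concrete, here is a minimal sketch (a toy, not any production system) of such a network: one hidden layer, trained by gradient descent on the classic XOR problem. Watch where the "learning" ends up: in numeric weights that nobody set by hand.

```python
import numpy as np

# A toy network: 2 inputs -> 4 hidden nodes -> 1 output probability.
# The XOR pattern can't be captured by any single weight, only by the
# combination of all of them -- the explainability problem in miniature.

rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # input -> hidden
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(10000):
    # Forward pass: the network emits a probability, not a certainty.
    h = sigmoid(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)

    # Backward pass: nudge every weight a little toward lower error.
    # Repeated thousands of times, the weights drift into a working
    # configuration that no engineer ever wrote down.
    d_out = (p - y) / len(X)             # averaged cross-entropy gradient
    d_hid = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out;  b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_hid;  b1 -= lr * d_hid.sum(axis=0)

print(np.round(p.ravel(), 2))  # close to [0, 1, 1, 0], so it works...
print(np.round(W1, 2))         # ...but the "why" is just these numbers.
```

Scale those two little weight matrices up to millions of parameters across dozens of layers and you have exactly the problem the MIT article describes: the answer is in there, but the reason is not.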
And what these systems are doing is scary. As per the article, last year a new self-driving car was released onto New Jersey roads (presumably because the developers felt it couldn't drive any worse than the locals) that didn't follow a single instruction provided by an engineer or programmer. Instead, the car ran entirely on an algorithm that had taught itself to drive by watching a human do it. Ack! The whole point of AI is to develop something flawless that will prevent accidents, not to create a system that mimics us error-prone humans! And, as the MIT article asks, what if it [the algorithm] someday did something unexpected, like crashing into a tree? There's nothing to stop the algorithm from doing so, and no warning will be coming our way. If it happens, it will just happen.
And the scarier thing is that these algorithms aren't just being used to set insurance rates, but to determine who gets insurance, who gets a loan, and who gets, or doesn't get, parole. Wait, what? Yes, they are even used to project recidivism rates and influence parole decisions, based on data that may or may not be complete or correct. And they are likely being used to determine whether you even get an interview, let alone a job, in this new economy.
And that's scary, because a company might deny you something you deserved solely because the computer said so, and you deserve a better explanation than that. Fortunately for us, the European Union thinks so too. So much so that companies there may soon be required to provide an adequate, and accurate, explanation for decisions that automated systems reach. The EU is considering making it a legal right for individuals to know exactly why they were accepted or declined for anything based on the decision of an AI system.
This will, of course, pose a problem for those companies that want to continue using deep-learning-based AI systems, but the doctor thinks that is a good thing. If the system is right, we really need to understand why it is right. We can continue to use these systems to detect patterns or possibilities that we would otherwise miss, many of which will likely be correct, but we can't make decisions based on them until we identify the [likely] reasons behind them. We have to either develop tests that will allow us to make a decision, or use other learning systems to find the correlations that will allow us to arrive at the same decision in a deterministic, and identifiable, fashion (see the sketch below). And if we can't, we can't deny people their rights on an AI's whim, because we all know that AIs just give us probabilities, not actualities. We cannot forget the wisdom of the great Benjamin Franklin, who said that it is better 100 guilty persons should escape than that one innocent person should suffer; if we accept the un-interrogable word of an AI, that person will suffer. In fact, many such persons will suffer, and all for want of a reason why.
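What might "another learning system to find the correlations" look like in practice? Here is one hedged sketch: distilling a black box into a small, readable surrogate model. Everything here (the synthetic data, the random forest standing in for the opaque system) is a made-up stand-in for illustration; the technique, fitting an interpretable tree to the black box's own predictions, is the point.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# Stand-in for the opaque system: a random forest on synthetic data.
X, y = make_classification(n_samples=2000, n_features=6, random_state=0)
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Surrogate: a shallow tree trained to imitate the black box's *decisions*,
# not the original labels. Its splits are human-readable if/then rules.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# How faithfully does the readable model track the unreadable one?
fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
print(f"fidelity to black box: {fidelity:.1%}")
print(export_text(surrogate, feature_names=[f"x{i}" for i in range(6)]))
```

The surrogate's if/then splits are the kind of deterministic, identifiable decision path the original system can't give us, and the fidelity score is an honest measure of how much of the black box those readable rules actually capture.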
So, in terms of AI, the doctor truly hopes that the EU stands up and brings us out of the wild digital west and into the modern age. Deep learning is great, but only as a tool to help us find the lighted paths we need, not as a dark path we follow blindly.