As per this recent K@W article, automakers are working hard to advance the state of the automobile, and that’s a good thing. But should the ultimate goal be autonomous, driverless vehicles?
While park-assist, remote-control parking, trailer assist, construction-site assist, blind-spot monitoring, pre-crash occupant protection, and other safety systems are all welcome, there’s a difference between adding technology that alerts a driver to danger and building a car that drives itself in an effort to remove the human element entirely. Human error is the leading cause of accidents, but it’s not the only cause. Sometimes an animal, or a person, jumps in front of the vehicle, or an object falls into its path; sometimes a component breaks and the vehicle can’t be stopped; and sometimes a natural disaster strikes.
If there’s not enough time to stop, a computer is not going to be able to stop the car; if something breaks, a program can’t fix it; and if an unexpected event occurs, will the algorithm know how to deal with it? For example, even when there’s no time to stop, a human might take evasive action and avoid hitting a person who steps in front of the vehicle without warning. But if the only choice is hitting the person or hitting a building, will the algorithm make the right choice? (Cars and buildings can be fixed; dead people can’t.) Or will it keep calculating to infinity in hopes of finding a collision-free path, and hit the person in the process? What if a failure severs the connection between the core processor and the brakes? What if an earthquake happens? Will the algorithm be able to interpret the sensor readings correctly?
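To make the “calculating to infinity” worry concrete, here’s a minimal sketch of how a planner could be forced to commit to a least-bad option instead of searching forever. Everything here is invented for illustration: the maneuver names, the harm costs, and the `find_collision_free_path` stub are hypothetical stand-ins, not any real vehicle’s logic.

```python
import time

# Hypothetical candidate maneuvers with illustrative "harm" costs.
# In a real planner these would come from perception and prediction,
# and the cost model itself would be the hard ethical question.
CANDIDATES = [
    {"name": "swerve_into_building", "hits_person": False, "cost": 10},
    {"name": "brake_straight", "hits_person": True, "cost": 1000},
]

def find_collision_free_path():
    """Stand-in for an expensive search; here it never succeeds."""
    return None

def choose_maneuver(deadline_s=0.05):
    """Search for a collision-free path, but only until the deadline.

    If time runs out, fall back to the candidate with the lowest
    harm cost instead of calculating to infinity.
    """
    start = time.monotonic()
    while time.monotonic() - start < deadline_s:
        path = find_collision_free_path()
        if path is not None:
            return path
    # Deadline hit: commit to the least-bad precomputed option.
    return min(CANDIDATES, key=lambda c: c["cost"])["name"]

print(choose_maneuver())  # falls back to "swerve_into_building"
```

The point of the sketch is that a hard deadline has to be designed in; without one, an optimizer really can spin looking for a perfect answer while the crash happens anyway.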
But more importantly, what if the system crashes? The average car already has more lines of code in its operating system than an average computer operating system does. As per this article over on the MIT Technology Review, many cars have a hundred million lines of code in their operating system. For comparison, Windows 7 has about 40 million lines of code. How many lines of code will it take to create an operating system that drives an autonomous vehicle well enough for a government to allow it on the road? Hundreds of millions, if not a billion. That’s a lot of code. How do you adequately test that much code? You don’t. You can never guarantee that the code is error-free and that the system won’t crash. You can only test until the probability of failure is low enough for you to accept it as likely to be error-free in practice.
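A back-of-the-envelope calculation shows why “likely to be error-free” is the best anyone can hope for. The numbers below are assumptions for illustration only: published estimates of residual defect density in well-tested shipped software vary widely, but figures on the order of 0.1 to 1 defects per thousand lines of code are commonly cited, and 300 million lines is just a middle guess between the article’s hundreds of millions and a billion.

```python
# Rough defect-density arithmetic (illustrative assumptions only).
lines_of_code = 300_000_000          # assumed size of a driverless-car OS
defects_per_kloc = (0.1, 1.0)        # assumed residual defects per 1,000 LOC

low = lines_of_code / 1000 * defects_per_kloc[0]
high = lines_of_code / 1000 * defects_per_kloc[1]
print(f"Estimated residual bugs: {low:,.0f} to {high:,.0f}")
# Even the optimistic end of the range leaves tens of thousands
# of latent bugs riding along at highway speed.
```

Under these assumptions the estimate comes out between 30,000 and 300,000 residual bugs, which is the whole argument in two lines of arithmetic.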
And what happens if an unexpected event occurs at 70 MPH on the highway and the system crashes? Nothing good.
But the real concern is what happens when the OS is hacked. If your computer gets hacked, you lose your personal and confidential information, and the machine is inaccessible to you until you unplug and reboot it. If your car gets hacked, it’s inaccessible to you until you cut the power, and you can’t do that at 70 MPH. Since cars are now being built with 4G, Bluetooth, Wi-Fi, and the like, if a hacker gets control of yours while you’re on the road, he can crash it into another car, and there will be nothing you can do.
And if the hack exploits a zero-day bug in the OS, a hacker could take control of every car on the same communication network at once and cause them all to accelerate until they hit something. If tens of thousands of vehicles were subverted simultaneously, the widespread damage would be hundreds or thousands of times worse than what most terrorist groups achieve when they manage to hijack a plane or blow up a single building.
In other words, removing the human completely from the picture doesn’t increase safety; it decreases it. If we must have autonomous vehicles, then they had better all come with an old-school, code-free manual override switch that, in an emergency, lets us turn the computer off so we can drive home safely and tell those darn kids to get off our lawns.