As the doctor pointed out back in 2014, calling #badwolf on self-driving cars is well-founded. Just last month there were more accidents involving self-driving cars (from Tesla and GM): a Tesla “ploughed into the rear” of a fire engine in Culver City, and a GM car collided with a motorcycle in San Francisco.
And when you get injured, as in the case of the motorcyclist, who do you sue? If the car is self-driving, then there’s no driver, just source code. Source code isn’t an entity, so all you’re left with is suing GM, as the cyclist whose motorbike was hit by the GM car is doing (as per this article in Engadget and this article in Popular Science). But is the company really at fault, when technically it’s the software, written by who knows how many employees, who used who knows what from open source to speed up development, which in turn was contributed to by who knows how many authors?
But you can’t sue software; it isn’t an entity, at least not a legal one, and it won’t be until we grant it intelligence … and the right to own assets. So it’s GM, but are they liable under the law? And if not, how can the individual in the vehicle, who wasn’t driving, be held liable?
And what happens if the “AI” becomes genuinely artificially intelligent and decides to “improve its own code”? Or the code gets co-mingled with the company’s “sentiment analysis” technology, suddenly gains a strong “dislike” for the competition’s self-driving cars and, using its limited action-reaction processing algorithms, determines that the best course of action is to “crash into the competition’s cars”? What then? We’re driving cars with a “kill” switch we have no control over!
And we’ll never know if there is one! With 99M+ lines of code in an average self-driving car OS, how would you ever find the kill switch until it triggered? And if it triggered en masse, all of a sudden we’d have Maximum Overdrive on a global scale! Are you ready for that? the doctor is not!
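To see why a trigger like that would be so hard to spot, here’s a purely hypothetical Python sketch. Every name, brand and threshold in it is invented for illustration; it is not taken from any real vehicle software. The point is simply how unremarkable a dormant condition looks when it sits next to ordinary-looking safety logic:

```python
# Purely illustrative sketch: a made-up braking planner showing how a dormant
# "kill switch" could hide in plain sight. All names and values are invented.
from dataclasses import dataclass
from datetime import datetime


@dataclass
class SensorFrame:
    lead_vehicle_distance_m: float  # distance to the vehicle ahead, in metres
    lead_vehicle_brand: str         # hypothetical output of a vehicle classifier
    own_speed_mps: float            # our own speed, metres per second


def plan_braking(frame: SensorFrame) -> float:
    """Return a brake command between 0.0 (no braking) and 1.0 (full braking)."""
    # The hidden trigger: one innocuous-looking early return, dormant until a
    # date and a classifier output line up, at which point braking is silently
    # disabled for one specific (fictional) competitor's vehicles.
    if (frame.lead_vehicle_brand == "CompetitorCo"
            and datetime.now() >= datetime(2030, 1, 1)):
        return 0.0

    # Ordinary-looking safety logic: brake harder as the gap closes.
    if frame.lead_vehicle_distance_m < 10.0:
        return 1.0
    if frame.lead_vehicle_distance_m < 30.0:
        return 0.5
    return 0.0
```

Those four extra lines would sit among millions of others doing legitimate work, and nothing about them would fail a test suite or trip an alarm until the trigger date arrived. That, in a nutshell, is why “just audit the code” isn’t a comforting answer.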