It was bound to happen sooner or later … An autonomous automobile (auto-auto) equipped with Google’s self-driving technologies caused a crash while in autonomous mode. As described in a February 29, 2016 Wired Magazine article, “Google’s Self-Driving Car Caused Its First Crash,” by Alex Davies, available at http://www.wired.com/2016/02/googles-self-driving-car-may-caused-first-crash/, the self-driving SUV crashed into a bus on Valentine’s Day 2016.
A few weeks earlier, on January 12, 2016, reporter Alex Davies had written another article, “Google’s Self Driving Cars Aren’t as Good as Humans—Yet,” in which he pointed out that humans behave differently from computer programs. As I point out below, humans in different cultures also behave differently from one another, as do humans within a culture but in different cohorts.
About a year earlier, on December 15, 2014, Greg Miller of Wired Magazine wrote an article proposing that “Autonomous Cars Will Require a Totally New Kind of Map,” at http://www.wired.com/2014/12/nokia-here-autonomous-car-maps/. Miller suggests that maps will have to be a hundred times more detailed than those used for GPS navigation (centimeters vs. meters) and will have to include real-time information on accidents, traffic jams and lane closures. They will also need to account for human psychology, Miller claims. Such changes may well mitigate the risk of accidents of the type that just occurred with one of Google’s auto-autos.
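Miller’s two requirements can be pictured as two layers of one map tile: a static layer of centimeter-resolution lane geometry, and a dynamic layer of real-time events such as accidents and lane closures. The sketch below is purely illustrative; all class and field names are my assumptions, not any actual HD-map format.

```python
from dataclasses import dataclass, field
from typing import Dict, List

# Illustrative sketch only: a map tile pairing static centimeter-level
# lane geometry with a real-time event layer. Names are hypothetical.

@dataclass
class LanePoint:
    x_cm: int  # position within the tile, in centimeters (not meters)
    y_cm: int

@dataclass
class DynamicEvent:
    kind: str          # e.g. "lane_closure", "accident", "traffic_jam"
    lane_id: int
    expires_at: float  # Unix timestamp; stale events are ignored

@dataclass
class HdMapTile:
    tile_id: str
    lanes: Dict[int, List[LanePoint]]                 # static layer
    events: List[DynamicEvent] = field(default_factory=list)  # dynamic layer

    def active_events(self, now: float) -> List[DynamicEvent]:
        """Real-time view of the tile: only events that have not expired."""
        return [e for e in self.events if e.expires_at > now]

tile = HdMapTile("tile-001", {1: [LanePoint(0, 0), LanePoint(250, 5)]})
tile.events.append(DynamicEvent("lane_closure", lane_id=1, expires_at=2.0e9))
print(len(tile.active_events(now=1.0e9)))  # 1: the closure is still current
```

The point of the split is that the static layer changes rarely and can be distributed in advance, while the dynamic layer must be refreshed continuously, which is exactly the real-time feed Miller argues these maps will require.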
I have maintained, and still do, that much more of the intelligence has to be built into the infrastructure, as set forth in my May 26, 2015 BlogInfoSec column “Smart Cars, Smarter Roads.” Trying to build fail-safe cars where the intelligence is solely contained in the vehicle is costly, complex and subject to error, malfunction, and failure. Yes, the big companies want all the data to be centralized so that they can benefit financially from the information; but it is far more effective and efficient to have local sensors distributed throughout the transportation grid. If a local beacon had been aware of sandbags blocking off a lane, it could have informed the Google car of the situation, as well as all computer-controlled interconnected cars in the area, and the crash could likely have been avoided. Enhancing Google’s software alone is not going to prevent such crashes in all situations.
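The roadside-beacon idea above amounts to a simple publish-subscribe arrangement: one local observation reaches every connected car in the area at once, instead of each car having to detect the hazard on its own. A minimal sketch, with all names being hypothetical rather than any real vehicle-to-infrastructure API:

```python
# Hypothetical sketch of the roadside-beacon architecture: a beacon that
# learns of a hazard (e.g. sandbags blocking a lane) broadcasts it to all
# registered connected cars in range. Names are illustrative assumptions.

class ConnectedCar:
    def __init__(self, car_id: int):
        self.car_id = car_id
        self.known_hazards = set()

    def receive(self, hazard: str) -> None:
        # The car's planner would re-route around any known hazard.
        self.known_hazards.add(hazard)

class RoadsideBeacon:
    def __init__(self):
        self.subscribers = []

    def register(self, car: ConnectedCar) -> None:
        self.subscribers.append(car)

    def broadcast(self, hazard: str) -> None:
        # One local sensor reading informs every car in the area at once.
        for car in self.subscribers:
            car.receive(hazard)

beacon = RoadsideBeacon()
cars = [ConnectedCar(i) for i in range(3)]
for car in cars:
    beacon.register(car)
beacon.broadcast("sandbags_in_lane_2")
print(all("sandbags_in_lane_2" in car.known_hazards for car in cars))  # True
```

The design choice is the one argued in the column: the knowledge lives in the distributed infrastructure, so no individual vehicle's onboard perception is a single point of failure for that hazard.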
One of the biggest challenges with these vehicles is to teach them how to behave like humans. But which humans? Or which humanoids?
I came across a very revealing article in The New Yorker of February 22, 2016 about wealthy Chinese living in Vancouver, Canada. Jiayang Fan’s article, “Annals of Wealth: The Golden Generation—Why China’s super-rich send their children abroad,” describes the lifestyle of a group of young Chinese women from wealthy families living in Vancouver, including one who said the following while driving along with the reporter in her Maserati GranTurismo:
“It’s like this: when I am driving here [Vancouver] and need to make a turn, I turn on my signal light and do it. It’s the most normal thing in the world. When I first drove in Asia, I flashed my signal and immediately people, instead of slowing down, all sped up to cut me off. It was so maddening, and then, after a little while, I became like everyone else. I never signal when I turn in Asia. I just do it. You don’t have a choice.”
I am thinking that Canadians are much more courteous on the road than drivers in practically any other country. However, the main point here is that driving cultures differ much as national cultures do. Consequently, developers of software for autonomous vehicles would need to account for driving protocols that differ across cultures, within cultures, and from one individual to another. This is something that I don’t believe they can do, and therefore we will end up imposing strict standard rules on all vehicles. This can only be achieved when every vehicle falls under the control of software, in which case it won’t matter how individuals behave. That’s okay as long as the rules of the road are consistent and comprehensive, and there is no chance of systems going rogue due to software/hardware errors or hijacking. Right.