Google—Not Being Evil, Not Doing Harm

In a couple of columns in December 2008, I questioned whether Google was being true to its maxim of not being evil. At the time, I conflated its motto of “Don’t be evil” with the physicians’ Hippocratic Oath of “Do no harm.” But I also argued that infringement of privacy could indeed be harmful in a physical sense.

Since 2008, I have given a great deal of thought to the differences between security-critical and safety-critical software systems, and to the risks that result from combining them into systems of systems and cyber-physical systems. In fact, I have written a book on the subject, “Engineering Safe and Secure Software Systems,” which is scheduled for release at the end of November 2012.

It occurred to me that Google’s world has also changed and that the maxim “Do no harm” might now apply. I’m talking about Google’s venture into driverless (or autonomous) vehicles. All of a sudden, Google is dealing with safety-critical control systems that steer, accelerate, and decelerate vehicles. If these control systems were to malfunction or fail, the consequences could be disastrous.

It may seem odd to see Google staffers getting involved in areas well outside their usual scope of expertise, but it does make sense when you consider how they can integrate many of their existing capabilities in mapping, location, navigation, and the like with the directing of vehicles. In fact, Google has stated that they don’t intend to make vehicles themselves, but are likely to partner with manufacturers by making their software and data available. Of course, with Google’s cash, they could buy a car manufacturer if they wished and thought it would be a profitable venture.
