
Vindication for Toyota? Proving the Negative

In my February 16, 2010 Bloginfosec column “Negative Testing Revisited – Vehicle Control Systems (Part 1),” I described and discussed the concerns about the software controlling the brakes on Toyota regular-engine and hybrid vehicles and on Ford hybrids. The supposition was that software design problems or electromagnetic interference might be contributing to the failures.

Now, one year later, newspapers report that the extensive testing performed on the control systems, including reviews of the design and code and subjecting the circuitry to electromagnetic interference, came up empty. Big sigh of relief. Toyota stock up.

But, wait! As you delve into the February 9, 2011 articles in, say, The Wall Street Journal and The New York Times, a different picture is painted with regard to the results of the NASA/NHTSA study. The WSJ article (“U.S. Points to Toyota Driver Error: Investigators Clear Car Electronics for Instances of Unintended Acceleration; Pedals, Mats Contributed,” by Mike Ramsey, Josh Mitchell and Chester Dawson) quotes NASA’s lead engineer, Michael Kirsch, as saying “… an electronics failure couldn’t be entirely ruled out …” but would be “incredibly unlikely.” Where does “incredibly unlikely” sit on the SIL (Safety Integrity Level, defined in IEC/EN 61508) scale? One in a million, ten million, a billion … ? Well, there are millions of such vehicles driving billions of miles a year, so even incredibly unlikely events may happen.
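Some back-of-the-envelope arithmetic shows why fleet scale defeats “incredibly unlikely.” The failure rate and mileage figures in this sketch are illustrative assumptions, not values from the NASA/NHTSA study:

    # Back-of-the-envelope only: the rate and mileage figures below are
    # assumptions for illustration, not numbers from the NASA/NHTSA study.
    failure_rate_per_mile = 1e-9          # "incredibly unlikely": one in a billion miles
    fleet_size = 5_000_000                # millions of affected vehicles (assumed)
    miles_per_vehicle_per_year = 12_000   # typical annual mileage (assumed)

    total_miles = fleet_size * miles_per_vehicle_per_year
    expected_events = failure_rate_per_mile * total_miles

    print(f"Fleet miles per year:   {total_miles:.1e}")
    print(f"Expected failures/year: {expected_events:.0f}")
    # With these assumptions: 6.0e+10 miles a year -> roughly 60 events,
    # so "incredibly unlikely" per mile is not "never" across a fleet.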

The NYT article (“U.S. Inquiry Finds No Electronic Flaws in Toyotas” by Matthew L. Wald) also quotes Mr. Kirsch. Here he says that “It’s very difficult to prove a negative.” Hello. Isn’t that the bane of security metrics? You can never know whether a system or network is absolutely secure. That is why I am a proponent of functional security testing: it pushes us to a higher level of assurance, though not to absolute certainty. Incidentally, my article “The Need for Functional Security Testing” is now available online at the CrossTalk journal website at http://www.crosstalkonline.org/storage/issue-archives/2011/201103/201103-Axelrod.pdf

The lesson I take from the testing of Toyota’s electronics is that, as with security testing, there is a level at which manufacturers of equipment, software, and the like are willing to say that they have tested enough and to accept the risk that their products are secure or safe enough. When something bad happens, it is frequently because a decision-maker’s a priori risk estimates were understated. At that point, depending on the nature of the incident (a space shuttle explosion, vehicle brake failures, contaminated medications, system crashes), they are willing to expend relatively large amounts of money and effort to find the causes and correct them. While this response is usually required to satisfy regulators and to regain lost credibility with customers, business partners, and so forth, it comes after the hit has already occurred, so the cost of the incident adds to the cost of remediation. The resulting total often far outweighs what it would have cost to do it right the first time.
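The economics behind that decision can be made concrete with a toy expected-cost calculation. Every figure below is an invented assumption, not data from the Toyota case:

    # Toy expected-cost comparison -- every figure here is invented for illustration.
    p_incident = 0.01        # assumed probability of a serious field failure
    upfront_testing = 10e6   # assumed cost of thorough testing before rollout
    incident_cost = 2e9      # assumed cost of recalls, remediation, lost reputation

    expected_loss_if_skipped = p_incident * incident_cost
    print(f"Expected loss if testing is skipped: ${expected_loss_if_skipped:,.0f}")
    print(f"Cost of testing up front:            ${upfront_testing:,.0f}")
    # Even at a 1% incident probability, the expected loss ($20,000,000)
    # exceeds the testing budget; understating p_incident hides that.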

As with safety, so it is with security. How much and what kind of testing is enough? There is an old adage about computer software development: “There’s never enough time or money to do it right the first time. There’s always enough time and money to do it over.” If the testing on the Toyota systems had been done before rollout, it would probably still have cost as much, but consider all the costs of the recalls and the big hit to the company’s reputation that Toyota incurred. All of that might have been avoided.