Disclaimer: The opinions of the columnists are their own and not necessarily those of their employer.
C. Warren Axelrod

Self-Driving Software … Test, Test, Test

A spokesman for Mobileye, the company whose vision technology underpinned Tesla’s Autopilot, remarked that the software had not been tested for the particular scenario in which a Tesla slammed into a tractor-trailer, continued under the trailer and drove on independently for some distance, shearing off the car’s roof and killing its reportedly distracted driver in the process.

It is to be expected that there will be combinations of conditions that will lead to tragic accidents. It just isn’t economically feasible to come up with scenarios or use cases for every possible operational combination and permutation for complex cyber-physical systems and test each and every one of them. I have often mentioned the time when I was asked to propose scripts for security testing a customer-facing transaction-processing system. The regular set of functional tests for the system comprised about 600 cases. I said that, in order to test every possible (first-order) security case, we would need to test 10,000 cases, which was impractical given the deadline for delivering the system. We finally agreed on sequentially testing smaller samples until we reached an acceptable level of confidence, which is a standard design of experiments approach. The system never failed on any first-order situation, although we did encounter a problem when a user followed a one-in-a-million sequence of actions, yielding a third- or fourth-order situation. The result was an unintended disclosure of information … far from the life-threatening events that are encountered in the physical world.
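For readers who like to see the arithmetic, a rough sketch of that kind of sampling calculation appears below. It is purely illustrative and written in Python for this column; the function names, the batch-wise sampling and the assumption of independent, randomly chosen cases are mine, not a description of the actual test plan we used.

```python
import math
import random

def cases_needed(max_failure_rate, confidence=0.95):
    """How many randomly chosen test cases must all pass before we can claim,
    with the given confidence, that the true failure rate is below
    max_failure_rate (assuming independent, representative samples)."""
    # Solve (1 - p)^n <= 1 - confidence for n.
    return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - max_failure_rate))

def sample_until_confident(all_cases, run_case, batch_size, max_failure_rate):
    """Run random batches of cases until enough have passed to support the
    confidence bound, or stop at the first failure."""
    target = cases_needed(max_failure_rate)
    passed = 0
    while passed < target:
        for case in random.sample(all_cases, min(batch_size, len(all_cases))):
            if not run_case(case):
                return False, passed  # a defect surfaced; fix it and start over
            passed += 1
    return True, passed

# For example, to claim with 95 percent confidence that fewer than 1 in 1,000
# cases fail, roughly cases_needed(0.001) == 2995 passing samples are required.
```

Even this stripped-down bound makes the trade-off visible: a few thousand well-chosen samples can buy a level of statistical confidence that exhaustively running all 10,000 cases would have bought at far greater cost.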

While the Tesla accident occurred through the confluence of a number of low-probability conditions, the fact that such a case might occur only once in 100 million miles driven does not mean that it should not have been tested, or that a fail-safe mechanism should not have been in place. At first blush, it would seem that some form of dead-man’s brake, as used in trains, for example, could be implemented whereby the Autopilot system would not operate unless the driver had at least one hand on the steering wheel at all times.
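In outline, such an interlock is simple. The sketch below is my own and purely illustrative: the sensor interface, the two-second threshold and the polling rate are invented for the example, and it says nothing about how Tesla’s system actually behaves.

```python
import time

HANDS_OFF_LIMIT_SECONDS = 2.0  # illustrative threshold, not a real vehicle parameter

def hands_on_wheel_watchdog(read_wheel_torque, warn_driver, disengage_autopilot):
    """Toy dead-man loop: if no torque on the steering wheel is sensed for longer
    than the limit, warn the driver and then hand control back (fail safe)."""
    last_hands_on = time.monotonic()
    while True:
        if read_wheel_torque() > 0.0:        # any measurable torque counts as "hands on"
            last_hands_on = time.monotonic()
        elif time.monotonic() - last_hands_on > HANDS_OFF_LIMIT_SECONDS:
            warn_driver()
            disengage_autopilot()            # e.g., alert, slow the car, return control
            return
        time.sleep(0.1)                      # poll at roughly 10 Hz
```

The hard part, of course, is not the loop but deciding what counts as “hands on,” how long the grace period should be, and what the safe fallback is if the driver never responds; that is exactly the kind of scenario analysis that safety-critical systems require.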

This also reminds me of two situations when I was the Chief Information Officer of a medium-sized financial services company.

In the first example, a trader asked us how long it would take to build a particular system that he wanted. Our answer was: “Four to six months.” “Oh,” he said, “my [dedicated] programmer said that he could do it in two weeks.” To which I replied: “Yes, we can build the application in two weeks also. But the remaining 90 percent of the time is required not only to ensure that the application does what you specified, but also to build a system that will respond properly if something goes wrong, such as a malfunction or failure, that is, a system that will not do what it is not supposed to do.” At that time, performance and availability predominated. Today, much of the emphasis for data-processing systems is on security and integrity, and for control systems it is on safety. It’s interesting to note that this exchange likely triggered my interest in “functional security testing,” about which I have written a number of times recently.
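The distinction is easy to show with a toy example of my own (it has nothing to do with that trading system): a conventional functional test checks that a valid request succeeds, whereas a negative, or “functional security,” test checks that an invalid request is refused and leaves the system’s state unchanged.

```python
import unittest

class Account:
    """Minimal account object, used only to illustrate the negative test."""
    def __init__(self, balance):
        self.balance = balance

    def withdraw(self, amount):
        if amount <= 0 or amount > self.balance:
            raise ValueError("invalid withdrawal")
        self.balance -= amount

class WithdrawalTests(unittest.TestCase):
    def test_valid_withdrawal_succeeds(self):   # does it do what it should?
        acct = Account(100)
        acct.withdraw(40)
        self.assertEqual(acct.balance, 60)

    def test_overdraw_is_refused(self):         # does it refuse to do what it shouldn't?
        acct = Account(100)
        with self.assertRaises(ValueError):
            acct.withdraw(1000)
        self.assertEqual(acct.balance, 100)     # state unchanged after the refusal

if __name__ == "__main__":
    unittest.main()
```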

The second situation occurred when my staffing levels were being reviewed by the CIO of our parent company. “Why do you have as many testers as developers?” I was asked. He continued: “We have one QA person for every ten programmers.” My answer was that a developer might change a single line in a program, but that change could impact many systems, each one of which needs to be tested. Interestingly, I never had to pull an application once it had been moved into production, whereas he frequently had to retreat to a prior version or live through weeks and months of user complaints (which didn’t seem to bother him).

Many software companies roll out unfinished or “beta” versions of their products, as Tesla did with its Autopilot system, relying on customers to find and report errors and malfunctions. That is usually a lot cheaper than subjecting the software to exhaustive internal testing … and besides, there is the pressure to get to market before your competitors (so-called “time to value”). That may be acceptable if the bugs in the system result in mere inconvenience, minor calculation errors or a little embarrassment. It is not acceptable for security-critical and safety-critical systems, where lives are at stake, as I point out in my book “Engineering Safe and Secure Software Systems” (Artech House, 2012).
