There are so many software-intensive system failures and compromises being reported these days that one has to wonder whether the testers were “out to lunch” when they should have been concentrating on thoroughly testing the systems for which they were responsible.
In my recent book, Engineering Safe and Secure Software Systems (Artech House, 2012), I emphasize the importance of the intensive hazard risk analyses and verification and validation processes to which safety-critical systems are supposed to be subjected. It is equally critical that adequate safety and security requirements be included at the very start of a project.
The recent news reports about fuel leaks, battery fires and other issues relating to Boeing’s 787 Dreamliner have been particularly disturbing, since the aviation industry supposedly sets the gold standard when it comes to the development of safe and resilient airplanes. Note that I didn’t include information security, since we probably won’t know the strength of protection against cyber attacks until one happens and is reported. I opined on reports that malware might have caused the fatal crash of a Spanair aircraft in my September 20, 2010 BlogInfoSec column, “The Infosec Game Has Changed – 154 Dead!” However, the official final report on the causes of the crash placed most of the blame on the crew and, when it came to the computer systems, found that the following was a contributory factor (see http://en.wikipedia.org/wiki/Spanair_Flight_5022 ):
“The absence of any warning of the incorrect take-off configuration because the TOWS [Take Off Warning System] did not work. It was not possible to determine conclusively why the TOWS system did not work.”
Well, that’s not particularly conclusive, is it? Determining whether the system malfunction or failure was unintentional (poor design, a software bug, inadequate verification and validation) or deliberate (malware, insider malfeasance) is essential if the problem is to be fixed, or protected against, in the future.
The Dreamliner is indeed a game changer, but its record so far is disconcerting in terms of safety and resiliency. A principal concern is that the FAA (Federal Aviation Administration) delegated the testing of battery systems to the manufacturer (see The Wall Street Journal article “Crisis at Boeing” by Andy Pasztor and Jon Ostrower, January 18, 2013). Whatever happened to IV&V (INDEPENDENT verification and validation)? Furthermore, many physical control systems, which previously depended heavily on mechanical operation, have been replaced with “fly-by-wire” fully electronic controls, which means that adequate electrical power must be available at all times. Hence the need for powerful state-of-the-art lithium-ion batteries, which provide more energy for a given battery weight than prior technologies.
However, what struck me, as someone with an information security background, is that the Dreamliner reportedly has brand-new avionics software. This gives rise to many questions. Was an IV&V phase included in the software development lifecycle? Were security requirements and vulnerabilities considered? Does the design of the software systems change the relationships and interconnections between less critical systems (such as the on-board entertainment systems) and highly critical flight control systems? If so, has the entire system been subjected to adequate security testing?
Furthermore (and this is also relevant to current aircraft), we must ask whether the rush of transportation carriers (particularly airlines, but also train and road-vehicle operators) to provide on-board Wi-Fi is opening up a new vector for attacks. The need for more extensive independent testing of the security of safety-critical systems becomes ever more important as the use of software systems intensifies. Are the testers keeping up, or are we simply increasing risk without realizing that it is happening?