A researcher has come up with exploits that enable someone with a smartphone running particular apps to take over the flight management systems of aircraft, as described in Zeljka Zorz’s April 10, 2013 blog post “Hacking airplanes with an Android phone” … see http://www.net-security.org/secworld.php?id=14733. However, in an April 15, 2013 column, “FAA and EASA say hijacking planes using an app is not possible,” at http://www.net-security.org/secworld.php?id=14749, Zeljka Zorz and Berislav Kucan report that regulatory agencies and avionics software vendors deny that such attacks are possible, asserting that the real-world environment is much more secure than the lab environment used by Hugo Teso, a researcher at the German security consultancy n.runs.
Whether or not the hacks effected in the lab environment are currently feasible in the real world, they are probably technically possible under the right set of conditions. They also become increasingly likely as systems are linked together to form systems of systems and cyber-physical systems, where the latter term refers to systems that combine Web-facing applications with embedded control systems.
In my book “Engineering Safe and Secure Software Systems” (Artech House, 2012), I describe how security and safety software engineers usually have different perspectives and tend to communicate within their groups rather than with other groups. This lack of communication across the safety and security disciplines means that safety-critical systems, such as avionics systems and autonomous (driverless) vehicle systems, may well not be secure enough when they are connected to Web applications, as is increasingly happening. This is because these two categories of system engineers have very different approaches to how they design, develop and test their software systems. With security-critical systems, the main focus is on protecting information assets, particularly intellectual property and nonpublic personal information, whereas for safety-critical systems, the goal is to prevent their malfunctioning or failure from harming humans and/or the environment.
It is just such hopping from one system to another that creates the possibility of major damage being incurred or inflicted. It is perhaps somewhat analogous to the recent appearance of the newest strain of bird flu, where the virus is transmitted from chickens, which are asymptomatic, to humans, for whom the disease is often fatal. Thus, a hacker may access a control system, such as a flight management system, via a front-end Web-facing application. The Web application may not be damaged in any way, but the repercussions for the safety-critical system could be horrendous.
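The structural weakness described above can be sketched in a few lines of code. This is a purely hypothetical illustration, not any real avionics or control-system API: all names (handle_web_request, send_to_controller, the command set) are invented. The point is that the Web tier is the bridge, so validation and authorization must happen there before anything reaches the controller.

```python
# Hypothetical sketch of the "hopping" risk: a Web-facing front end
# that sits in front of a safety-critical controller. All identifiers
# are invented for illustration; no real flight management API exists here.

ALLOWED_COMMANDS = {"STATUS", "SET_HEADING"}
ALLOWED_HEADING_RANGE = range(0, 360)  # degrees

def send_to_controller(command, argument):
    # Stand-in for the embedded control system. A naive front end would
    # call this directly, letting arbitrary Web input drive the controller.
    return f"controller executed {command}({argument})"

def handle_web_request(command, argument, authenticated=False):
    """A hardened boundary: the Web tier authenticates and validates
    every request before anything is forwarded to the control system."""
    if not authenticated:
        return "rejected: unauthenticated"
    if command not in ALLOWED_COMMANDS:
        return "rejected: unknown command"
    if command == "SET_HEADING" and argument not in ALLOWED_HEADING_RANGE:
        return "rejected: out-of-range argument"
    return send_to_controller(command, argument)
```

In the vulnerable design, the Web layer calls send_to_controller directly; compromising the Web application then means commanding the controller. The Web application itself is unharmed either way, which is exactly the asymmetry the bird-flu analogy captures.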
So what is the answer here? In the best-case scenario, security-critical and safety-critical systems are neither connected nor connectable. However, even if they do not appear to be interconnected, a hacker might be able to bridge from one to the other, as Teso demonstrated.
It is also interesting to note that Teso claims that legacy systems are much more vulnerable to such attacks than are newer systems, explaining that newer systems receive more patches than do older ones. This contrasts with the likelihood that legacy safety-critical systems are safer than newer systems because they were developed in programming languages, such as Ada, that intrinsically offer higher integrity and resiliency than currently used programming languages. However, such legacy systems are unlikely to have much, if any, security built in, since their designers and developers probably did not think they needed to consider the possibility of external parties gaining access … much as, in the 1960s, 1970s and 1980s, developers did not anticipate the need for larger date fields, which resulted in the Y2K problem.
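The Y2K analogy can be made concrete with a toy example. This is a hypothetical sketch (the function names are invented) of the kind of two-digit-year arithmetic that legacy code performed, alongside the remediated four-digit version:

```python
# Hypothetical illustration of the Y2K truncated-date-field problem:
# legacy records stored only the last two digits of the year.

def years_elapsed_legacy(start_yy, end_yy):
    # Two-digit arithmetic: crossing the century boundary goes wrong,
    # because 2000 is stored as 0 and 1999 as 99.
    return end_yy - start_yy

def years_elapsed_fixed(start_year, end_year):
    # Remediated version using full four-digit years.
    return end_year - start_year

print(years_elapsed_legacy(99, 0))    # -99, when the true answer is 1
print(years_elapsed_fixed(1999, 2000))  # 1
```

The fix was mechanical, which is why Y2K remediation could be done by programmers with little knowledge of the surrounding application; securing a control system against an adversary offers no comparably mechanical patch.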
So now we have major security issues relating to safety-critical systems because their creators did not anticipate the universal access that the Internet provides. The risk-mitigation effort for building security into safety-critical systems is potentially huge … orders of magnitude greater than the Y2K remediation was, since the latter did not require that the programmers making the corrections know much about the applications themselves. To secure the systems that control aircraft, oil refineries, nuclear power plants, and the like, however, systems engineers must understand what the control systems do and where the vulnerabilities might lie. Thus we can expect a huge effort and enormous costs to be incurred to secure these systems properly. It is another case of pay now or pay later … and the tab down the road will be that much greater.