C. Warren Axelrod

Cyberwarfare … Back(up) to Basics

It seems that some folks are talking about reverting to former manual or analog methods should current cyber systems be compromised through cyberattacks by hostile nation states, terrorists or criminal groups. But, as we quickly found out when we were creating Y2K contingency plans, it isn’t that simple. Often the infrastructure that supported earlier labor-intensive processes no longer exists. For example, for Y2K we considered having messengers hand-deliver magnetic tapes (as they used to do) if telecommunications networks were to go down, only to realize that we and others no longer had a full complement of tape drives (they had been decommissioned years before) and nobody had kept blank tapes in inventory.

It is interesting to note that sometimes those still using obsolete systems are better off than those running leading-edge systems if and when they come under attack. An example is the December 2015 cyberattack that took down part of Ukraine’s power grid. Operators were able to restore electrical power relatively quickly because the grid was supported by decades-old technology; a modern smart grid would likely have taken much longer to restore.

How could we allow such circumstances to develop? It’s really quite simple. Much of the justification for replacing older methods with new ones is based on eliminating costs by getting rid of redundant people and antiquated technologies. But the business analyses that favor such proposals rarely consider what to do if the replacements become inoperable.

There have indeed been some wake-up calls of late as we’ve experienced major hacks exposing sensitive data about hundreds of millions of individuals, with ransomware running rampant, and with fake-news websites and social networks affecting how we view the world.

So, what should we do? I believe that suitable levels of physical and logical backup and recovery should be mandated by law for critical services. In financial services, U.S. bank regulators do require that critical core utilities have full operational backup and that backup facilities are “out of region,” which usually means at least 300 miles away from the primary site.
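The “out of region” test above is essentially a distance check between the primary and backup sites. As a hedged illustration only (the regulators’ actual criteria involve more than raw mileage, and the coordinates below are my own example, not from the article), the 300-mile separation could be verified with a simple great-circle calculation:

```python
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_MILES = 3958.8  # mean Earth radius in statute miles

def great_circle_miles(lat1, lon1, lat2, lon2):
    """Haversine great-circle distance between two points, in miles."""
    p1, p2 = radians(lat1), radians(lat2)
    dphi = radians(lat2 - lat1)
    dlmb = radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(p1) * cos(p2) * sin(dlmb / 2) ** 2
    return 2 * EARTH_RADIUS_MILES * asin(sqrt(a))

def is_out_of_region(primary, backup, min_miles=300):
    """Check whether a backup site meets a minimum-separation rule."""
    return great_circle_miles(*primary, *backup) >= min_miles

# Illustrative (hypothetical) site locations: New York City and Chicago,
# roughly 711 miles apart, which would satisfy a 300-mile rule.
nyc = (40.7128, -74.0060)
chi = (41.8781, -87.6298)
print(is_out_of_region(nyc, chi))
```

Of course, geographic distance is only a proxy; the real goal is that the two sites not share the same regional power, telecom, or disaster footprint.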

In my experience, the only way to ensure effective backups is to create separate budgets dedicated to backup and recovery. When a business unit or department proposes a new or upgraded system, it tends to propose only the primary system, since including backup would blow its budget. By making fully tested backup and recovery plans compulsory, and by staffing and funding them from a general pool, we are much more likely to see results. Organizations must decide how much they are prepared to spend on contingency systems based on realistic risk analyses. There are many levels of backup and recovery, as I discuss in my CrossTalk article “Investing in Software Resiliency” (September-October 2009).
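A “realistic risk analysis” of this kind boils down to weighing each backup tier’s annual cost against the outage losses it avoids. The sketch below is purely illustrative; the outage probability, daily loss figure, tier names, costs, and recovery times are my own assumed numbers, not figures from the article or from any regulator:

```python
# Illustrative sketch: pick a backup tier by minimizing expected annual cost.
# All figures here are assumptions for demonstration, not real data.

OUTAGE_PROB_PER_YEAR = 0.05      # assumed chance of a disabling event per year
LOSS_PER_DOWN_DAY = 2_000_000    # assumed business loss per day of downtime

# (tier name, annual cost of maintaining the tier, expected days to recover)
tiers = [
    ("no backup",         0, 30.0),
    ("cold site",   200_000,  7.0),
    ("warm site",   600_000,  2.0),
    ("hot site",  1_500_000,  0.1),
]

def expected_annual_cost(annual_cost, recovery_days):
    """Tier cost plus the annualized expected loss from outages."""
    return annual_cost + OUTAGE_PROB_PER_YEAR * recovery_days * LOSS_PER_DOWN_DAY

best = min(tiers, key=lambda t: expected_annual_cost(t[1], t[2]))
for name, cost, days in tiers:
    print(f"{name:>9}: ${expected_annual_cost(cost, days):>12,.0f}")
print("cheapest overall:", best[0])
```

With these assumed numbers the middle tier wins: spending nothing leaves a large expected loss, while the most elaborate tier costs more than the risk it removes. The point of the exercise is exactly the one made above: the right backup spend is a property of the risk, not of any one business unit’s budget.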

At the firm where I was working through the millennial rollover, we created a separate budget and a dedicated team for Y2K remediation, since it hadn’t been anticipated by business units. That arrangement worked very well. I don’t see why this approach could not be generalized and adopted across organizations for all types of contingencies.

The issue of misinformation is a much harder nut to crack. It’s not as if you can establish a backup of verified, accurate news and roll it out when fake information is spotted. The answer lies more with the identification and removal of falsehoods before they hit social and media networks. Congress recently heard testimony from lawyers for Facebook, Google and Twitter. These companies are among those that Farhad Manjoo of The New York Times calls the “Frightful Five”: Apple, Amazon, Google, Facebook, and Microsoft, along with “second-tier” Twitter. However, in his November 2, 2017 article “The Upside of Bowing to Big Tech: Being Ruled by Five Giants Has Benefits,” Manjoo opines that we are actually better off dealing with this limited number of companies than with a large number of smaller entities. In support of his argument he asks “… isn’t it better that we can blame, and demand fixes from, a handful of American executives when things do go haywire?” Personally, I don’t think that it is better, since this oligopoly of enormously wealthy enterprises can effectively control the dialogue, lobby the politicians, and do the minimum needed to satisfy complaints as long as it does not interfere with profits.

This latter view is supported by Noam Cohen in an October 15, 2017 article in the Sunday Review in The New York Times with the title: “Silicon Valley Is Not Your Friend,” which was in response to Facebook’s Mark Zuckerberg’s post asking forgiveness for “the ways my work was used to divide people rather than bring us together.” Cohen has just published the book “The Know-It-Alls: The Rise of Silicon Valley as a Political Powerhouse and Social Wrecking Ball.” The title says it all. He basically asserts that “… the tech companies [never had] our best interests at heart.”

In an article about the above-mentioned Senate hearing in The New York Times of November 2, 2017 by Cecilia Kang, Nicholas Fandos, and Mike Isaac, titled “Lawmakers Scold Tech Companies Over Russia: ‘I Don’t Think You Get It,’” Senator Dianne Feinstein was reported to have made that comment to the tech companies’ representatives at the hearing. But they do get it, very well, and much better than our politicians do. These tech-savvy companies, with their high-priced lawyers and lobbyists and devoted customers, are able to run rings around a tech-ignorant Congress by using obfuscation and weak promises to do better next time. The underlying reality is that misinformation can favor almost as many people as not (more in some cases) and contributes to the profits of the companies that carry such information.

It is clearly not in the interests of major companies to incur the enormous costs and liabilities that would stem from taking responsibility for monitoring bad actors, screening their submissions, and deleting them. It’s just not going to happen unless a greater power is brought to bear, namely, that of accepting that we have been engaged in a cyberwar for some time (which I asserted a decade ago) and that our national security is in very grave danger from past, current and future cyberattacks. At that point, we might invoke the necessary authority to defend ourselves against these attacks, in part by compelling big tech, including those firms blowing off our lawmakers and regulators today, to assume their fitting wartime roles in protecting our country from its enemies. At that point, government requests that these firms engage in defensive and offensive activities will no longer be polite; they will be mandatory.

We might finally be seeing some serious activity on the part of governments to define acts of cyberwar and to suggest suitable responses. Stu Sjouwerman’s article “EU to Declare Cyber-Attacks ‘Acts of War’: USA likely to follow” (https://blog.knowbe4.com/eu-to-declare-cyber-attacks-act-of-war.-usa-likely-to-follow) describes some initiatives in this space, and Patrick Howell O’Neill reports that the “‘Hack back’ bill gains 7 new co-sponsors” (https://www.cyberscoop.com/active-cyber-defense-certainty-act-tom-graves-new-sponsors/), although the premise of counterattacking attackers is dangerous in a world where accurate attribution of cyberattacks is often questionable.

We shouldn’t get our hopes up too much. There have been a number of previous attempts to develop useful definitions of cyberwar and realistic responses to cyberattacks with little to show for them. It will take more than superficial analyses and weak recommendations to really address this problem. We need strong generally-supported international laws and regulations, the funding to create the requisite defenses, deterrents and backup systems, and the will to enforce them effectively. Without these, it’s not going to happen.
