C. Warren Axelrod

Will AI Short Circuit Cybersecurity?

The general tone of Chris Baraniuk’s February 23, 2021 article on the BBC website, “How Google’s hot air balloon surprised its creators” (https://www.bbc.com/future/article/20210222-how-googles-hot-air-balloon-surprised-its-creators), is one of wonderment, although he does throw in some caveats about the dangers of AI operating in unexpected ways.

The article describes how engineers discovered a surprising behavior in the now-defunct Google Project Loon, which was intended to make the Internet universally available using balloons rather than satellites. They observed that, on a trip from Puerto Rico to Peru, a balloon began tacking, the method sailboats use to change direction. That in itself is not particularly startling, except that no one had ever taught the AI how to do it; it did it all by itself! While this was a “gee whiz” moment, it portends what might be one of the greatest dangers of AI (artificial intelligence) systems, namely, acting autonomously in unanticipated and possibly dangerous ways.

Coincidentally, the National Security Commission on Artificial Intelligence, chaired by none other than Eric Schmidt (ex-chairman of Google), issued its Final Report on March 1, 2021. It’s a 725-page tome, which is why I thought that the speed-reading robot in the movie “Short Circuit” should be harnessed to read it. Perhaps the commission members are of the old school that judged the quality and authority of a document by its physical weight (or, in this case, its electronic media footprint). The report, in either PDF or interactive form, is available via the NSCAI website. The commission members come mainly from industry (Google, Microsoft, Oracle, Amazon), academia, think tanks, and politics. It is, to say the least, a very extensive report that raises important issues, but one can’t help thinking that it might be self-serving in some respects, especially for the enormous tech companies that have already invested billions in AI and would like to control the degree of government intervention. Singularly (pun intended?) lacking is the participation of privacy rights groups, which have already called out the dearth of transparency in the research for and preparation of the report.

That being said, it is well worth looking at the recommendations in the AI report and seeing whether they also apply to cybersecurity risk generally, as well as to the cybersecurity, privacy, secrecy, and safety risks of AI systems themselves. Here are the report’s recommendations. The first group falls under the title “Defending America in the AI Era” and is as follows:

  • Defend against emerging AI-enabled threats to America’s free and open society
  • Prepare for future warfare
  • Manage risks associated with AI-enabled and autonomous weapons
  • Transform national intelligence
  • Scale up digital talent in government
  • Establish justified confidence in AI systems
  • Present a democratic model of AI use for national security

The second group of recommendations, for “Winning the Technology Competition,” is as follows:

  • Organize a White House-led strategy for technology competition
  • Win the global talent competition
  • Accelerate AI innovation at home
  • Implement comprehensive intellectual property (IP) policies and regimes
  • Build a resilient domestic base for designing and fabricating microelectronics
  • Protect America’s technology advantages
  • Build a favorable international technology order
  • Win the associated technologies competition

While the report is about AI, the recommendations apply equally well, if not more so, to cyberspace and cybersecurity risk. Indeed, if one were to draw a Venn diagram of AI and cybersecurity, there would be a considerable overlap.

Perhaps a revised version of the report could be published in which “AI” is replaced throughout with “cybersecurity.” It would be a great start. Of course, the investment dollars would have to be orders of magnitude greater for cybersecurity because we are at a much later phase in the cycle.

And then there is the cybersecurity of AI to consider as well as the use of AI in cybersecurity. To be fair, the word “cybersecurity” shows up 85 times in the report, which makes it an essential read for InfoSec folks.
