Cyber War Meets Nuclear War

The world is heading into a less rules-based order, in which bad actors will have more influence. Cyber space helps those bad actors to undermine the rules: they can achieve in cyber space what would be too risky in any other space. Cyber is the least rules-based space available, so it naturally favours the vicious over the virtuous, and is riskiest for the most exposed societies – the most developed and open ones.

If cyber threats to your privacy, property, communications, and finances are not worrying enough, now consider how cyber threats could impact nuclear weapons.

Could a remote attacker cause a nuclear weapon to detonate, or disable our nuclear deterrent, or redirect our nuclear missiles?

These questions tend to get answered in fiction, which is problematic. (I always advise students to discount fictional depictions, but I doubt most students sacrifice their popular culture. Theresa May doesn’t: this week she answered a question about what she does to relax by saying she watches the American TV series “NCIS,” which helps to explain her feeble engagement with security despite six years as Home Secretary and two years as premier.)

Fiction about cyber risks will leave you somewhere between hysteria and complacency, but the dominant tendency is complacency. Make no mistake: at the interstate level, cyber war has been routine for decades, sometimes overlapping with conventional kinetic wars. During its wars against Iraq in 1991 and 2003, the US used malware to disrupt Iraqi communications and energy supplies. In 2008, Russia disrupted Georgian communications during the war over South Ossetia; in 2014, Russia did the same in Ukraine. In 2012, Saudi Arabia accused Iran of cyber sabotage of its state oil company, which was participating in an embargo. In 2013, South Korea blamed North Korea for cyber sabotage of South Korean banks, following international censure of North Korea’s latest nuclear weapons test. In April 2018, Britain and the US accused Russia of cyber attacks on government networks in retaliation for their airstrikes against Syrian chemical weapons sites.

The cyber risks to nuclear weapons are more secretive, but Western governments still appear to be complacent. In the foreword to the book under review, Des Browne (the Labour government’s Defence Secretary from 2006 to 2008) complains that his government ignored the risks, and that subsequent governments have paid little attention.

Surprisingly few academics have investigated how cyber risks compound nuclear risks, so I welcomed Andrew Futter’s Hacking the Bomb: Cyber Threats and Nuclear Weapons.

I was relieved to find that the book has a clearly articulated structure, unlike the chaotic book I reviewed last week, released by the same publisher in the same month. The book is best at helping us to disaggregate the possible ways in which nuclear weapons are exposed in cyber space.

The least risky mission for the attacker is simply to steal information about the other side’s nuclear weapons (chapter 3); the attacker might want to control the use of our nukes (chapter 4) or deter use (chapter 5); attacks will (even unintentionally) further destabilize norms against nuclear use and proliferation (chapter 6).

Perversely, nuclear weapons might be more exposed in cyber space than other weapons. Nukes have been around for more than 70 years. For about 60 of those years, computers have been involved, with alarming resistance to change, given that the systems are practically impossible to test under warlike conditions.

However, the cases of nuclear espionage that we know about from court indictments involve physical carriage of secrets by insiders (of whom the most infamous is A.Q. Khan of Pakistan), rather than cyber discovery by outsiders. Chapter 3 dates the first cyber theft of nuclear secrets to 1968, but neither explains nor makes any theoretical use of this interesting observation. Instead, the same paragraph (page 58) goes on to quote a BBC correspondent (Gordon Corera) before getting theoretical, but, really, who cares what Gordon Corera says? The quote isn’t that interesting anyway.

‘Hacking the Bomb: Cyber Threats and Nuclear Weapons’ by Andrew Futter. 270 pp. Georgetown University Press

Poor sourcing, superficiality, and haste are persistent problems. Chapter 3 offers no empirical basis for assessing the risks. It has a section entitled “Notable cases from the 1980s to today,” whose cases pass by without application. Consequently, no risk is assessed and no theory is offered, except the observation that downloading documents on to a USB drive is easier than carrying the equivalent information on paper, and that thefts “may” be directed at plans, procedures, decision-makers, and designs.

Chapter 4 is no better: most of it is a series of anecdotes about how states have disabled each other’s (non-nuclear) systems through cyberspace, but no risk is assessed, and no theory is offered, other than the conclusion “that it is becoming increasingly possible to attack systems, and perhaps nuclear systems, directly through cyber means” (page 84).

Chapter 5 is just platitudinous, asking questions such as “Is there a role for nuclear weapons in deterring cyber threats?” US policy already answers “yes,” with complex legal, theoretical, and practical arguments, but the book under review gives them only a paragraph, reviews no other policy, and concludes one page later (page 107) that “it seems highly likely that the relationship between cyber operations, cyberwarfare, and nuclear weapons will become more acute and more important rather than less.”

The book never explains what “highly likely” means or how this assessment was derived. The book just doesn’t engage with risk or security adequately. This is surprising given its focus on how cyber risks compound nuclear risks.

A dead giveaway is on page 44, where a paragraph begins: “Although nuclear weapons clearly represent the biggest risks.” The mistake is to reduce risk to the capacity for destruction. In fact, risk is a product of both outcome and probability. Nuclear weapons have the capacity for tremendous destruction, but nuclear explosions are incredibly infrequent. By contrast, cyber attacks are so frequent as to be arguably more harmful in aggregate. In 2011, the British government estimated the global annual cost of cyber crime at one trillion US dollars.
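
To put that point in symbols (my own sketch, not the book’s notation), risk is commonly formalised as expected loss:

$$
\text{Risk} \;=\; \sum_i p_i \, c_i
$$

where $p_i$ is the probability of scenario $i$ and $c_i$ is its consequence. On that formalisation, a vanishingly rare catastrophe and a very frequent nuisance can contribute comparably to the total, which is why probability belongs in the comparison alongside destructive capacity.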

That’s before we consider unprecedented outcomes (imagine that a remote attacker causes toxic changes to drinking water, redirects air traffic control, or shuts down energy supplies in mid-winter). Last week, the US Director of National Intelligence said that Russia, China, Iran, and North Korea (in that order) are launching daily cyber attacks on America, with a growing chance of a “crippling cyber attack on our critical infrastructure.”

That’s before we consider the costs of controlling those risks: for instance, in 2010, Britain’s Strategic Defence & Security Review announced a National Cyber Security Programme costing £650 million over four years.

The book is weaker on cyber than on nukes. The introduction and first chapter complain frequently that the concept of “cyber” is confused, but the book doesn’t set its own standard. I thought I had missed the definition, but no definition is indexed. The “concept” of cyber is indexed to the pages I had just read, so I re-read them, but still found no preferred definition. Instead, the section concludes (page 23) that “[t]he conception favoured here is the broad model and an amalgam of the definitions noted above.” That’s not helpful.

Conceptual confusion leads to theoretical and practical confusions. The book correctly introduces “computer networks,” but appears to want “cyber” to include everything that interacts with networks, including humans. This conflation prevents the book from disaggregating the risks adequately.

Consider the separation between “computer” and “network”: if you disabled all the hardware by which any of your devices could network with any other devices (and prevented anybody else from restoring such connectivity), you would never face another cyber risk. You would still face computer risks (such as direct access through a physical interface), but not cyber risks. The book under review ignores this distinction, but it matters: some governments are operating computers without networks, and some have even replaced computers with electronic typewriters for the most secret work, so as to be sure that the typing cannot be exposed to cyber space.

A human who chooses never to interact with cyber space is not exposed to cyber risks. However, a human without cyber risks could still leak the information that you’ve taken offline. Imagine that the typists reveal the nuclear launch codes typed on those electronic typewriters, or that the couriers deliver the codes where they shouldn’t. These are not cyber risks, but the outcome is the same whether the information is intercepted on paper or online.

The book’s root problem is one of effort rather than intelligence. The book is incredibly short (the conclusion runs out at page 159, including citations).

Readers without prior expertise will be dizzied by rushed and inconclusive reviews.

For instance, the taxonomy of cyber activities (page 25) has only four items, each of which is in turn a list of undefined things whose placement makes little sense. North Korean “vandalism” appears in the first item, but “sabotage” appears in the third item, and “existential attacks” appear in the fourth item. These four items don’t come from any standard: they are cited to newspaper articles. You don’t need an obscure or complicated standard to be more helpful. For example, the US Department of Justice categorizes five malicious cyber activities (hacktivism; crime; espionage; terrorism; war), but that categorization doesn’t appear in the book.

The book’s taxonomy of malware (page 30) is a similarly inadequate paragraph, which cites no further reading at all.

The chapters on nukes are better informed but also explode with unhelpful taxonomies. The second chapter answers its titular question (“How and why might nuclear systems be vulnerable?”) by differentiating “negative control” from “positive control,” terms which are useful in the practice of controlled experiments but useless in the theory of nuclear vulnerability. In chapter 2, “positive control” is interpreted as a lot of things that are essentially either protective or sustaining, whereas “negative control” is also protective, but against certain things or in certain ways that “positive control” is not, for no discernible reason. In whole or in part, this section is too confusing to expose to students or policy-makers. Worse, the table (page 39) that is supposed to summarize the text doesn’t match the text, and its rows have no key.

Lexicons and taxonomies are hard work – I know because I have produced plenty. They must be thorough or you end up contributing to the confusion that you’re trying to rationalize. I expected more thoroughness given that Andrew Futter boasts of three years’ funding to develop his ideas.