The Growing Threat of Virtual Terrorism

Cyberspace has been used to recruit, fund, coordinate, and execute terrorism for well over a decade, but the threat posed by Virtual Terrorism (VT) has never been greater, nor has its potential to launch attacks with a few keystrokes. Prior to the birth of the Islamic State (IS), Al Qaeda had already called upon its adherents to wage ‘electronic jihad’ through ‘covert mujahideen’ launching cyber-attacks on Western governments and infrastructure. Five years ago, the gap between terrorists’ intentions and capabilities was already closing. Today, there is relatively little preventing VT from becoming an even more serious threat.

To emphasize the seriousness of the problem, the U.S. Department of Homeland Security (DHS) has stated that “cyberspace is particularly difficult to secure due to a number of factors: the ability of malicious actors to operate from anywhere in the world, the linkages between cyberspace and physical systems, and the difficulty of reducing vulnerabilities and consequences in complex cyber networks. Of growing concern is the cyber threat to critical infrastructure, which is increasingly subject to sophisticated cyber intrusions that pose new risks. As information technology becomes increasingly integrated with physical infrastructure operations, there is increased risk for wide scale or high-consequence events that could cause harm or disrupt services upon which our economy depends.”

The threat of attack need not come from terrorists; it often emanates from governments. Ukraine’s power grid was first attacked in 2015 (presumably by Russia), in one of the first known incidents of physical infrastructure being compromised and severely impacted by VT. The attack on the control systems of Ukraine’s power grid could in theory be repeated against infrastructure in almost any sector, including water, transportation, and defense systems. While espionage and theft are the most common objectives of cyber-intrusions, Ukraine’s example demonstrates that state (and non-state) actors can penetrate even the most sensitive and secure command and control structures simply to create havoc and disrupt a nation’s ability to operate.

Not only did the perpetrators in this case succeed in disrupting the flow of electricity to some 200,000 people in western Ukraine for several hours, they simultaneously targeted the automatic control systems of rail, mining, and airport networks. According to the DHS, the attack was deliberately timed for the period of the day when customers contact the help desks of Ukrainian electricity companies most frequently, so that support staff were preoccupied and attention was diverted from the initial network intrusion.

In doing so, the hackers were able to test and monitor the companies’ and government’s reaction, which in turn may presage a future attack, designed to cause even greater havoc and disruption.

The malware used against the power companies was subsequently identified as BlackEnergy 3, believed to be of Russian origin and designed specifically to attack infrastructure systems. According to the DHS, a unique feature of BlackEnergy 3 is its KillDisk function, which enables the attacker to rewrite files on infected systems with random data while blocking users from rebooting their machines, rendering them inoperable. The malware also searches victims’ computers for software primarily used in electric control systems, indicating a likely focus on critical infrastructure. The Ukraine example provides a glimpse into a future in which, once such malware is perfected, attacks on infrastructure could become common.

The best-known example of a cyber-attack on physical infrastructure was the Stuxnet malware, deployed from 2008 to stifle Iran’s ability to produce nuclear weapons by attacking computers at its nuclear facilities and interrupting the country’s ability to spin centrifuges successfully. Stuxnet was spectacularly successful, and is believed to have been a contributing incentive for Iran to complete its nuclear agreement with the West.

It is not widely known, but Iran appears to have attempted to return the favor via a VT attack on the Bowman Avenue Dam near Rye, New York in 2013. By gaining access to the dam’s control systems, Iran was able to acquire operational information (such as water temperature and flow rates), and would have been able to gain control of the dam’s gates if they had not been coincidentally disconnected at that time for maintenance.

There are dozens of other examples in which control systems have been attacked. These include an attack on a northwest U.S. rail company in 2011 in which signals were disrupted (creating delays), a 2014 attack on a German steel mill that enabled access to the firm’s technology and operating environment, and a 2001 attack on an Australian sewage and water system, resulting in the release of waste water and sewage into local parks and water tributaries. The problem is global.

Some 16 infrastructure sectors — including financial services, telecommunications, and food production and distribution — have been deemed ‘critical’ by the U.S. government; they are obvious targets for VT and are now regulated under the U.S. National Cybersecurity Framework (NCSF). President Obama also issued an Executive Order entitled “Improving Critical Infrastructure Cybersecurity” (ICIC), intended to enhance the security and resilience of America’s critical infrastructure by encouraging efficiency, innovation and economic prosperity. The ICIC is an attempt to set industry standards and best practices to help organizations manage cyber-security risks. Yet in spite of companies’ knowledge of this threat, only 17% of 600 IT security executives surveyed from 13 countries in 2014 (including the U.S., UK and Brazil) said their companies had achieved what they would regard as a ‘mature’ level of cyber security (i.e. actually had IT security programs in place to thwart an attack).

Both the NCSF and ICIC are intended as guidelines rather than mandates for corporate behavior – and herein lies a problem familiar from government attempts to address man-made risk more generally. As with the COP21 (Paris Climate Change Conference) guidelines, there is no law requiring compliance, nor any penalty for failure to comply (although COP21 was hailed as a ‘breakthrough’ agreement, most people do not realize that its commitments are voluntary). The same may be said about the fight against terrorism: how many governments and companies have implemented strict security protocols to protect themselves against terrorism without first having experienced an attack?

Part of the issue here is that both governments and companies are reluctant to take measures that will slow their economies or interfere with their ability to operate. Implementing sufficient IT counter-measures takes time, sucks up resources, and cuts into profit. Not factored into many organizations’ thinking on this subject is how to put a price on VT, or on the loss of reputation that inevitably follows when an attack becomes public (and which can of course be significant). If corporate executives were thinking more along these lines, perhaps more companies would take the risk seriously.

As the online world meets the physical world, the risk of VT can only rise, and this applies not only to governments and companies but to individuals, as smart home alarm systems, televisions, appliances and other electronics become more popular. All of these items can be hacked, meaning that our homes can be accessed, and our televisions and computers monitored, remotely — not just by governments, but by hackers. Few consumers consider this ‘darker’ aspect of living the ‘smart’ life.

One advantage individuals have is that they tend to upgrade their computers and other electronics every three to five years — more frequently than companies upgrade theirs (every couple of decades, in the case of control systems). On that basis it is easy to see why control systems are targets of choice for virtual terrorists: their software is typically outdated, often light years behind current technology. Indeed, some critical infrastructure systems are still running software that is more than a decade old.

The enormity of the challenge in determining the nature of the threat and monitoring vulnerability at the government level can be summarized by a 2015 report from the U.S. Government Accountability Office, which noted that most federal agencies overseeing the security of America’s critical infrastructure still lacked formal methods for determining whether those essential networks are protected from hackers. Of the 15 critical infrastructure industries examined — including banking and finance, energy, and telecommunications — 12 were overseen by agencies that had no proper cybersecurity metrics. These sector-specific agencies had not developed metrics to measure and report on the effectiveness of their cyber risk mitigation activities, or on their sectors’ basic cybersecurity posture. If that is the case in the U.S., 15 years and hundreds of billions of dollars after 9/11, the advantage would appear to reside with those intent on committing VT.

So what can be done, apart from raising awareness of the problem, devoting more resources to it, and making counter-VT measures compulsory instead of voluntary? A more holistic approach is a start: thinking proactively (instead of reactively) about how to address the problem, implementing routine cyber-security audits, and creating teams inside companies dedicated solely to the problem. Budgets therefore need to be adjusted to devote more resources to the problem across the board. Security and privacy risk mapping, benchmarking, and scenario planning should become standard components of a cyber-risk management protocol.

But what is also needed is a change in how we think about VT and other forms of man-made risk. Less known and less understood types of risk tend to get on our radar in a meaningful way only after the fact. Not only do we need to become more proactive on this subject, we should presume that VT will become as common in our collective psyche as other forms of terrorism have become after many decades of enduring and addressing them. Governments, corporations and individuals are only starting to understand what VT is and what its potential impact can be. If we fail to turn the tide against VT quickly, it will soon be affecting us all in ways we never imagined.