Robots and Virtual Terrorism

Robots can store sensitive information, including encryption keys, social media and e-mail account credentials, and vendor service credentials, and they exchange that information with mobile applications, Internet services, and computer software. Encryption is therefore mandatory to avoid data compromise, yet many robot manufacturers ship the same default passwords across most or all of their products, leaving consumers vulnerable to hacking if they fail to change them. Robots also receive remote software updates, so proper cryptographic verification is necessary to ensure that these updates come from a trusted source and have not been modified to include malicious software.
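
To make the update problem concrete, here is a minimal sketch of what verifying a signed firmware update can look like, assuming the vendor signs each image with an Ed25519 key whose public half ships with the robot. The key bytes and the install routine below are illustrative placeholders, not any particular manufacturer's implementation.

```python
# Minimal sketch of verifying a signed firmware update before installing it.
# Assumptions: the vendor signs each update image with an Ed25519 key, and the
# robot ships with the matching public key. The key bytes and install routine
# here are placeholders, not any real manufacturer's implementation.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

# Placeholder 32-byte public key, provisioned at manufacture.
VENDOR_PUBKEY = Ed25519PublicKey.from_public_bytes(bytes.fromhex(
    "d75a980182b10ab7d54bfed3c964073a0ee172f3daa62325af021a68f707511a"
))

def install_update(image: bytes) -> None:
    # Stand-in for the real flashing routine.
    with open("/tmp/firmware.bin", "wb") as f:
        f.write(image)

def verify_and_install(image: bytes, signature: bytes) -> bool:
    """Install the update only if the vendor's signature checks out."""
    try:
        VENDOR_PUBKEY.verify(signature, image)  # raises on a tampered image
    except InvalidSignature:
        return False  # reject modified or forged updates outright
    install_update(image)
    return True
```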

A 2017 study found nearly 50 cybersecurity vulnerabilities in robot ecosystem components, many of them common to home, business, and industrial robots, as well as to the control software used by the other robots tested. Although the number of robots tested was not a large sample, the fact that dozens of vulnerabilities were uncovered across such a broad spectrum of robots is concerning. Most of the robots evaluated used insecure forms of communication, with mobile and software applications connecting over the Internet, Bluetooth, and Wi-Fi without properly securing the channel. Some used weak encryption; most of the others sent data to vendor services or the cloud without any protection at all.
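
As a sketch of what "properly securing the communication channel" means in practice, the snippet below sends telemetry to a vendor service over TLS with certificate verification left on, using only Python's standard library. The host name and port are illustrative, not taken from any tested product.

```python
# Minimal sketch: robot telemetry sent over an authenticated, encrypted TLS
# channel rather than in the clear. The host and port are illustrative.
import json
import socket
import ssl

def send_telemetry(payload: dict, host: str = "telemetry.vendor.example",
                   port: int = 443) -> None:
    # create_default_context() enables certificate and host-name verification;
    # disabling either re-creates the unprotected channels the study flagged.
    context = ssl.create_default_context()
    with socket.create_connection((host, port)) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            tls.sendall(json.dumps(payload).encode("utf-8"))
```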

Most of the robots that have been tested for the strength of their security protocols did not require sufficient authorization to protect their functionality, including critical functions such as installing applications on the robots themselves or updating their operating system software. This enables cyberattackers to install software without permission and gain full control over the machines. Most of the robots tested also either used no encryption at all or used it improperly, exposing sensitive data to potential attackers. Furthermore, many robot manufacturers have failed to ensure either that users are instructed to change default passwords or that updates are routinely provided when a product's security protocols change.
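
For illustration, here is one minimal way a robot could gate a critical function such as application installation behind an authorization check. The per-device shared secret and HMAC scheme are assumptions made for the sketch, not a description of any tested product.

```python
# Minimal sketch: requiring proof of a per-device secret before a critical
# action such as installing an application. The secret and request format
# are illustrative assumptions.
import hashlib
import hmac

ROBOT_SECRET = b"per-device secret set at first boot"  # not a fleet-wide default

def authorized(request_body: bytes, client_tag: str) -> bool:
    """Accept only callers who can compute an HMAC over the request."""
    expected = hmac.new(ROBOT_SECRET, request_body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, client_tag)  # constant-time compare

def handle_install_app(request_body: bytes, client_tag: str) -> str:
    if not authorized(request_body, client_tag):
        return "403 Forbidden"  # unauthenticated callers cannot install software
    # ...unpack and install the application here...
    return "200 OK"
```

The key design point is that the secret is provisioned per device rather than shared across a product line, so compromising one robot does not unlock every other unit the vendor has sold.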

Certain robot features are common because they improve accessibility, usability, interconnection, and reusability (such as real-time remote control via mobile applications). Unfortunately, many of these same features make robots more vulnerable from a cybersecurity perspective, with both critical- and high-risk issues present in many of them. A hacked robot shares many of the same vulnerabilities as a computer, and can suffer the same consequences.

A hacked robot operating inside a home might spy on a family through its microphones and cameras. An attacker could also use the robot's mobility to cause physical damage in or to the house. Compromised robots could even hurt family members and pets with sudden, unexpected movements, since hacked robots can bypass the safety protections that limit their motion. They could also start fires in a kitchen by tampering with electrical appliances, poison family members and pets by mixing toxic substances with food or drinks, or use sharp objects to cause harm.

While such capability opens up fantastic possibilities in areas as broad as search-and-rescue operations, disaster relief, ambulatory services, and oil-spill containment, it may of course also be put to more nefarious use. In 2014, researchers at Harvard University created what was then the largest robot swarm, using 1,024 tiny robots the size of a penny that could find one another and collaborate to assemble themselves into various shapes and designs, like a mechanical flash mob.

Some defense contractors have already developed drones that can fly into enemy territory, collect intelligence, drop bombs, and defend themselves against attack. In the Korean demilitarized zone, South Korea has deployed border-control sniper robots that detect intruders with heat and motion sensors and can automatically fire on targets up to one kilometer away with machine guns and grenade launchers. What if autonomous drones infected with malware decided for themselves to drop bombs or perform kamikaze missions in a stadium filled with people?

Lethal autonomous robots take many forms: they can walk, swim, drive, fly, or simply lie in wait. Our ability to outsource kill decisions to machines is fraught with a panoply of ethical, moral, legal, technical, and security implications. As robots continue to proliferate, we are more likely to suffer the consequences of Moore's law clashing with Murphy's law. Inaccurate data and software glitches may combine with malware and hardware malfunctions to create a kill zone where it is least desired or expected. All drones and robots are hackable, and it is only a question of time before virtual terrorists succeed in turning what would otherwise have been a wondrous technological future into one even more fraught with fear, uncertainty, and peril.

This article was originally posted on HuffPost.