
How Do Robots Influence Elections in the Post-Truth Era?

In the 1950s, Alan Turing, the great British computer scientist, pursued the dream of designing a computer program that could mimic human behavior. The Turing Test he proposed measures a machine's ability to exhibit intelligent behavior indistinguishable from that of a human being, and it has guided research on artificial intelligence ever since.

Today, social media ecosystems are inhabited by a mix of real and non-real individuals who have threatened democracies worldwide, influenced markets, and generated revolt or panic in the face of disasters and humanitarian emergencies.

Among these nonhuman actors, bots, software robots, are the most prominent. A bot can produce content automatically and interact with humans on social media without arousing suspicion. Although many are used benignly, as in virtual stores whose bots chat directly with potential customers, they become dangerous when they help spread false information, rumors, and spam.
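To make the mechanics concrete, here is a minimal sketch of what such a bot looks like in practice. The `SocialClient` class and its `post_update` method are hypothetical stand-ins for a real platform API; the loop simply republishes canned talking points on a randomized timer so the activity looks less machine-like.

```python
import random
import time

# Hypothetical stand-in for a real social media API client.
class SocialClient:
    def __init__(self, api_token: str):
        self.api_token = api_token

    def post_update(self, text: str) -> None:
        # A real client would send an authenticated HTTP request here.
        print(f"[posted] {text}")

# Canned talking points the bot recycles to simulate organic support.
MESSAGES = [
    "Candidate X really impressed me in tonight's debate!",
    "Everyone I know is voting for Candidate X.",
    "Finally a candidate who listens to ordinary people.",
]

def run_bot(client: SocialClient, posts: int = 5) -> None:
    for _ in range(posts):
        client.post_update(random.choice(MESSAGES))
        # Randomized delays make the activity look less automated;
        # a real bot would wait minutes or hours between posts.
        time.sleep(random.uniform(1, 5))

if __name__ == "__main__":
    run_bot(SocialClient(api_token="..."))
```

Even a sketch this small shows why bots scale so cheaply: a single operator can run thousands of such loops in parallel, each posing as an independent voice.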

More seriously, these robots can artificially inflate support for a candidate and so influence the outcome of an election. They can even create the false impression that a piece of information, whatever its origin, is extremely popular and endorsed by many individuals, exerting a significant influence, even if unintentionally.

Astroturfing is exactly this: the inflation of a movement that is not real, in contrast to grassroots or spontaneous movements, which enjoy genuine support. The term, a play on AstroTurf, the brand of artificial grass, was coined in 1985 by Lloyd Bentsen, a Democratic senator from Texas in the United States. Bentsen, a politician who had also been a life insurance businessman, coined it in reference to the pressure put on him by insurance companies, which sent him letters demanding that he favor their interests in Congress and in the press.

More recently, bots have taken the place of those letters, with their ability to change individuals' perceptions on social networks, artificially inflating the audience of some or ruining the reputation of others for political ends. This works because positive and negative emotions have been shown to be contagious on social media.

Today, the strategies used to undermine elections already follow recognizable patterns: the dissemination and amplification of misleading, false, and divisive content, usually pegged to a current event, and the exploitation of societal vulnerabilities such as immigration and national sovereignty.
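As an illustration of how such amplification can be spotted, the toy heuristic below (a sketch of my own, not a method described in this article) flags messages reposted verbatim by several accounts within a short time window, one of the simplest signals of coordinated bot activity.

```python
from collections import defaultdict

# Toy dataset: (timestamp in seconds, account, message text).
posts = [
    (0,   "user_a", "Candidate X won the debate!"),
    (5,   "bot_1",  "BREAKING: rigged ballots found"),
    (7,   "bot_2",  "BREAKING: rigged ballots found"),
    (9,   "bot_3",  "BREAKING: rigged ballots found"),
    (400, "user_b", "BREAKING: rigged ballots found"),
]

def flag_coordinated(posts, window=60, min_accounts=3):
    """Flag messages reposted verbatim by many accounts within `window` seconds."""
    by_text = defaultdict(list)
    for ts, account, text in posts:
        by_text[text].append((ts, account))

    flagged = {}
    for text, entries in by_text.items():
        entries.sort()
        # Slide a window over the timestamps looking for dense bursts.
        for i in range(len(entries)):
            burst = [a for t, a in entries if 0 <= t - entries[i][0] <= window]
            if len(set(burst)) >= min_accounts:
                flagged[text] = sorted(set(burst))
                break
    return flagged

print(flag_coordinated(posts))
# {'BREAKING: rigged ballots found': ['bot_1', 'bot_2', 'bot_3']}
```

Real detection systems combine many such signals, such as posting frequency, account age, and network structure, but the underlying idea is the same: coordinated amplification leaves statistical fingerprints that organic activity does not.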

In the United States, the 2016 election that brought Trump to power triggered deeper discussion of the phenomenon within the scientific community, though the real influence of robots on that electoral process is still not known. Similarly, during Chile's last presidential election, held in December 2017, false news circulated on social networks in greater volume than real information for the first time.

In Brazil, the 2014 elections were marked by intense political polarization, both in the streets, shaped by the 2013 protests, and on social networks, where the greatest hostility was driven by bots, which fueled part of the debate.

Beyond that, the next great leap in technology will certainly define future geopolitical competition and the landscape of political warfare. The coming age of digital competition brings new challenges for politics, such as how to deal with artificial intelligence, how to share information more effectively, and how to distinguish false audio and video from the real thing.

The future of political warfare lies in countering threats in the digital domain, with cybercriminals, tech firms, and cyber activists working to delegitimize democratic elections. The weaponization of big data must be prevented, since the tendency is that more and more domestic political processes as fundamental to democracies as elections, referendums, and plebiscites will operate under human command but nonhuman influence.