
Targeted Advertising: Implications for National Security

Recent reports that the now defunct political-data firm Cambridge Analytica collected data on 87 million Facebook users without their permission have sparked a major public outcry. Facebook users were dismayed to discover their data was allegedly being used for behavioural microtargeting in political campaigns in several countries, including the United States, the United Kingdom, India and Kenya.

This adds to the growing fallout from Facebook’s admission in November 2017 that an estimated 126 million Americans may have viewed nearly 3,000 Russian-backed advertisements from the Internet Research Agency (IRA) both before and after the 2016 presidential election. In one case, the IRA allegedly bought a ‘Black Lives Matter’ advertisement on Facebook that could be seen as portraying the group as threatening, and targeted some residents of Baltimore and Ferguson, where race riots had previously taken place.

Cambridge Analytica had previously claimed its behavioural microtargeting model was pivotal in electing Donald Trump in the US in 2016. But what exactly is behavioural microtargeting and what are its ramifications?

How behavioural microtargeting works

Platforms such as Facebook enable advertisers, including political campaigns, to identify specific categories of end users, such as “males between ages 35-50 living in Singapore who like rock music.” The advertiser can then send a specific and different message to each of these categories. Behavioural microtargeting goes one step further by targeting people based not only on demographics, but on their behaviour.
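To make the mechanics concrete, the sketch below (a minimal Python illustration; the `User` record, segment names and criteria are hypothetical and do not reflect any platform’s actual advertising API) shows how an advertiser-defined category amounts to a filter over user attributes, with a different message attached to each segment.

```python
from dataclasses import dataclass

@dataclass
class User:
    # Hypothetical user record; real platforms hold far richer attributes.
    age: int
    gender: str
    country: str
    interests: set

# Advertiser-defined segments, each paired with its own ad message.
SEGMENTS = {
    "sg_male_rock_35_50": {
        "criteria": lambda u: (u.gender == "male" and 35 <= u.age <= 50
                               and u.country == "Singapore"
                               and "rock music" in u.interests),
        "message": "Ad variant A",
    },
    "sg_female_jazz_18_34": {
        "criteria": lambda u: (u.gender == "female" and 18 <= u.age <= 34
                               and u.country == "Singapore"
                               and "jazz" in u.interests),
        "message": "Ad variant B",
    },
}

def ads_for(user: User):
    """Return every segment-specific message this user qualifies for."""
    return [s["message"] for s in SEGMENTS.values() if s["criteria"](user)]

print(ads_for(User(age=40, gender="male", country="Singapore",
                   interests={"rock music", "football"})))
# ['Ad variant A']
```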

Cambridge Analytica allegedly took users’ behavioural data (e.g. which pages they ‘liked’) to create detailed psychological profiles, allowing campaigns to predict what kinds of messages would be most likely to convince each user. For example, a gun rights campaign could show neurotic users an image of a burglary to persuade them that guns are necessary for protection, while showing extroverted users images of a family going hunting.
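The behavioural step can be pictured as mapping observed signals (such as page likes) to trait scores, then selecting the creative predicted to resonate with each profile. The Python sketch below is a deliberately simplified illustration; the like-to-trait weights, trait names and messages are invented for this example, and Cambridge Analytica’s actual models have not been published.

```python
# Illustrative only: weights and trait mappings are assumptions, not real data.
LIKE_TRAIT_WEIGHTS = {
    "home security forum":  {"neuroticism": 0.6},
    "skydiving club":       {"extraversion": 0.7},
    "camping and outdoors": {"extraversion": 0.4},
}

MESSAGES = {
    "neuroticism":  "Image of a burglary: 'Protect what matters.'",
    "extraversion": "Image of a family hunting trip: 'Share the tradition.'",
}

def profile(likes):
    """Aggregate page likes into crude trait scores."""
    scores = {}
    for like in likes:
        for trait, weight in LIKE_TRAIT_WEIGHTS.get(like, {}).items():
            scores[trait] = scores.get(trait, 0.0) + weight
    return scores

def pick_message(likes):
    """Choose the ad variant for the user's highest-scoring trait."""
    scores = profile(likes)
    if not scores:
        return "Generic ad"
    top_trait = max(scores, key=scores.get)
    return MESSAGES[top_trait]

print(pick_message(["home security forum"]))                      # burglary framing
print(pick_message(["skydiving club", "camping and outdoors"]))   # hunting-trip framing
```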

Consequences of behavioural microtargeting

The power of targeted advertising has also been used to encourage positive behaviour. The Behavioural Insights Team in the United Kingdom (the ‘nudge unit’) uses data and targeted ads to nudge people towards better choices. For example, targeted campaigns on environmental awareness and obesity can encourage behavioural change in individuals.

As far back as the 2008 and 2012 US presidential elections, President Obama’s campaign teams were praised for using data mining to determine which voters to microtarget and how to reach them, based on their support for topics such as reproductive rights or labour unions.

However, there are concerns that behavioural microtargeting could be “weaponised” as a tool for influence operations that exacerbate societal tensions and heighten social polarisation. Experts such as Martin Moore of King’s College London have argued that behavioural microtargeting played a role in exploiting the biases of British voters. In the Brexit referendum, for instance, the Leave.eu campaign allegedly used psychographics to identify individuals on social media – categorised as ‘persuadable’ – who held favourable attitudes towards Britain’s exit from the European Union. Those ‘persuadable’ individuals who had previously expressed discontent with the influx of foreign immigrants into Britain were sent messages claiming that staying in the European Union enabled immigrants to displace British workers, thereby feeding their bias.

A hyper-targeted future

On one hand, it is still unclear to what extent behavioural microtargeting is truly effective in influencing the behaviour of individuals. Measuring the actual success of behavioural microtargeting efforts is hard, and where effects have been measured, they are relatively small.

On the other hand, as such targeting techniques become increasingly accurate, there is growing concern that psychological mass persuasion could be used to manipulate individuals into behaving in ways that are not in their own or society’s best interest. For instance, conspiracy theory messages could be targeted at individuals who are deemed “vulnerable” to believing false news stories online, or who are likely to share them with others. If the spread of such conspiracy theories creates ill will and enmity between different races or classes of the population, it would erode trust within society.

Unlike traditional advertisements, which are seen by the public and can be refuted or debunked, microtargeted messages are seen only by the target audience, who will supposedly be more inclined to believe them than to doubt them, and who will not have the benefit of hearing counter-arguments.

What can we do?

Politicians in several countries have called for tighter regulation. Proposed legislative requirements could set a precedent for accountability and transparency in political advertising.

The upcoming GDPR (General Data Protection Regulation) forms the basis of data regulation across Europe. In relation to online advertising, the GDPR aims to provide more transparency over how the personal information of individuals is used. Social media platforms must state how an individual’s data is being used (e.g. online tracking via cookies), while users have the right to give or withdraw consent for the use of their data. With the implementation of the GDPR, individuals can determine the extent to which social media companies use their personal data. The GDPR may be a step in the right direction for other regulators to emulate; however, the context, culture and existing legislation of each country should be considered when implementing something similar.

Most importantly, public education initiatives on online ad targeting can help individuals discern how their personal data is being used in advertising. In the United States, various self-regulatory guidelines for online behavioural advertising have been established; the Transparency and Consumer Control Principles, for example, set out clear rules on how consumer data may be used. In Singapore, the Advertising Standards Authority of Singapore (ASAS) has issued the Digital and Social Media Advertising Guidelines, which require social media advertisements to comply with industry-wide standards. Such guidelines can allow governments, advertisers and social media companies to define the parameters of ethical advertising.

Such initiatives can highlight the impact of biases in advertising and reinforce the importance of advertising guidelines to both practitioners and individuals. Future advances in machine learning could provide further opportunities to counter malicious behavioural microtargeting. Until then, such capabilities should be tempered with further research in order to minimise the harm they could do to society and democracy.