Tech
How Social Media Is Fueling Divisiveness
Social media platforms face a myriad of criticisms, from contributing to rising anxiety and suicide rates among American teenagers to profiting from the sale of personal data and compromising individual privacy. At the same time, the Black Lives Matter movement in 2020 demonstrated that the zeitgeist of racial justice and equality could be organized through digital activism, which gained widespread support and online engagement.
Social media is an instrument of political change, but that change can be dangerously consequential. Amid the COVID-19 infodemic, the wake of the Donald Trump-fueled Capitol riots in the United States, and rising political polarization, the world is abandoning the perception that social media does not significantly impact domestic and world politics.
As many aspects of people’s lives were forced into the digital realm during the pandemic, social media platforms were relied on even more for entertainment, communication, and connection. According to a New York Times analysis of Internet usage based on two online data providers, from the first U.S. COVID-19 cases in January 2020 to March 2020, average daily traffic skyrocketed by 27 percent on Facebook and 15.3 percent on YouTube. In March, Mark Zuckerberg told reporters on a conference call that video calling traffic had “exploded” and that messaging, particularly on WhatsApp, had “doubled in volume,” according to another New York Times report.
Increased engagement and the subsequent rise in power of social media companies like Facebook have had more sinister consequences: the spread of misinformation and disinformation. In September 2020, the World Health Organization and other United Nations agencies issued a joint statement reiterating global concern about the COVID-19 infodemic, an “overabundance of information” that has led to the widespread dissemination of misinformation and disinformation. The statement also called on member states and stakeholders, including social media platforms, to combat the infodemic.
Nonetheless, social media platforms have been used to interfere with the integrity of elections and to incite political violence, and they have contributed to the spread of misinformation and political polarization around the world.
Political polarization and the spread of misinformation
Recommendation algorithms on social media shape perceptions in ways that contribute to political polarization. “Right now, social media companies like Facebook profit off of segmenting us and feeding us personalized content that both validates and exploits our biases,” stated Yaël Eisenstat, a former CIA analyst, diplomat, and Facebook employee, at a TED conference in August 2020. “Their bottom line depends on provoking a strong emotion to keep us engaged, often incentivizing the most inflammatory and polarizing voices, to the point where finding common ground no longer feels possible.”
Social media platforms are breeding grounds for fake news and misinformation, which also contribute to the political divide. In 2018, three MIT scholars published a study, based on more than a decade of data, which found that false news spreads on Twitter six times faster than real news stories. Moreover, false news stories were found to be 70 percent more likely to be retweeted than real ones.
How people consume their news ultimately affects their perceptions of the world and their political views. According to the Pew Research Center, “one-in-five U.S. adults say they often get news via social media.” Consumption of false information not only creates a misinformed electorate but also makes finding common ground and engaging in civil discourse more challenging.
The Markup’s Citizen Browser Project found that Facebook users who voted for Joe Biden and users who voted for Donald Trump in the 2020 election held different views of the U.S. Capitol riots, in part because their feeds showed stories that catered to their political biases. Not only were the users shown different news stories, but the stories came from different sources altogether. Biden voters were more frequently served sources like The Washington Post, The New York Times, and CNN. Meanwhile, Trump voters were more frequently served sources like The Daily Wire, Fox News, and Breitbart.
In parallel, the design of social media recommendation algorithms contributed in part to the political violence at the U.S. Capitol. Dr. James Kimble, a propaganda expert at Seton Hall University, stated that “social media enables you to craft an echo chamber” and that there is a “sense of self-selection where all you hear is what you want to hear and you don’t hear your opponents.”
The result, argues Kimble, is “disastrous for public discourse” because varying perspectives “do not collide with each other and thus grow more and more strong and seem true to those people.” He adds that discourse must be free from threats of violence, asserting that “some of these tweets flirted with the idea of domestic terrorism or encouraged people to be violent to show up at the capitol.”
Political violence
The U.S. Capitol riots on January 6th, 2021, are considered by some experts to be a result of misinformation campaigns and recommendation algorithms on social media platforms like Twitter, Parler, and Gab. In an interview with The Diplomatic Envoy, John Shannon, a professor at Seton Hall’s Stillman School of Business, offered one explanation: “One of the great strengths and weaknesses on the planet is you can find people with similar views and ideas and theories [on social media].”
Political communities, including terrorist organizations, that organize and recruit through social media platforms worldwide are evidence of this. In 2016, an internal Facebook analysis of German political groups found that “64% of all extremist group joins are due to our recommendation tools.”
While many criticize the social media platforms themselves, state actors are also guilty of abusing the platforms to incite violence, in some cases targeting people in their own countries. In 2018, the UN published a report finding that military leaders in Myanmar had used Facebook to conduct a systematic propaganda campaign against Rohingya Muslims.
“The role of social media is significant,” the UN report states. “Facebook has been a useful instrument for those seeking to spread hate, in a context where, for most users, Facebook is the Internet. Although improved in recent months, the response of Facebook has been slow and ineffective.” The report also called for an independent and thorough examination of the extent to which Facebook posts and messages have led to real-world discrimination and violence.
Cynthia M. Wong, a former researcher at Human Rights Watch, explained in the Netflix documentary The Social Dilemma that this campaign “helped incite violence against the Rohingya Muslims that included mass killings, burning of entire villages, mass rape, and other serious crimes against humanity that have led to 700,000 Rohingya Muslims having to flee the country.”
Evidently, social media can invade many facets of daily life, from forming and cementing political opinions to serving as an instrument for conducting genocide. On top of that, investigative journalist Carole Cadwalladr presented a startling judgment in her 2019 TED Talk about Facebook’s role in recent elections. Her conclusion addressed “whether or not it is possible to have a free election again.”
She stated, “As it stands, I don’t think it is.”
Election interference
In September 2020, an internal memo by Sophie Zhang, a former data scientist on Facebook’s Site Integrity team, was leaked to the public. Zhang found evidence that foreign governments, political parties, and other actors in Honduras, Azerbaijan, India, Spain, Brazil, Bolivia, Ecuador, and Ukraine were using fake accounts and/or organizing campaigns on Facebook to influence public opinion and elections. Additionally, Zhang stated that she and her colleagues removed “10.5 million fake reactions and fans from high-profile politicians in Brazil and the U.S. in the 2018 elections.”
Further evidence shows that social media is being abused to interfere in elections. According to “Challenging Truth and Trust: A Global Inventory of Organized Social Media Manipulation,” a report by researchers at the Oxford Internet Institute, of the 48 countries examined around the world, 30 have political parties that deliberately used computational propaganda on social media platforms during elections or referenda.
Two prominent examples of this deliberate abuse of social media are Russia’s interference in the 2016 U.S. presidential election and in the United Kingdom’s 2016 Brexit referendum. In both cases, the Internet Research Agency (IRA), a company backed by the Russian government, organized disinformation campaigns by writing and posting fake content and creating thousands of fake accounts on Twitter, Facebook, Instagram, YouTube, and other social media platforms to spread propaganda. Fake content curated by the IRA was retweeted more than two million times and reached over 288 million views on Twitter. Leading up to the 2016 U.S. presidential election, Russian posts reached 126 million U.S. Facebook accounts, according to a 2019 Park Advisors report sponsored by the U.S. State Department.
In anticipation of the 2020 presidential election, Facebook suspended recommendations for political groups to try to avoid another election fiasco. After November 3rd, Facebook temporarily cut off all political ads in the U.S. “to reduce opportunities for confusion or abuse,” the company stated. Additionally, from October 29th, 2020, to December 9th, 2020, Instagram temporarily removed the “Recent” tab from hashtag pages in the United States as a precaution against the spread of misinformation.
“Regulation is the way we protect the commons”
In addition to the precautions taken by Facebook and Instagram, some critics and experts have suggested taxing data mining, fixing the algorithms, and even dissolving social media companies altogether to prevent further consequences from disinformation campaigns. Regulation, however, is the most resounding recommendation among experts.
Speaking to The Diplomatic Envoy on the legal aspects of social media regulation, John Shannon said of such measures, “They are not enough. This problem will require regulation; regulation is the way we protect the commons. We are in the early stages of trying to regulate a largely unregulated industry we call technology.”
Dr. Viswa Viswanathan, an Associate Professor of Computing and Decision Sciences at Seton Hall University, accepts regulation as a possible solution to the problems caused by social media, but he does not believe regulation alone is a panacea. His fundamental takeaway is that “people need to know how to think critically or else they will always be targets of exploitation.”
One reason misinformation campaigns are so successful is their ability to manipulate a target audience. Viswanathan elaborates on this, arguing that “the educational system (at all levels) has mostly failed to help people to think critically” because it has come to view itself as an economic tool. The question that remains is whether critical thinking, regulation, or other solutions can ultimately prevent social media’s disastrous impact on political polarization, political violence, and election integrity.