The Platform

Photo illustration by John Lyman

There’s a disinformation ecosystem online, and tech and social media giants are making it worse.

Creators of online disinformation use a wide range of tactics to get their content in front of target audiences. In recent years, however, social media platforms and search engines, through their recommendation algorithms, have become powerful facilitators of this task. Some of these platforms claim they are testing solutions, but the steps taken so far are not sufficient given the global reach of the problem.

For years, researchers and activists have highlighted how the opaque algorithms of YouTube, the second most visited website in the world after Google, promote disinformation and harmful content. In July, the Mozilla Foundation published a report titled “YouTube Regrets,” which analyzed YouTube’s recommendation algorithms. The study found that 71% of the harmful content circulating on the platform was driven by algorithms, with pandemic and political misinformation making up about one-third of that content. In January, a letter signed by 80 fact-checkers accused the company of not doing enough to fight fake news on its platform. In response to these criticisms, the company has made several promises to improve its recommendation system.

Search engine algorithms also amplify disinformation. Users searching for relevant results are sometimes directed to websites rife with false content. By boosting these results, or failing to remove them, search engines directly increase the discoverability of disinformation.

A NewsGuard report that analyzed the search tools of Google and TikTok found that they recommended disinformation relating to the war in Ukraine, U.S. elections, and COVID-19. Recent research found that Bing and DuckDuckGo, while less widely used, also contribute to this problem.

Social media platforms likewise expose their users to disinformation. For instance, Facebook’s algorithms amplified disinformation about climate change during the 2020 U.S. elections. A recent watchdog study found that Instagram pushed anti-vaccine content through its Explore and suggested-post features, which recommend content to users.

Online shopping platforms suffer from this problem as well. In April 2021, a research report by the Institute for Strategic Dialogue revealed that Amazon’s algorithms were recommending books advocating for QAnon and anti-vaccination groups. The company said its discovery tools don’t guide users toward any particular viewpoints or books.

Understanding the scale of the problem requires a lot of data, but tech companies continually refuse to share the details of their recommendation algorithms, arguing they are trade secrets and should remain tucked away. The opacity of these algorithms makes it extremely hard for society to understand their harmful impacts. Recently, regulations have been proposed to push for more transparency. These regulations would be a significant step toward curbing online disinformation.

Users of these platforms should report harmful recommended content and pressure the platforms to remove it.

Mohamed Suliman is a senior researcher at Northeastern University and holds a degree in engineering from the University of Khartoum.