Photo illustration by John Lyman

Why Moderating ‘Hellscapes’ is So Difficult

Big Tech companies have been granted a new, powerful role: mediating information and addressing harmful content spread on their platforms.

The question is: how?

Because so much of today’s information is filtered through algorithms, there is no shared consensus reality and no baseline set of facts in the digital world. Social media platforms didn’t create today’s polarization, but they accelerate it as they compete with one another for algorithmic dominance.

There is also a lack of algorithmic transparency. It is unknown how Big Tech companies design and deploy their algorithms to tackle disinformation and data breaches. We don’t even know whether, for example, users, companies, services, or products are treated unfairly by platforms’ algorithmic ranking (self-preferencing).

Even though algorithms are trained to scan and identify all types of content, they sometimes get it wrong.

For one thing, algorithms can mirror humans’ inherent biases; for another, information can be phrased or framed in a way that bypasses them. When information touches on controversial or hot-button topics, classifying it is remarkably difficult, not only for the algorithms but for us as well.
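To make the second failure mode concrete, here is a minimal, hypothetical sketch of a keyword-based filter being bypassed by rewording. It is a toy illustration, not any platform’s actual system; real pipelines rely on machine-learned classifiers, but the underlying weakness is the same.

```python
# Toy keyword-based moderation filter (illustrative only; not any platform's
# real system). The blocked phrases and example posts are invented.

BLOCKED_PHRASES = {"buy fake passports", "miracle cure"}  # hypothetical rules

def naive_filter(post: str) -> bool:
    """Return True if the post should be flagged for review."""
    text = post.lower()
    return any(phrase in text for phrase in BLOCKED_PHRASES)

print(naive_filter("Buy fake passports here!"))          # True  -> caught
print(naive_filter("Acquire 'novelty' travel papers"))   # False -> same intent, reworded, slips through
```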

Access to a seemingly endless amount of information online has not only paralyzed our ability to properly digest it; it has also made it harder to moderate content in the age of disinformation and polarization. Questions such as “is this freedom of speech or hate speech?” keep arising, and answering them has proven extremely controversial. What kind of speech crosses the line into hatred or violence, or should be considered misinformation, propaganda, or targeted advertising?

I feel that everything gets dumped under two umbrellas: abusive content and fake news. There is no shared definition of what harmful or illegal content is because the possible classifications are nearly endless, ranging from decontextualized information, deepfakes, and graphic content to hate speech aimed at any user, group, or trend.

Take the war in Ukraine as an example. Handling information from the war has proven incredibly arduous. Ukrainian civilians have used social media platforms to communicate and to receive information about escalations of the war, Russian attacks, and humanitarian relief. Russia continues to fight not only with bots and hackers but also by censoring information, spreading targeted propaganda, and decoupling itself from the global Internet.

Social media platforms are facing pressure either to take down graphic content documenting the war’s atrocities or to preserve it as evidence of war crimes. The conflict has even challenged international humanitarian law, the laws of war. Can the laws of war be applied in the digital world? Do social media platforms have any obligations under them? Meta responded to the war in Ukraine by adding a ‘warning label’ to graphic content in order to shed light on human rights abuses, while removing anything that celebrated or promoted the suffering.

On the digital front, Russia is weaponizing online platforms’ products and services. A deepfake video of Ukrainian President Volodymyr Zelensky circulated on Facebook and YouTube before it was removed. Twitter banned inauthentic accounts pushing the #IStandWithPutin hashtag. How can Big Tech companies identify potential threats in these kinds of situations without infringing on users’ privacy?
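As a rough illustration of how such inauthentic activity might be surfaced without touching private data, the sketch below flags pairs of accounts that repeatedly post the same hashtag within seconds of each other, using only public metadata. The thresholds, account names, and data are invented for the example; this is not Twitter’s actual method.

```python
# Illustrative sketch (not Twitter's actual method) of flagging possible
# coordinated inauthentic behavior from public metadata only: accounts that
# post the same hashtag within seconds of one another, many times over.
from collections import defaultdict
from itertools import combinations

# (account, hashtag, unix_timestamp) -- hypothetical public data
posts = [
    ("acct_a", "#ExampleTag", 1000), ("acct_b", "#ExampleTag", 1002),
    ("acct_a", "#ExampleTag", 2000), ("acct_b", "#ExampleTag", 2003),
    ("acct_c", "#ExampleTag", 5000),
]

WINDOW = 10        # seconds within which two posts count as "simultaneous"
MIN_CO_POSTS = 2   # how many near-simultaneous posts make a pair suspicious

pair_counts = defaultdict(int)
for (a1, tag1, t1), (a2, tag2, t2) in combinations(posts, 2):
    if a1 != a2 and tag1 == tag2 and abs(t1 - t2) <= WINDOW:
        pair_counts[tuple(sorted((a1, a2)))] += 1

suspicious = {pair for pair, n in pair_counts.items() if n >= MIN_CO_POSTS}
print(suspicious)  # {('acct_a', 'acct_b')} -> candidates for human review
```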

The European Union’s latest digital regulation, the Digital Services Act (DSA), intends to impose a code of conduct for content moderation. The DSA requires platforms to disclose what content they take down and what content they otherwise act on. Beyond attempting to protect users, the DSA’s goal is to “curb illegal content and disinformation on platforms” in order to achieve a degree of transparency and platform liability.
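As a rough idea of what such a disclosure could contain, the hypothetical record below captures what was acted on, why, and whether the decision was automated. The field names are an assumption for illustration, not the DSA’s official schema.

```python
# Hypothetical sketch of a moderation-disclosure record of the kind the DSA's
# transparency requirements imply. Field names are assumptions, not an
# official schema.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class ModerationDisclosure:
    content_category: str       # e.g. "hate speech", "disinformation"
    action_taken: str           # e.g. "removal", "warning label", "demotion"
    legal_or_policy_basis: str  # the rule or law the decision relied on
    automated: bool             # whether an algorithm made the call
    decided_at: str             # timestamp of the decision

record = ModerationDisclosure(
    content_category="graphic violence",
    action_taken="warning label",
    legal_or_policy_basis="platform policy on violent content",
    automated=True,
    decided_at=datetime.now(timezone.utc).isoformat(),
)
print(asdict(record))
```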

The DSA seeks to protect users from platforms and to moderate content. However, it says little about how platforms will protect users from other users, or about how disputes over specific content will be resolved. In addition, enforcement will be challenging because many digital services are not platforms. And since Big Tech companies write their own platform policies to protect users, only they know what is really happening inside their platforms, and they can sometimes make precipitous decisions without warning users.

As information continues to be filtered by algorithms, moderating content in the age of disinformation will remain an existential challenge. The repercussions show up not only in today’s growing polarization but also in how content is orchestrated online to influence users’ behavior, further political agendas, and mobilize followers. Even though regulations are often proposed before a technology’s potential impact is understood, it will be interesting to see how the DSA holds Big Tech companies accountable and how it shapes the digital ecosystem and international cooperation. The DSA’s guidelines will demonstrate how good platforms really are at mediating information and taking down content.