The Platform

Photo illustration by John Lyman

“Your eyes can deceive you; don’t trust them,” Obi-Wan Kenobi advised a wide-eyed young Luke Skywalker in Star Wars. His advice could just as easily apply to deepfakes.

Synthetically generated images, videos, text, and audio, created with powerful artificial intelligence to manipulate digital content, are becoming increasingly common. The technology is not inherently harmful: it is being used to assist people with disabilities and to create digital likenesses of criminals for identification purposes. However, from swapping the faces of ordinary people with those of celebrities and politicians, to resurrecting famous deceased figures for educational or commercial purposes, to committing financial fraud with forged audio, deepfakes are also distorting reality. They have emerged as a source of concern across the board.

As AI-driven content proliferates across the digital landscape, the capacity of deepfakes to manipulate information is alarming. They not only shape how individuals perceive events but can also influence their actions. This is particularly worrisome in countries with polarized societies, where the spread of fake content could lead to violence. For instance, a fake video of a political leader being assassinated, or an image of a particular ethnic group being attacked, could incite violence and chaos when circulated on platforms like WhatsApp.

Bans on a technology that is designed to adapt and improve would likely prove ineffective and unrealistic. What is needed instead is timely and effective regulation. In early January, China released a set of rules to strengthen the integration and regulation of Internet services, safeguard national security, and protect citizens’ legitimate rights and interests.

The new rules require deepfake service providers and supporters to abide by laws and regulations in key areas. Service providers must obtain the consent of the owner before their content can be used by any deep synthesis technology. Providers must also notify users when deepfake technology has been used to alter their content. Deep synthesis services cannot be used to disseminate fake news, and altered content must be clearly labeled to avoid confusion. The real identity of a user must be authenticated before they are given access to deep synthesis technology.

The rules also state that deepfake technology must not be used for any banned activity, or for anything that endangers national security, disrupts the economy, or harms the country’s national image. They call for establishing a complaint system to contain the spread of fake news, and they direct service providers and supporters to review and inspect synthesis algorithms and to carry out continuous security assessments in accordance with relevant regulations.

Regulations are needed to ensure a healthy digital landscape, one that promotes technological advancement while reducing the risks associated with platforms that use artificial intelligence to modify digital content. However, colossal challenges stand in the way of enforcement. For instance, the process of obtaining an owner’s consent to modify their content needs more clarity, and the regulations’ transparency mechanisms require further elaboration.

The classification of ‘fake news’ also remains ambiguous, and freedom of speech could come into conflict with any regulation that is enforced. Moreover, the technology underlying deepfakes will always be accessible to individuals, which suggests that unlawful deepfakes will remain a pressing issue. Nevertheless, China has made a timely attempt to curb the risks of generative AI tools.

The effectiveness of China’s approach to deepfakes remains to be seen. If the Chinese model proves successful, however, it could serve as a reference framework for others developing more effective strategies to detect, identify, and regulate deepfakes. With time, more layers could be added to the rules to make them more robust.

There is no doubt that deepfakes will become more sophisticated, popular, and accessible in the future. It is time for states to start investing in enforcement mechanisms to mitigate their dark side.

Shaza Arif is a Researcher at the Centre for Aerospace and Security Studies (CASS). Her areas of interest include international relations, emerging technologies, and modern warfare.