The Platform

Photo illustration by John Lyman

We’re some ways off from AI becoming sentient and destroying the world, but some steps should be taken to regulate the burgeoning technology.

Artificial intelligence is swiftly emerging as a game-changer. According to PwC, investments in machine learning, robotics, and digitalization could raise global GDP by approximately 14%. Applications like ChatGPT are now familiar to many, reflecting the growing influence of AI in diverse fields like journalism, healthcare, and complex analytical research. Although some studies warn that AI might eventually replace humans, this transformation is likely decades away, as AI continues to evolve and still requires human support.

Artificial intelligence’s promise does not come without perils. Governments, keen on leveraging digital technology to achieve the UN’s Sustainable Development Goals (SDGs) by 2030, must remain cognizant of the potential risks that these technologies present. These risks extend beyond cyber threats to include concerns about biological warfare, especially in the context of increasing global tensions.

Recognizing the significance of these challenges, the UN Security Council convened a dedicated session on AI in June. The discussion encompassed the potential dangers of AI, ranging from heightened cyber-attacks by state and non-state actors to the possible initiation of biological conflicts.

Jack Clark, co-founder of Anthropic, emphasized the need for a robust public-private partnership to supervise the private AI industry. He warned against entrusting the future solely to private-sector actors, urging governments and companies to collaborate in developing rigorous evaluation systems to maintain accountability and trust. “We cannot leave the development of artificial intelligence solely to private-sector actors,” Clark stressed.

China, maintaining its customary neutral stance, articulated that AI can be a double-edged sword. While acknowledging risks, China’s ambassador to the United Nations, Zhang Jun, stressed that opportunities also exist, depending on how humanity regulates and utilizes AI. He further emphasized the necessity to balance scientific innovation with security and called for an international ethics-based framework to govern AI.

“At present, as a cutting-edge technology, AI is still in its early stage of development. As a double-edged sword, whether it is good or evil depends on how mankind utilizes it, regulates it, and how we balance development and security. The international community should uphold the spirit of true multilateralism, engage in extensive dialogue, constantly seek consensus, and explore the development of guiding principles for AI governance,” Ambassador Zhang Jun remarked.

However, the session largely overlooked the unique challenges facing developing nations. Countries in Africa and Latin America, lagging behind in the emerging AI competition between the U.S. and China, are not insulated from AI risks and often lack the resources to mitigate them. Whereas emerging powers like China and India have invested heavily in AI, many poorer nations still grapple with weak infrastructure.

The International Telecommunication Union reports that only 40% of African citizens have Internet access, leaving a significant portion of the continent isolated. This divide is likely to widen with the rise of AI, complicating efforts to achieve the SDGs by 2030. These nations must also navigate additional complexities such as inflation, depreciating currencies, and political instability.

The unequal distribution of growth also places developing countries at risk of bearing security and environmental consequences they did not create. While there is increasing awareness of shared responsibility in achieving the SDGs, the digital divide persists. An emerging ‘AI divide’ sees some nations investing heavily in the technology while others remain exposed to threats from non-state actors, including extremist groups.

Oxford Insights’ 2022 Government AI Readiness Index illustrates how the digital lag among developing countries obstructs their ability to face unprecedented threats amid political turmoil.

Every aspect of our lives, from basics like food and water to the complexities of technology and weaponry, presents both opportunities and challenges. Without a sense of shared responsibility for balancing these elements, the pursuit of equitable development may become an unattainable ideal, if not an outright utopian fantasy.

Maysa Fouly is a Master’s student in Political Science and a researcher at the Egyptian Ministry of Communications and Information Technology.