The Platform

What concerns many tech experts isn't necessarily artificial intelligence itself; it's the lack of any guardrails.

The imprint of popular culture on our understanding of technology is profound; it consistently stirs apprehensions about the human, social, and even existential impacts of advanced technologies. Lately, reality seems to be catching up with the ambitious technologies portrayed in fiction: Generative AI systems like ChatGPT have garnered global attention, semi-autonomous weapons systems employed in conflict zones such as Afghanistan, Pakistan, and Yemen are under fire from human rights advocates, and AI algorithms are increasingly implicated in swinging elections to autocrats.

Nevertheless, advanced technologies, notably artificial intelligence, are not intrinsically hazardous. AI, a tool crafted by humanity, has the potential to either accelerate or hinder our collective progress. Indeed, it is precisely this uncertainty about AI's influence on the world that recently guided discussions at the UN Security Council.

On July 18, UN Secretary-General António Guterres addressed the potential and peril of artificial intelligence for global peace and security. As Ryan Heath reports in Axios, Guterres has endorsed the idea of a UN agency to tackle AI threats, ranging from AI's potential role in weapons of mass destruction to its function in propagating conspiracy theories.

The establishment of a global agency for AI governance presents a wealth of opportunities to address security and human rights concerns. The mere existence of fully autonomous weapons systems poses a significant threat to global security, peace, and human rights: these systems distort the human dimension of conflict and remove the human judgment that so often initiates peace. Generative AI, meanwhile, could derail democracy in mid-stride. As Harold Adlai Agyeman, Ghana's ambassador to the United Nations, recently noted, "The capacity to tell the difference between what is real and what is made up is diminishing by the day." This unfettered approach to AI cannot persist, especially as these technologies continue to evolve at an unparalleled pace.

The establishment of a UN agency holds promise for agile AI governance. The launch of the Global AI Action Alliance (GAIA) at the 2021 World Economic Forum bolstered this notion. The alliance, which unites over 100 governments, civil society organizations, corporations, and academic institutions, aims to maximize AI's societal benefits while curbing its risks.

Bolstered by the United Nations’ legal authority, resources, and its capacity to incentivize member states, such a UN agency could well be the solution we need to bridge the existing governance gaps and chart a resilient course for the future.

Despite the appeal of a global agency for AI governance, orchestrating a multi-stakeholder initiative for AI is no simple task. Achieving substantive engagement from all relevant stakeholders across sectors is a formidable challenge. Nanjira Sambuli of the Artificial Intelligence & Equality Initiative (AIEI) articulates this in a piece for the Carnegie Council, noting that "Supposedly global initiatives often comprise only Western nations, leaving out lower-income countries – and typically also China, despite it being a tech giant. From the private sector, only big tech players tend to be invited or spotlighted. Smaller tech companies meanwhile, and those which deal with digital technologies only tangentially, rarely have their voices heard, even though they may have more nuanced insights based on their niche focus areas or markets served. Civil society representatives are usually large, international NGOs with offices in capital cities, rather than the kind of local actors who may better understand what is happening on the ground."

It is a legitimate concern that this new UN agency might become a platform for lofty rhetoric, rather than a forum from which actionable guidelines originate. The multi-stakeholder nature of the AI industry necessitates a comprehensive and concerted effort from international organizations, governments, academia, the private sector, and civil society—an endeavor that could prove arduous yet essential.

The private sector must commit resources to participate in regulatory sandboxes, testing and experimenting with new technologies in a UN-supervised environment. Concurrently, governments and academia must design these sandboxes and contribute public policy and executive expertise. Civil society, in turn, brings grassroots perspectives to the table, convening diverse actors and fostering a community that can actively engage the AI governance agency and ensure its policies are informed by diverse and inclusive viewpoints. A substantive multi-stakeholder approach could well be the difference between a global AI talking shop and an agile governance initiative.

Constructing an ecosystem of ethical AI oversight is a Herculean task, the magnitude of which cannot be overstated. Nonetheless, the risks of proceeding without such a framework are too vast to accept. As David Ryan Polgar, founder of the aptly named All Tech Is Human, reminds us: all tech is human, and every technological advancement involves human inputs and outputs at its inception and culmination. Global policy must therefore operate with an unflinching commitment to ensuring that these emerging technologies enhance rather than jeopardize human civilization.

Suzie Shefeni is an international relations analyst based in Namibia. She holds a postgraduate degree in Political and International Studies from Rhodes University and writes on African affairs, foreign policy, and AI governance.