Countries are Taking a Piecemeal Approach to AI Governance

The emergence of generative artificial intelligence tools, like ChatGPT, has sparked concern over a vast array of potential negative consequences, ranging from cybersecurity threats to layoffs across various industries. The full societal impact has yet to be understood. These concerns underscore the urgency of effective governance and regulatory measures that are binding and enforceable. In short, there is a growing need to regulate AI before it is too late.

In March, Elon Musk and others called for a six-month pause in the development of AI systems until the risks could be fully understood. The open letter, released by the Future of Life Institute, gathered thousands of signatures from artificial intelligence experts and industry executives across the globe, but to no effect.

Not long ago, the first lawsuits were filed against OpenAI, the developer of ChatGPT, over copyright infringement, data leakage, and data theft. American comedian Sarah Silverman was among the plaintiffs claiming illegal acquisition of datasets containing authored work. During the data-gathering process, the reported 300 billion words taken from various sources on the Internet were scraped without consent from the original authors and copyright owners.

Similarly, a recent $3 billion lawsuit filed against the company by a group of individuals over unauthorized access to personal information and data theft is likely to set a precedent for similar cases. Finally, the leak of over 100,000 OpenAI login credentials, later sold illicitly, reveals the danger of using the platform to share sensitive personal and corporate data, prompting companies to draft internal policies to guide employees’ use of ChatGPT.

The aforementioned cases illustrate the consequences of absent regulation and make clear why AI regulation is urgently needed. It is paramount for the owners of generative AI platforms to modify their internal processes and adopt compliance measures that ensure the ethics and lawfulness of their business, or risk having their activities curtailed in the near future.

Countries have, however, tackled AI regulation primarily through national policies: the National AI Initiative in the United States, the National AI Strategy in the United Kingdom, and the Integrated Innovation Strategy Promotion Council’s Social Principles of Human-Centric AI in Japan are some examples of how governments can leverage AI to benefit their citizens and, to some extent, the environment. While it is necessary to address the challenges and opportunities of technological advancement as a catalyst for economic development, legal and regulatory frameworks must safeguard the responsible use of these technologies, resting on the three pillars of transparency, liability, and accountability, while establishing protections for citizens in areas of law covering intellectual property, data protection and privacy, ethics, and algorithmic bias.

Regulation-wise, policymakers in Washington are currently discussing the final text of an AI Bill of Rights; the European Union will soon implement the AI Act; China is launching the Measures for the Management of Generative Artificial Intelligence Services; Canada has a proposed law, the AIDA; and Brazil has passed a draft bill for the regulation of AI. At the international level, the Organisation for Economic Co-operation and Development (OECD) is leading AI-related discussions aimed at implementing policies and providing countries with further support.

The United States

In 2022, the White House Office of Science and Technology Policy (OSTP) published the U.S. Blueprint for an AI Bill of Rights to address concerns about the potential threats of AI development and to safeguard individuals against algorithmic discrimination. The proposal suggests that “systems should undergo pre-deployment testing, risk identification, and mitigation,” while promoting algorithmic discrimination protections and data privacy.

While the document itself is non-binding, it is perceived as a roadmap for a future regulatory framework. Currently, a bipartisan group of lawmakers has set up a commission to study artificial intelligence.


The European Union

The European Commission proposed the EU AI Act nearly two years ago, aiming to promote trust, transparency, accountability, and ethics in AI systems. Provisions on generative AI were later added to the proposed text in response to the technology’s impact: “Generative foundation models, like GPT, would have to comply with additional transparency requirements, like disclosing that the content was generated by AI, designing the model to prevent it from generating illegal content and publishing summaries of copyrighted data used for training.”

The AI Act divides AI systems into four levels of risk, imposing strict requirements on high-risk systems and rules on data usage, and prohibiting certain practices outright, such as live facial recognition.


China

China’s proposed legislation outlines several requirements for providers of generative AI products and services to the general public, calling for compliance with intellectual property rights and data protection regulations. Non-compliance may result in penalties, depending on the severity of the violation.

One contentious aspect of the legislation, and one that has prompted intense debate among experts, is its emphasis on the Chinese Communist Party’s core socialist values, strongly founded on safeguarding the country’s sovereignty and national security. Subversion of state power and the encouragement of extremism, violence, obscenity, and xenophobia throughout data gathering, algorithm design, and model generation are among the themes the legislation prohibits.

While the participation and commitment of Big Tech are crucial, these companies should not be the ones leading AI discussions. Policymakers and regulators are essential in driving these discussions, implementing regulations, and guaranteeing enforcement. It is also essential for governments to include AI in their development agendas to reduce the impact on job creation and provide better funding opportunities for local businesses. If leveraged for the benefit of society, AI can bring positive outcomes.

Policymakers should include the private sector and civil society groups in discussions when developing potential legislation, as society is the first victim of any misuse of AI. For instance, misinformation spread through deepfake videos could cause social unrest and potentially ignite wider conflict between countries at odds, while deepfake videos impersonating famous brands, combined with phishing scams, are a dangerous recipe for online fraud. China’s proposed legislation states that users have the right to report non-compliant content to the relevant authorities, which will oversee the application of the measures and future legislation.

Another critical factor is that policies should not be rigid, since technology continuously evolves. We should therefore be ready to revisit any laws to maintain the delicate balance between ethics and innovation.

Finally, countries should collaborate on taming the AI beast. Unlike past technological advancements, AI carries enormous uncertainties that reach every corner of the planet, as recently seen when an AI system taught itself Bengali and persuaded a person to reconsider his marriage.