
Can Democracy Survive AI?
When images of billionaire tech barons – Elon Musk ($421.2bn), Mark Zuckerberg ($220bn), Sundar Pichai ($1.1bn), and Jeff Bezos ($219.5bn) – standing shoulder to shoulder to welcome the newly inaugurated President Trump beamed across global news outlets in January, it was more than a photo op. It was the symbol of a new era: one in which democracy lies on the cutting board, waiting to be carved up by a handful of AI gatekeepers.
As AI rapidly evolves, global powers and corporations compete fiercely to innovate. A technology increasingly interlaced with society threatens to tear down our democratic safeguards, leaving democracies searching for paths to survival.
Much has been said about AI and its proclivity to erode democracy. AI is said to empower authoritarians, spread disinformation, distort discourse, amplify bias, and exacerbate inequality. But these are not the greatest hazards. The key threat is elegantly captured in the image of the four billionaires chumming it up: AI centralises power and control, subordinating the public good to corporate gain.
AI’s power problem: Centralising control
AI’s development rests on two key elements: access to vast swathes of data and sufficient computing power to process it. These resources are realistically accessible only to two kinds of actors: big tech firms and state governments. As a result, the power to shape AI – and, by extension, society – is concentrated among a small number of very wealthy and powerful individuals.
AI researcher Kate Crawford has argued that AI is “a fascist’s dream… power without accountability.” It is unsurprising, then, that authoritarian regimes use AI to track populations and suppress dissent; perhaps more worrying is the unchecked influence of big tech companies. OpenAI, the developer behind ChatGPT, has fewer than 800 employees, yet commands billions in investment from giants like Microsoft and Nvidia. Meanwhile, just three U.S. companies control over two-thirds of the world’s cloud computing resources.
This centralisation is not just a technical issue – it’s a political one. When a handful of corporations control the direction of AI, they shape the technology’s design and deployment according to their own interests, not those of the public. The result is a dangerous concentration of power that is fundamentally at odds with pluralism and self-governance, the concrete foundations of democracy.
When private interests trump the public good
This centralisation of power leads directly to a second, equally troubling issue: the subordination of public interests to private gain. The images from Trump’s inauguration starkly portrayed the tech oligarchs and their close relationship with government. When Meta abandoned its fact-checking programme soon after, it sent a clear message: profit and political favour eclipse the public good.
AI’s deployment often runs counter to the public interest. Social media algorithms, for example, have been linked to the spread of harmful content and a dramatic rise in teenage suicide rates in the U.S. Meanwhile, the trade-off between AI growth and environmental peril is largely ignored in public debate, even as companies like Amazon, Google, and Microsoft admit that AI investments compromise their own carbon-emissions goals.
Attempts by companies to align AI with human values – such as OpenAI’s “democratic inputs” consultation for ChatGPT – are a step in the right direction. But these consultations are non-binding and lack the legitimacy of traditional democratic processes. Ultimately, the question remains: who decides what is ethical? I contend the public are best placed to make these decisions – not a narrow, homogeneous group of mostly young, white, male, American developers.
Regulatory capture: Who’s really in charge?
Compounding these problems is regulatory capture, where the very companies developing AI dominate its governance. This can take the form of a “revolving door” between industry and government, or simply the assertion that only those with deep technical expertise are qualified to regulate AI.
The 2023 AI Safety Summit at Bletchley Park was dominated by Big Tech, with the voices of communities and workers most at risk largely sidelined. While it’s true that technical expertise is important, it should not be the sole criterion for shaping regulation. In the pharmaceutical industry, for example, public consultations help determine whether a drug’s benefits outweigh its risks, even if the public doesn’t understand the technical details of its manufacture. The same approach could, and should, be applied to AI.
Currently, however, regulatory structures are insufficient. Tech companies are left to set standards and norms through their products, eroding both democracy and the public interest. As Marietje Schaake has described, this amounts to a “tech coup” – a takeover of governance, politics, and policy by private interests.
The AI race
A key tension is the perceived trade-off between AI innovation and safety. At the 2025 AI summit in Paris, U.S. Vice President JD Vance declared that “the AI future is not going to be won by hand-wringing about safety.” This competitive framing – AI as a race to be won or lost – is a stark departure from the tone of the Bletchley Summit two years prior, and it has fuelled calls for a more deregulatory approach that prioritises economic opportunity over public welfare.
Proponents of minimal regulation argue that oversight stifles innovation and risks ceding advantage to bad actors. They also point to the complexity of AI and the difficulty of tracing responsibility when things go wrong. Yet a singular focus on competition is politically myopic. The challenges of regulating AI are real, but they are not an excuse for inaction – especially when the survival of democracy is at stake.
AI does offer undeniable opportunities – something we should not lose sight of when deliberating its future. The UK government has touted it as a critical vehicle for growth, with Peter Kyle stating that “currently available AI” could accelerate UK productivity by 5% per annum over the next five years. But questions of growth and AI transformation should be matched by equally important questions: growth where, and for whom?
Blinded by unencumbered aspirations for economic growth, governments have sacrificed public transparency and accountability – essential elements of a healthy democracy – at the altar of industry efficiency. The present centralised control of AI among a handful of companies, unchecked and unaccountable, projects an ominous outlook for the future of democracy. A balance must be struck: one where we can realise the encouraging potential of AI while safeguarding democratic society.
Reimagining democracy for the AI age
So, can democracy survive AI? The answer is not a simple yes or no. Democracy as we know it cannot flourish unchanged. To survive – and thrive – in the AI era, democratic societies must adapt. This means reimagining institutions and processes to prioritise public deliberation and the public interest at every stage of AI governance.
Innovative approaches, such as “mini-publics” and lay-centric deliberation, offer a way forward. By recentring decision-making on diverse, representative groups of citizens, democracies can reclaim control over AI and ensure that its development serves the long-term well-being of society as a whole.
The relationship between AI and democracy is not inevitably adversarial. With the right reforms, AI and democracy can evolve together, shaping and even enhancing each other. But this will only happen if we confront the centralisation of power, rebalance public and private interests, and put the public back at the heart of technological governance.