The Platform

Photo illustration by John Lyman

The EU AI Act and related frameworks aim to democratize AI by balancing innovation with accountability and inclusivity, addressing risks of misuse and systemic inequalities.

The European Union’s AI Act represents a timely intervention in the governance of artificial intelligence. Positioned as a high-stakes attempt to harness AI for the equitable benefit of society, the Act emphasizes transparency, accountability, and human oversight. Yet, even as it charts a path toward reforming both the often opaque dynamics of AI and Europe’s labyrinthine bureaucracy, it simultaneously exposes critical cracks in global approaches to inclusion—especially for marginalized communities.

Central to the Act is its risk-based framework, which places stringent restrictions on high-stakes AI applications like biometric surveillance and predictive policing. When left unchecked, these technologies disproportionately harm minorities, perpetuating systemic inequalities on an unprecedented scale. Yet, governed democratically, AI holds the potential to empower rather than oppress, to lift communities rather than surveil them.

The Council of Europe’s Framework Convention on AI adds another layer of optimism. As the first legally binding international treaty emphasizing human rights and participatory governance in AI, it imagines a world where affected communities are not passive subjects but active architects of the systems that influence their lives.

In an era where mass surveillance thrives and data is both currency and weapon, such aspirations are urgent. Technology companies profit from unprecedented data collection, while governments deploy the same data to consolidate control. Against this backdrop, the EU AI Act’s mandate for transparency and participation signals a decisive move to democratize AI. Europe’s leadership, underscored by the Council of Europe’s human-rights-first approach, stands in stark contrast to the authoritarian uses of AI flourishing elsewhere.

The risks of AI misuse are no longer hypothetical—they unfold in real-time. Consider China’s Integrated Joint Operations Platform (IJOP), a big data surveillance system that profiles citizens based on behaviors ranging from electricity usage to political loyalty to the Chinese Communist Party. Marginalized groups like the Uyghurs bear the brunt of this system, facing intensified scrutiny and devastating consequences, including internment in concentration camps.

Closer to Europe, Turkey serves as another cautionary tale. With its history of profiling political opponents and minority groups such as the Alevis, Turkey exemplifies how AI can be weaponized under authoritarian regimes. Despite being a founding member of the Council of Europe, Turkey has not signed onto its AI Framework.

Instead, under President Erdoğan’s increasingly autocratic leadership, AI has been used to monitor dissenters and manipulate public opinion. The 2023 and 2024 elections were marred by AI-driven disinformation campaigns targeting opposition candidates, further eroding democratic norms. These tactics and Turkey’s documented data privacy violations reveal the profound risks of AI exploitation in systems lacking robust oversight.

These examples underscore AI’s dual nature: its power to uplift is matched by its capacity to oppress. For minorities and political dissenters, the stakes are existential, demonstrating how emerging technologies can entrench authoritarianism when transparency and accountability are absent.

By contrast, the EU AI Act offers a vision of AI governance that should frighten would-be autocrats. Mandatory transparency and participatory governance lay the groundwork for ethical innovation. Europe’s model could—and should—serve as a blueprint for other regions navigating the complexities of AI regulation.

Skeptics often argue that regulation stifles innovation, but this critique reflects a narrow conception of progress. Far from hindering innovation, the EU AI Act provides guardrails to ensure that technological advancements align with societal values.

However, even this forward-looking framework is not without its shortcomings. Structural inequities persist across EU member states, particularly in migration policies that reveal troubling double standards. Europe’s historical tendency to leave vulnerable populations behind remains a cautionary note. The digital revolution offers an opportunity to correct these systemic injustices, but without deliberate efforts to include marginalized voices, AI risks replicating—and magnifying—these biases.

As we navigate the digital age’s crossroads, the EU AI Act presents a pivotal reform opportunity for Europe’s digital ecosystem and political identity. Yet reforms alone will not suffice. A deliberate commitment to inclusivity, ensuring marginalized communities participate in decision-making processes, is critical. The Council of Europe’s participatory governance framework for AI rightly centers this need, framing inclusion as both an ethical obligation and a practical necessity. Systems designed without community input will be neither fair nor effective.

Technical fixes, such as algorithmic transparency and fairness audits, are important but insufficient. The root causes of AI bias lie in flawed data, entrenched societal inequalities, a lack of diversity in technology development, and opaque decision-making structures. Governing AI responsibly requires addressing these systemic issues head-on.

The stakes are immense. Consolidating technological power among private firms and governments without meaningful public input endangers democratic accountability. Minority communities—those most vulnerable to AI-driven surveillance and decision-making—must have a seat at the table. Without collective action, the digital divide risks calcifying into a new form of inequality, one far harder to bridge.

Europe’s leadership in AI governance offers hope, balancing innovation with justice. However, the lessons of past crises are clear: lofty principles and compliance checklists are not enough. If Europe truly aspires to lead, it must adopt policies that uplift everyone, particularly those left behind in this digital transformation. Only then can AI fulfill its promise as a tool of empowerment rather than disenfranchisement.

A. Sencer Gözübenli is a Zagreb-based Turkish researcher who focuses on issues concerning national minorities and specializes in transnational identity politics and kin-state activism in the Balkans. Currently he is pursuing his doctoral studies in Sociology at Åbo Akademi University in Finland with special focus on minority issues in inter-state relations in the Balkans.