Tech
The Humanist Case for Governing Artificial Intelligence
“It’s quite conceivable that humanity is just a passing phase in the evolution of intelligence,” Geoffrey Hinton has warned. Marvin Minsky offered a more paternal vision: “Will robots inherit the earth? Yes, but they will be our children.” And Isaac Asimov, characteristically unsentimental, insisted, “I do not fear computers. I fear the lack of them.” Those three perspectives capture the unsettled mood of the age. Artificial intelligence inspires awe, anxiety, and pragmatic acceptance all at once. What no longer inspires is doubt about its significance. AI is not arriving; it has arrived.
Large language models, once obscure research projects, now draft legal briefs, summarize medical records, compose music, and simulate conversation at scale. Behind them lies a broader ambition: artificial general intelligence and, perhaps one day, artificial superintelligence. Whether those milestones are near or distant is beside the point. The systems already deployed are reshaping labor markets, information ecosystems, and political life. They determine what we see, what we read, and increasingly what we believe. The ethical scaffolding meant to govern such tools has struggled to keep pace.
It is against this backdrop that Humanists UK proposed the Luxembourg Declaration on Artificial Intelligence and Human Values at the 2025 General Assembly of Humanists International. The Declaration does not read like a technical white paper. It reads like a moral intervention. Its premise is that AI represents a turning point in human history and that the speed of its development now outstrips the ethical and regulatory systems designed to guide it. When technological capacity accelerates faster than democratic oversight, risks multiply: the erosion of freedoms, the concentration of power, and the destabilization of democratic norms.
The Declaration responds with ten principles. They are rooted in humanist values such as reason, compassion, dignity, and freedom, but they are directed squarely at contemporary policy dilemmas. One of the most urgent is the preservation of human judgment. AI, the Declaration argues, should assist rather than replace human ethics, responsibility, and reason. This is not an abstract philosophical claim. In courtrooms, hospitals, and hiring platforms, algorithmic recommendations are increasingly treated as neutral or even superior to human deliberation. Yet algorithms do not bear moral responsibility. They are trained on historical data and optimized for measurable outputs. They cannot weigh justice in the way human communities must.
The emphasis on the common good follows naturally. AI systems should serve humanity broadly rather than enrich a narrow elite. The industrial revolutions of the past produced immense wealth but also deep inequality. Without deliberate intervention, AI may replicate that pattern. The Declaration calls for shared prosperity, investment in education, and protections that ensure workers are not simply displaced and discarded. Technological progress does not guarantee social progress. The latter requires policy choices.
Democratic governance occupies another central place. The development and deployment of advanced AI are concentrated in a handful of corporations and states. That concentration brings efficiency and scale, but it also raises the specter of unaccountable power. The Declaration insists on accountability at every level rather than corporate or state control insulated from scrutiny. It calls for ethical review boards, independent oversight bodies, and multi-stakeholder governance frameworks capable of embedding safeguards into AI systems from the outset. Transparency and autonomy are essential in this vision. Citizens should understand how AI systems affect them, and data protection must be robust enough to preserve meaningful consent.
The risks extend beyond governance structures to the texture of everyday life. Protection from harm includes preventing discrimination, manipulation, and violence facilitated by AI systems. Generative models can flood the public sphere with misinformation at unprecedented speed. Deepfakes and automated propaganda can blur the line between truth and fabrication. The Declaration pairs a defense of reason, truth, and integrity with a commitment to free inquiry. Democracies depend on both. Efforts to curb misinformation must not become tools for suppressing dissent, yet inaction can allow deception to corrode public trust.
The document also addresses creators and artists, whose work forms the raw material for many generative systems. Fair compensation and recognition are not sentimental add-ons; they are structural necessities. If human creativity is treated as an inexhaustible resource to be scraped and repackaged without consent, cultural ecosystems will suffer. Protecting creators affirms the value of human expression in an age when machines can mimic it convincingly.
Future generations enter the frame as well. AI’s environmental footprint, its long-term safety risks, and its potential to alter human civilization demand a temporal perspective that extends beyond quarterly earnings and election cycles. Intergenerational justice requires that present gains not mortgage the future. The Declaration’s final emphasis on human freedom and flourishing underscores its broader ambition: AI should expand knowledge, leisure, happiness, and progress, not constrict them.
Importantly, the Luxembourg Declaration does not claim to offer definitive answers. It is not an oracle or a manifesto disguised as scripture. It is a framework for navigating uncertainty. Its grounding in humanist values distinguishes it from purely market-driven approaches, which prioritize profit, and from authoritarian models, which subordinate technology to centralized political control. By contrast, the Declaration situates AI governance within a tradition that affirms universal human rights and aligns with instruments such as UNESCO’s recommendations on AI ethics and the Universal Declaration of Human Rights.
It also gestures toward global asymmetries. The infrastructure of AI is concentrated in the Global North, and the benefits of innovation risk accruing to those already advantaged. A humanist framework insists that the fruits of technological progress be shared and that international cooperation bridge divides rather than entrench them.
The Declaration ultimately calls on governments, corporations, civil society, and individuals to adopt its principles through concrete policies and international agreements. This appeal recognizes that AI governance cannot be left to engineers alone. It is a civic endeavor. The systems being built will shape education, labor, art, warfare, and democracy itself. Decisions about their design and oversight are therefore political in the broadest sense.
Hinton’s caution, Minsky’s optimism, and Asimov’s realism each capture a facet of the AI debate. The Luxembourg Declaration attempts to hold those facets together without collapsing into fatalism or naïveté. AI carries risks and promises in equal measure. Whether it erodes freedom or enhances it will depend less on technical breakthroughs than on the values embedded in its governance.
In that sense, the document is not primarily about machines. It is about responsibility. Intelligence, artificial or otherwise, does not absolve human beings of moral agency. If AI transforms the conditions of life in the twenty-first century, then the ethical frameworks we construct around it will help determine whether that transformation deepens inequality and authoritarianism or advances human dignity and flourishing. The technology is new. The question it poses is not.