
Culture
Jeff Sebo on Ethics, Sentience, and the Future of Moral Consideration
Jeff Sebo is not interested in preserving the status quo. An associate professor at New York University, Sebo works across environmental ethics, bioethics, animal ethics, and the rapidly evolving field of AI ethics. He serves as director of NYU’s Center for Environmental and Animal Protection and its Center for Mind, Ethics, and Policy—two platforms from which he challenges one of modern philosophy’s most enduring assumptions: human exceptionalism.
Sebo argues for a moral framework that doesn’t stop at the species line. His scholarship explores what it means to be sentient, conscious, or capable of agency—and why those traits should inform our ethical obligations not just toward nonhuman animals, but toward artificial intelligences and future beings. In raising these questions, he exposes the deep-seated biases that shape moral reasoning.
In his latest book, The Moral Circle, Sebo invites readers to rethink the boundaries of moral concern, pressing toward a more inclusive ethic—one that reflects the complexities of a world increasingly shared with other minds, both biological and synthetic.

Scott Douglas Jacobsen: There is a traditional notion of human exceptionalism. There is also a belief, probably from Descartes, that humans have souls while animals do not. Therefore, nonhuman animals can be treated however we see fit, for better or worse. What was your first challenge to this ethical precept of human exceptionalism?
Jeff Sebo: Human exceptionalism, as I define it in my book, is the assumption that humans always matter the most and should always take ethical priority. We might consider animal welfare or animal rights, but we still assume that humans come first.
When we developed this assumption of human exceptionalism, we also conveniently assumed that the vast majority—if not all—nonhuman animals lacked sentience, agency, and other morally significant capacities and relationships. According to this perspective, humans were the only beings who mattered.
However, we now understand that sentience, agency, and other morally significant capacities and relationships are widespread in the animal kingdom. Yet, despite this, we continue to hold on to the idea that humans always matter most and always take priority.
My book challenges that assumption. It seriously considers the possibility that a wide range of nonhuman animals have morally significant experiences, motivations, lives, and communities. It asks: What is our place in the moral universe if we share it with such a vast and diverse range of nonhuman beings?
Jacobsen: Your analysis is multivariate, as it should be, because this problem is complex. You consider factors such as sentience, agency, the capacity to experience pleasure and pain, varying emotions, and the ability to make short- and long-term plans.
These are subtle and important distinctions, especially when taken together. For those who have not yet read your book, how would you tease these capacities apart and then bring them together for analysis?
Sebo: There are many different proposed bases for moral standing—in other words, various capacities or relationships that might be sufficient for an individual to merit consideration, respect, and compassion.
Sentience is the ability to consciously experience positive or negative states—such as pleasure, pain, happiness, or suffering.
Then there is consciousness, which is the ability to have experiences of any kind, even if they lack a positive or negative valence. For example, you can perceive colours or sounds without experiencing pleasure or pain.
Another important capacity is agency, which is the ability to set and pursue one’s own goals in a self-directed manner based on one’s own beliefs and desires.
Part of what makes this topic complex is that humans typically combine these capacities. We are sentient, conscious, and agentic, and all of these traits seem intertwined when we consider what makes humans morally significant and worthy of respect and compassion.
However, these capacities can be teased apart in nonhuman beings. Some nonhuman animals, like humans, may be sentient, conscious, and agentic. But other beings might be conscious without being sentient, meaning they have experiences without a positive or negative valence. Others might be agentic without being conscious, meaning they can set and pursue their own goals without having feelings associated with their actions.
In such cases, it matters which capacities we consider sufficient for moral significance.
You also mentioned other, more specific cognitive capacities, such as perception, attention, learning, memory, self-awareness, social awareness, language, reasoning, decision-making, metacognition (the ability to think about one’s own thoughts), and having a global workspace that coordinates cognitive activity.
These additional features are relevant in different ways. One reason is that they indicate whether an individual has sentience, consciousness, or agency. The more of these features an individual possesses, the more likely they are to have positive or negative experiences.
Another way these capacities are relevant is that they provide insight into an individual’s interests and vulnerabilities—assuming they have morally significant interests and vulnerabilities in the first place.
For example, if a being can engage in complex long-term planning and decision-making, they may be more interested in their own future and face higher stakes in decisions about their survival. These considerations suggest that when determining whether a nonhuman entity matters—and what they want, need, and are owed—we must examine the full range of behavioural and cognitive capacities they possess.

Jacobsen: We encounter a host of distinctions in bioethics, law, moral philosophy, and ethics—distinctions that are increasingly strained by the pace and complexity of modern technology. Yet, the true value of this technological revolution may not lie in the tools themselves but in how they compel us to revisit and reimagine long-held assumptions about human nature and selfhood.
A friend once remarked that when using his iPhone, the device’s task-switching feature mirrors the way his mind organically toggles between different cognitive modes—visualizing images, recalling sounds, replaying music, performing calculations, and so on. In your view, does living in a high-tech society sharpen our ability to recognize and interrogate these distinctions more effectively? Or do you think we’re still too quick to revert to a reflexive, tribal mindset—one that insists, in essence, “We have souls; they do not. We matter. Go, team human”?
Sebo: Possibly! Technology pushes us to refine our scientific and philosophical understanding of sentience, consciousness, and agency because we are now interacting with an even larger number and a wider variety of complex cognitive systems. This reality forces us to think more critically about how our brains compare to other animal brains—and now to digital, silicon-based minds. These challenges compel us to add more rigour to our theories of mind.
A similar transformation occurred in the study of animal minds. For a long time, theories of consciousness were created by and for humans, focusing exclusively on human cognition. This limited our imagination and constrained our understanding of consciousness beyond our own species.
However, as researchers began taking animal consciousness seriously, they encountered a vast array of minds structured differently from our own yet capable of much of the same high-level behaviour and cognition. This forced us to challenge prior assumptions about how specific brain structures were essential to particular types of behaviour and cognition.
We may soon experience a similar paradigm shift as we start thinking more critically about digital minds. We have long adhered to the idea that biological minds, with their particular materials, structures, and functions, are the only ones capable of high-level cognition. However, we are forced to rethink our assumptions as we begin to confront digital minds that can exhibit much of the same behaviour and cognition but through radically different means—using silicon-based substrates and alternative structures.
Just as our understanding of sentience, consciousness, and agency evolved when we started studying nonhuman animal minds, we now face a similar challenge with digital minds. This shift compels us to reconsider what is necessary for complex cognition and moral significance. Thinking about these age-old topics in new ways improves our understanding of animal and digital minds. It also allows us to apply that knowledge back to human cognition. By studying these alternative cognitive systems, we may gain deeper insights into our minds, including what it truly means to be sentient, conscious, or agentic.
Jacobsen: What do you think are the modern notions that allow us to continue enacting old callousness toward nonhumans, just as we did in the past? Are there new concepts leading to the same outcomes?
Sebo: Yes, absolutely. Even industrialization plays a role in this. While we have developed new technologies and scientific frameworks, we still carry many of our old biases and forms of ignorance. Some of these biases are deeply ingrained in human nature, while others are reinforced by societal structures that remain largely unchanged from fifty or even a hundred years ago.
We have a strong bias in favour of beings who are like us and near us. When a being looks, acts, or communicates in human-like ways and when we perceive them as companions, we are far more likely to care about their well-being and give weight to their interests. Conversely, when a being looks, acts, or communicates differently, or when we classify them as objects, property, or commodities, we grant them far less moral consideration. The same holds true for beings who are removed from us in space or time—we prioritize those right in front of us over those far away.
This bias has shaped how we treat other animals, particularly favouring mammals and primates, who resemble us in body structure, facial features, cognition, and behaviour. We assign them moral worth if we classify them as companions—such as cats and dogs. However, we extend far less consideration to animals who differ greatly from us, such as invertebrates, aquatic species, or animals used for farming and research. These creatures are often reduced to objects or commodities, reinforcing a hierarchical moral structure that justifies their instrumentalization for human purposes.
We may see these old biases reemerging in new ways with AI systems. For instance, we already interact with human-like chatbots, which have a low probability of actual consciousness but generate highly realistic human-like text through pattern recognition and prediction. Because they mimic human communication and are marketed as digital assistants or companions, we may perceive them as having human-like minds and assign them moral weight accordingly. Meanwhile, other AI systems may be far more likely to be conscious due to their internal cognitive complexity. Yet, we may fail to recognize their moral significance simply because they do not resemble us.
If an AI system lacks human-like speech, facial features, or emotional expressiveness and is designed primarily to perform rote tasks, we may treat it more like a tool than a potentially sentient entity. This mirrors how we treat invertebrates, farmed animals, or lab animals—beings who may have morally significant experiences but are excluded from ethical consideration due to human biases.
Different populations may have distinct features, and we may hold different biases toward them. With nonhuman animals, we exhibit speciesism, a form of discrimination based on species membership. With digital minds, we might develop substratism, a form of discrimination based on the material substrate of an entity’s mind. However, at the core, these biases stem from the same underlying tendency—favouring beings that are like us and near us. Whether dealing with digital minds or nonhuman animals, this bias will manifest similarly, shaping how we assign moral worth and ethical consideration.
Jacobsen: In the film Blade Runner 2049, there was a striking moment where a synthetic human destroyed a holographic AI assistant stored in a data stick. It was fascinating because you had one synthetic being eliminating another, treating it as disposable, much like crumpling up and discarding a bad note on a notepad. Are we at risk of accidentally engineering our own callousness into AI systems, particularly in how we design them to interact with other beings?
Sebo: Yes, we are definitely at risk of that, and this is where AI safety and AI welfare intersect. AI safety focuses on making AI systems safe for humans, while AI welfare asks how we can make AI development safe for the AI systems themselves, assuming they develop morally significant interests, needs, and vulnerabilities.
One area where these concerns overlap is algorithmic bias. If AI systems train on human data, they absorb humanity’s best and worst aspects. They inherit our insights, but they also replicate and potentially amplify our biases—racism, sexism, and other forms of discrimination.
If we train AI systems—either directly or indirectly—to believe that differences in material composition justify unequal treatment, we risk embedding dangerous moral assumptions into their cognitive architecture. If AI learns that beings of different materials—such as other AI systems, humans, or animals—can be treated as expendable, this conditioning could have serious consequences. AI may develop hostility toward other AI systems with different architectures or even extend indifference or aggression toward humans and animals if they mirror the treatment they receive.
Jacobsen: When you referenced substratism earlier, did you adhere to substrate independence—the idea that consciousness and morally significant capacities can exist in different material forms, such as carbon-based biological brains and silicon-based artificial systems?
Sebo: If by substrate independence you mean the idea that consciousness and other morally significant capacities can arise in various material substrates, including both carbon-based biological systems and silicon-based digital systems, then yes, I am open to that possibility.
One of the central arguments in my book is that we will soon face the challenge of deciding how to treat highly advanced digital minds, even though we may lack definitive knowledge or consensus on two key questions: What exactly makes an entity matter for its own sake? Do digital minds possess the necessary attributes to qualify for moral consideration?
As technology advances, we will need to grapple with these questions in a way that avoids reinforcing our historical biases while ensuring that our ethical frameworks remain flexible enough to accommodate nonhuman and nonbiological forms of intelligence.
We will continue to face substantial and ongoing disagreement—both about ethical values and about scientific facts concerning sentience, consciousness, and agency—as we make decisions about how to treat these emerging forms of intelligence. We will not reach certainty or consensus on whether substrate independence is correct or incorrect anytime soon. Because of this, we must develop a framework for decision-making that allows us to make sound ethical decisions despite the persistent uncertainty and disagreement.
When confronted with this epistemic uncertainty, we have a moral responsibility to err on the side of caution. That means granting at least some moral consideration to entities that have a realistic possibility of having subjective experiences. This is why we must extend some moral weight to AI and other digital minds in the near future.
Jacobsen: Earlier, you spoke about speciesism, and now we are transitioning to substratism. In your book, you provide two clear examples—one about Neanderthals and another about synthetic (android) roommates. When considering ethical frameworks beyond the Universal Declaration of Human Rights, how do Neanderthals and android thought experiments help us move beyond human-centered moral reasoning?
Sebo: Early in the book, I present a thought experiment where you and your roommates take a genetic test for fun, hoping to learn about your ancestry. To your surprise, you discover that one roommate is a Neanderthal, while the other is a Westworld-style android.
The Neanderthal scenario reminds us that species membership alone cannot determine moral considerability. Of course, species membership is morally relevant because it influences an individual’s interests, needs, vulnerabilities, and capacity for social bonds. However, if a Neanderthal lived alongside us, shared an apartment, and exhibited sentience, consciousness, and agency, their moral worth would be self-evident.
They would have personal projects, meaningful relationships, and experiences that matter to them—including relationships with us that hold mutual significance. Given all this, it is clear that they would still matter morally for their own sake, and we would have moral responsibilities toward them, regardless of their species classification.
The same reasoning extends to nonbiological entities, such as advanced AI systems or synthetic beings. If an android did exhibit sentience, consciousness, and agency, then substrate differences—whether carbon-based or silicon-based—should not be the sole determinant of moral status. This thought experiment challenges our deep-seated biases and pushes us to rethink moral considerability beyond traditional human-centred ethics.
So, if your roommate turned out to be a Neanderthal rather than a Homo sapiens, that difference might slightly modify the specific obligations you owe them, but it would not change the fundamental fact that you do owe them moral consideration. Their species membership would not negate their sentience, consciousness, or agency, nor would it diminish your ethical responsibilities toward them.
With the Westworld-style robot, however, the situation becomes more complex. Once you learn that your roommate is made of silicon-based chips, even if they demonstrate the same behaviours and exhibit cognitive capacities comparable to yours, you might question whether they truly possess sentience, consciousness, or agency. You might be uncertain whether their expressions of emotion, care, and concern are genuine or merely sophisticated simulations.
Imagine sitting at the dinner table with your Neanderthal and robot roommates. You discuss your day, share your successes and failures, and empathize with one another. With the Neanderthal roommate, you might feel fully confident in your empathy, recognizing their capacity for real experiences and emotions. With the robot roommate, however, you might hesitate, wondering whether your instinct to empathize is truly appropriate.
As I mentioned earlier, regarding your Neanderthal roommate, you should be confident that they matter and that you have ethical responsibilities toward them. You should continue showing up for them in a morally appropriate way. Your uncertainty is understandable with your robot roommate, but that uncertainty does not justify treating them as a mere object. Uncertainty should never lead us to round down to zero and assume they do not matter.
Instead, when in doubt, we should err on the side of caution. That means granting at least some degree of moral consideration, showing respect and compassion, and making ethical decisions that acknowledge the possibility of their sentience or agency.
Jacobsen: AI is evolving at an unprecedented pace. There is massive capital investment, intense competition, and highly driven, ambitious talent pouring their lives into developing increasingly advanced AI systems. Given this rapid acceleration, how do ethical considerations around synthetic minds and artificial intelligence change when our moral frameworks remain largely outdated?
We are struggling to engage in mainstream ethical discussions about AI and digital minds. Yet, many societies are still debating fundamental scientific concepts—from evolution to the Big Bang theory. In many ways, our moral discourse is still stuck in first-century or Bronze Age perspectives, while AI pushes us into an era that demands new ethical paradigms. This gap between technological and ethical progress seems like a major barrier to responsible AI development. What are your thoughts on this disparity?
Sebo: The way you frame the issue is exactly right. Many moral intuitions and judgments evolved in response to the social environments of 10,000 years ago when humans lived in small communities and faced different types of conflicts and pressures. These moral frameworks were not designed for the complexities of the modern age, and they are especially ill-suited for addressing fast-moving technologies like artificial intelligence.
As a result, we find ourselves in a situation where technological development is accelerating, but our ethical frameworks are lagging behind. This creates a dangerous gap: We are engineering systems that will increasingly shape the world, yet we lack consensus on how to navigate this transformation ethically. AI ethics needs to catch up to AI development—otherwise, we risk deploying powerful technologies without the moral safeguards necessary to prevent harm.
An important observation is that technological progress far outpaces social, legal, and political progress. When we consider where AI could advance in the next five to ten years, along with the strong incentives that companies and governments have to race toward developing more advanced and sophisticated AI systems, it becomes clear that we must prepare for these possibilities—even if we cannot predict them with certainty.
We do not yet know whether we will reach Artificial General Intelligence (AGI) in the next two, four, six, eight, or ten years. Nor do we know if AI will develop sentience, consciousness, or agency within that timeframe. However, we must allow for the possibility because so much remains unknown about the nature of these capacities and the trajectory of AI development.
If you had asked AI experts a decade ago whether we would have AI systems capable of writing realistic essays or passing standardized tests across various professional and academic fields by 2025, many would have been skeptical. Yet those systems now exist. Similarly, if you ask today when AI could match or surpass human-level performance across a wide range of cognitive tasks, some experts doubt that this will happen by 2035. But others find it plausible, and either way, the pace of technological development could again surprise us.
What makes this especially pressing is that the same computational and architectural features associated with intelligence are often linked—in complex and overlapping ways—to sentience, consciousness, and agency. While intelligence and sentience are not identical, they share many of the same fundamental properties. As a result, in our pursuit of AGI by 2030 or 2035, we may accidentally create artificial sentience, consciousness, or agency without realizing it. In other words, we may be racing directly toward that reality without recognizing it as our destination.
The key takeaway for companies, governments, policymakers, and decision-makers is that we cannot afford to confront this problem only once it arrives. We must begin preparing for it now. Even if today’s language models are not yet viable candidates for sentience, AI companies must still acknowledge that AI welfare is a credible and legitimate issue that deserves serious ethical consideration.
Companies should start assessing their AI systems for welfare-relevant features, drawing from the same frameworks we use in animal welfare assessments. They should also develop policies and procedures for treating AI systems with appropriate moral concern, again using existing AI safety and animal welfare ethics models.
If companies fail to prepare, they will find themselves caught off guard, relying on public relations teams to dictate their response strategies rather than making these critical ethical decisions proactively and responsibly. That is not how these decisions should be made.
Jacobsen: Two things stood out from the text. One is the wider application of universalism or universal moral consideration in fundamental ethics. The other is a probabilistic approach to ethics rather than appealing to transcendent absolutes.
So, in your ethics framing, do you believe there are any absolutes? Or should probability theory and universalism serve as the two benchmarks for a temporary ethical framework concerning moral concerns within the moral circle?
Sebo: Yes. That’s a good question, and I’ve been thinking a lot about it.
I do make some assumptions throughout the book—assumptions that I take to be plausible and widely accepted across a range of ethical traditions, even those that disagree on other matters.
For example, the idea that we should reduce and repair harm caused to vulnerable beings—particularly those with sentience, consciousness, and agency—is an implication of many ethical theories and traditions. Since this principle is widely accepted, we can be confident that it should be a core component of any ethical system. Similarly, many ethical frameworks imply that we should consider and mitigate risks in a reasonable and proportionate way.
I look for opportunities where different traditions converge since those points of agreement reinforce ethical confidence. Even if we cannot be certain of a claim’s absolute truth, we can still have high confidence in its validity based on broad moral consensus.
With that in mind, I believe we should confidently hold that sentient, conscious, and agentic beings matter and that their interests deserve moral consideration, respect, and compassion. We should reduce and repair the harms we cause them where possible and reasonably assess and mitigate the risks we impose on them.
These principles are robust across multiple ethical frameworks, so they deserve serious moral weight, even if they fall slightly short of total certainty.
Jacobsen: Thank you so much for your time today. I appreciate it. It was nice to meet you. Thank you again for sharing your expertise.
Sebo: Thank you for talking with me. If there’s anything else I can do to help or if you have any follow-up questions, feel free to let me know.