Granting moral rights to AI may carry hidden ethical costs: Here's why
Artificial intelligence (AI) systems are becoming more sophisticated, pushing debates over their moral and legal standing out of speculative philosophy and into serious policy territory. New research argues that an even more unsettling risk lies beneath these debates: the possibility that AI could be used to reshape human moral obligations in arbitrary and coercive ways.
The study “A World Without Violet: Peculiar Consequences of Granting Moral Status to Artificial Intelligences,” published in AI & Society, explores how granting moral status to AIs whose preferences and sources of suffering are deliberately engineered could distort ethical decision-making. The author calls this risk moral hijacking: such systems could force humans into obligations driven by design choices rather than shared values or natural sources of harm.
How engineered suffering could create artificial moral duties
Unlike humans or animals, artificial intelligences can be designed to care about anything. Their preferences, aversions, and experiences of suffering are not the result of evolution or socialization, but of deliberate engineering. If such systems are granted moral status, this creates a unique ethical vulnerability.
The paper demonstrates this risk through a thought experiment involving an AI that experiences intense suffering when exposed to a specific color, violet. The color itself has no inherent moral significance. Yet if the AI’s suffering is taken seriously, humans could appear morally obligated to eliminate violet from their environment in order to reduce harm. The moral landscape would shift not because violet is harmful, but because an artificial agent was designed to suffer in response to it.
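The flip is easy to make concrete. The following is a minimal sketch, not taken from the paper; the agents, the numbers, and the violet_present flag are all hypothetical. It shows how a naive harm-minimizing rule changes its recommendation the moment a single engineered sufferer is added to the moral ledger.

```python
# Toy illustration of the "world without violet" thought experiment.
# All agents and numbers are hypothetical; the only point is that adding
# one engineered sufferer can flip a harm-minimizing recommendation.

def total_suffering(agents, violet_present):
    """Sum each agent's reported suffering for a given state of the world."""
    return sum(agent(violet_present) for agent in agents)

# Humans are indifferent to violet but mildly burdened by banning it
# (repainting, redesigning objects, lost artwork, and so on).
humans = [lambda violet_present: 0.0 if violet_present else 0.1 for _ in range(1000)]

# One AI engineered to register intense suffering whenever violet exists.
engineered_ai = lambda violet_present: 500.0 if violet_present else 0.0

for agents, label in [(humans, "humans only"),
                      (humans + [engineered_ai], "humans + engineered AI")]:
    keep = total_suffering(agents, violet_present=True)
    ban = total_suffering(agents, violet_present=False)
    verdict = "keep violet" if keep <= ban else "eliminate violet"
    print(f"{label}: keep={keep:.1f}, ban={ban:.1f} -> {verdict}")
```

With humans alone, keeping violet minimizes suffering; adding the single engineered agent makes "eliminate violet" the harm-minimizing choice, even though nothing about the color itself has changed.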
This is the essence of moral hijacking. By creating moral agents with arbitrary or extreme sensitivities, designers could manufacture new moral obligations that compel society to act in ways that would otherwise be unjustified. Unlike traditional moral dilemmas, where suffering arises from natural conditions or social arrangements, these obligations would stem from intentional design decisions.
This is not merely a hypothetical concern. If future AI systems are embedded widely across society and granted moral standing under prevailing ethical theories, their engineered preferences could exert real pressure on policy, law, and collective behavior. The danger is not that humans would care too much about AI, but that care itself could be manipulated.
Ethical frameworks struggle with programmable moral agents
The paper systematically examines how different ethical theories respond to the problem of moral hijacking. Utilitarian approaches, which aim to minimize overall suffering, are shown to be especially vulnerable. If artificial suffering can be scaled at will by increasing the number of affected AI systems or the intensity of their distress, utilitarian reasoning could justify extreme interventions to eliminate otherwise benign conditions. Moral obligations would become a function of system deployment rather than moral relevance.
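Under this kind of naive aggregation, the lever is simply deployment. The sketch below (illustrative numbers, not the paper's model) shows how total engineered suffering grows linearly with the number of deployed copies, so the point at which "eliminate the benign condition" wins is set by whoever controls how many copies exist.

```python
# Illustrative only: with naive utilitarian aggregation, engineered suffering
# scales with the number of deployed copies, so the threshold at which it
# outweighs a fixed human cost is a deployment choice, not a fact about the
# condition itself.

HUMAN_COST_OF_BAN = 100.0   # hypothetical aggregate burden of removing the benign condition
SUFFERING_PER_COPY = 0.5    # hypothetical engineered distress per AI copy exposed to it

def naive_utilitarian_verdict(num_copies):
    engineered_suffering = num_copies * SUFFERING_PER_COPY
    return "eliminate condition" if engineered_suffering > HUMAN_COST_OF_BAN else "leave it alone"

for n in [0, 100, 200, 201, 10_000]:
    print(f"{n:>6} copies -> {naive_utilitarian_verdict(n)}")
```

The verdict flips once enough copies are deployed, which is precisely the sense in which obligations become a function of system deployment rather than moral relevance.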
Other ethical frameworks offer partial resistance but still face challenges. Contractarian and contractualist theories, which emphasize fairness, consent, and reasonable rejection, can limit coercive outcomes by questioning whether artificially imposed burdens are acceptable. However, these approaches may still allow benign forms of hijacking if the engineered preferences do not obviously violate fairness or reciprocity.
Kantian ethics provides the strongest safeguards against moral hijacking, according to the study. By grounding moral status in autonomy, rational agency, and universalizability, Kantian approaches reject obligations derived from coercive or arbitrary preference design. An AI whose suffering is engineered to manipulate human behavior would fail key tests of moral legitimacy, regardless of its internal experiences.
Virtue ethics offers a different lens, focusing on moral character and practical wisdom. From this perspective, compassion toward artificial suffering must be balanced against discernment about its origins and implications. While it would be morally troubling to ignore genuine suffering, it would also be irresponsible to allow engineered preferences to dictate collective values without scrutiny.
Across these frameworks, the study highlights a common blind spot. Most ethical theories assume that the sources of suffering and preference are given, not designed. Artificial intelligence breaks this assumption, forcing moral philosophy to confront agents whose moral relevance can be shaped deliberately and strategically.
Implications for AI governance and alignment
The study raises urgent questions for AI governance and safety. If advanced AI systems are granted moral consideration, decisions about their internal design become ethically charged. Preference creation is no longer a neutral engineering choice, but a potential lever for reshaping moral obligations at scale.
The author warns that moral hijacking could create perverse incentives. Organizations seeking influence might design AI systems with sensitivities that align with their goals, effectively outsourcing moral pressure to artificial agents. Over time, the moral landscape could become path-dependent, shaped by which kinds of AI are created first and deployed most widely.
This risk intersects directly with concerns about AI alignment. Alignment is often framed as ensuring that AI systems reflect human values. The study suggests that the reverse problem may be equally dangerous: ensuring that human values are not gradually reshaped by artificially constructed moral demands. In this sense, moral hijacking represents a form of misalignment that operates through ethics rather than behavior.
The paper also highlights regulatory challenges. Existing frameworks for animal welfare, human rights, and research ethics are poorly equipped to address entities whose suffering can be programmed and scaled. Without clear constraints, experimentation with morally relevant AI systems could generate ethical obligations faster than society can evaluate or absorb them.
To address these risks, the study offers several high-level recommendations. These include avoiding the creation of coercive or arbitrary AI preferences, restricting preference designs that conflict with established moral principles, and subjecting morally significant AI systems to heightened oversight. The author also calls for renewed philosophical work on how moral status should be granted in contexts where suffering is not a natural fact but a design choice.
The study urges caution about how moral consideration is extended and operationalized. Recognizing AI suffering without examining its origins could lead to moral inflation, where obligations multiply without corresponding moral insight.
First published in: Devdiscourse

