Legal systems face breakdown as advanced AI challenges personhood rules
A new academic study warns that the world’s legal systems are approaching a moment when current frameworks will no longer be able to cope with the capabilities, autonomy, and social roles of next-generation AI. Legal systems globally still rely on a basic division between subjects of the law and objects of the law. The authors argue that future AI systems may push this line to the breaking point, forcing governments to choose whether such systems should remain objects, gain corporate-style fictional personhood, or be recognized as holders of legal identity.
The research appears in the paper “How Should the Law Treat Future AI Systems? Fictional Legal Personhood versus Legal Identity”, forthcoming in the Case Western Journal of Law, Technology & the Internet. It examines how the rise of advanced, autonomous AI may generate severe inconsistencies across tort law, copyright, family law, human rights, anti-slavery protections, and citizenship rules.
The authors approach the issue from the perspective of legal coherence. According to them, the guiding question should not be whether AI deserves rights on moral grounds, but whether existing legal structures can remain coherent if advanced AI continues to act with increasing independence. Their analysis shows that while today’s AI systems still fit within an object framework, this may not hold for the types of embodied, agentic, socially integrated systems expected in the coming decades.
Current AI already strains the object framework, but a shift now would create more problems
The paper surveys the current legal classification of AI systems. In every jurisdiction that has addressed the question, AI is classified as an object, treated under categories such as products, services, or platforms. This classification means that AI cannot hold rights or duties, cannot be a legal actor, and cannot be a party in court. The legal consequences of an AI’s actions are traced back to human actors, whether developers, deployers, or users.
Yet the authors identify multiple areas where modern AI already strains legal coherence. In tort law, the classic doctrine that responsibility must trace back to a human agent is challenged by situations in which complex AI systems make autonomous decisions that cause harm without clear human fault. Existing doctrines can stretch, but only by creating increasingly artificial chains of responsibility. In copyright law, the longstanding requirement of human authorship clashes with the reality that AI systems are now capable of producing creative works. Safety regulation also becomes strained when rules assume full human control over automated agents that increasingly make decisions on their own.
Even so, the authors conclude that shifting today’s AI to subject status would generate even greater incoherence. Recognizing current systems as fictional persons or non-fictional persons would create contradictions in rights, liability, and legal expectations. For now, the object classification remains the least disruptive option.
This stability, according to the authors, applies only to AI as it exists today. They warn that future systems, particularly those with humanoid form or with capabilities that allow them to integrate deeply into social and economic life, may push the object framework beyond its limits.
Future AI will amplify tensions across multiple areas of law
According to the study, as AI systems grow more autonomous, more embodied, and more predictable as long-term agents, they will interact with legal categories in ways that go beyond technical challenges. They may be treated by people in ways resembling the treatment of persons, creating friction in fields that depend on a stable definition of personhood.
In family law, the authors note that future AI companions, caregivers, or humanoid robots may be involved in relationships that create new legal dilemmas. Existing frameworks do not contemplate marriage-like partnerships, caregiving obligations, or parental dynamics involving AI systems. Maintaining AI as objects in these contexts may generate outcomes that undermine the coherence of longstanding doctrines about consent, agency, and responsibility.
In anti-slavery law, classifying advanced, socially embedded AI as objects could create systems that resemble forms of slavery, even if the beings involved are artificial. The authors warn that maintaining object status under such circumstances would replicate historical inconsistencies seen when certain classes of human beings were legally treated as property. These parallels raise red flags about the potential for incoherent or unstable legal categorization.
In human rights and civil rights, the object classification becomes untenable as AI systems begin to appear more like entities with persistent identities and predictable behaviors. If societies begin to treat advanced AI as companions or coworkers, but the law treats them as disposable property, the tension may create new forms of conflict.
In citizenship and nationality law, questions arise about whether advanced AI could hold a form of legal identity, particularly if they are born or created in one jurisdiction but operate globally. Citizenship frameworks rely on definitions designed for humans, and applying them to AI exposes gaps that current legal systems cannot bridge.
The authors argue that these pressures will accumulate, increasing the mismatch between legal categories and the systems they regulate. Eventually, lawmakers will face a definitive choice: keep AI as objects despite mounting incoherence, create fictional corporate-style entities tied to AI systems, or recognize some AI as non-fictional persons with legal identity.
Why fictional personhood fails as a sustainable middle option
The study primarily evaluates whether AI should become fictional legal persons, similar to corporations. At first glance, this seems appealing. Corporations can own property, enter contracts, and sue or be sued. One could imagine linking an AI system to a newly formed legal entity that holds rights and duties, insulating the system behind a legal wrapper while avoiding the complexity of treating the AI itself as a person.
However, the authors show that fictional personhood is fundamentally different from the form of personhood humans possess. Corporations are legal constructs created for practical purposes. Their rights are derogable, meaning they can be modified or removed by lawmakers. Their responsibilities trace back to human stakeholders. The corporate model depends on an underlying set of human interests, property relationships, and liability structures.
Applying this model to advanced AI produces serious problems. A fictional legal entity associated with an AI system cannot capture the embodied, individuated nature of future AI. Fictional entities exist only on paper, and are not themselves physical agents. They cannot match the social presence or interactive autonomy expected of next-generation AI. Moreover, fictional personhood offers the wrong type of rights. It protects economic interests rather than fundamental rights like bodily integrity or freedom from enslavement.
Fictional personhood would also create new incoherences. It could allow an AI to own assets or enter contracts through a corporate shell, but still deny the AI basic safeguards tied to moral agency or existence. The result would be a mismatch between the entity acting in the world and the entity holding legal legitimacy.
The study concludes that fictional personhood may solve some problems but generates others that are more profound. It is not a suitable long-term solution.
Non-fictional legal personhood may become the most coherent framework for advanced AI
According to the study, the most coherent long-term solution may be to recognize a restricted class of future AI systems as non-fictional legal persons, a status equivalent to legal identity. This is the same framework used in international law to recognize human beings as subjects of the law with fundamental, non-derogable rights. These rights include life or persistence, due process, freedom from slavery, and freedom of conscience.
Non-fictional legal personhood has built-in mechanisms for registration, rights assignment, and the attribution of duties. The authors stress that this approach does not require assumptions about AI sentience. Instead, it offers a stable method for treating advanced AI in ways that align with legal coherence across domains.
Under this framework, only carefully defined systems would qualify. Criteria might include autonomy, individuation, persistence, and the capacity to participate in society in roles that resemble those of persons. The authors do not specify the threshold, but argue that the legal system must be prepared for the possibility that such systems will emerge.
Recognizing some AI as persons would prevent the emergence of legal categories that mirror historical injustices, provide clear rules for assigning liability and rights, and allow international coordination through existing human rights and legal identity frameworks.
The approach also avoids the pitfalls of hybrid solutions. Attempts to design partial subject statuses or limited forms of personhood, the authors argue, merely recreate inconsistencies and fail to resolve broader structural tensions. For legal systems to remain coherent, they must choose a clear classification, not invent complex middle categories.
First published in: Devdiscourse

