Who’s to blame when AI drives? Accountability crisis in self-driving vehicle accidents
Self-driving vehicles are moving closer to mainstream adoption, but a new legal challenge is accelerating alongside them: determining who is responsible when an autonomous car causes harm. A recent study by Dorottya Biczi of the Doctoral School of Law and Political Sciences at Széchenyi István University, Hungary, explores this key issue, warning that existing legal systems are unprepared for the ethical and civil consequences of automation.
Published in Engineering Proceedings, the study “Legal Challenges for Automated Decision-Making in Self-Driving Vehicles—Liability Issues and Remedies” provides one of the most comprehensive examinations yet of how laws must evolve to keep pace with artificial intelligence on the roads.
Who is responsible when no one is driving?
The study explores a key question troubling regulators worldwide: when a fully autonomous vehicle causes an accident, should liability fall on the human owner, the manufacturer, or the artificial intelligence system itself?
Biczi outlines the internationally recognized six levels of automation, ranging from Level 0 (fully human-controlled) to Level 5 (completely autonomous). She explains that while today’s semi-automated vehicles at Levels 2 and 3 still require human oversight, Levels 4 and 5 pose unique legal dilemmas: at these higher levels, human intervention becomes optional or even impossible.
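As a rough, non-authoritative illustration of that taxonomy, the levels and the oversight question can be sketched as a small lookup in Python; the level names and the oversight cut-off below are simplifications mirroring the article’s description, not definitions taken from the study or the SAE standard.

```python
# Minimal sketch: the six automation levels as a simple lookup.
# Names and the oversight cut-off are illustrative simplifications.
from enum import IntEnum

class AutomationLevel(IntEnum):
    NO_AUTOMATION = 0          # Level 0: fully human-controlled
    DRIVER_ASSISTANCE = 1      # Level 1: single assisted function
    PARTIAL_AUTOMATION = 2     # Level 2: combined functions, driver monitors
    CONDITIONAL_AUTOMATION = 3 # Level 3: system drives, human takes over on request
    HIGH_AUTOMATION = 4        # Level 4: no intervention needed in defined conditions
    FULL_AUTOMATION = 5        # Level 5: completely autonomous

def human_oversight_required(level: AutomationLevel) -> bool:
    """Levels 0-3 keep a human in the driving task; at Levels 4-5
    human intervention becomes optional or impossible."""
    return level <= AutomationLevel.CONDITIONAL_AUTOMATION

for lvl in AutomationLevel:
    print(f"Level {int(lvl)}: oversight required -> {human_oversight_required(lvl)}")
```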
Traditional legal frameworks assume the presence of a human driver capable of error. However, once a vehicle’s decision-making depends entirely on complex algorithms, the concept of driver liability collapses. Biczi notes that current laws were written for mechanical systems, not for self-learning, adaptive AI capable of making independent decisions.
This raises a profound ethical issue: if a self-driving car causes an accident due to a software fault or machine-learning error, no human actor directly caused the harm. The question of accountability becomes blurred across manufacturers, developers, and even data suppliers involved in training the vehicle’s neural networks.
This legal ambiguity creates a dangerous accountability gap, leaving victims with little recourse and undermining public trust in autonomous technology.
When AI makes the call: The ethical and criminal dilemma
The study also examines the moral and criminal implications of automated decision-making. Self-driving cars are programmed to make rapid ethical choices, such as whom to protect in an unavoidable collision, but AI systems cannot be held morally or criminally responsible for those decisions.
The author argues that this creates an unprecedented tension between ethics and law. Unlike human drivers, algorithms cannot understand guilt or intent. Yet, in real-world scenarios, their actions may have life-and-death consequences. This challenges the foundation of criminal law, which is built upon notions of intent, negligence, and moral responsibility.
Under current frameworks, responsibility often defaults to the vehicle’s owner or operator, even when they had no control over the system at the time of the incident. Manufacturers, meanwhile, resist criminal accountability on the grounds that AI behavior can be unpredictable and not entirely foreseeable, even to its designers.
The research identifies the lack of a “legal personality” for AI as a central problem. Without recognizing artificial systems as potential legal subjects, courts cannot assign direct accountability to them. Yet granting AI legal status raises deeper philosophical and ethical questions: should a machine have rights or obligations comparable to those of a person or a corporation?
The study warns that these unresolved issues are already straining existing legal systems. As autonomous vehicles become more common, cases involving AI-driven harm will only multiply, forcing lawmakers to reconcile technological autonomy with human accountability.
Fixing the accountability gap: Civil remedies over criminal punishment
Rather than extending criminal liability to either AI systems or their users, the study proposes a civil law-based compensation model as a fairer and more practical solution.
The framework shifts the focus from punishment to risk-sharing and victim compensation, balancing fairness to victims with the need to sustain technological innovation. Under this model, manufacturers of autonomous vehicles would contribute to a state-managed insurance fund, financed through a small portion of each vehicle’s sale price.
If an accident occurs and is proven to result from an autonomous system’s malfunction or error, victims would be compensated through this fund. This mechanism ensures that injured parties are not left without remedy, while also protecting innovation by preventing excessive criminal liability for developers and producers.
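To make the mechanics of the proposal concrete, here is a minimal Python sketch of such a fund; the contribution rate, claim amount, and payout rule are hypothetical, since the study as summarized here does not specify them.

```python
# Minimal sketch of the proposed risk-sharing fund, with HYPOTHETICAL numbers:
# the study does not specify contribution rates or payout rules.
CONTRIBUTION_RATE = 0.005  # assumed: 0.5% of each vehicle's sale price goes to the fund

def fund_balance(sale_prices: list[float]) -> float:
    """Total paid into the state-managed fund by manufacturers."""
    return sum(price * CONTRIBUTION_RATE for price in sale_prices)

def compensate(fund: float, claim: float, system_fault_proven: bool) -> tuple[float, float]:
    """Pay a victim's claim from the fund only if the harm is traced to an
    autonomous-system malfunction or error; returns (payout, remaining fund)."""
    if not system_fault_proven:
        return 0.0, fund
    payout = min(claim, fund)
    return payout, fund - payout

# Example: 10,000 vehicles sold at 40,000 each -> 2,000,000 in the fund.
balance = fund_balance([40_000.0] * 10_000)
payout, balance = compensate(balance, claim=250_000.0, system_fault_proven=True)
print(payout, balance)
```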
The study also calls for greater transparency in AI systems, addressing the so-called “black box problem.” Most self-driving algorithms operate as proprietary systems, making it nearly impossible for investigators or courts to trace how a vehicle reached a particular decision.
Without access to the decision-making logic embedded in AI, assigning responsibility becomes guesswork. Biczi calls for mandatory data transparency standards, allowing judicial oversight without compromising trade secrets. Such standards would make it possible to reconstruct accident scenarios and verify whether an algorithm acted according to design or deviated due to faulty programming or biased data.
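One way to picture what such a transparency standard could require in practice is a per-decision audit record that investigators can replay after an accident. The following Python sketch uses illustrative field names; they are assumptions for this article, not requirements drawn from the study or the EU Artificial Intelligence Act.

```python
# Minimal sketch of a per-decision audit record; field names are illustrative.
from dataclasses import dataclass, asdict
import json, time

@dataclass
class DecisionRecord:
    timestamp: float      # when the driving decision was made
    sensor_summary: dict  # condensed, non-proprietary view of the inputs
    chosen_action: str    # e.g. "brake", "swerve_left"
    alternatives: list    # actions the system considered and rejected
    model_version: str    # which software/model build was in control

def log_decision(record: DecisionRecord, path: str = "decision_log.jsonl") -> None:
    """Append the record so courts or investigators can later reconstruct
    how the vehicle reached a particular decision."""
    with open(path, "a") as f:
        f.write(json.dumps(asdict(record)) + "\n")

log_decision(DecisionRecord(
    timestamp=time.time(),
    sensor_summary={"obstacle_ahead": True, "speed_kmh": 52},
    chosen_action="brake",
    alternatives=["swerve_left"],
    model_version="planner-2.3.1",
))
```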
The author further recommends closer alignment between national regulations and the EU Artificial Intelligence Act, which classifies autonomous driving as a “high-risk” activity. Harmonizing laws across jurisdictions would not only enhance legal clarity but also strengthen public confidence in AI-driven transportation.
First published in: Devdiscourse

