AI may be accepted in some court cases but rejected in others


CO-EDP, VisionRI | Updated: 12-03-2026 19:39 IST | Created: 12-03-2026 19:39 IST

The debate over artificial intelligence (AI) in courts may be less about technology itself than about the kinds of human conflicts people are willing to place in front of a machine. New research indicates that public support for AI adjudication is significantly stronger in institutional and rule-based disputes than in cases involving intimate relationships, abuse, or violent harm.

In the study “Psychological features of dispute content and public acceptance of AI in legal adjudication: evidence for systematic variation beyond individual differences,” published in Frontiers in Artificial Intelligence, researchers point out that acceptance of AI in legal settings changes with dispute framing and case type, raising fresh questions about where courts can deploy such systems without losing public confidence.

Public support for AI rises in rule-based cases, but not in personal disputes

In the first study, the researchers recruited 1,384 Japanese participants and asked them to evaluate 46 short legal dispute scenarios. Participants had to indicate whether each case was better suited to AI or to a human adjudicator. The goal was not simply to measure whether people liked or disliked AI, but to see whether patterns appeared across different kinds of cases.

Those patterns were strong. The researchers found a clear two-part structure in public judgments. One cluster included institutional-procedural disputes such as patent misuse, contractual problems, food safety violations, data leaks, environmental breaches, and fraud-related misconduct. These cases showed comparatively higher acceptance of AI. The second cluster included interpersonal-relational disputes such as divorce, child custody, child abuse, murder, stalking, assault, and sexual violence. In those disputes, people showed a much stronger preference for human decision-makers.
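As a rough illustration of the kind of analysis behind this finding, the sketch below clusters scenario-level mean ratings into two groups. This is a minimal sketch, not the authors' code: the scenario names come from the article, but the rating values are invented for illustration, and the paper's actual clustering method may differ.

```python
# Minimal sketch (not the study's code or data): recovering a two-cluster
# structure from scenario-level mean preference ratings.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical mean preference per scenario, scaled so that
# -1 = strong preference for a human judge, +1 = strong preference for AI.
scenarios = {
    "patent misuse": 0.15, "contract breach": 0.10, "food safety": 0.05,
    "data leak": 0.12, "environmental breach": 0.08, "fraud": 0.02,
    "divorce": -0.70, "child custody": -0.85, "child abuse": -0.90,
    "murder": -0.88, "stalking": -0.75, "assault": -0.72,
}
names = list(scenarios)
X = np.array([[v] for v in scenarios.values()])

# Ward linkage on the rating profile, cut into two clusters.
clusters = fcluster(linkage(X, method="ward"), t=2, criterion="maxclust")
for c in sorted(set(clusters)):
    members = [n for n, k in zip(names, clusters) if k == c]
    print(f"cluster {c}: {members}")
```

With ratings like these, the institutional-procedural scenarios fall into one cluster and the interpersonal-relational scenarios into the other, mirroring the split the researchers report.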

The split was not small. On average, participants showed significantly stronger support for human judgment in interpersonal disputes than in institutional ones. At the same time, the study did not find a simple pro-AI public. Even in institutional disputes, respondents still leaned slightly toward human involvement overall. That means the public was not choosing AI outright in one category and humans outright in another. Instead, the type of dispute changed the strength of people’s preference for human authority.

That nuance matters for the wider debate over AI in the justice system. Much public discussion tends to frame the issue as a yes-or-no question: should courts use AI or not? The data point to a different reality. Citizens appear to judge AI case by case, and their support depends on what they believe a dispute requires. When the problem looks standardized and procedural, algorithmic help appears more acceptable. When the problem involves pain, family breakdown, or direct human harm, public trust moves back toward people rather than machines.

The first study also found that some cases sat near the boundary. Traffic accidents and workplace overwork disputes produced more divided reactions. These were not as clearly accepted as AI-suited procedural disputes, but they also did not produce the same level of strong consensus for human-only judgment seen in child custody or child abuse cases. That suggests many legal disputes may sit in a gray zone where citizens hold mixed ideas about fairness, empathy, objectivity, and consistency.

Age and education had some effect, but only modestly. For institutional disputes, older participants were slightly more accepting of AI. For interpersonal disputes, higher education was weakly associated with a stronger preference for human judgment. Compared with the broader pattern linked to dispute type, those demographic effects were limited.

Emotion, familiarity and expectations reshape public judgment

The second study asked whether those patterns would hold up under more direct testing. The answer was yes. Using a separate sample of 596 participants, the authors repeated the dispute-rating exercise with a revised response scale and found the same two broad categories again: institutional disputes with relatively higher AI acceptance and interpersonal disputes with stronger human preference. That replication mattered because it suggested the pattern was not just a one-off result from a single survey design.

But the second study went further by changing how disputes were framed. The researchers tested two contextual factors: emotional involvement and prototypicality. Emotional involvement referred to whether the case was presented in a way that emphasized personal suffering and interpersonal feeling, or in a more neutral and fact-based way. Prototypicality referred to whether the case was described as a common, precedent-rich legal matter or as a rarer, atypical one requiring more case-specific judgment.

Those framing shifts mattered. The study found that contextual cues systematically changed acceptance judgments. In general, emotionally involving framing pushed people toward stronger support for human adjudication, while more prototypical and familiar disputes were associated with relatively higher openness to AI. The authors argue that this fits a broader psychological pattern: familiar and rule-heavy cases may feel easier to process as standard procedures, while emotional or unusual disputes feel more dependent on human understanding.

One of the most important results in the paper was that AI-specific expectations were the strongest predictor of acceptance. These expectations had far more predictive power than personality traits or basic demographic factors. People who believed AI was capable and useful were much more willing to accept it in legal adjudication, while people with stronger AI risk perceptions were more resistant.

The study also found a significant interaction involving emotion, gender, and prototypicality. Under prototypical conditions, women showed a markedly stronger preference for human judgment when disputes were framed as emotionally involving. Men, in contrast, showed somewhat greater AI acceptance under emotional framing in that same condition. Under non-prototypical conditions, however, both genders tended to prefer human judgment regardless of emotional framing. In other words, when a case seemed unusual or less standard, the public shifted back toward people, even before considering other factors.
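For readers curious what testing a three-way interaction of this kind looks like in practice, here is a minimal sketch using simulated data. The column names, coding, and effect sizes are assumptions made for illustration; they are not the study's variables, model, or results.

```python
# Minimal sketch (illustrative only): a regression with an
# emotion x prototypicality x gender interaction on acceptance ratings.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    "emotional":    rng.integers(0, 2, n),   # 1 = emotionally involving framing
    "prototypical": rng.integers(0, 2, n),   # 1 = common, precedent-rich case
    "gender":       rng.choice(["f", "m"], n),
})
# Hypothetical outcome: baseline openness to AI, reduced under emotional
# framing, with an extra reduction for women in prototypical cases only
# (mirroring the interaction pattern the article describes).
df["acceptance"] = (
    3.0
    - 0.4 * df["emotional"]
    - 0.5 * df["emotional"] * df["prototypical"] * (df["gender"] == "f")
    + rng.normal(0, 1, n)
)

# Full factorial model: main effects plus all two- and three-way interactions.
model = smf.ols("acceptance ~ emotional * prototypical * C(gender)", data=df).fit()
print(model.summary())
```

In a model like this, the three-way interaction term is what captures the pattern described above: the emotional-framing effect differing by gender only under prototypical conditions.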

That result does not mean men and women hold fixed, opposite views about AI in law. The authors are careful to frame gender effects as conditional and likely shaped by context. Still, the finding points to a wider truth in the paper: acceptance is not governed by one variable alone. It is produced by an interaction between the kind of dispute, the way the dispute is framed, and the beliefs people already hold about AI.

This is where the paper makes one of its strongest contributions. Much of the earlier work on AI acceptance in courts has focused on individual traits such as age, values, trust, or general attitudes toward technology. The authors do not reject those explanations, but they argue that they are incomplete. Their evidence suggests that the public is not simply sorting itself into pro-AI and anti-AI camps. People are also sorting legal problems into different mental categories, and those categories shape what kind of decision-maker feels legitimate.

Courts may need a phased AI rollout, not a one-size-fits-all model

If courts, regulators, or legal technology developers want to introduce AI into judicial settings, they may need to stop treating the courtroom as a single deployment zone. The study suggests that some parts of the legal system are more open to algorithmic assistance than others. Institutional and procedural disputes may be better starting points, especially where cases are common, standardized, and less emotionally charged.

That could include areas such as traffic violations, regulatory compliance, contractual disputes, and other domains where consistency and rule application are central concerns. In those settings, AI may be seen as helping with efficiency, standardization, and organization without directly violating the public’s expectations of justice.

The reverse is also true. Family disputes, violent crimes, child welfare matters, and other disputes that citizens see as deeply interpersonal appear far less suited to AI-led adjudication in the public mind. In those areas, the study suggests that robust human oversight is not just a legal safeguard but a legitimacy requirement. The public appears to believe these cases demand empathy, situational judgment, and recognition of human suffering in ways that machines do not convincingly provide.

The authors also point toward hybrid systems as a more realistic path. Rather than replacing judges, AI might support them by organizing information, assisting with case management, or offering preliminary analysis in more procedural disputes while leaving final authority to human decision-makers. That model may be especially important in sensitive cases where public resistance to machine-led judgment remains high.

Still, the research stops short of claiming to have identified the definitive psychological mechanism behind public acceptance. The results are consistent with a classification-based explanation, but the authors also note that emotional reactions, moral intuitions, fairness concerns, and pre-existing attitudes toward AI may all be involved. Their studies measured acceptance judgments, not the deeper mental process itself.

There are other limits as well. Both studies were conducted with Japanese online samples, and the second study had a very high exclusion rate because participants had to pass both attention and comprehension checks. The authors note that this raises questions about how far the findings can be generalized, especially across cultures or among people with different levels of digital familiarity. They also relied on short vignette scenarios rather than full legal case files, which means real-world disputes may produce more complex responses.

FIRST PUBLISHED IN: Devdiscourse