Public trust in judges who use AI varies sharply by race
A large-scale U.S. study of the role of AI in judicial decision-making reveals stark differences in how Black, Hispanic, and White citizens perceive courtroom judges who use artificial intelligence in bail and sentencing decisions. According to the peer-reviewed article "Public Perceptions of Judges’ Use of AI Tools in Courtroom Decision-Making: An Examination of Legitimacy, Fairness, Trust, and Procedural Justice", published in Behavioral Sciences, AI’s role in legal proceedings is deeply symbolic and racially inflected, affecting perceived legitimacy, fairness, and trust in justice.
How do racial groups perceive judges who rely on AI?
The study used an experimental design involving 1,800 participants, evenly stratified by race and gender, who were shown scenarios involving judges making either bail or sentencing decisions. Participants were randomly assigned to one of three conditions: the judge relied solely on personal expertise, used AI in tandem with expertise, or relied entirely on AI. Across all groups, judges who used only their own expertise were perceived as most legitimate and fair. However, Black participants consistently rated AI-assisted judges more positively than their White and Hispanic counterparts.
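To make the design concrete, the sketch below simulates the kind of stratified random assignment the article describes: 1,800 participants spread evenly across race and gender strata, with balanced assignment to the three judge conditions. This is a minimal illustration, not the study's actual materials; the stratum sizes, labels, and condition names are assumptions chosen to match the reported totals.

```python
# Illustrative sketch of the reported design: 1,800 participants stratified by
# race and gender, randomly assigned to one of three judge conditions.
# Group sizes and labels are assumptions, not the study's actual materials.
import itertools
import random

import pandas as pd

random.seed(42)

RACES = ["Black", "Hispanic", "White"]
GENDERS = ["Female", "Male"]
CONDITIONS = ["expertise_only", "hybrid", "ai_only"]
PER_STRATUM = 300  # 3 races x 2 genders x 300 = 1,800 participants

rows = []
for race, gender in itertools.product(RACES, GENDERS):
    # Balanced assignment within each stratum: 100 participants per condition.
    assignments = CONDITIONS * (PER_STRATUM // len(CONDITIONS))
    random.shuffle(assignments)
    for condition in assignments:
        rows.append({"race": race, "gender": gender, "condition": condition})

design = pd.DataFrame(rows)
print(design.groupby(["race", "condition"]).size())  # 200 per race x condition
```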
The symbolic meaning of AI diverged significantly depending on the scenario. Participants were more critical of judges who relied on AI during sentencing decisions compared to bail determinations. This suggests a growing discomfort with automating high-stakes judicial outcomes, particularly when AI is perceived as replacing rather than supporting human judgment.
The legitimacy scores echoed this pattern. Judges using only expertise received the highest ratings, followed by those using a hybrid approach. Judges relying solely on AI scored lowest in legitimacy and fairness. Nonetheless, among Black participants, legitimacy ratings remained higher across all conditions, signaling a more favorable view of AI's potential to mitigate systemic biases.
How does AI use affect perceived fairness and procedural justice?
Participants evaluated procedural justice through questions about the fairness of the judge’s process. The highest scores were assigned to judges who used their own judgment. The introduction of AI, either alone or in combination, resulted in lower fairness ratings. Yet again, Black participants rated the fairness of AI-assisted decisions more favorably than other groups, especially in the sentencing phase. This points to a perceived potential of AI to constrain the subjective biases of human judges.
Interestingly, Hispanic participants consistently gave the lowest scores in both legitimacy and fairness across all conditions. Their responses signal a more skeptical view of AI tools in judicial settings, aligning with previous findings that trust in institutions often varies across minority communities based on distinct sociohistorical experiences.
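The group comparisons described above amount to a table of cell means: one average rating for each race-by-condition combination. The sketch below shows how such a table would be computed; it runs on synthetic data, and the 1-7 rating scale and column names are assumptions rather than the study's data.

```python
# Illustrative sketch with synthetic data: the race-by-condition comparison
# described above reduces to cell means of the fairness ratings.
# The 1-7 Likert scale and column names are assumptions, not the study's data.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 1_800
df = pd.DataFrame({
    "race": rng.choice(["Black", "Hispanic", "White"], size=n),
    "condition": rng.choice(["expertise_only", "hybrid", "ai_only"], size=n),
    "fairness": rng.integers(1, 8, size=n),  # synthetic 1-7 ratings
})

# Mean fairness rating for each race x condition cell, as a compact table.
cell_means = df.pivot_table(index="race", columns="condition",
                            values="fairness", aggfunc="mean")
print(cell_means.round(2))
```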
The study’s open-ended responses also highlighted concerns about AI’s inability to weigh mitigating factors, its potential to perpetuate bias through flawed data, and the erosion of human empathy in the courtroom. Still, many respondents acknowledged AI’s potential to improve consistency and reduce arbitrary judgments, provided it is implemented with safeguards, oversight, and transparency.
Does judicial trust in AI influence public trust?
A third major finding of the study centers on how the public’s trust in AI is influenced by their perception of whether a judge trusts AI. Across the board, participants reported greater trust in AI tools when they believed the judge also trusted them. However, this relationship was significantly stronger among Black participants. The study’s regression analysis revealed that perceived judicial trust in AI predicted higher participant trust most strongly within the Black subgroup, suggesting that judicial endorsement can play a critical role in shaping acceptance among communities historically marginalized by the justice system.
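A subgroup-dependent slope of this kind is typically tested with an interaction term in the regression. The sketch below, run on synthetic data, shows one way such an analysis could look: participant trust regressed on perceived judicial trust, with race interactions testing whether the slope differs by group. The variable names, model form, and simulated effect sizes are all assumptions, not the study's reported specification.

```python
# Illustrative sketch (synthetic data): regressing participant trust in AI on
# perceived judicial trust with a race interaction, mirroring the kind of
# subgroup analysis the article reports. Names and effect sizes are assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 1_800
race = rng.choice(["Black", "Hispanic", "White"], size=n)
judge_trust = rng.uniform(1, 7, size=n)  # perceived judicial trust in AI

# Simulate a steeper slope for Black participants, as the study describes.
slope = np.where(race == "Black", 0.8, 0.4)
participant_trust = 1.0 + slope * judge_trust + rng.normal(0, 1, size=n)

df = pd.DataFrame({"race": race, "judge_trust": judge_trust,
                   "participant_trust": participant_trust})

# The judge_trust:C(race) interaction terms test whether the slope of
# participant trust on judicial trust differs across racial groups.
model = smf.ols("participant_trust ~ judge_trust * C(race)", data=df).fit()
print(model.summary().tables[1])
```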
This finding highlights the symbolic power of judges as institutional figures whose technological choices may reinforce or diminish broader public confidence. When judges were seen as actively trusting AI tools, especially in risk-sensitive contexts like sentencing, participants were more inclined to mirror that trust, provided the tools were perceived as transparent and not a replacement for human discretion.
Open-ended responses further reinforced this dynamic. While some participants outright rejected the legitimacy of AI in the courtroom, others viewed it as a potentially neutralizing force that could counteract judicial bias. Many stressed that AI should function strictly as a supplementary tool, with final authority remaining in human hands. Concerns over “black-box” opacity and data-driven discrimination were prevalent, and where support did exist, it was contingent on robust human oversight and algorithmic transparency.
The study’s thematic synthesis, aided by language model tools but finalized through human review, captured four dominant themes: support for AI in routine decisions if properly monitored, fears over replication of racial bias, skepticism of AI’s interpretive capabilities, and conditional trust based on the judge’s perceived technological literacy.
The study calls for a cautious but inclusive path forward in integrating AI into judicial systems. The researchers recommend strong oversight mechanisms, frequent algorithm audits, explainable AI design, and community-based policy engagement to ensure fair implementation.
- FIRST PUBLISHED IN: Devdiscourse

