Digital justice advances, but AI still faces resistance in EU labor law
Across Europe’s labor courts, digital transformation is no longer limited to filing systems and electronic case management. Artificial intelligence is increasingly entering judicial workflows, raising fundamental questions about fairness, transparency, and the future role of human judgment in employment disputes. A new cross-country study shows that while basic digitalization has improved efficiency and access to justice, more advanced AI-driven tools continue to face deep skepticism, particularly in sensitive labor law cases involving worker dismissals.
The study, titled "AI and Digital Justice in EU Labor Law: A Comparative Study on Predictive Tools and Judicial Transformation" and published in the journal Frontiers in Artificial Intelligence, examines how AI and digital justice tools are being adopted across European Union member states, and why acceptance varies sharply depending on legal culture, institutional trust, and governance safeguards.
Digital courts advance, but predictive justice meets resistance
The study finds that the digitalization of labor courts has progressed unevenly across the European Union. Basic digital tools such as electronic filing, online access to case documents, and digital signatures are now widely accepted and have become integral to court operations in most jurisdictions. These systems have reduced administrative delays, improved procedural efficiency, and lowered barriers for parties navigating labor disputes.
However, the research shows that acceptance drops sharply when digitalization moves beyond infrastructure toward predictive or decision-support technologies. AI-based tools designed to estimate case outcomes, suggest litigation strategies, or guide dispute resolution choices are viewed with caution by legal professionals, particularly in countries with lower levels of judicial digital maturity.
Estonia and Lithuania emerge as the most digitally advanced jurisdictions in the study. In these countries, courts have integrated digital tools more deeply into judicial workflows, and legal professionals display greater openness to AI-assisted systems. Familiarity with digital governance and long-standing investment in e-government infrastructure appear to play a decisive role in shaping trust.
By contrast, Belgium, Croatia, the Czech Republic, and Italy show significantly lower acceptance of predictive justice tools. Legal professionals in these jurisdictions express concerns that AI systems could oversimplify complex legal reasoning, obscure accountability, or undermine judicial independence. The study highlights that resistance is not rooted in opposition to technology itself, but in fears about how AI might alter the balance of power and discretion in labor law adjudication.
Transparency and accountability define trust in judicial AI
Trust in AI-driven justice systems depends less on technical performance and more on governance design. Legal professionals consistently identify transparency and accountability as prerequisites for any meaningful adoption of AI in labor courts.
Predictive tools that operate as opaque black boxes generate strong resistance, especially when applied to cases involving dismissal, compensation, or worker protections. Labor law disputes often require contextual interpretation, balancing statutory rules with factual nuance and social considerations. The study shows that legal actors fear AI systems could flatten this complexity into probabilistic outputs that mask underlying assumptions.
Accountability concerns further complicate acceptance. Judges and lawyers question who would bear responsibility if AI-supported recommendations contribute to unjust outcomes. Without clear lines of liability, AI tools risk diffusing responsibility across software developers, court administrators, and judicial actors, weakening trust in the system as a whole.
The research emphasizes that these concerns are amplified in labor law, where courts are expected to safeguard weaker parties. Any perception that automated systems could tilt decisions toward efficiency at the expense of fairness provokes strong institutional resistance.
AI as a regulatory aid rather than a decision-maker
The study proposes a more limited but potentially transformative role for artificial intelligence in labor justice. The authors argue that AI can function as a regulatory instrument that structures information, clarifies legal thresholds, and guides users toward appropriate dispute resolution pathways without making binding decisions.
This approach is illustrated through the development of a legal chatbot within the IDEA project. Designed to assist workers and employers involved in redundancy disputes, the chatbot does not predict court outcomes or issue legal advice. Instead, it analyzes user-provided information and directs parties toward negotiation, mediation, or litigation based on applicable legal rules and procedural safeguards.
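The study describes the chatbot at a conceptual level rather than publishing its implementation. As a rough illustration of the idea, the minimal sketch below encodes a redundancy-dispute triage as explicit rules; every field name, rule, and threshold here is a hypothetical stand-in, not the IDEA project's actual logic.

```python
from dataclasses import dataclass
from enum import Enum


class Pathway(Enum):
    NEGOTIATION = "negotiation"
    MEDIATION = "mediation"
    LITIGATION = "litigation"


@dataclass
class RedundancyCase:
    # Hypothetical inputs a triage tool might collect from a user.
    notice_period_respected: bool
    severance_offered: bool
    employer_open_to_talks: bool
    procedural_violation_alleged: bool


def suggest_pathway(case: RedundancyCase) -> Pathway:
    """Route a redundancy dispute toward a resolution pathway.

    Encodes (invented, illustrative) legal thresholds as explicit,
    inspectable rules rather than a predictive model. The output is a
    suggestion only; a human advisor reviews it before any action.
    """
    if case.procedural_violation_alleged and not case.notice_period_respected:
        # Alleged breach of a mandatory procedure: court review may be needed.
        return Pathway.LITIGATION
    if case.employer_open_to_talks and case.severance_offered:
        # Both sides appear to have room to settle terms directly.
        return Pathway.NEGOTIATION
    # Default to a neutral third party when positions are unclear.
    return Pathway.MEDIATION
```

Because the routing criteria in such a design are written as inspectable conditions rather than learned weights, the reasoning behind each suggestion remains visible to the parties and to any human reviewer, which is precisely the transparency property the study identifies as decisive for acceptance.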
By embedding legal norms into the system’s logic while preserving human oversight, the chatbot model aims to reduce unnecessary litigation and improve access to justice without undermining judicial autonomy. The study finds that such narrowly scoped tools receive significantly higher acceptance from legal professionals compared to predictive justice systems.
This distinction underscores a broader insight: AI systems are more likely to be trusted when they support legal processes rather than substitute legal judgment. Tools that enhance procedural clarity and empower users to understand their options align more closely with established principles of labor justice.
Legal culture shapes the future of judicial AI
The comparative nature of the study reveals that national legal culture plays a decisive role in shaping attitudes toward AI. Countries with strong traditions of judicial discretion and adversarial litigation display greater caution toward predictive tools. In these contexts, AI is perceived as a potential threat to the interpretive role of judges.
Conversely, jurisdictions with more administrative or technologically integrated legal systems show greater willingness to experiment with AI-assisted justice. Familiarity with digital governance appears to normalize algorithmic support as an extension of existing bureaucratic tools.
The study cautions against one-size-fits-all approaches to judicial AI deployment. Attempts to impose uniform AI solutions across diverse legal systems risk failure if they ignore institutional context and professional norms.
Implications for EU policy and labor law reform
For EU policymakers, the study's message is that investment in digital infrastructure alone is insufficient to ensure acceptance of AI tools. Trust must be built through transparent design, clear accountability frameworks, and participatory development involving legal professionals.
The study suggests that regulatory guidance at the EU level should focus on defining permissible roles for AI in judicial contexts, with particular attention to sensitive domains such as labor law. Rather than promoting predictive justice, policymakers may find greater success in supporting AI systems that enhance procedural accessibility and legal literacy.
The research also highlights the importance of training and institutional dialogue. Judges and lawyers who understand how AI systems function are better positioned to assess their risks and benefits. Education, rather than automation, emerges as a key driver of responsible adoption.
First published in: Devdiscourse

