Why do people resist AI in public administration?

As artificial intelligence becomes embedded in the machinery of modern governance, a new study uncovers the roots of public discomfort with algorithmic decision-making in government. The research, titled “Artificial Intelligence in Government: Why People Feel They Lose Control” and published on arXiv, identifies how people perceive loss of democratic control when administrative decisions are handed over to AI. Drawing on a large factorial survey experiment in the UK, the study reveals that public concerns are far less about AI’s technical performance and far more about core democratic principles such as accountability, transparency, and the ability to challenge decisions.

The findings challenge the prevailing narrative that citizens primarily reject AI in government due to unfamiliarity or a lack of technological literacy. Instead, the study introduces a three-pronged model of perceived delegation risks: assessability, dependency, and contestability, rooted in Principal-Agent Theory. It shows how people evaluate AI not just as a tool but as an agent acting on behalf of the state, capable of altering the power dynamics between government and governed.

Why do people feel they lose control when AI is used in public decision-making?

Using a sample of over 2,500 respondents, the researchers tested public reactions to hypothetical government decisions in three sensitive domains: tax administration, welfare benefits, and judicial bail rulings. Each scenario varied in whether decisions were made by human caseworkers, AI systems, or AI-assisted human operators. The study found that fully automated decisions by AI triggered the strongest feelings of lost control, particularly in welfare and bail contexts. However, the intensity of these concerns was not uniform across all domains, indicating that public tolerance for AI may be context-dependent.
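For readers unfamiliar with factorial survey experiments, the sketch below illustrates the basic idea of crossing the two factors described above, policy domain and decision-maker, into vignette conditions that respondents are randomly assigned to. The factor labels and assignment logic are illustrative assumptions, not the study's actual materials.

```python
from itertools import product
import random

# Illustrative factor levels; the study's exact wording and levels may differ.
domains = ["tax administration", "welfare benefits", "judicial bail"]
decision_makers = ["human caseworker", "AI system", "AI-assisted human"]

# A full factorial design crosses every domain with every decision-maker,
# yielding 3 x 3 = 9 vignette conditions.
conditions = list(product(domains, decision_makers))

def assign_condition() -> tuple[str, str]:
    """Randomly assign a respondent to one vignette condition."""
    return random.choice(conditions)

# Example: assign five hypothetical respondents.
for rid in range(5):
    domain, maker = assign_condition()
    print(f"Respondent {rid}: decision in {domain} made by {maker}")
```

Each respondent then rates the scenario they were shown, which lets the researchers compare reactions across domains and decision-makers.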

The perception of losing control emerged not from generalized distrust in technology but from specific worries about how delegation to AI systems affects democratic norms. Participants feared that AI systems lacked the moral reasoning, discretion, and accountability expected in human public servants. Many also worried about being unable to challenge or appeal decisions made by an algorithm, reinforcing a sense of helplessness.

The concept of “assessability” was central to this response. Citizens expressed concern that they lacked the tools or knowledge to understand how AI systems made decisions, making it difficult to evaluate fairness or correctness. In other words, if people cannot judge whether a decision is right or wrong, or how it was made, they perceive a breakdown in the mechanisms of democratic oversight.

What types of risks do citizens associate with AI decision-making?

The study articulates three distinct types of perceived delegation risks. First is assessability, the degree to which citizens believe they can understand and evaluate the actions of the AI system. When AI is used to make consequential decisions, such as who receives public aid or who is released on bail, the inability to explain outcomes in plain terms undermines public confidence and democratic legitimacy.

Second is dependency, or the fear that AI systems might become too entrenched in government processes, reducing flexibility and responsiveness. Participants worried that reliance on automated systems would lead to rigid decision-making frameworks that ignore human nuance. This concern was particularly pronounced in welfare and criminal justice domains, where individual context matters deeply.

Third is contestability, or the perceived difficulty of challenging AI decisions. Respondents felt that AI systems are inherently opaque, offering few opportunities for appeal or reconsideration. The fear that “you can’t argue with a machine” resonates strongly across all policy domains studied. The more people believe a system lacks recourse or human override, the more they perceive themselves as powerless.
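As an illustration of how these three dimensions could, in principle, be operationalized, the following hypothetical sketch combines survey-style ratings of assessability, dependency, and contestability into a single perceived-control score. The items, scale, and index are assumptions made for exposition; they are not the study's instrument.

```python
from dataclasses import dataclass

# Hypothetical operationalization, for illustration only; the study's actual
# survey items and scoring are not reproduced here.
@dataclass
class DelegationRiskRatings:
    assessability: float   # 1-7: "I can understand and evaluate how the decision was made"
    dependency: float      # 1-7: "Government would become too reliant on this system"
    contestability: float  # 1-7: "I could challenge or appeal this decision"

    def perceived_control(self) -> float:
        """Illustrative index: average of the three dimensions, with dependency
        reverse-coded so that higher scores always mean more perceived control."""
        reversed_dependency = 8 - self.dependency
        return (self.assessability + reversed_dependency + self.contestability) / 3

# Example respondent rating a fully automated welfare decision.
ratings = DelegationRiskRatings(assessability=2.0, dependency=6.0, contestability=2.0)
print(f"Perceived control score: {ratings.perceived_control():.2f}")  # -> 2.00
```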

Interestingly, the study finds that combining human and AI decision-making does not always ease these fears. In some scenarios, AI-assisted human decisions were still seen as problematic, suggesting that people view AI influence itself as a risk, regardless of whether a human has the final say. This finding complicates the often-proposed hybrid model as a compromise solution for ethical AI deployment in public services.

What are the broader implications for democratic governance?

The research has far-reaching implications for how governments and technology developers approach AI integration. First, it signals that legitimacy in algorithmic governance cannot be achieved through performance alone. Efficiency, accuracy, and cost savings may matter to policymakers, but citizens care more about the integrity of democratic processes: transparency, accountability, and voice.

Second, the findings suggest that public communication and design transparency must evolve. Governments deploying AI must prioritize interpretability and develop user-facing mechanisms that allow citizens to understand and challenge decisions. This could involve clear documentation, human-in-the-loop systems with explicit override capabilities, and robust appeal processes. Simply branding AI as "expert" or "objective" is not enough; citizens want proof that democratic safeguards are embedded in the system.
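As a purely illustrative sketch, not a design proposed by the study, the snippet below shows what a decision record carrying those safeguards might look like: a plain-language explanation, a named accountable official, an explicit human override, and an appeal channel. All names and fields are hypothetical.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical decision record that keeps democratic safeguards visible to the
# citizen: an explanation, a human who is accountable, and a route to appeal.
@dataclass
class AlgorithmicDecision:
    case_id: str
    outcome: str                      # e.g. "benefit denied"
    explanation: str                  # plain-language reasons for the outcome
    responsible_official: str         # human accountable for the decision
    overridden_by_human: bool = False
    appeal_filed: bool = False
    appeal_outcome: Optional[str] = None

    def request_human_override(self, official: str, new_outcome: str) -> None:
        """Explicit human override: a named official replaces the automated outcome."""
        self.responsible_official = official
        self.outcome = new_outcome
        self.overridden_by_human = True

    def file_appeal(self) -> None:
        """Citizen-initiated appeal, routed to a human review process."""
        self.appeal_filed = True

# Usage sketch with invented values.
decision = AlgorithmicDecision(
    case_id="2025-0042",
    outcome="benefit denied",
    explanation="Reported income exceeded the eligibility threshold.",
    responsible_official="caseworker on record",
)
decision.file_appeal()
decision.request_human_override(official="senior caseworker", new_outcome="benefit granted")
print(decision)
```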

Third, the study raises caution against over-automation in sensitive or morally complex domains. When decisions impact liberty, dignity, or livelihood, as in the case of bail or welfare, people demand higher standards of moral judgment and empathy. Delegating such decisions to AI systems without meaningful human oversight may erode public trust, not just in technology, but in government itself.

The research also redefines the framework through which AI acceptance should be measured. Rather than focusing solely on trust, risk perception, or accuracy, the authors argue for a political model of technology adoption that centers on perceived control and democratic agency. This approach highlights that technology deployment is not just a technical matter but a political act with consequences for how citizens relate to the state.

The study invites policymakers to rethink the governance of AI in public institutions. It underscores the importance of designing AI systems that are not only efficient and fair but also accountable and contestable. Regulatory frameworks must go beyond data protection and algorithmic bias to include democratic values such as participation, deliberation, and rights to explanation and redress.
