Flawed European democracies more optimistic about AI than full democracies

CO-EDP, VisionRI | Updated: 03-04-2025 10:02 IST | Created: 03-04-2025 10:02 IST

A new study challenges long-standing assumptions about how democratic strength shapes public perceptions of artificial intelligence, revealing surprising divergences in trust, awareness, and attitudes across European democracies. Published in AI & Society, the peer-reviewed study, titled “European Reactions to AI in Full and Flawed Democracies: An Investigation of Key Factors,” analyzes responses from over 4,000 citizens across eight European countries to understand how people perceive AI’s role in democratic life.

The research, led by Long Pham, Barry O’Sullivan, Teresa Scantamburlo, and Tai Tan Mai, investigates whether public trust in AI, general awareness of its use, and overall attitudes toward the technology differ depending on a country’s democratic classification. Using the Economist Intelligence Unit’s Democracy Index, the study categorizes Italy, Romania, and Poland as “flawed democracies” and France, Germany, Spain, the Netherlands, and Sweden as “full democracies.”
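The Democracy Index assigns each country a 0-10 score and groups countries into regime bands. As a minimal sketch of the binary split described above, the classification could look like the following; the threshold values follow the commonly cited EIU score bands, and the country scores are placeholders for illustration, not figures from the study:

```python
# Illustrative sketch of an EIU-style Democracy Index regime split.
# Cutoffs follow the commonly cited EIU bands (full democracy: above 8.0;
# flawed democracy: above 6.0 up to 8.0); treat them as assumptions.

def classify_regime(score: float) -> str:
    """Map a 0-10 Democracy Index score to a regime label."""
    if score > 8.0:
        return "full democracy"
    if score > 6.0:
        return "flawed democracy"
    if score > 4.0:
        return "hybrid regime"
    return "authoritarian regime"

# Hypothetical scores purely for illustration (not the study's data):
sample_scores = {"Sweden": 9.4, "Germany": 8.7, "Italy": 7.7, "Romania": 6.4}
labels = {country: classify_regime(s) for country, s in sample_scores.items()}
```

With this split, the study's "full" group would contain the countries scoring above 8.0 and the "flawed" group those between 6.0 and 8.0.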

A key question driving the research is whether levels of democratic quality affect public trust in both AI technology and the entities responsible for its governance. Contrary to expectations, the results show that citizens in flawed democracies express significantly higher levels of trust in both governmental institutions and AI oversight measures than those in full democracies. Respondents from flawed democracies also report a more positive overall attitude toward the use of AI in public services and democratic processes.

Using structural equation modeling and bivariate analysis, the study validates five hypotheses and reveals stark contrasts in how AI is received across political systems. The first major finding confirms that trust in national governments and public authorities is substantially higher in flawed democracies. Respondents in these countries also express stronger support for policy mechanisms that ensure ethical AI use, such as national legislation, voluntary certifications, and independent oversight bodies, suggesting that institutional confidence may be more robust where democratic systems are perceived as less mature.
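At its core, the bivariate part of such an analysis amounts to comparing mean trust scores between the two country groups. A minimal pure-Python sketch, using Welch's t statistic on synthetic Likert-scale data (the numbers below are invented placeholders, not values from the paper), could look like:

```python
# Minimal sketch of a bivariate group comparison of the kind the study
# reports: mean trust scores in flawed vs. full democracies, summarized
# with Welch's t statistic. All data here are synthetic placeholders.
from statistics import mean, variance
from math import sqrt

def welch_t(a: list[float], b: list[float]) -> float:
    """Welch's t statistic for two independent samples."""
    va, vb = variance(a), variance(b)      # sample variances
    se = sqrt(va / len(a) + vb / len(b))   # standard error of the mean difference
    return (mean(a) - mean(b)) / se

# Synthetic 1-5 Likert trust scores (illustrative only):
flawed = [4, 5, 4, 3, 5, 4, 4, 5]
full   = [3, 2, 4, 3, 3, 2, 3, 4]

t = welch_t(flawed, full)
# A clearly positive t indicates higher mean trust in the first group.
```

The full study goes further, using structural equation modeling to estimate how trust, awareness, and attitude relate to one another, but the group-difference logic is the same.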

Another key focus of the study is the overall public perception of AI deployment. Flawed democracies once again report more favorable views, despite their lower scores on electoral integrity, political participation, and civil liberties. This paradox may be driven by optimism about AI's ability to improve state performance, enhance efficiency, and reduce corruption in environments where governance challenges are more visible. Meanwhile, citizens in full democracies appear more cautious, reflecting heightened expectations for transparency and stronger sensitivity to the ethical risks associated with algorithmic decision-making.

Further, the study assesses whether awareness of AI differs between the two regime types. Contrary to the researchers’ hypothesis, the data show no statistically significant differences in AI awareness. Despite expectations that full democracies, with their greater political engagement and access to information, would demonstrate higher familiarity with AI, both groups reported comparable levels of knowledge about its application across sectors like healthcare, finance, agriculture, transportation, and law enforcement.

The fourth key finding examines the impact of demographic factors, namely age, education, and gender, on individuals' attitudes toward and levels of trust in AI. The analysis shows that higher education levels consistently correlate with greater trust in AI solutions and more favorable attitudes across both full and flawed democracies. Age also plays a role: older adults in full democracies exhibit more positive attitudes and trust in ethical oversight than their counterparts in flawed democracies. Gender effects were more limited and mixed, but women in full democracies tended to express more positive attitudes toward AI, contradicting prior research suggesting greater skepticism among women regarding AI in public administration.

A final component of the study evaluates the relationships between the observed factors (trust, awareness, and attitude) and how they interact across different democratic systems. The results reveal that citizens’ attitudes and awareness significantly shape their trust in both AI solutions and the entities deploying them. These relationships are mediated by demographic factors and digital literacy, further complicating the picture of how AI acceptance is formed in different political environments.

At the policy level, the research offers several implications. First, it underscores the need for targeted AI governance frameworks that account for differing baseline levels of trust and institutional credibility. In full democracies, where skepticism runs higher, greater emphasis should be placed on transparency, algorithmic explainability, and citizen engagement. In flawed democracies, policymakers may find more public support for AI implementation but must remain vigilant to ensure that optimism does not obscure potential risks or reduce the demand for accountability.

The study also calls for the development of specialized public education initiatives focused specifically on AI literacy, distinct from broader digital skills campaigns. While general digital competence is widespread, it does not automatically translate into an understanding of AI’s ethical and governance challenges. Improving citizens’ ability to critically assess AI technologies is seen as essential to promoting informed democratic participation in decisions about their use.

Moreover, the authors stress that e-government models must evolve to reflect the complexities of AI deployment, which extends beyond simple automation to include autonomous decision-making and predictive analytics. Traditional digital government frameworks may no longer suffice in ensuring public trust when AI is used to influence elections, allocate public services, or make high-stakes administrative decisions.

  • FIRST PUBLISHED IN: Devdiscourse