The WEIRD AI divide: Why some nations trust AI while others don’t

CO-EDP, VisionRI | Updated: 05-03-2025 17:37 IST | Created: 05-03-2025 17:37 IST

Artificial Intelligence (AI) is rapidly transforming societies worldwide, yet perceptions of its benefits and risks vary significantly across countries. A new study titled “WEIRD? Institutions and Consumers’ Perceptions of Artificial Intelligence in 31 Countries”, published in AI & Society by Bronwyn Howell, explores the contrast in AI acceptance between Western and non-Western nations. The research highlights that citizens of Western, Educated, Industrialized, Rich, and Democratic (WEIRD) societies tend to be more skeptical of AI, while those in non-WEIRD countries exhibit greater optimism about the technology’s role in their future. These findings carry major implications for AI governance, policy-making, and regulation in different global contexts.

The WEIRD divide: Skepticism versus optimism

The study is based on survey data collected in 2023 by Ipsos across 31 countries, measuring perceptions of AI’s benefits, risks, trustworthiness, and future impact. The results show a striking pattern: countries with higher levels of democracy, education, and wealth tend to be less optimistic about AI than developing economies. In WEIRD societies, AI is often met with apprehension due to concerns over privacy, job displacement, and ethical risks. Conversely, in non-WEIRD countries, where democratic institutions are weaker and economic opportunities more limited, AI is seen as a tool for growth, efficiency, and economic progress.

This divergence may stem from differing social structures. WEIRD nations rely heavily on institutional safeguards and third-party regulations, making them more cautious about new technologies. Non-WEIRD societies, where people often rely on close personal networks rather than institutions, may be more willing to trust AI and automation as a means to bypass corrupt or inefficient bureaucracies.

The paradox of AI regulation: Trust and distrust

A key insight from the study is that the presence of AI regulations appears to influence perceptions differently in WEIRD and non-WEIRD societies. In countries with strong data privacy laws, such as those in the European Union, people tend to have more trust in AI firms’ ability to protect their data. However, this trust does not necessarily extend to AI itself. European nations, despite having some of the strictest AI governance frameworks, exhibit lower confidence in AI’s fairness and reliability compared to countries with weaker regulations.

This paradox raises important questions about the effectiveness of AI governance. Does regulation genuinely mitigate risk, or does it simply reinforce public fear? The study suggests that in WEIRD societies, AI policies may be driven more by public anxiety than by actual harm. This could lead to over-regulation that stifles innovation, whereas non-WEIRD nations may embrace AI with fewer safeguards, potentially exposing their citizens to greater risks.

Education, income, and AI perceptions

Another crucial factor influencing AI perceptions is the role of education and economic status. The study finds that higher education levels correlate with lower optimism about AI. This runs counter to the assumption that greater knowledge about technology should lead to increased confidence. Instead, in WEIRD societies, educated individuals tend to be more aware of AI’s limitations and ethical concerns, making them more cautious.

Income levels also play a role. Wealthier countries, where people have more stable jobs and access to social safety nets, are more wary of AI disrupting employment markets. In contrast, in lower-income countries, AI is perceived as a means of economic advancement, opening up new job opportunities and driving technological progress. This suggests that AI optimism is not necessarily about technological understanding but rather about perceived economic and social benefits.

Implications for global AI policy

The findings of this study have significant implications for AI governance worldwide. In WEIRD countries, policymakers must balance regulation with innovation, ensuring that AI concerns do not lead to excessive restrictions that hinder economic and technological growth. AI companies operating in these regions need to engage in more transparent communication to build trust and address ethical concerns proactively.

For non-WEIRD nations, the challenge lies in implementing safeguards without discouraging AI adoption. While optimism about AI can drive rapid technological progress, a lack of regulatory oversight could result in issues such as biased algorithms, data misuse, and worker exploitation. Policymakers in these regions must find ways to harness AI’s benefits while ensuring ethical deployment and accountability.

Ultimately, AI perceptions are shaped by broader socio-political and economic structures. Understanding these differences is crucial for crafting policies that foster responsible AI development while respecting the cultural and institutional contexts of each country. As AI continues to evolve, a one-size-fits-all regulatory approach may not be effective; instead, global AI governance should adapt to the distinct needs and concerns of different societies.

  • FIRST PUBLISHED IN: Devdiscourse