Americans fear AI risks, back bans on advanced systems
Amidst the rise of artificial intelligence (AI) technology, the American public is voicing strong support for regulatory interventions. New empirical findings suggest that trust in government institutions and perceived risks posed by AI systems are crucial predictors of whether citizens back policies aimed at slowing or halting advanced AI development.
These insights come from a study titled "Public Opinion and the Rise of Digital Minds: Perceived Risk, Trust, and Regulation Support," forthcoming in Public Performance & Management Review. Drawing on a nationally representative sample of U.S. adults surveyed in the 2023 Artificial Intelligence, Morality, and Sentience (AIMS) study, the research offers a foundational picture of public attitudes toward generative AI regulation, an increasingly urgent question amid the accelerated deployment of systems like ChatGPT and other large language models (LLMs).
How do Americans perceive AI risks, and how does this shape support for regulation?
The public overwhelmingly views AI as a source of societal risk. Survey respondents reported moderate to high concern about AI-induced harms, including threats to personal privacy, discrimination, social surveillance, misinformation, and even existential risks. Perceived risks significantly influenced support for regulatory action.
When asked about policy preferences, most Americans favored both "soft" and "strong" approaches to AI regulation. Soft regulations included measures like public campaigns and government action to slow development, while strong regulations involved global bans on artificial general intelligence (AGI) and prohibitions against data centers capable of developing AI systems surpassing human intelligence.
Statistical modeling revealed that perceived AI risk was the strongest predictor of support for both policy types. For every standard deviation increase in perceived risk, support for slowing AI increased by 0.49 standard deviations, and support for banning increased by 0.59 standard deviations. These moderate to large effect sizes underscore a widespread public desire for safety-first AI governance strategies.
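The coefficients above are standardized, "SD-unit" effects. As a rough illustration of how such estimates are typically produced, the sketch below z-scores a synthetic outcome and predictor before fitting an ordinary least squares model; the variable names and data are invented for illustration and are not drawn from the AIMS dataset or the authors' code.

```python
# Minimal sketch (not the study's code): standardized regression coefficients.
# All variable names and data below are synthetic and purely illustrative.
import numpy as np
import pandas as pd
import statsmodels.api as sm

def standardized_ols(df, outcome, predictors):
    """Z-score the outcome and predictors, then fit OLS.

    The resulting coefficients read as 'SD change in the outcome per
    1 SD change in the predictor', the same scale as the 0.49 / 0.59
    figures reported above.
    """
    cols = [outcome] + predictors
    z = (df[cols] - df[cols].mean()) / df[cols].std()
    X = sm.add_constant(z[predictors])
    return sm.OLS(z[outcome], X).fit()

# Toy survey-like data: support rises with perceived risk plus noise.
rng = np.random.default_rng(0)
n = 1000
risk = rng.normal(size=n)
toy = pd.DataFrame({
    "perceived_risk": risk,
    "support_slowdown": 0.5 * risk + rng.normal(scale=0.8, size=n),
})

fit = standardized_ols(toy, "support_slowdown", ["perceived_risk"])
print(fit.params.round(2))  # coefficient near 0.5 SD per SD by construction
```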
The researchers interpret this pattern through the lens of human survival instincts. As with attitudes toward climate change, nuclear power, and pandemic control, people tend to favor regulation when they perceive technologies as a threat to societal or personal well-being.
What roles do trust in government and industry play in shaping regulation preferences?
Institutional trust emerged as another significant factor, though its effects varied depending on the target of that trust. Trust in government was positively correlated with support for regulation, suggesting that when people believe regulatory agencies are competent and capable, they are more willing to back both soft and strong policy measures.
Trust in government predicted a 0.23 SD increase in support for slowdown policies and a 0.20 SD increase in support for bans in the primary model. Even after controlling for demographic variables and exposure to AI, government trust remained a consistent driver of support.
In contrast, trust in AI companies produced the opposite effect. People who trusted AI developers such as OpenAI were significantly less likely to support regulatory action. Specifically, higher industry trust correlated with a 0.27 SD decline in support for slowing AI and a 0.11 SD decline in support for bans. When perceived risk was added to the model, the negative effect of company trust on support for bans disappeared, suggesting that trust in corporate actors may influence regulation attitudes indirectly, by shaping how risky AI is perceived to be.
Trust in AI technology itself, distinct from trust in developers or regulators, also predicted reduced regulatory support. Respondents who had confidence in AI systems such as chatbots, LLMs, and robotic platforms showed substantially less interest in regulation. For every SD increase in AI trust, support for slowdowns dropped by 0.37 SD, and support for bans dropped by 0.39 SD. These findings reveal a critical fault line in public opinion: people who embrace AI as a reliable tool are less likely to view its development as needing intervention.
Who supports AI regulation, and what demographic patterns emerge?
The study also found demographic differences in support for AI regulation. Women and older adults were more likely to favor both slowdowns and bans, while high-income individuals showed stronger support for bans specifically. Black Americans expressed higher support for both forms of regulation compared to Asian Americans (the reference group), while Hispanic, Indigenous, White, and “Other” racial categories also showed elevated support for bans.
Interestingly, exposure to AI, such as interacting with AI systems or encountering AI-related news, was associated with less support for soft regulation, but had no significant effect on preferences for bans. This suggests that increased familiarity with AI may desensitize some individuals to its risks or boost their confidence in its benefits.
Contrary to expectations, education level did not significantly influence regulation preferences. Political orientation had limited effects, although more conservative respondents were somewhat more inclined to support bans on AI, particularly in the absence of trust or risk perception variables.
The findings further validate a theoretical model proposed by the researchers that includes four clusters of predictive variables: trust in government, trust in industry, trust in AI, and perceived risk. All four clusters showed statistically significant effects on regulation preferences, with the strongest and most consistent influence coming from perceived risk.
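To make the structure of that model concrete, the sketch below fits the four predictor clusters in a single regression on simulated data; the variable names (trust_government, trust_companies, trust_ai, perceived_risk) and the simulated weights are assumptions for illustration, not the study's actual specification or results.

```python
# Illustrative sketch of a four-cluster regression, not the authors' model.
# Data are simulated so that perceived risk carries the largest weight,
# mirroring the qualitative pattern described in the article.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 500
sim = pd.DataFrame({
    "trust_government": rng.normal(size=n),
    "trust_companies": rng.normal(size=n),
    "trust_ai": rng.normal(size=n),
    "perceived_risk": rng.normal(size=n),
})
sim["support_ban"] = (
    0.20 * sim["trust_government"]
    - 0.10 * sim["trust_companies"]
    - 0.35 * sim["trust_ai"]
    + 0.55 * sim["perceived_risk"]
    + rng.normal(scale=0.6, size=n)
)

model = smf.ols(
    "support_ban ~ trust_government + trust_companies + trust_ai + perceived_risk",
    data=sim,
).fit()
print(model.params.round(2))
```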
Implications for policymakers navigating AI governance
The research offers a rare, data-driven baseline for understanding how public attitudes may shape the future of AI policy. As generative AI becomes more powerful and ubiquitous, governments worldwide face growing pressure to craft legislation that aligns with citizen concerns.
For the U.S., which has generally adopted a pro-innovation stance on AI, the findings suggest a potential mismatch between government policy and public sentiment. The AIMS data indicate that a majority of Americans favor decelerating the development of advanced AI technologies and even implementing global bans on certain capabilities. In this context, the study serves as a wake-up call for lawmakers who may underestimate public desire for cautious, ethically grounded regulation.
Moreover, the divergence between trust in government and trust in industry highlights a governance dilemma. While citizens are open to regulatory interventions, their skepticism of corporate actors implies limited support for self-regulatory frameworks or voluntary ethical codes. Instead, the findings reinforce the need for transparent, accountable public institutions to take the lead in overseeing AI development.
The researchers acknowledge the limitations of their study, including its U.S.-centric focus and the potential fluidity of public opinion in a rapidly changing technological landscape. Nonetheless, the study provides an essential foundation for further investigations into the evolving relationship between AI, public perception, and governance.
First published in: Devdiscourse

