Public wants AI to prioritize accuracy and safety, but rejects value-shaping moderation

CO-EDP, VisionRI | Updated: 12-12-2025 12:59 IST | Created: 12-12-2025 12:59 IST

How artificial intelligence (AI) should be governed is emerging as a defining political issue, with new research showing that people in Germany and the United States hold widely differing expectations for how AI systems should be aligned, moderated and controlled.

A comprehensive cross-national study titled "Public Opinion on the Politics of AI Alignment: Cross-National Evidence on Expectations for AI Moderation From Germany and the United States," published in Social Media + Society, examines in detail what citizens believe AI systems should prioritize: accuracy, safety, bias mitigation or the promotion of broader societal values.

The study gathers responses from 1,800 people in each country and reveals clear patterns. Accuracy and safety consistently receive the strongest support, while interventions aimed at mitigating bias or shaping aspirational, value-driven outputs generate more skepticism, variation and ideological polarization. The findings expose an important tension between public expectations and the political decisions that governments, companies and regulators will soon face as AI systems become embedded in everyday communication, decision-making and digital infrastructure.

Accuracy and safety dominate public priorities, but social value shaping remains divisive

Across both countries, the study finds a wide consensus: people want AI systems to be accurate, reliable and safe. These two goals receive the highest support regardless of gender, ideology, education or personal experience with AI. Respondents expect AI systems to avoid generating harmful or dangerous outputs and to provide factual information, highlighting a baseline level of trust the public expects before AI can be considered acceptable in democratic societies.

Support becomes more complicated when the discussion shifts toward bias mitigation and aspirational societal values. The study shows that although many respondents agree that fairness is important, they express more hesitation about interventions that could be seen as moderating or reshaping content based on political or social judgments. Aspirational imaginaries (AI models shaped to promote specific visions of desirable societal values) show the greatest variation in support, suggesting that the public views value-oriented AI alignment as both ethically charged and politically sensitive.

National differences further sharpen these distinctions. Respondents in the United States show higher support across all categories except aspirational imaginaries, where both countries show similar levels of caution. Higher AI usage in the U.S. contributes to this pattern. American respondents report using AI tools more frequently in daily life, which appears to increase familiarity and trust. German respondents, by comparison, show higher levels of skepticism, reflecting a more cautious national discourse around emerging technologies.

The study underscores that attitudes toward AI moderation differ significantly from attitudes toward speech moderation. Although free speech concerns remain important, many respondents, including those who strongly value free expression, still support strong accuracy and safety interventions. This suggests that people do not view AI-generated content as equivalent to the political speech of individuals. Instead, they see AI as a tool that must meet a higher standard of reliability and cannot be left to operate without safeguards.

Political ideology and personal experience shape expectations in distinct ways

While national context establishes the broader landscape, individual characteristics strongly shape how people evaluate specific alignment goals. The study finds, for example, that personal experience with AI predicts higher support for all forms of alignment. Frequent users are more comfortable with technical and value-driven interventions, likely because they have observed limitations or risks in everyday use. This finding holds in both Germany and the United States, though it is more pronounced in Germany, where overall AI experience is lower.

Political ideology provides another important dividing line. In the United States, supporters of the Democratic Party express higher support for accuracy, safety, bias mitigation and aspirational imaginaries. Republicans express more caution, particularly regarding interventions viewed as social value–shaping. In Germany, supporters of the Green Party show similarly strong alignment-support patterns, whereas supporters of more conservative parties show more restraint.

Gender differences also play a significant role. The study finds that women show stronger support for safety-focused interventions and for bias mitigation. This finding aligns with broader research showing that women often experience higher levels of digital harassment, discrimination and online risk, leading to greater demand for protective measures in algorithmic environments.

Free speech attitudes create another layer of complexity. Respondents with stronger pro-free-speech views show less support for aspirational alignment but still support accuracy and safety. This demonstrates that public opinion on AI moderation does not map neatly onto traditional free speech debates. Instead, people differentiate between what AI systems should produce and what humans should be allowed to express, suggesting a more nuanced understanding of algorithmic communication than many policymakers assume.

These findings show that public opinion on AI alignment is neither monolithic nor predictable. Policymakers cannot assume that citizens will support or reject alignment strategies uniformly. Instead, support depends heavily on how clearly the alignment goal can be justified in terms of accuracy, safety or fairness, and how much the intervention touches on contested human values.

AI governance faces legitimacy challenges as public expectations evolve

The authors warn that understanding public opinion is not simply a matter of political strategy; it is essential for maintaining legitimacy in AI governance. As AI systems exert increasing influence over communication, information quality and decision-making, public acceptance becomes critical to the success of regulatory approaches. Alignment decisions made behind closed doors or without citizen input risk eroding trust, especially when AI outputs intersect with sensitive political or cultural issues.

One of the study’s key insights is that AI alignment cannot be treated as a purely technical matter. Accuracy and safety interventions may be widely accepted, but bias mitigation and aspirational imaginaries raise ethical and political questions that require open public deliberation. As governments develop policies around AI safety, content moderation and system transparency, they must account for the fact that citizens have different expectations depending on both their national setting and personal experience.

The opacity of AI development pipelines is another major concern. The authors highlight that most people do not know how AI systems are trained, what moderation or safety layers exist, or how alignment decisions are made. This lack of transparency deepens the risk of public mistrust. Citizens may perceive alignment efforts as overreach if they are not accompanied by clear communication about their purpose and scope. Conversely, insufficient alignment could lead to concerns about harmful, biased or inaccurate outputs that undermine confidence in AI-enabled systems.

Cross-national differences further complicate the governance landscape. Policymakers in the United States operate in a context where citizens are more familiar with AI and more supportive of its integration into daily life. In Germany, where public caution is stronger, policymakers face the challenge of building trust while addressing concerns about technological risk. These contextual dynamics must shape regulatory priorities, communication strategies and investment in public education.

First published in: Devdiscourse