Public more concerned about AI risks than experts
A growing divide between how experts and the general public perceive artificial intelligence (AI) is emerging as a critical challenge to its widespread adoption. A new study finds that while AI specialists view the technology as highly beneficial and increasingly inevitable, the public remains more cautious, emphasizing risks and expressing uncertainty about its societal value.
The study, titled "Charting the AI Perception Gap: Divergent Views on Risk, Benefit, and Value Between Experts and the Public Challenge the Societal Acceptance of AI" and published in AI & Society, examines responses from AI specialists and members of the public across 71 real-world scenarios. It reveals a consistent divide in how each group interprets the risks, benefits, likelihood, and overall value of AI.
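As a rough illustration of the comparison the study describes, the sketch below computes a per-scenario "perception gap" as the difference between expert and public mean ratings on the four dimensions the paper examines. The data layout, 1-7 rating scale, and names such as `ratings` and `perception_gap` are assumptions made for illustration, not the study's actual instrument or analysis code.

```python
# Hypothetical sketch of the group comparison described in the study:
# each respondent rates each of the 71 scenarios on four dimensions, and
# the "perception gap" is the expert-minus-public difference in mean ratings.
# Data layout, scale, and field names are illustrative assumptions.

from statistics import mean
from collections import defaultdict

DIMENSIONS = ("risk", "benefit", "likelihood", "value")

# Each record: (group, scenario_id, {dimension: rating on an assumed 1-7 scale})
ratings = [
    ("expert", 1, {"risk": 3, "benefit": 6, "likelihood": 6, "value": 6}),
    ("expert", 1, {"risk": 2, "benefit": 5, "likelihood": 7, "value": 5}),
    ("public", 1, {"risk": 5, "benefit": 4, "likelihood": 4, "value": 3}),
    ("public", 1, {"risk": 6, "benefit": 3, "likelihood": 5, "value": 4}),
    # ... further respondents and the remaining 70 scenarios
]

def perception_gap(records):
    """Mean expert rating minus mean public rating, per scenario and dimension."""
    buckets = defaultdict(list)  # (group, scenario, dimension) -> list of ratings
    for group, scenario, scores in records:
        for dim, score in scores.items():
            buckets[(group, scenario, dim)].append(score)

    scenarios = {s for _, s, _ in buckets}
    return {
        s: {
            dim: mean(buckets[("expert", s, dim)]) - mean(buckets[("public", s, dim)])
            for dim in DIMENSIONS
        }
        for s in scenarios
    }

for scenario, gaps in perception_gap(ratings).items():
    print(scenario, {d: round(g, 2) for d, g in gaps.items()})
```

In this framing, the pattern the study reports would show up as a negative gap on risk (the public rates risk higher) alongside positive gaps on benefit, likelihood, and value (experts rate all three higher).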
Experts see opportunity, public sees risk in AI expansion
The study finds a clear and consistent divergence between expert and public evaluations of AI. Experts tend to rate AI systems as more beneficial, more valuable to society, and more likely to be implemented in the near future. In contrast, the general public assigns greater weight to potential risks and expresses more skepticism about the long-term impact of AI technologies.
This gap is not limited to a single domain but extends across sectors including healthcare, mobility, security, governance, and everyday consumer applications. Experts, drawing on technical knowledge and familiarity with system capabilities, often view AI as a tool for efficiency, innovation, and problem-solving. Their assessments reflect confidence in the trajectory of technological development and its capacity to address complex societal challenges.
The public, however, approaches AI from a different perspective. Without direct involvement in system design or deployment, individuals are more likely to evaluate technologies based on perceived consequences, ethical concerns, and potential disruptions to daily life. This leads to stronger emphasis on issues such as data privacy, job displacement, algorithmic bias, and loss of human control.
The study highlights that the divergence is not simply a matter of optimism versus pessimism. Instead, it reflects fundamentally different evaluation frameworks. Experts prioritize functionality and feasibility, while the public focuses on trust, safety, and social implications. This difference in perspective creates a structural gap that cannot be resolved through technical improvements alone.
Perceived usefulness and likelihood shape acceptance patterns
The researchers assess how perceived usefulness, likelihood of implementation, and societal value interact to shape attitudes toward AI. Experts consistently rate AI applications as more likely to be realized in the near future. This reflects their awareness of ongoing research, industry developments, and technological capabilities. Public respondents, by contrast, show greater uncertainty about whether many AI systems will actually be deployed, indicating a gap in awareness about the pace of technological change.
Perceived usefulness also differs significantly between the two groups. Experts tend to assign higher utility to AI systems, particularly in domains such as healthcare diagnostics, industrial automation, and infrastructure management. These applications are seen as offering tangible benefits, including improved efficiency, accuracy, and scalability.
The public, while recognizing potential advantages in some areas, is more selective in its acceptance. Applications that directly affect personal autonomy or involve sensitive data are viewed with greater caution. This suggests that acceptance is closely tied to perceived control and transparency rather than to purely functional outcomes.
The study further shows that societal value plays a crucial role in shaping attitudes. Experts often view AI as a driver of progress, capable of enhancing quality of life and addressing global challenges. Public respondents, however, are more likely to question whether these benefits will be distributed fairly or whether they will come at the cost of increased inequality and social disruption.
These differences highlight the importance of context in AI adoption. Acceptance is not determined solely by technological performance but by how individuals interpret a technology's implications within broader social and ethical frameworks.
Trust, communication, and policy gaps threaten AI adoption
The research identifies trust as a key factor underlying the perception gap. While experts generally express confidence in the reliability and governance of AI systems, the public exhibits lower levels of trust, particularly in areas involving high stakes or limited transparency.
This trust deficit is closely linked to communication challenges. The study suggests that existing narratives around AI often fail to bridge the gap between technical understanding and public perception. Technical explanations may emphasize capabilities and performance metrics, while public concerns revolve around accountability, fairness, and long-term consequences.
The findings indicate that simply providing more information about AI is not sufficient to increase acceptance. Instead, communication strategies must address the values and concerns that shape public attitudes. This includes acknowledging risks, explaining decision-making processes, and demonstrating how safeguards are implemented.
Policy frameworks also play a critical role in shaping trust. The study points out that regulatory uncertainty and inconsistent governance approaches can reinforce public skepticism. Clear guidelines, accountability mechanisms, and transparent oversight are essential for building confidence in AI systems.
The perception gap also has implications for innovation. If public resistance limits the adoption of certain technologies, it may slow the deployment of potentially beneficial applications. Conversely, ignoring public concerns could provoke a backlash that undermines trust in both technology and institutions.
- FIRST PUBLISHED IN: Devdiscourse