Public remains wary of AI while tech professionals embrace optimism

CO-EDP, VisionRI | Updated: 27-06-2025 09:23 IST | Created: 27-06-2025 09:23 IST

Public anxiety over artificial intelligence continues to rise, driven by fears of job loss and rapid technological disruption. Yet not all groups perceive the future of AI in the same way. A newly published study offers a revealing comparison between how IT professionals and the general public understand and evaluate AI's potential impact, especially in the workplace.

The study, titled “IT Professionals Versus the Public: Who’s More Optimistic About AI’s Future Impacts?” by Ngo Thai Duong, appears in SAGE Open and analyzes more than 2,700 online newspaper comments from Vietnam to gauge real-world sentiment on generative AI.

Using quantitative content analysis, it captures how sentiment varies not just between occupational groups but over time. While both IT professionals (ITPs) and public commenters share concern about AI-induced job displacement, their framing of the issue and their trust in its long-term outcomes differ significantly.
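The study's coded dataset and exact coding scheme are not reproduced in this article, but the kind of group-by-period sentiment comparison it describes can be sketched as follows. All data, labels, and the `sentiment_share` helper below are illustrative assumptions, not the author's actual method or figures:

```python
# Hypothetical coded comments: (group, year, sentiment label).
# In a real content analysis these labels would come from human coders
# applying a defined codebook to each newspaper comment.
comments = [
    ("ITP", 2023, "positive"),
    ("ITP", 2023, "negative"),
    ("ITP", 2024, "positive"),
    ("public", 2023, "negative"),
    ("public", 2024, "negative"),
    ("public", 2024, "negative"),
]

def sentiment_share(comments, group, year, sentiment):
    """Fraction of a group's comments in a given year carrying a given label."""
    subset = [c for c in comments if c[0] == group and c[1] == year]
    if not subset:
        return 0.0
    hits = sum(1 for c in subset if c[2] == sentiment)
    return hits / len(subset)

# Compare how negative sentiment trends differ between the two groups.
for group in ("ITP", "public"):
    for year in (2023, 2024):
        share = sentiment_share(comments, group, year, "negative")
        print(f"{group} {year}: negative share = {share:.2f}")
```

Tracking a share like this per group and per period is what lets the study claim that public pessimism intensified while ITP sentiment stayed stable.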

How do IT professionals and the public view AI differently?

The research reveals a widening perception gap between IT professionals and the general public. IT professionals tend to express a more nuanced and balanced perspective, recognizing both the opportunities and risks presented by AI. They acknowledge its potential to streamline work, improve efficiency, and create new employment categories, even as they accept the threat of automation in routine tasks.

On the other hand, the general public’s commentary increasingly skews toward pessimism, with rising fears about unemployment, social disruption, and a loss of human agency in decision-making processes. Public responses often frame AI as an uncontrollable force with unknown consequences, and they are more likely to view AI as a net threat rather than a tool for enhancement.

Interestingly, the public’s pessimism appears to have intensified over time, whereas IT professionals’ attitudes remained relatively stable. This stability, the study suggests, stems from the ITPs’ technical familiarity with generative AI, which allows them to contextualize risks without catastrophizing them. The general public, often less informed and more reliant on media narratives, displays greater susceptibility to alarmist views.

What drives the sentiment gap between IT experts and the general public?

Several structural and informational factors appear to drive the difference in perception. First and foremost is knowledge asymmetry. IT professionals possess deeper understanding of how generative AI models function, how they are deployed, and what their practical limitations are. This technical fluency enables them to critically assess AI’s impact and remain cautious but constructive in their outlook.

In contrast, the general public largely lacks access to technical insights and relies heavily on secondhand interpretations, most commonly from sensationalist media or social media commentary. This leaves non-experts vulnerable to fear-based narratives, especially those predicting large-scale job loss and economic upheaval.

Second, occupational context matters. IT professionals are more likely to be involved in AI’s creation or implementation, and thus tend to see it as an extension of their work rather than a replacement. Their positions often afford them job security and adaptability, buffering them from the employment threats that others perceive more directly.

Moreover, the public tends to assess AI’s risks not only in economic terms but also in moral and existential ones. There is a growing unease about dehumanization, loss of autonomy, and erosion of interpersonal trust. These themes are largely absent in the IT professionals’ discourse, which remains grounded in productivity, innovation, and practical risk management.

Another key finding is that emotional framing varies by group. IT professionals frequently use language reflecting curiosity, challenge, and opportunity. The public, by contrast, leans into emotional tones such as fear, skepticism, and occasionally, anger or helplessness.

What are the broader implications for AI policy and communication?

The growing divide between expert and public opinion has major implications for AI governance, policy, and social cohesion. If public sentiment continues to spiral toward distrust while developers remain confident and forward-looking, the risk of backlash, regulatory overreach, or civil resistance to AI implementation could increase.

The study suggests that closing the perception gap requires a multifaceted approach:

  • Public engagement and transparency: AI developers and policymakers must proactively communicate AI’s real capabilities and limits. This includes highlighting successful use cases, acknowledging legitimate concerns, and demystifying technical processes.
  • Inclusive policy development: The general public needs a voice in shaping how AI is regulated and introduced into public services. Participatory governance can reduce alienation and make citizens feel more in control of technological change.
  • AI literacy initiatives: Widespread education campaigns, especially targeting non-technical populations, can empower individuals to evaluate AI risks more rationally. Understanding the fundamentals of machine learning and automation may reduce susceptibility to exaggerated fears.
  • Balanced media narratives: News media and influencers play an outsized role in shaping AI discourse. Encouraging responsible journalism and fact-based reporting will be critical in cultivating a public climate that is informed, not inflamed.

AI perception is not just a technological issue; it is a social and psychological one. Trust, understanding, and agency will determine whether AI is welcomed as a tool for progress or rejected as a threat to human livelihood, the study asserts.

FIRST PUBLISHED IN: Devdiscourse