AI perception crisis: Public struggles to see AI as a tool, not a threat

CO-EDP, VisionRI | Updated: 03-06-2025 18:17 IST | Created: 03-06-2025 18:17 IST

A new cognitive psychology study has revealed a complex and often contradictory public perception of artificial intelligence (AI), reflecting both optimism and anxiety about its societal impact. Drawing on a thematic analysis of 157 AI-related YouTube videos, the research explores how the public discusses and emotionally engages with AI technology.

The study, titled “Exploring Societal Concerns and Perceptions of AI: A Thematic Analysis through the Lens of Problem-Seeking” by Naomi Omeonga wa Kayembe, was published as a preprint in May 2025. It frames societal perceptions of AI through a distinction between “problem-seeking” and “problem-solving”, a conceptual lens that separates human goal identification from AI’s goal execution. While humans determine what is worth solving based on embodied and emotional cues, AI operates strictly within parameters set by others, solving problems without the capacity to generate its own goals.

What are the public's main concerns about AI?

Using YouTube metadata and qualitative thematic analysis, the study identifies eleven key themes reflecting societal concerns. These range from well-documented anxieties such as job displacement, data privacy, and misinformation to more existential fears about autonomy and ethical alignment.
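The paper reports its qualitative codes rather than a software pipeline, but a rough sense of how such metadata might be tagged in a first pass can help readers picture the method. The Python sketch below matches video titles and descriptions against keyword lists; every field name and keyword here is an illustrative assumption, not the study's actual codebook.

```python
# Hypothetical first-pass theme tagger over YouTube video metadata.
# The metadata fields ("title", "description") and the keyword lists
# are assumptions for illustration, not the study's coding scheme.

THEME_KEYWORDS = {
    "job displacement": ["job", "unemployment", "automation", "replace"],
    "privacy and data security": ["privacy", "surveillance", "data"],
    "misinformation": ["deepfake", "fake news", "misinformation"],
    "ethical and existential risk": ["superintelligence", "alignment", "control"],
}

def tag_themes(video: dict) -> list[str]:
    """Return every theme whose keywords appear in the video's metadata."""
    text = f"{video.get('title', '')} {video.get('description', '')}".lower()
    return [theme for theme, words in THEME_KEYWORDS.items()
            if any(word in text for word in words)]

videos = [
    {"title": "Will AI take your job?", "description": "Automation and the future of work"},
    {"title": "AI surveillance explained", "description": "How your data is collected"},
]
for v in videos:
    print(v["title"], "->", tag_themes(v))
```

In practice a qualitative thematic analysis like the study's relies on human judgment that keyword matching cannot replicate; a tagger of this kind would at most serve as a screening step before manual coding.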

The theme of AI advancements and innovations dominated public discourse, revealing a dual sentiment of excitement and fear. Videos analyzed showed curiosity about AI’s capacity to transform industries and daily life but also expressed apprehension over its unpredictability and rapid evolution.

Privacy and data security emerged as another major concern, with fears centered on surveillance, data misuse, and inadequate regulations. Closely linked were themes of job displacement and economic inequality, where the automation of labor was viewed as both a threat to individual livelihood and a driver of wider societal gaps.

One of the most pressing areas of public anxiety is ethical and existential risk. Here, themes included AI autonomy, the opacity of algorithmic decisions, and concerns about superintelligent systems acting outside human control. This was often intertwined with content about misinformation and manipulation, where viewers feared AI’s potential to shape public opinion or disseminate false narratives.

Additional themes included regulation and governance, accessibility, and enhanced productivity, reflecting a spectrum of public engagement - from hopeful anticipation to calls for stronger oversight.

How do these themes reflect public attitudes toward AI?

The study’s mixed-methods design revealed a striking interplay between emotional sentiment and thematic content. Videos expressing optimism and highlighting creativity and efficiency tended to gain higher engagement, suggesting strong public interest in AI’s practical applications. Positive responses were especially evident in discussions of AI in education, productivity tools, and design innovation.

Conversely, themes grounded in skepticism, fear, and criticism were more prevalent in videos dealing with ethical dilemmas, job loss, and surveillance. Such concerns were often expressed through speculative content questioning AI’s long-term role in society.

Significantly, the analysis found that technical or abstract content, particularly around deep learning and future-oriented speculation, had lower engagement metrics. This suggests that broad audiences are more responsive to relatable, emotionally resonant content than to complex theoretical discussions.
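To picture what such an engagement comparison involves, the minimal sketch below groups a like-rate proxy by sentiment label. The sample records and the choice of likes-per-view as the engagement metric are assumptions, since the paper does not publish its raw figures.

```python
# Hedged sketch: comparing mean engagement across sentiment groups.
# The records and the likes-per-view proxy are invented for illustration.
from collections import defaultdict
from statistics import mean

videos = [
    {"theme": "productivity tools", "sentiment": "optimistic", "views": 120_000, "likes": 9_400},
    {"theme": "job displacement", "sentiment": "fearful", "views": 80_000, "likes": 3_100},
    {"theme": "deep learning theory", "sentiment": "neutral", "views": 15_000, "likes": 600},
]

by_sentiment = defaultdict(list)
for v in videos:
    # Like rate (likes per view) stands in for engagement here.
    by_sentiment[v["sentiment"]].append(v["likes"] / v["views"])

for sentiment, rates in by_sentiment.items():
    print(f"{sentiment}: mean like rate = {mean(rates):.3%}")
```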

Despite widespread discourse on AI’s transformative potential, the study identified a conspicuous absence of messaging that explicitly positions AI as a supportive tool for augmenting human intelligence. Instead, popular narratives often cast AI as a stand-alone actor, either savior or threat, thus amplifying both fascination and fear.

How can the concept of “problem-seeking” reframe societal concerns about AI?

The study’s core theoretical contribution is the differentiation between problem-solving (AI’s strength) and problem-seeking (a uniquely human capacity). Whereas AI excels at optimizing predefined objectives, it lacks the experiential grounding needed to determine which problems are worth solving. This limitation, the study argues, is rooted in the absence of embodied cognition, emotional awareness, and moral agency.

Human intelligence, by contrast, is inherently tied to bodily experience, emotions, and context-sensitive reasoning. The research introduces a conceptual framework that aligns “problem-seeking” with internal motivations, like safety, fulfillment, or social belonging, while associating “problem-solving” with external, strategy-based execution. AI, devoid of internal states or affective cues, cannot independently generate or prioritize meaningful goals.
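The distinction can be made concrete with a small sketch (not from the paper): the routine below performs pure problem-solving, minimizing whatever objective it is handed, while the problem-seeking step, deciding that this objective matters at all, remains with the human who writes it.

```python
# Illustrative sketch of the paper's distinction. The optimizer is pure
# "problem-solving": it executes a goal it did not choose. The objective
# itself is an arbitrary example, chosen by a human outside the loop.

def minimize(objective, x0: float, lr: float = 0.1, steps: int = 200) -> float:
    """Plain gradient descent with a numerical gradient: goal execution only."""
    x = x0
    eps = 1e-6
    for _ in range(steps):
        grad = (objective(x + eps) - objective(x - eps)) / (2 * eps)
        x -= lr * grad
    return x

# "Problem-seeking" happens here: a human decides this goal is worth pursuing.
human_chosen_objective = lambda x: (x - 3.0) ** 2

print(minimize(human_chosen_objective, x0=0.0))  # converges near 3.0
```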

This insight provides a crucial lens for interpreting public concerns. Fears of AI replacing human roles, making unethical decisions, or manipulating users can be traced to its lack of problem-seeking capabilities. These risks become magnified when AI is mistakenly perceived as possessing autonomous intent rather than serving as a goal-oriented instrument designed by humans.

The study calls for a reframing of public discourse that emphasizes AI’s role as an enhancer of human cognition rather than a replacement for it. To this end, it advocates widespread emotional and digital literacy, suggesting that understanding our own motivations and emotional needs is critical for responsible AI use.

In tandem with technical safeguards, the study proposes fostering “ethical scaffolding” for AI deployment. This includes promoting public awareness of ethical AI, educating users on how to manage their interactions with AI tools, and encouraging developers to design systems that align with human values.

By making ethical behavior more rewarding, both socially and technologically, the study suggests that AI could reinforce positive societal norms through a feedback loop of mutual alignment. This vision depends not on AI developing consciousness, but on humans improving their ethical guidance of AI systems.

First published in: Devdiscourse