AI tops the list of unfamiliar digital dangers in new study

CO-EDP, VisionRI | Updated: 05-11-2025 10:11 IST | Created: 05-11-2025 10:11 IST

A new study has found that while digital threats are widely recognized, the rise of artificial intelligence (AI) is reshaping how people perceive cybersecurity risk, and not always in ways that make them safer. The research, titled “Perceiving Digital Threats and Artificial Intelligence: A Psychometric Approach to Cyber Risk”, examines how individuals understand, assess, and emotionally respond to digital dangers, from phishing to generative AI, and how these perceptions affect their cybersecurity behavior.

Published in the Journal of Cybersecurity and Privacy, the study’s key finding is as unsettling as it is revealing: even those with technical expertise often harbor misplaced confidence in their ability to manage threats. While traditional dangers such as malware or phishing are familiar to most users, AI-driven risks are seen as abstract, uncontrollable, and poorly understood, fueling confusion, anxiety, and in some cases, complacency.

AI tops the list of uncertain digital risks

The research surveyed 300 Italian workers across IT and non-IT sectors, asking them to rate seven types of digital hazards: social media data sharing, malware, phishing, internet browsing, online identity theft, credential theft, and AI-generated threats. Each was evaluated along two psychological dimensions derived from the psychometric paradigm: dread risk (how threatening and uncontrollable it feels) and unknown risk (how unfamiliar and poorly understood it is).
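To make the method concrete, the scoring logic of the psychometric paradigm can be sketched in a few lines: each respondent rates every hazard on the two scales, and each hazard is then placed in a dread-by-unknown space by its mean ratings. The ratings, scale range, and hazard names below are invented for illustration; the study's actual instrument and data are not reproduced here.

```python
# Hypothetical sketch of psychometric-paradigm scoring.
# Each respondent rates a hazard on two 1-7 scales: dread and unknown.
# A hazard's position in the dread x unknown space is its pair of means.
from statistics import mean

# ratings[hazard] = list of (dread, unknown) pairs, one per respondent.
# All numbers are made up for illustration.
ratings = {
    "phishing":   [(5, 2), (6, 3), (5, 2)],  # familiar but still feared
    "ai_threats": [(6, 6), (7, 5), (6, 6)],  # high dread AND high unknown
}

def profile(pairs):
    """Return (mean dread, mean unknown) for one hazard."""
    dread = mean(d for d, _ in pairs)
    unknown = mean(u for _, u in pairs)
    return dread, unknown

for hazard, pairs in ratings.items():
    d, u = profile(pairs)
    print(f"{hazard}: dread={d:.2f}, unknown={u:.2f}")
```

In this toy data the AI hazard scores high on both axes, while phishing scores high on dread but low on unknown, which is the qualitative pattern the study reports.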

The results show that AI-driven threats occupy a unique space of ambiguity, combining high dread with low familiarity. Respondents, regardless of their technical background, viewed AI as both powerful and unpredictable, an emerging force that could evolve faster than their ability to understand or control it. In contrast, more conventional cyber threats like phishing or malware were perceived as well-known and manageable, even if still dangerous.

This difference, the authors argue, underscores a critical shift in the psychology of cybersecurity. Traditional threats are now normalized; users have developed mental models and coping strategies for them. But AI is different: it operates on scales and levels of autonomy that most users struggle to conceptualize. This perception gap can lead to underpreparedness and behavioral paralysis, where individuals recognize a danger but feel incapable of responding effectively.

Interestingly, even IT professionals, who one might expect to exhibit higher confidence, demonstrated elevated levels of optimism bias, the belief that cyber incidents are more likely to affect others than themselves. This misplaced optimism, paired with AI’s perceived complexity, forms what the study calls a “dual-risk paradox”: people are both aware of digital dangers and psychologically ill-equipped to act on that awareness.

Four profiles define how people perceive cyber risk

The study divides users into four distinct psychological profiles based on their perceptions and behaviors toward digital threats. This typology reveals why blanket cybersecurity awareness campaigns often fail to change real-world behaviors.

  1. Vigilant Realists – These users combine high awareness with solid technical knowledge. They recognize cyber threats as serious but manageable, often practicing good digital hygiene. However, their competence can sometimes lead to underestimating personal vulnerability, creating a false sense of security.

  2. Under-concerned Optimists – Characterized by confidence without caution, these individuals believe that their experience or intuition shields them from most risks. They are prone to ignoring basic safeguards, such as multi-factor authentication or software updates, and often dismiss cybersecurity training as unnecessary.

  3. Anxious and Uncertain Users – This group perceives digital threats as overwhelming and uncontrollable. They are highly fearful but lack the technical or cognitive tools to protect themselves effectively. Despite their concern, they engage less in proactive measures, reflecting a disconnect between fear and action.

  4. Concerned Bystanders – Occupying a middle ground, these users express moderate awareness and anxiety but rarely translate either into consistent security practices. Their behaviors tend to be reactive, spurred only by recent news of cyber incidents or workplace policies.

According to the authors, these profiles illustrate that cybersecurity is not purely a matter of knowledge or awareness; it is fundamentally shaped by psychological perception, emotional response, and personal bias. Effective interventions, therefore, must go beyond technical instruction to address the cognitive and emotional underpinnings of behavior.

Why AI challenges traditional cybersecurity psychology

The emergence of generative AI technologies adds an entirely new layer of complexity to how people interpret digital threats. The study identifies AI not only as a technological disruptor but as a psychological disruptor, reshaping the mental frameworks through which individuals conceptualize danger and safety in the digital realm.

Unlike phishing or malware, which follow visible patterns, AI-driven risks, such as deepfakes, automated disinformation, or adaptive cyberattacks, operate in the shadows of perception. Respondents in the study described AI as “uncontrollable,” “opaque,” and “alien.” This sense of unknowability, even among experts, amplifies both fear and fascination. The result is a paradoxical mix of vigilance and detachment: people acknowledge AI’s risks but tend to disengage from them, assuming that mitigation is the responsibility of institutions or technical specialists.

This psychological distancing poses a serious obstacle to cybersecurity culture. When users perceive a threat as too complex to understand, they are less likely to adopt preventive behaviors. The authors note that education strategies must adapt to this new cognitive landscape, translating abstract AI risks into tangible, relatable examples.

Moreover, optimism bias remains a persistent blind spot. IT professionals, who were expected to show stronger awareness, often exhibited higher complacency than non-IT participants. This suggests that expertise does not eliminate bias; in some cases, it may reinforce it, as familiarity breeds a false sense of control.

First published in: Devdiscourse