AI-generated deepfakes trigger trauma, anxiety and false memories across users

CO-EDP, VisionRI | Updated: 04-12-2025 13:46 IST | Created: 04-12-2025 11:11 IST
Representative Image. Credit: ChatGPT

Researchers warn that the risks of AI-generated synthetic media extend far beyond disinformation and political disruption, reaching deeply into personal mental health, social dynamics and the stability of digital trust.

A new review titled “The Harm of Deepfakes: A Scoping Review of Deepfakes’ Negative Effects on Human Mind and Behavior”, published in AI & Society, brings together existing evidence from 28 studies selected from an initial pool of 1,143 academic papers. The review lays out the first structured map of how deepfakes cause measurable harm across cognitive, emotional, behavioral and societal dimensions.

What emerges from the analysis is a pattern far more complex than simple deception. Deepfakes influence how people view themselves, how they behave online, whether they trust institutions, and how they feel after being targeted. The researchers underline that while public discourse has focused heavily on political misinformation, the most severe harms arise in personal contexts such as image-based sexual abuse and exposure to deceptive content that manipulates attitudes or triggers distress.

Widespread concerns and the growing psychological toll

The review shows that concerns about deepfakes are now widespread across multiple populations and countries. Surveys included in the study indicate that people are increasingly uneasy about the ease with which synthetic media can imitate real individuals, replicate voices, create false statements or fabricate compromising scenarios. These concerns are not abstract; the evidence shows that people fear manipulation, reputation damage, privacy loss and political influence by actors using deepfake tools.

Several studies in the review indicate that awareness of deepfakes correlates with increased anxiety and skepticism toward online content. The review reports that many participants believe platforms should take stronger responsibility for preventing and regulating synthetic media, and that those who worry more about deepfakes tend to assign higher accountability to digital platforms. At the same time, heightened concern often decreases an individual’s sense of personal responsibility, suggesting that the public sees the threat as systemic rather than simply a matter of personal vigilance.

The findings highlight consistent discomfort with AI-driven emotional manipulation. Researchers note that the unpredictability of synthetic media fuels doubt about whether any digital content is real, creating an environment where uncertainty itself becomes a psychological stressor. Studies from multiple countries found that younger people may be more familiar with deepfakes but are not necessarily less concerned about their potential effects.

Experts interviewed for the included studies also emphasized broader risks such as data theft, identity misuse, and the exploitation of vulnerable populations. The review cites evidence that many individuals, including those with technical knowledge, view deepfakes as an easy tool for malicious actors. This contributes to persistent worry that deepfake technology will be weaponized in personal, political and social spheres.

Taken together, these findings show a clear psychological trend: awareness alone does not empower people. Instead, it often intensifies anxiety and reduces confidence in one’s ability to navigate digital environments safely.

Deception’s real-world effects on memory, behavior and decision-making

The review identifies deception as a central mechanism through which deepfakes exert harm. The most robust empirical evidence concerns how people behave after viewing deepfake content, how their attitudes shift, and how false memories of fabricated events take hold.

One of the key behavioral risks is the tendency to share deepfakes on social media. Several large-scale studies show that people are more likely to share video deepfakes than audio-only or low-quality manipulated content, often because the realism of video increases perceived credibility. Factors associated with a higher likelihood of sharing include fear of missing out, reduced self-regulation, low cognitive ability and frequent social media use. The review notes that even high cognitive ability can, in some circumstances, increase sharing intention if a deepfake is unlabeled, reflecting the complexity of how people interpret synthetic media.

Attitude manipulation represents another significant harm. The review highlights research showing that political deepfakes can shift perceptions of politicians and political parties, especially when combined with microtargeting strategies that tailor content to personal traits or demographic groups. Microtargeting increases the persuasive power of synthetic content by aligning it with the viewer’s pre-existing beliefs. These shifts are often comparable to, or even indistinguishable from, reactions to real political content, underscoring the threat posed by synthetic media to democratic discourse.

False memories are also part of the deception-based harms. Experiments presented in the review show that a substantial portion of participants recalled events that never occurred after viewing deepfake videos or manipulated news stories. Rates of false memory formation were similar whether the deceptive content was video-based or accompanied by photographs, suggesting that deepfakes do not necessarily outperform other forms of misinformation in distorting memory. They do, however, feed an already fragile information environment in which false recollections can be easily planted.

Financial deception features as a smaller but still notable category. One included experiment showed that deepfake-enhanced financial news influenced investment decisions in positive or negative directions depending on the nature of the content. The realism of the deepfake and the viewer’s reliance on intuitive judgment both amplified susceptibility to fraud.

Across these deception-based harms, the underlying pattern is clear: deepfakes can influence how people act, what they believe and how they recall information, even when individuals are aware that manipulated media exists.

Severe mental health impacts and the collapse of media trust

The review identifies the most severe harms in the context of image-based sexual abuse (IBSA), particularly the creation and circulation of nonconsensual sexual deepfakes. The evidence shows that victims suffer intense psychological distress, including symptoms associated with trauma. Reports from victims include fear, humiliation, feelings of degradation, intrusive memories, avoidance behavior, depression, anxiety, physical stress symptoms such as vomiting and elevated blood pressure, and in some cases suicidal thoughts.

Importantly, the study cites cases where victims experienced intrusive memories of events portrayed in deepfakes even though the events never occurred. This psychological phenomenon parallels trauma reactions typically associated with real experiences, indicating that deepfake IBSA can produce harms comparable to severe interpersonal violations. Victims also reported long-term impacts on daily functioning, professional life and social relationships, reflecting the gravity of synthetic sexual exploitation.

Beyond individual victimization, the review finds that deepfakes erode collective trust in media. Experimental studies reveal that exposure to deepfakes leads to lower credibility judgments toward news content in general, not just the manipulated material. Even simple warnings about deepfakes can reduce trust across the board without improving detection accuracy. This generalization of distrust reflects a broader societal harm: as synthetic media becomes more common, it becomes harder for people to rely on information channels essential to public life.

The research further shows that discovering one has been deceived by a deepfake reliably decreases self-efficacy and heightens anxiety about AI. Studies included in the review found that feedback-based deepfake detection training increased distress, suggesting that attempts to educate the public may have unintended emotional consequences if not designed with care.

This interplay of psychological distress, loss of personal agency and reduced trust in digital systems underscores a systemic risk. Deepfakes are not only tools of individual manipulation; they threaten the basic functioning of information ecosystems by weakening confidence in the authenticity of all media.

First published in: Devdiscourse