From skepticism to cynicism: How AI misinformation is reshaping news consumption

CO-EDP, VisionRI | Updated: 06-03-2025 10:21 IST | Created: 06-03-2025 10:21 IST
Representative Image. Credit: ChatGPT

The rapid evolution of artificial intelligence (AI) has transformed how we consume information, yet it has also introduced a crisis of trust. With deepfake technology advancing to create hyper-realistic images and videos, audiences are increasingly questioning the authenticity of the news they encounter. In an era where manipulated media circulates freely, the ability to discern real from fake is crucial. However, confidence in that ability - termed AI self-efficacy - is not evenly distributed among users, leading to varying degrees of skepticism and cynicism.

A recent study titled "When Seeing is Not Believing: Self-Efficacy and Cynicism in the Era of Intelligent Media" by Qiang Liu, Lin Wang, and Mengyu Luo from the University of Shanghai for Science and Technology, published in Humanities and Social Sciences Communications, investigates this phenomenon. The study explores how individuals' confidence in identifying AI-generated content influences their trust in digital news and their overall perception of media authenticity.

The crisis of AI self-efficacy and its role in news skepticism

The study defines AI self-efficacy as an individual’s belief in their ability to recognize, understand, and assess AI-generated content. In a digital landscape flooded with deepfake media, individuals with lower AI self-efficacy tend to be more cynical, doubting not only manipulated content but also legitimate news. This skepticism stems from a perceived inability to distinguish real from fake, leading to a disengagement from critical media analysis.

To test this, the researchers conducted two experiments with a total of 1,826 participants. In one experiment, participants were assigned AI recognition tasks and then informed (regardless of their actual performance) that their ability to discern deepfakes was low. The results revealed that participants who believed they were poor at detecting AI-generated content exhibited higher cynicism toward all news sources. This suggests that declining confidence in one’s ability to navigate digital misinformation can lead to a broader erosion of trust in media.

How news content influences cynicism

Not all news is received in the same way. The study highlights that the relevance and risk level of news content influence the extent of skepticism displayed by audiences. Individuals were more likely to exhibit cynicism toward low-risk and highly relevant news compared to high-risk, personally distant content.

For example, when participants encountered deepfake content related to entertainment news or lifestyle topics, they were more likely to disengage from discerning its authenticity. In contrast, when presented with AI-generated news about war, disasters, or major political events, they made a greater effort to assess its validity. This suggests that personal stakes and perceived consequences shape how actively people engage with news verification.

The psychological impact of deepfake exposure

One of the most striking findings of the study is the concept of “reality apathy.” Repeated exposure to synthetic content - especially when individuals struggle to detect its artificial nature - leads to a passive acceptance of misinformation. Rather than continually attempting to verify news, users with low AI self-efficacy may abandon efforts altogether, assuming that much of what they encounter online is manipulated.

This has profound implications for media literacy and public discourse. The study’s findings suggest that AI-driven misinformation not only deceives people in the moment but also conditions them to expect deception in the future, diminishing trust in all forms of news. In this context, cynicism becomes a defensive mechanism, where individuals detach emotionally and cognitively from media content rather than actively scrutinizing it.

Addressing the challenge: The need for AI literacy

The erosion of trust in news due to AI-generated misinformation presents a significant societal challenge. As deepfake technology becomes more sophisticated, merely identifying fake content is not enough; there is a pressing need to build both AI literacy and the public's confidence in exercising it.

Educational initiatives must go beyond simple fact-checking techniques and focus on building resilience against manipulation. Social media platforms and news organizations also play a crucial role in developing transparent verification systems that help users differentiate between authentic and AI-generated content.

Ultimately, the study underscores the importance of maintaining not just the technical ability to detect deepfakes, but also the psychological confidence to engage critically with digital media. If left unchecked, widespread cynicism could lead to a society where truth becomes irrelevant, and skepticism dominates public discourse.

By fostering AI literacy and restoring trust in credible sources, we can empower individuals to navigate the complex digital world with confidence - ensuring that when we see something, we can still believe it.

  • FIRST PUBLISHED IN:
  • Devdiscourse