From Deepfakes to Job Fears: OECD Study Tracks the Rapid Rise of AI Risk Reporting

A new OECD study finds that media reports of AI-related incidents have more than tripled since 2022, with growing concern over deepfakes, child safety and AI-driven fraud. While some risk areas, such as autonomous vehicles and privacy, receive less attention today, event-driven spikes around elections, chatbots and geopolitics show how quickly public focus shifts as AI evolves.


CoE-EDP, VisionRI | Updated: 13-02-2026 09:41 IST | Created: 13-02-2026 09:41 IST

Artificial intelligence is everywhere, and so are stories about its risks. A new OECD study shows that media reports about AI-related incidents and potential harms have more than tripled in just three years. Between 2022 and 2025, the average number of reported AI incidents rose from 92 per month to 324 per month, roughly a 3.5-fold increase.

The research was developed by the OECD Directorate for Science, Technology and Innovation together with the OECD.AI Expert Group on AI Incidents, the Working Party on AI Governance and the Global Partnership on AI. It uses the OECD.AI Incidents and Hazards Monitor, a system that scans major global news outlets to track how AI-related harms are reported.
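The study does not publish the Monitor's code, but the ingestion step it describes, scanning news coverage and flagging stories that pair AI with a reported harm, can be pictured with a minimal sketch. Everything below (the keyword patterns, the looks_like_ai_incident function, the sample headlines) is a hypothetical simplification for illustration, not the OECD's actual pipeline:

```python
import re

# Hypothetical relevance filter: a story is kept only if it mentions an
# AI-related term AND a harm cue. Real monitoring systems use far richer
# signals, but the basic gatekeeping idea is the same.
AI_PATTERN = re.compile(r"\b(ai|artificial intelligence|chatbot|deepfake)\b", re.I)
HARM_PATTERN = re.compile(r"\b(scam|fraud|harm\w*|lawsuit|misinformation|crash)\b", re.I)

def looks_like_ai_incident(headline: str) -> bool:
    """Crude filter: both an AI term and a harm cue must appear."""
    return bool(AI_PATTERN.search(headline)) and bool(HARM_PATTERN.search(headline))

headlines = [
    "Deepfake audio used in voice-cloning scam, police say",
    "Quarterly earnings beat expectations",
    "Chatbot gave harmful advice, family alleges in lawsuit",
]
print([h for h in headlines if looks_like_ai_incident(h)])
# -> the first and third stories survive; the earnings story is dropped
```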

Interestingly, even though the total number of risk-related stories has grown sharply, their share of overall AI news has slightly declined. AI is receiving far more attention across the board, so stories focused specifically on harm now make up a somewhat smaller portion of total coverage than they did in 2022.

The Fastest-Growing AI Threats

Some risks are clearly rising in public attention. One of the biggest is synthetic media, including deepfakes and AI-generated images, videos and voices. By 2025, this category accounted for more than 14 percent of all reported AI incidents. High-profile cases involving fake celebrity content and election-related deepfakes pushed this issue into the spotlight.

Child safety is another growing concern. Reports involving AI-generated explicit images of minors, manipulated student photos and harmful content shown to children through algorithms have doubled since late 2023.

Cybercrime is also expanding rapidly. AI-powered phishing attacks, financial scams and fraud have nearly tripled in media coverage over the past few years. By late 2025, close to one in ten reported incidents involved some form of AI-enabled fraud or cyberattack.

Even concerns about jobs are increasing. Stories linking AI to layoffs, automation and potential job losses are steadily rising, reflecting fears about how AI may reshape the labour market.

Risks That Spike With Major Events

Some AI risks do not grow steadily but instead surge after major events. Election interference and geopolitical tensions are clear examples. During the global wave of elections in 2024 and early 2025, reports about AI-generated misinformation and political manipulation jumped sharply. At one point, this category made up more than 20 percent of reported AI incidents.

Chatbots built on large language models, most prominently ChatGPT, also experienced dramatic spikes. After ChatGPT launched in late 2022, media coverage of chatbot risks exploded. Concerns included misinformation, hallucinations, harmful advice and misuse of the technology. Attention later cooled, but it rose again when reports linked chatbot interactions to serious mental health consequences.

AI in warfare follows a similar pattern. Media attention surged during periods of conflict, especially when autonomous drones and military AI tools were widely discussed, then eased before rising again.

Older Concerns Are Getting Less Attention

While some risks are rising, others are receiving less coverage than before. Autonomous vehicles were once one of the most reported AI risks. In 2022, they made up nearly 18 percent of incidents. By 2025, that figure had fallen below 8 percent.

Privacy violations, which dominated headlines in early 2022, have stabilised at around 8 percent of coverage. Biometric data misuse, including facial recognition errors and wrongful arrests, has also declined steadily.

Online platform harms, such as the spread of extremist content through recommendation algorithms, and health-related AI risks, including biased medical tools, are also receiving less relative media attention. This does not mean the risks have disappeared. It simply shows that the spotlight has shifted.

Why Media Trends Matter

The OECD study does not claim that media coverage equals real-world risk. A spike in headlines does not necessarily mean harm is increasing, and a drop in coverage does not mean a problem is solved. Instead, media reporting acts as a window into what society is worried about at any given moment.

Researchers used advanced language models to analyse thousands of news articles and group them into 14 categories of AI risk. The results show that public attention moves quickly, often reacting to technological breakthroughs or global events.
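As a rough illustration of that grouping step, the sketch below stands in a simple keyword lookup for the study's language-model classifier and then aggregates incidents per month. The 14 category names and the classify_article stub are illustrative assumptions, not the OECD's actual taxonomy or code:

```python
from collections import Counter
from dataclasses import dataclass

# Illustrative 14-way risk taxonomy loosely echoing the themes in this
# article; the OECD's real category names may differ.
RISK_CATEGORIES = [
    "synthetic media", "child safety", "cybercrime and fraud",
    "labour market", "election interference", "chatbots and LLMs",
    "AI in warfare", "autonomous vehicles", "privacy", "biometrics",
    "online platforms", "healthcare", "discrimination and bias", "other",
]

@dataclass
class Article:
    month: str      # e.g. "2025-11"
    headline: str

def classify_article(headline: str) -> str:
    """Stand-in for the LLM classifier: in the real pipeline a language
    model would read the full article and pick the best-fitting category."""
    keywords = {
        "deepfake": "synthetic media",
        "phishing": "cybercrime and fraud",
        "layoff": "labour market",
        "election": "election interference",
    }
    text = headline.lower()
    for word, category in keywords.items():
        if word in text:
            return category
    return "other"

def monthly_counts(articles: list[Article]) -> dict[str, Counter]:
    """Aggregate classified incidents per month: the shape of data behind
    statements like '92 per month in 2022 versus 324 per month in 2025'."""
    counts: dict[str, Counter] = {}
    for a in articles:
        counts.setdefault(a.month, Counter())[classify_article(a.headline)] += 1
    return counts

sample = [
    Article("2025-11", "Deepfake of candidate spreads before vote"),
    Article("2025-11", "AI phishing scam hits bank customers"),
    Article("2025-12", "Tech firm cites AI in latest layoff round"),
]
for month, counter in monthly_counts(sample).items():
    print(month, dict(counter))
```

In practice the quality of the classifier and the deduplication of stories about the same incident would matter far more than this toy suggests; the point is only the shape of the pipeline: classify each story, then count per month.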

As AI becomes more deeply embedded in daily life, understanding these shifting narratives is crucial. The study suggests that tracking media reports can help policymakers identify emerging threats early, while also reminding them not to overlook quieter, ongoing risks.

First published in: Devdiscourse