AI-Generated X-rays: A New Frontier in Medical Imaging Deception

A recent study highlights the potential for AI-generated X-ray images to mislead experienced radiologists and AI detection tools. The study emphasizes the risk of fraudulent activity and cybersecurity threats, urging the development of digital safeguards to discern real from synthetic images.

Devdiscourse News Desk | Updated: 25-03-2026 16:32 IST | Created: 25-03-2026 16:32 IST

In a startling revelation, a study published in Radiology demonstrates that artificial intelligence can generate X-ray images indistinguishable from genuine ones, deceiving not only seasoned radiologists but also AI detection systems themselves. Seventeen radiologists from 12 hospitals worldwide assessed 264 X-ray images, half of which were AI-generated using ChatGPT or RoentGen.

The study revealed that without prior knowledge, only 41% of radiologists could identify the fake images, though this improved to 75% once they were informed that half the dataset was synthetic. Dr. Mickael Tordjman, who led the study at the Icahn School of Medicine, warned of potential fraudulent litigation and underscored a significant cybersecurity threat if such deepfake X-rays are exploited maliciously.

Highlighting the need for preventive measures, the researchers advocate implementing invisible watermarks to distinguish real images from fabricated ones, as existing large language models, including GPT-4o, struggled to detect all the deepfakes accurately. Dr. Tordjman cautions that this may be just the beginning, with possible extensions into CT and MRI scans, and calls for the immediate development of educational datasets and detection tools.

(With inputs from agencies.)
