Radiologists Struggle to Spot AI‑Made Fake X‑Rays, Study Finds


A study published on March 24 in the journal Radiology found that both doctors and computer models have a hard time telling real X‑ray images apart from AI‑made fakes. This could have serious consequences for patient care.

A “deepfake” is a picture, video, or sound that looks real but is created or changed by artificial intelligence.

How the Test Was Done

Seventeen radiologists from twelve hospitals in six countries took part. Their experience ranged from beginners to experts with up to 40 years of work. They looked at 264 X‑ray images—half real, half made by AI.

The images came in two groups. One group mixed real scans with X‑rays that ChatGPT created. The second group focused only on chest X‑rays, half real and half made by a tool called RoentGen, which was built by researchers at Stanford Medicine.

What the Doctors Saw

When the doctors did not know that fake images were included, they spotted only 41% of the AI‑generated X‑rays. After being told that fakes were present, their accuracy rose to 75%.

Individual performance varied a lot. Some doctors identified as few as 58% of the fake images, while others caught up to 92%.

How the AI Models Performed

Four large language models that can look at images—GPT‑4o, GPT‑5, Gemini 2.5 Pro, and Llama 4 Maverick—were also tested. Their success rates ranged from 57% to 85%.

Even the same AI that created the fake X‑rays (GPT‑4o) missed many of them, though it did a bit better than the other models.

Experience Doesn’t Always Help

The study found no link between a doctor’s years of practice and the ability to spot fakes. However, radiologists who specialize in bones and muscles (musculoskeletal imaging) did better than those in other areas.

What Makes a Fake Look Different

Researchers noticed some patterns in AI‑made images. The bones often looked too smooth, spines were unusually straight, lungs were overly symmetrical, and blood vessels were too uniform. Fractures, when present, were clean and usually on just one side.

Why This Matters

If fake X‑rays are used in court or placed into hospital systems, they could lead to wrong diagnoses and harm patients.

To protect against these risks, scientists suggest adding invisible watermarks and cryptographic signatures to images when they are captured. These tools can help confirm that a picture is real.
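The signature idea described above can be illustrated with a small sketch. This is a hypothetical example, not the method from the study: it uses Python's standard `hmac` module to bind a secret key (which, in a real system, would live in the capture device's secure storage) to the raw image bytes, so that any later alteration breaks verification. All names here are illustrative.

```python
import hashlib
import hmac

# Hypothetical sketch of signing an image at capture time.
# Assumption: the capture device holds a secret key; real deployments
# would use managed keys or public-key signatures instead.
SECRET_KEY = b"device-secret-key"  # placeholder, not a real key

def sign_image(image_bytes: bytes, key: bytes = SECRET_KEY) -> str:
    """Return a hex HMAC-SHA256 signature over the image contents."""
    return hmac.new(key, image_bytes, hashlib.sha256).hexdigest()

def verify_image(image_bytes: bytes, signature: str, key: bytes = SECRET_KEY) -> bool:
    """Check the image against its signature in constant time."""
    expected = hmac.new(key, image_bytes, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

original = b"\x89PNG...raw scanner output..."  # stand-in for real image bytes
sig = sign_image(original)
print(verify_image(original, sig))           # True: image is untouched
print(verify_image(original + b"x", sig))    # False: any change breaks the check
```

The key point is that the signature is created at the moment of capture, before the image enters any hospital system, so a fake image injected later would carry no valid signature.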

Looking Ahead

Experts say AI may soon be able to create fake 3‑D scans, such as CT and MRI images. Building training sets and detection tools now is crucial.

The researchers have also released a set of fake images with quizzes so doctors can practice spotting them.