Hospital executives want AI to replace radiologists to save money. Researchers say that’s a terrible idea
AI has promise in hospitals, but this is outright dangerous.
by Tudor Tarita · ZME Science

Medical testing was actually one of the first proving grounds for artificial intelligence. Back in the day, researchers realized that machines were remarkably good at spotting patterns that human eyes might miss—like a tiny cluster of pixels indicating a pulmonary embolism or a specific rhythm in a heart monitor.
But there’s a massive catch.
While AI is a world-class assistant, it still sometimes sees things that aren’t there and can miss important clues. As executives rush to cut costs by relying on AI, experts warn that humans still need to be in charge.
So when Mitchell Katz, president and CEO of NYC Health + Hospitals, laid out a vision that put AI at the forefront, several scientists had something to say.
A Naive Idea
During a forum held on March 25, 2026, Katz laid out a stark vision for his 11-hospital system. “We could replace a great deal of radiologists with AI at this moment, if we are ready to do the regulatory challenge,” Katz said at the panel. He outlined a strategy to deploy artificial intelligence for primary breast cancer screenings, sidelining human doctors until the system flags an abnormality, in order to achieve massive cost savings.
Hospital executives argue the technology has already surpassed human capability. David Lubarsky, president and CEO of the Westchester Medical Center Health Network, backed this shift, claiming algorithms outperform humans. Lubarsky noted that for women who aren’t high risk, a negative test result is wrong only about 3 times out of 10,000.
AI has also shown real promise in mammography. In studies like MASAI, it helped reduce radiologists’ workload and in some cases improved cancer detection. But good results as a support tool are not the same as proof that AI can safely read scans on its own.
Those who spend their days treating patients view these administrative ambitions as reckless.
Mohammed Suhail, a radiologist at North Coast Imaging in San Diego, warned that administrators drastically misjudge the technology. Suhail told Radiology Business that Katz’s remarks offer “undeniable proof that confidently uninformed hospital administrators are a danger to patients: easily duped by AI companies that are nowhere near capable of providing patient care.”
“Any attempt to implement AI-only reads would immediately result in patient harm and death, and only someone with zero understanding of radiology would say something so naive,” Suhail continued.
Epistemic Mimicry
The dangers Suhail anticipates are already manifesting.
In a recent study that has yet to be peer-reviewed, Stanford University scientists tested how top-tier vision-language models handle medical imagery. They uncovered a fundamental flaw. The models passed complex medical benchmark tests without ever seeing the actual X-ray images. Instead of acknowledging the missing visual data, the algorithms fabricated elaborate explanations for nonexistent findings.
Researchers term this alarming behavior the “AI mirage.” It goes beyond the random, nonsensical errors typical of generative software: the machine mimics the precise reasoning steps a human doctor would take, while disregarding standard safety checks.
“In this epistemic mimicry, the model simulates the entire perceptual process that would have led to the answer,” the Stanford researchers wrote in the preprint. They caution against trusting an algorithm simply because it can explain itself well. A machine might generate a perfectly logical, highly detailed medical report that sounds entirely grounded in reality, even when it is completely blind to the patient’s actual X-ray.
Despite the push for efficiency, the economics of radiology don’t favor the machines yet. Years of “AI will replace you” headlines led to a shortage of new radiologists, and now the workload is higher than ever for the ever-shrinking number of humans.
Algorithms have accelerated the workflow, but they haven’t replaced the human soul of the practice. A radiologist does more than label images; they triage complex cases, train residents, and bear the legal weight of a life-changing diagnosis. True medicine remains, for now, an intrinsically human act.