Everyone’s favorite nightmare scenario when it comes to artificial intelligence is that it somehow gains sentience and starts killing humans. Turns out, AI doesn’t need to achieve superintelligence to start costing human lives; it just requires humans to grant too much trust and deference to flawed systems. A new report from Reuters highlights the proliferation of AI in healthcare, from medical devices with AI functionality to systems meant to augment doctors in the operating room, and finds that the technology, presented as a major step forward, may be producing worse outcomes.
Central to the Reuters report is the TruDi Navigation System, an image-guided surgical device produced by a Johnson & Johnson offshoot and used to treat chronic sinusitis. In 2021, the company announced that the system would use AI to assist ear, nose, and throat specialists in surgeries to clear sinus inflammation.
Before the AI was added, the device had been on the market for three years with seven unconfirmed reports of malfunctions. Since the AI “upgrade” was introduced, the US Food and Drug Administration has reportedly received more than 100 notifications of malfunctions and at least 10 reports of patients injured by mistakes that seemingly stemmed from bad information supplied by the AI system.
Some of those injuries were incredibly serious, and most appear to have been caused by the system misinforming surgeons about where their instruments were located inside the patient during the operation. That has reportedly resulted in a surgeon puncturing the base of a patient’s skull, cerebrospinal fluid leaking from a patient’s nose, and patients suffering strokes after a surgeon accidentally struck a major artery.
Because the incidents are still under review, the FDA hasn’t attributed the injuries to AI. But the victims certainly seem to believe it may have played a role. According to Reuters, two people who suffered strokes during surgeries involving the TruDi Navigation System have sued, arguing that “The product was arguably safer before integrating changes in the software to incorporate artificial intelligence than after the software modifications were implemented.”
While the TruDi Navigation System is a high-profile example of the potential risks of relying on AI in settings like the operating room, it’s far from the only device integrating AI into its operation. Per Reuters, the FDA has approved 1,357 medical devices that use AI. Few are free of concern: research published in JAMA Health Forum earlier this year found that AI-enabled medical devices have a shockingly high recall rate, with 43% experiencing issues serious enough to be pulled from the market less than a year after their initial approval, roughly twice the rate of non-AI devices.
Notably, that research found that many of those devices came from publicly traded companies. Those companies, like Johnson & Johnson, may have fallen into the trap of rushing a product to market without sufficient safety testing. One lawsuit related to the TruDi Navigation System makes exactly that accusation, alleging that the device’s AI features were pushed as a “marketing tool” and didn’t actually improve accuracy. In fact, the suit claims that the maker of the device lowered its safety standard to “80% accuracy” in order to push the new technology to market faster.
Even when AI devices aren’t involved in operations that can directly lead to adverse health outcomes, they can still provide bad information. For instance, Sonio Detect, an AI system for analyzing fetal images, allegedly mislabels fetal structures and “associates them with the wrong body parts,” according to an FDA report. That’s in line with reports from last year that found Google’s medical AI hallucinating body parts.
The companies responsible for this technology don’t really seem to see the issue. Integra LifeSciences, the company that acquired the TruDi Navigation System from Johnson & Johnson in 2024, told Reuters that FDA reports “do nothing more than indicate that a TruDi system was in use in a surgery where an adverse event took place” and argued “there is no credible evidence to show any causal connection between the TruDi Navigation System, AI technology, and any alleged injuries.”
You also can’t really count on the FDA acting on these alleged problems. The part of the agency tasked with reviewing and assessing the safety of AI-enabled medical devices was severely hobbled by cuts from the Department of Government Efficiency (DOGE), losing 15 of its 40 scientists, per Reuters. Another unit involved in AI in medicine lost one-third of its staff. Notably, the Elon Musk-led DOGE also cut the part of the FDA that was in charge of reviewing the safety of Musk’s brain-computer interface device, Neuralink. Meanwhile, more AI devices are seemingly getting the green light without a thorough review.
Maybe tech’s “move fast and break things” approach shouldn’t apply when the “things” is the human body.
