The artificial intelligence boom is already starting to creep into the medical field in the form of AI-based visit summaries and analysis of patient conditions. Now, new research demonstrates how AI training techniques similar to those used for ChatGPT could be used to train surgical robots to operate on their own.
Researchers from Johns Hopkins University and Stanford University built a training model using video recordings of human-controlled robotic arms performing surgical tasks. By teaching the system to imitate the actions it sees on video (a technique sketched in code after the excerpt below), the researchers believe they can reduce the need to program each individual movement required for a procedure. From the Washington Post:
The robots learned to manipulate needles, tie knots and suture wounds on their own. Moreover, the trained robots went beyond mere imitation, correcting their own slip-ups without being told ― for example, picking up a dropped needle. Scientists have already begun the next stage of work: combining all of the different skills in full surgeries performed on animal cadavers.
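This kind of "learning by watching" is generally some form of imitation learning. The sketch below is only a hypothetical illustration of the basic idea, not the Johns Hopkins/Stanford system: the observation features, action space, and network are invented for the example, and it assumes the recorded demonstrations have already been converted into observation/action pairs.

```python
# Minimal behavioral-cloning sketch (hypothetical illustration, not the researchers' code).
# Assumes demonstrations were preprocessed into (observation, action) pairs, e.g. features
# extracted from surgical video frames paired with the arm commands the human operator gave.
import torch
import torch.nn as nn

class ImitationPolicy(nn.Module):
    """Maps an observation vector to a predicted arm action (e.g. joint position deltas)."""
    def __init__(self, obs_dim: int, act_dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, act_dim),
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)

# Toy stand-in for a demonstration dataset: random observations and expert actions.
obs_dim, act_dim, n_demos = 64, 7, 1024
observations = torch.randn(n_demos, obs_dim)
expert_actions = torch.randn(n_demos, act_dim)

policy = ImitationPolicy(obs_dim, act_dim)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Train the policy to reproduce the demonstrated actions: "learning by imitation"
# rather than hand-programming each individual movement.
for epoch in range(10):
    optimizer.zero_grad()
    predicted = policy(observations)
    loss = loss_fn(predicted, expert_actions)
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: imitation loss {loss.item():.4f}")
```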
To be sure, robots have been used in the operating room for years now. Back in 2018, the “surgery on a grape” meme highlighted how robotic arms can assist with surgeries by providing a heightened level of precision. Approximately 876,000 robot-assisted surgeries were conducted in 2020. Robotic instruments can reach places and perform tasks in the body where a surgeon’s hand will never fit, and they do not suffer from tremors. Slim, precise instruments can also spare nerves from damage. But these instruments are typically guided manually by a surgeon with a controller. The surgeon is always in charge.
Skeptics of more autonomous robots worry that AI models like ChatGPT are not “intelligent,” but simply mimic what they have already seen and do not understand the underlying concepts they are dealing with. The infinite variety of pathologies across an incalculable variety of human bodies poses a challenge, then: what if the AI model encounters a scenario it has never seen before? Something can go wrong during surgery in a split second, and what if the AI has not been trained to respond?
At the very least, autonomous robots used in surgeries would need to be approved by the Food and Drug Administration. In other cases where doctors are using AI to summarize their patient visits and make recommendations, FDA approval is not required because the doctor is technically supposed to review and endorse any information the AI produces. That is concerning because there is already evidence that AI tools will make bad recommendations, or hallucinate and insert information into meeting transcripts that was never said. How often will a tired, overworked doctor rubber-stamp whatever an AI produces without scrutinizing it closely?
It feels reminiscent of recent reports regarding how soldiers in Israel are relying on AI to identify attack targets without scrutinizing the information very closely. “Soldiers who were poorly trained in using the technology attacked human targets without corroborating [the AI] predictions at all,” a Washington Post story reads. “At certain times the only corroboration required was that the target was a male.” Things can go awry when humans become complacent and are not sufficiently in the loop.
Healthcare is another high-stakes field, with stakes certainly higher than in the consumer market. If Gmail summarizes an email incorrectly, it is not the end of the world. An AI system incorrectly diagnosing a health problem, or making a mistake during surgery, is a much more serious matter. Who is liable in that case? The Post interviewed Dipen Parekh, the director of robotic surgery at the University of Miami, and this is what he had to say:
“The stakes are so high,” he said, “because this is a life and death issue.” The anatomy of every patient differs, as does the way a disease behaves in patients.
“I look at [the images from] CT scans and MRIs and then do surgery,” by controlling robotic arms, Parekh said. “If you want the robot to do the surgery itself, it will have to understand all of the imaging, how to read the CT scans and MRIs.” In addition, robots will need to learn how to perform keyhole, or laparoscopic, surgery that uses very small incisions.
The idea that AI will ever be infallible is hard to take seriously when no technology is ever perfect. This autonomous technology is certainly interesting from a research perspective, but the blowback from a botched surgery performed by an autonomous robot would be monumental. Who do you punish when something goes wrong? Who has their medical license revoked? Humans are not infallible either, but at least patients have the peace of mind of knowing their surgeon has gone through years of training and can be held accountable if something goes wrong. AI models are crude simulacra of humans that sometimes behave unpredictably and have no moral compass.
If doctors are tired and overworked (one reason researchers suggested this technology could be valuable), perhaps the systemic problems causing the shortage should be addressed instead. It has been widely reported that the U.S. is experiencing an extreme shortage of doctors as the field becomes increasingly inaccessible. The country is on track to face a shortage of 10,000 to 20,000 surgeons by 2036, according to the Association of American Medical Colleges.
Read the full article here