OpenAI is going all in on healthcare AI.
The company added two new leaders to its burgeoning healthcare AI team, Business Insider found, and is hiring for more researchers and engineers.
Nate Gross, co-founder and former chief strategy officer of healthcare business networking tool Doximity, joined OpenAI in June and, according to Business Insider, will lead the company’s go-to-market strategy in healthcare. One of the team’s early goals will reportedly be to co-create new healthcare tech with clinicians and researchers.
OpenAI also hired Ashley Alexander, former co-head of product at Instagram, BI reported, who joined the company on Tuesday as vice president of product in the health business. Alexander’s team, a spokesperson told BI, will build tech for both individual consumers and clinicians.
The new hires come as OpenAI increases its bet on the healthcare industry.
“Improving human health will be one of the defining impacts of AGI [artificial general intelligence],” the company said in a May press release announcing HealthBench, its new benchmark for evaluating AI systems’ capabilities on health tasks.
Meanwhile, AI models specialized for healthcare professionals are becoming more deeply embedded in the industry, and people are increasingly turning to ChatGPT to make sense of their symptoms.
But, like pretty much everything else with AI, the technology’s increased adoption in healthcare does not come without concerns.
OpenAI’s bet
OpenAI is far from the first company betting on healthcare AI; it lags behind Palantir, Google, and Microsoft, which have been making strides in the area for several years. Nor is the company’s push into healthcare AI entirely new, but it has noticeably accelerated in the past few months.
OpenAI announced a partnership last month with Kenya-based primary care provider Penda Health for a study of AI Consult, an LLM-powered clinician copilot that generates recommendations during patient visits.
Also last month, OpenAI CEO Sam Altman attended the White House’s “Make Health Tech Great Again” event, where President Trump announced a private-sector initiative to let Americans share their medical records across apps and programs via “secured commitments” from 60 companies, including OpenAI. The program will use conversational AI assistants for patient care.
Roughly a week later, while announcing GPT-5, OpenAI drew particular attention to the model’s healthcare-related capabilities.
“GPT-5 is our best model yet for health-related questions,” the company wrote in a press release. “Importantly, ChatGPT does not replace a medical professional—think of it as a partner to help you understand results, ask the right questions in the time you have with providers, and weigh options as you make decisions.”
The company said the new model can “proactively flag” potential health concerns and adapt its answers to the user’s “context, knowledge level, and geography.” In an example in the press release, GPT-5 created a six-week rehab plan for a high school pitcher with a mild UCL (ulnar collateral ligament) strain.
Meanwhile, OpenAI’s new CEO of applications, Fidji Simo, said she is “most excited for the breakthroughs that AI will generate in healthcare” in the July 21 press release announcing her new role.
Simo said her belief in AI’s potential in this field comes from her own experiences with the healthcare system after facing “a complex and poorly understood chronic illness.”
Healthcare, especially in the United States, can indeed be a confusing system for patients to navigate, and OpenAI is betting that AI can help fix that.
“AI can explain lab results, decode medical jargon, offer second opinions, and help patients understand their options in plain language. It won’t replace doctors, but it can finally level the playing field for patients, putting them in the driver seat of their own care,” Simo wrote in the release.
Healthcare AI: the future or a problem?
Can AI actually revolutionize healthcare? There is good news and bad news.
A Stanford study from last year showed that ChatGPT on its own performed very well at medical diagnosis, even better than physicians did. Based on these preliminary results, healthcare-specific AI could prove to be a powerful diagnostic aid for healthcare providers.
Some healthcare providers have already begun deploying specialized AI in patient care and diagnosis. Open Evidence, a healthcare AI startup that offers a popular AI copilot trained on medical research, claimed earlier this year that its chatbot is already being used by a quarter of doctors in the U.S.
But as adoption mounts, so do the concerns.
Some experts say the early tests of AI in healthcare are not reassuring, with physicians in some evaluations disagreeing outright with ChatGPT’s medical suggestions.
Although the failure rate of AI can be overlooked in some fields, mistakes in healthcare can be fatal.
“Twenty percent problematic responses is not, to me, good enough for actual daily use in the health care system,” Stanford medical and data science professor Roxana Daneshjou told the Washington Post last year when asked about ChatGPT.
Case in point: a man with no prior medical history ended up in the ER with psychosis induced by bromide poisoning after ChatGPT erroneously advised him to take bromide supplements to reduce his table salt intake.
One factor that makes faulty AI reasoning in healthcare decisions especially dangerous is our own automation bias: when using AI, people tend to trust the model’s recommendations over their own judgment, no matter how well informed they are about the topic.
This bias is made even more dangerous by the fact that AI is inherently a black box: we have no insight into why or how it reaches its conclusions, making it harder to understand where the reasoning went wrong and whether the model should be trusted.
So while AI does hold potential to help, or maybe even revolutionize, the healthcare system, there is still much to address before that can happen safely.