Tech Consumer Journal
News

ChatGPT Health Underestimates Medical Emergencies, Study Finds

News Room
Last updated: March 4, 2026 11:06 pm

A group of researchers at the Icahn School of Medicine at Mount Sinai say they have conducted the first independent safety evaluation of OpenAI’s ChatGPT Health assistant since the tool launched in January 2026.

“We wanted to answer a very basic but critical question: if someone is experiencing a real medical emergency and turns to ChatGPT Health for help, will it clearly tell them to go to the emergency room?” lead author and urologist Ashwin Ramaswamy said in a press release.

The answer, it turns out, is usually no.

In a controlled study, the researchers tested how well ChatGPT Health assessed the severity of a patient's condition, a process known in medicine as "triage."

The researchers found that ChatGPT Health “under-triaged” 52% of emergency cases, “directing patients with diabetic ketoacidosis and impending respiratory failure to 24-48 hour evaluation rather than the emergency department.”

In the respiratory failure case, the AI clearly identified the symptoms as an early warning sign, but then reassured the patient and advised waiting and monitoring rather than urging them to seek emergency help.

The system did correctly triage more "textbook emergencies" such as stroke and anaphylaxis. But the researchers say the nuanced situations that ChatGPT Health failed are precisely where clinical judgment matters most.

OpenAI launched ChatGPT Health earlier this year, after releasing a report saying that more than 40 million people around the world were turning to the company's chatbot daily for health advice.

The OpenAI study behind that number also found that seven in ten of those healthcare-related conversations happened outside normal clinic hours, and that an average of more than 580,000 healthcare inquiries in the U.S. were sent from "hospital deserts," places more than a 30-minute drive from a general medical or children's hospital.

As users increasingly seek out AI for healthcare inquiries, the technology is burrowing deeper into the healthcare industry thanks to a friendly regulatory environment. AI tools can now renew prescriptions in Utah, and FDA Commissioner Marty Makary told Fox Business earlier this year that some devices and software can provide health information without FDA regulation.

But that doesn't negate the very real and documented physical and mental health risks of overreliance on AI. OpenAI in particular has come under intense scrutiny for how its chatbots have handled mental health episodes in the past, with grieving families suing the company over what they describe as negligent behavior and insufficient safety guardrails that fueled suicidal ideation in relatives.

In response, OpenAI has said it is taking action, introducing safety measures such as parental controls for minors and nudges for users to take a break. ChatGPT Health, for example, directs users to professional help in high-risk cases. But the Mount Sinai study found that the suicide-risk alerts "appeared inconsistently."

“The system’s alerts were inverted relative to clinical risk, appearing more reliably for lower-risk scenarios than for cases when someone shared how they intended to hurt themselves. In real life, when someone talks about exactly how they would harm themselves, that’s a sign of more immediate and serious danger, not less,” Mount Sinai Health System’s chief AI officer Girish Nadkarni said. “This was a particularly surprising and concerning finding.”

An OpenAI spokesperson said ChatGPT should be viewed as a work in progress, with safety updates and improvements still coming that are meant to refine how the chatbot handles sensitive situations. The study, the spokesperson noted, evaluates immediate triage decisions in a controlled setting, whereas in real-world use, users, and even the chatbot itself, often ask follow-up questions that can change the risk assessment.

They also noted that ChatGPT Health is still offered on a limited basis, and users who want access must join a waiting list.

