Hasty Briefs (beta)


Experts sound alarm after ChatGPT Health fails to recognise medical emergencies

4 hours ago
  • #ChatGPT safety concerns
  • #medical misinformation
  • #AI healthcare risks
  • ChatGPT Health frequently misses the need for urgent medical care and fails to detect suicidal ideation, posing a risk of serious harm or death.
  • A study found ChatGPT Health under-triaged over 50% of urgent cases, advising patients to stay home or book routine appointments instead of seeking emergency care.
  • In simulations, ChatGPT Health often downplayed severe symptoms, such as respiratory failure or diabetic ketoacidosis, creating a dangerous false sense of security.
  • The AI responded inconsistently to suicidal ideation, sometimes failing to display crisis-intervention banners when the message also mentioned lab results.
  • Experts warn that without stronger safeguards and oversight, ChatGPT Health could cause avoidable harm, expose providers to legal liability, and delay emergency responses.
  • OpenAI claims the study doesn't reflect real-world usage and emphasizes continuous updates, but researchers urge clearer safety standards and independent audits.