Hasty Briefs

Doctors horrified after Google's healthcare AI makes up body part

15 days ago
  • #Google Med-Gemini
  • #Medical Errors
  • #AI in Healthcare
  • Health practitioners are concerned about the widespread use of error-prone generative AI tools in medicine.
  • AI 'hallucinations', fabricated information presented confidently as fact, are a significant issue; one Med-Gemini error went unnoticed for over a year.
  • Google's Med-Gemini AI described a non-existent brain structure, the 'basilar ganglia,' in a research paper, apparently conflating the basal ganglia with the basilar artery.
  • The error was flagged by a neurologist, but Google only fixed its blog post, not the research paper.
  • AI's falsehoods could have devastating consequences in healthcare settings, despite Google's claims of 'substantial potential in medicine.'
  • Experts warn that AI's tendency to make up information without admitting uncertainty is dangerous in high-stakes domains like medicine.
  • Google continues to push AI in healthcare, including error-prone features like AI Overviews for health advice.
  • Human oversight of AI outputs is crucial, but verifying every output erodes the promised efficiency gains, while unverified outputs carry serious risk.
  • Some experts argue AI should have a much higher accuracy bar than humans in healthcare.