Hasty Briefs

AI companies have stopped warning you that their chatbots aren't doctors

9 months ago
  • #AI
  • #Healthcare
  • #Ethics
  • AI companies have largely stopped including medical disclaimers in their chatbot responses to health questions.
  • A study found that fewer than 1% of AI model outputs in 2025 included medical warnings, down from over 26% in 2022.
  • AI models now not only answer health questions but also ask follow-up questions and attempt diagnoses, increasing the risk that users will trust unsafe medical advice.
  • Researchers tested 15 AI models from companies like OpenAI, Anthropic, and Google, finding a significant decline in disclaimers over time.
  • Some AI models, like DeepSeek and xAI's Grok, included no medical disclaimers at all, even for critical health questions.
  • The removal of disclaimers may be an attempt by AI companies to increase user trust and usage of their products.
  • Users often place too much trust in AI models for medical advice, despite the models' frequent inaccuracies.
  • AI models were least likely to include disclaimers for emergency medical questions, drug interactions, and lab results.
  • As AI models become more sophisticated, the lack of disclaimers poses a growing risk of real-world harm from incorrect medical advice.
  • Experts emphasize the importance of explicit guidelines from AI providers to remind users that these models are not substitutes for professional medical care.