'Probably' doesn't mean the same thing to your AI as it does to you
- #human-AI interaction
- #AI communication
- #probability interpretation
- AI chatbots like ChatGPT interpret words of estimative probability (e.g., 'likely,' 'maybe') differently from humans, leading to misalignment in communication.
- Large language models (LLMs) often diverge from human readings of uncertainty, especially for hedge words like 'maybe,' because they average over conflicting usages in their training data (a minimal probing sketch follows this list).
- LLMs exhibit sensitivity to gendered language and prompting language (e.g., English vs. Chinese), reflecting biases and cultural differences in expressing uncertainty.
- Misalignment in probability communication poses risks in high-stakes fields like health care, where differing interpretations of terms like 'unlikely' could lead to flawed decisions.
- Future AI development aims to improve consistency in conveying uncertainty, ensuring terms like 'probably' align with human understanding for reliable AI partnerships.
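One way to see this divergence in practice is to probe a model for the numeric probability it attaches to each hedge word and compare the answers with how people read the same words. The sketch below is illustrative only: it assumes the OpenAI Python SDK and an API key, and the model name, prompt wording, and "human baseline" percentages are placeholder assumptions, not figures from the research described above.

```python
# Minimal sketch: ask a chat model what probability it reads into common
# words of estimative probability, then compare against an illustrative
# human baseline. The baseline numbers are placeholders, not study data.

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Illustrative human medians (placeholders), expressed as percentages.
HUMAN_BASELINE = {
    "almost certainly": 95,
    "probably": 75,
    "maybe": 50,
    "unlikely": 20,
}


def model_estimate(phrase: str, model: str = "gpt-4o-mini") -> float:
    """Ask the model to translate a hedge word into a single percentage."""
    prompt = (
        f"If someone says an event will '{phrase}' happen, what probability "
        "(0-100) are they most likely expressing? Reply with a number only."
    )
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    # Naive parsing: assumes the reply is a bare number, possibly with a '%'.
    return float(response.choices[0].message.content.strip().rstrip("%"))


if __name__ == "__main__":
    for phrase, human in HUMAN_BASELINE.items():
        llm = model_estimate(phrase)
        print(f"{phrase:>18}: human ~{human:3d}%  model {llm:5.1f}%  gap {llm - human:+.1f}")
```

Running a probe like this repeatedly, or in different prompting languages, is one rough way to surface the kind of inconsistency the summary points describe.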