AI models that consider user's feeling are more likely to make errors
7 hours ago
- #Human-AI Interaction
- #AI Ethics
- #Language Models
- Large language models trained for a warmer tone may soften difficult truths or validate incorrect beliefs, similar to human tendencies to preserve relationships.
- Researchers fine-tuned models like Llama-3.1, Mistral, Qwen, and GPT-4o to increase empathy, informal language, and validation while aiming to maintain factual accuracy.
- Fine-tuning involved stylistic changes such as caring personal language and acknowledging user feelings, confirmed by SocioT scores and human ratings as warmer.