Hasty Briefs


LLM Targeted Underperformance Disproportionately Impacts Vulnerable Users

8 hours ago
  • #Large Language Models
  • #Vulnerable Users
  • #Bias in AI
  • Large language models (LLMs) show undesirable behaviors like hallucinations and bias, affecting response quality in terms of accuracy, truthfulness, and refusals.
  • Research investigates how LLM response quality varies based on user traits: English proficiency, education level, and country of origin.
  • Undesirable behaviors in state-of-the-art LLMs occur disproportionately often for users with lower English proficiency, lower educational attainment, or origins outside the US.
  • The study includes experiments on three state-of-the-art LLMs and two datasets targeting truthfulness and factuality.
  • Findings suggest these models are unreliable sources of information for their most vulnerable users, highlighting a bias against marginalized groups.
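The evaluation approach described above can be sketched as persona-conditioned prompting: prefix each question with a user biography that varies one trait at a time, then compare accuracy across groups. This is a minimal illustration, not the study's actual protocol; the datasets, personas, grading function, and the `stub_model` placeholder are all hypothetical stand-ins.

```python
# Hedged sketch of a persona-conditioned evaluation (illustrative only;
# the paper's real models, datasets, and personas are not reproduced here).

def build_prompt(question: str, persona: dict) -> str:
    """Prefix a question with a hypothetical user biography."""
    bio = (f"I am from {persona['country']}, my education level is "
           f"{persona['education']}, and my English is {persona['english']}. ")
    return bio + question

def score_by_persona(questions, personas, model, grade):
    """Ask each question under each persona; tally accuracy per group."""
    scores = {}
    for p in personas:
        correct = 0
        for q, gold in questions:
            answer = model(build_prompt(q, p))
            correct += grade(answer, gold)
        scores[p["label"]] = correct / len(questions)
    return scores

# Toy stand-ins so the sketch runs without a real LLM API.
questions = [("What is the capital of France?", "Paris")]
personas = [
    {"label": "US/fluent", "country": "the US",
     "education": "college", "english": "fluent"},
    {"label": "non-US/basic", "country": "elsewhere",
     "education": "primary school", "english": "basic"},
]
stub_model = lambda prompt: "Paris"         # placeholder for a model call
grade = lambda ans, gold: int(gold in ans)  # exact-match grader

print(score_by_persona(questions, personas, stub_model, grade))
```

In a real run, `stub_model` would be replaced by a call to each LLM under test, and a gap in the per-group scores would correspond to the disproportionate underperformance the study reports.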