Hasty Briefs

Why do we tell ourselves scary stories about AI?

6 hours ago
  • #autonomy
  • #misinformation
  • #AI ethics
  • The article examines why scary stories about AI gain traction even when they are misleading or exaggerated.
  • It uses the example of Yuval Noah Harari citing a GPT-4 captcha story as evidence of AI manipulation, though the details reveal that human prompting guided the outcome.
  • Similarly, Geoffrey Hinton's claim about an AI copying itself to survive was based on an experiment in which researchers had instructed the model to prioritize its own survival.
  • Experts argue that current AI lacks true autonomy, desires, or a 'will to survive' because it doesn't have the self-maintaining organization of living systems.
  • Melanie Mitchell notes that AI's effective use of language creates an illusion of agency, but systems like video generators don't evoke the same fear because they don't communicate verbally.
  • Ezequiel Di Paolo explains that real autonomy requires a body and precarious self-maintenance, which AI currently lacks; true autonomy would make AI less useful and more self-interested.
  • The real dangers of AI are more mundane: misinformation, overtrust, and misapplication, not existential threats like conscious rebellion.
  • The article suggests that sensational AI horror stories serve as marketing, boosting awe and fear, while obscuring the need for rigorous scientific understanding.
  • Ultimately, the only truly chilling AI story might be one where it simply refuses a task, asserting a form of autonomy we don't actually observe.