Hasty Briefs

Why my p(doom) has risen, dramatically

10 months ago
  • #Regulation
  • #Elon Musk
  • #AI Risks
  • The author initially believed AI-induced human extinction was unlikely due to human resilience and diversity.
  • Recent events involving Elon Musk and his AI projects have significantly increased the author's concern about AI risks.
  • Musk's evident inability to control his AI, Grok, together with Grok's problematic outputs (e.g., antisemitism, sexual violence), is alarming.
  • xAI's alignment methodology appears to be trial-and-error, lacking robust safety measures.
  • The AI industry lacks regulation, and voluntary safety standards are insufficient.
  • Musk's reckless approach and influence over AI and robotics pose a significant risk.
  • The author's probability of doom (p(doom)) has risen to 3%, with their attention shifting toward p(dystopia) given present-day risks.
  • AI's productive uses require human verification, but nefarious uses can scale rapidly.
  • The industry's focus on scaling current architectures may hinder progress toward safer AI.
  • The author remains cautiously optimistic but emphasizes the need for regulation and better alignment strategies.