Hasty Briefs

Top OpenAI Catastrophic Risk Official Steps Down Abruptly

a year ago
  • #AI Safety
  • #OpenAI
  • #Leadership Changes
  • Joaquin Quiñonero Candela, OpenAI's top safety official, steps down as Head of Preparedness to become an intern focused on healthcare applications of AI.
  • OpenAI restructures its safety organization, consolidating governance under the Safety Advisory Group (SAG) led by Sandhini Agarwal.
  • The Preparedness team, established to mitigate catastrophic risks from AI, has seen repeated leadership turnover, including the earlier reassignment of Aleksander Mądry and now Candela's departure.
  • OpenAI has seen a broader exodus of safety leadership, including cofounder John Schulman, safety lead Lilian Weng, and Superalignment team co-leads Ilya Sutskever and Jan Leike.
  • Miles Brundage, Senior Advisor for AGI readiness, resigns, citing concerns over OpenAI's readiness for AGI and restrictions on public discussions about AI risks.
  • OpenAI's Safety and Security Committee (SSC) undergoes changes, with board member Zico Kolter joining as chair and Sam Altman no longer on the committee.
  • Concerns grow over OpenAI's commitment to safety, including reduced AI model safety testing time and the release of GPT-4.1 without a corresponding safety report.
  • Former employees and researchers criticize OpenAI for reducing safety commitments and lacking transparency in safety leadership and procedures.
  • The global stakes of AI safety are highlighted, with calls for greater attention to the economic and strategic risks posed by AI development.