Hasty Briefs


Anthropic ditches its core safety promise

9 hours ago
  • #Anthropic
  • #AI Safety
  • #Pentagon
  • Anthropic is loosening its core safety principle in response to competition, replacing self-imposed guardrails with a nonbinding safety framework.
  • The company's previous Responsible Scaling Policy was seen as hindering its ability to compete in the rapidly growing AI market.
  • Anthropic says the policy change is unrelated to its discussions with the Pentagon over AI safeguards and a $200 million contract.
  • The new policy removes the stipulation to pause training more powerful models if safety controls are outpaced by capabilities.
  • Anthropic will now separate its own safety plans from its recommendations for the AI industry.
  • The company acknowledges that its earlier 'race to the top' safety approach failed to gain industry-wide adoption.
  • Anthropic's new 'Frontier Safety Roadmap' includes flexible, publicly graded goals rather than hard commitments.
  • The company maintains concerns over AI-controlled weapons and mass domestic surveillance, refusing to compromise on these issues.
  • Anthropic faces pressure from the Pentagon and competition from rivals like OpenAI in the enterprise AI tools market.
  • Anthropic's chief science officer argues that, given how rapidly AI is advancing, the policy change prioritizes safety over competition.