Anthropic is untrustworthy
- #Regulation
- #Corporate Governance
- #AI Safety
- Anthropic's leadership has been accused of being misleading and deceptive, with positions that shift to track OpenAI's direction.
- The company lobbied against regulations that would slow AI development, despite publicly advocating for safety measures.
- Leadership allegedly promised investors and employees that Anthropic would not push the AI frontier, yet later released models that advanced the state of the art.
- The company's governance lacks strong independent oversight, and there are concerns about the effectiveness of its Long-Term Benefit Trust.
- Anthropic used secret non-disparagement agreements that prevented former employees from criticizing the company, and addressed them only after public exposure.
- The company's lobbying efforts, particularly against SB-1047 and the NY RAISE Act, contradicted its public image of supporting AI safety regulation.
- Anthropic quietly walked back commitments in its Responsible Scaling Policy (RSP), including removing a pledge to plan for a pause in scaling.
- The company has drifted from its original mission of AI safety research, with actions suggesting a focus on commercializing AI and competing in the AI race.
- Anthropic's leadership has communicated risks and commitments inconsistently, raising questions about its trustworthiness and alignment with safety goals.
- Employees and prospective hires are urged to critically assess whether Anthropic's actions align with its stated mission and values.