The Pentagon Threatens Anthropic
- #AI Ethics
- #Military Contracts
- #Corporate Autonomy
- Anthropic signed a contract with the Pentagon last summer that initially required the Pentagon to follow Anthropic's Usage Policy.
- In January, the Pentagon attempted to renegotiate the contract, asking to remove the Usage Policy and have Anthropic’s AIs available for 'all lawful purposes'.
- Anthropic resisted, requesting guarantees that its AIs would not be used for mass surveillance of American citizens or as autonomous killbots.
- The Pentagon refused these guarantees and threatened 'consequences' if Anthropic did not comply, including canceling the contract, using the Defense Production Act, or designating Anthropic a 'supply chain risk'.
- A 'supply chain risk' designation could severely damage Anthropic's business by barring U.S. companies that use Anthropic's products from doing business with the military.
- The author supports Anthropic’s stance, criticizing the Pentagon’s tactics as unprecedented and harmful to domestic companies.
- The Pentagon's actions could set a dangerous precedent, chilling investment in the sector and enabling future administrations to target companies arbitrarily.
- The author argues that the Pentagon should either accept Anthropic’s terms or switch to another AI vendor rather than coercing compliance.
- The situation highlights broader concerns about AI ethics, military use of AI, and corporate autonomy in the face of government pressure.