Palantir partnership is at heart of Anthropic, Pentagon rift
5 days ago
- #AI ethics
- #Silicon Valley
- #US military
- The US military is considering banning Anthropic's AI from military use due to ethical concerns.
- Anthropic's AI models, available through Amazon and Palantir, are used in classified government operations, including monitoring Venezuelan President Nicolás Maduro.
- Tensions arose after Anthropic objected to its technology being used in certain military operations, prompting a Pentagon review.
- Anthropic has refused to sign an 'all lawful uses' contract with the Pentagon, seeking restrictions on surveillance and autonomous weapons applications.
- The Pentagon views Anthropic as a 'supply chain risk' and may bar subcontractors from using its AI models.
- Anthropic's stance contrasts with Palantir, which does not restrict government use of its technology.
- Military officials believe suppliers should not dictate how technology is used in operations.
- Current AI models like Claude play only a limited role in military applications, but the dispute highlights what defense officials see as Silicon Valley's paternalistic approach to how its technology is used.
- Anthropic now faces a choice: fully support military use of its models or risk losing government contracts.