Anthropic bans companies majority-controlled by China, Russia, Iran, North Korea
5 hours ago
- #geopolitical tech
- #national security
- #AI regulation
- Anthropic is updating its terms of service to bar companies majority-controlled by entities in China, Russia, Iran, or North Korea from using its Claude AI models, citing national security concerns.
- Anthropic is the first major US AI company to implement such restrictions; the change takes effect immediately and targets entities from adversarial nations.
- The policy closes a loophole that let companies from restricted regions access Claude through subsidiaries registered in other countries, such as Chinese firms operating out of Singapore.
- Anthropic cites the risk that authoritarian regimes could compel firms to share data or collaborate with intelligence agencies, potentially aiding rival militaries or accelerating their AI development.
- The ban applies to entities more than 50% owned by firms in unsupported regions, affecting major Chinese tech companies like ByteDance, Tencent, and Alibaba.
- Anthropic acknowledges potential revenue loss but deems the move necessary to address security risks.
- While US AI services like Claude and ChatGPT are officially blocked in China, VPNs allow access, though local alternatives (e.g., Qwen, DeepSeek) are widely used.
- The policy's practical impact in China may be limited, since local models dominate there and the bigger bottleneck for advanced AI training remains export-restricted hardware (e.g., Nvidia chips) rather than access to US models.