Hasty Briefs

The Pentagon strongarmed AI firms before Iran strikes

11 hours ago
  • #military technology
  • #US politics
  • #AI ethics
  • US and Israeli attacks on Iran were preceded by tense negotiations between the US Department of Defense and AI company Anthropic over the ethical use of its Claude systems.
  • Anthropic sought guarantees that its AI would not be used for domestic surveillance or for autonomous weapons without human control, prompting President Trump to ban federal use of its technology.
  • OpenAI, in contrast, struck a deal with the Department of Defense permitting 'all lawful uses' of its tools, without specific ethical restrictions.
  • The Trump administration has opposed AI regulation, and many AI companies have aligned with it, while Anthropic has warned that AI could undermine democracy.
  • An international consensus on the risks of lethal autonomous weapons had been emerging, with the US, NATO, and the UK announcing principles for responsible AI use.
  • Military AI development relies heavily on private sector partnerships, with norms around AI use rapidly shifting in both government and industry.
  • Silicon Valley has largely welcomed Trump's deregulatory stance, with significant financial support from AI industry leaders.
  • Ethical AI assumes democratic norms, including algorithmic transparency and public accountability, which are challenged in autocratic regimes.
  • Anthropic's push for ethical discussions was met with the Trump administration labeling it a 'supply chain risk,' while OpenAI faces reputational risks for its lack of ethical limits.
  • The future of ethical military AI depends on strong democratic norms, which are currently under threat as the international order weakens.