Hasty Briefs

Military AI Policy by Contract: The Limits of Procurement as Governance

11 hours ago
  • #Military Procurement
  • #Regulation by Contract
  • #AI Governance
  • The U.S. is adopting a flexible but inadequate AI governance model through regulation by contract, bypassing democratic accountability and public deliberation.
  • The Pentagon's 'any lawful use' policy requires AI vendors to drop any usage restrictions beyond what the law requires, prioritizing deployment speed over governance.
  • Anthropic was designated a supply chain risk after refusing to drop its red lines, prompting a legal challenge to the government's decision.
  • OpenAI's agreement with the Pentagon, framed around 'any lawful use,' shifts interpretive authority to the government, raising concerns about enforceability.
  • Contractual terms for military AI are being rewritten in response to public backlash, highlighting the absence of any formal governance review.
  • The 'consistent with applicable laws' clause in OpenAI's agreement leaves enforcement dependent on the government's interpretation, not the vendor.
  • The 'intentionally' qualifier in OpenAI's prohibition on domestic surveillance may weaken the prohibition by leaving room for interpretation.
  • Federal contracts, especially Other Transaction (OT) agreements, lack robust enforcement mechanisms, making it difficult to prevent misuse of AI technologies.
  • The Pentagon's reliance on AI systems such as Claude has reached a point where terminating contracts may no longer be feasible, leaving vendors with little practical leverage.
  • The administration's shift toward commercial-first procurement reforms risks removing baseline procedural defaults, placing more weight on bilateral negotiations.