Hasty Briefs (beta)


The AI Bubble Is an Information War

6 hours ago
  • #AI Industry
  • #Financial Analysis
  • #Military AI
  • NVIDIA's earnings beat expectations but raised concerns about the AI industry's stability, with $27bn in cloud commitments pointing to potential revenue issues.
  • CoreWeave's Q4 FY2025 earnings showed a loss of 89 cents per share, with $1.57bn in revenue and a negative 6% operating margin. 67% of its revenue comes from Microsoft.
  • CoreWeave's revenue per megawatt dropped from $2.3m in Q3 to $1.847m in Q4 (a roughly 20% decline), suggesting weakening business fundamentals as it scales.
  • CoreWeave is described as a 'time bomb' due to deep unprofitability, massive capital expenditures ($10bn in 2025), and punishing debt, despite high-profile customers.
  • OpenAI's reported $13.1bn revenue in 2025 is questioned, with claims it spent $8.67bn on inference costs alone through September, raising doubts about sustainability.
  • Anthropic's Claude Code, despite hype, reportedly generated only $203m in revenue, with Anthropic spending an estimated $8 to $13.50 for every dollar earned.
  • OpenAI's funding rounds are scrutinized, with claims that the reported $110bn raise is misleading, as much of it is contingent or not yet disbursed.
  • Sam Altman's comments on CNBC suggested the AI industry relies on continuous revenue growth, drawing comparisons to a Ponzi scheme.
  • Anthropic and OpenAI are accused of using media leaks to mislead investors and the public about their financials and capabilities.
  • Anthropic's stance on military use of AI is criticized as hypocritical, as it supports 'all lawful uses' except mass surveillance and autonomous weapons, while actively participating in military operations.
  • OpenAI's deal with the Pentagon is revealed to allow 'any lawful use,' with loopholes that could permit mass surveillance, despite public claims of ethical safeguards.
  • Both Altman and Amodei are accused of monetizing war and deceiving the public about AI's capabilities, with their models used in military strategies and operations.
  • The ethical implications of AI in military use are highlighted, with LLMs like Claude and ChatGPT being unreliable yet trusted for life-and-death decisions.
  • The media's role in uncritically repeating AI companies' claims is criticized, contributing to misinformation and unethical use of AI in warfare.
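The unit-economics claims above can be sanity-checked with simple arithmetic. The sketch below recomputes two of the cited figures; all inputs are the article's claimed numbers, not independently verified data.

```python
# Recompute the unit-economics figures cited above.
# All inputs are the article's claimed numbers, not verified financials.

# CoreWeave: revenue per megawatt, Q3 -> Q4 (claimed, in $m)
q3_rev_per_mw = 2.3
q4_rev_per_mw = 1.847
decline = (q3_rev_per_mw - q4_rev_per_mw) / q3_rev_per_mw
print(f"CoreWeave revenue/MW decline: {decline:.1%}")  # -> 19.7%

# Anthropic: claimed $8-$13.50 of spend per dollar of Claude Code revenue,
# applied to the claimed $203m revenue figure
claude_code_revenue_m = 203
spend_per_dollar_low, spend_per_dollar_high = 8.0, 13.50
loss_low_m = claude_code_revenue_m * (spend_per_dollar_low - 1)
loss_high_m = claude_code_revenue_m * (spend_per_dollar_high - 1)
print(f"Implied net loss on Claude Code: ${loss_low_m:,.0f}m to ${loss_high_m:,.0f}m")
```

If both claimed figures hold, the per-megawatt decline is just under 20%, and Claude Code's implied loss runs into the low billions of dollars.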