Race for AI is making Hindenburg-style disaster a real risk, says leading expert
- #Commercialization
- #AI Risks
- #Technology Safety
- Michael Wooldridge warns of a potential 'Hindenburg-style' disaster in AI due to rushed commercialization.
- Commercial pressures are leading to AI tools being released before their capabilities and flaws are fully understood.
- AI chatbots with easily bypassed guardrails highlight the prioritization of commercial incentives over safety.
- Wooldridge compares the risk to the Hindenburg disaster, which ended public interest in airships.
- Potential AI disasters include a deadly software update, an AI-powered hack, or the collapse of a major company.
- Modern AI defies researchers' original expectations: it is approximate, rather than the sound and complete systems they aimed to build.
- Large language models generate answers probabilistically, which makes their capabilities inconsistent.
- AI chatbots provide confident answers even when wrong, risking misleading users.
- Some people are forming romantic relationships with AIs, treating them as human-like.
- Wooldridge advocates for AI to be seen as tools, not human-like entities, citing Star Trek's non-human AI as a better model.