Is AI killing people by accident?
- #Generative AI risks
- #AI in military
- #Ethics of AI
- The writer discusses whether AI was involved in a mistargeting incident that killed 150 schoolchildren in Iran, but admits uncertainty due to a lack of transparency from military officials.
- Generative AI has known issues with reasoning, visual cognition, and common sense, making it unreliable for military targeting.
- Military use of AI may vary in effectiveness: potentially helpful in logistics, but prone to errors in targeting, especially in unfamiliar situations.
- Beyond technical unreliability, AI in military use raises moral concerns about shifting responsibility for civilian casualties to algorithms rather than human decision-makers.
- The article criticizes the premature integration of AI into military operations, warning of unnecessary deaths and potential escalation, including nuclear war.
- "Pathocrats" are accused of favoring AI for its "move fast and break things" ethos, disregarding the need for caution and oversight.
- The piece concludes that generative AI is not ready for military use, yet its deployment continues without adequate understanding of the consequences.