Why I don't think AGI is imminent
3 months ago
- #AI
- #Machine Learning
- #Cognitive Science
- The CEOs of OpenAI and Anthropic claim human-level AI is near or already arriving; the claims draw wide public interest but little technical scrutiny.
- Human cognition relies on hardwired cognitive primitives such as number sense, object permanence, and causality, which language presupposes but rarely states explicitly.
- LLMs struggle to reverse-engineer these primitives from language alone, which shows up as limitations in arithmetic, logic, and spatial reasoning (the tokenization sketch after this list illustrates one reason arithmetic in particular is awkward).
- Training AI on video can instill a degree of object permanence, but the resulting models still lack a deeper grasp of persistence and entity tracking.
- Infants and newly hatched animals exhibit core cognitive abilities such as object permanence with little or no prior visual experience, suggesting these primitives are evolutionary rather than learned.
- Current AI lacks the rich, multisensory perception-action coupling that is crucial for robust cognitive development.
- Agents trained in simulated environments, such as Google DeepMind's SIMA 2 and Dreamer 4, show promise but don't yet bridge the gap between embodied experience and language-based reasoning.
- Decoder-only transformers compute each output in a single fixed-depth forward pass with causal attention, so they cannot revisit or revise earlier representations once they are computed (see the attention-mask sketch after this list).
- Alternative architectures such as neurosymbolic systems or recurrent networks may be needed (a minimal recurrent-update sketch follows the list), but scaling them remains an open challenge.
- AGI claims by CEOs are marketing-driven, while researchers highlight fundamental limitations in reasoning, generalization, and embodiment.
- Progress on AI benchmarks often comes from brute-force methods, such as sampling many candidate answers and keeping one that passes a checker, rather than genuine cognitive improvements (see the best-of-n sketch below).
- Reaching AGI will likely require decades of interdisciplinary research, not just scaling of current paradigms.
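
One concrete mechanism behind the arithmetic limitation above is tokenization: BPE tokenizers split long numbers into arbitrary multi-digit chunks, so the digit-by-digit alignment that column arithmetic relies on is invisible to the model. A minimal sketch, assuming the tiktoken library is installed; the choice of the cl100k_base encoding is mine, for illustration:

```python
# Minimal sketch: how a BPE tokenizer chunks numbers (assumes tiktoken is installed).
# The encoding name "cl100k_base" is an illustrative choice, not from the post itself.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

for number in ["7", "1234", "98765", "31415926"]:
    token_ids = enc.encode(number)
    chunks = [enc.decode([t]) for t in token_ids]
    print(f"{number!r} -> {chunks}")

# Typical output: digit runs are grouped into chunks of up to three,
# e.g. '31415926' -> ['314', '159', '26'], so place value and carries
# do not line up with the token boundaries the model actually sees.
```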
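
To make the single-pass, left-to-right constraint concrete, here is a toy causal self-attention in NumPy. All names and dimensions are illustrative; this is a sketch of the general mechanism, not any particular model's code:

```python
# Toy causal self-attention: position i can only attend to positions <= i,
# and the whole computation is one feed-forward pass -- there is no loop
# that goes back and revises an earlier position's representation.
import numpy as np

def causal_attention(x, wq, wk, wv):
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = q @ k.T / np.sqrt(k.shape[-1])
    # Causal mask: set every score where a token would look at a
    # future position to -inf so it gets zero attention weight.
    mask = np.triu(np.ones_like(scores, dtype=bool), k=1)
    scores = np.where(mask, -np.inf, scores)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

rng = np.random.default_rng(0)
seq_len, d = 5, 8
x = rng.normal(size=(seq_len, d))
wq, wk, wv = (rng.normal(size=(d, d)) for _ in range(3))
out = causal_attention(x, wq, wk, wv)
# out[0] depends only on x[0]; out[4] on x[0..4]. Once computed, out[0]
# is never updated in light of later tokens within this pass.
print(out.shape)  # (5, 8)
```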
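
For contrast, a recurrent update maintains a persistent state that is rewritten at every step, so earlier information can in principle be reweighed as new inputs arrive. A deliberately minimal Elman-style cell, with all sizes and names made up for this sketch:

```python
# Minimal Elman-style recurrent cell: a single hidden state h is revised
# at every step, unlike a fixed-depth transformer pass. Purely illustrative.
import numpy as np

def rnn_step(h, x, w_hh, w_xh):
    # The new state is a function of the OLD state, so earlier information
    # can be reweighed or overwritten as new inputs arrive.
    return np.tanh(h @ w_hh + x @ w_xh)

rng = np.random.default_rng(1)
d_in, d_hidden, steps = 4, 6, 10
w_hh = rng.normal(size=(d_hidden, d_hidden)) * 0.1
w_xh = rng.normal(size=(d_in, d_hidden)) * 0.1

h = np.zeros(d_hidden)
for t in range(steps):
    x_t = rng.normal(size=d_in)
    h = rnn_step(h, x_t, w_hh, w_xh)  # state revised at each step
print(h.shape)  # (6,)
```

The catch, and one reason scaling remains an open challenge, is that this sequential dependence blocks the highly parallel training that made transformers cheap to scale.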
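
Finally, "brute force" on benchmarks often means best-of-n sampling: draw many candidates and keep one that passes a checker, which lifts scores without changing the underlying model. A toy sketch; the `generate` and checking logic are hypothetical stand-ins, not any real model's or library's API:

```python
# Toy best-of-n sampling: benchmark scores rise with n even though the
# underlying "model" never improves. `generate` is a hypothetical stand-in.
import random

def generate(question, rng):
    # Stand-in for sampling one answer from a weak model:
    # correct answer (2 * question) only 20% of the time.
    return question * 2 if rng.random() < 0.2 else rng.randrange(100)

def best_of_n(question, n, rng):
    candidates = [generate(question, rng) for _ in range(n)]
    # Keep any candidate the checker accepts; fall back to the first.
    return next((c for c in candidates if c == question * 2), candidates[0])

rng = random.Random(0)
questions = list(range(50))
for n in (1, 4, 16, 64):
    solved = sum(best_of_n(q, n, rng) == q * 2 for q in questions)
    print(f"n={n:3d}: {solved}/{len(questions)} solved")
# Accuracy climbs toward 100% purely by sampling more candidates,
# with no change to the generator itself.
```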