A trillion dollars (potentially) wasted on gen-AI
- #AI
- #Machine Learning
- #Economy
- Ilya Sutskever, a prominent machine learning researcher, suggests that scaling AI with more chips and more data is plateauing, and that new techniques, such as neurosymbolic approaches and innate constraints, are needed.
- Sutskever highlights that current AI models generalize dramatically worse than humans, a fundamental issue that persists despite scaling efforts.
- Critics, including Subbarao Kambhampati and Emily Bender, have long pointed to the limitations of large language models (LLMs) and argued for more diverse research approaches.
- The AI industry has invested heavily in scaling LLMs, with estimates of roughly a trillion dollars spent, much of it on Nvidia chips and high salaries, without solving core problems such as hallucination and unreliable reasoning.
- The economic risks of an AI bubble are significant, with potential for a recession or financial crisis if the promised productivity gains from AI fail to materialize.
- The environmental cost of AI infrastructure is also a growing concern, with data centers' heavy water and power consumption burdening the communities that host them.
- The AI community's focus on scaling LLMs has come at the expense of work on the theoretical foundations of intelligence, potentially wasting resources and missing opportunities for more effective approaches.