A New AI Winter Is Coming
- #Transformers
- #LLM Limitations
- #AI Winter
- LLMs and transformers initially showed promise but have failed to deliver in practice, leading to an impending AI winter.
- Early excitement about transformers stemmed from their emergent capabilities and unsupervised learning, which appeared to surpass older AI technologies.
- Traditional AI techniques ran into hard limits, such as NP-complete search problems and impractical scaling, which transformers seemed to overcome.
- Transformers generate text token by token, producing plausible but often incorrect or hallucinated outputs (see the sketch after this list).
- The fundamental limitation of transformers is their inability to discern correct from incorrect outputs, making them unreliable.
- Corporate generative AI projects are failing at a high rate, reminiscent of the dot-com bubble.
- In programming, transformers help non-programmers get started, but they produce error-prone code that still requires expert oversight.
- Transformers should not be used in critical applications like medicine, law enforcement, or education due to high failure rates.
- Even after the bubble bursts, a few 'killer app' use cases will remain, but most will fade away.
- The author advises reducing exposure to the impending AI bubble crash, predicting a harsh AI winter.
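To make the token-by-token point concrete, here is a minimal, hypothetical sketch of autoregressive generation. A toy scoring function stands in for a real transformer, and the vocabulary and function names are invented for illustration: at every step the loop samples a plausible next token from a probability distribution and appends it, and nothing anywhere checks whether the resulting text is factually correct.

```python
# Toy sketch of autoregressive, token-by-token generation (hypothetical model,
# not any real LLM). Each step: score every vocabulary item, turn scores into
# probabilities, sample one token, append it, repeat. There is no step that
# verifies correctness -- only plausibility given the tokens so far.
import numpy as np

rng = np.random.default_rng(0)
VOCAB = ["the", "cat", "sat", "on", "mat", "moon", "."]  # toy vocabulary

def toy_next_token_logits(context: list[int]) -> np.ndarray:
    """Stand-in for a transformer forward pass: one score per vocab token."""
    return rng.normal(size=len(VOCAB)) + np.bincount(context, minlength=len(VOCAB)) * 0.1

def generate(prompt: list[int], max_new_tokens: int = 5) -> list[int]:
    tokens = list(prompt)
    for _ in range(max_new_tokens):
        logits = toy_next_token_logits(tokens)
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()                       # softmax over the vocabulary
        next_id = rng.choice(len(VOCAB), p=probs)  # sample a *plausible* token
        tokens.append(next_id)                     # no correctness check anywhere
    return tokens

# Prints the prompt ("the cat") plus a sampled, plausible-looking continuation.
print(" ".join(VOCAB[i] for i in generate([0, 1])))
```

The point of the sketch is structural: because the only signal available at each step is "how likely is this token given the previous ones," fluent but wrong output is a natural product of the sampling loop, not an occasional glitch.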