What I don't like about chains of thought
11 days ago
- #AI Reasoning
- #Language Models
- #Cognitive Science
- Chain of thought (CoT) enhances LLMs by letting them reason step by step in natural language.
- With CoT, an LLM can adapt its compute budget to task complexity: harder problems simply get more generated tokens (see the prompting sketch after this list).
- Reasoning through language is inefficient compared to non-verbal human reasoning: when playing a sport or writing code, we rarely narrate each step to ourselves.
- Language is a communication tool, not necessarily the most efficient medium for internal reasoning.
- Human non-verbal reasoning is faster and more efficient, which suggests that forcing every reasoning step through language is a bottleneck for LLMs.
- Future AI may need to reason in a specialized embedding (latent) space, beyond token prediction, to realize these efficiency gains (see the latent-reasoning sketch after this list).
- CoT is a useful hack for current LLMs, but it is unlikely to be the final answer for reaching human-level or superhuman intelligence.
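
A minimal sketch of the adaptive-compute point above, assuming nothing about any particular model API: `generate` is a hypothetical stand-in for an LLM call, and its canned outputs are illustrative only. The point is that the CoT variant spends more tokens, and therefore more compute, on the same question.

```python
# Minimal sketch of chain-of-thought prompting and its adaptive-compute
# effect. `generate` is a hypothetical stand-in for any LLM call; the
# canned outputs below are illustrative, not real model responses.

def generate(prompt: str) -> str:
    """Hypothetical LLM call; returns canned text for illustration."""
    if "step by step" in prompt:
        # A CoT prompt elicits intermediate reasoning before the answer.
        return (
            "There are 3 boxes with 4 apples each, so 3 * 4 = 12 apples. "
            "Eating 2 leaves 12 - 2 = 10. Answer: 10"
        )
    # A direct prompt spends almost no tokens on the way to an answer.
    return "Answer: 10"

direct = generate("3 boxes hold 4 apples each; I eat 2. How many remain?")
cot = generate(
    "3 boxes hold 4 apples each; I eat 2. How many remain? "
    "Let's think step by step."
)

# The CoT response consumes far more text: the model buys extra compute
# for a harder problem simply by emitting more tokens.
print(len(direct.split()), "words (direct)")        # word count as a token proxy
print(len(cot.split()), "words (chain of thought)")
```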
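
And a sketch of what latent-space reasoning could look like, under heavy assumptions: the random matrix `W`, the dimension `d`, and the step count are all hypothetical placeholders, not any published architecture. What matters is the control flow: the state loops through the model without ever being decoded into tokens, so the compute budget is the number of latent steps rather than the number of words.

```python
# Illustrative sketch of reasoning in a continuous embedding space
# instead of through tokens. All components here (the random "model",
# the dimension, the step count) are hypothetical; only the control
# flow matters: the state is fed back directly, never decoded to language.

import numpy as np

rng = np.random.default_rng(0)
d = 64                                          # hypothetical embedding width
W = rng.standard_normal((d, d)) / np.sqrt(d)    # stand-in "reasoning" map

def reason_step(h: np.ndarray) -> np.ndarray:
    """One latent reasoning step: transform the state, emit no tokens."""
    return np.tanh(W @ h)

h = rng.standard_normal(d)   # initial state, e.g. an encoded question
for _ in range(8):           # compute budget = number of latent steps,
    h = reason_step(h)       # not number of generated tokens

# Only the final state would be decoded into language, once.
print("final latent state norm:", np.linalg.norm(h))
```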