Is chain-of-thought AI reasoning a mirage?
- #philosophy of AI
- #AI reasoning
- #chain-of-thought
- Whether chain-of-thought (CoT) reasoning in AI is genuine reasoning is debated; some argue it merely replays memorized patterns rather than reasoning at all.
- A paper from Arizona State University argues that CoT performance collapses under distribution shift, concluding that CoT is a 'mirage' of reasoning.
- The paper tests CoT with a small transformer trained on toy alphabet-transformation problems, finding that it struggles on unseen or slightly altered tasks (a minimal sketch of this style of probe appears after this list).
- One criticism of the paper is its tiny model (600k parameters): findings at that scale may not transfer to far larger models capable of more complex reasoning.
- Human reasoning also leans on heuristics and templates, so the paper's implicit comparison against an ideal 'principled reasoner' sets an unrealistic bar.
- The question of whether AI reasoning is 'real' is ultimately philosophical; there is no consensus definition of reasoning to test against.
- A better reasoning paper would either measure humans on the same tasks or define 'real' reasoning up front, and would use tasks that admit multiple solution strategies rather than a single computation.
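
To make the distribution-shift argument concrete, here is a minimal sketch in the spirit of the paper's toy setup. The ROT-style letter-shift task, the sample sizes, and the pure-memorization baseline are illustrative assumptions, not the paper's actual model or benchmark: the point is only that a lookup table looks competent on shifts it has seen and collapses on an unseen one.

```python
# Illustrative sketch of a distribution-shift probe (assumed details,
# not the paper's code): "train" a memorizer on a few letter-shift
# transformations, then evaluate it on an unseen shift.
import random
import string

ALPHA = string.ascii_uppercase

def rot(text, k):
    """Shift each letter k places forward in the alphabet, wrapping at Z."""
    return "".join(ALPHA[(ALPHA.index(c) + k) % 26] for c in text)

def sample_word(length=2):
    """Draw a random uppercase word (short, so memorization can cover it)."""
    return "".join(random.choices(ALPHA, k=length))

# "Training" distribution: shifts k = 1, 2, 3.
train_shifts = [1, 2, 3]

# Pure-memorization baseline: a table mapping (word, shift) -> output.
memory = {}
for k in train_shifts:
    for _ in range(5000):  # enough draws to cover most 2-letter words
        w = sample_word()
        memory[(w, k)] = rot(w, k)

def accuracy(shifts, trials=1000):
    """Score the memorizer on fresh examples drawn from the given shifts."""
    hits = 0
    for _ in range(trials):
        k = random.choice(shifts)
        w = sample_word()
        hits += memory.get((w, k)) == rot(w, k)
    return hits / trials

# Memorization looks like reasoning in-distribution but collapses off it;
# a model that had internalized the shift *rule* would pass both checks.
print("in-distribution (k in 1..3):", accuracy(train_shifts))
print("distribution shift (k = 7): ", accuracy([7]))
```

The contrast is the whole argument: in-distribution accuracy alone cannot distinguish a lookup table from a rule-following reasoner, which is why the paper evaluates on shifted splits. The counter-criticism above is that a 600k-parameter model may simply be too small to learn the rule, so its failure does not settle what larger models do.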