Large Language Model Reasoning Failures
- #Large Language Models
- #Reasoning Failures
- #Artificial Intelligence
- Large Language Models (LLMs) exhibit reasoning failures despite their advanced capabilities.
- A novel categorization framework divides reasoning into embodied and non-embodied types, with non-embodied reasoning further split into informal (intuitive) and formal (logical).
- Reasoning failures are classified into three types: fundamental failures, application-specific limitations, and robustness issues (sketched as a small data model after this list).
- The survey provides definitions, root causes, and mitigation strategies for each type of reasoning failure.
- A GitHub repository is released that compiles research on LLM reasoning failures, serving as a resource for future studies.
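
The taxonomy summarized above can be read as a small data model. Below is a minimal, illustrative Python sketch of that structure; the class and field names (`ReasoningType`, `FailureType`, `FailureRecord`, `definition`, `root_cause`, `mitigation`) and the example values are assumptions for illustration, not identifiers or entries from the survey or its repository.

```python
from dataclasses import dataclass
from enum import Enum


class ReasoningType(Enum):
    """Top-level split in the survey's categorization framework."""
    EMBODIED = "embodied"
    NON_EMBODIED_INFORMAL = "non-embodied / informal (intuitive)"
    NON_EMBODIED_FORMAL = "non-embodied / formal (logical)"


class FailureType(Enum):
    """The three classes of reasoning failure discussed in the survey."""
    FUNDAMENTAL = "fundamental failure"
    APPLICATION_SPECIFIC = "application-specific limitation"
    ROBUSTNESS = "robustness issue"


@dataclass
class FailureRecord:
    """One catalogued failure: hypothetical fields mirroring the survey's
    per-failure coverage (definition, root cause, mitigation)."""
    reasoning_type: ReasoningType
    failure_type: FailureType
    definition: str
    root_cause: str
    mitigation: str


# Illustrative entry (invented values, not taken from the survey).
example = FailureRecord(
    reasoning_type=ReasoningType.NON_EMBODIED_FORMAL,
    failure_type=FailureType.ROBUSTNESS,
    definition="Answer flips when a math word problem is lightly paraphrased.",
    root_cause="Reliance on surface patterns rather than the underlying logic.",
    mitigation="Evaluate on perturbed variants; train with adversarial paraphrases.",
)
print(example.failure_type.value)
```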