Cognitive Foundations for Reasoning and Their Manifestation in LLMs

  • #Large Language Models
  • #Cognitive Science
  • #Artificial Intelligence
  • Large language models (LLMs) solve complex problems yet fail on simpler variants, suggesting their reasoning mechanisms differ from humans'.
  • A taxonomy of 28 cognitive elements is synthesized from cognitive science research to analyze reasoning behaviors in LLMs.
  • A fine-grained cognitive evaluation framework is proposed and used to analyze 170K reasoning traces from 17 models alongside 54 human think-aloud traces (a sketch of this kind of trace annotation follows the list).
  • Systematic structural differences are found: humans use hierarchical nesting and meta-cognitive monitoring, while models rely on shallow forward chaining.
  • Meta-analysis of 1,598 LLM reasoning papers shows research focuses on easily quantifiable behaviors, neglecting meta-cognitive controls correlated with success.
  • Models possess behavioral repertoires associated with success but fail to deploy them spontaneously.
  • Test-time reasoning guidance is developed to scaffold the reasoning structures associated with success, improving performance by up to 60% on complex problems (see the scaffolding sketch after the list).
  • The study bridges cognitive science and LLM research, aiming for models that reason through principled cognitive mechanisms rather than shortcuts or memorization.
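
To make the fine-grained cognitive evaluation bullet concrete, here is a minimal sketch of what annotating a reasoning trace against a small cognitive-element taxonomy could look like. The element names, lexical cues, and the `annotate_trace` helper are illustrative assumptions, not the paper's 28-element taxonomy or its actual annotation pipeline.

```python
# Sketch: label steps of a reasoning trace with cognitive elements via crude lexical cues.
# Element names and cues are invented for illustration only.
from collections import Counter

ELEMENT_CUES = {
    "subgoal_setting": ["the goal is", "first i need", "break this into"],
    "forward_chaining": ["therefore", "thus", "which gives"],
    "self_monitoring": ["let me check", "does this make sense", "wait"],
    "backtracking": ["actually", "that's wrong", "let me try again"],
}

def annotate_trace(trace: str) -> Counter:
    """Count, per cognitive element, how many trace steps contain one of its cues."""
    counts = Counter()
    for step in trace.lower().replace("\n", " ").split("."):
        for element, cues in ELEMENT_CUES.items():
            if any(cue in step for cue in cues):
                counts[element] += 1
    return counts

trace = (
    "The goal is to find x. First I need to isolate the term. "
    "Dividing both sides by 3, therefore x = 6. "
    "Wait, let me check the sign. Actually, that's wrong, let me try again."
)
print(annotate_trace(trace))
# counts per element: subgoal_setting 2, forward_chaining 1, self_monitoring 1, backtracking 1
```

The paper's structural findings (hierarchical nesting, meta-cognitive monitoring) would require richer analysis than surface counts; the sketch only shows the shape of per-element trace annotation.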
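The test-time guidance result suggests that scaffolding the structures correlated with success (sub-goal decomposition, monitoring, backtracking) helps models deploy repertoires they already possess. A minimal sketch of such scaffolding, assuming a caller-supplied `generate` function and an invented prompt template rather than the paper's actual guidance:

```python
# Sketch: test-time reasoning guidance as a prompt scaffold.
# The scaffold wording and the `generate` interface are assumptions for illustration.
from typing import Callable

SCAFFOLD = """Before answering, reason as follows:
1. Restate the goal and decompose it into sub-goals.
2. Solve each sub-goal, nesting further decompositions where needed.
3. After each step, check whether the partial result is consistent with the goal.
4. If a check fails, backtrack and revise before continuing.

Problem: {problem}
"""

def guided_answer(problem: str, generate: Callable[[str], str]) -> str:
    """Wrap the problem in the reasoning scaffold and delegate to any text-generation backend."""
    return generate(SCAFFOLD.format(problem=problem))

if __name__ == "__main__":
    # Stand-in backend so the sketch runs end to end without an API call.
    echo_model = lambda prompt: f"[model output for a {len(prompt)}-character prompt]"
    print(guided_answer("What is 17 * 24?", echo_model))
```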