From Tokens to Thoughts: How LLMs and Humans Trade Compression for Meaning
- #Natural Language Processing
- #Cognitive Science
- #Artificial Intelligence
- Humans organize knowledge into compact categories through semantic compression, preserving meaning while abstracting diverse instances.
- Large Language Models (LLMs) display strong linguistic abilities, yet they balance compression and semantic fidelity differently than humans do.
- A novel information-theoretic framework is introduced to compare how humans and LLMs trade compression against semantic fidelity in their knowledge representations (a toy version is sketched after this list).
- LLMs form broad conceptual categories that align with human judgments, but they struggle with fine-grained semantic distinctions, such as which members of a category are most typical.
- LLMs favor aggressive statistical compression, whereas humans prioritize adaptive nuance and contextual richness.
- These findings highlight key differences between AI and human cognitive architectures and suggest directions for building LLMs with more human-like conceptual representations.
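
The summary does not spell out the framework, but trading compression for meaning is classically formalized with rate-distortion / Information Bottleneck objectives of the form `L = I(X; C) + β · Distortion(X, C)`. The sketch below is a minimal toy version under that assumption: the function names (`complexity_bits`, `distortion`, `tradeoff`), the `beta` parameter, and the random stand-in "embeddings" are all illustrative, not the paper's actual implementation.

```python
import numpy as np

def complexity_bits(labels: np.ndarray) -> float:
    """I(X; C) for a deterministic clustering of equally likely items,
    which reduces to the entropy of the cluster assignment, H(C)."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log2(p)))

def distortion(embeddings: np.ndarray, labels: np.ndarray) -> float:
    """Mean squared distance of each item to its cluster centroid:
    how much within-category semantic detail the grouping throws away."""
    total = 0.0
    for c in np.unique(labels):
        members = embeddings[labels == c]
        total += float(np.sum((members - members.mean(axis=0)) ** 2))
    return total / len(embeddings)

def tradeoff(embeddings: np.ndarray, labels: np.ndarray, beta: float = 1.0) -> float:
    """Lower is better: compact categories (low complexity) whose members
    stay semantically close (low distortion). beta sets the exchange rate."""
    return complexity_bits(labels) + beta * distortion(embeddings, labels)

# Broad categories compress well but blur detail; narrow ones do the reverse.
rng = np.random.default_rng(0)
emb = rng.normal(size=(100, 8))        # stand-in for word embeddings
coarse = np.repeat(np.arange(4), 25)   # 4 broad categories
fine = np.repeat(np.arange(20), 5)     # 20 narrow categories
print(tradeoff(emb, coarse, beta=0.5), tradeoff(emb, fine, beta=0.5))
```

In this toy model, a small `beta` rewards collapsing items into a few broad clusters, the aggressive-compression regime the findings above associate with LLMs, while a larger `beta` rewards keeping category members tight, closer to the nuance-preserving strategy attributed to humans.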