Human-Like Episodic Memory for Infinite Context LLMs
- #Episodic Memory
- #Large Language Models
- #Artificial Intelligence
- Introduces EM-LLM, a novel approach that integrates principles of human episodic memory into LLMs, enabling effectively infinite context handling.
- EM-LLM organizes tokens into episodic events using Bayesian surprise and graph-theoretic boundary refinement.
- Features a two-stage memory process for efficient, human-like information retrieval.
- Outperforms state-of-the-art long-context methods such as InfLLM, as well as RAG baselines, on the LongBench and InfiniteBench benchmarks.
- Scales to retrieval across 10 million tokens, surpassing full-context models on most tasks.
- EM-LLM's event segmentation correlates strongly with event boundaries perceived by humans.
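The surprise-based segmentation above can be sketched roughly as follows: a token's surprise is its negative log-probability under the model, and an event boundary is placed where surprise exceeds a threshold derived from recent surprise statistics. This is a minimal illustrative sketch, not the paper's implementation; the function name, `gamma`, and `window` are assumed hyperparameters, and the paper's subsequent graph-theoretic boundary refinement step is omitted here.

```python
import numpy as np

def segment_by_surprise(log_probs, gamma=1.0, window=64):
    """Place an event boundary wherever token surprise (-log p) exceeds
    mean + gamma * std of surprise over the preceding window.

    `gamma` and `window` are illustrative values, not taken from the paper.
    Returns the list of boundary indices (the sequence start counts as one).
    """
    surprise = -np.asarray(log_probs, dtype=float)
    boundaries = [0]
    for t in range(1, len(surprise)):
        past = surprise[max(0, t - window):t]
        threshold = past.mean() + gamma * past.std()
        if surprise[t] > threshold:  # unusually surprising token -> new event
            boundaries.append(t)
    return boundaries

# A sharp drop in token probability triggers a boundary:
log_probs = [-1.0] * 50 + [-10.0] + [-1.0] * 20
print(segment_by_surprise(log_probs))  # → [0, 50]
```

In the full method, these initial boundaries would then be refined with a graph-theoretic criterion over key similarity, and retrieval would operate over the resulting events rather than raw token windows.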