Context Engineering for AI Agents: Lessons from Building Manus
- #Context Engineering
- #AI Agents
- #Machine Learning
- The Manus project chose context engineering over training an end-to-end agentic model, letting improvements ship in hours rather than retraining cycles and keeping the product independent of progress in the underlying models.
- KV-cache hit rate is crucial for AI agents, affecting latency and cost. Practices to improve it include keeping prompt prefixes stable, making context append-only, and marking cache breakpoints explicitly.
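The append-only principle can be sketched in a few lines. This is a minimal illustration, not Manus code: `SYSTEM_PROMPT` and `build_context` are hypothetical names, and the point is only that each new context is an exact byte-level extension of the previous one, so a serving stack with prefix caching can reuse the cached KV entries.

```python
# Illustrative sketch of append-only context management for KV-cache reuse.
# SYSTEM_PROMPT and build_context are assumed names, not a real agent API.

SYSTEM_PROMPT = "You are an agent..."  # stable prefix: no timestamps, no mutable state


def build_context(history: list, new_event: str) -> list:
    """Append only -- never rewrite earlier turns, so the serialized
    prefix stays identical across steps and cached KV entries stay valid."""
    history.append(new_event)
    return [SYSTEM_PROMPT] + history


ctx1 = build_context([], "user: book a flight")
ctx2 = build_context(ctx1[1:], "tool: search_flights(...)")
# ctx2 begins with ctx1 as an exact prefix, so the cache can be reused
assert ctx2[:len(ctx1)] == ctx1
```

The same reasoning is why a timestamp in the system prompt is so costly: it changes the very first tokens, invalidating the cache for the entire context on every request.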
- Instead of dynamically adding or removing tools, Manus uses a context-aware state machine to manage tool availability by masking token logits during decoding.
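A toy version of this masking idea, under stated assumptions: the state names, tool names, and `ALLOWED` table below are invented for illustration. The key move is that every tool definition stays in the (cached) context, and availability is enforced only at decode time by driving the logits of disallowed tools to negative infinity.

```python
# Illustrative sketch of state-dependent logit masking for tool selection.
# The state machine, tool names, and ALLOWED table are assumptions.
import math

TOOLS = ["browser_open", "shell_exec", "file_write", "reply"]

# Which tools each (hypothetical) agent state permits
ALLOWED = {
    "browsing": {"browser_open", "reply"},
    "coding": {"shell_exec", "file_write", "reply"},
}


def mask_logits(logits: dict, state: str) -> dict:
    """Keep all tool definitions in context; forbid disallowed tools by
    pushing their logits to -inf before sampling, so they can never be picked."""
    allowed = ALLOWED[state]
    return {t: (v if t in allowed else -math.inf) for t, v in logits.items()}


logits = {t: 0.0 for t in TOOLS}
masked = mask_logits(logits, "browsing")
assert masked["shell_exec"] == -math.inf  # unavailable in this state
assert masked["browser_open"] == 0.0      # still selectable
```

Because tool definitions are never added or removed from the prompt, the serialized prefix stays stable and the KV-cache benefits from the previous point are preserved.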
- Manus treats the file system as unlimited, persistent context, allowing the agent to read and write files as structured, externalized memory.
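One way to picture this is restorable compression: a large observation is spilled to a file, and the context keeps only a short reference from which the content can be re-read on demand. The threshold, file name, and reference format below are illustrative assumptions, not the Manus implementation.

```python
# Illustrative sketch of restorable context compression via the file system.
# MAX_INLINE, the file name, and the "[stored at ...]" format are assumptions.
import os
import tempfile

MAX_INLINE = 200  # chars kept inline before spilling to disk


def externalize(observation: str, workdir: str) -> str:
    """Drop a large observation from the context but keep the file path,
    so the information is compressed yet fully recoverable."""
    if len(observation) <= MAX_INLINE:
        return observation
    path = os.path.join(workdir, "obs_0001.txt")
    with open(path, "w") as f:
        f.write(observation)
    return f"[stored at {path}]"


workdir = tempfile.mkdtemp()
page = "x" * 10_000                      # e.g. a scraped web page
ref = externalize(page, workdir)         # context holds only a short reference
restored = open(ref[len("[stored at "):-1]).read()  # recoverable on demand
assert restored == page
```

Unlike lossy truncation or summarization, nothing is irreversibly discarded: the agent can always read the file back if a later step needs the details.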
- Recitation, like updating a todo.md file, helps manipulate the model's attention to maintain focus on objectives in long tasks.
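The mechanics are simple enough to sketch: regenerate the todo list each step and append it to the end of the context, so the global plan always sits in the model's most recent attention span. The checkbox format below is an illustrative assumption.

```python
# Illustrative sketch of recitation: rewrite todo.md every step and append
# it to the tail of the context. The markdown checkbox format is assumed.


def recite(todos: list) -> str:
    """Render (task, done) pairs as a fresh todo.md snapshot."""
    lines = ["# todo.md"]
    for task, done in todos:
        lines.append(f"- [{'x' if done else ' '}] {task}")
    return "\n".join(lines)


todos = [("research flights", True), ("compare prices", False), ("book ticket", False)]
print(recite(todos))
```

Re-emitting the plan at the tail of a long context counteracts "lost in the middle" effects: objectives stated only once, hundreds of steps earlier, fade from the model's effective attention.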
- Leaving failed actions in the context helps the model learn from mistakes and avoid repeating them, improving agent behavior.
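A minimal sketch of this policy, with an invented event format: instead of catching an error and silently retrying, the failure is recorded in the transcript so the model conditions on the evidence and implicitly updates away from the failed action.

```python
# Illustrative sketch: failed actions stay in the transcript rather than
# being scrubbed. The event dict structure is an assumption for this demo.


def run_action(context: list, action, *args) -> list:
    """Execute an action and append the outcome -- success or failure --
    to the context, so later steps can see what went wrong."""
    try:
        result = action(*args)
        context.append({"action": action.__name__, "ok": True, "result": result})
    except Exception as e:
        # Preserve the error instead of hiding it from the model
        context.append({"action": action.__name__, "ok": False, "error": repr(e)})
    return context


ctx = []
run_action(ctx, int, "not-a-number")  # fails, and the failure is kept
run_action(ctx, int, "42")
assert ctx[0]["ok"] is False and "ValueError" in ctx[0]["error"]
assert ctx[1]["ok"] is True and ctx[1]["result"] == 42
```

The counterintuitive part is resisting the urge to "clean up" the trace: a tidy, failure-free context looks nicer but deprives the model of exactly the signal that prevents a repeat mistake.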
- Few-shot prompting can trap agents in repetitive behavior: the model imitates the uniform pattern of past action-observation pairs rather than reasoning about the current step. Introducing structured variation in how actions and observations are serialized breaks the pattern and prevents drift.
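One concrete form this variation can take, as a sketch with invented templates: rotate among a few equivalent serialization formats so that repeated similar steps do not accumulate into one uniform few-shot pattern the model mimics blindly.

```python
# Illustrative sketch of structured variation in serialization.
# The templates and step-based rotation scheme are assumptions.

TEMPLATES = [
    "Action: {a} -> Observation: {o}",
    "Ran {a}; result: {o}",
    "[{a}] => {o}",
]


def serialize(step: int, action: str, obs: str) -> str:
    """Same event, rotating surface form; deterministic per step so the
    serialized prefix is still reproducible across retries."""
    template = TEMPLATES[step % len(TEMPLATES)]
    return template.format(a=action, o=obs)


out = [serialize(i, "fetch_page", "200 OK") for i in range(3)]
assert len(set(out)) == 3  # identical event, three distinct surface forms
```

Because the choice is deterministic in the step index rather than random at request time, replays produce the same bytes and earlier cached prefixes remain valid.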
- Context engineering is essential for agent systems, shaping how agents behave, recover, and scale. Manus's lessons are shared to help others avoid similar pitfalls.