Context engineering
- #Context Engineering
- #LLM
- #AI
- LLMs have evolved from conversational chatbots to integral decision-making components, necessitating a shift from 'prompt engineering' to 'context engineering'.
- Context engineering takes a more dynamic, targeted, and deliberate approach to what is fed into the model, treating the entire context window (system prompt, history, retrieved data, tool results) as the unit of design.
- Early LLM usage focused on text completion, but chat framing improved usability by structuring conversations with special tokens.
- Prompt engineering often relied on trial-and-error, lacking the systematic approach of true engineering.
- In-context learning lets LLMs pick up novel patterns demonstrated in the prompt itself, rather than relying solely on what they saw during training.
- Expanding context with various data types (e.g., documents, tool calls) increases complexity and risks like hallucination.
- Context engineering shifts the mindset from treating LLMs as oracles to briefing them as skilled analysts.
- Retrieval-augmented generation (RAG) is a form of context engineering, injecting external knowledge into the context window.
- Design patterns in context engineering (e.g., RAG, tool calling) enable modular, robust, and maintainable systems.
- Multi-agent systems leverage specialized agents, with context windows serving as contracts for interaction.
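The chat-framing point above can be made concrete. A chat "conversation" is ultimately flattened into a single token stream with special delimiter tokens before the model completes it; the sketch below uses ChatML-style `<|im_start|>`/`<|im_end|>` markers as one illustrative convention (other model families use different special tokens):

```python
# Sketch: flattening role-tagged chat messages into one completion prompt.
# The <|im_start|>/<|im_end|> markers follow the ChatML convention; this is
# illustrative, not any specific provider's exact template.

def render_chat(messages):
    """Serialize role-tagged messages into a single string for a completion model."""
    parts = []
    for msg in messages:
        parts.append(f"<|im_start|>{msg['role']}\n{msg['content']}<|im_end|>")
    # Leave the assistant turn open so the model completes it.
    parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)

prompt = render_chat([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize context engineering in one line."},
])
print(prompt)
```

The key idea is that "chat" is a structured framing layered on top of plain text completion.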
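In-context learning, as described above, can be demonstrated with a minimal few-shot prompt: the model infers a mapping format purely from examples placed in the context, with no explicit instruction. The city-to-airport-code task here is an invented example:

```python
# Sketch: a few-shot prompt for in-context learning. The model is expected
# to infer the "city -> code" pattern from the examples and complete it.

examples = [
    ("Paris", "CDG"),
    ("Tokyo", "HND"),
]

def few_shot_prompt(examples, query):
    lines = [f"{city} -> {code}" for city, code in examples]
    lines.append(f"{query} ->")  # the model completes the pattern
    return "\n".join(lines)

print(few_shot_prompt(examples, "London"))
```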
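The RAG bullet can be sketched end to end: retrieve the most relevant snippets and inject them into the context ahead of the question. Naive keyword overlap stands in for embedding search here, and all names and the corpus are illustrative:

```python
# Minimal RAG sketch: score documents by keyword overlap with the query
# (a stand-in for real embedding similarity), take the top k, and inject
# them into the prompt as grounding context.

def retrieve(query, corpus, k=2):
    q_terms = set(query.lower().split())
    scored = sorted(corpus, key=lambda doc: -len(q_terms & set(doc.lower().split())))
    return scored[:k]

def build_context(query, corpus):
    snippets = retrieve(query, corpus)
    sources = "\n".join(f"- {s}" for s in snippets)
    return (
        "Answer using only the sources below.\n"
        f"Sources:\n{sources}\n\n"
        f"Question: {query}\nAnswer:"
    )

corpus = [
    "The context window is the model's working memory.",
    "Tool calls return structured results to the model.",
    "RAG injects retrieved documents into the prompt.",
]
print(build_context("What does RAG inject into the prompt?", corpus))
```

The briefing framing from the "skilled analyst" bullet is visible here: the prompt tells the model what material to rely on, rather than hoping it recalls the right facts.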
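Tool calling, named above as a design pattern, follows a simple loop: the model emits a structured call, the runtime executes it, and the result is appended back into the context for the next turn. The JSON shape and tool registry below are illustrative, not any specific provider's API:

```python
# Sketch of the tool-calling loop. The model's output is assumed to be a
# JSON tool call; the runtime dispatches it and appends the result to the
# message list (i.e., back into the context window).
import json

TOOLS = {"add": lambda a, b: a + b}  # illustrative tool registry

def handle_model_output(output, messages):
    """If the model emitted a tool call, run it and append the result."""
    call = json.loads(output)
    result = TOOLS[call["name"]](**call["arguments"])
    messages.append({"role": "tool", "name": call["name"], "content": str(result)})
    return messages

messages = [{"role": "user", "content": "What is 2 + 3?"}]
# Pretend the model replied with this structured call:
model_output = '{"name": "add", "arguments": {"a": 2, "b": 3}}'
messages = handle_model_output(model_output, messages)
print(messages[-1])
```

Structuring tool results as first-class context entries is what makes the pattern modular and maintainable.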
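The "context windows as contracts" idea in the multi-agent bullet can be sketched by giving the handoff between agents an explicit typed shape. The agent names and fields below are hypothetical, chosen only to show the pattern:

```python
# Sketch: treating the context passed between agents as an explicit contract.
# A typed structure makes the handoff between a hypothetical "researcher"
# and "writer" agent checkable rather than an ad hoc blob of text.
from dataclasses import dataclass

@dataclass
class ResearchBrief:
    topic: str
    findings: list[str]  # what the researcher guarantees to provide
    sources: list[str]

def writer_agent(brief: ResearchBrief) -> str:
    # The writer consumes only the contracted fields.
    bullets = "\n".join(f"- {f}" for f in brief.findings)
    return f"Draft on {brief.topic}:\n{bullets}"

brief = ResearchBrief(
    topic="context engineering",
    findings=["LLMs need deliberately curated context, not just prompts"],
    sources=["source article"],
)
print(writer_agent(brief))
```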