GitHub - langchain-ai/rag-from-scratch
6 days ago
- #LLM
- #RAG
- #Machine Learning
- LLMs are trained on a fixed corpus, limiting access to private or recent information.
- Fine-tuning can help but is often costly and not ideal for factual recall.
- Retrieval-augmented generation (RAG) expands an LLM's knowledge by retrieving relevant external documents at query time.
- RAG grounds LLM generation through in-context learning.
- A video playlist and notebooks explain RAG basics, including indexing, retrieval, and generation.
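The three stages above (indexing, retrieval, generation) can be sketched minimally. This is a hypothetical illustration, not the repo's actual code: it uses simple bag-of-words vectors and cosine similarity for retrieval, and a stubbed generation step that only builds the grounded prompt a real pipeline would send to an LLM.

```python
from collections import Counter
import math

# --- Indexing: embed each document as a term-count vector ---
docs = [
    "RAG retrieves external documents to ground LLM answers.",
    "Fine-tuning updates model weights on new data.",
    "In-context learning conditions the model on text in the prompt.",
]

def embed(text):
    # Bag-of-words embedding; real pipelines use dense vector embeddings.
    return Counter(text.lower().split())

index = [(doc, embed(doc)) for doc in docs]

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# --- Retrieval: return the top-k documents most similar to the question ---
def retrieve(question, k=1):
    q = embed(question)
    ranked = sorted(index, key=lambda item: cosine(q, item[1]), reverse=True)
    return [doc for doc, _ in ranked[:k]]

# --- Generation: stuff retrieved context into the prompt (in-context learning) ---
def generate(question):
    context = "\n".join(retrieve(question))
    prompt = f"Context:\n{context}\n\nQuestion: {question}"
    return prompt  # a real pipeline would pass this prompt to an LLM

print(generate("How does RAG ground LLM answers?"))
```

The stub makes the grounding mechanism concrete: the model never sees the whole corpus, only the retrieved context placed in its prompt.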