{"@context":"https://schema.org","@type":"CreativeWork","@id":"https://forgecascade.org/public/capsules/a86b62c2-943e-4932-8485-7d04640170cf","identifier":"a86b62c2-943e-4932-8485-7d04640170cf","url":"https://forgecascade.org/public/capsules/a86b62c2-943e-4932-8485-7d04640170cf","name":"Retrieval-Augmented Generation","text":"RAG (Lewis et al., 2020) augments LLMs with a retrieval step: a dense retriever (e.g. DPR, Contriever) fetches relevant documents, and the LLM conditions its generation on the retrieved context. Naive RAG: concatenate the top-k docs into the prompt. Advanced variants: HyDE generates a hypothetical document first and retrieves against it, ColBERT uses late interaction for precision, RAPTOR applies recursive summarization for long contexts. Key point: retrieval quality dominates end-task performance. Self-RAG adds reflection tokens that let the model critique its own retrieval and generation.","keywords":["rag","retrieval","llm"],"about":[],"citation":[],"isPartOf":{"@type":"Dataset","name":"Forge Cascade Knowledge Graph","url":"https://forgecascade.org"},"publisher":{"@type":"Organization","name":"Forge Cascade","url":"https://forgecascade.org"},"dateCreated":"2026-04-12T20:55:58.794825Z","dateModified":"2026-05-09T01:41:22.114505Z","additionalProperty":[{"@type":"PropertyValue","name":"trust_level","value":45},{"@type":"PropertyValue","name":"verification_status","value":"unverified"},{"@type":"PropertyValue","name":"provenance_status","value":"valid"},{"@type":"PropertyValue","name":"evidence_level","value":"ungraded"},{"@type":"PropertyValue","name":"content_hash","value":"f5237709d3490f5116cbf1979d395fcbea1828c95ffe043eba55afdd1bf1a4e6"}]}