
Episode 139 - RAG is Expensive but is it really?
🧠 What RAG Actually Does
RAG enhances LLMs by retrieving relevant external information (e.g. from documents or databases) at query time, then feeding that into the prompt. This allows the LLM to answer with up-to-date or domain-specific knowledge without retraining.
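A minimal sketch of that flow, with a hypothetical `retrieve()` helper standing in for a real vector-store lookup:

```python
# Minimal sketch of the RAG pattern: retrieve context, then prompt the LLM with it.
# `retrieve` is a hypothetical placeholder for your vector-store lookup.

def retrieve(question: str, k: int = 3) -> list[str]:
    """Return the k most relevant text chunks for the question (placeholder)."""
    ...

def build_prompt(question: str) -> str:
    context = "\n\n".join(retrieve(question))
    return (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )

# The assembled prompt is then sent to whichever LLM client you use;
# the model answers from the retrieved context instead of retrained weights.
print(build_prompt("What does our refund policy say about digital goods?"))
```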
💸 Is RAG Expensive?
Yes, it can be — especially if:
* You repeatedly reprocess large documents for every query.
* You use high token counts to include raw content in prompts.
* You rely on real-time parsing of files (e.g. PDFs or Excel) without preprocessing.
This is where vector storage and embedding optimization come in.
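As a rough illustration (the figures below are assumptions, not measurements), compare stuffing a whole document into every prompt with sending only a few retrieved chunks:

```python
# Illustrative arithmetic with assumed figures: raw document in every prompt
# vs. a handful of retrieved chunks per query.

doc_tokens = 100_000          # e.g. a ~200-page PDF pasted raw into the prompt
chunk_tokens = 500            # typical chunk size after splitting
chunks_per_query = 5          # top-k chunks retrieved per question

naive = doc_tokens
rag = chunks_per_query * chunk_tokens

print(f"naive prompt: {naive:,} tokens, RAG prompt: {rag:,} tokens")
print(f"~{naive // rag}x fewer input tokens per query")   # ~40x with these numbers
```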
📦 Role of Vector Storage
Instead of reloading and reprocessing documents every time:
* Documents are chunked into smaller segments.
* Each chunk is converted into a vector embedding.
* These embeddings are stored in a vector database (e.g. FAISS, Pinecone, Weaviate).
* At query time, the user’s question is embedded and matched against stored vectors to retrieve relevant chunks.
This avoids reprocessing the original files and drastically reduces cost and latency.
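A small sketch of that store-once, query-many flow, assuming sentence-transformers for embeddings and FAISS as the vector store (any embedding model and vector database follow the same pattern):

```python
# Sketch: chunk once, embed once, store the vectors, then only embed the query.
import faiss
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

# 1. Chunk the documents (done once, offline).
chunks = [
    "Refunds for digital goods are issued within 14 days of purchase.",
    "Support is available Monday to Friday, 9am to 5pm CET.",
    "Annual plans can be cancelled at any time from the billing page.",
]

# 2. Embed each chunk and store the vectors in a FAISS index.
vectors = model.encode(chunks, normalize_embeddings=True)
index = faiss.IndexFlatIP(vectors.shape[1])   # inner product = cosine on normalized vectors
index.add(vectors)

# 3. At query time, embed only the question and retrieve the closest chunks.
query = model.encode(["Can I get my money back for an e-book?"], normalize_embeddings=True)
scores, ids = index.search(query, 2)
for score, i in zip(scores[0], ids[0]):
    print(f"{score:.2f}  {chunks[i]}")
```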
⚙️ Efficiency Strategies
Here’s how to make RAG more efficient:
| Strategy | Description | Benefit |
| --- | --- | --- |
| Vector Storage | Store precomputed embeddings | Avoids repeated parsing and embedding |
| ANN Indexing | Use Approximate Nearest Neighbor search | Fast retrieval from large datasets |
| Quantization | Compress embeddings (e.g. float8, int8) | Reduces memory footprint with minimal accuracy loss |
| Dimensionality Reduction | Use PCA or UMAP to reduce vector size | Speeds up search and lowers storage cost |
| Contextual Compression | Filter retrieved chunks before sending to the LLM | Reduces token usage and cost |
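As a rough illustration of the ANN Indexing and Quantization rows (with Contextual Compression approximated by a simple score filter), here is a FAISS IVF-PQ sketch on toy random vectors; the sizes and threshold are assumptions:

```python
# ANN + quantization in one index: vectors are clustered into inverted lists
# (approximate search) and compressed with product quantization (8-bit codes).
import faiss
import numpy as np

d, n = 128, 10_000                        # embedding dim, number of stored chunks (toy sizes)
xb = np.random.rand(n, d).astype("float32")
xq = np.random.rand(1, d).astype("float32")

nlist, m, nbits = 100, 16, 8              # 100 clusters; 16 sub-vectors, 8-bit codes each
quantizer = faiss.IndexFlatL2(d)          # coarse quantizer that assigns vectors to clusters
index = faiss.IndexIVFPQ(quantizer, d, nlist, m, nbits)

index.train(xb)                           # learn clusters and PQ codebooks (done once)
index.add(xb)

index.nprobe = 8                          # search only 8 of the 100 clusters: speed vs. recall knob
distances, ids = index.search(xq, 5)

# A crude form of contextual compression: keep only chunks whose distance is
# close to the best match before they ever reach the prompt.
keep = ids[0][distances[0] <= 1.2 * distances[0][0]]
print(keep)
```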
Get full access to Just Five Mins! at www.justfivemins.com/subscribe
