Guide

Enhancing RAG with Contextual Retrieval

Note: For more background information on Contextual Retrieval, including additional performance evaluations on various datasets, we recommend reading our accompanying blog post.

Retrieval Augmented Generation (RAG) enables Claude to leverage your internal knowledge bases, codebases, or any other corpus of documents when providing a response. Enterprises are increasingly building RAG applications to improve workflows in customer support, Q&A over internal company documents, financial & legal analysis, code generation, and much more.

In a separate guide, we walked through setting up a basic retrieval system, demonstrated how to evaluate its performance, and then outlined a few techniques to improve performance. In this guide, we present a technique for improving retrieval performance: Contextual Embeddings.

In traditional RAG, documents are typically split into smaller chunks for efficient retrieval. While this approach works well for many applications, it can lead to problems when individual chunks lack sufficient context. Contextual Embeddings solve this problem by adding relevant context to each chunk before embedding. This method improves the quality of each embedded chunk, allowing for more accurate retrieval and thus better overall performance. Averaged across all data sources we tested, Contextual Embeddings reduced the top-20-chunk retrieval failure rate by 35%.

The same chunk-specific context can also be used with BM25 search to further improve retrieval performance. We introduce this technique in the "Contextual BM25" section.

In this guide, we'll demonstrate how to build and optimize a Contextual Retrieval system using a dataset of 9 codebases as our knowledge base. We'll walk through:

  1. Setting up a basic retrieval pipeline to establish a baseline for performance.

  2. Contextual Embeddings: what it is, why it works, and how prompt caching makes it practical for production use cases.

  3. Implementing Contextual Embeddings and demonstrating performance improvements.

  4. Contextual BM25: improving performance with contextual BM25 hybrid search.

  5. Improving performance with reranking.

Evaluation Metrics & Dataset:

We use a pre-chunked dataset of 9 codebases, all of which have been chunked with a basic character-splitting mechanism. Our evaluation dataset contains 248 queries, each of which has a 'golden chunk.' We'll use a metric called Pass@k to evaluate performance: Pass@k checks whether the golden chunk was present in the first k chunks retrieved for each query. In this guide, Contextual Embeddings improve Pass@10 from ~87% to ~92%, and the full pipeline with reranking reaches ~95%.

You can find the code files and their chunks in data/codebase_chunks.json and the evaluation dataset in data/evaluation_set.jsonl.

Additional Notes:

Prompt caching is helpful in managing costs when using this retrieval method. This feature is currently available on Anthropic's first-party API, and is coming soon to our third-party partner environments in AWS Bedrock and GCP Vertex. We know that many of our customers leverage AWS Knowledge Bases and GCP Vertex AI APIs when building RAG solutions, and this method can be used on either platform with a bit of customization. Consider reaching out to Anthropic or your AWS/GCP account team for guidance on this!

To make it easier to use this method on Bedrock, the AWS team has provided us with code that you can use to implement a Lambda function that adds context to each document. If you deploy this Lambda function, you can select it as a custom chunking option when configuring a Bedrock Knowledge Base. You can find this code in contextual-rag-lambda-function. The main lambda function code is in lambda_function.py.

Table of Contents

  1. Setup

  2. Basic RAG

  3. Contextual Embeddings

  4. Contextual BM25

  5. Reranking

Setup

Before starting this guide, ensure you have:

Technical Skills:

  • Intermediate Python programming
  • Basic understanding of RAG (Retrieval Augmented Generation)
  • Familiarity with vector databases and embeddings
  • Basic command-line proficiency

System Requirements:

  • Python 3.8+
  • Docker installed and running (optional, for BM25 search)
  • 4GB+ available RAM
  • ~5-10 GB disk space for vector databases

API Access:

  • Anthropic API key (for Claude)
  • Voyage AI API key (for embeddings)
  • Cohere API key (for reranking)

Time & Cost:

  • Expected completion time: 30-45 minutes
  • API costs: ~$5-10 to run through the full dataset

Libraries

We'll need a few libraries, including:

  1. anthropic - to interact with Claude

  2. voyageai - to generate high quality embeddings

  3. cohere - for reranking

  4. elasticsearch - for performant BM25 search

  5. pandas, numpy, matplotlib, and scikit-learn for data manipulation and visualization

Environment Variables

Ensure the following environment variables are set:

  • VOYAGE_API_KEY
  • ANTHROPIC_API_KEY
  • COHERE_API_KEY
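
A quick way to check these before running anything else (a minimal sketch):

```python
import os

# Fail fast if any required API key is missing from the environment.
required = ["VOYAGE_API_KEY", "ANTHROPIC_API_KEY", "COHERE_API_KEY"]
missing = [key for key in required if not os.environ.get(key)]
if missing:
    raise EnvironmentError(f"Missing environment variables: {', '.join(missing)}")
```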

[6]

We define our model names up front to make it easier to change models as new models are released.

[ ]
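
A minimal sketch of that cell; the specific model identifiers below are placeholders, so substitute whichever Claude, Voyage, and Cohere models you want to use:

```python
# Placeholder model names - swap these as newer models are released.
ANTHROPIC_MODEL = "claude-3-5-haiku-latest"    # generates the situating context
VOYAGE_MODEL = "voyage-2"                      # embedding model
COHERE_RERANK_MODEL = "rerank-english-v3.0"    # reranking model (used later)
```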

We'll start by initializing the Anthropic client that we'll use for generating contextual descriptions.

[5]
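
For reference, initializing the client with the official SDK is a one-liner (sketch):

```python
import os
import anthropic

# The SDK also reads ANTHROPIC_API_KEY from the environment automatically;
# passing it explicitly keeps the dependency visible.
client = anthropic.Anthropic(api_key=os.environ["ANTHROPIC_API_KEY"])
```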

Initialize a Vector DB Class

We'll create a VectorDB class to handle embedding storage and similarity search. This class serves three key functions in our RAG pipeline:

  1. Embedding Generation: Converts text chunks into vector representations using Voyage AI's embedding model
  2. Storage & Caching: Saves embeddings to disk to avoid re-computing them (which saves time and API costs)
  3. Similarity Search: Retrieves the most relevant chunks for a given query using cosine similarity

For this guide, we're using a simple in-memory vector database with pickle serialization. This makes the code easy to understand and requires no external dependencies. The class automatically saves embeddings to disk after generation, so you only pay the embedding cost once.

For production use, consider hosted vector database solutions.

The VectorDB class below follows the same interface patterns you'd use with production solutions, making it easy to swap out later. Key features include batch processing (128 chunks at a time), progress tracking with tqdm, and query caching to speed up repeated searches during evaluation.

[ ]
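
A condensed sketch of such a class, under the assumptions described above (Voyage AI embeddings, pickle persistence, 128-chunk batches, cosine similarity). The dataset schema shown here, a list of files each carrying a 'chunks' list with the chunk text under 'content', is assumed for illustration and may differ from the exact fields in data/codebase_chunks.json:

```python
import os
import pickle

import numpy as np
import voyageai
from tqdm import tqdm


class VectorDB:
    """Minimal in-memory vector store: embed with Voyage AI, persist with pickle."""

    def __init__(self, name, db_path="data/vector_db.pkl"):
        self.name = name
        self.db_path = db_path
        self.client = voyageai.Client(api_key=os.environ["VOYAGE_API_KEY"])
        self.embeddings = []
        self.metadata = []
        self.query_cache = {}

    def load_data(self, dataset):
        if os.path.exists(self.db_path):
            self._load_db()
            return
        # Flatten documents into chunks, keeping metadata so results can be traced back.
        texts, metadata = [], []
        for doc in dataset:
            for chunk in doc["chunks"]:
                texts.append(chunk["content"])
                metadata.append(chunk)
        # Embed in batches of 128 to stay within API request limits.
        for i in tqdm(range(0, len(texts), 128), desc="Embedding chunks"):
            batch = texts[i:i + 128]
            result = self.client.embed(batch, model="voyage-2", input_type="document")
            self.embeddings.extend(result.embeddings)
        self.metadata = metadata
        self._save_db()

    def search(self, query, k=20):
        # Cache query embeddings so repeated evaluation runs are fast.
        if query not in self.query_cache:
            self.query_cache[query] = self.client.embed(
                [query], model="voyage-2", input_type="query"
            ).embeddings[0]
        query_emb = np.array(self.query_cache[query])
        emb_matrix = np.array(self.embeddings)
        # Cosine similarity = dot product divided by the product of norms.
        sims = emb_matrix @ query_emb / (
            np.linalg.norm(emb_matrix, axis=1) * np.linalg.norm(query_emb)
        )
        top_idx = np.argsort(-sims)[:k]
        return [{"metadata": self.metadata[i], "similarity": float(sims[i])} for i in top_idx]

    def _save_db(self):
        os.makedirs(os.path.dirname(self.db_path), exist_ok=True)
        with open(self.db_path, "wb") as f:
            pickle.dump({"embeddings": self.embeddings, "metadata": self.metadata}, f)

    def _load_db(self):
        with open(self.db_path, "rb") as f:
            data = pickle.load(f)
        self.embeddings, self.metadata = data["embeddings"], data["metadata"]
```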

Now we can use this class to load our dataset.
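
With a class like the sketch above, loading the pre-chunked codebases looks roughly like this:

```python
import json

with open("data/codebase_chunks.json", "r") as f:
    dataset = json.load(f)

base_db = VectorDB("base_db")
base_db.load_data(dataset)  # embeds the ~737 chunks once, then reuses the pickle on disk
```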

[12]
Processing chunks: 100%|██████████| 737/737 [00:00<00:00, 985400.72it/s]
Embedding chunks: 100%|██████████| 737/737 [00:42<00:00, 17.28it/s]
Vector database loaded and saved. Total chunks processed: 737

Basic RAG

To get started, we'll set up a basic RAG pipeline using a bare-bones approach, sometimes called 'Naive RAG.' A basic RAG pipeline includes the following three steps:

  1. Chunk documents into smaller pieces - our dataset comes pre-chunked with a basic character splitter

  2. Embed each chunk

  3. Use cosine similarity to retrieve the most relevant chunks for each query

[26]

Now let's establish our baseline performance by evaluating the basic RAG system. We'll test at k=5, 10, and 20 to see how many of the golden chunks appear in the top retrieved results. This gives us a benchmark to measure improvement against.
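
A sketch of a Pass@k evaluation loop. It assumes each line of data/evaluation_set.jsonl records the query plus identifiers for its golden chunk; the field names used here (golden_doc_id, golden_chunk_index) are illustrative, so match them to the actual schema:

```python
import json

from tqdm import tqdm


def evaluate_pass_at_k(db, eval_path="data/evaluation_set.jsonl", k=10):
    """Fraction of queries whose golden chunk appears in the top-k retrieved chunks."""
    with open(eval_path, "r") as f:
        queries = [json.loads(line) for line in f]

    hits = 0
    for item in tqdm(queries, desc="Evaluating retrieval"):
        results = db.search(item["query"], k=k)
        retrieved = {
            (r["metadata"]["doc_id"], r["metadata"]["chunk_index"]) for r in results
        }
        if (item["golden_doc_id"], item["golden_chunk_index"]) in retrieved:
            hits += 1
    return hits / len(queries)


for k in (5, 10, 20):
    print(f"Pass@{k}: {evaluate_pass_at_k(base_db, k=k):.2%}")
```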

[ ]
============================================================
Evaluation Results: Basic RAG
============================================================

Evaluating Pass@5...
Evaluating retrieval: 100%|██████████| 248/248 [00:03<00:00, 65.26it/s]
Evaluating Pass@10...
Evaluating retrieval: 100%|██████████| 248/248 [00:03<00:00, 64.87it/s]
Evaluating Pass@20...
Evaluating retrieval: 100%|██████████| 248/248 [00:03<00:00, 64.72it/s]
============================================================
Metric          Pass Rate       Score          
------------------------------------------------------------
Pass@5          80.92%          0.8092         
Pass@10         87.15%          0.8715         
Pass@20         90.06%          0.9006         
============================================================

These results show our baseline RAG performance. The system successfully retrieves the correct chunk 81% of the time in the top 5 results, improving to 87% in the top 10, and 90% in the top 20.

Contextual Embeddings

With basic RAG, individual chunks often lack sufficient context when embedded in isolation. Contextual Embeddings solve this by using Claude to generate a brief description that "situates" each chunk within its source document. We then embed the chunk together with this context, creating richer vector representations.

For each chunk in our codebase dataset, we pass both the chunk and its full source file to Claude. Claude generates a concise explanation of what the chunk contains and where it fits in the overall file. This context gets prepended to the chunk before embedding.

Cost and Latency Considerations

When does this cost occur? Contextualization happens once at ingestion time, not during every query. Unlike techniques such as HyDE (hypothetical document embeddings) that add latency to each search, contextual embeddings are a one-time cost when building your vector database. Prompt caching makes this cost practical: since we process all chunks from the same document sequentially, the full document only needs to be written to the cache once.

  1. First chunk: We write the full document to cache (pay a small premium)
  2. Subsequent chunks: Read the document from cache (90% discount on those tokens)
  3. Cache lasts 5 minutes, plenty of time to process all chunks in a document

Cost example: For 800-token chunks in 8k-token documents with 100 tokens of generated context, the total cost is $1.02 per million document tokens. You'll see the cache savings in the logs when you run the code below.

Note: Some embedding models have fixed input token limits. If you see worse performance with contextual embeddings, your contextualized chunks may be getting truncated—consider using an embedding model with a larger context window.


Let's see an example of how contextual embeddings work by generating context for a single chunk. We'll use Claude to create a situating context, and you'll also see the prompt caching metrics in action.
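
A sketch of that call: the full source file goes into a cached content block, followed by the chunk and an instruction to produce a short situating description. The prompt wording follows the pattern from the Contextual Retrieval blog post and can be adjusted for your data:

```python
DOCUMENT_CONTEXT_PROMPT = """
<document>
{doc_content}
</document>
"""

CHUNK_CONTEXT_PROMPT = """
Here is the chunk we want to situate within the whole document
<chunk>
{chunk_content}
</chunk>

Please give a short succinct context to situate this chunk within the overall document
for the purposes of improving search retrieval of the chunk. Answer only with the succinct
context and nothing else.
"""


def situate_context(doc, chunk):
    response = client.messages.create(
        model=ANTHROPIC_MODEL,
        max_tokens=200,
        temperature=0.0,
        messages=[
            {
                "role": "user",
                "content": [
                    {
                        "type": "text",
                        "text": DOCUMENT_CONTEXT_PROMPT.format(doc_content=doc),
                        # Cache the full document so subsequent chunks from the
                        # same file read it from cache at a 90% discount.
                        "cache_control": {"type": "ephemeral"},
                    },
                    {
                        "type": "text",
                        "text": CHUNK_CONTEXT_PROMPT.format(chunk_content=chunk),
                    },
                ],
            }
        ],
    )
    return response.content[0].text, response.usage
```

The returned usage object exposes cache_creation_input_tokens and cache_read_input_tokens, which is where the caching metrics shown below come from.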

[ ]
Situated context: This chunk contains the module documentation and initial struct definition for a differential fuzzing executor. It introduces the `DiffExecutor` struct that wraps two executors (primary and secondary) to run them sequentially with the same input, comparing their behavior for differential testing. The chunk establishes the core data structure and imports needed for the differential fuzzing implementation.
----------
Input tokens: 3412
Output tokens: 76
Cache creation input tokens: 0
Cache read input tokens: 0

Building the Contextual Vector Database

Now that we've seen how to generate contextual descriptions for individual chunks, let's scale this up to process our entire dataset. The ContextualVectorDB class below extends our basic VectorDB with automatic contextualization during ingestion.

Key features:

  • Parallel processing: Uses ThreadPoolExecutor to contextualize multiple chunks simultaneously (configurable thread count)
  • Automatic prompt caching: Processes chunks document-by-document to maximize cache hits
  • Token tracking: Monitors cache performance and calculates actual cost savings
  • Persistent storage: Saves both embeddings and contextualized metadata to disk

When you run this, pay attention to the token usage statistics: on this dataset, over 60% of all input tokens are read from cache, demonstrating the dramatic cost savings from prompt caching. On our 737-chunk dataset, this reduces what would be a ~$9 ingestion job down to ~$3.
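
A condensed sketch of the ingestion loop, built on the situate_context helper above. Documents are processed one at a time so that each file's cache entry is written once and reused, and only the chunks within a document are contextualized in parallel (the doc['content'] field holding the full file text is an assumption about the dataset schema):

```python
from concurrent.futures import ThreadPoolExecutor


def contextualize_dataset(dataset, num_threads=5):
    """Return chunks with Claude-generated context prepended, ready for embedding."""
    contextualized = []

    def process_chunk(doc, chunk):
        context, usage = situate_context(doc["content"], chunk["content"])
        return {
            # Prepend the situating context to the chunk before embedding.
            "text_to_embed": f"{context}\n\n{chunk['content']}",
            "metadata": {**chunk, "contextualized_content": context},
            "usage": usage,
        }

    for doc in dataset:
        # Keep all chunks of a document in one batch so the cached document is reused.
        with ThreadPoolExecutor(max_workers=num_threads) as executor:
            futures = [executor.submit(process_chunk, doc, c) for c in doc["chunks"]]
            contextualized.extend(f.result() for f in futures)

    return contextualized
```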

[21]
[22]
Processing 737 chunks with 5 threads
Processing chunks: 100%|██████████| 737/737 [05:32<00:00,  2.22it/s]
Contextual Vector database loaded and saved. Total chunks processed: 737
Total input tokens without caching: 1223730
Total output tokens: 58161
Total input tokens written to cache: 176079
Total input tokens read from cache: 2267069
Total input token savings from prompt caching: 61.83% of all input tokens used were read from cache.
Tokens read from cache come at a 90 percent discount!

These numbers reveal the power of prompt caching for contextual embeddings:

  • We processed 737 chunks across 9 codebase files
  • 61.83% of input tokens were read from cache (2.27M tokens at 90% discount)
  • Without caching, this would cost ~$9.20 in input tokens
  • With caching, the actual cost drops to ~$2.85 (69% savings)

The cache hit rate depends on how many chunks each document contains. Files with more chunks benefit more from caching since we write the full document to cache once, then read it repeatedly for each chunk in that file. This is why processing documents sequentially (rather than randomly shuffling chunks) is crucial for maximizing cache efficiency.

Now let's evaluate how much this contextualization improves our retrieval performance compared to the baseline.

[28]
============================================================
Evaluation Results: Contextual Embeddings
============================================================

Evaluating Pass@5...
Evaluating retrieval: 100%|██████████| 248/248 [00:03<00:00, 64.58it/s]
Evaluating Pass@10...
Evaluating retrieval: 100%|██████████| 248/248 [00:03<00:00, 64.37it/s]
Evaluating Pass@20...
Evaluating retrieval: 100%|██████████| 248/248 [00:03<00:00, 64.14it/s]
============================================================
Metric          Pass Rate       Score          
------------------------------------------------------------
Pass@5          88.12%          0.8812         
Pass@10         92.34%          0.9234         
Pass@20         94.29%          0.9429         
============================================================

By adding context to each chunk before embedding, we've reduced retrieval failures by roughly 40% across all k values. This means fewer irrelevant results in your top retrieved chunks, leading to better answers when you pass these chunks to Claude for final response generation.

The improvement is most pronounced at Pass@5, where precision matters most—suggesting that contextualized chunks aren't just retrieved more often, but rank higher when relevant.

Contextual BM25: Hybrid Search

Contextual embeddings alone improved our Pass@10 from 87% to 92%. We can push performance even higher by combining semantic search with keyword-based search using Contextual BM25—a hybrid approach that reduces retrieval failure rates further.

Why Hybrid Search?

Semantic search excels at understanding meaning and context, but can miss exact keyword matches. BM25 (a probabilistic keyword ranking algorithm) excels at finding specific terms, but lacks semantic understanding. By combining both, we get the best of both worlds:

  • Semantic search: Captures conceptual similarity and paraphrases
  • BM25: Catches exact terminology, function names, and specific phrases
  • Reciprocal Rank Fusion: Intelligently merges results from both sources

What is BM25?

BM25 is a probabilistic ranking function that improves upon TF-IDF by accounting for document length and term saturation. It's widely used in production search engines (including Elasticsearch) for its effectiveness at ranking keyword relevance. For technical details, see this blog post.

Instead of only searching the raw chunk content, we search both the chunk and the contextual description we generated earlier. This means BM25 can match keywords in either the original text or the explanatory context.

Setup: Running Elasticsearch

Before running the code below, you'll need Elasticsearch running locally. The easiest way is with Docker:

    docker run -d --name elasticsearch -p 9200:9200 -p 9300:9300 \
      -e "discovery.type=single-node" \
      -e "xpack.security.enabled=false" \
      elasticsearch:9.2.0

Troubleshooting:

  • Verify it's running: docker ps | grep elasticsearch
  • If port 9200 is in use: docker stop elasticsearch && docker rm elasticsearch
  • Check logs if issues occur: docker logs elasticsearch
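
Once Elasticsearch is up, indexing is straightforward: each chunk becomes a document with two text fields, the raw content and the Claude-generated context, so BM25 can match keywords in either. A sketch, where index and field names are illustrative and `contextualized` is the output of the ingestion sketch above:

```python
from elasticsearch import Elasticsearch
from elasticsearch.helpers import bulk

es = Elasticsearch("http://localhost:9200")
index_name = "contextual_bm25_index"

# Index both the raw chunk and its situating context for keyword matching.
es.indices.create(
    index=index_name,
    mappings={
        "properties": {
            "content": {"type": "text", "analyzer": "english"},
            "contextualized_content": {"type": "text", "analyzer": "english"},
            "doc_id": {"type": "keyword"},
            "chunk_index": {"type": "integer"},
        }
    },
)

actions = [
    {
        "_index": index_name,
        "_source": {
            "content": item["metadata"]["content"],
            "contextualized_content": item["metadata"]["contextualized_content"],
            "doc_id": item["metadata"]["doc_id"],
            "chunk_index": item["metadata"]["chunk_index"],
        },
    }
    for item in contextualized
]
bulk(es, actions)
es.indices.refresh(index=index_name)  # make documents searchable immediately
```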

How the Hybrid Search Works

The retrieve_advanced function below implements a three-step process:

  1. Retrieve candidates: Get top 150 results from both semantic search and BM25
  2. Score fusion: Combine rankings using weighted Reciprocal Rank Fusion
    • Default: 80% weight to semantic search, 20% to BM25
    • These weights are tunable—experiment to optimize for your use case
  3. Return top-k: Select the highest-scoring results after fusion

The weighting system lets you balance between semantic understanding and keyword precision based on your data characteristics.
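
A sketch of the fusion logic under those defaults. Each source contributes weight / (rrf_k + rank) per result, the standard Reciprocal Rank Fusion formula, with rrf_k = 60 as a commonly used constant; db is the contextual vector store and es the Elasticsearch client from above, and the metadata field names remain illustrative:

```python
def hybrid_search(query, db, es, index_name, k=10, num_candidates=150,
                  semantic_weight=0.8, bm25_weight=0.2, rrf_k=60):
    """Weighted Reciprocal Rank Fusion over semantic and BM25 candidate lists."""
    # Step 1: retrieve candidates from both sources.
    semantic_results = db.search(query, k=num_candidates)
    bm25_response = es.search(
        index=index_name,
        query={
            "multi_match": {
                "query": query,
                "fields": ["content", "contextualized_content"],
            }
        },
        size=num_candidates,
    )

    # Step 2: weighted Reciprocal Rank Fusion. Keys identify chunks across both lists.
    scores = {}
    for rank, r in enumerate(semantic_results):
        key = (r["metadata"]["doc_id"], r["metadata"]["chunk_index"])
        scores[key] = scores.get(key, 0) + semantic_weight / (rrf_k + rank + 1)
    for rank, hit in enumerate(bm25_response["hits"]["hits"]):
        key = (hit["_source"]["doc_id"], hit["_source"]["chunk_index"])
        scores[key] = scores.get(key, 0) + bm25_weight / (rrf_k + rank + 1)

    # Step 3: return the top-k chunks after fusion, highest fused score first.
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:k]
```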

[ ]
[39]
Created index: contextual_bm25_index
======================================================================
Evaluation Results: Contextual BM25 Hybrid Search
======================================================================

Evaluating Pass@5...
Pass@5: 100%|██████████| 248/248 [00:05<00:00, 41.79it/s]
Pass@5: 88.86%
Semantic: 54.6% | BM25: 45.4%

Evaluating Pass@10...
Pass@10: 100%|██████████| 248/248 [00:05<00:00, 42.20it/s]
Pass@10: 92.31%
Semantic: 57.6% | BM25: 42.4%

Evaluating Pass@20...
Pass@20: 100%|██████████| 248/248 [00:05<00:00, 42.15it/s]
Pass@20: 95.23%
Semantic: 60.8% | BM25: 39.2%

======================================================================
Metric       Pass Rate    Score        Semantic     BM25        
----------------------------------------------------------------------
Pass@5            88.86%     0.8886       54.6%       45.4%
Pass@10           92.31%     0.9231       57.6%       42.4%
Pass@20           95.23%     0.9523       60.8%       39.2%
======================================================================

Deleted Elasticsearch index: contextual_bm25_index

Reranking

We've achieved strong results with hybrid search (92.31% Pass@10), but there's one more technique that can squeeze out additional performance: reranking.

What is Reranking?

Reranking is a two-stage retrieval approach:

  1. Stage 1 - Broad Retrieval: Cast a wide net by retrieving more candidates than you need (e.g., retrieve 100 chunks)
  2. Stage 2 - Precise Selection: Use a specialized reranking model to score these candidates and select only the top-k most relevant ones

Why does this work? Initial retrieval methods (embeddings, BM25) are optimized for speed across millions of documents. Reranking models are slower but more accurate—they can afford to do deeper analysis on a smaller candidate set. This creates a speed/accuracy trade-off that works well in practice.

Our Reranking Approach

For this example, we'll use a simpler reranking pipeline that builds on contextual embeddings alone (not the full hybrid search). Here's the process:

  1. Over-retrieve: Get 10x more results than needed (e.g., retrieve 100 chunks when we need 10)
  2. Rerank with Cohere: Use Cohere's rerank-english-v3.0 model to score all candidates
  3. Select top-k: Return only the highest-scoring results

The reranking model has access to both the original chunk content and the contextual descriptions we generated, giving it rich information to make precise relevance judgments.
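
A sketch of this two-stage retrieval with the Cohere SDK; the reranked document text prepends each chunk's situating context to its content, mirroring the description above:

```python
import os

import cohere

co = cohere.Client(api_key=os.environ["COHERE_API_KEY"])


def retrieve_with_reranking(query, db, k=10):
    """Over-retrieve with contextual embeddings, then rerank with Cohere and keep top-k."""
    # Stage 1: cast a wide net (10x the final k).
    candidates = db.search(query, k=k * 10)
    documents = [
        f"{c['metadata']['contextualized_content']}\n\n{c['metadata']['content']}"
        for c in candidates
    ]

    # Stage 2: let the reranker score every candidate against the query.
    response = co.rerank(
        model="rerank-english-v3.0",
        query=query,
        documents=documents,
        top_n=k,
    )
    return [candidates[r.index] for r in response.results]
```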

Expected Performance

Adding reranking delivers a modest but meaningful improvement:

  • Without reranking: 92.34% Pass@10 (contextual embeddings alone)
  • With reranking: ~95% Pass@10 (an additional ~3 percentage points)

This might seem small, but in production systems, reducing failures from 7.66% to ~5% can significantly improve user experience. The trade-off is query latency—reranking adds ~100-200ms per query depending on candidate set size.

[ ]
[48]
============================================================
Evaluation Results: Contextual Embeddings + Reranking
============================================================

Evaluating Pass@5 with reranking...
Pass@5: 100%|██████████| 248/248 [01:40<00:00,  2.47it/s]
Pass@5: 92.15%
Average Score: 0.9215

Evaluating Pass@10 with reranking...
Pass@10: 100%|██████████| 248/248 [02:29<00:00,  1.66it/s]
Pass@10: 95.26%
Average Score: 0.9526

Evaluating Pass@20 with reranking...
Pass@20: 100%|██████████| 248/248 [03:03<00:00,  1.35it/s]
Pass@20: 97.45%
Average Score: 0.9745

============================================================
Metric          Pass Rate       Score          
------------------------------------------------------------
Pass@5          92.15%          0.9215         
Pass@10         95.26%          0.9526         
Pass@20         97.45%          0.9745         
============================================================

Reranking delivers our strongest results, nearly eliminating retrieval failures. Let's look at how each technique built upon the previous one to achieve this improvement.

Starting from our baseline RAG system at 87% Pass@10, we've climbed to over 95% by systematically applying advanced retrieval techniques. Each method addresses a different weakness: contextual embeddings solve the "isolated chunk" problem, hybrid search catches keyword-specific queries that embeddings miss, and reranking applies more sophisticated relevance scoring to refine the final selection.

Approach                      Pass@5      Pass@10     Pass@20
--------------------------------------------------------------
Baseline RAG                  80.92%      87.15%      90.06%
+ Contextual Embeddings       88.12%      92.34%      94.29%
+ Hybrid Search (BM25)        88.86%      92.31%      95.23%
+ Reranking                   92.15%      95.26%      97.45%

Key Takeaways:

  1. Contextual embeddings provided the largest single improvement (+4-7 percentage points), validating that adding document-level context to chunks significantly improves retrieval quality. This technique alone captures the majority of the total gain.

  2. Reranking achieves the highest absolute performance, reaching 95.26% Pass@10, meaning the correct chunk appears in the top 10 results for over 95% of queries. This represents a roughly 63% reduction in retrieval failures compared to baseline RAG (from a 12.85% failure rate down to 4.74%).

  3. Trade-offs matter: Each technique adds complexity and cost:

    • Contextual embeddings: One-time ingestion cost (~$3 for this dataset with prompt caching)
    • Hybrid search: Requires Elasticsearch infrastructure and maintenance
    • Reranking: Adds 100-200ms query latency and per-query API costs (~$0.002 per query)
  4. Choose your approach based on your requirements:

    • High-volume, cost-sensitive: Contextual embeddings alone (92% Pass@10, no per-query costs)
    • Maximum accuracy, latency-tolerant: Full reranking pipeline (95% Pass@10, best precision)
    • Balanced production system: Hybrid search for strong performance without per-query costs (~92% Pass@10)

For most production RAG systems, contextual embeddings provide the best performance-to-cost ratio, delivering 92% Pass@10 with only one-time ingestion costs. Hybrid search and reranking are available when you need that extra 2-3 percentage points of precision and can afford the additional infrastructure or query costs.

Next Steps and Key Takeaways

  1. We demonstrated how to use Contextual Embeddings to improve retrieval performance, then delivered additional improvements with Contextual BM25 and reranking.

  2. This example used codebases, but these methods also apply to other data types such as internal company knowledge bases, financial & legal content, educational content, and much more.

  3. If you are an AWS user, you can get started with the Lambda function in contextual-rag-lambda-function, and if you're a GCP user you can spin up your own Cloud Run instance and follow a similar pattern!