Retrieval · Intermediate

Query Expansion: Retrieve More Relevant Results

November 14, 2025
10 min read
Ailog Research Team

Improve recall by 30-50%: expand user queries with synonyms, sub-queries, and LLM-generated variations.

Why Query Expansion?

Problem: the user's query is too short, or uses different vocabulary than the documents.

Example:

  • User: "ML models"
  • Relevant docs use: "machine learning algorithms", "neural networks", "deep learning"

Query expansion rewrites queries to match more documents.
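
Concretely, the goal is to turn one query into a small set of variations that collectively cover the corpus vocabulary. Illustrative output only; `expand_query` stands in for any of the methods below:

```python
expand_query("ML models")
# ["ML models",
#  "machine learning algorithms",
#  "neural network models",
#  "deep learning models"]
```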

Method 1: Synonym Expansion

```python
import nltk
from nltk.corpus import wordnet

nltk.download('wordnet', quiet=True)  # WordNet data is needed once

def expand_with_synonyms(query):
    words = query.split()
    expanded_queries = [query]  # Include the original query
    for word in words:
        synonyms = []
        for syn in wordnet.synsets(word):
            for lemma in syn.lemmas():
                if lemma.name() != word:
                    synonyms.append(lemma.name().replace('_', ' '))
        # Add a variation with the first synonym swapped in
        if synonyms:
            expanded = query.replace(word, synonyms[0])
            expanded_queries.append(expanded)
    return list(set(expanded_queries))

# Example (actual synonyms depend on WordNet; each variant swaps one word)
queries = expand_with_synonyms("fast car")
# e.g. ["fast car", "quick car", "fast automobile"]
```

Method 2: LLM Query Rewriting

```python
import json
from openai import OpenAI

client = OpenAI()

def expand_with_llm(query):
    response = client.chat.completions.create(
        model="gpt-4-turbo",
        messages=[
            {"role": "system",
             "content": "Generate 3 alternative phrasings of the user's query. "
                        'Output as JSON: {"alternatives": [...]}'},
            {"role": "user", "content": query},
        ],
        response_format={"type": "json_object"},
    )
    variations = json.loads(response.choices[0].message.content)
    return [query] + variations["alternatives"]

# Example
queries = expand_with_llm("How to reduce costs?")
# [
#   "How to reduce costs?",
#   "What are cost reduction strategies?",
#   "Ways to lower expenses",
#   "Best practices for cutting costs"
# ]
```

Method 3: Multi-Query Retrieval

Search with all variations, merge results:

```python
def multi_query_retrieval(query, vector_db):
    # Generate variations
    queries = expand_with_llm(query)

    # Retrieve for each variation
    all_results = []
    for q in queries:
        q_emb = embed(q)
        results = vector_db.search(q_emb, limit=20)
        all_results.extend(results)

    # Deduplicate: sum scores for documents retrieved by several queries
    doc_scores = {}
    for doc in all_results:
        if doc.id not in doc_scores:
            doc_scores[doc.id] = 0
        doc_scores[doc.id] += doc.score

    # Sort by combined score
    ranked = sorted(doc_scores.items(), key=lambda x: x[1], reverse=True)
    return ranked[:10]
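```

Summing raw scores assumes they are comparable across queries, which is not always true. A common alternative is reciprocal rank fusion (RRF), which uses only each document's rank per query; a minimal sketch (`k=60` is the conventional default constant):

```python
def rrf_merge(result_lists, k=60, top_n=10):
    # result_lists: one ranked list of docs per query variation
    doc_scores = {}
    for results in result_lists:
        for rank, doc in enumerate(results):
            doc_scores[doc.id] = doc_scores.get(doc.id, 0) + 1 / (k + rank + 1)
    ranked = sorted(doc_scores.items(), key=lambda x: x[1], reverse=True)
    return ranked[:top_n]
```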

Method 4: HyDE (Hypothetical Document Embeddings)

Generate a hypothetical answer, then search for documents similar to it:

```python
def hyde_retrieval(query):
    # Generate a hypothetical answer (`client` is the OpenAI client from above)
    hypothetical_doc = client.chat.completions.create(
        model="gpt-4-turbo",
        messages=[
            {"role": "system",
             "content": "Write a detailed answer to this question "
                        "as if it were a Wikipedia article."},
            {"role": "user", "content": query},
        ],
    ).choices[0].message.content

    # Embed the hypothetical document (not the query!)
    doc_embedding = embed(hypothetical_doc)

    # Search for real documents similar to the hypothetical one
    results = vector_db.search(doc_embedding, limit=10)
    return results
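```

A single generation can be noisy. The original HyDE paper smooths this by generating several hypothetical documents and averaging their embeddings together with the query embedding. A sketch of that variant, assuming a hypothetical `generate_hypothetical()` helper that wraps the LLM call above:

```python
import numpy as np

def hyde_retrieval_averaged(query, n=3):
    vectors = [embed(query)]  # include the query itself in the average
    for _ in range(n):
        doc = generate_hypothetical(query)  # hypothetical helper: the LLM call above
        vectors.append(embed(doc))
    mean_vector = np.mean(vectors, axis=0)
    return vector_db.search(mean_vector, limit=10)
```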

Method 5: Step-Back Prompting

Ask a broader question first:

```python
def step_back_expansion(query):
    # Generate a broader question
    step_back = client.chat.completions.create(
        model="gpt-4-turbo",
        messages=[
            {"role": "system",
             "content": "Given a specific question, generate a broader, "
                        "more general question."},
            {"role": "user", "content": query},
        ],
    ).choices[0].message.content
    return [query, step_back]

# Example
queries = step_back_expansion("What is the capital of France?")
# [
#   "What is the capital of France?",
#   "What are the capitals of European countries?"
# ]
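```

The step-back query is used alongside the original, not instead of it; results from both can be merged exactly as in Method 3 (assuming the same `embed` and `vector_db` as above):

```python
all_results = []
for q in step_back_expansion("What is the capital of France?"):
    all_results.extend(vector_db.search(embed(q), limit=10))
# Deduplicate and rank as in multi_query_retrieval
```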

Method 6: Sub-Query Decomposition

Break complex queries into parts:

```python
def decompose_query(query):
    response = client.chat.completions.create(
        model="gpt-4-turbo",
        messages=[
            {"role": "system",
             "content": "Break this complex question into 2-3 simpler "
                        'sub-questions. Return JSON: {"sub_questions": [...]}'},
            {"role": "user", "content": query},
        ],
        response_format={"type": "json_object"},
    )
    sub_queries = json.loads(response.choices[0].message.content)["sub_questions"]

    # Retrieve for each sub-query
    all_results = []
    for sq in sub_queries:
        results = vector_db.search(embed(sq), limit=5)
        all_results.extend(results)
    return deduplicate(all_results)  # deduplicate(): drops repeated doc ids

# Example
sub_queries = decompose_query("How does photosynthesis affect climate change?")
# [
#   "What is photosynthesis?",
#   "How do plants remove CO2?",
#   "What is the relationship between CO2 and climate?"
# ]
```

LangChain Implementation

```python
from langchain.retrievers.multi_query import MultiQueryRetriever
from langchain.llms import OpenAI

retriever = MultiQueryRetriever.from_llm(
    retriever=vector_store.as_retriever(),
    llm=OpenAI(temperature=0),
)

# Automatically expands the query before retrieving
docs = retriever.get_relevant_documents("How to train neural networks?")
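```

To see which query variations the retriever actually generates, you can enable its logger, a pattern shown in the LangChain docs:

```python
import logging

logging.basicConfig()
logging.getLogger("langchain.retrievers.multi_query").setLevel(logging.INFO)
```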

Evaluation

```python
import numpy as np

# Measure recall improvement from expansion
def evaluate_expansion(queries, ground_truth_docs):
    recall_baseline = []
    recall_expanded = []

    for query, relevant_docs in zip(queries, ground_truth_docs):
        # Baseline: single-query retrieval
        base_results = vector_db.search(embed(query), limit=10)
        base_ids = {doc.id for doc in base_results}
        base_recall = len(base_ids & set(relevant_docs)) / len(relevant_docs)
        recall_baseline.append(base_recall)

        # Expanded: multi_query_retrieval returns (doc_id, score) pairs
        expanded_results = multi_query_retrieval(query, vector_db)
        expanded_ids = {doc_id for doc_id, _ in expanded_results}
        exp_recall = len(expanded_ids & set(relevant_docs)) / len(relevant_docs)
        recall_expanded.append(exp_recall)

    print(f"Baseline recall: {np.mean(recall_baseline):.2f}")
    print(f"Expanded recall: {np.mean(recall_expanded):.2f}")
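```

A hypothetical invocation, assuming each query is paired with the doc ids known to be relevant (the ids below are placeholders for your own labeled evaluation set):

```python
eval_queries = ["How to reduce costs?", "How does photosynthesis affect climate change?"]
eval_ground_truth = [["doc_12", "doc_87"], ["doc_3", "doc_41", "doc_99"]]
evaluate_expansion(eval_queries, eval_ground_truth)
```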

Query expansion is low-cost and high-impact: a few extra LLM and search calls can boost recall by 30-50%.

Tags

retrieval, query expansion, recall, search
