Gemini Pro can handle an astonishing 2M token context, compared to the paltry 15k we were amazed by when GPT-3.5 landed. Does that mean we no longer need to care about retrieval or RAG systems? Based on Needle-in-a-Haystack benchmarks, the answer is that while the need is diminishing, especially for Gemini models, advanced retrieval techniques still significantly improve performance for many LLMs. Benchmarking results show that long context models perform well at surfacing specific insights. However, they struggle when a citation is required. That makes retrieval techniques especially necessary for use cases where citation quality is essential (think law, journalism, and medical applications, among others). These tend to be higher-value applications where lacking a citation makes the initial insight much less useful. Moreover, while the cost of long context models will likely decrease, augmenting shorter context window models with retrievers can be a cost-effective and lower-latency path to serve the same use cases. It's safe to say that RAG and retrieval will stick around a while longer, but you probably won't get much bang for your buck implementing a naive RAG system.
Advanced RAG covers a variety of techniques but broadly they fall under the umbrella of pre-retrieval query rewriting and post-retrieval re-ranking. Let’s dive in and learn something about each of them.
Q: “What’s the meaning of life?”
A: “42”
Query and answer asymmetry is a big issue in RAG systems. A typical approach in simpler RAG systems is to compare the cosine similarity of the query and document embeddings. This works when the query is almost restated in the answer, as in "What's Meghan's favorite animal?" and "Meghan's favorite animal is the giraffe.", but we're rarely that lucky.
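To make that concrete, here is a minimal sketch of the naive approach: rank documents by the cosine similarity between the query embedding and each document embedding. The `embed` function is a placeholder for whatever embedding model you use, not a specific vendor API.

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    """Placeholder: call your embedding model of choice here."""
    raise NotImplementedError

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def naive_retrieve(query: str, documents: list[str], top_k: int = 3) -> list[str]:
    # Rank every document by its similarity to the raw query embedding.
    query_vec = embed(query)
    scored = [(cosine_similarity(query_vec, embed(doc)), doc) for doc in documents]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [doc for _, doc in scored[:top_k]]
```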
Here are a few techniques that can overcome this:
The nomenclature "Rewrite-Retrieve-Read" originated from a 2023 paper by the Microsoft Azure team (although, given how intuitive the technique is, it had been in use for a while). In this study, an LLM rewrites a user query into a search-engine-optimized query before fetching relevant context to answer the question.
The key example was how the question "What career do Nicholas Ray and Elia Kazan have in common?" should be broken down into two queries, "Nicholas Ray career" and "Elia Kazan career". This allows for better results since it's unlikely that a single document would contain the answer to both questions. By splitting the query in two, the retriever can more effectively surface relevant documents.
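Here is a minimal sketch of the Rewrite-Retrieve-Read flow, assuming a placeholder `llm` function for your chat model and reusing the `naive_retrieve` helper from the earlier sketch; the prompt wording is illustrative, not the paper's.

```python
def llm(prompt: str) -> str:
    """Placeholder: call your chat model of choice here."""
    raise NotImplementedError

def rewrite_retrieve_read(question: str, documents: list[str]) -> str:
    # 1. Rewrite: turn the user question into one or more search-friendly queries.
    rewrite_prompt = (
        "Rewrite the following question into one or more short search queries, "
        f"one per line:\n{question}"
    )
    queries = [q.strip() for q in llm(rewrite_prompt).splitlines() if q.strip()]

    # 2. Retrieve: gather context for every rewritten query.
    context: list[str] = []
    for query in queries:
        context.extend(naive_retrieve(query, documents, top_k=2))

    # 3. Read: answer the original question grounded in the retrieved context.
    joined = "\n".join(context)
    read_prompt = (
        f"Answer the question using only the context below.\n\n"
        f"Context:\n{joined}\n\nQuestion: {question}"
    )
    return llm(read_prompt)
```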
Rewriting can also help overcome issues that arise from "distracted prompting", or instances where the user has mixed concepts in their prompt and taking an embedding directly would result in nonsense. For example, "Great, thanks for telling me who the Prime Minister of the UK is. Now tell me who the President of France is" can be rewritten as "current French president". This can help make your application more robust to a wider range of users, as some will think a lot about how to optimally word their prompts, while others might have different norms.
In query expansion with LLMs, the initial query can be rewritten into multiple reworded questions or decomposed into subquestions. Ideally, by expanding the query into several variants, the chances of lexical overlap between the initial query and the correct document in your storage component increase.
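A minimal sketch of what that can look like in practice, again assuming the placeholder `llm` and `naive_retrieve` helpers from the earlier sketches:

```python
def expand_query(question: str, n: int = 4) -> list[str]:
    prompt = (
        f"Generate {n} alternative phrasings or subquestions for the query below, "
        f"one per line, no numbering:\n{question}"
    )
    variants = [line.strip() for line in llm(prompt).splitlines() if line.strip()]
    return [question] + variants  # always keep the original query in the mix

def expanded_retrieve(question: str, documents: list[str], top_k: int = 3) -> list[str]:
    pooled: list[str] = []
    for query in expand_query(question):
        for doc in naive_retrieve(query, documents, top_k=top_k):
            if doc not in pooled:  # simple de-duplication of the pooled hits
                pooled.append(doc)
    return pooled
```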
Query expansion is an idea that predates the widespread use of LLMs. Pseudo Relevance Feedback (PRF) is a technique that inspired some LLM researchers. In PRF, the top-ranked documents from an initial search are used to identify and weight new query terms. With LLMs, we rely on the creative and generative capabilities of the model to find new query terms. This is useful because LLMs aren't restricted to the initial set of documents and can generate expansion terms not covered by traditional methods.
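For contrast, here is a rough, simplified sketch of the PRF idea: terms that appear often in the top-ranked documents but not in the query are appended to it for a second retrieval pass (a toy term-frequency version, not any specific paper's weighting scheme).

```python
from collections import Counter

def prf_expand(query: str, documents: list[str], feedback_docs: int = 3, new_terms: int = 5) -> str:
    # Count terms in the top-ranked documents that are not already in the query.
    top_docs = naive_retrieve(query, documents, top_k=feedback_docs)
    term_counts = Counter(
        term.lower()
        for doc in top_docs
        for term in doc.split()
        if term.lower() not in query.lower()
    )
    expansion = [term for term, _ in term_counts.most_common(new_terms)]
    return query + " " + " ".join(expansion)
```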
Corpus-Steered Query Expansion (CSQE) is a technique that marries the traditional PRF approach with LLMs' generative capabilities. The initially retrieved documents are fed back to the LLM, which generates new query terms for the search. This technique can be especially performant for queries on which the LLM lacks subject knowledge.
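A minimal sketch of the CSQE idea, with illustrative prompt wording and the same placeholder helpers as before:

```python
def csqe_expand(query: str, documents: list[str], feedback_docs: int = 3) -> str:
    # Feed the initially retrieved passages back to the LLM so the expansion
    # terms are grounded in the corpus rather than in the model's own knowledge.
    top_docs = naive_retrieve(query, documents, top_k=feedback_docs)
    prompt = (
        "Given the query and the retrieved passages below, list additional search "
        "terms that would help find more relevant passages, separated by spaces.\n"
        f"Query: {query}\nPassages:\n" + "\n".join(top_docs)
    )
    return f"{query} {llm(prompt)}"
```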
There are limitations to both LLM-based query expansion and its predecessors like PRF. The most glaring is the assumption that the LLM-generated terms are relevant, or that the top-ranked results are relevant. God forbid I'm trying to find information about the Australian journalist Harry Potter instead of the famous boy wizard. Both techniques would pull my query further away from the less popular query subject toward the more popular one, making edge-case queries less effective.
Another technique to reduce the asymmetry between questions and documents is to index documents with a set of LLM-generated hypothetical questions. For a given document, the LLM can generate questions that could be answered by the document. Then, during the retrieval step, the user's query embedding is compared to the hypothetical question embeddings rather than the document embeddings.
This means that we don't need to embed the original document chunk; instead, we can assign the chunk a document ID and store that as metadata on the hypothetical question document. Generating a document ID means there is much less overhead when mapping many questions to one document.
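A minimal sketch of this indexing scheme under the same placeholder assumptions: each chunk is indexed by the embeddings of its generated questions, with the chunk's document ID kept as metadata so retrieval maps back to the source.

```python
from dataclasses import dataclass

@dataclass
class QuestionRecord:
    embedding: np.ndarray
    doc_id: str  # metadata pointing back to the source chunk

def index_chunk(doc_id: str, chunk: str, questions_per_chunk: int = 3) -> list[QuestionRecord]:
    # Embed LLM-generated questions instead of the chunk itself.
    prompt = f"Write {questions_per_chunk} questions this passage answers, one per line:\n{chunk}"
    questions = [q.strip() for q in llm(prompt).splitlines() if q.strip()]
    return [QuestionRecord(embedding=embed(q), doc_id=doc_id) for q in questions]

def retrieve_doc_ids(query: str, index: list[QuestionRecord], top_k: int = 3) -> list[str]:
    query_vec = embed(query)
    ranked = sorted(index, key=lambda r: cosine_similarity(query_vec, r.embedding), reverse=True)
    # De-duplicate while preserving rank order, since many questions map to one chunk.
    doc_ids: list[str] = []
    for record in ranked:
        if record.doc_id not in doc_ids:
            doc_ids.append(record.doc_id)
        if len(doc_ids) == top_k:
            break
    return doc_ids
```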
The clear downside to this approach is that your system will be limited by the creativity and volume of the questions you store.
HyDE is the opposite of hypothetical question indexing. Instead of generating hypothetical questions, the LLM is asked to generate a hypothetical document that could answer the question, and the embedding of that generated document is used to search against the real documents. The real document is then used to generate the response. This technique showed strong improvements over other contemporary retriever methods when it was first introduced in 2022.
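A minimal sketch of HyDE with the same placeholder helpers; the prompt wording is illustrative:

```python
def hyde_retrieve(question: str, documents: list[str], top_k: int = 3) -> list[str]:
    # 1. Generate a hypothetical document that would answer the question.
    hypothetical_doc = llm(f"Write a short passage that answers this question:\n{question}")
    # 2. Embed the hypothetical document instead of the raw question.
    hyde_vec = embed(hypothetical_doc)
    # 3. Rank the real documents against the hypothetical document's embedding.
    scored = [(cosine_similarity(hyde_vec, embed(doc)), doc) for doc in documents]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [doc for _, doc in scored[:top_k]]
```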
We use this idea at Dune for our natural-language-to-SQL product. By rewriting user prompts as a possible caption or title for a chart that would answer the question, we're better able to retrieve SQL queries that can serve as context for the LLM to write a new query.
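As a rough illustration of the general idea (not Dune's actual pipeline), a caption-rewriting retrieval step might look something like this, where `sql_index` is an assumed mapping from existing chart titles to the SQL behind them:

```python
def retrieve_sql_examples(user_prompt: str, sql_index: dict[str, str], top_k: int = 3) -> list[str]:
    # `sql_index` maps an existing chart title/caption to the SQL query behind it
    # (an assumed structure, for illustration only).
    title = llm(f"Write a concise chart title for a chart that would answer:\n{user_prompt}")
    title_vec = embed(title)
    scored = [(cosine_similarity(title_vec, embed(caption)), sql) for caption, sql in sql_index.items()]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [sql for _, sql in scored[:top_k]]
```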