Overcome Failing Document Ingestion & RAG Strategies with Agentic Knowledge Distillation


Introduction

Many generative AI use cases still revolve around Retrieval Augmented Generation (RAG), yet consistently fall short of user expectations. Despite the growing body of research on RAG improvements and even adding agents into the process, many solutions still fail to return exhaustive results, miss information that is critical but infrequently mentioned in the documents, require multiple search iterations, and generally struggle to reconcile key themes across multiple documents. To top it all off, many implementations still rely on cramming as much “relevant” information as possible into the model’s context window alongside detailed system and user prompts. Reconciling all this information often exceeds the model’s cognitive capacity and compromises response quality and consistency.

This is where our Agentic Knowledge Distillation + Pyramid Search Approach comes into play. Instead of chasing the best chunking strategy, retrieval algorithm, or inference-time reasoning method, my team, Jim Brown, Mason Sawtell, Sandi Besen, and I, take an agentic approach to document ingestion.

We specifically target high-value questions that are often difficult to evaluate because they have multiple correct answers or solution paths. These cases are where traditional RAG solutions struggle most, and existing RAG evaluation datasets are largely insufficient for testing this problem space. For our research implementation, we downloaded annual and quarterly reports from the last year for the 30 companies in the Dow Jones Industrial Average. These documents can be found through the SEC EDGAR website. The data on EDGAR is accessible and can be downloaded free of charge or queried through EDGAR public searches. See the SEC privacy policy for further details; information on the SEC website is “considered public information and may be copied or further distributed by users of the web site without the SEC’s permission”. We selected this dataset for two key reasons: first, it falls outside the knowledge cutoff for the models evaluated, ensuring that the models cannot answer questions based on their knowledge from pre-training; second, it is a close approximation of real-world business problems while allowing us to discuss and share our findings using publicly available data.

While typical RAG solutions excel at factual retrieval where the answer is easily identified within the document dataset (e.g., “When did Apple’s annual shareholder’s meeting occur?”), they struggle with nuanced questions that require a deeper understanding of concepts across documents (e.g., “Which of the DOW companies has the most promising AI strategy?”). Our Agentic Knowledge Distillation + Pyramid Search Approach addresses these types of questions with much greater success compared to the other standard approaches we tested and overcomes limitations associated with using knowledge graphs in RAG systems.

In this article, we’ll cover how our knowledge distillation process works, key benefits of this approach, examples, and an open discussion on the best way to evaluate these types of systems where, in many cases, there is no singular “right” answer.

Building the pyramid: How Agentic Knowledge Distillation works

Image by author and team depicting the pyramid structure for document ingestion. Robots are meant to represent agents constructing the pyramid.

Overview

Our knowledge distillation process creates a multi-tiered pyramid of information from the raw source documents. Our approach is inspired by the pyramids used in deep learning computer vision-based tasks, which allow a model to analyze an image at multiple scales. We take the contents of the raw document, convert it to markdown, and distill the content into a list of atomic insights, related concepts, document abstracts, and general recollections/memories. During retrieval, it’s possible to access any or all levels of the pyramid to respond to the user request.
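To make the pyramid levels concrete, below is a minimal sketch of how a record at each level might be represented in code. The field names and the `PyramidLevel` labels are illustrative assumptions for this article, not our production schema.

```python
from dataclasses import dataclass, field
from enum import Enum


class PyramidLevel(str, Enum):
    PAGE = "page"                   # raw markdown for a single document page
    INSIGHT = "insight"             # atomic subject-verb-object sentence distilled from a page
    CONCEPT = "concept"             # higher-level theme connecting related insights
    ABSTRACT = "abstract"           # one information-dense summary per document
    RECOLLECTION = "recollection"   # cross-document memory learned over time


@dataclass
class PyramidRecord:
    level: PyramidLevel
    text: str                                   # natural-language content for this level
    embedding: list[float]                      # vector embedding of `text`
    source_document: str | None = None          # e.g. "ibm-10q-q3-2024.pdf"
    source_pages: list[int] = field(default_factory=list)  # provenance for the audit trail
```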

How to distill documents and build the pyramid:

  1. Convert documents to Markdown: Convert all raw source documents to Markdown. We’ve found models process markdown best for this task compared to other formats like JSON, and it is more token efficient. We used Azure Document Intelligence to generate the markdown for each page of the document, but there are many other open-source libraries like MarkItDown that do the same thing. Our dataset included 331 documents and 16,601 pages.
  2. Extract atomic insights from each page: We process documents using a two-page sliding window, which allows each page to be analyzed twice. This gives the agent the opportunity to correct any potential mistakes from the initial pass over a page. We instruct the model to create a numbered list of insights that grows as it processes the pages in the document. Because the agent sees each page twice, it can overwrite insights from the previous page if they were incorrect. We instruct the model to extract insights in simple sentences following the subject-verb-object (SVO) format and to write sentences as if English is the second language of the user. This significantly improves performance by encouraging clarity and precision. Rolling over each page multiple times and using the SVO format also solves the disambiguation problem, which is a huge challenge for knowledge graphs. The insight generation step is also particularly helpful for extracting information from tables since the model captures the facts from the table in clear, succinct sentences. A minimal sketch of this sliding-window loop is shown below this list. Our dataset produced 216,931 total insights, about 13 insights per page and 655 insights per document.
  3. Distilling concepts from insights: From the detailed list of insights, we identify higher-level concepts that connect related information about the document. This step significantly reduces noise and redundant information in the document while preserving essential information and themes. Our dataset produced 14,824 total concepts, about 1 concept per page and 45 concepts per document.
  4. Creating abstracts from concepts: Given the insights and concepts in the document, the LLM writes an abstract that appears both better than any abstract a human would write and more information-dense than any abstract present in the original document. The LLM-generated abstract provides incredibly comprehensive knowledge about the document with a small token footprint that carries a significant amount of information. We produce one abstract per document, 331 total.
  5. Storing recollections/memories across documents: At the top of the pyramid, we store critical information that is useful across all tasks. This can be information that the user shares about the task or information the agent learns about the dataset over time by researching and responding to tasks. For example, we can store the current 30 companies in the DOW as a recollection since this list is different from the 30 companies in the DOW at the time of the model’s knowledge cutoff. As we conduct more and more research tasks, we can continuously improve our recollections and maintain an audit trail of which documents these recollections originated from. For example, we can keep track of AI strategies across companies, where companies are making major investments, etc. These high-level connections are super important since they reveal relationships and information that are not apparent in a single page or document.
Sample subset of insights extracted from IBM 10Q, Q3 2024 (page 4)
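As referenced in step 2 above, here is a minimal sketch of the two-page sliding-window extraction loop, assuming the pages have already been converted to markdown. The prompt wording and the `call_llm` helper are placeholders for whichever model client you use, not our exact implementation.

```python
INSIGHT_PROMPT = """You are extracting atomic insights from a financial document.
Write each insight as a short subject-verb-object sentence, phrased as if English
were the reader's second language. Maintain a single numbered list that grows as
you read. You may rewrite earlier insights if the new page shows they were wrong.

Existing insights:
{existing}

Pages (the previous page is shown again so you can correct it, then the new page):
{pages}

Return the full, updated numbered list of insights."""


def call_llm(prompt: str) -> str:
    """Placeholder: call your preferred chat-completion client here."""
    raise NotImplementedError


def extract_insights(pages_markdown: list[str]) -> str:
    """Slide a two-page window over the document so every page is analyzed twice."""
    insights = "(none yet)"
    for i in range(len(pages_markdown)):
        window = pages_markdown[max(0, i - 1): i + 1]  # previous page + current page
        prompt = INSIGHT_PROMPT.format(
            existing=insights,
            pages="\n\n---\n\n".join(window),
        )
        insights = call_llm(prompt)  # the model returns the revised, growing list
    return insights
```

Because each page appears once as the “new” page and once as the “previous” page, the model gets a second pass at every page, which is what allows it to correct earlier insights.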

We store the text and embeddings for each layer of the pyramid (pages and up) in Azure PostgreSQL. We originally used Azure AI Search but switched to PostgreSQL for cost reasons. This required us to write our own hybrid search function since PostgreSQL doesn’t yet natively support this feature. This implementation would work with any vector database or vector index of your choosing. The key requirement is to store and efficiently retrieve both text and vector embeddings at any level of the pyramid.
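For readers who want to replicate this on PostgreSQL, here is a rough sketch of what such a hybrid search function could look like, blending pgvector similarity with PostgreSQL full-text rank via reciprocal rank fusion. The table and column names (`pyramid`, `text`, `embedding`, `level`) are assumptions for illustration; this is not our exact implementation.

```python
# Assumes a `pyramid` table with `id`, `text`, `embedding vector(1536)`, and `level`
# columns, and the pgvector extension installed. All names are illustrative only.
import psycopg

HYBRID_SQL = """
WITH vec AS (
    SELECT id, ROW_NUMBER() OVER (ORDER BY embedding <=> %(query_vec)s::vector) AS rnk
    FROM pyramid
    WHERE level = %(level)s
    ORDER BY rnk
    LIMIT 50
),
kw AS (
    SELECT id, ROW_NUMBER() OVER (
        ORDER BY ts_rank_cd(to_tsvector('english', text),
                            websearch_to_tsquery('english', %(query_text)s)) DESC) AS rnk
    FROM pyramid
    WHERE level = %(level)s
      AND to_tsvector('english', text) @@ websearch_to_tsquery('english', %(query_text)s)
    ORDER BY rnk
    LIMIT 50
)
SELECT p.id, p.text,
       COALESCE(1.0 / (60 + vec.rnk), 0) + COALESCE(1.0 / (60 + kw.rnk), 0) AS rrf_score
FROM pyramid p
LEFT JOIN vec ON vec.id = p.id
LEFT JOIN kw  ON kw.id  = p.id
WHERE vec.id IS NOT NULL OR kw.id IS NOT NULL
ORDER BY rrf_score DESC
LIMIT %(top_k)s;
"""


def hybrid_search(conn: psycopg.Connection, query_text: str, query_vec: list[float],
                  level: str = "concept", top_k: int = 10) -> list[tuple]:
    """Fuse vector and keyword rankings with reciprocal rank fusion (k=60)."""
    vec_literal = "[" + ",".join(str(x) for x in query_vec) + "]"  # pgvector text format
    with conn.cursor() as cur:
        cur.execute(HYBRID_SQL, {"query_text": query_text, "query_vec": vec_literal,
                                 "level": level, "top_k": top_k})
        return cur.fetchall()
```

Reciprocal rank fusion is used here purely as an illustration; any scheme that blends semantic similarity with keyword relevance would satisfy the hybrid search requirement described above.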

This approach essentially creates the essence of a knowledge graph but stores information in natural language, the way an LLM natively wants to interact with it, and is more token efficient at retrieval. We also let the LLM pick the terms used to categorize each level of the pyramid; this seemed to let the model decide for itself the best way to describe and differentiate between the information stored at each level. For example, the LLM preferred “insights” to “facts” as the label for the first level of distilled knowledge. Our goal in doing this was to better understand how an LLM thinks about the process by letting it decide how to store and group related information.

Using the pyramid: How it works with RAG & Agents

At inference time, both traditional RAG and agentic approaches benefit from the pre-processed, distilled information ingested into our knowledge pyramid. The pyramid structure allows for efficient retrieval in both the traditional RAG case, where only the top X most related pieces of information are retrieved, and the agentic case, where the agent iteratively plans, retrieves, and evaluates information before returning a final response.

The benefit of the pyramid approach is that information at any and all levels of the pyramid can be used during inference. For our implementation, we used PydanticAI to create a search agent that takes in the user request, generates search terms, explores ideas related to the request, and keeps track of information relevant to the request. Once the search agent determines there is sufficient information to address the user request, the results are re-ranked and sent back to the LLM to generate a final reply. Our implementation allows a search agent to traverse the information in the pyramid as it gathers details about a concept/search term. This is similar to walking a knowledge graph, but in a way that’s more natural for the LLM since all the information in the pyramid is stored in natural language.
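To give a sense of the wiring, here is a stripped-down sketch of how such a search agent could be set up with PydanticAI. The model name, prompt, and the `search_pyramid_store` helper are placeholders; this is not our implementation, just an illustration of the tool-calling loop.

```python
from pydantic_ai import Agent

search_agent = Agent(
    "openai:gpt-4o",  # placeholder model name
    system_prompt=(
        "You research user requests by searching a knowledge pyramid of insights, "
        "concepts, abstracts, and recollections. Generate search terms, explore related "
        "concepts, and call the search tool until you have enough information to answer."
    ),
)


@search_agent.tool_plain
def search_pyramid(query: str, level: str = "concept") -> list[str]:
    """Search one level of the pyramid and return the matching text snippets."""
    # Placeholder: delegate to the hybrid search sketched earlier (or any vector store).
    return search_pyramid_store(query, level)


def search_pyramid_store(query: str, level: str) -> list[str]:
    raise NotImplementedError("plug in your pyramid storage layer here")


result = search_agent.run_sync(
    "What was IBM's total revenue in the most recent financial reporting?"
)
print(result.output)  # final answer; older pydantic-ai versions expose this as result.data
```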

Depending on the use case, the agent could access information at all levels of the pyramid or only at specific levels (e.g., only retrieve information from the concepts). For our experiments, we did not retrieve raw page-level data since we wanted to focus on token efficiency and found the LLM-generated information for the insights, concepts, abstracts, and recollections was sufficient for completing our tasks. In theory, the agent could also have access to the page data; this would provide additional opportunities for the agent to re-examine the original document text; however, it would also significantly increase the total tokens used.

Here’s a high-level visualization of our Agentic approach to responding to user requests:

Image created by author and team providing an overview of the agentic research & response process

Results from the pyramid: Real-world examples

To evaluate the effectiveness of our approach, we tested it against a variety of query categories, including typical fact-finding questions and complex cross-document research and analysis tasks.

Fact-finding (spear fishing): 

These tasks require identifying specific information or facts that are buried in a document. These are the types of questions typical RAG solutions target but often require many searches and consume lots of tokens to answer accurately.

Question: “What was IBM’s total revenue in the most recent financial reporting?”

Response: “IBM’s total revenue for the third quarter of 2024 was $14.968 billion [ibm-10q-q3-2024.pdf, pg. 4]”

Total tokens used to research and generate response

This result is correct (human-validated) and was generated using only 9,994 total tokens, with 1,240 tokens in the generated final response.

Complex research and analysis:

These tasks involve researching and understanding multiple concepts to gain a broader understanding of the documents and make inferences and informed assumptions based on the gathered facts.

Question: “Analyze the investments Microsoft and NVIDIA are making in AI and how they are positioning themselves in the market. The report should be clearly formatted.”

Response generated by the agent analyzing AI investments and positioning for Microsoft and NVIDIA.

The result is a comprehensive report that executed quickly and contains detailed information about each of the companies. 26,802 total tokens were used to research and respond to the request, with a significant percentage of them used for the final response (2,893 tokens or ~11%). These results were also reviewed by a human to verify their validity.

Snippet indicating total token usage for the task

Question: “Create a report analyzing the risks disclosed by the various financial companies in the DOW. Indicate which risks are shared and unique.”

Response:

Part 1 of response generated by the agent on disclosed risks.
Part 2 of response generated by the agent on disclosed risks.

Similarly, this task was completed in 42.7 seconds and used 31,685 total tokens, with 3,116 tokens used to generate the final report.

Snippet indicating total token usage for the task

These results for both fact-finding and complex analysis tasks demonstrate that the pyramid approach efficiently creates detailed reports with low latency using a minimal amount of tokens. The tokens used for the tasks carry dense meaning with little noise, allowing for high-quality, thorough responses across tasks.

Benefits of the pyramid: Why use it?

Overall, we found that our pyramid approach provided a significant boost in response quality and overall performance for high-value questions.

Some of the key benefits we observed include:

  • Reduced model’s cognitive load: When the agent receives the user task, it retrieves pre-processed, distilled information rather than raw, inconsistently formatted, disparate document chunks. This fundamentally improves the retrieval process since the model doesn’t waste its cognitive capacity trying to break down the page/chunk text for the first time.
  • Superior table processing: By breaking down table information and storing it in concise but descriptive sentences, the pyramid approach makes it easier to retrieve relevant information at inference time through natural language queries. This was particularly important for our dataset since financial reports contain lots of critical information in tables.
  • Improved response quality to many types of requests: The pyramid enables more comprehensive, context-aware responses to both precise, fact-finding questions and broad, analysis-based tasks that involve many themes across numerous documents.
  • Preservation of critical context: Since the distillation process identifies and keeps track of key facts, important information that might appear only once in the document is easier to maintain. For example, noting that all tables are represented in millions of dollars or in a particular currency. Traditional chunking methods often cause this type of information to slip through the cracks.
  • Optimized token usage, memory, and speed: By distilling information at ingestion time, we significantly reduce the number of tokens required during inference, are able to maximize the value of the information put in the context window, and improve memory use.
  • Scalability: Many solutions struggle to perform as the size of the document dataset grows. This approach provides a much more efficient way to manage a large volume of text by only preserving critical information. This also allows for more efficient use of the LLM’s context window by only sending it useful, clear information.
  • Efficient concept exploration: The pyramid enables the agent to explore related information similarly to navigating a knowledge graph, but doesn’t require ever generating or maintaining relationships in the graph. The agent can use natural language exclusively and keep track of important facts related to the concepts it’s exploring in a highly token-efficient and fluid way.
  • Emergent dataset understanding: An unexpected benefit of this approach emerged during our testing. When asking questions like “what can you tell me about this dataset?” or “what types of questions can I ask?”, the system is able to respond and suggest productive search topics because it has a more robust understanding of the dataset context from accessing the higher levels of the pyramid, like the abstracts and recollections.

Beyond the pyramid: Evaluation challenges & future directions

Challenges

While the results we’ve observed when using the pyramid search approach have been nothing short of amazing, finding ways to establish meaningful metrics to evaluate the entire system, both at ingestion time and during information retrieval, is challenging. Traditional RAG and agent evaluation frameworks often fail to address nuanced questions and analytical responses where many different responses are valid.

Our team plans to write a research paper on this approach in the future, and we are open to any thoughts and feedback from the community, especially when it comes to evaluation metrics. Many of the existing datasets we found were focused on evaluating RAG use cases within one document or precise information retrieval across multiple documents rather than robust concept and theme analysis across documents and domains.

The main use cases we are interested in relate to broader questions that are representative of how businesses actually want to interact with GenAI systems. For example, “tell me everything I need to know about customer X” or “how do the behaviors of Customer A and B differ? Which am I more likely to have a successful meeting with?”. These types of questions require a deep understanding of information across many sources. The answers to these questions typically require a person to synthesize data from multiple areas of the business and think critically about it. As a result, the answers to these questions are rarely written or saved anywhere, which makes it impossible to simply store and retrieve them through a vector index in a typical RAG process.

Another consideration is that many real-world use cases involve dynamic datasets where documents are constantly being added, edited, and deleted. This makes it difficult to evaluate and track what a “correct” response is since the answer will evolve as the available information changes.

Future directions

In the future, we believe that the pyramid approach can address some of these challenges by enabling more effective processing of dense documents and storing learned information as recollections. However, tracking and evaluating the validity of the recollections over time will be critical to the system’s overall success and remains a key focus area for our ongoing work.

When applying this approach to organizational data, the pyramid process could also be used to identify and assess discrepancies across areas of the business. For example, uploading all of a company’s sales pitch decks could surface where certain products or services are being positioned inconsistently. It could also be used to compare insights extracted from various lines of business data to help understand if and where teams have developed conflicting understandings of topics or different priorities. This application goes beyond pure information retrieval use cases and would allow the pyramid to serve as an organizational alignment tool that helps identify divergences in messaging, terminology, and overall communication.

Conclusion: Key takeaways and why the pyramid approach matters

The knowledge distillation pyramid approach is significant because it leverages the full power of the LLM at both ingestion and retrieval time. Our approach allows you to store dense information in fewer tokens, which has the added benefit of reducing noise in the dataset at inference. Our approach also runs very quickly and is incredibly token efficient: we are able to generate responses within seconds, explore potentially hundreds of searches, and on average use <40K tokens for the entire search, retrieval, and response generation process (this includes all the search iterations!).

We find that the LLM is much better at writing atomic insights as sentences, and that these insights effectively distill information from both text-based and tabular data. This distilled information written in natural language is very easy for the LLM to understand and navigate at inference since it does not have to expend unnecessary effort reasoning about and breaking down document formatting or filtering through noise.

The ability to retrieve and aggregate information at any level of the pyramid also provides significant flexibility to address a variety of query types. This approach offers promising performance for large datasets and enables high-value use cases that require nuanced information retrieval and analysis.

