
Constructing a RAG chain using LangChain Expression Language (LCEL)


QA RAG with Self Evaluation II

For this variation, we make a change to the evaluation procedure. Along with the question-answer pair, we also pass the retrieved context to the evaluator LLM.

To do this, we add another itemgetter call within the second RunnableParallel to collect the context string and pass it to the new qa_eval_prompt_with_context prompt template.

rag_chain = (
    RunnableParallel(context=retriever | format_docs, query=RunnablePassthrough()) |
    RunnableParallel(answer=qa_prompt | llm | retrieve_answer, query=itemgetter("query"), context=itemgetter("context")) |
    qa_eval_prompt_with_context |
    llm_selfeval |
    json_parser
)
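The exact wording of qa_eval_prompt_with_context is not shown here; as a rough, hypothetical sketch (using a plain Python format string rather than LangChain's PromptTemplate), a template that takes all three keys produced by the second RunnableParallel might look like:

```python
# Hypothetical evaluation prompt that includes the retrieved context.
# The field names (query, answer, context) match the keys produced by
# the second RunnableParallel in the chain above.
QA_EVAL_TEMPLATE_WITH_CONTEXT = (
    "You are grading an answer to a question.\n"
    "Question: {query}\n"
    "Retrieved context: {context}\n"
    "Proposed answer: {answer}\n"
    'Respond with JSON: {{"grade": "CORRECT" or "INCORRECT"}}'
)

prompt = QA_EVAL_TEMPLATE_WITH_CONTEXT.format(
    query="What is LCEL?",
    answer="LangChain Expression Language",
    context="LCEL stands for LangChain Expression Language...",
)
print(prompt)
```

Since the chain ends in json_parser, the template asks the evaluator LLM for a JSON response.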

Implementation flowchart:

One of the common pain points with a chain implementation like LCEL is the difficulty of accessing intermediate variables, which is essential for debugging pipelines. We look at a few options for accessing any intermediate variables we are interested in, using manipulations of the LCEL chain.

Using RunnableParallel to carry forward intermediate outputs

As we saw earlier, RunnableParallel allows us to carry multiple arguments forward to the next step in the chain. We use this ability of RunnableParallel to carry the required intermediate values all the way to the end.

In the example below, we modify the original self-eval RAG chain to output the retrieved context text along with the final self-evaluation output. The primary change is that we wrap each step of the process in a RunnableParallel object to carry the context variable forward.

Additionally, we use the itemgetter function to explicitly specify the inputs for the subsequent steps. For example, in the last two RunnableParallel objects, we use itemgetter("input") to ensure that only the input argument from the previous step is passed on to the LLM and JSON parser objects.
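itemgetter itself comes from Python's standard operator module: it builds a callable that pulls one key out of the dict flowing through the chain. A quick standalone illustration:

```python
from operator import itemgetter

# itemgetter("context") returns a callable that extracts that key
# from whatever dict the previous chain step produced.
get_context = itemgetter("context")

step_output = {"input": "some LLM output", "context": "retrieved passage"}
print(get_context(step_output))  # prints "retrieved passage"
```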

rag_chain = (
    RunnableParallel(context=retriever | format_docs, query=RunnablePassthrough()) |
    RunnableParallel(answer=qa_prompt | llm | retrieve_answer, query=itemgetter("query"), context=itemgetter("context")) |
    RunnableParallel(input=qa_eval_prompt, context=itemgetter("context")) |
    RunnableParallel(input=itemgetter("input") | llm_selfeval, context=itemgetter("context")) |
    RunnableParallel(input=itemgetter("input") | json_parser, context=itemgetter("context"))
)

The output from this chain is a dict with two keys: input, holding the parsed self-evaluation result, and context, holding the retrieved context text.

A more concise variation:

rag_chain = (
    RunnableParallel(context=retriever | format_docs, query=RunnablePassthrough()) |
    RunnableParallel(answer=qa_prompt | llm | retrieve_answer, query=itemgetter("query"), context=itemgetter("context")) |
    RunnableParallel(input=qa_eval_prompt | llm_selfeval | json_parser, context=itemgetter("context"))
)
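To see why the context key survives to the end, it helps to model what RunnableParallel does: it runs a dict of branches over the same input and merges their results into a new dict. A deliberately simplified, LangChain-free sketch of the concise chain above, with stub functions standing in for the real retriever, LLM, and parser:

```python
from operator import itemgetter

def parallel(**branches):
    """Toy stand-in for RunnableParallel: run every branch on the
    same input and collect the results into a dict."""
    return lambda x: {name: fn(x) for name, fn in branches.items()}

# Stubs standing in for the real retriever / prompt | llm | parser steps.
retrieve = lambda q: f"context for: {q}"
answer = lambda d: f"answer to {d['query']}"
evaluate = lambda d: {"grade": "CORRECT"}  # stands in for qa_eval_prompt | llm_selfeval | json_parser

step1 = parallel(context=retrieve, query=lambda q: q)
step2 = parallel(answer=answer, query=itemgetter("query"), context=itemgetter("context"))
step3 = parallel(input=evaluate, context=itemgetter("context"))

result = step3(step2(step1("What is LCEL?")))
print(result)
# {'input': {'grade': 'CORRECT'}, 'context': 'context for: What is LCEL?'}
```

Because every step re-emits context via itemgetter("context"), the retrieved text is still present in the final output dict.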

Using global variables to save intermediate steps

This method essentially uses the principle of a logger. We introduce a new function that saves its input to a global variable, allowing us to access the intermediate value through that global variable.

context = None

def save_context(x):
    global context
    context = x
    return x

rag_chain = (
    RunnableParallel(context=retriever | format_docs | save_context, query=RunnablePassthrough()) |
    RunnableParallel(answer=qa_prompt | llm | retrieve_answer, query=itemgetter("query")) |
    qa_eval_prompt |
    llm_selfeval |
    json_parser
)

Here we define a global variable called context and a function called save_context that saves its input value to the global context variable before returning the same input. In the chain, we add save_context as the last step of the context-retrieval stage.

This approach lets you access any intermediate step without making major changes to the chain.

Accessing intermediate variables using global variables
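Stripped of LangChain, the pattern is just a pass-through tap in a function pipeline. A self-contained illustration, with stub functions standing in for the retrieval and answer steps:

```python
context = None  # global slot for the intercepted value

def save_context(x):
    """Pass-through step that records its input before forwarding it."""
    global context
    context = x
    return x

# Stand-in pipeline: retrieve -> save_context -> answer
retrieve = lambda q: f"docs about {q}"
answer = lambda docs: f"answer based on: {docs}"

result = answer(save_context(retrieve("LCEL")))
print(result)   # answer based on: docs about LCEL
print(context)  # docs about LCEL  <- intermediate value captured
```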

Using callbacks

Attaching callbacks to your chain is another common method for logging intermediate variable values. There's a lot to cover on the subject of callbacks in LangChain, so I will cover this in detail in a separate post.
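As a taste of the idea (leaving LangChain's actual callback API for that post), a callback is simply a hook invoked at step boundaries. A minimal, framework-free sketch:

```python
def run_chain(steps, x, on_step_end=None):
    """Minimal pipeline runner with a callback hook: after each step,
    the callback receives the step's name and its output."""
    for name, fn in steps:
        x = fn(x)
        if on_step_end:
            on_step_end(name, x)
    return x

log = []
result = run_chain(
    [("retrieve", lambda q: f"docs:{q}"), ("answer", lambda d: f"ans({d})")],
    "LCEL",
    on_step_end=lambda name, out: log.append((name, out)),
)
print(log)  # [('retrieve', 'docs:LCEL'), ('answer', 'ans(docs:LCEL)')]
```

The callback sees every intermediate output without the chain itself having to carry those values forward.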
