Building a RAG chain using LangChain Expression Language (LCEL) | by Roshan Santhosh | Apr, 2024


QA RAG with Self Evaluation II

For this variation, we make a change to the evaluation procedure. In addition to the question-answer pair, we also pass the retrieved context to the evaluator LLM.

To accomplish this, we add an additional itemgetter function in the second RunnableParallel to collect the context string and pass it to the new qa_eval_prompt_with_context prompt template.

from operator import itemgetter
from langchain_core.runnables import RunnableParallel, RunnablePassthrough

rag_chain = (
    RunnableParallel(context=retriever | format_docs, question=RunnablePassthrough()) |
    RunnableParallel(answer=qa_prompt | llm | retrieve_answer, question=itemgetter("question"), context=itemgetter("context")) |
    qa_eval_prompt_with_context |
    llm_selfeval |
    json_parser
)
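The itemgetter used above is the standard-library operator.itemgetter: it builds a callable that extracts a key from the dictionary flowing between chain steps, which is what lets a RunnableParallel pick out specific fields from the previous step's output. A quick standalone illustration (the dictionary contents are hypothetical):

```python
from operator import itemgetter

# Output of a hypothetical previous chain step
step_output = {"question": "What is LCEL?", "context": "LangChain Expression Language ..."}

get_question = itemgetter("question")  # callable equivalent to lambda d: d["question"]
print(get_question(step_output))  # -> What is LCEL?
```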

Implementation flowchart:

One of the common pain points with a chain implementation like LCEL is the difficulty of accessing intermediate variables, which is important for debugging pipelines. Here we look at a few options for accessing any intermediate variables we are interested in, using manipulations of the LCEL chain.

Using RunnableParallel to carry forward intermediate outputs

As we saw earlier, RunnableParallel allows us to carry multiple arguments forward to the next step in the chain. So we use this capability of RunnableParallel to carry the required intermediate values all the way to the end.

In the example below, we modify the original self-eval RAG chain to output the retrieved context text along with the final self-evaluation output. The primary change is that we add a RunnableParallel object to every step of the process to carry forward the context variable.

Additionally, we also use the itemgetter function to clearly specify the inputs for the subsequent steps. For example, for the last two RunnableParallel objects, we use itemgetter("input") to ensure that only the input argument from the previous step is passed on to the LLM/JSON parser objects.

rag_chain = (
    RunnableParallel(context=retriever | format_docs, question=RunnablePassthrough()) |
    RunnableParallel(answer=qa_prompt | llm | retrieve_answer, question=itemgetter("question"), context=itemgetter("context")) |
    RunnableParallel(input=qa_eval_prompt, context=itemgetter("context")) |
    RunnableParallel(input=itemgetter("input") | llm_selfeval, context=itemgetter("context")) |
    RunnableParallel(input=itemgetter("input") | json_parser, context=itemgetter("context"))
)

The output from this chain looks like the following:
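The original post showed the raw output here as an image. Since the chain's final step is a RunnableParallel keyed on input and context, the result is a two-key dictionary; the values below are purely hypothetical placeholders, as the actual verdict format depends on qa_eval_prompt and the parser's schema:

```python
# Hypothetical output shape for the chain above (values are illustrative only)
result = {
    "input": {"grade": "CORRECT"},             # JSON-parsed self-evaluation verdict
    "context": "Retrieved document text ...",  # context carried forward unchanged
}
print(sorted(result.keys()))  # -> ['context', 'input']
```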

A more concise variation:

rag_chain = (
    RunnableParallel(context=retriever | format_docs, question=RunnablePassthrough()) |
    RunnableParallel(answer=qa_prompt | llm | retrieve_answer, question=itemgetter("question"), context=itemgetter("context")) |
    RunnableParallel(input=qa_eval_prompt | llm_selfeval | json_parser, context=itemgetter("context"))
)

Using global variables to save intermediate steps

This method essentially uses the principle of a logger. We introduce a new function that saves its input to a global variable, thus allowing us access to the intermediate value through that global variable.

global context

def save_context(x):
    global context
    context = x
    return x

rag_chain = (
    RunnableParallel(context=retriever | format_docs | save_context, question=RunnablePassthrough()) |
    RunnableParallel(answer=qa_prompt | llm | retrieve_answer, question=itemgetter("question")) |
    qa_eval_prompt |
    llm_selfeval |
    json_parser
)

Here we define a global variable called context and a function called save_context that saves its input value to the global context variable before returning the same input. In the chain, we add the save_context function as the last stage of the context retrieval step.

This option lets you access any intermediate step without making major changes to the chain.

Accessing intermediate variables using global variables
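The pattern itself is independent of LangChain: any pass-through function that records its input before returning it unchanged can be spliced into a pipeline without affecting downstream steps. A minimal plain-Python sketch (the names save_value and saved are hypothetical, not from the original chain):

```python
# Minimal sketch of the logger pattern: a pass-through that records its input
# in a module-level variable before handing it on unchanged.
saved = None

def save_value(x):
    global saved
    saved = x
    return x

# A stand-in "pipeline": save_value sits in the middle and is transparent.
result = save_value("retrieved context").upper()
print(saved)   # -> retrieved context  (intermediate value is now inspectable)
print(result)  # -> RETRIEVED CONTEXT  (downstream step saw the unmodified input)
```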

Using callbacks

Attaching callbacks to your chain is another common method for logging intermediate variable values. There's a lot to cover on the topic of callbacks in LangChain, so I will be covering it in detail in a separate post.
