
A Gentle Intro to Chaining LLMs, Agents, and utils via LangChain


#LLM for beginners

Understand the fundamentals of agents, tools, and prompts, and a few learnings along the way

Audience: For those feeling overwhelmed with the large (yet brilliant) library…

Image generated by the author using DALL·E 2

I’d be lying if I said I have the whole LangChain library covered — in fact, I am far from it. But the buzz surrounding it was enough to shake me out of my writing hiatus and give it a go 🚀.

The initial motivation was to see what it was that LangChain was adding (on a practical level) that set it apart from the chatbot I built last month using the ChatCompletion.create() function from the openai package. While doing so, I realized I needed to understand the building blocks for LangChain first before moving on to the more complex parts.

That is what this article does. Heads-up though, there will be more parts coming, as I am truly fascinated by the library and will continue to explore it to see what all can be built with it.

Let’s begin by understanding the fundamental building block of LangChain — i.e. chains. If you’d like to follow along, here’s the GitHub repo.

What are chains in LangChain?

Chains are what you get by connecting one or more large language models (LLMs) in a logical way. (Chains can be built of entities other than LLMs, but for now, let’s stick with this definition for simplicity.)

OpenAI is one type of LLM (provider) that you can use, but there are others like Cohere, Bloom, Huggingface, etc.

Note: Pretty much all of these LLM providers will need you to request an API key in order to use them. So make sure you do that before proceeding with the rest of this blog. For example:

import os
os.environ["OPENAI_API_KEY"] = "..."

P.S. I am going to use OpenAI for this tutorial because I have a key with credits that expire in a month’s time, but feel free to replace it with any other LLM. The concepts covered here will be useful regardless.

Chains can be simple (i.e. Generic) or specialized (i.e. Utility).

  1. Generic — A single LLM is the simplest chain. It takes an input prompt and the name of the LLM, and then uses the LLM for text generation (i.e. output for the prompt). Here’s an example:

Let’s build a basic chain — create a prompt and get a prediction

Prompt creation (using PromptTemplate) is a bit fancy in LangChain, but this is probably because there are quite a few different ways prompts can be created depending on the use case (we will cover AIMessagePromptTemplate,
HumanMessagePromptTemplate, etc. in the next blog post). Here’s a simple one for now:

from langchain.prompts import PromptTemplate

prompt = PromptTemplate(
    input_variables=["product"],
    template="What is a good name for a company that makes {product}?",
)

print(prompt.format(product="podcast player"))

# OUTPUT
# What is a good name for a company that makes podcast player?

Note: If you require multiple input_variables, for instance input_variables=["product", "audience"] for a template such as "What is a good name for a company that makes {product} for {audience}?", you need to do print(prompt.format(product="podcast player", audience="children")) to get the updated prompt.

Once you have built a prompt, we can call the desired LLM with it. To do so, we create an LLMChain instance (in our case, we use OpenAI‘s large language model text-davinci-003). To get the prediction (i.e. the AI-generated text), we use the run function with the name of the product.

from langchain.llms import OpenAI
from langchain.chains import LLMChain

llm = OpenAI(
    model_name="text-davinci-003",  # default model
    temperature=0.9)  # temperature dictates how whacky the output should be
llmchain = LLMChain(llm=llm, prompt=prompt)
llmchain.run("podcast player")

# OUTPUT
# PodConneXion

If you have multiple input_variables, then you won’t be able to use run. Instead, you’ll have to pass all the variables as a dict. For example, llmchain({"product": "podcast player", "audience": "children"}).
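To make this concrete, here’s a minimal runnable sketch of a two-variable chain (assuming the same OpenAI setup as above; the prompt wording is just an illustration):

from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

# A prompt with two input variables
prompt = PromptTemplate(
    input_variables=["product", "audience"],
    template="What is a good name for a company that makes {product} for {audience}?",
)

llm = OpenAI(temperature=0.9)
llmchain = LLMChain(llm=llm, prompt=prompt)

# run() only works for a single input, so pass all the variables as a dict instead
print(llmchain({"product": "podcast player", "audience": "children"}))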

Note 1: According to OpenAI, the davinci text-generation models are 10x more expensive than their chat counterparts, i.e. gpt-3.5-turbo, so I tried switching from a text model to a chat model (i.e. from OpenAI to ChatOpenAI) and the results are pretty much the same.

Note 2: You might see some tutorials using OpenAIChat instead of ChatOpenAI. The former is deprecated and will no longer be supported; we are supposed to use ChatOpenAI.

from langchain.chat_models import ChatOpenAI

chatopenai = ChatOpenAI(
    model_name="gpt-3.5-turbo")
llmchain_chat = LLMChain(llm=chatopenai, prompt=prompt)
llmchain_chat.run("podcast player")

# OUTPUT
# PodcastStream

This concludes our section on simple chains. It is important to note that we rarely use generic chains as standalone chains. More often they are used as building blocks for Utility chains (as we will see next).

2. Utility — These are specialized chains, composed of many LLMs to help solve a specific task. For example, LangChain supports some end-to-end chains (such as AnalyzeDocumentChain for summarization, QnA, etc.) and some specific ones (such as GraphQAChain for creating, querying, and saving graphs). We will look at one specific chain, PALChain, in this tutorial and dig deeper into it.

PAL stands for Program-Aided Language models. PALChain reads complex math problems (described in natural language) and generates programs (for solving the math problem) as the intermediate reasoning steps, but offloads the solution step to a runtime such as a Python interpreter.

To confirm this is actually true, we can inspect the _call() in the base code here. Under the hood, we can see this chain:

P.S. It is a good practice to inspect _call() in base.py for any of the chains in LangChain to see how things are working under the hood.

from langchain.chains import PALChain
palchain = PALChain.from_math_prompt(llm=llm, verbose=True)
palchain.run("If my age is half of my dad's age and he is going to be 60 next year, what is my current age?")

# OUTPUT
# > Entering new PALChain chain...
# def solution():
#     """If my age is half of my dad's age and he is going to be 60 next year, what is my current age?"""
#     dad_age_next_year = 60
#     dad_age_now = dad_age_next_year - 1
#     my_age_now = dad_age_now / 2
#     result = my_age_now
#     return result
#
# > Finished chain.
# '29.5'

Note 1: verbose can be set to False if you don’t need to see the intermediate steps.

Now some of you might be wondering — but what about the prompt? We certainly didn’t pass one as we did for the generic llmchain we built. The fact is, it is automatically loaded when using .from_math_prompt(). You can check the default prompt using palchain.prompt.template, or you can directly inspect the prompt file here.

print(palchain.prompt.template)
# OUTPUT
# 'Q: Olivia has $23. She bought five bagels for $3 each. How much money does she have left?\n\n# solution in Python:\n\n\ndef solution():\n    """Olivia has $23. She bought five bagels for $3 each. How much money does she have left?"""\n    money_initial = 23\n    bagels = 5\n    bagel_cost = 3\n    money_spent = bagels * bagel_cost\n    money_left = money_initial - money_spent\n    result = money_left\n    return result\n\n\n\n\n\nQ: Michael had 58 golf balls. On tuesday, he lost 23 golf balls. On wednesday, he lost 2 more. How many golf balls did he have at the end of wednesday?\n\n# solution in Python:\n\n\ndef solution():\n    """Michael had 58 golf balls. On tuesday, he lost 23 golf balls. On wednesday, he lost 2 more. How many golf balls did he have at the end of wednesday?"""\n    golf_balls_initial = 58\n    golf_balls_lost_tuesday = 23\n    golf_balls_lost_wednesday = 2\n    golf_balls_left = golf_balls_initial - golf_balls_lost_tuesday - golf_balls_lost_wednesday\n    result = golf_balls_left\n    return result\n\n\n\n\n\nQ: There were nine computers in the server room. Five more computers were installed each day, from monday to thursday. How many computers are now in the server room?\n\n# solution in Python:\n\n\ndef solution():\n    """There were nine computers in the server room. Five more computers were installed each day, from monday to thursday. How many computers are now in the server room?"""\n    computers_initial = 9\n    computers_per_day = 5\n    num_days = 4  # 4 days between monday and thursday\n    computers_added = computers_per_day * num_days\n    computers_total = computers_initial + computers_added\n    result = computers_total\n    return result\n\n\n\n\n\nQ: Shawn has five toys. For Christmas, he got two toys each from his mom and dad. How many toys does he have now?\n\n# solution in Python:\n\n\ndef solution():\n    """Shawn has five toys. For Christmas, he got two toys each from his mom and dad. How many toys does he have now?"""\n    toys_initial = 5\n    mom_toys = 2\n    dad_toys = 2\n    total_received = mom_toys + dad_toys\n    total_toys = toys_initial + total_received\n    result = total_toys\n    return result\n\n\n\n\n\nQ: Jason had 20 lollipops. He gave Denny some lollipops. Now Jason has 12 lollipops. How many lollipops did Jason give to Denny?\n\n# solution in Python:\n\n\ndef solution():\n    """Jason had 20 lollipops. He gave Denny some lollipops. Now Jason has 12 lollipops. How many lollipops did Jason give to Denny?"""\n    jason_lollipops_initial = 20\n    jason_lollipops_after = 12\n    denny_lollipops = jason_lollipops_initial - jason_lollipops_after\n    result = denny_lollipops\n    return result\n\n\n\n\n\nQ: Leah had 32 chocolates and her sister had 42. If they ate 35, how many pieces do they have left in total?\n\n# solution in Python:\n\n\ndef solution():\n    """Leah had 32 chocolates and her sister had 42. If they ate 35, how many pieces do they have left in total?"""\n    leah_chocolates = 32\n    sister_chocolates = 42\n    total_chocolates = leah_chocolates + sister_chocolates\n    chocolates_eaten = 35\n    chocolates_left = total_chocolates - chocolates_eaten\n    result = chocolates_left\n    return result\n\n\n\n\n\nQ: If there are 3 cars in the parking lot and 2 more cars arrive, how many cars are in the parking lot?\n\n# solution in Python:\n\n\ndef solution():\n    """If there are 3 cars in the parking lot and 2 more cars arrive, how many cars are in the parking lot?"""\n    cars_initial = 3\n    cars_arrived = 2\n    total_cars = cars_initial + cars_arrived\n    result = total_cars\n    return result\n\n\n\n\n\nQ: There are 15 trees in the grove. Grove workers will plant trees in the grove today. After they are done, there will be 21 trees. How many trees did the grove workers plant today?\n\n# solution in Python:\n\n\ndef solution():\n    """There are 15 trees in the grove. Grove workers will plant trees in the grove today. After they are done, there will be 21 trees. How many trees did the grove workers plant today?"""\n    trees_initial = 15\n    trees_after = 21\n    trees_added = trees_after - trees_initial\n    result = trees_added\n    return result\n\n\n\n\n\nQ: {question}\n\n# solution in Python:\n\n\n'

Note: Most of the utility chains will have their prompts pre-defined as part of the library (check them out here). They are, at times, quite detailed (read: lots of tokens), so there is definitely a trade-off between cost and the quality of response from the LLM.

Are there any Chains that don’t need LLMs and prompts?

Though PALChain requires an LLM (and a corresponding prompt) to parse the user’s question written in natural language, there are some chains in LangChain that don’t need one. These are mainly transformation chains that preprocess the prompt, such as removing extra spaces, before inputting it into the LLM. You can see another example here.
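The article doesn’t build one of these, but here’s a minimal sketch of such a transformation chain using TransformChain, with no LLM or prompt involved (the helper function name and example text are mine):

from langchain.chains import TransformChain

# A chain with no LLM: it just squeezes repeated whitespace out of the incoming text
def clean_extra_spaces(inputs: dict) -> dict:
    text = inputs["text"]
    return {"clean_text": " ".join(text.split())}

clean_chain = TransformChain(
    input_variables=["text"],
    output_variables=["clean_text"],
    transform=clean_extra_spaces)

print(clean_chain.run("What    is a     good name for a company?"))

# OUTPUT
# What is a good name for a company?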

Can we get to the good part and start creating chains?

Of course, we can! We have all the basic building blocks we need to start chaining LLMs together logically, such that the input of one can be fed from the output of another. To do so, we will use SimpleSequentialChain.

The documentation has some great examples of this; for instance, you can see here how to combine two chains where chain#1 is used to clean the prompt (remove extra whitespaces, shorten the prompt, etc.) and chain#2 is used to call an LLM with this clean prompt. Here’s another one where chain#1 is used to generate a synopsis for a play and chain#2 is used to write a review based on this synopsis.
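Here’s a condensed sketch of that second pattern (the prompt wording is mine, not the documentation’s):

from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain, SimpleSequentialChain

llm = OpenAI(temperature=0.9)

# Chain#1 - generate a synopsis from a play title
synopsis_prompt = PromptTemplate(
    input_variables=["title"],
    template="Write a short synopsis for a play titled {title}.")
synopsis_chain = LLMChain(llm=llm, prompt=synopsis_prompt)

# Chain#2 - write a review based on the synopsis
review_prompt = PromptTemplate(
    input_variables=["synopsis"],
    template="Write a brief review of a play with the following synopsis:\n{synopsis}")
review_chain = LLMChain(llm=llm, prompt=review_prompt)

# The output of chain#1 (the synopsis) is fed in as the input of chain#2
play_chain = SimpleSequentialChain(chains=[synopsis_chain, review_chain], verbose=True)
play_chain.run("Tragedy at Sunset on the Beach")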

While these are excellent examples, I want to focus on something else. If you remember, I mentioned earlier that chains can be composed of entities other than LLMs. More specifically, I am interested in chaining agents and LLMs together. But first, what are agents?

Using agents for dynamically calling LLMs

It is much easier to explain what an agent does vs. what it is.

Say we want to know the weather forecast for tomorrow. If we were to use the simple ChatGPT API and give it the prompt Show me the weather for tomorrow in London, it won’t know the answer because it doesn’t have access to real-time data.

Wouldn’t it be useful if we had an arrangement where we could use an LLM to understand our query (i.e. prompt) in natural language and then call the weather API on our behalf to fetch the data needed? This is exactly what an agent does (among other things, of course).

An agent has access to an LLM and a suite of tools, for example Google Search, Python REPL, a math calculator, weather APIs, etc.

There are quite a few agents that LangChain supports — see here for the complete list, but quite frankly the most common one I came across in tutorials and YT videos was zero-shot-react-description. This agent uses the ReAct (Reason + Act) framework to pick the most usable tool (from a list of tools), based on what the input query is.

P.S.: Here’s a nice article that goes in-depth into the ReAct framework.

Let’s initialize an agent using initialize_agent and pass it the tools and LLM it needs. There’s a long list of tools available here that an agent can use to interact with the outside world. For our example, we are using the same math-solving tool as above, called pal-math. This one requires an LLM at the time of initialization, so we pass it the same OpenAI LLM instance as before.

from langchain.agents import initialize_agent
from langchain.agents import AgentType
from langchain.agents import load_tools

llm = OpenAI(temperature=0)
tools = load_tools(["pal-math"], llm=llm)

agent = initialize_agent(tools,
                         llm,
                         agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
                         verbose=True)

Let’s try it out on the same example as above:

agent.run("If my age is half of my dad's age and he's going to be 60 next yr, what's my current age?")

# OUTPUT
# > Entering new AgentExecutor chain...
# I need to figure out my dad's current age and then divide it by two.
# Action: PAL-MATH
# Action Input: What is my dad's current age if he is going to be 60 next year?
# Observation: 59
# Thought: I now know my dad's current age, so I can divide it by two to get my age.
# Action: Divide 59 by 2
# Action Input: 59/2
# Observation: Divide 59 by 2 is not a valid tool, try another one.
# Thought: I can use PAL-MATH to divide 59 by 2.
# Action: PAL-MATH
# Action Input: Divide 59 by 2
# Observation: 29.5
# Thought: I now know the final answer.
# Final Answer: My current age is 29.5 years old.

# > Finished chain.
# 'My current age is 29.5 years old.'

Note 1: At each step, you’ll notice that an agent does one of three things — it either has an observation, a thought, or it takes an action. This is mainly due to the ReAct framework and the associated prompt that the agent is using:

print(agent.agent.llm_chain.prompt.template)
# OUTPUT
# Answer the following questions as best you can. You have access to the following tools:
# PAL-MATH: A language model that is really good at solving complex word math problems. Input should be a fully worded hard word math problem.

# Use the following format:

# Question: the input question you must answer
# Thought: you should always think about what to do
# Action: the action to take, should be one of [PAL-MATH]
# Action Input: the input to the action
# Observation: the result of the action
# ... (this Thought/Action/Action Input/Observation can repeat N times)
# Thought: I now know the final answer
# Final Answer: the final answer to the original input question
# Begin!
# Query: {input}
# Thought:{agent_scratchpad}

Note 2: You might be wondering what’s the point of having an agent do the same thing that an LLM can do. Some applications will require not just a predetermined chain of calls to LLMs/other tools, but potentially an unknown chain that depends on the user’s input [Source]. In these types of chains, there is an “agent” which has access to a suite of tools.
For instance,
here’s an example of an agent that can fetch the right documents (from the vectorstores) for RetrievalQAChain depending on whether the question refers to document A or document B.
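Sketching that routing pattern (and assuming qa_doc_a and qa_doc_b are RetrievalQA chains you have already built over two separate vectorstores; those names and the tool descriptions are mine):

from langchain.agents import Tool, initialize_agent, AgentType

tools = [
    Tool(name="Document A QA",
         func=qa_doc_a.run,  # hypothetical RetrievalQA chain over document A
         description="useful for answering questions about document A"),
    Tool(name="Document B QA",
         func=qa_doc_b.run,  # hypothetical RetrievalQA chain over document B
         description="useful for answering questions about document B"),
]

doc_agent = initialize_agent(tools,
                             llm,
                             agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
                             verbose=True)
doc_agent.run("According to document A, who is the main character?")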

For fun, I tried making the input question more complex (using Demi Moore’s age as a placeholder for Dad’s actual age).

agent.run("My age is half of my dad's age. Next yr he's going to be same age as Demi Moore. What's my current age?")

Unfortunately, the answer was slightly off, as the agent was not using the latest age for Demi Moore (since OpenAI models were trained on data until 2020). This can be easily fixed by including another tool —
tools = load_tools(["pal-math", "serpapi"], llm=llm). serpapi is useful for answering questions about current events.

Note: It is important to add as many tools as you think may be relevant to the user query. The problem with using a single tool is that the agent keeps trying to use the same tool even if it’s not the most relevant for a particular observation/action step.
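Here’s what that multi-tool setup would look like (a sketch; serpapi needs its own SERPAPI_API_KEY):

import os
os.environ["SERPAPI_API_KEY"] = "..."

# pal-math handles the arithmetic, serpapi looks up current facts (like Demi Moore's age)
tools = load_tools(["pal-math", "serpapi"], llm=llm)
agent = initialize_agent(tools,
                         llm,
                         agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
                         verbose=True)
agent.run("My age is half of my dad's age. Next year he is going to be the same age as Demi Moore. What is my current age?")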

Here’s another example of a tool you can use — podcast-api. You need to get your own API key and plug it into the code below.


tools = load_tools(["podcast-api"], llm=llm, listen_api_key="...")
agent = initialize_agent(tools,
                         llm,
                         agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
                         verbose=True)

agent.run("Show me episodes for money saving suggestions.")

# OUTPUT
# > Entering new AgentExecutor chain...
# I should search for podcasts or episodes related to money saving
# Action: Podcast API
# Action Input: Money saving tips
# Observation: The API call returned 3 podcasts related to money saving tips: The Money Nerds, The Rachel Cruze Show, and The Martin Lewis Podcast. These podcasts offer valuable money saving tips and advice to help people take control of their finances and create a life they love.
# Thought: I now have some options to choose from
# Final Answer: The Money Nerds, The Rachel Cruze Show, and The Martin Lewis Podcast are great podcast options for money saving tips.

# > Finished chain.

# 'The Money Nerds, The Rachel Cruze Show, and The Martin Lewis Podcast are great podcast options for money saving tips.'

Note 1: There’s a known error with using this API where you might see openai.error.InvalidRequestError: This model’s maximum context length is 4097 tokens, however you requested XXX tokens (XX in your prompt; XX for the completion). Please reduce your prompt; or completion length. This happens when the response returned by the API is too big. To work around this, the documentation suggests returning fewer search results, for example, by updating the question to "Show me episodes for money saving tips, return only 1 result".

Note 2: While tinkering around with this tool, I noticed some inconsistencies. The responses aren’t always complete the first time around; for instance, here are the input and responses from two consecutive runs:

Input: “Podcasts for getting better at French”

Response 1: “The best podcast for learning French is the one with the highest review rating.”
Response 2: ‘The best podcast for learning French is “FrenchPod101”.’

Under the hood, the tool first uses an LLMChain for building the API URL based on our input instructions (something along the lines of https://listen-api.listennotes.com/api/v2/search?q=french&type=podcast&page_size=3) and making the API call. Upon receiving the response, it uses another LLMChain that summarizes the response to get the answer to our original question. You can check out the prompts here for both LLMChains, which describe the process in more detail.
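To make that two-step flow concrete, here’s an illustrative sketch of the pattern (not the tool’s actual implementation; the prompt wording and header usage are my assumptions):

import requests
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

# Step 1 - in the real tool, LLMChain#1 builds this URL from the user's question
url = "https://listen-api.listennotes.com/api/v2/search?q=french&type=podcast&page_size=3"
api_response = requests.get(url, headers={"X-ListenAPI-Key": "..."}).text

# Step 2 - LLMChain#2 summarizes the raw API response into a final answer
summarize_prompt = PromptTemplate(
    input_variables=["question", "api_response"],
    template="Here is an API response:\n{api_response}\n\nSummarize it to answer the question: {question}")
summarize_chain = LLMChain(llm=OpenAI(temperature=0), prompt=summarize_prompt)
print(summarize_chain.run(question="Podcasts for getting better at French",
                          api_response=api_response))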

I am inclined to guess the inconsistent results seen above are due to the summarization step, because I have separately debugged and tested the API URL (created by LLMChain#1) via Postman and received the correct response. To further confirm my doubts, I also stress-tested the summarization chain as a standalone chain with an empty API URL, hoping it would throw an error, but got the response “‘Investing’ podcasts were found, containing 3 results in total.” 🤷‍♀ I’d be curious to see if others had better luck than me with this tool!

Use Case 2: Combine chains to create an age-appropriate gift generator

Let’s put our knowledge of agents and sequential chaining to good use and create our own sequential chain. We will combine:

  • Chain #1 — The agent we just created that can solve age problems in math.
  • Chain #2 — An LLM that takes the age of a person and suggests an appropriate gift for them.
# Chain1 - solve math problem, get the age
chain_one = agent

# Chain2 - suggest age-appropriate gift
template = """You might be a present recommender. Given an individual's age,n
it's your job to suggest an appropriate gift for them.

Person Age:
{age}
Suggest gift:"""
prompt_template = PromptTemplate(input_variables=["age"], template=template)
chain_two = LLMChain(llm=llm, prompt=prompt_template)

Now that we have both chains ready, we can combine them using SimpleSequentialChain.

from langchain.chains import SimpleSequentialChain

overall_chain = SimpleSequentialChain(
    chains=[chain_one, chain_two],
    verbose=True)

A couple of things to note:

  • We need not explicitly pass input_variables and output_variables for SimpleSequentialChain, because the underlying assumption is that the output from chain 1 is passed as input to chain 2.

Finally, we can run it with the same math problem as before:

query = "If my age is half of my dad's age and he's going to be 60 next yr, what's my current age?"
overall_chain.run(query)

# OUTPUT
# > Entering new SimpleSequentialChain chain...

# > Entering new AgentExecutor chain...
# I need to figure out my dad's current age and then divide it by two.
# Action: PAL-MATH
# Action Input: What is my dad's current age if he is going to be 60 next year?
# Observation: 59
# Thought: I now know my dad's current age, so I can divide it by two to get my age.
# Action: Divide 59 by 2
# Action Input: 59/2
# Observation: Divide 59 by 2 is not a valid tool, try another one.
# Thought: I need to use PAL-MATH to divide 59 by 2.
# Action: PAL-MATH
# Action Input: Divide 59 by 2
# Observation: 29.5
# Thought: I now know the final answer.
# Final Answer: My current age is 29.5 years old.

# > Finished chain.
# My current age is 29.5 years old.

# Given your age, a great gift would be something that you can use and enjoy now, like a nice bottle of wine, a luxury watch, a cookbook, or a gift card to a favorite store or restaurant. Or, you could get something that will last for years, like a nice piece of jewelry or a quality leather wallet.

# > Finished chain.

# '\nGiven your age, a great gift would be something that you can use and enjoy now, like a nice bottle of wine, a luxury watch, a cookbook, or a gift card to a favorite store or restaurant. Or, you could get something that will last for years, like a nice piece of jewelry or a quality leather wallet.'

There might be times when you need to pass along some additional context to the second chain, in addition to what it is receiving from the first chain. For instance, I would like to set a budget for the gift depending on the age of the person that is returned by the first chain. We can do so using SimpleMemory.

First, let’s update the prompt for chain_two and pass it a second variable called budget within input_variables.

template = """You might be a present recommender. Given an individual's age,n
it's your job to suggest an appropriate gift for them. If age is under 10,n
the gift should cost not more than {budget} otherwise it should cost atleast 10 times {budget}.

Person Age:
{output}
Suggest gift:"""
prompt_template = PromptTemplate(input_variables=["output", "budget"], template=template)
chain_two = LLMChain(llm=llm, prompt=prompt_template)

If you compare the template we had for SimpleSequentialChain with the one above, you’ll notice that I have also updated the first input’s variable name from age → output. This is a crucial step, failing which an error would be raised at the time of chain validation — Missing required input keys: {age}, only had {input, output, budget}.
This is because the output from the first entity in the chain (i.e. the agent) will be the input for the second entity in the chain (i.e. chain_two), and therefore the variable names must match. Upon inspecting the agent’s output keys, we see that the output variable is called output, hence the update.

print(agent.agent.llm_chain.output_keys)

# OUTPUT
["output"]

Next, let’s update the type of chain we are making. We can no longer work with SimpleSequentialChain, since it only works in cases where there is a single input and a single output. Since chain_two now takes two input_variables, we need to use SequentialChain, which is tailored to handle multiple inputs and outputs.

from langchain.chains import SequentialChain
from langchain.memory import SimpleMemory

overall_chain = SequentialChain(
    input_variables=["input"],
    memory=SimpleMemory(memories={"budget": "100 GBP"}),
    chains=[agent, chain_two],
    verbose=True)

A couple of things to note:

  • Unlike SimpleSequentialChain, passing the input_variables parameter is mandatory for SequentialChain. It is a list containing the names of the input variables that the first entity in the chain (i.e. the agent in our case) expects.
    Now some of you might be wondering how to know the exact name used in the input prompt that the agent is going to use. We certainly didn’t write the prompt for this agent (as we did for chain_two)! It’s actually pretty straightforward to find out by inspecting the prompt template of the llm_chain that the agent is made up of.
print(agent.agent.llm_chain.prompt.template)

# OUTPUT
#Answer the following questions as best you can. You have access to the following tools:

#PAL-MATH: A language model that is really good at solving complex word math problems. Input should be a fully worded hard word math problem.

#Use the following format:

#Question: the input question you must answer
#Thought: you should always think about what to do
#Action: the action to take, should be one of [PAL-MATH]
#Action Input: the input to the action
#Observation: the result of the action
#... (this Thought/Action/Action Input/Observation can repeat N times)
#Thought: I now know the final answer
#Final Answer: the final answer to the original input question

#Begin!

#Query: {input}
#Thought:{agent_scratchpad}

As you can see toward the end of the prompt, the question being asked by the end-user is stored in an input variable by the name input. If for some reason you had to manipulate this name in the prompt, make sure you also update the input_variables at the time of the creation of SequentialChain.

Finally, you could have found out the same information without going through the whole prompt:

print(agent.agent.llm_chain.prompt.input_variables)

# OUTPUT
# ['input', 'agent_scratchpad']

  • SimpleMemory is an easy way to store context or other bits of information that shouldn’t ever change between prompts. It requires one parameter at the time of initialization — memories. You can pass elements to it in dict form. For example, SimpleMemory(memories={"budget": "100 GBP"}).

Finally, let’s run the new chain with the same prompt as before. You’ll notice the final output has some luxury gift recommendations, such as weekend getaways, in accordance with the higher budget in our updated prompt.

overall_chain.run("If my age is half of my dad's age and he's going to be 60 next yr, what's my current age?")

# OUTPUT
#> Entering new SequentialChain chain...

#> Entering new AgentExecutor chain...
# I need to figure out my dad's current age and then divide it by two.
#Action: PAL-MATH
#Action Input: What is my dad's current age if he is going to be 60 next year?
#Observation: 59
#Thought: I now know my dad's current age, so I can divide it by two to get my age.
#Action: Divide 59 by 2
#Action Input: 59/2
#Observation: Divide 59 by 2 is not a valid tool, try another one.
#Thought: I can use PAL-MATH to divide 59 by 2.
#Action: PAL-MATH
#Action Input: Divide 59 by 2
#Observation: 29.5
#Thought: I now know the final answer.
#Final Answer: My current age is 29.5 years old.

#> Finished chain.

# For someone of your age, a good gift would be something that is both practical and meaningful. Consider something like a nice watch, a piece of jewelry, a nice leather bag, or a gift card to a favorite store or restaurant.\nIf you have a larger budget, you could consider something like a weekend getaway, a spa package, or a special experience.

#> Finished chain.

# 'For someone of your age, a good gift would be something that is both practical and meaningful. Consider something like a nice watch, a piece of jewelry, a nice leather bag, or a gift card to a favorite store or restaurant.\nIf you have a larger budget, you could consider something like a weekend getaway, a spa package, or a special experience.'
