In Part 1 of this tutorial series, we introduced AI Agents, autonomous programs that perform tasks, make decisions, and communicate with others.
Agents perform actions through Tools. It can happen that a Tool doesn't work on the first try, or that multiple Tools need to be activated in sequence. Agents should be able to organize tasks into a logical progression and adapt their strategies in a dynamic environment.
To put it simply, the Agent's structure must be solid, and its behavior must be reliable. The most common way to achieve that is through:
- Iterations – repeating a certain action multiple times, often with slight changes or improvements in each cycle. Each time might involve the Agent revisiting certain steps to refine its output or reach an optimal solution.
- Chains – a series of actions that are linked together in a sequence. Each step in the chain depends on the previous one, and the output of one action becomes the input for the next (a minimal sketch of both patterns follows right after this list).
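Before applying them to a real Agent, here is a minimal plain-Python sketch of the two patterns. The do_action and do_next_action helpers are hypothetical stand-ins, purely to illustrate the control flow:

## toy stand-ins for real Agent actions (hypothetical, for illustration only)
def do_action(attempt:int) -> str:
    return 'ok' if attempt == 2 else 'no'  # pretend the action only works on the third try

def do_next_action(previous_output:str) -> str:
    return f"processed {previous_output}"

## Iteration: repeat the same action until it works or we run out of attempts
result, attempts = 'no', 0
while result == 'no' and attempts < 3:
    result = do_action(attempts)
    attempts += 1

## Chain: the output of one step becomes the input of the next
print(do_next_action(result)) #> processed ok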
In this tutorial, I'm going to show how to use iterations and chains for Agents. I'll present some useful Python code that can be easily applied in other similar cases (just copy, paste, run) and walk through every line of code with comments, so that you can replicate this example (link to the full code at the end of the article).
Setup
Please refer to Part 1 for the setup of Ollama and the main LLM.
import ollama
llm = "qwen2.5"
We will use the Yahoo Finance public APIs with the yfinance Python library (pip install yfinance==0.2.55) to download financial data.
import yfinance as yf
stock = "MSFT"
yf.Ticker(ticker=stock).history(period='5d') #1d,5d,1mo,3mo,6mo,1y,2y,5y,10y,ytd,max
Let's embed that into a Tool.
import matplotlib.pyplot as plt

def get_stock(ticker:str, period:str, col:str):
    ## download the stock history for the given ticker and period
    data = yf.Ticker(ticker=ticker).history(period=period)
    if len(data) > 0:
        data[col].plot(color="black", legend=True, xlabel='', title=f"{ticker.upper()} ({period})").grid()
        plt.show()
        return 'ok'
    else:
        return 'no'
tool_get_stock = {'type':'function', 'function':{
  'name': 'get_stock',
  'description': 'Download stock data',
  'parameters': {'type': 'object',
                 'required': ['ticker','period','col'],
                 'properties': {
                     'ticker': {'type':'str', 'description':'the ticker symbol of the stock.'},
                     'period': {'type':'str', 'description':"for 1 month input '1mo', for 6 months input '6mo', for 1 year input '1y'. Use '1y' if not specified."},
                     'col': {'type':'str', 'description':"one of 'Open','High','Low','Close','Volume'. Use 'Close' if not specified."},
}}}}
## test
get_stock(ticker="msft", period="1y", col="Close")
Moreover, taking the code from the previous article as a reference, I shall write a general function to process the model response, such as when the Agent wants to use a Tool or when it just returns text.
def use_tool(agent_res:dict, dic_tools:dict) -> dict:
    ## use tool
    if "tool_calls" in agent_res["message"].keys():
        for tool in agent_res["message"]["tool_calls"]:
            t_name, t_inputs = tool["function"]["name"], tool["function"]["arguments"]
            if f := dic_tools.get(t_name):
                ### calling tool
                print('🔧 >', f"\x1b[1;31m{t_name} -> Inputs: {t_inputs}\x1b[0m")
                ### tool output
                t_output = f(**tool["function"]["arguments"])
                print(t_output)
                ### final res
                res = t_output
            else:
                print('🤬 >', f"\x1b[1;31m{t_name} -> NotFound\x1b[0m")
    ## don't use tool
    if agent_res['message']['content'] != '':
        res = agent_res["message"]["content"]
        t_name, t_inputs = '', ''
    return {'res':res, 'tool_used':t_name, 'inputs_used':t_inputs}
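Before plugging it into a conversation, we can sanity-check use_tool with a hand-crafted dictionary that mimics the structure of an ollama.chat response (the values below are made up, and the call triggers a real yfinance download and plot):

## fake model response mimicking the structure of ollama.chat output (made-up values, for testing only)
fake_res = {"message": {"content": "",
                        "tool_calls": [{"function": {"name": "get_stock",
                                                     "arguments": {"ticker":"MSFT", "period":"1y", "col":"Close"}}}]}}
print(use_tool(fake_res, dic_tools={"get_stock":get_stock}))
#> {'res': 'ok', 'tool_used': 'get_stock', 'inputs_used': {'ticker': 'MSFT', 'period': '1y', 'col': 'Close'}}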
Let's start a quick conversation with our Agent. For now, I'm going to use a simple generic prompt.
prompt = '''You are a financial analyst, assist the user using your available tools.'''
messages = [{"role":"system", "content":prompt}]
dic_tools = {'get_stock':get_stock}
while True:
    ## user input
    try:
        q = input('🙂 >')
    except EOFError:
        break
    if q == "quit":
        break
    if q.strip() == "":
        continue
    messages.append( {"role":"user", "content":q} )
    ## model
    agent_res = ollama.chat(model=llm, messages=messages,
                            tools=[tool_get_stock])
    dic_res = use_tool(agent_res, dic_tools)
    res, tool_used, inputs_used = dic_res["res"], dic_res["tool_used"], dic_res["inputs_used"]
    ## final response
    print("👽 >", f"\x1b[1;30m{res}\x1b[0m")
    messages.append( {"role":"assistant", "content":res} )
As you can see, I started by asking an “easy” question. The LLM already knows that the symbol of Microsoft stock is MSFT, therefore the Agent was able to activate the Tool with the right inputs. But what if I ask something that might not be included in the LLM knowledge base?
It turns out that the LLM doesn't know that Facebook changed its name to META, so it used the Tool with the wrong inputs. I will enable the Agent to retry an action several times through iterations.
Iterations
Iterations refer to the repetition of a process until a certain condition is met. We can let the Agent try a specific number of times, but we need to let it know that the previous parameters didn’t work, by adding the details in the message history.
max_i, i = 3, 0
while res == 'no' and i < max_i:
    comment = f'''I used tool '{tool_used}' with inputs {inputs_used}. But it didn't work, so I must try again with different inputs.'''
    messages.append( {"role":"assistant", "content":comment} )
    agent_res = ollama.chat(model=llm, messages=messages,
                            tools=[tool_get_stock])
    dic_res = use_tool(agent_res, dic_tools)
    res, tool_used, inputs_used = dic_res["res"], dic_res["tool_used"], dic_res["inputs_used"]
    i += 1
    if i == max_i:
        res = f'I tried {i} times but something is wrong'

## final response
print("👽 >", f"\x1b[1;30m{res}\x1b[0m")
messages.append( {"role":"assistant", "content":res} )
The Agent tried 3 times with different inputs but it couldn’t find a solution because there is a gap in the LLM knowledge base. In this case, the model needed human input to understand how to use the Tool.
Next, we’re going to enable the Agent to fill the knowledge gap by itself.
Chains
A chain refers to a linear sequence of actions where the output of one step is used as the input for the next step. In this example, I will add another Tool that the Agent can use in case the first one fails.
We can use the web-searching Tool from the previous article.
from langchain_community.tools import DuckDuckGoSearchResults

def search_web(query:str) -> str:
    return DuckDuckGoSearchResults(backend="news").run(query)
tool_search_web = {'type':'function', 'function':{
  'name': 'search_web',
  'description': 'Search the web',
  'parameters': {'type': 'object',
                 'required': ['query'],
                 'properties': {
                     'query': {'type':'str', 'description':'the topic or subject to search on the web'},
}}}}
## test
search_web(query="facebook stock")
So far, I've always used very generic prompts because the tasks were relatively simple. Now, I want to make sure the Agent understands how to use the Tools in the right order, so I'm going to write a proper prompt. This is how a prompt should be structured:
- The goal of the Agent
- What it must return (i.e. format, content)
- Any relevant warnings that may affect the output
- Context dump
prompt = '''
[GOAL] You are a financial analyst, assist the user using your available tools.
[RETURN] You must return the stock data that the user asks for.
[WARNINGS] In order to retrieve stock data, you need to know the ticker symbol of the company.
[CONTEXT] First ALWAYS try to use the tool 'get_stock'.
If it doesn't work, you can use the tool 'search_web' and search 'company name stock'.
Get information about the stock and deduce the correct ticker symbol of the company.
Then, you can use AGAIN the tool 'get_stock' with the ticker you got using the previous tool.
'''
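One practical detail worth making explicit (my assumption, since this step isn't shown above): restart the conversation by re-initializing the message history with the new system prompt, and register search_web in dic_tools, otherwise use_tool would not find it.

## restart the conversation with the new system prompt and register both Tools
dic_tools = {'get_stock':get_stock, 'search_web':search_web}
messages = [{"role":"system", "content":prompt}]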
We can simply add the chain to the iteration loop that we already have. This time the Agent has two Tools, and when the first one fails, the model can decide whether to retry or to use the second one. Then, if the second Tool is used, the Agent must process its output and work out the correct input for the first Tool that originally failed.
max_i, i = 3, 0
while res in ['no',''] and i < max_i:
    comment = f'''I used tool '{tool_used}' with inputs {inputs_used}. But it didn't work, so I must try a different way.'''
    messages.append( {"role":"assistant", "content":comment} )
    agent_res = ollama.chat(model=llm, messages=messages,
                            tools=[tool_get_stock, tool_search_web])
    dic_res = use_tool(agent_res, dic_tools)
    res, tool_used, inputs_used = dic_res["res"], dic_res["tool_used"], dic_res["inputs_used"]
    ## chain: output of the previous tool = input of the next tool
    if tool_used == 'search_web':
        query = q+". You must return just the company ticker.\nContext: "+res
        llm_res = ollama.generate(model=llm, prompt=query)["response"]
        messages.append( {"role":"user", "content":f"try ticker: {llm_res}"} )
        print("👽 >", f"\x1b[1;30mI can try with {llm_res}\x1b[0m")
        agent_res = ollama.chat(model=llm, messages=messages, tools=[tool_get_stock])
        dic_res = use_tool(agent_res, dic_tools)
        res, tool_used, inputs_used = dic_res["res"], dic_res["tool_used"], dic_res["inputs_used"]
    i += 1
    if i == max_i:
        res = f'I tried {i} times but something is wrong'

## final response
print("👽 >", f"\x1b[1;30m{res}\x1b[0m")
messages.append( {"role":"assistant", "content":res} )
As expected, the Agent tried to use the first Tool with the wrong inputs, but instead of repeating the same action as before, it decided to use the second Tool. By consuming new information, it can work out the solution without the need for human input.
In summary, the AI tried to perform an action but failed due to a gap in its knowledge base. So it activated Tools to fill that gap and deliver the output requested by the user… that is indeed the true essence of AI Agents.
Conclusion
This article has covered more structured ways to make Agents more reliable, using iterations and chains. With these building blocks in place, you are already equipped to start developing your own Agents for different use cases.
Stay tuned for Part 3, where we’ll dive deeper into more advanced examples.
Full code for this article: GitHub
I hope you enjoyed it! Feel free to contact me for questions and feedback, or just to share your interesting projects.
👉 Let’s Connect 👈