AI Agents from Zero to Hero – Part 1



Intro

AI Agents are autonomous programs that perform tasks, make decisions, and communicate with others. Usually, they use a set of tools to help complete tasks. In GenAI applications, these Agents perform sequential reasoning and can use external tools (like web searches or database queries) when the LLM's knowledge isn't enough. Unlike a basic chatbot, which generates random text when uncertain, an AI Agent activates tools to provide more accurate, specific responses.

We're moving closer and closer to the concept of Agentic AI: systems that exhibit a higher level of autonomy and decision-making ability, without direct human intervention. While today's AI Agents respond reactively to human inputs, tomorrow's Agentic AIs will proactively engage in problem-solving and adjust their behavior based on the situation.

Today, building Agents from scratch is becoming as easy as training a logistic regression model 10 years ago. Back then, libraries like Scikit-Learn made it possible to quickly train Machine Learning models with just a few lines of code, abstracting away much of the underlying complexity.
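Just to illustrate the comparison, here is a minimal Scikit-Learn sketch of that era's workflow (using the library's built-in iris dataset):

from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

## load a toy dataset and train a logistic regression in a few lines
X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000).fit(X, y)
print(model.score(X, y))  ## accuracy on the training data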

In this tutorial, I'm going to show how to build different types of AI Agents from scratch, from simple to more advanced systems. I'll present some useful Python code that can be easily applied in other similar cases (just copy, paste, run), and I'll walk through every line of code with comments so that you can replicate this example.

Setup

As I said, anyone can have a custom Agent running locally, free of charge, without GPUs or API keys. The only required library is Ollama (pip install ollama==0.4.7), as it allows users to run LLMs locally, without needing cloud-based services, giving more control over data privacy and performance.

First of all, you need to download Ollama from the website.

Then, in your computer's terminal shell, use the command "ollama pull qwen2.5" to download the chosen LLM. I'm going with Alibaba's Qwen 2.5, because it's both smart and lightweight.

After the download is completed, you can move on to Python and start writing code.

import ollama
llm = "qwen2.5"

Let’s test the LLM:

stream = ollama.generate(model=llm, prompt='''what time is it?''', stream=True)
for chunk in stream:
    print(chunk['response'], end='', flush=True)

Obviously, the LLM per se is very limited, and it can't do much besides chatting. Therefore, we need to give it the possibility to take action, or, in other words, to activate Tools.

One of the most common tools is the ability to search the Web. In Python, the easiest way to do it is with the famous private search engine DuckDuckGo (pip install duckduckgo-search==6.3.5). You can use the original library directly or import the LangChain wrapper (pip install langchain-community==0.3.17).

With Ollama, in order to use a Tool, the function must be described in a dictionary.

from langchain_community.tools import DuckDuckGoSearchResults
def search_web(query: str) -> str:
  return DuckDuckGoSearchResults(backend="news").run(query)

tool_search_web = {'type':'function', 'function':{
  'name': 'search_web',
  'description': 'Search the web',
  'parameters': {'type': 'object',
                'required': ['query'],
                'properties': {
                    'query': {'type':'string', 'description':'the topic or subject to search on the web'},
}}}}
## test
search_web(query="nvidia")

Web searches can be very broad, and I want to give the Agent the option to be more precise. Let's say I'm planning to use this Agent to learn about financial updates, so I can give it a specific tool for that topic, like searching only a finance website instead of the whole web.

def search_yf(query: str) -> str:
  engine = DuckDuckGoSearchResults(backend="news")
  return engine.run(f"site:finance.yahoo.com {query}")

tool_search_yf = {'type':'function', 'function':{
  'name': 'search_yf',
  'description': 'Search for specific financial news',
  'parameters': {'type': 'object',
                'required': ['query'],
                'properties': {
                    'query': {'type':'string', 'description':'the financial topic or subject to search'},
}}}}

## test
search_yf(query="nvidia")

Simple Agent (WebSearch)

In my opinion, the most basic Agent should at least be able to choose between one or two Tools and re-elaborate the output of the action to give the user a proper and concise answer.

First, you need to write a prompt to describe the Agent's purpose; the more detailed, the better (mine is very generic). That will be the first message in the chat history with the LLM.

prompt=""'You might be an assistant with access to tools, you will need to determine when to make use of tools to reply user message.''' 
messages = [{"role":"system", "content":prompt}]

In order to keep the chat with the AI alive, I'll use a loop that starts with the user's input, after which the Agent is invoked to answer (which can be a text from the LLM or the activation of a Tool).

while True:
    ## user input
    try:
        q = input('🙂 >')
    except EOFError:
        break
    if q == "quit":
        break
    if q.strip() == "":
        continue
    messages.append( {"role":"user", "content":q} )
   
    ## model
    agent_res = ollama.chat(
        model=llm,
        tools=[tool_search_web, tool_search_yf],
        messages=messages)

Up to this point, the chat history contains the system prompt and the user's question.

If the model wants to use a Tool, the response object includes the appropriate function to run, along with the input parameters suggested by the LLM. So our code needs to get that information and run the Tool function.
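For reference, this is a hand-written sketch of the structure our code expects from a tool-call response (the real object is Ollama's response type, and the values here are made up):

## illustrative shape of a tool-call response (values are made up)
agent_res = {"message": {
    "role": "assistant",
    "content": "",
    "tool_calls": [
        {"function": {"name": "search_web",
                      "arguments": {"query": "nvidia"}}}
]}}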

## response
    dic_tools = {'search_web':search_web, 'search_yf':search_yf}

    if "tool_calls" in agent_res["message"].keys():
        for tool in agent_res["message"]["tool_calls"]:
            t_name, t_inputs = tool["function"]["name"], tool["function"]["arguments"]
            if f := dic_tools.get(t_name):
                ### calling tool
                print('🔧 >', f"\x1b[1;31m{t_name} -> Inputs: {t_inputs}\x1b[0m")
                messages.append( {"role":"user", "content":"use tool '"+t_name+"' with inputs: "+str(t_inputs)} )
                ### tool output
                t_output = f(**tool["function"]["arguments"])
                print(t_output)
                ### final res
                p = f'''Summarize this to reply user query, be as concise as possible: {t_output}'''
                res = ollama.generate(model=llm, prompt=q+". "+p)["response"]
            else:
                print('🤬 >', f"\x1b[1;31m{t_name} -> NotFound\x1b[0m")
 
    if agent_res['message']['content'] != '':
        res = agent_res["message"]["content"]
     
    print("👽 >", f"x1b[1;30m{res}x1b[0m")
    messages.append( {"role":"assistant", "content":res} )

Now, if we run the full code, we can chat with our Agent.

Advanced Agent (Coding)

LLMs know how to code because they have been exposed to a large corpus of both code and natural language text, where they learn patterns, syntax, and semantics of programming languages. The model learns the relationships between different parts of the code by predicting the next token in a sequence. In short, LLMs can generate Python code but can't execute it; Agents can.

I shall prepare a Tool allowing the Agent to execute code. In Python, you can easily create a shell to run code as a string with the native exec() command.

import io
import contextlib
import pandas as pd              ## preloaded so code that assumes 'pd' (as our prompt does) runs as-is
import matplotlib.pyplot as plt  ## preloaded so code that assumes 'plt' runs as-is

## persistent namespace: variables the Agent creates (like the dataframe 'df') survive across calls
namespace = {"pd": pd, "plt": plt}

def code_exec(code: str) -> str:
    output = io.StringIO()
    ## capture anything the executed code prints
    with contextlib.redirect_stdout(output):
        try:
            exec(code, namespace)
        except Exception as e:
            print(f"Error: {e}")
    return output.getvalue()

tool_code_exec = {'type':'function', 'function':{
  'name': 'code_exec',
  'description': 'execute python code',
  'parameters': {'type': 'object',
                'required': ['code'],
                'properties': {
                    'code': {'type':'string', 'description':'code to execute'},
}}}}

## test
code_exec("a=1+1; print(a)")

Just like before, I'll write a prompt, but this time, at the beginning of the chat loop, I'll ask the user to provide a file path.

prompt=""'You might be an authority data scientist, and you might have tools to execute python code.
To start with, execute the next code exactly because it is: 'df=pd.read_csv(path); print(df.head())'
For those who create a plot, ALWAYS add 'plt.show()' at the tip.
'''
messages = [{"role":"system", "content":prompt}]
start = True

while True:
    ## user input
    try:
        if start is True:
            path = input('📁 Provide a CSV path >')
            q = "path = "+path
        else:
            q = input('🙂 >')
    except EOFError:
        break
    if q == "quit":
        break
    if q.strip() == "":
        continue
   
    messages.append( {"role":"user", "content":q} )

Since coding tasks can be a little trickier for LLMs, I'm also going to add memory reinforcement. By default, during one session, there is no real long-term memory. LLMs have access to the chat history, so they can remember information temporarily and track the context and instructions you've given earlier in the conversation. However, memory doesn't always work as expected, especially if the LLM is small. Therefore, a good practice is to reinforce the model's memory by adding periodic reminders in the chat history.

prompt=""'You might be an authority data scientist, and you might have tools to execute python code.
To start with, execute the next code exactly because it is: 'df=pd.read_csv(path); print(df.head())'
For those who create a plot, ALWAYS add 'plt.show()' at the tip.
'''
messages = [{"role":"system", "content":prompt}]
memory = '''Use the dataframe 'df'.'''
start = True

while True:
    ## user input
    try:
        if start is True:
            path = input('📁 Provide a CSV path >')
            q = "path = "+path
        else:
            q = input('🙂 >')
    except EOFError:
        break
    if q == "quit":
        break
    if q.strip() == "":
        continue
   
    ## memory
    if start is False:
        q = memory+"\n"+q
    messages.append( {"role":"user", "content":q} )

Please note that the default context window in Ollama is 2048 tokens. If your machine can handle it, you can increase it by changing the number when the LLM is invoked:

    ## model
    agent_res = ollama.chat(
        model=llm,
        tools=[tool_code_exec],
        options={"num_ctx":2048},
        messages=messages)
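
For example, assuming your machine has enough RAM, doubling the context window would look like this:

    ## same call as above, but with a doubled context window
    agent_res = ollama.chat(
        model=llm,
        tools=[tool_code_exec],
        options={"num_ctx":4096},
        messages=messages)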

In this use case, the output of the Agent is mostly code and data, so I don't want the LLM to re-elaborate the responses.

    ## response
    dic_tools = {'code_exec':code_exec}
   
    if "tool_calls" in agent_res["message"].keys():
        for tool in agent_res["message"]["tool_calls"]:
            t_name, t_inputs = tool["function"]["name"], tool["function"]["arguments"]
            if f := dic_tools.get(t_name):
                ### calling tool
                print('🔧 >', f"\x1b[1;31m{t_name} -> Inputs: {t_inputs}\x1b[0m")
                messages.append( {"role":"user", "content":"use tool '"+t_name+"' with inputs: "+str(t_inputs)} )
                ### tool output
                t_output = f(**tool["function"]["arguments"])
                ### final res
                res = t_output
            else:
                print('🤬 >', f"\x1b[1;31m{t_name} -> NotFound\x1b[0m")
 
    if agent_res['message']['content'] != '':
        res = agent_res["message"]["content"]
     
    print("👽 >", f"x1b[1;30m{res}x1b[0m")
    messages.append( {"role":"assistant", "content":res} )
    start = False

Now, if we run the full code, we can chat with our Agent.

Conclusion

This article has covered the foundational steps of creating Agents from scratch using only Ollama. With these building blocks in place, you are already equipped to start developing your own Agents for different use cases.

Stay tuned for Part 2, where we will dive deeper into more advanced examples.

Full code for this article: GitHub

I hope you enjoyed it! Feel free to contact me for questions and feedback, or just to share your interesting projects.

👉 Let’s Connect 👈
