Build LLM Agents Faster with Datapizza AI


Organizations are increasingly investing in AI as these tools become part of everyday operations. This continuous wave of innovation is fueling the demand for more efficient and reliable frameworks. Following this trend, Datapizza (the startup behind Italy’s tech community) has just released an open-source framework for GenAI in Python, called Datapizza AI.

When creating LLM-powered Agents, you need to pick an AI stack:

  • Language Model – the brain of the Agent. The first big choice is open-source vs. paid models. Then, based on the use case, one needs to think about the LLM knowledge: generic (knows a little bit of everything, like Wikipedia) vs topic-specific (i.e. fine-tuned for coding or finance).
  • LLM Engine – it’s what runs the language model, responding to prompts, inferring meaning, and generating text. Basically, it generates intelligence. The most used are OpenAI, Anthropic, Google, and Ollama (which runs open-source models locally).
  • AI Framework – it’s the orchestration layer used to build and manage workflows. To put it another way, the framework must structure the intelligence created by LLMs. At the moment, the landscape is dominated by a handful of established frameworks. The new library falls into this category and aims to be an alternative to the main ones.

In this article, I’m going to show how to use the new Datapizza framework for building LLM-powered AI Agents. I’ll present some useful Python code that can be easily applied in other similar cases (just copy, paste, run) and walk through every line of code with comments so you can replicate this example.

Setup

I’ll use Ollama as the LLM engine, because I want to host models locally on my computer. That’s standard practice for companies with sensitive data: keeping everything local gives full control over data privacy, model behavior, and cost.

First of all, you need to download Ollama from the website. Then, pick a model and run the command indicated on the page to pull the LLM. I’m going with Alibaba’s Qwen3, as it’s both smart and lightweight (ollama run qwen3).

Datapizza AI supports all the main LLM engines. We can complete the setup by running the following commands:

pip install datapizza-ai
pip install datapizza-ai-clients-openai-like

As indicated in the official documentation, we can quickly test our AI stack by calling the model with a simple prompt and asking a question. The object OpenAILikeClient() is how you connect to the Ollama API, which is usually hosted at the default localhost URL.

from datapizza.clients.openai_like import OpenAILikeClient

llm = "qwen3"

prompt = '''
You are an intelligent assistant, provide the best possible answer to the user's request. 
''' 

ollama = OpenAILikeClient(api_key="", model=llm, system_prompt=prompt, base_url="http://localhost:11434/v1")

q = '''
what time is it?
'''

llm_res = ollama.invoke(q)
print(llm_res.text)

Chatbot

Another way to test the capability of the LLM is to build a simple Chatbot and have a conversation. To do so, at every interaction, we need to store the chat history and feed it back to the model, specifying what was said by whom. The Datapizza framework already has a built-in memory system.

from datapizza.memory import Memory
from datapizza.type import TextBlock, ROLE

memory = Memory()
memory.add_turn(TextBlock(content=prompt), role=ROLE.SYSTEM)

while True:
    ## User
    q = input('🙂 >')
    if q == "quit":
        break
    
    ## LLM
    llm_res = ollama.invoke(q, memory=memory)
    res = llm_res.text
    print("🍕 >", f"\x1b[1;30m{res}\x1b[0m")

    ## Update Memory
    memory.add_turn(TextBlock(content=q), role=ROLE.USER)
    memory.add_turn(TextBlock(content=res), role=ROLE.ASSISTANT)

If you want to retrieve the chat history, you can just access the memory. Usually, AI frameworks use three roles in the interaction with an LLM: “system” (core instructions), “user” (what was said by the human), “assistant” (what the chatbot replied).

memory.to_dict()
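To make the three roles concrete, here is a minimal plain-Python sketch of a role-based chat memory. This is an illustration only, not the Datapizza API: the class name and method signatures are made up for the example.

```python
from dataclasses import dataclass, field

@dataclass
class SimpleMemory:
    '''Toy chat memory: an ordered list of (role, content) turns.'''
    turns: list = field(default_factory=list)

    def add_turn(self, content: str, role: str) -> None:
        # Append one turn, tagged with who produced it
        self.turns.append({"role": role, "content": content})

    def to_dict(self) -> list:
        # The full history in the shape most chat APIs expect
        return self.turns

mem = SimpleMemory()
mem.add_turn("You are a helpful assistant.", role="system")
mem.add_turn("what time is it?", role="user")
mem.add_turn("I don't have a clock, sorry.", role="assistant")
print(mem.to_dict())
```

Feeding this list back to the model on every call is what gives a stateless LLM the appearance of a continuous conversation.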

Obviously, the LLM alone is very limited: it can’t do much besides chatting. Therefore, we need to give it the possibility to take action, or in other words, to activate Tools.

Tools

Tools are the main difference between a simple LLM and an AI Agent. When the user requests something that goes beyond the LLM knowledge base (e.g. “what time is it?”), the Agent should understand that it doesn’t know the answer, activate a Tool to get additional information (i.e. checking the clock), elaborate the result through the LLM, and generate an answer.

The Datapizza framework allows you to create Tools from scratch very easily. You just need to import the decorator and any function can become actionable for the Agent.

from datapizza.tools import tool

@tool
def get_time() -> str:
    '''Get the current time.'''
    from datetime import datetime
    return datetime.now().strftime("%H:%M")

get_time()
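Conceptually, a tool decorator works by attaching metadata (the function’s name and docstring) that the agent can surface to the LLM when deciding what to call. The sketch below is a hypothetical stand-in to illustrate the idea; the real datapizza decorator’s internals may differ.

```python
import functools

def tool(func):
    '''Toy tool decorator: wrap a function and expose its metadata.'''
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs)
    # Metadata the agent could pass to the LLM as a tool description
    wrapper.tool_name = func.__name__
    wrapper.tool_description = func.__doc__
    return wrapper

@tool
def get_time() -> str:
    '''Get the current time.'''
    from datetime import datetime
    return datetime.now().strftime("%H:%M")

print(get_time.tool_name, "-", get_time.tool_description)
```

This is why the docstring matters: it is not just a comment, it is effectively the prompt that tells the model when the tool is relevant.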

Then, assign the designated Tool to the Agent, and you’ll have an AI that combines language understanding + autonomous decision-making + tool use.

from datapizza.agents import Agent
import os

os.environ["DATAPIZZA_AGENT_LOG_LEVEL"] = "DEBUG"  #max logging

agent = Agent(name="single-agent", client=ollama, system_prompt=prompt, 
              tools=[get_time], max_steps=2)

q = '''
what time is it?
'''

agent_res = agent.run(q)

An LLM-powered AI Agent is an intelligent system built around a language model that doesn’t just respond: it reasons, decides, and acts. Besides conversation (which means chatting with a general-purpose knowledge base), the most common actions that Agents can perform are RAG (chatting with your documents), Querying (chatting with a database), and Web Search (chatting with the whole Web).

For example, let’s try a web search Tool. In Python, the easiest way to do it is with the famous private search engine DuckDuckGo. You can directly use the original library or the Datapizza framework wrapper (pip install datapizza-ai-tools-duckduckgo).

from datapizza.tools.duckduckgo import DuckDuckGoSearchTool

DuckDuckGoSearchTool().search(query="powell")

Let’s create an Agent that can search the web for us. If you want to make it more interactive, you can structure the AI like I did for the Chatbot.

os.environ["DATAPIZZA_AGENT_LOG_LEVEL"] = "ERROR" #turn off logging

prompt = '''
You are a journalist. You must make assumptions, use your tool to research, make a guess, and formulate a final answer.
The final answer must contain facts, dates, and evidence to support your guess.
'''

memory = Memory()

agent = Agent(name="single-agent", client=ollama, system_prompt=prompt, 
              tools=[DuckDuckGoSearchTool()], 
              memory=memory, max_steps=2)

while True:
    ## User
    q = input('🙂 >')
    if q == "quit":
        break
    
    ## Agent
    agent_res = agent.run(q)
    res = agent_res.text
    print("🍕 >", f"\x1b[1;30m{res}\x1b[0m")

    ## Update Memory
    memory.add_turn(TextBlock(content=q), role=ROLE.USER)
    memory.add_turn(TextBlock(content=res), role=ROLE.ASSISTANT)

Multi-Agent System

The real strength of Agents is the ability to collaborate with each other, just like humans do. These teams are called Multi-Agent Systems (MAS), a group of AI Agents that work together in a shared environment to solve complex problems that are too difficult for a single one to handle alone.

This time, let’s create a more advanced Tool: code execution. Please note that LLMs know how to code because they have been exposed to a large corpus of both code and natural language text, where they learn patterns, syntax, and semantics of programming languages. But since they cannot perform any real action, the code they produce is just text. In short, LLMs can generate Python code but can’t execute it; Agents can.

import io
import contextlib

@tool
def code_exec(code:str) -> str:
    '''Execute Python code. Always use the function print() to get the output.'''
    output = io.StringIO()
    with contextlib.redirect_stdout(output):
        try:
            exec(code)
        except Exception as e:
            print(f"Error: {e}")
    return output.getvalue()

code_exec("from datetime import datetime; print(datetime.now().strftime('%H:%M'))")
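A word of caution: calling exec() on LLM-generated code is inherently risky. One simple hardening step (a sketch, not a real sandbox) is to run the code with a restricted set of builtins, so things like import are unavailable by default:

```python
import io
import contextlib

# Whitelist of builtins the generated code may use (illustrative choice)
ALLOWED_BUILTINS = {"print": print, "range": range, "len": len}

def safe_code_exec(code: str) -> str:
    '''Run code with restricted builtins; errors are captured as text.'''
    output = io.StringIO()
    with contextlib.redirect_stdout(output):
        try:
            # Passing a custom __builtins__ blocks import and most builtins
            exec(code, {"__builtins__": ALLOWED_BUILTINS})
        except Exception as e:
            print(f"Error: {e}")
    return output.getvalue()

print(safe_code_exec("print(len(range(5)))"))  # prints 5
print(safe_code_exec("import os"))             # fails: __import__ not available
```

This stops naive misuse, but it is not a security boundary; for untrusted code, a subprocess with timeouts or a container is the safer route.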

There are two types of MAS: the sequential process ensures tasks are executed one after the other, following a linear progression. On the other hand, the hierarchical structure simulates traditional organizational hierarchies for efficient task delegation and execution. Personally, I tend to prefer the latter as there is more parallelism and flexibility.
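The two patterns can be illustrated with plain functions standing in for agents (no LLM involved; the agent names and outputs are invented for the example):

```python
def researcher(task: str) -> str:
    # Stand-in for an agent that gathers information
    return f"facts about '{task}'"

def writer(material: str) -> str:
    # Stand-in for an agent that turns material into prose
    return f"article based on {material}"

def sequential(task: str) -> str:
    '''Sequential MAS: each agent's output feeds the next, in a fixed order.'''
    return writer(researcher(task))

def hierarchical(task: str) -> str:
    '''Hierarchical MAS: a manager picks which worker handles the task.'''
    worker = researcher if "research" in task else writer
    return worker(task)

print(sequential("GenAI frameworks"))
print(hierarchical("research GenAI frameworks"))
```

In the sequential version the routing is hard-coded; in the hierarchical version the manager decides at runtime, which is where the extra parallelism and flexibility come from.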

With the Datapizza framework, you can link two or more Agents with the method can_call(). In this way, one Agent can pass the current task to another Agent.

prompt_senior = '''
You are a senior Python coder. You check the code generated by the Junior, 
and use your tool to execute the code only if it's correct and safe.
'''
agent_senior = Agent(name="agent-senior", client=ollama, system_prompt=prompt_senior, 
                     tools=[code_exec])

prompt_junior = '''
You are a junior Python coder. You can generate code but you can't execute it. 
You receive a request from the Manager, and your final output must be Python code to pass on.
If you don't know some specific commands, you can use your tool and search the web for " ... with python?".
'''
agent_junior = Agent(name="agent-junior", client=ollama, system_prompt=prompt_junior, 
                     tools=[DuckDuckGoSearchTool()])
agent_junior.can_call([agent_senior])

prompt_manager = '''
You know nothing, you are just a manager. After you get a request from the user, 
first you ask the Junior to generate the code, then you ask the Senior to execute it.
'''
agent_manager = Agent(name="agent-manager", client=ollama, system_prompt=prompt_manager, 
                      tools=[])
agent_manager.can_call([agent_junior, agent_senior])

q = '''
Plot the Titanic dataframe. You can find the data here: 
https://raw.githubusercontent.com/mdipietro09/DataScience_ArtificialIntelligence_Utils/master/machine_learning/data_titanic.csv
'''

agent_res = agent_manager.run(q)
#print(agent_res.text)

Conclusion

This article has been a tutorial introducing Datapizza AI, a brand new framework to build LLM-powered Chatbots and AI Agents. The library is very flexible and user-friendly, and can cover different GenAI use cases. I used it with Ollama, but it can be linked with all the famous engines, like OpenAI.

Full code for this text: GitHub

I hope you enjoyed it! Feel free to contact me for questions and feedback or just to share your interesting projects.

👉 Let’s Connect 👈

(All images are by the author unless otherwise noted)
