Trace & Evaluate your Agent with Arize Phoenix




So, you’ve built your agent. It takes in inputs and tools, processes them, and generates responses. Maybe it’s making decisions, retrieving information, executing tasks autonomously, or all three. But now comes the big question: how effectively is it performing? And more importantly, how do you know?

Building an agent is one thing; understanding its behavior is another. That’s where tracing and evaluation come in. Tracing lets you see exactly what your agent is doing step by step: what inputs it receives, how it processes information, and how it arrives at its final output. Think of it as an X-ray into your agent’s decision-making process. Meanwhile, evaluation helps you measure performance, ensuring your agent isn’t just functional, but actually effective. Is it producing the right answers? How relevant are its findings at each step? How well-crafted is the agent’s response? Does it align with your goals?

Arize Phoenix provides a centralized platform to trace, evaluate, and debug your agent’s decisions in real time, all in one place. We’ll dive into how you can use it to refine and optimize your agent. Because building is only the beginning: true intelligence comes from knowing exactly what’s happening under the hood.

For this, let’s make sure we have an agent set up! You can follow along with the steps below or use your own agent.



Make An Agent



Step 1: Install the Required Libraries

pip install -q smolagents



Step 2: Import the Necessary Building Blocks

Now let’s bring in the classes and tools we’ll be using:

from smolagents import (
   CodeAgent,
   DuckDuckGoSearchTool,
   VisitWebpageTool,
   HfApiModel,
)



Step 3: Set Up Our Base Model

We’ll create a model instance powered by the Hugging Face Hub Serverless API:

hf_model = HfApiModel()
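
With no arguments, HfApiModel falls back to the library’s default model. If you want to pin a specific model from the Hugging Face Hub, you can pass a model_id instead; this is a minimal sketch, and the model name below is only an example, not a requirement of this tutorial:

# Optional: pin a specific Hub model (the model id here is just an example).
hf_model = HfApiModel(model_id="Qwen/Qwen2.5-Coder-32B-Instruct")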



Step 4: Create the Tool-Calling Agent

agent = CodeAgent(
    tools=[DuckDuckGoSearchTool(), VisitWebpageTool()],
    model=hf_model,
    add_base_tools=True
)



Step 5: Run the Agent

Now for the magic moment: let’s see our agent in action. The question we’re asking our agent is:
“Fetch the share price of Google from 2020 to 2024, and create a line graph from it?”

agent.run("fetch the share price of google from 2020 to 2024, and create a line graph from it?")

Your agent will now:

  1. Use DuckDuckGoSearchTool to search for historical share prices of Google.
  2. Potentially visit pages with the VisitWebpageTool to find that data.
  3. Attempt to collect the information and generate or describe how to create the line graph.



Trace Your Agent

Once your agent is running, the next challenge is making sense of its internal workflow. Tracing helps you track each step your agent takes, from invoking tools to processing inputs and generating responses, allowing you to debug issues, optimize performance, and ensure it behaves as expected.

To enable tracing, we’ll use Arize Phoenix for visualization, and OpenTelemetry + OpenInference for instrumentation.

Install the telemetry module from smolagents:

pip install -q 'smolagents[telemetry]'

You can run Phoenix in a number of different ways. This command will run a local instance of Phoenix on your machine:

python -m phoenix.server.main serve

For other hosting options, you can create a free online instance of Phoenix, self-host the application locally, or host the application on Hugging Face Spaces.
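
If you go with a hosted instance rather than the local server, you generally point the Phoenix client at it through environment variables before registering the tracer provider. This is a minimal sketch assuming a Phoenix Cloud endpoint and an API key you have created; both values below are placeholders:

import os

# Placeholders: substitute your own Phoenix endpoint and API key.
os.environ["PHOENIX_COLLECTOR_ENDPOINT"] = "https://app.phoenix.arize.com"
os.environ["PHOENIX_CLIENT_HEADERS"] = "api_key=YOUR_PHOENIX_API_KEY"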

After launching, we register a tracer provider, pointing to our Phoenix instance.

from phoenix.otel import register
from openinference.instrumentation.smolagents import SmolagentsInstrumentor

tracer_provider = register(project_name="my-smolagents-app") 
SmolagentsInstrumentor().instrument(tracer_provider=tracer_provider) 

Now any calls made to smolagents will be sent through to our Phoenix instance.

Now that tracing is enabled, let’s test it with a straightforward query:

agent.run("What time is it in Tokyo without delay?")

Once OpenInference is set up with smolagents, every agent invocation will be automatically traced in Phoenix.



Evaluate Your Agent

Once your agent is up and its runs are monitored, the next step is to evaluate its performance. Evaluations (evals) help determine how well your agent is retrieving, processing, and presenting information.

There are many kinds of evals you can run, such as response relevance, factual accuracy, latency, and more. Check out the Phoenix documentation for a deeper dive into different evaluation techniques.

In this example, we’ll focus on evaluating the DuckDuckGo search tool used by our agent. We’ll measure the relevance of its search results using a Large Language Model (LLM) as a judge: specifically, OpenAI’s GPT-4o.



Step 1: Install OpenAI

First, install the necessary packages:

pip install -q openai

We’ll be using GPT-4o to judge whether the search tool’s responses are relevant.
This method, known as LLM-as-a-judge, leverages language models to classify and score responses.
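
The evaluation calls below need an OpenAI API key available to the client. Here is a minimal sketch for setting it interactively; set the key however you prefer in your own environment:

import os
from getpass import getpass

# Provide the key only if it isn't already set in the environment.
if not os.environ.get("OPENAI_API_KEY"):
    os.environ["OPENAI_API_KEY"] = getpass("Enter your OpenAI API key: ")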



Step 2: Retrieve Tool Execution Spans

To evaluate how well DuckDuckGo is retrieving information, we first need to extract the execution traces (spans) where the tool was called.

from phoenix.trace.dsl import SpanQuery
import phoenix as px
import json

# Select only the spans produced by the DuckDuckGo search tool, keeping
# the tool input as "input" and the tool output as "reference".
query = SpanQuery().where(
    "name == 'DuckDuckGoSearchTool'",
).select(
    input="input.value",
    reference="output.value",
)

# Pull the matching spans from Phoenix into a dataframe.
tool_spans = px.Client().query_spans(query, project_name="my-smolagents-app")

# The tool input is stored as JSON; extract the raw search query string.
tool_spans["input"] = tool_spans["input"].apply(lambda x: json.loads(x).get("kwargs", {}).get("query", ""))
tool_spans.head()



Step 3: Import Prompt Template

Next, we load the RAG Relevancy Prompt Template, which will help the LLM classify whether the search results are relevant or not.

from phoenix.evals import (
    RAG_RELEVANCY_PROMPT_RAILS_MAP,
    RAG_RELEVANCY_PROMPT_TEMPLATE,
    OpenAIModel,
    llm_classify
)
import nest_asyncio
nest_asyncio.apply()

print(RAG_RELEVANCY_PROMPT_TEMPLATE)



Step 4: Run the Evaluation

Now, we run the evaluation using GPT-4o as the judge:

from phoenix.evals import (
    llm_classify,
    OpenAIModel,
    RAG_RELEVANCY_PROMPT_TEMPLATE,
)

eval_model = OpenAIModel(model="gpt-4o")

eval_results = llm_classify(
    dataframe=tool_spans,
    model=eval_model,
    template=RAG_RELEVANCY_PROMPT_TEMPLATE,
    rails=["relevant", "unrelated"],
    concurrency=10,
    provide_explanation=True,
)
eval_results["score"] = eval_results["explanation"].apply(lambda x: 1 if "relevant" in x else 0)

What’s happening here?

  • We use GPT-4o to analyze the search query (input) and search result (output).
  • The LLM classifies whether the result is relevant or unrelated based on the prompt.
  • We assign a binary score (1 = relevant, 0 = unrelated) for further analysis.

To see your results:

eval_results.head()
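
For a quick aggregate view rather than row-by-row results, you can average the binary scores; a small sketch assuming the score column added above:

# Fraction of search calls judged relevant.
print("Relevance rate:", eval_results["score"].mean())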



Step 5: Send Evaluation Results to Phoenix

from phoenix.trace import SpanEvaluations

px.Client().log_evaluations(SpanEvaluations(eval_name="DuckDuckGoSearchTool Relevancy", dataframe=eval_results))

With this setup, we can now systematically evaluate the effectiveness of the DuckDuckGo search tool within our agent. Using LLM-as-a-judge, we can ensure our agent retrieves accurate and relevant information, leading to better performance.
Any evaluation is easy to set up with this tutorial: just swap out the RAG_RELEVANCY_PROMPT_TEMPLATE for a different prompt template that matches your needs. Phoenix provides a variety of pre-written and pre-tested evaluation templates, covering areas like faithfulness, response coherence, factual accuracy, and more. Check out the Phoenix docs to explore the full list and find the best fit for your agent!

Evaluation Template | Applicable Agent Type
Hallucination Detection | RAG agents, general chatbots, knowledge-based assistants
Q&A on Retrieved Data | RAG agents, research assistants, document search tools
RAG Relevance | RAG agents, search-based AI assistants
Summarization | Summarization tools, document digesters, meeting note generators
Code Generation | Code assistants, AI programming bots
Toxicity Detection | Moderation bots, content filtering AI
AI vs Human (Ground Truth) | Evaluation & benchmarking tools, AI-generated content validators
Reference (Citation) Link | Research assistants, citation tools, academic writing aids
SQL Generation Evaluation | Database query agents, SQL automation tools
Agent Function Calling Evaluation | Multi-step reasoning agents, API-calling AI, task automation bots
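
For example, to check the agent’s final answers for hallucinations instead of search relevance, you could swap in Phoenix’s hallucination template. This is a minimal sketch under the assumption that the template’s column names are input, reference, and output (print the template to confirm); the dataframe rows are toy values purely for illustration:

import pandas as pd
from phoenix.evals import (
    HALLUCINATION_PROMPT_RAILS_MAP,
    HALLUCINATION_PROMPT_TEMPLATE,
    OpenAIModel,
    llm_classify,
)

# Toy dataframe showing the expected columns; replace with your own spans.
hallucination_df = pd.DataFrame({
    "input": ["What does the VisitWebpageTool do?"],
    "reference": ["VisitWebpageTool fetches the content of a webpage for the agent."],
    "output": ["It books flights for the user."],
})

hallucination_results = llm_classify(
    dataframe=hallucination_df,
    model=OpenAIModel(model="gpt-4o"),
    template=HALLUCINATION_PROMPT_TEMPLATE,
    rails=list(HALLUCINATION_PROMPT_RAILS_MAP.values()),
    provide_explanation=True,
)
hallucination_results.head()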


