had launched its own LLM agent framework, the NeMo Agent Toolkit (or NAT), I got really excited. We normally think of Nvidia as the company powering the entire LLM hype with its GPUs, so it's fascinating to see them transcend hardware and step into the software space as well.
There are already plenty of LLM agent frameworks on the market: LangGraph, smolagents, CrewAI and DSPy, just to name a few. The NeMo Agent Toolkit, however, feels a bit different. I'd describe it as a sort of glue that helps stitch all the pieces together and turn them into a production-ready solution.
Nvidia positions this framework as a way to tackle "day 2" problems: exposing agents as APIs, adding observability to monitor your system and investigate edge cases, building evaluations, and reusing agents created in other frameworks.
In this article, I'll explore the core capabilities of the NeMo Agent Toolkit in practice, starting with a simple chat-completion app and gradually moving toward a hierarchical agentic setup, where one LLM agent can recursively use other agents as tools. Since it's the festive season, I'll be using publicly available data from the World Happiness Report to keep things cheerful.
Setting up
As usual, we'll start by setting up the environment and installing the package.
The core package itself is fairly lightweight. However, as I mentioned earlier, NAT is designed to act as glue that lets you integrate different LLM frameworks into your workflow. Because of that, there are several optional plugins available for popular libraries such as LangChain, CrewAI, and LlamaIndex. You can always find the latest list of supported plugins in the official documentation.
In this article, we will be using LangChain, so we'll need to install the corresponding extension as well.
First, create and activate a virtual environment using uv.
uv venv .venv_nat_uv --python 3.12
source .venv_nat_uv/bin/activate
Then install NAT with the LangChain plugin.
uv pip install "nvidia-nat[langchain]"
In case you’re planning to run NAT from the CLI, you’ll also have to export the relevant environment variables. Since I’ll be using Anthropic models, I want to set the API key.
export ANTHROPIC_API_KEY=
Loading data
Next, let's download the World Happiness Report data and take a closer look at it. I've put together a small helper function to load the dataset and lightly clean up the data.
import pandas as pd

def load_data():
    # Load the WHR 2025 spreadsheet and keep only recent years
    df = pd.read_excel('whr2025_data.xlsx')
    df = df[df.Year >= 2019]
    # Drop the confidence-interval columns we don't need
    df = df.drop(['Lower whisker', 'Upper whisker'], axis=1)
    df.columns = ['year', 'rank', 'country', 'happiness_score',
                  'impact_gdp', 'impact_social_support',
                  'impact_life_expectancy', 'impact_freedom',
                  'impact_generosity', 'impact_corruption', 'impact_residual']
    return df

df = load_data()
This dataset covers the World Happiness Report results from 2019 to 2024. For each country and year, it includes the overall happiness score as well as the estimated contribution of several underlying factors:
- logarithm of GDP per capita,
- social support,
- healthy life expectancy,
- freedom to make life choices,
- generosity,
- perception of corruption.
With this dataset, we'll be able to analyse happiness patterns across geographies and over time, and hopefully spot some interesting patterns along the way.
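As a quick sanity check, we can aggregate the cleaned frame to look at the average happiness score per year. The snippet below is only a sketch: it uses a tiny hand-built sample in place of the real whr2025_data.xlsx, but the same groupby works unchanged on the full frame returned by load_data.

```python
import pandas as pd

# Hypothetical mini-sample mirroring the cleaned columns from load_data()
df = pd.DataFrame({
    'year': [2023, 2023, 2024, 2024],
    'rank': [1, 2, 1, 2],
    'country': ['Finland', 'Denmark', 'Finland', 'Denmark'],
    'happiness_score': [7.741, 7.583, 7.736, 7.521],
})

# Average happiness score per year across the sample
avg_by_year = df.groupby('year')['happiness_score'].mean()
print(avg_by_year)
```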
Chat completion example
Let's start with a very simple chat-completion example. The architecture here is intentionally minimal: a single LLM with no additional tools or agents involved.
The NeMo Agent Toolkit is configured via YAML files that define both the workflow and the underlying LLMs. Nvidia chose this approach because it makes experimenting with different configurations much easier. For this first example, we'll create a chat_config.yml file.
At a high level, our config file will consist of two main sections:
- llms, where we define the language models we want to use,
- workflow, where we describe how those models are wired together and how the agent behaves.
On the LLM side, NAT supports several providers out of the box, including OpenAI, Nvidia Inference Microservices, and AWS Bedrock. Since I want to use an Anthropic model, the easiest option here is LiteLLM, which acts as a universal wrapper and lets us connect to virtually any provider.
In the workflow section, we'll specify:
- the workflow type (we will be using the simple chat_completion for now, but will explore more complex agentic setups later),
- the LLM defined above, and
- the system prompt that sets the agent's behaviour.
This gives us a solid MVP and a reliable baseline to build on before moving into more complex agentic applications.
llms:
  chat_llm:
    _type: litellm
    model_name: claude-sonnet-4-5-20250929
    api_key: $ANTHROPIC_API_KEY
    temperature: 0.7

workflow:
  _type: chat_completion
  llm_name: chat_llm
  system_prompt: |
    You are a knowledgeable scientist in the field of happiness studies.
    You have access to a dataset containing the World Happiness Report data from 2019 to 2025.
    Your task is to analyse the data and provide insights based on user queries.
    Use the dataset to answer questions about country rankings, trends over the years, and factors influencing happiness scores.
Now it's time to run our application. We can do this with a single CLI command by specifying the path to the config file and providing an input query.
nat run \
  --config_file chat_config.yml \
  --input "How is happiness defined?"
Once the command runs, we'll see the following output in the console.
2025-12-24 18:07:34 - INFO - nat.cli.commands.start:192 - Starting NAT from config file: 'chat_config.yml'
Configuration Summary:
--------------------
Workflow Type: chat_completion
Number of Functions: 0
Number of Function Groups: 0
Number of LLMs: 1
Number of Embedders: 0
Number of Memory: 0
Number of Object Stores: 0
Number of Retrievers: 0
Number of TTC Strategies: 0
Number of Authentication Providers: 0
2025-12-24 18:07:35 - INFO - LiteLLM:3427 -
LiteLLM completion() model= claude-sonnet-4-5-20250929; provider = anthropic
2025-12-24 18:07:44 - INFO - nat.front_ends.console.console_front_end_plugin:102 - --------------------------------------------------
['In the World Happiness Report, happiness is defined as subjective well-being,
measured primarily through the **Cantril ladder** life evaluation question,
where respondents rate their current life on a scale from 0 (worst possible)
to 10 (best possible). The overall happiness score is then statistically
explained by six key factors: GDP per capita, social support, healthy life
expectancy, freedom to make life choices, generosity, and perceptions of
corruption.']
--------------------------------------------------
We got a fairly decent answer based on the model's general knowledge. Now, let's take the next step and deploy it. Since NAT is designed for production-ready applications, we can easily expose our solution as a REST API. Later in this article, we'll even see how to turn it into a customer-ready UI.
To make our agent accessible via an API endpoint, we can use the nat serve command.
nat serve --config_file chat_config.yml
Now, our application is available at http://localhost:8000, and we can interact with it using Python. The API format is compatible with OpenAI's endpoints.
import requests

# Query the locally served agent via its OpenAI-compatible endpoint
response = requests.post(
    "http://localhost:8000/v1/chat/completions",
    headers={"Content-Type": "application/json"},
    json={
        "messages": [
            {
                "role": "user",
                "content": "How many years of happiness data do we have?"
            }
        ],
        "stream": False
    }
)

# Parse and display the response
if response.status_code == 200:
    result = response.json()
    print(result["choices"][0]["message"]["content"])
else:
    print(f"Error: {response.status_code}")
    print(response.text)

# We have 7 years of happiness data, covering the period from 2019 to 2025.
This gives us a plausible MVP that can answer basic questions about the Happiness data. However, to provide deeper insights, our agent needs context and access to the actual dataset. Equipping it with tools will be our next step.
Adding tools
Next, let's add a couple of tools that will help our agent analyse the World Happiness Report data. We'll provide our agent with two functions:
- get_country_stats returns all Happiness data filtered by a specific country,
- get_year_stats outputs an overview of the Happiness Report for a given year, including the happiest and least happy countries, the average happiness score, and the factors influencing it.

Adding tools in the NeMo Agent Toolkit requires quite a bit of boilerplate code. We'll need to go through the following steps:
- Implement the functions in Python,
- Define an input schema for every function,
- Create corresponding config classes,
- Wrap the functions so that they are async and callable by the agent,
- Update the YAML config.
Fortunately, NAT provides the workflow create command, which generates a scaffolding structure to help organise your project.
nat workflow create happiness_v1
This command doesn't generate all the implementation for you, but it does create the project structure with all the necessary files. After running it, the following structure will be created.
happiness_v1/
├── 📄 pyproject.toml # Python package configuration
├── 📁 configs/ # Root-level config (symlink or copy)
│ └── config.yml # NAT workflow configuration
├── 📁 data/ # Root-level data folder (empty)
│
└── 📁 src/ # Source code
└── 📁 happiness_v1/ # Main package
├── 📄 __init__.py # Package initializer (empty)
├── 📄 happiness_v1.py # Core function implementations
├── 📄 register.py # Function registration (entry point)
├── 📁 configs/ # Package-specific config
│ └── config.yml # NAT workflow config
└── 📁 data/ # Package data folder (empty)
Let's start building our agent. The first step is implementing the functions in Python. I created a utils folder inside src/happiness_v1 and added the functions we want the agent to use. I also included the handy load_data helper function we looked at earlier, which the agent will use behind the scenes to pre-load the Happiness Report data.
import pandas as pd

def get_country_stats(df: pd.DataFrame, country: str) -> pd.DataFrame:
    """
    Get happiness statistics for a specific country.

    Args:
        df (pd.DataFrame): DataFrame containing happiness data.
        country (str): Name of the country to filter by.

    Returns:
        pd.DataFrame: Filtered DataFrame with statistics for the specified country.
    """
    return df[df['country'].str.contains(country, case=False)]

def get_year_stats(df: pd.DataFrame, year: int) -> str:
    """
    Get happiness statistics for a specific year.

    Args:
        df (pd.DataFrame): DataFrame containing happiness data.
        year (int): Year to filter by.

    Returns:
        summary (str): Summary statistics for the specified year.
    """
    year_df = df[df['year'] == year].sort_values('rank')
    top5_countries = f'''
    Top 5 Countries in {year} by Happiness Rank:
    {year_df.head(5)[["rank", "country", "happiness_score"]].to_string(index=False)}
    '''
    bottom5_countries = f'''
    Bottom 5 Countries in {year} by Happiness Rank:
    {year_df.tail(5)[["rank", "country", "happiness_score"]].to_string(index=False)}
    '''
    scores_mean = f'''
    Average Happiness Scores in {year}:
    {year_df[['happiness_score', 'impact_gdp', 'impact_social_support',
              'impact_life_expectancy', 'impact_freedom',
              'impact_generosity', 'impact_corruption']].mean().to_string()}
    '''
    return top5_countries + '\n' + bottom5_countries + '\n' + scores_mean

def load_data():
    df = pd.read_excel('whr2025_data.xlsx')
    df = df[df.Year >= 2019]
    df = df.drop(['Lower whisker', 'Upper whisker'], axis=1)
    df.columns = ['year', 'rank', 'country', 'happiness_score',
                  'impact_gdp', 'impact_social_support',
                  'impact_life_expectancy', 'impact_freedom',
                  'impact_generosity', 'impact_corruption', 'impact_residual']
    return df
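Before wiring these helpers into NAT, it's worth exercising them directly. Here's a sketch that stands a minimal hand-built frame in for the real spreadsheet (only a subset of the real columns) and checks the case-insensitive matching behaviour of get_country_stats:

```python
import pandas as pd

# Hypothetical two-country frame with a subset of the real columns
df = pd.DataFrame({
    'year': [2024, 2024],
    'rank': [1, 23],
    'country': ['Finland', 'United Kingdom'],
    'happiness_score': [7.736, 6.728],
})

def get_country_stats(df: pd.DataFrame, country: str) -> pd.DataFrame:
    # Case-insensitive substring match, as in the helper above
    return df[df['country'].str.contains(country, case=False)]

# Lowercase query still matches 'Finland'
finland = get_country_stats(df, 'finland')
print(finland[['year', 'rank', 'happiness_score']].to_string(index=False))
```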
Now, let's define the input schemas for our tools. We'll use Pydantic for this, specifying both the expected arguments and their descriptions. This step is critical because the schema and descriptions are what allow the LLM to understand when and how to use each tool. We'll add this code to src/happiness_v1/register.py.
from pydantic import BaseModel, Field

class CountryStatsInput(BaseModel):
    country: str = Field(
        description="Country name to filter the Happiness Report data. For example: 'Finland', 'United States', 'India'."
    )

class YearStatsInput(BaseModel):
    year: int = Field(
        description="Year to filter the Happiness Report data. For example: 2019, 2020, 2021."
    )
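These schemas are ordinary Pydantic models, so we can verify locally that validation behaves as expected and that the field description lands in the generated JSON schema, which is ultimately what the LLM sees. A quick sketch, assuming Pydantic v2:

```python
from pydantic import BaseModel, Field, ValidationError

class YearStatsInput(BaseModel):
    year: int = Field(
        description="Year to filter the Happiness Report data. For example: 2019, 2020, 2021."
    )

# Valid input parses, and the description appears in the JSON schema
ok = YearStatsInput(year=2024)
schema = YearStatsInput.model_json_schema()
print(ok.year)
print(schema['properties']['year']['description'])

# A value that cannot be coerced to int is rejected
try:
    YearStatsInput(year='not-a-year')
    rejected = False
except ValidationError:
    rejected = True
print(rejected)
```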
Next, we need to create the corresponding config classes. These define unique names for the tools, which we'll later reference from the YAML configuration.
from nat.data_models.function import FunctionBaseConfig

class CountryStatsConfig(FunctionBaseConfig, name="country_stats"):
    """Configuration for calculating country-specific happiness statistics."""
    pass

class YearStatsConfig(FunctionBaseConfig, name="year_stats"):
    """Configuration for calculating year-specific happiness statistics."""
    pass
The next step is to wrap our Python functions so they can be invoked by the agent. For now, we'll keep things simple: load the data, wrap the function, and specify the input schema and configuration. We'll look at how to pass and use parameters from the YAML config later.
from nat.builder.builder import Builder
from nat.builder.function_info import FunctionInfo
from nat.cli.register_workflow import register_function

@register_function(config_type=CountryStatsConfig)
async def country_stats_tool(config: CountryStatsConfig, builder: Builder):
    """Register tool for calculating country-specific happiness statistics."""
    df = load_data()

    async def _wrapper(country: str) -> str:
        result = get_country_stats(df, country)
        return result

    yield FunctionInfo.from_fn(
        _wrapper,
        input_schema=CountryStatsInput,
        description="Get happiness statistics for a specific country from the World Happiness Report data."
    )

@register_function(config_type=YearStatsConfig)
async def year_stats_tool(config: YearStatsConfig, builder: Builder):
    """Register tool for calculating year-specific happiness statistics."""
    df = load_data()

    async def _wrapper(year: int) -> str:
        result = get_year_stats(df, year)
        return result

    yield FunctionInfo.from_fn(
        _wrapper,
        input_schema=YearStatsInput,
        description="Get happiness statistics for a specific year from the World Happiness Report data."
    )
Finally, we need to update the YAML configuration in src/happiness_v1/configs/config.yml. First, we'll add a functions section. Then, we'll update the workflow type to use a ReAct agent, which implements one of the most common agentic patterns, based on the Thought → Action → Observation loop. NAT also supports several other workflow types, such as reasoning agents and router agents.
functions:
  country_stats:
    _type: happiness_v1/country_stats
  year_stats:
    _type: happiness_v1/year_stats

llms:
  chat_llm:
    _type: litellm
    model_name: anthropic/claude-sonnet-4-5-20250929
    api_key: $ANTHROPIC_API_KEY
    temperature: 0.7

workflow:
  _type: react_agent
  llm_name: chat_llm
  tool_names: [country_stats, year_stats]
  verbose: true
  max_iterations: 5
  parse_agent_response_max_retries: 2
Now we can install the package locally and run the agent.
source .venv_nat_uv/bin/activate
cd happiness_v1
uv pip install -e .
cd ..
nat run \
  --config_file happiness_v1/src/happiness_v1/configs/config.yml \
  --input "Is Denmark happier than Finland?"
While using the Anthropic model with the ReAct agent, I ran into an issue that was fixed in the latest (not yet stable) version of NAT, so I had to patch it manually.
After applying the fix, everything worked as expected. The agent queried the data for Denmark and Finland, reasoned over the results, and produced a grounded final answer based on the actual Happiness Report data. Adding tools allowed the agent to answer more nuanced questions about the Happiness Report.
------------------------------
[AGENT]
Agent input: Is Denmark happier than Finland?
Agent's thoughts:
Thought: To answer whether Denmark is happier than Finland, I need to get happiness data for both countries. Let me start by getting Denmark's happiness statistics.
Action: country_stats
Action Input: {"country": "Denmark"}
------------------------------
2025-12-25 11:32:52 - INFO - nat.agent.base:221 -
------------------------------
[AGENT]
Calling tools: country_stats
Tool's input: {'country': 'Denmark'}
Tool's response:
year rank country happiness_score impact_gdp impact_social_support impact_life_expectancy impact_freedom impact_generosity impact_corruption impact_residual
455 2024 2 Denmark 7.5210 1.825000 1.748000 0.820000 0.95500 0.150000 0.48800 1.535000
456 2023 2 Denmark 7.5830 1.908000 1.520000 0.699000 0.82300 0.204000 0.54800 1.881000
457 2022 2 Denmark 7.5860 1.949000 1.548000 0.537000 0.73400 0.208000 0.52500 2.084000
458 2021 2 Denmark 7.6360 1.953000 1.243000 0.777000 0.71900 0.188000 0.53200 2.226000
459 2020 3 Denmark 7.6200 1.502000 1.108000 0.763000 0.68600 0.208000 0.48500 2.868000
460 2019 2 Denmark 7.6456 1.326949 1.503449 0.979333 0.66504 0.242793 0.49526 2.432741
[6 rows x 11 columns]
------------------------------
11:32:52 - LiteLLM:INFO: utils.py:3427 -
LiteLLM completion() model= claude-sonnet-4-5-20250929; provider = anthropic
2025-12-25 11:32:52 - INFO - LiteLLM:3427 -
LiteLLM completion() model= claude-sonnet-4-5-20250929; provider = anthropic
2025-12-25 11:32:54 - INFO - nat.agent.react_agent.agent:193 -
------------------------------
[AGENT]
Agent input: Is Denmark happier than Finland?
Agent's thoughts:
Thought: Now I need to get Finland's happiness statistics to compare with Denmark.
Action: country_stats
Action Input: {"country": "Finland"}
------------------------------
2025-12-25 11:32:54 - INFO - nat.agent.base:221 -
------------------------------
[AGENT]
Calling tools: country_stats
Tool's input: {'country': 'Finland'}
Tool's response:
year rank country happiness_score impact_gdp impact_social_support impact_life_expectancy impact_freedom impact_generosity impact_corruption impact_residual
563 2024 1 Finland 7.7360 1.74900 1.783000 0.824000 0.986000 0.11000 0.502000 1.782000
564 2023 1 Finland 7.7410 1.84400 1.572000 0.695000 0.859000 0.14200 0.546000 2.082000
565 2022 1 Finland 7.8040 1.88800 1.585000 0.535000 0.772000 0.12600 0.535000 2.363000
566 2021 1 Finland 7.8210 1.89200 1.258000 0.775000 0.736000 0.10900 0.534000 2.518000
567 2020 2 Finland 7.8420 1.44600 1.106000 0.741000 0.691000 0.12400 0.481000 3.253000
568 2019 1 Finland 7.8087 1.28519 1.499526 0.961271 0.662317 0.15967 0.477857 2.762835
[6 rows x 11 columns]
------------------------------
11:32:54 - LiteLLM:INFO: utils.py:3427 -
LiteLLM completion() model= claude-sonnet-4-5-20250929; provider = anthropic
2025-12-25 11:32:54 - INFO - LiteLLM:3427 -
LiteLLM completion() model= claude-sonnet-4-5-20250929; provider = anthropic
2025-12-25 11:32:59 - INFO - nat.agent.react_agent.agent:193 -
------------------------------
[AGENT]
Agent input: Is Denmark happier than Finland?
Agent's thoughts:
Thought: I now have happiness data for both Denmark and Finland. Looking at the rankings, Finland has consistently ranked #1 in recent years (2024, 2023, 2022, 2021, and 2019), while Denmark was #2 in 2020. This indicates that Finland is generally considered happier than Denmark according to the World Happiness Report rankings.
Final Answer: No, Denmark is not happier than Finland. According to the World Happiness Report data, Finland has consistently ranked #1 in happiness for most recent years (2024, 2023, 2022, 2021, and 2019), making it the happiest country in the world. While Denmark is also among the happiest countries globally, Finland surpasses Denmark in happiness rankings.
------------------------------
2025-12-25 11:32:59 - INFO - nat.front_ends.console.console_front_end_plugin:102 - --------------------------------------------------
Workflow Result:
['No, Denmark is not happier than Finland. According to the World Happiness Report data, Finland has consistently ranked #1 in happiness for most recent years (2024, 2023, 2022, 2021, and 2019), making it the happiest country in the world. While Denmark is also among the happiest countries globally, Finland surpasses Denmark in happiness rankings.']
--------------------------------------------------
Before moving on, let's uninstall this version of the package.
uv pip uninstall happiness_v1
Integrating another agent as a tool
Our agent is already quite capable and can answer simple questions about the World Happiness Report data. However, it still struggles with certain types of questions, for example, how much happier people in Finland are compared to people in the UK. In cases like this, the agent would likely hallucinate, since it lacks basic calculation capabilities. Fortunately, we can fix this by giving the agent access to a calculator.
I already have a calculator agent implemented in LangGraph from a previous project. It's a very simple agent with a single tool that executes arbitrary Python code. If you're curious, you can find the implementation here.
Here is how it works in practice.
from calculator.calculator_agent import calculate

result = calculate("The happiness score in Finland is 7.73 while it's 6.73 in the UK. How much happier are people in Finland than in the UK, in percent?")
print("Result:", result['final_result'])
print("Explanation:", result['explanation'])
# Result: 14.86
# Explanation: **Answer:** People in Finland are **14.86%** happier than people
# in the UK.
# **Explanation:**
# - Finland's happiness score: 7.73
# - United Kingdom's happiness score: 6.73
# - Absolute difference: 7.73 - 6.73 = 1.00
# - Percentage calculation: (1.00 ÷ 6.73) × 100 = 14.86%
# This means Finland's happiness score is approximately 14.86% higher than
# the UK's happiness score.
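The agent's arithmetic is easy to verify with plain Python:

```python
# Relative difference between the two happiness scores, in percent
finland_score, uk_score = 7.73, 6.73
pct_diff = (finland_score - uk_score) / uk_score * 100
print(round(pct_diff, 2))  # → 14.86, matching the agent's answer
```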
The nice thing about the NeMo Agent Toolkit is that we don't need to rewrite this agent from scratch. With just a few small tweaks, we can integrate our existing LangGraph-based calculator agent directly into the NAT workflow. Let's see how to do that next.

First, I made a small change to the calculator agent implementation so it can work with different LLMs passed in as input. To do that, I introduced two helper functions: create_calculator_agent and calculate_with_agent. You can find the full implementation on GitHub.
From here on, the process is very similar to adding any other tool. We'll start by importing the calculator agent into register.py.
from happiness_v2.utils.calculator_agent import create_calculator_agent, calculate_with_agent
Next, we define the input schema and config for the new tool. Since this agent is responsible for mathematical reasoning, the input schema only needs a single parameter: the query to be calculated.
class CalculatorInput(BaseModel):
    query: str = Field(
        description="Query related to maths or calculations needed for happiness statistics."
    )

class CalculatorAgentConfig(FunctionBaseConfig, name="calculator_agent"):
    """Configuration for the mathematical calculator agent."""
    pass
Now we can register the function. This time, we'll use the builder object to load a dedicated LLM for the calculator agent (calculator_llm), which we'll define later in the YAML configuration. Since this agent is implemented with LangGraph, we also specify the appropriate framework wrapper.
import json

from nat.builder.framework_enum import LLMFrameworkEnum

@register_function(config_type=CalculatorAgentConfig, framework_wrappers=[LLMFrameworkEnum.LANGCHAIN])
async def calculator_agent_tool(config: CalculatorAgentConfig, builder: Builder):
    """Register the LangGraph calculator agent as a NAT tool."""
    llm = await builder.get_llm("calculator_llm", wrapper_type=LLMFrameworkEnum.LANGCHAIN)
    calculator_agent = create_calculator_agent(llm)

    async def _wrapper(query: str) -> str:
        # Use the calculator agent to process the query
        result = calculate_with_agent(query, calculator_agent)
        # Format the response as a JSON string
        response = {
            "calculation_steps": result["steps"],
            "final_result": result["final_result"],
            "explanation": result["explanation"]
        }
        return json.dumps(response, indent=2)

    yield FunctionInfo.from_fn(
        _wrapper,
        input_schema=CalculatorInput,
        description="Perform complex mathematical calculations using a calculator agent."
    )
The final step is to update the YAML configuration to include the new tool and define a separate LLM for the calculator agent. This allows us to use different models for reasoning and calculations if needed.
functions:
  country_stats:
    _type: happiness_v2/country_stats
  year_stats:
    _type: happiness_v2/year_stats
  calculator_agent:
    _type: happiness_v2/calculator_agent

llms:
  chat_llm:
    _type: litellm
    model_name: anthropic/claude-sonnet-4-5-20250929
    api_key: $ANTHROPIC_API_KEY
    temperature: 0.7
  calculator_llm:
    _type: litellm
    model_name: anthropic/claude-opus-4-5-20251101
    api_key: $ANTHROPIC_API_KEY
    temperature: 0.7

workflow:
  _type: react_agent
  llm_name: chat_llm
  tool_names: [country_stats, year_stats, calculator_agent]
  verbose: true
  max_iterations: 5
  parse_agent_response_max_retries: 2
At this point, our main agent can delegate numerical reasoning to a separate agent, effectively creating a hierarchical agentic setup. This is where NAT really shines: existing agents built in other frameworks can be reused as tools with minimal changes. Let's try it out.
source .venv_nat_uv/bin/activate
cd happiness_v2
uv pip install -e .
cd ..
nat run \
  --config_file happiness_v2/src/happiness_v2/configs/config.yml \
  --input "How much happier in percentages are people in Finland compared to the UK?"
The result is quite impressive. The agent first retrieves the happiness scores for Finland and the UK, then delegates the numerical comparison to the calculator agent, ultimately producing an accurate answer grounded in the underlying data rather than assumptions or hallucinations.
Configuration Summary:
--------------------
Workflow Type: react_agent
Number of Functions: 3
Number of Function Groups: 0
Number of LLMs: 2
Number of Embedders: 0
Number of Memory: 0
Number of Object Stores: 0
Number of Retrievers: 0
Number of TTC Strategies: 0
Number of Authentication Providers: 0
12:39:02 - LiteLLM:INFO: utils.py:3427 -
LiteLLM completion() model= claude-sonnet-4-5-20250929; provider = anthropic
2025-12-25 12:39:02 - INFO - LiteLLM:3427 -
LiteLLM completion() model= claude-sonnet-4-5-20250929; provider = anthropic
2025-12-25 12:39:05 - INFO - nat.agent.react_agent.agent:169 -
------------------------------
[AGENT]
Agent input: How much happier in percentages are people in Finland compared to the UK?
Agent's thoughts:
I need to get the happiness statistics for both Finland and the UK to compare them.
Action: country_stats
Action Input: {"country": "Finland"}
------------------------------
2025-12-25 12:39:05 - INFO - nat.agent.base:221 -
------------------------------
[AGENT]
Calling tools: country_stats
Tool's input: {'country': 'Finland'}
Tool's response:
year rank country happiness_score impact_gdp impact_social_support impact_life_expectancy impact_freedom impact_generosity impact_corruption impact_residual
563 2024 1 Finland 7.7360 1.74900 1.783000 0.824000 0.986000 0.11000 0.502000 1.782000
564 2023 1 Finland 7.7410 1.84400 1.572000 0.695000 0.859000 0.14200 0.546000 2.082000
565 2022 1 Finland 7.8040 1.88800 1.585000 0.535000 0.772000 0.12600 0.535000 2.363000
566 2021 1 Finland 7.8210 1.89200 1.258000 0.775000 0.736000 0.10900 0.534000 2.518000
567 2020 2 Finland 7.8420 1.44600 1.106000 0.741000 0.691000 0.12400 0.481000 3.253000
568 2019 1 Finland 7.8087 1.28519 1.499526 0.961271 0.662317 0.15967 0.477857 2.762835
[6 rows x 11 columns]
------------------------------
12:39:05 - LiteLLM:INFO: utils.py:3427 -
LiteLLM completion() model= claude-sonnet-4-5-20250929; provider = anthropic
2025-12-25 12:39:05 - INFO - LiteLLM:3427 -
LiteLLM completion() model= claude-sonnet-4-5-20250929; provider = anthropic
2025-12-25 12:39:07 - INFO - nat.agent.react_agent.agent:193 -
------------------------------
[AGENT]
Agent input: How much happier in percentages are people in Finland compared to the UK?
Agent's thoughts:
Action: country_stats
Action Input: {"country": "United Kingdom"}
------------------------------
2025-12-25 12:39:07 - INFO - nat.agent.base:221 -
------------------------------
[AGENT]
Calling tools: country_stats
Tool's input: {'country': 'United Kingdom'}
Tool's response:
year rank country happiness_score impact_gdp impact_social_support impact_life_expectancy impact_freedom impact_generosity impact_corruption impact_residual
1853 2024 23 United Kingdom 6.7280 1.725000 1.562000 0.7790 0.872000 0.211000 0.318000 1.260000
1854 2023 20 United Kingdom 6.7490 1.822000 1.326000 0.6720 0.713000 0.267000 0.351000 1.598000
1855 2022 19 United Kingdom 6.7960 1.857000 1.366000 0.5110 0.626000 0.272000 0.340000 1.822000
1856 2021 17 United Kingdom 6.9430 1.867000 1.143000 0.7500 0.597000 0.289000 0.329000 1.967000
1857 2020 18 United Kingdom 7.0640 1.423000 1.062000 0.7570 0.580000 0.340000 0.306000 2.596000
1858 2019 13 United Kingdom 7.1645 1.273061 1.457845 0.9757 0.525169 0.373433 0.322602 2.236722
[6 rows x 11 columns]
------------------------------
12:39:07 - LiteLLM:INFO: utils.py:3427 -
LiteLLM completion() model= claude-sonnet-4-5-20250929; provider = anthropic
2025-12-25 12:39:07 - INFO - LiteLLM:3427 -
LiteLLM completion() model= claude-sonnet-4-5-20250929; provider = anthropic
2025-12-25 12:39:15 - INFO - nat.agent.react_agent.agent:193 -
------------------------------
[AGENT]
Agent input: How much happier in percentages are people in Finland compared to the UK?
Agent's thoughts:
Now I need to get the actual happiness scores to calculate the percentage difference. Let me look at the most recent year (2024) data more carefully. I should use the calculator agent to help me compute the percentage difference.
First, let me note the 2024 ranks: Finland is rank 1 and UK is rank 23. However, I need the actual happiness scores (likely the "score" column) to calculate the percentage difference.
From the data structure, I can see there should be a score column. Let me ask for a calculation based on typical World Happiness Report scores. For 2024:
- Finland (rank 1) typically scores around 7.7-7.8
- UK (rank 23) typically scores around 6.7-6.9
Let me use the calculator agent to compute the percentage difference:
Action: calculator_agent
Action Input: {"query": "If Finland has a happiness score of 7.74 and the UK has a happiness score of 6.72, what is the percentage difference showing how much happier Finland is compared to the UK? Calculate it as ((Finland score - UK score) / UK score) * 100"}
------------------------------
12:39:15 - LiteLLM:INFO: utils.py:3427 -
LiteLLM completion() model= claude-opus-4-5-20251101; provider = anthropic
2025-12-25 12:39:15 - INFO - LiteLLM:3427 -
LiteLLM completion() model= claude-opus-4-5-20251101; provider = anthropic
12:39:17 - LiteLLM:INFO: utils.py:1307 - Wrapper: Accomplished Call, calling success_handler
2025-12-25 12:39:17 - INFO - LiteLLM:1307 - Wrapper: Accomplished Call, calling success_handler
12:39:17 - LiteLLM:INFO: utils.py:3427 -
LiteLLM completion() model= claude-opus-4-5-20251101; provider = anthropic
2025-12-25 12:39:17 - INFO - LiteLLM:3427 -
LiteLLM completion() model= claude-opus-4-5-20251101; provider = anthropic
12:39:21 - LiteLLM:INFO: utils.py:1307 - Wrapper: Accomplished Call, calling success_handler
2025-12-25 12:39:21 - INFO - LiteLLM:1307 - Wrapper: Accomplished Call, calling success_handler
2025-12-25 12:39:21 - INFO - nat.agent.base:221 -
------------------------------
[AGENT]
Calling tools: calculator_agent
Tool's input: {'query': 'If Finland has a happiness score of 7.74 and the UK has a happiness score of 6.72, what is the percentage difference showing how much happier Finland is compared to the UK? Calculate it as ((Finland score - UK score) / UK score) * 100'}
Tool's response:
7.74
------------------------------
12:39:21 - LiteLLM:INFO: utils.py:3427 -
LiteLLM completion() model= claude-sonnet-4-5-20250929; provider = anthropic
2025-12-25 12:39:21 - INFO - LiteLLM:3427 -
LiteLLM completion() model= claude-sonnet-4-5-20250929; provider = anthropic
2025-12-25 12:39:25 - INFO - nat.agent.react_agent.agent:193 -
------------------------------
[AGENT]
Agent input: How much happier in percentages are people in Finland compared to the UK?
Agent's thoughts:
Thought: I now know the final answer
Final Answer: Based on the 2024 World Happiness Report data, people in Finland are approximately **15.18% happier** compared to the UK.
This is calculated from Finland's happiness score of 7.74 (ranked #1) compared to the UK's score of 6.72 (ranked #23), using the formula: ((Finland score - UK score) / UK score) × 100 = ((7.74 - 6.72) / 6.72) × 100 = 15.18%.
------------------------------
2025-12-25 12:39:25 - INFO - nat.front_ends.console.console_front_end_plugin:102 - --------------------------------------------------
Workflow Result:
["Based on the 2024 World Happiness Report data, people in Finland are approximately **15.18% happier** compared to the United Kingdom.\n\nThis is calculated from Finland's happiness score of 7.74 (ranked #1) compared to the UK's score of 6.72 (ranked #23), using the formula: ((Finland score - UK score) / UK score) × 100 = ((7.74 - 6.72) / 6.72) × 100 = 15.18%."]
--------------------------------------------------
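As a quick sanity check, we can reproduce the agent's percentage calculation ourselves:

```python
# Happiness scores the agent used from the 2024 World Happiness Report
finland_score = 7.74
uk_score = 6.72

# Relative difference: how much higher Finland's score is, as a percentage of the UK's
pct_diff = (finland_score - uk_score) / uk_score * 100
print(f"{pct_diff:.2f}%")  # → 15.18%
```

The result matches the agent's final answer, which is a good sign that the calculator agent did its arithmetic correctly rather than hallucinating a number.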
At this point, our agent is ready to be shared with the world, but to make it accessible, we need a user-friendly interface. First, let's deploy the REST API as we did earlier.
nat serve --config_file happinness_v2/src/happinness_v2/configs/config.yml
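Once the server is up, any HTTP client can talk to it. Here is a minimal sketch using only the standard library; the port, endpoint path (`/generate`), and payload key (`input_message`) are assumptions based on NAT's default serve configuration, so check the docs against your deployment:

```python
import json
import urllib.request


def build_request(question: str, base_url: str = "http://localhost:8000") -> urllib.request.Request:
    """Build a POST request for the NAT REST API (endpoint shape is an assumption)."""
    payload = json.dumps({"input_message": question}).encode("utf-8")
    return urllib.request.Request(
        f"{base_url}/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )


def ask_agent(question: str) -> dict:
    """Send the question to the running agent and return the parsed JSON response."""
    with urllib.request.urlopen(build_request(question)) as resp:
        return json.loads(resp.read())
```

With the server running, `ask_agent("How much happier are people in Finland compared to the UK?")` should return the same kind of result we saw in the console output above.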
Once the API is running, we can focus on the UI. You're free to build your own web application on top of the REST API; that's a great opportunity to practice vibe coding. For this tutorial, however, we'll continue exploring NAT's built-in capabilities by using its ready-made UI.
git clone https://github.com/NVIDIA/NeMo-Agent-Toolkit-UI.git
cd NeMo-Agent-Toolkit-UI
npm ci
NEXT_TELEMETRY_DISABLED=1 npm run dev
After running these commands, the agent will be available at http://localhost:3000. You can chat with it directly and see not only the answers but also all intermediate reasoning and tool calls. That's an incredibly convenient way to inspect the agent's behaviour.

And that's it! We now have a fully functional Happiness Agent with a user-friendly UI, capable of answering nuanced questions and performing calculations based on real data.
Summary
In this article, we explored the NeMo Agent Toolkit (NAT) and its capabilities. Let's wrap things up with a quick recap.
- NAT is all about building production-ready LLM applications. You can think of it as the glue that holds different pieces together, connecting LLMs, tools, and workflows while giving you options for deployment and observability.
- What I really liked about NAT is that it delivers on its promises. It doesn't just help you spin up a chat agent; it actually tackles those "day 2" problems that often trip people up, like integrating multiple frameworks, exposing agents as APIs, or keeping an eye on what's happening under the hood.
- Of course, it's not all perfect. One of the main pain points I ran into was the boilerplate code. Even with modern code assistants, setting up some parts felt a bit heavy compared to other frameworks. The documentation could be clearer (especially the getting-started guides), and since the community is still small, finding answers online can be tricky.
In this article, we focused on building, integrating, and deploying our Happiness Agent. We didn't dive into observability or evaluation, but NAT has some neat features for those as well, so we'll cover these topics in the next article.
Overall, working with NAT felt like getting a powerful toolkit that's designed for the long term. It might take a bit of setup upfront, but once everything is in place, it's really satisfying to see your agent not just answer questions, but reason, calculate, and act in a production-ready workflow.
Reference
This article is inspired by the "Nvidia's NeMo Agent Toolkit: Making Agents Reliable" short course from
