
In a new paper that studies tool use in large language model (LLM) agents, researchers at Google and UC Santa Barbara have developed a framework that allows agents to make more efficient use of tool and compute budgets. The researchers introduce two new techniques: a straightforward "Budget Tracker" and a more comprehensive framework called "Budget Aware Test-time Scaling." These techniques make agents explicitly aware of their remaining reasoning and tool-use allowance.
As AI agents depend on tool calls to work in the real world, test-time scaling has become less about smarter models and more about controlling cost and latency.
For enterprise leaders and developers, budget-aware scaling techniques offer a practical path to deploying effective AI agents without facing unpredictable costs or diminishing returns on compute spend.
The challenge of scaling tool use
Traditional test-time scaling focuses on letting models "think" longer. However, for agentic tasks like web browsing, the number of tool calls directly determines the depth and breadth of exploration.
This introduces significant operational overhead for businesses. "Tool calls such as webpage browsing leads to more token consumption, increases the context length and introduces additional time latency," Zifeng Wang and Tengxiao Liu, co-authors of the paper, told VentureBeat. "Tool calls themselves introduce additional API costs."
The researchers found that simply granting agents more test-time resources doesn't guarantee better performance. "In a deep research task, if the agent has no sense of budget, it often goes down blindly," Wang and Liu explained. "It finds one somewhat related lead, then spends 10 or 20 tool calls digging into it, only to realize that the whole path was a dead end."
Optimizing resources with Budget Tracker
To evaluate how they can optimize tool-use budgets, the researchers first tried a lightweight approach called "Budget Tracker." This module acts as a plug-in that provides the agent with a continuous signal of resource availability, enabling budget-aware tool use.
The team hypothesized that "providing explicit budget signals enables the model to internalize resource constraints and adapt its strategy without requiring additional training."
Budget Tracker operates purely at the prompt level, which makes it easy to implement. (The paper provides full details on the prompts used for Budget Tracker.)
In Google's implementation, the tracker provides a brief policy guideline describing the budget regimes and corresponding recommendations for using tools. At each step of the response process, Budget Tracker makes the agent explicitly aware of its resource consumption and remaining budget, enabling it to condition subsequent reasoning steps on the updated resource state.
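The paper's prompts do the heavy lifting here, but the mechanics are easy to picture. The Python sketch below is an illustration, not Google's code: a small helper that counts tool calls and injects a budget summary, with a rough "regime" label, into the agent's context at every step.

```python
# A minimal sketch of a prompt-level budget tracker (hypothetical illustration,
# not the authors' implementation). It counts tool calls and emits a status
# line that can be appended to the agent's context at each reasoning step.

class BudgetTracker:
    def __init__(self, max_tool_calls: int):
        self.max_tool_calls = max_tool_calls
        self.used = 0

    def record_call(self, tool_name: str) -> None:
        """Register one tool invocation (e.g. 'search' or 'browse')."""
        self.used += 1

    def status_prompt(self) -> str:
        """Budget signal injected into the agent's context each step."""
        remaining = self.max_tool_calls - self.used
        # Regime thresholds are illustrative; the paper's guidelines may differ.
        regime = "ample" if remaining > 10 else "tight" if remaining > 3 else "critical"
        return (
            f"[Budget] {self.used}/{self.max_tool_calls} tool calls used, "
            f"{remaining} remaining (regime: {regime}). "
            "Prioritize high-value queries and avoid redundant browsing."
        )


tracker = BudgetTracker(max_tool_calls=20)
tracker.record_call("search")
print(tracker.status_prompt())
```

Because the signal lives entirely in the prompt, a tracker like this can be bolted onto an existing agent loop without retraining or touching the model's weights.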
To test this, the researchers experimented with two paradigms: sequential scaling, where the model iteratively refines its output, and parallel scaling, where multiple independent runs are conducted and aggregated. They ran experiments on search agents equipped with search and browse tools following a ReAct-style loop. ReAct (Reasoning + Acting) is a popular method where the model alternates between internal thinking and external actions. To trace a true cost-performance scaling trend, they developed a unified cost metric that jointly accounts for the costs of both internal token consumption and external tool interactions.
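The paper defines its own cost accounting; the exact prices and weights are not reproduced here, but a unified metric of this kind reduces to a weighted sum over token usage and tool calls. The snippet below is a hypothetical illustration with made-up prices.

```python
# Illustrative unified cost metric (assumed form with hypothetical prices;
# the paper's exact weights differ). It folds internal token consumption and
# external tool interactions into a single dollar figure.

def unified_cost(
    input_tokens: int,
    output_tokens: int,
    search_calls: int,
    browse_calls: int,
    price_per_1k_input: float = 0.00125,   # hypothetical $ per 1k input tokens
    price_per_1k_output: float = 0.01,     # hypothetical $ per 1k output tokens
    price_per_search: float = 0.005,       # hypothetical $ per search call
    price_per_browse: float = 0.002,       # hypothetical $ per browse call
) -> float:
    token_cost = (input_tokens / 1000) * price_per_1k_input \
               + (output_tokens / 1000) * price_per_1k_output
    tool_cost = search_calls * price_per_search + browse_calls * price_per_browse
    return token_cost + tool_cost


# Example: a run with 120k input tokens, 8k output tokens, 15 searches, 30 browses.
print(f"${unified_cost(120_000, 8_000, search_calls=15, browse_calls=30):.3f}")
```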
They tested Budget Tracker on three information-seeking QA datasets requiring external search, including BrowseComp and HLE-Search, using models such as Gemini 2.5 Pro, Gemini 2.5 Flash, and Claude Sonnet 4. The experiments show that this simple plug-in improves performance across various budget constraints.
"Adding Budget Tracker achieves comparable accuracy using 40.4% fewer search calls, 19.9% fewer browse calls, and reducing overall cost … by 31.3%," the authors told VentureBeat. Finally, Budget Tracker continued to scale because the budget increased, whereas plain ReAct plateaued after a certain threshold.
BATS: A comprehensive framework for budget-aware scaling
To further improve tool-use resource optimization, the researchers introduced Budget Aware Test-time Scaling (BATS), a framework designed to maximize agent performance under any given budget. BATS maintains a continuous signal of remaining resources and uses this information to dynamically adapt the agent's behavior as it formulates its response.
BATS uses multiple modules to orchestrate the agent's actions. A planning module adjusts stepwise effort to match the current budget, while a verification module decides whether to "dig deeper" into a promising lead or "pivot" to alternative paths based on resource availability.
Given an information-seeking query and a tool-call budget, BATS begins by using the planning module to formulate a structured action plan and decide which tools to invoke. When tools are invoked, their responses are appended to the reasoning sequence to provide the context with new evidence. When the agent proposes a candidate answer, the verification module verifies it and decides whether to continue the current sequence or initiate a new attempt with the remaining budget.
The iterative process ends when budgeted resources are exhausted, at which point an LLM-as-a-judge selects the best answer across all verified answers. Throughout execution, Budget Tracker continuously updates both resource usage and remaining budget at every iteration.
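Put together, the loop reads roughly as follows. The Python sketch below is a paraphrase of the paper's description rather than the authors' implementation; the planner, tool executor, verifier, and judge are passed in as callables standing in for LLM-backed modules.

```python
from typing import Callable, List, Optional, Tuple

# High-level sketch of a BATS-style control loop (hypothetical structure
# inferred from the paper's description, not the authors' code). The LLM-backed
# pieces -- planner, tool executor, verifier, judge -- are supplied as callables
# so the loop itself stays model-agnostic.

def run_bats(
    query: str,
    budget: int,
    plan: Callable[[List[str], str], Tuple[Optional[str], Optional[str]]],
    call_tool: Callable[[str], str],
    verify: Callable[[str, List[str]], str],   # returns "continue" or "pivot"
    judge_best: Callable[[List[str]], str],    # LLM-as-a-judge over candidates
) -> str:
    used = 0
    context: List[str] = [query]
    candidates: List[str] = []

    for _ in range(4 * budget):  # safety cap so the sketch cannot loop forever
        if used >= budget:
            break
        signal = f"[Budget] {used}/{budget} tool calls used, {budget - used} remaining."

        # Planning module: pick the next tool call and/or propose an answer,
        # conditioning on the current budget signal.
        tool_request, candidate = plan(context, signal)

        if tool_request is not None:
            # Append the tool response to the reasoning sequence as new evidence.
            context.append(call_tool(tool_request))
            used += 1

        if candidate is not None:
            candidates.append(candidate)
            # Verification module: keep digging along this thread, or pivot
            # and spend the remaining budget on a fresh attempt.
            if verify(candidate, context) == "pivot":
                context = [query]

    # Budget exhausted: the LLM-as-a-judge selects the best verified answer.
    return judge_best(candidates)
```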
The researchers tested BATS on the BrowseComp, BrowseComp-ZH, and HLE-Search benchmarks against baselines including standard ReAct and various training-based agents. Their experiments show that BATS achieves better performance while using fewer tool calls and incurring lower overall cost than competing methods. Using Gemini 2.5 Pro as the backbone, BATS achieved 24.6% accuracy on BrowseComp compared with 12.6% for standard ReAct, and 27.0% on HLE-Search compared with 20.5% for ReAct.
BATS not only improves effectiveness under budget constraints but also yields better cost-performance trade-offs. For instance, on the BrowseComp dataset, BATS achieved higher accuracy at a cost of roughly 23 cents, compared with a parallel scaling baseline that required over 50 cents to achieve a similar result.
According to the authors, this efficiency makes previously expensive workflows viable. "This unlocks a variety of long-horizon, data-intensive enterprise applications… such as complex codebase maintenance, due-diligence investigations, competitive landscape research, compliance audits, and multi-step document analysis," they said.
As enterprises look to deploy agents that manage their own resources, the ability to balance accuracy with cost will become a critical design requirement.
"We consider the connection between reasoning and economics will change into inseparable," Wang and Liu said. "In the long run, [models] must reason about value."
