
Zoom Video Communications, the company best known for keeping remote employees connected during the pandemic, announced last week that it had achieved the highest score ever recorded on one of artificial intelligence's most demanding tests, a claim that sent ripples of surprise, skepticism, and genuine curiosity through the technology industry.
The San Jose-based company said its AI system scored 48.1 percent on Humanity's Last Exam, a benchmark designed by subject-matter experts worldwide to stump even the most advanced AI models. That result edges out Google's Gemini 3 Pro, which held the previous record at 45.8 percent.
"Zoom has achieved a new state-of-the-art result on the difficult Humanity's Last Exam full-set benchmark, scoring 48.1%, which represents a considerable 2.3% improvement over the previous SOTA result," wrote Xuedong Huang, Zoom's chief technology officer, in a blog post.
The announcement raises a provocative question that has consumed AI watchers for days: How did a video conferencing company with no public history of training large language models suddenly vault past Google, OpenAI, and Anthropic on a benchmark built to measure the frontiers of machine intelligence?
The answer reveals as much about where AI is headed as it does about Zoom's own technical ambitions. And depending on whom you ask, it is either an ingenious demonstration of practical engineering or a hollow claim that appropriates credit for others' work.
How Zoom built an AI traffic controller instead of training its own model
Zoom did not train its own large language model. Instead, the company developed what it calls a "federated AI approach": a system that routes queries to multiple existing models from OpenAI, Google, and Anthropic, then uses proprietary software to select, combine, and refine their outputs.
At the heart of this system sits what Zoom calls its "Z-scorer," a mechanism that evaluates responses from different models and chooses the best one for any given task. The company pairs this with what it describes as an "explore-verify-federate strategy," an agentic workflow that balances exploratory reasoning with verification across multiple AI systems.
"Our federated approach combines Zoom's own small language models with advanced open-source and closed-source models," Huang wrote. The framework "orchestrates diverse models to generate, challenge, and refine reasoning through dialectical collaboration."
In simpler terms: Zoom built a sophisticated traffic controller for AI, not the AI itself.
This distinction matters enormously in an industry where bragging rights, and billions in valuation, often hinge on who can claim the most capable model. The leading AI laboratories spend hundreds of millions of dollars training frontier systems on vast computing clusters. Zoom's achievement, by contrast, appears to rest on clever integration of those existing systems.
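Zoom has not published the internals of its Z-scorer, but the general idea of collecting candidate responses, scoring them, and selecting a winner can be sketched in a few lines. Everything below, from the toy "models" to the agreement-based scoring heuristic, is invented for illustration and does not reflect Zoom's actual implementation:

```python
# Toy illustration of model federation with a response scorer.
# The three "models" stand in for API calls to different providers;
# the scoring heuristic (inter-model agreement minus a hedging
# penalty, including trivial self-agreement) is purely illustrative.

def model_a(prompt: str) -> str:
    return "Paris"

def model_b(prompt: str) -> str:
    return "I think it might be Paris, but I'm not sure."

def model_c(prompt: str) -> str:
    return "Lyon"

def score(response: str, all_responses: list[str]) -> float:
    """Reward answers that other responses mention; penalize hedging."""
    first_word = response.split()[0].strip(".,")
    agreement = sum(first_word in other for other in all_responses)
    hedge_penalty = 0.5 if "not sure" in response else 0.0
    return agreement - hedge_penalty

def federate(prompt: str, models) -> str:
    """Query every model, then select the highest-scoring response."""
    responses = [m(prompt) for m in models]
    return max(responses, key=lambda r: score(r, responses))

answer = federate("What is the capital of France?", [model_a, model_b, model_c])
print(answer)  # "Paris": two of three responses mention it, and it doesn't hedge
```

A production system would replace the stub models with real API calls and the heuristic with a learned scorer, but the selection loop itself is this simple.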
Why AI researchers are divided over what counts as real innovation
The response from the AI community was swift and sharply divided.
Max Rumpf, an AI engineer who says he has trained state-of-the-art language models, posted a pointed critique on social media. "Zoom strung together API calls to Gemini, GPT, Claude et al. and barely improved on a benchmark that delivers no value for their customers," he wrote. "They then claim SOTA."
Rumpf did not dismiss the technical approach itself. Using multiple models for different tasks, he noted, is "actually quite smart and most applications should do this." He pointed to Sierra, an AI customer service company, as an example of this multi-model strategy executed well.
His objection was more specific: "They didn't train the model, but obfuscate this fact in the tweet. The injustice of taking credit for the work of others sits deeply with people."
But other observers saw the achievement differently. Hongcheng Zhu, a developer, offered a more measured assessment: "To top an AI eval, you'll most likely need model federation, like what Zoom did. An analogy is that every Kaggle competitor knows you have to ensemble models to win a competition."
The comparison to Kaggle, the competitive data science platform where combining multiple models is standard practice among winning teams, reframes Zoom's approach as industry best practice rather than sleight of hand. Academic research has long established that ensemble methods routinely outperform individual models.
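The statistical intuition behind ensembling is easy to verify. The simulation below is a generic illustration, not anything from Zoom's system: three independent classifiers that are each right 70 percent of the time reach roughly 78 percent accuracy under majority voting, matching the closed-form value 3(0.7)²(0.3) + (0.7)³ = 0.784.

```python
# Why ensembling helps: majority-vote three independent classifiers
# that are each correct with probability 0.7, and measure accuracy.
import random

random.seed(0)
TRIALS = 100_000
single_correct = 0
ensemble_correct = 0

for _ in range(TRIALS):
    # Each entry is True when that classifier got the answer right.
    votes = [random.random() < 0.7 for _ in range(3)]
    single_correct += votes[0]            # accuracy of one classifier
    ensemble_correct += sum(votes) >= 2   # accuracy of majority vote

print(single_correct / TRIALS)    # ~0.70
print(ensemble_correct / TRIALS)  # ~0.784
```

The gain depends on the errors being independent; ensembling three copies of the same model buys nothing, which is one argument for federating across different providers.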
Still, the debate exposed a fault line in how the industry understands progress. Ryan Pream, founder of Exoria AI, was dismissive: "Zoom are just making a harness around another LLM and reporting that. It's just noise." Another commenter captured the sheer unexpectedness of the news: "That the video conferencing app ZOOM developed a SOTA model that achieved 48% HLE was not on my bingo card."
Perhaps the most pointed critique concerned priorities. Rumpf argued that Zoom could have directed its resources toward problems its customers actually face. "Retrieval over call transcripts is not 'solved' by SOTA LLMs," he wrote. "I figure Zoom's users would care about this far more than HLE."
The Microsoft veteran betting his reputation on a different kind of AI
If Zoom's benchmark result appeared to come from nowhere, its chief technology officer did not.
Xuedong Huang joined Zoom from Microsoft, where he spent decades building the company's AI capabilities. He founded Microsoft's speech technology group in 1993 and led teams that achieved what the company described as human parity in speech recognition, machine translation, natural language understanding, and computer vision.
Huang holds a Ph.D. in electrical engineering from the University of Edinburgh. He is an elected member of the National Academy of Engineering and the American Academy of Arts and Sciences, as well as a fellow of both the IEEE and the ACM. His credentials place him among the most accomplished AI executives in the industry.
His presence at Zoom signals that the company's AI ambitions are serious, even if its methods differ from those of the research laboratories that dominate headlines. In his tweet celebrating the benchmark result, Huang framed the achievement as validation of Zoom's strategy: "We have unlocked stronger capabilities in exploration, reasoning, and multi-model collaboration, surpassing the performance limits of any single model."
That final clause, "surpassing the performance limits of any single model," may be the most important. Huang is not claiming Zoom built a better model. He is claiming Zoom built a better system for using models.
Inside the test designed to stump the world's smartest machines
The benchmark at the center of this controversy, Humanity's Last Exam, was designed to be exceptionally difficult. Unlike earlier tests that AI systems learned to game through pattern matching, HLE presents problems that require real understanding, multi-step reasoning, and the synthesis of knowledge across complex domains.
The exam draws on questions from experts around the world, spanning fields from advanced mathematics to philosophy to specialized scientific knowledge. A score of 48.1 percent might sound unimpressive to anyone accustomed to high school grading curves, but in the context of HLE, it represents the current ceiling of machine performance.
"This benchmark was developed by subject-matter experts globally and has become an essential metric for measuring AI's progress toward human-level performance on difficult intellectual tasks," Zoom's announcement noted.
The company's improvement of 2.3 percentage points over Google's previous best might appear modest in isolation. But in competitive benchmarking, where gains often come in fractions of a percent, such a jump commands attention.
What Zoom's approach reveals about the future of enterprise AI
Zoom's approach carries implications that extend well beyond benchmark leaderboards. The company is signaling a vision for enterprise AI that differs fundamentally from the model-centric strategies pursued by OpenAI, Anthropic, and Google.
Rather than betting everything on building the single most capable model, Zoom is positioning itself as an orchestration layer: a company that can integrate the best capabilities from multiple providers and deliver them through products that businesses already use every day.
This strategy hedges against a critical uncertainty in the AI market: no one knows which model will be best next month, let alone next year. By building infrastructure that can swap between providers, Zoom avoids vendor lock-in while theoretically offering customers the best available AI for any given task.
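In code, that hedge amounts to programming against an interface rather than a vendor SDK. The sketch below is a generic illustration of the pattern; the class and method names are invented and do not reflect Zoom's actual implementation or any provider's real API:

```python
# Provider-agnostic interface: application code depends only on
# ChatProvider, so the backing model can be swapped without changes.
from abc import ABC, abstractmethod

class ChatProvider(ABC):
    @abstractmethod
    def complete(self, prompt: str) -> str: ...

class ProviderA(ChatProvider):
    # Stand-in for one vendor's API; a real class would call its SDK.
    def complete(self, prompt: str) -> str:
        return f"A says: {prompt}"

class ProviderB(ChatProvider):
    # A second vendor, interchangeable with the first.
    def complete(self, prompt: str) -> str:
        return f"B says: {prompt}"

def answer(prompt: str, provider: ChatProvider) -> str:
    # Callers never name a vendor, which is what prevents lock-in.
    return provider.complete(prompt)

print(answer("hello", ProviderA()))  # A says: hello
print(answer("hello", ProviderB()))  # B says: hello
```

Swapping providers then becomes a configuration change rather than a rewrite, which is the practical substance of the "orchestration layer" pitch.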
The announcement of OpenAI's GPT-5.2 the next day underscored this dynamic. OpenAI's own communications named Zoom as a partner that had evaluated the new model's performance "across their AI workloads and saw measurable gains across the board." Zoom, in other words, is both a customer of the frontier labs and now a competitor on their benchmarks, using their own technology.
This arrangement may prove sustainable. The leading model providers have every incentive to sell API access widely, even to companies that might aggregate their outputs. The more interesting question is whether Zoom's orchestration capabilities constitute real intellectual property or merely sophisticated prompt engineering that others could replicate.
The real test arrives when Zoom's 300 million users start asking questions
Zoom titled the section of its announcement on industry relations "A Collaborative Future," and Huang struck notes of gratitude throughout. "The future of AI is collaborative, not competitive," he wrote. "By combining the best innovations from across the industry with our own research breakthroughs, we create solutions that are greater than the sum of their parts."
This framing positions Zoom as a beneficent integrator, bringing together the industry's best work for the benefit of enterprise customers. Critics see something else: a company claiming the prestige of an AI laboratory without doing the foundational research that earns it.
The debate will likely be settled not by leaderboards but by products. When AI Companion 3.0 reaches Zoom's hundreds of millions of users in the coming months, they will render their own verdict, not on benchmarks they have never heard of, but on whether the meeting summary actually captured what mattered, whether the action items made sense, whether the AI saved them time or wasted it.
In the end, Zoom's most provocative claim may not be that it topped a benchmark. It may be the implicit argument that in the age of AI, the best model is not the one you build; it's the one you know how to use.
