Hirundo, the first startup dedicated to machine unlearning, has raised $8 million in seed funding to tackle some of the most pressing challenges in artificial intelligence: hallucinations, bias, and embedded data vulnerabilities. The round was led by Maverick Ventures Israel with participation from SuperSeed, Alpha Intelligence Capital, Tachles VC, AI.FUND, and Plug and Play Tech Center.
Making AI Forget: The Promise of Machine Unlearning
Unlike traditional AI tools that focus on refining or filtering AI outputs, Hirundo’s core innovation is machine unlearning—a method that enables AI models to “forget” specific knowledge or behaviors after they’ve already been trained. This approach enables enterprises to surgically remove hallucinations, biases, personal or proprietary data, and adversarial vulnerabilities from deployed AI models without retraining them from scratch. Retraining large-scale models can take weeks and cost tens of millions of dollars; Hirundo offers a far more efficient alternative.
Hirundo likens this process to AI neurosurgery: the company pinpoints exactly where in a model’s parameters undesired outputs originate and precisely removes them, all while preserving performance. This technique empowers organizations to remediate models in production environments and deploy AI with much greater confidence.
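To make the concept concrete, here is a minimal, generic sketch of one well-known unlearning approach—gradient *ascent* on a designated "forget set"—applied to a toy logistic-regression model. This is purely illustrative and is not Hirundo's proprietary method; all names and data here are invented for the example.

```python
import numpy as np

# Toy sketch of machine unlearning: train a small model, then apply
# gradient ASCENT on a "forget set" so the model's loss on that data
# rises (i.e., it "forgets" those examples) without full retraining.
rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad(w, X, y):
    # gradient of the mean log-loss for logistic regression
    return X.T @ (sigmoid(X @ w) - y) / len(y)

def loss(w, X, y):
    p = sigmoid(X @ w)
    return -np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))

# synthetic data: two Gaussian blobs (classes 0 and 1)
X = np.vstack([rng.normal(-1, 1, (100, 2)), rng.normal(1, 1, (100, 2))])
y = np.array([0] * 100 + [1] * 100)

# ordinary training: gradient descent
w = np.zeros(2)
for _ in range(500):
    w -= 0.5 * grad(w, X, y)

# designate a forget set, e.g. data a customer asked to have removed
X_f, y_f = X[:20], y[:20]
before = loss(w, X_f, y_f)

# unlearning step: ascend the loss on the forget set only
# (production methods also constrain accuracy on the retained data)
for _ in range(30):
    w += 0.1 * grad(w, X_f, y_f)

after = loss(w, X_f, y_f)
print(after > before)  # prints True: the model now fits the forget set worse
```

The key point the sketch shows is that unlearning edits the already-trained weights directly, rather than retraining from scratch or filtering outputs at inference time.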
Why AI Hallucinations Are So Dangerous
AI hallucinations refer to a model’s tendency to generate false or misleading information that sounds plausible and even factual. These hallucinations are especially problematic in enterprise environments, where decisions based on misinformation can result in legal exposure, operational errors, and reputational damage. Studies have shown that 58% to 82% of “facts” generated by AI for legal queries contained some form of hallucination.
Despite efforts to mitigate hallucinations using guardrails or fine-tuning, these methods often mask problems rather than eliminate them. Guardrails act like filters, and fine-tuning typically fails to remove the root cause—especially when the hallucination is baked deep into the model’s learned weights. Hirundo goes beyond this by actually removing the behavior or knowledge from the model itself.
A Scalable Platform for Any AI Stack
Hirundo’s platform is built for flexibility and enterprise-grade deployment. It integrates with both generative and non-generative systems across a wide range of data types—natural language, vision, radar, LiDAR, tabular, speech, and timeseries. The platform automatically detects mislabeled items, outliers, and ambiguities in training data. It then allows users to debug specific faulty outputs and trace them back to problematic training data or learned behaviors, which can then be unlearned directly.
All of this is achieved without changing workflows. Hirundo’s SOC-2 certified system can be run via SaaS, private cloud (VPC), or even air-gapped on-premises, making it suitable for sensitive environments such as finance, healthcare, and defense.
Demonstrated Impact Across Models
The company has already demonstrated strong performance improvements across popular large language models (LLMs). In tests using Llama and DeepSeek, Hirundo achieved a 55% reduction in hallucinations, a 70% decrease in bias, and an 85% reduction in successful prompt injection attacks. These results have been verified using independent benchmarks such as HaluEval, PurpleLlama, and Bias Benchmark for QA.
While current solutions work well with open-source models like Llama, Mistral, and Gemma, Hirundo is actively expanding support to gated models like ChatGPT and Claude. This makes its technology applicable across the full spectrum of enterprise LLMs.
Founders with Academic and Industry Depth
Hirundo was founded in 2023 by a trio of experts at the intersection of academia and enterprise AI. CEO Ben Luria is a Rhodes Scholar and former visiting fellow at Oxford, who previously founded fintech startup Worqly and co-founded ScholarsIL, a nonprofit supporting higher education. Michael Leybovich, Hirundo’s CTO, is a former graduate researcher at the Technion and award-winning R&D officer at Ofek324. Prof. Oded Shmueli, the company’s Chief Scientist, is a former Dean of Computer Science at the Technion and has held research positions at IBM, HP, AT&T, and more.
Their collective experience spans foundational AI research, real-world deployment, and secure data management—making them uniquely qualified to address the AI industry’s current reliability crisis.
Investor Backing for a Trustworthy AI Future
Investors in this round are aligned with Hirundo’s vision of building trustworthy, enterprise-ready AI. Yaron Carni, founder of Maverick Ventures Israel, noted the urgent need for a platform that can remove hallucinated or biased intelligence before it causes real-world harm.
SuperSeed’s Managing Partner, Mads Jensen, echoed this sentiment.
Addressing a Growing Challenge in AI Deployment
As AI systems are increasingly integrated into critical infrastructure, concerns about hallucinations, bias, and embedded sensitive data are becoming harder to ignore. These issues pose significant risks in high-stakes environments, from finance to healthcare and defense.
Machine unlearning is emerging as a critical tool in the AI industry’s response to rising concerns over model reliability and safety. As hallucinations, embedded bias, and exposure of sensitive data increasingly undermine trust in deployed AI systems, unlearning offers a direct way to mitigate these risks—after a model is trained and in use.
Rather than relying on retraining or surface-level fixes like filtering, machine unlearning enables targeted removal of problematic behaviors and data from models already in production. This approach is gaining traction among enterprises and government agencies seeking scalable, compliant solutions for high-stakes applications.