In a serious step toward safeguarding the longer term of AI, SplxAI, a trailblazer in offensive security for Agentic AI, has raised $7 million in seed funding. The round was led by LAUNCHub Ventures, with strategic participation from Rain Capital, Inovo, Runtime Ventures, DNV Ventures, and South Central Ventures. The brand new capital will speed up the event of the SplxAI Platform, designed to guard organizations deploying advanced AI agents and applications.
As enterprises increasingly integrate AI into day by day operations, the threat landscape is rapidly evolving. By 2028, it’s projected that 33% of enterprise applications will incorporate agentic AI — AI systems able to autonomous decision-making and sophisticated task execution. But this shift brings with it a vastly expanded attack surface that traditional cybersecurity tools are ill-equipped to handle.
said Kristian Kamber, CEO and Co-Founding father of SplxAI.
What Is Agentic AI and Why Is It a Security Risk?
Unlike conventional AI assistants that reply to direct prompts, agentic AI refers to systems able to performing multi-step tasks autonomously. Consider AI agents that may schedule meetings, book travel, or manage workflows — all without ongoing human input. This autonomy, while powerful, introduces serious risks including prompt injections, off-topic responses, context leakage, and AI hallucinations (false or misleading outputs).
Furthermore, most existing protections — akin to AI guardrails — are reactive and infrequently poorly trained, leading to either overly restrictive behavior or dangerous permissiveness. That’s where SplxAI steps in.
The SplxAI Platform: Red Teaming for AI at Scale
The SplxAI Platform delivers fully automated red teaming for GenAI systems, enabling enterprises to conduct continuous, real-time penetration testing across AI-powered workflows. It simulates sophisticated adversarial attacks — the sort that mimic real-world, highly expert attackers — across multiple modalities, including text, images, voice, and even documents.
Some standout capabilities include:
-
Dynamic Risk Evaluation: Constantly probes AI apps to detect vulnerabilities and supply actionable insights.
-
Domain-Specific Pentesting: Tailors testing to the unique use-cases of every organization — from finance to customer support.
-
CI/CD Pipeline Integration: Embeds security directly into the event process to catch vulnerabilities before production.
-
Compliance Mapping: Mechanically assesses alignment with frameworks like NIST AI, OWASP LLM Top 10, EU AI Act, and ISO 42001.
This proactive approach is already gaining traction. Customers include KPMG, Infobip, Brand Engagement Network, and Glean. Since launching in August 2024, the corporate has reported 127% quarter-over-quarter growth.
Investors Back the Vision for AI Security
LAUNCH
Rain Capital’s Dr. Chenxi Wang echoed this sentiment, highlighting the importance of automated red teaming for AI systems of their infancy:
Latest Additions Strengthen the Team
Alongside the funding, SplxAI announced two strategic hires:
-
Stan Sirakov (LAUNCHub Ventures) joins the Board of Directors.
-
Sandy Dunn, former CISO of Brand Engagement Network, steps in as Chief Information Security Officer to guide the corporate’s Governance, Risk, and Compliance (GRC) initiative.
Cutting-Edge Tools: Agentic Radar and Real-Time Remediation
Along with the core platform, SplxAI recently launched Agentic Radar — an open-source tool that maps dependencies in agentic workflows, identifies weak links, and surfaces security gaps through static code evaluation.
Meanwhile, their remediation engine offers an automatic approach to generate hardened system prompts, reducing attack surfaces by 80%, improving prompt leakage prevention by 97%, and minimizing engineering effort by 95%. These system prompts are critical in shaping AI behavior and, if exposed or poorly designed, can change into major security liabilities.
Simulating Real-World Threats in 20+ Languages
SplxAI also supports multi-language security testing, making it a world solution for enterprise AI security. The platform simulates malicious prompts from each adversarial and benign user types, helping organizations uncover threats like:
-
Context leakage (accidental disclosure of sensitive data)
-
Social engineering attacks
-
Prompt injection and jailbreak techniques
-
Toxic or biased outputs
All of that is delivered with minimal false positives, due to SplxAI’s unique AI red-teaming intelligence.
Looking Ahead: The Way forward for Secure AI
As businesses race to integrate AI into all the things from customer support to product development, the necessity for robust, real-time AI security has never been greater. SplxAI is leading the charge to make sure AI systems usually are not only powerful—but trustworthy, secure, and compliant.
Kamber added.
With its fresh capital and momentum, SplxAI is poised to change into a foundational layer within the AI security stack for years to come back.