Steve Wilson, Chief AI and Product Officer at Exabeam – Interview Series


Steve Wilson is the Chief AI and Product Officer at Exabeam, where his team applies cutting-edge AI technologies to tackle real-world cybersecurity challenges. He founded and co-chairs the OWASP Gen AI Security Project, the organization behind the industry-standard OWASP Top 10 for Large Language Model Security list.

His award-winning book, “The Developer’s Playbook for Large Language Model Security” (O’Reilly Media), was selected as the leading Cutting Edge Cybersecurity Book by Cyber Defense Magazine.

Exabeam is a leader in intelligence and automation that powers security operations for the world’s smartest companies. By combining the scale and power of AI with the strength of our industry-leading behavioral analytics and automation, organizations gain a more holistic view of security incidents, uncover anomalies missed by other tools, and achieve faster, more accurate, and repeatable responses. Exabeam empowers global security teams to combat cyberthreats, mitigate risk, and streamline operations.

Your new title is Chief AI and Product Officer at Exabeam. How does this reflect the evolving importance of AI within cybersecurity?

Cybersecurity was among the first domains to truly embrace machine learning—at Exabeam, we have used ML as the core of our detection engine for over a decade to identify anomalous behavior that humans alone might miss. With the arrival of newer AI technologies, such as intelligent agents, AI has grown from being important to absolutely central.

My combined role as Chief AI and Product Officer at Exabeam reflects exactly this evolution. At a company deeply committed to embedding AI throughout its products, and within an industry like cybersecurity where AI’s role is increasingly critical, it made sense to unify AI strategy and product strategy under one role. This integration ensures we’re strategically aligned to deliver transformative AI-driven solutions to the security analysts and operations teams who rely on us most.

Exabeam is pioneering “agentic AI” in security operations. Can you explain what that means in practice and how it differs from traditional AI approaches?

Agentic AI represents a meaningful evolution from traditional AI approaches. It’s action-oriented—proactively initiating processes, analyzing information, and presenting insights before analysts even ask for them. Beyond mere data analysis, agentic AI acts as an advisor, offering strategic recommendations across the entire SOC, guiding users toward the easiest wins and providing step-by-step guidance to improve their security posture. Moreover, agents operate as specialized packs, not one cumbersome chatbot, each tailored with specific personalities and datasets that integrate seamlessly into the workflows of analysts, engineers, and managers to deliver targeted, impactful assistance.
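To make the idea of specialized agent packs concrete, here is a minimal Python sketch of role-scoped agents dispatched by workflow. All names, personas, and data sources below are hypothetical illustrations, not Exabeam Nova’s actual design.

```python
# Illustrative only: a "pack" of narrow, role-scoped agents instead of one
# catch-all chatbot. Every name below is hypothetical.
from dataclasses import dataclass, field


@dataclass
class Agent:
    name: str                 # e.g., "triage-advisor"
    persona: str              # system prompt tailored to one SOC role
    data_sources: list = field(default_factory=list)

    def advise(self, task: str) -> str:
        # A real agent would call an LLM with this persona plus context
        # retrieved from its own data sources; here we format a stub.
        return f"[{self.name}] recommendation for: {task}"


PACK = {
    "triage": Agent("triage-advisor",
                    "You help analysts prioritize alerts.",
                    ["alerts", "case_history"]),
    "tuning": Agent("detection-engineer",
                    "You help engineers tune detection rules.",
                    ["rule_configs", "false_positive_stats"]),
    "reporting": Agent("soc-manager-brief",
                       "You summarize security posture for managers.",
                       ["kpi_metrics", "open_cases"]),
}


def dispatch(workflow: str, task: str) -> str:
    """Route each task to the specialist agent for that workflow."""
    return PACK[workflow].advise(task)


print(dispatch("triage", "three anomalous logins flagged on the same host"))
```

The design choice the sketch mirrors is the one described above: several small agents, each with its own persona and data, rather than a single general-purpose assistant.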

With Exabeam Nova integrating multiple AI agents across the SOC workflow, what does the future of the security analyst role look like? Is it evolving, shrinking, or becoming more specialized?

The security analyst role is definitely evolving. Analysts, security engineers, and SOC managers alike are overwhelmed with data, alerts, and cases. The real shift ahead isn’t just about saving time on mundane tasks—though agents certainly help there—but about elevating everyone’s role into that of a team lead. Analysts will still need strong technical skills, but now they’ll be leading a team of agents that can speed up their tasks, amplify their decisions, and genuinely drive improvements in security posture. This transformation positions analysts to become strategic orchestrators rather than tactical responders.

Recent data shows a disconnect between executives and analysts regarding AI’s productivity impact. Why do you think this perception gap exists, and how can it be addressed?

Recent data shows a clear disconnect: 71% of executives believe AI significantly boosts productivity, but only 22% of frontline analysts, the daily users, agree. At Exabeam, we have seen this gap grow alongside the recent frenzy of AI promises in cybersecurity. It’s never been easier to create flashy AI demos, and vendors are quick to claim they’ve solved every SOC challenge. While these demos dazzle executives initially, many fall short where it counts—in the hands of the analysts. The potential is there, and pockets of real payoff exist, but there’s still too much noise and too few meaningful improvements. To bridge this perception gap, executives must prioritize AI tools that genuinely empower analysts, not just impress in a demo. When AI truly enhances analysts’ effectiveness, trust and real productivity improvements will follow.

AI is accelerating threat detection and response, but how do you maintain the balance between automation and human judgment in high-stakes cybersecurity incidents?

AI capabilities are advancing rapidly, but today’s foundational language models underpinning intelligent agents were originally designed for tasks like language translation—not nuanced decision-making, game theory, or handling complex human factors. This makes human judgment more essential than ever in cybersecurity. The analyst role isn’t diminished by AI; it’s elevated. Analysts are now team leads, leveraging their experience and insight to guide and direct multiple agents, ensuring decisions remain informed by context and nuance. Ultimately, balancing automation with human judgment is about creating a symbiotic relationship where AI amplifies human expertise rather than replacing it.

How does your product strategy evolve when AI becomes a core design principle instead of an add-on?

At Exabeam, our product strategy is fundamentally shaped by AI as a core design principle, not a superficial add-on. We built Exabeam from the ground up to support machine learning—from log ingestion, parsing, enrichment, and normalization—to populate a robust Common Information Model specifically optimized to feed ML systems. High-quality, structured data isn’t just important to AI systems—it’s their lifeblood. Today, we embed our intelligent agents directly into critical workflows, avoiding generic, unwieldy chatbots. Instead, we precisely target the most critical use cases that deliver real-world, tangible benefits to our users.
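As one way to picture that ingest-parse-enrich-normalize pipeline, below is a minimal sketch of turning a raw log line into normalized, ML-ready fields. The regex, field names, and schema are simplified assumptions, not Exabeam’s actual Common Information Model.

```python
# Minimal sketch: normalize a vendor-specific log line into consistent,
# structured fields that a downstream ML engine could consume.
import re
from datetime import datetime, timezone
from typing import Optional

# Hypothetical pattern for an OpenSSH authentication message.
SSH_PATTERN = re.compile(
    r"(?P<ts>\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}) "
    r"(?P<host>\S+) sshd: (?P<outcome>Accepted|Failed) password for (?P<user>\S+)"
)


def normalize(raw_line: str) -> Optional[dict]:
    """Map one raw log line onto common-model-style fields."""
    match = SSH_PATTERN.match(raw_line)
    if not match:
        return None  # unparsed lines would route to a fallback parser
    return {
        "event_time": datetime.fromisoformat(match["ts"]).replace(tzinfo=timezone.utc),
        "host": match["host"],
        "user": match["user"],
        "action": "logon",
        "outcome": match["outcome"].lower(),  # enrichment/normalization step
        "vendor": "openssh",
    }


print(normalize("2024-05-01T12:00:00 web-01 sshd: Failed password for admin"))
```

The point of the sketch is the one made above: every source, whatever its native format, lands in one consistent schema before any model ever sees it.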

With Exabeam Nova, you’re aiming to “move from assistive to autonomous.” What are the key milestones for getting to fully autonomous security operations?

The idea of fully autonomous security operations is intriguing but premature. Fully autonomous agents, in any domain, simply aren’t yet effective or safe. While decision-making in AI is improving, it hasn’t reached human-level reliability and won’t for some time. At Exabeam, our approach isn’t chasing total autonomy, which my group at OWASP identifies as a core vulnerability known as Excessive Agency. Giving agents more autonomy than can be reliably tested and validated puts operations on dangerous ground. Instead, our goal is teams of intelligent agents, capable yet carefully guided, working under the supervision of human experts in the SOC. That combination of human oversight and targeted agentic assistance is the realistic, impactful path forward.
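Since Excessive Agency comes up here, a brief sketch may help show what bounding agent autonomy can look like in practice: every agent-initiated action is checked against an explicit allowlist, and high-impact actions wait for human sign-off. The tool names and impact tiers are invented for illustration.

```python
# Sketch of one guardrail against "Excessive Agency": agents may only invoke
# pre-approved tools, and high-impact actions require a human in the loop.
from enum import Enum


class Impact(Enum):
    READ_ONLY = 1    # safe to automate (e.g., query logs)
    REVERSIBLE = 2   # automate, but log for review (e.g., open a case)
    HIGH = 3         # requires human approval (e.g., isolate a host)


ALLOWED_TOOLS = {
    "search_logs": Impact.READ_ONLY,
    "open_case": Impact.REVERSIBLE,
    "isolate_host": Impact.HIGH,
}


def execute_tool(tool: str, approved_by_human: bool = False) -> str:
    """Gate every agent-initiated action against the allowlist."""
    impact = ALLOWED_TOOLS.get(tool)
    if impact is None:
        return f"DENIED: '{tool}' is not on the allowlist"
    if impact is Impact.HIGH and not approved_by_human:
        return f"PENDING: '{tool}' queued for analyst approval"
    return f"EXECUTED: '{tool}'"


print(execute_tool("isolate_host"))                          # PENDING
print(execute_tool("isolate_host", approved_by_human=True))  # EXECUTED
```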

What are the biggest challenges you have faced integrating GenAI and machine learning at the scale required for real-time cybersecurity?

One of the biggest challenges in integrating GenAI and machine learning at scale for cybersecurity is balancing speed and precision. GenAI alone can’t replace the sheer scale of what our high-speed ML engine handles—processing terabytes of data continuously. Even the most advanced AI agents have a “context window” that’s vastly insufficient for that volume. Instead, our recipe involves using ML to distill massive data into actionable insights, which our intelligent agents then translate and operationalize effectively.
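A rough sketch of that distill-then-operationalize recipe: a fast scoring stage reduces a large event stream to just the top anomalies that fit a fixed agent context budget. The token arithmetic and the scoring function are simplified assumptions, not Exabeam’s actual engine.

```python
# Sketch: ML distills a huge event stream; only the top anomalies that fit
# the agent's context window are passed along for GenAI reasoning.
import heapq

CONTEXT_BUDGET_TOKENS = 4000   # hypothetical agent context budget
TOKENS_PER_INSIGHT = 80        # rough per-summary token cost


def distill(events, score_fn):
    """Keep only as many top-scoring anomalies as the context budget allows."""
    k = CONTEXT_BUDGET_TOKENS // TOKENS_PER_INSIGHT
    return heapq.nlargest(k, events, key=score_fn)


# Toy stand-in for a behavioral-analytics anomaly score (higher = stranger).
events = [{"user": f"u{i}", "score": (i * 37) % 100} for i in range(100_000)]
insights = distill(events, score_fn=lambda e: e["score"])

print(f"{len(insights)} insights selected from {len(events):,} events")
```

Note the division of labor: the cheap scoring pass touches every event, while the expensive language-model stage only ever sees the distilled top slice.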

You co-founded the OWASP Top 10 for LLM Applications. What inspired this, and how do you see it shaping AI security best practices?

When I launched the OWASP Top 10 for LLM Applications in early 2023, structured information on LLM and GenAI security was scarce, but interest was incredibly high. Within days, over 200 volunteers joined the initiative, bringing diverse opinions and expertise to shape the original list. Since then, it has been read well over 100,000 times and has become foundational to international industry standards. Today, the effort has expanded into the OWASP Gen AI Security Project, covering areas like AI Red Teaming, securing agentic systems, and handling offensive uses of GenAI in cybersecurity. Our group recently surpassed 10,000 members and continues to advance AI security practices globally.

Your book, “The Developer’s Playbook for LLM Security,” won a top award. What’s the most important takeaway or principle from the book that every AI developer should understand when building secure applications?

The most important takeaway from my book, “The Developer’s Playbook for LLM Security,” is simple: “with great power comes great responsibility.” While understanding traditional security concepts remains essential, developers now face a wholly new set of challenges unique to LLMs. This powerful technology isn’t a free pass; it demands proactive, thoughtful security practices. Developers must expand their perspective, recognizing and addressing these new vulnerabilities from the outset and embedding security into every step of their AI application’s lifecycle.

How do you see the cybersecurity workforce evolving over the next five years as agentic AI becomes more mainstream?

We’re currently in an AI arms race. Adversaries are aggressively deploying AI to further their malicious goals, making cybersecurity professionals more crucial than ever. The next five years won’t diminish the cybersecurity workforce; they’ll elevate it. Professionals must embrace AI, integrating it into their teams and workflows. Security roles will shift toward strategic command—less about individual effort and more about orchestrating an effective response with a team of AI-driven agents. This transformation empowers cybersecurity professionals to lead decisively and confidently in the battle against ever-evolving threats.
