Rick Caccia, CEO and Co-Founder of WitnessAI, has extensive experience in launching security and compliance products. He has held leadership roles in product and marketing at Palo Alto Networks, Google, and Symantec. Caccia previously led product marketing at ArcSight through its IPO and subsequent operations as a public company, and served as the first Chief Marketing Officer at Exabeam. He holds multiple degrees from the University of California, Berkeley.
WitnessAI is developing a security platform focused on ensuring the safe and secure use of AI in enterprises. With each major technological shift, as with web, mobile, and cloud computing, new security challenges arise, creating opportunities for new industry leaders to emerge. AI represents the next frontier in this evolution.
The company aims to establish itself as a leader in AI security by combining expertise in machine learning, cybersecurity, and large-scale cloud operations. Its team brings deep experience in AI development, reverse engineering, and multi-cloud Kubernetes deployment, addressing the critical challenges of securing AI-driven technologies.
What inspired you to co-found WitnessAI, and what key challenges in AI governance and security were you aiming to solve?
When we first started the company, we thought that security teams would be concerned about attacks on their internal AI models. Instead, the first 15 CISOs we spoke with said the opposite: widespread corporate LLM rollout was a long way off, but the urgent problem was protecting their employees' use of other people's AI apps. We took a step back and saw that the problem wasn't fending off scary cyberattacks; it was safely enabling companies to use AI productively. While governance is perhaps less sexy than cyberattacks, it's what security and privacy teams actually needed. They needed visibility into what their employees were doing with third-party AI, a way to enforce acceptable use policies, and a way to protect data without blocking use of that data. So that's what we built.
Given your extensive experience at Google Cloud, Palo Alto Networks, and other cybersecurity firms, how did those roles influence your approach to building WitnessAI?
I have spoken with many CISOs over the years. One of the most common things I hear from CISOs today is, "I don't want to be 'Doctor No' when it comes to AI; I want to help our employees use it to be better." As someone who has worked with cybersecurity vendors for a long time, this is a very different statement. It's more reminiscent of the dotcom era, back when the Web was a new and transformative technology. When we built WitnessAI, we specifically started with product capabilities that helped customers adopt AI safely; our message was that this stuff is like magic, and of course everyone wants to experience magic. I think that security companies are too quick to play the fear card, and we wanted to be different.
What sets WitnessAI apart from other AI governance and security platforms on the market today?
Well, for one thing, most other vendors in the space are focused primarily on the security part, and not on the governance part. To me, governance is like the brakes on a car. If you really want to get somewhere quickly, you need effective brakes in addition to a powerful engine. Nobody is going to drive a Ferrari very fast if it has no brakes. In this case, your company using AI is the Ferrari, and WitnessAI is the brakes and steering wheel.
In contrast, most of our competitors focus on theoretical scary attacks on an organization's AI model. That is a real problem, but it's a different problem than getting visibility and control over how my employees are using any of the 5,000+ AI apps already on the internet. It's a lot easier for us to add an AI firewall (and we have) than it is for the AI firewall vendors to add effective governance and risk management.
How does WitnessAI balance the need for AI innovation with enterprise security and compliance?
As I wrote earlier, we believe that AI should be like magic: it can help you do amazing things. With that in mind, we think AI innovation and security are linked. If your employees can use AI safely, they will use it often and you will pull ahead. If you apply the typical security mindset and lock it down, your competitor won't do that, and they will pull ahead. Everything we do is about enabling safe adoption of AI. As one customer told me, "This stuff is magic, but most vendors treat it like it was black magic, scary and something to fear." At WitnessAI, we're helping to enable the magic.
Can you talk about the company's core philosophy regarding AI governance? Do you see AI security as an enabler rather than a restriction?
We regularly have CISOs come up to us at events where we have presented, and they tell us, "Your competitors are all about how scary AI is, and you're the only vendor that's telling us how to actually use it effectively." Sundar Pichai at Google has said that AI may be "more profound than fire," and that's an interesting metaphor. Fire can be incredibly damaging, as we have seen recently. But controlled fire can make steel, which accelerates innovation. Sometimes at WitnessAI we talk about creating the innovation that lets our customers safely direct AI "fire" to create the equivalent of steel. Alternatively, if you think AI is akin to magic, then perhaps our goal is to give you a magic wand, to direct and control it.
In either case, we absolutely believe that safely enabling AI is the goal. Just to give you an example, there are many data loss prevention (DLP) tools; it's a technology that's been around forever. People try to apply DLP to AI use, and maybe the DLP browser plug-in sees that you have typed a long prompt asking for help with your work, and that prompt inadvertently has a customer ID number in it. What happens? The DLP product blocks the prompt from going out, and you never get an answer. That's restriction. Instead, with WitnessAI, we can identify the same number, silently and surgically redact it on the fly, and then unredact it in the AI response, so that you get a useful answer while also keeping your data secure. That's enablement.
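To make that flow concrete, here is a minimal sketch of the redact-then-unredact idea. It assumes a hypothetical customer-ID format (CUST- followed by six digits) and invented function names; it is illustrative only, not WitnessAI's actual implementation.

```python
import re
import uuid

# Hypothetical customer-ID pattern used only for this sketch.
CUSTOMER_ID = re.compile(r"\bCUST-\d{6}\b")

def redact(prompt: str) -> tuple[str, dict[str, str]]:
    """Replace each customer ID with an opaque placeholder before the prompt
    leaves the network; keep a mapping so the response can be restored."""
    mapping: dict[str, str] = {}

    def _swap(match: re.Match) -> str:
        placeholder = f"<REDACTED-{uuid.uuid4().hex[:8]}>"
        mapping[placeholder] = match.group(0)
        return placeholder

    return CUSTOMER_ID.sub(_swap, prompt), mapping

def unredact(response: str, mapping: dict[str, str]) -> str:
    """Restore the original values in the AI response so the user still gets a useful answer."""
    for placeholder, original in mapping.items():
        response = response.replace(placeholder, original)
    return response

# Example: the prompt goes out with a placeholder; the answer comes back usable.
safe_prompt, ids = redact("Summarize the complaint history for CUST-482913.")
# ... send safe_prompt to the AI app and receive raw_response (simulated here) ...
raw_response = "Here is the complaint summary for " + next(iter(ids)) + " ..."
print(unredact(raw_response, ids))
```

The point of the sketch is the shape of the mechanism: the sensitive value never leaves the network, but the user's workflow is never blocked.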
What are the biggest risks enterprises face when deploying generative AI, and how does WitnessAI mitigate them?
The first is visibility. Many people are surprised to learn that the AI application universe isn't just ChatGPT and now DeepSeek; there are literally thousands of AI apps on the internet, and enterprises take on risk from employees using these apps. So the first step is getting visibility: which AI apps are my employees using, what are they doing with those apps, and is it dangerous?
The second is control. Your legal team has built a comprehensive acceptable use policy for AI, one that ensures the protection of customer data, citizen data, and intellectual property, as well as employee safety. How will you enforce this policy? Is it in your endpoint security product? In your firewall? In your VPN? In your cloud? What if they are all from different vendors? So, you need a way to define and enforce acceptable use policy that is consistent across AI models, apps, clouds, and security products.
The third is protection of your own apps. In 2025, we will see much faster adoption of LLMs inside enterprises, and then faster rollout of chat apps powered by those LLMs. So, enterprises need to make sure not only that those apps are protected, but also that the apps don't say "dumb" things, like recommending a competitor.
We address all three. We provide visibility into which apps people are accessing, how they're using those apps, policy that is based on who you are and what you are trying to do, and very effective tools for stopping attacks such as jailbreaks or unwanted behaviors from your bots.
How does WitnessAI's AI observability feature help companies track employee AI usage and prevent "shadow AI" risks?
WitnessAI connects to your network easily and silently builds a catalog of every AI app (and there are literally thousands of them on the internet) that your employees access. We tell you where those apps are located, where they host their data, and so on, so that you understand how risky these apps are. You can turn on conversation visibility, where we use deep packet inspection to monitor prompts and responses. We can classify prompts by risk and by intent. Intent might be "write code" or "write a corporate contract." That matters because we then let you write intent-based policy controls, as in the sketch below.
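As a rough illustration of what intent classification buys you, here is a toy sketch. The intent labels are hypothetical, and a simple keyword lookup stands in for the real classifier, which would be a trained model; none of this reflects WitnessAI's actual internals.

```python
# Toy intent labels and keyword cues; a production system would use a trained
# classifier rather than keyword matching.
INTENT_KEYWORDS = {
    "write code": ["function", "bug", "refactor", "python", "compile"],
    "write a corporate contract": ["contract", "indemnification", "liability", "terms"],
}

def classify_intent(prompt: str) -> str:
    """Return the first intent whose keywords appear in the prompt, else a default label."""
    lowered = prompt.lower()
    for intent, keywords in INTENT_KEYWORDS.items():
        if any(word in lowered for word in keywords):
            return intent
    return "general question"

print(classify_intent("Draft an indemnification clause for our vendor contract."))
# -> "write a corporate contract"
```

Once each prompt carries an intent label like this, policy can be written against what the employee is trying to do, not just which URL they visited.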
What role does AI policy enforcement play in ensuring corporate AI compliance, and how does WitnessAI streamline this process?
Compliance means ensuring that your organization is following regulations or policies, and there are two parts to ensuring compliance. The first is that you need to be able to identify problematic activity. For example, I need to know that an employee is using customer data in a way that might run afoul of a data protection law. We do that with our observability platform. The second part is describing and enforcing policy against that activity. You don't want to simply know that customer data is leaking; you want to stop it from leaking. So, we have built a unique AI-specific policy engine, Witness/CONTROL, that lets you easily build identity- and intention-based policies to protect data, prevent harmful or illegal responses, and so on. For example, you can build a policy that says something like, "Only our legal department can use ChatGPT to write corporate contracts, and if they do so, automatically redact any PII." Easy to say, and with WitnessAI, easy to implement.
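To show how an identity- and intention-based rule like that example might be expressed and evaluated, here is a minimal sketch. The rule structure, field names, and actions are hypothetical, not the actual Witness/CONTROL schema.

```python
from dataclasses import dataclass

@dataclass
class Rule:
    app: str                  # which AI app the rule applies to
    intent: str               # classified intent of the prompt
    allowed_groups: set[str]  # identity: who may perform this activity
    redact_pii: bool          # whether to redact PII when allowed

# One rule mirroring the example policy above.
RULES = [
    Rule(app="ChatGPT", intent="write a corporate contract",
         allowed_groups={"legal"}, redact_pii=True),
]

def evaluate(app: str, intent: str, user_group: str) -> str:
    """Return the action for a prompt: allow, allow with redaction, or block."""
    for rule in RULES:
        if rule.app == app and rule.intent == intent:
            if user_group not in rule.allowed_groups:
                return "block"
            return "allow_with_redaction" if rule.redact_pii else "allow"
    return "allow"  # default when no rule matches

print(evaluate("ChatGPT", "write a corporate contract", "legal"))      # allow_with_redaction
print(evaluate("ChatGPT", "write a corporate contract", "marketing"))  # block
```

The value of combining identity and intent is that the same app can be permitted for one department and blocked for another, with data protection applied automatically where it is allowed.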
How does WitnessAI address concerns around LLM jailbreaks and prompt injection attacks?
We have a hardcore AI research team, really sharp. Early on, they built a system to create synthetic attack data, in addition to pulling in widely available training data sets. As a result, we have benchmarked our prompt injection detection against everything on the market; we're over 99% effective and regularly catch attacks that the models themselves miss.
In practice, most companies we speak with want to start with employee app governance, and then a bit later they roll out an AI customer app based on their internal data. So, they use Witness to protect their people, then they turn on the prompt injection firewall. One system, one consistent way to build policies, easy to scale.
What are your long-term goals for WitnessAI, and where do you see AI governance evolving in the next five years?
So far, we've only talked about a person-to-chat-app model here. Our next phase will be to handle app-to-app interactions, i.e., agentic AI. We've designed the APIs in our platform to work equally well with both agents and humans. Beyond that, we believe we've built a new way to get network-level visibility and policy control in the AI age, and we'll be growing the company with that in mind.