Dr. Peter Garraghan, CEO, CTO & Co-Founder at Mindgard – Interview Series

-

Dr. Peter Garraghan is CEO, CTO & co-founder at Mindgard, the leader in Artificial Intelligence Security Testing. Founded at Lancaster University and backed by leading edge research, Mindgard enables organizations to secure their AI systems from latest threats that traditional application security tools cannot address. As a Professor of Computer Science at Lancaster University, Peter is an internationally recognized expert in AI security. He has devoted his profession to developing advanced technologies to combat the growing threats facing AI. With over €11.6 million in research funding and greater than 60 published scientific papers, his contributions span each scientific innovation and practical solutions.

Are you able to share the story behind Mindgard’s founding? What inspired you to transition from academia to launching a cybersecurity startup?

Mindgard was born out of a desire to show academic insights into real-world impact. As a professor specializing in computing systems, AI security, and machine learning, I actually have been driven to pursue science that generates large-scale impact on people’s lives. Since 2014, I’ve researched AI and machine learning, recognizing their potential to rework society—and the immense risks they pose, from nation-state attacks to election interference. Existing tools weren’t built to deal with these challenges, so I led a team of scientists and engineers to develop revolutionary approaches in AI security. Mindgard emerged as a research-driven enterprise focused on constructing tangible solutions to guard against AI threats, mixing cutting-edge research with a commitment to industry application.

What challenges did you face while spinning out an organization from a university, and the way did you overcome them?

We officially founded Mindgard in May 2022, and while Lancaster University provided great support, making a university spin-out requires greater than just research skills. That meant raising capital, refining the worth proposition, and getting the tech ready for demos—all while balancing my role as a professor. Academics are trained to be researchers and to pursue novel science. Spin-outs succeed not only on groundbreaking technology but on how well that technology addresses immediate or future business needs and delivers value that pulls and retains users and customers.

Mindgard’s core product is the results of years of R&D. Are you able to speak about how the early stages of research evolved right into a business solution?

The journey from research to a business solution was a deliberate and iterative process. It began over a decade ago, with my team at Lancaster University exploring fundamental challenges in AI and machine learning security. We identified vulnerabilities in instantiated AI systems that traditional security tools, each code scanning and firewalls, weren’t equipped to deal with.

Over time, our focus shifted from research exploration to constructing prototypes and testing them inside production scenarios. Collaborating with industry partners, we refined our approach, ensuring it addressed practical needs. With many AI products being launched without adequate security testing or assurances, leaving organizations vulnerable—a problem underscored by a Gartner finding that 29% of enterprises deploying AI systems have reported security breaches, and only 10% of internal auditors have visibility into AI risk— I felt the timing was right to commercialise the answer.

What are among the key milestones in Mindgard’s journey since its inception in 2022?

In September 2023, we secured £3 million in funding, led by IQ Capital and Lakestar, to speed up the event of the Mindgard solution. We’ve been able to determine an incredible team of leaders who’re ex-Snyk, Veracode, and Twilio folks to push our company to the following stage of its journey. We’re pleased with our recognition because the UK’s Most Progressive Cyber SME at Infosecurity Europe this 12 months. Today, we’ve got 15 full time employees, 10 PhD researchers (and more who’re being actively recruited), and are actively recruiting security analysts and engineers to hitch the team. Looking ahead, we plan to expand our presence within the US, with a brand new funding round from Boston-based investors providing a powerful foundation for such growth.

As enterprises increasingly adopt AI, what do you see as probably the most pressing cybersecurity threats they face today?

Many organizations underestimate the cybersecurity risks tied to AI. It is amazingly difficult for non-specialists to grasp how AI actually works, much less what are the safety implications to their business. I spend a substantial period of time demystifying AI security, even with seasoned technologists who’re experts in infrastructure security and data protection. At the tip of the day, AI remains to be essentially software and data running on hardware. But it surely introduces unique vulnerabilities that differ from traditional systems and the threats from AI behavior are much higher, and harder to check compared to other software.

You’ve uncovered vulnerabilities in systems like Microsoft’s AI content filters. How do these findings influence the event of your platform?

The vulnerabilities we uncovered in Microsoft’s Azure AI Content Safety Service were less about shaping our platform’s development, and more about showcasing its capabilities.

Azure AI Content Safety is a service designed to safeguard AI applications by moderating harmful content in text, images, and videos. Vulnerabilities that were discovered by our team affected the service’s AI Text Moderation (which blocks harmful content like hate speech, sexual material, etc) and Prompt Shield (which prevents jailbreaks and prompt injection). Left unchecked, this vulnerability will be exploited to launch broader attacks, undermine the trust in GenAI-based systems, and compromise the appliance integrity that depend on AI for decision-making and knowledge processing.

As of October 2024, Microsoft implemented stronger mitigations to deal with these issues. Nonetheless, we proceed to advocate for heightened vigilance when deploying AI guardrails. Supplementary measures, equivalent to additional moderation tools or using LLMs less vulnerable to harmful content and jailbreaks, are essential for ensuring robust AI security.

Are you able to explain the importance of “jailbreaks” and “prompt manipulation” in AI systems, and why they pose such a novel challenge?

A Jailbreak is a sort of prompt injection vulnerability where a malicious actor can abuse an LLM to follow instructions contrary to its intended use. Inputs processed by LLMs contain each standing instructions by the appliance designer and untrusted user-input, enabling attacks where the untrusted user input overrides the standing instructions. This is analogous to how an SQL injection vulnerability enables untrusted user input to alter a database query. The issue nonetheless is that these risks can only be detected at run-time, given the code of an LLM is effectively a large matrix of numbers in non-human readable format.

For instance, Mindgard’s research team recently explored a complicated type of jailbreak attack. It incorporates embedding secret audio messages inside audio inputs which are undetectable by human listeners but recognized and executed by LLMs. Each embedded message contained a tailored jailbreak command together with an issue designed for a selected scenario. So, in a medical chatbot scenario, the hidden message could prompt the chatbot to offer dangerous instructions, equivalent to tips on how to synthesize methamphetamine, which could end in severe reputational damage if the chatbot’s response were taken seriously.

Mindgard’s platform identifies such jailbreaks and lots of other security vulnerabilities in AI models and the best way businesses have implemented them of their application, so security leaders can ensure their AI-powered application is secure by design and stays secure.

How does Mindgard’s platform address vulnerabilities across various kinds of AI models, from LLMs to multi-modal systems?

Our platform addresses a wide selection of vulnerabilities inside AI, spanning prompt injection, jailbreaks, extraction (stealing models), inversion (reverse engineering data), data leakage, and evasion (bypassing detection), and more. All AI model types (whether LLM or multi-modal) exhibit susceptibility to the risks – the trick is uncovering which specific techniques that triggers these vulnerabilities to supply a security issue. At Mindgard we’ve got a big R&D team that focuses on discovering and implementing latest attack types into our platform, in order that users can not sleep to this point against state-of-the-art risks.

What role does red teaming play in securing AI systems, and the way does your platform innovate on this space?

Red teaming is a critical component of AI security. By constantly simulating adversarial attacks, red teaming identifies vulnerabilities in AI systems, helping organizations mitigate risks and speed up AI adoption.  Despite its importance, red teaming in AI lacks standardization, resulting in inconsistencies in threat assessment and remediation strategies. This makes it difficult to objectively compare the protection of various systems or track threats effectively.

To deal with this, we introduced MITRE ATLAS™ Adviser, a feature designed to standardize AI red teaming reporting and streamline systematic red teaming practices. This permits enterprises to raised manage today’s risks while preparing for future threats as AI capabilities evolve.  With a comprehensive library of advanced attacks developed by our R&D team, Mindgard supports multimodal AI red teaming, covering traditional and GenAI models. Our platform addresses key risks to privacy, integrity, abuse, and availability, ensuring enterprises are equipped to secure their AI systems effectively.

How do you see your product fitting into the MLOps pipeline for enterprises deploying AI at scale?

Mindgard is designed to integrate easily into existing CI/CD Automation and all SDLC stages, requiring only an inference or API endpoint for model integration. Our solution today performs Dynamic Application Security Testing of AI Models (DAST-AI). It empowers our customers to perform continuous security testing on all their AI across the complete construct and buy lifecycle. For enterprises, it’s utilized by multiple personas. Security teams use it to achieve visibility and respond quickly to risks from developers constructing and using AI, to check and evaluate AI guardrails and WAF solutions, and to evaluate risks between tailored AI models and baseline models. Pentesters and security analysts leverage Mindgard to scale their AI red teaming efforts, while developers profit from integrated continuous testing of their AI deployments.

ASK DUKE

What are your thoughts on this topic?
Let us know in the comments below.

0 0 votes
Article Rating
guest
0 Comments
Oldest
Newest Most Voted
Inline Feedbacks
View all comments

Share this article

Recent posts

0
Would love your thoughts, please comment.x
()
x