Kieran Norton, a principal (partner) at Deloitte & Touche LLP, is the US Cyber AI & Automation Leader for Deloitte. With over 25 years of experience and a strong technology background, Kieran excels at addressing emerging risks, providing clients with strategic and pragmatic insights into cybersecurity and technology risk management.
Inside Deloitte, Kieran leads the AI transformation efforts for the US Cyber practice. He oversees the design, development, and market deployment of AI and automation solutions, helping clients enhance their cyber capabilities and adopt AI/Gen AI technologies while effectively managing the associated risks.
Externally, Kieran helps clients evolve their traditional security strategies to support digital transformation, modernize supply chains, accelerate time to market, reduce costs, and achieve other critical business objectives.
With AI agents becoming increasingly autonomous, what new categories of cybersecurity threats are emerging that companies may not yet fully understand?
The risks associated with using new AI technologies to design, build, deploy and manage agents may be broadly understood; operationalizing that understanding is a different matter.
AI agent agency and autonomy, meaning the ability of agents to perceive, decide, act and operate independent of humans, can create challenges in maintaining visibility and control over the relationships and interactions that models/agents have with users, data and other agents. As agents continue to multiply inside the enterprise, connecting multiple platforms and services with increasing autonomy and decision rights, this will become increasingly difficult. The threats related to poorly protected, excessive or shadow AI agency/autonomy are numerous. These include data leakage, agent manipulation (via prompt injection, etc.) and agent-to-agent attack chains. Not all of these threats are here-and-now, but enterprises should consider how they will manage them as they adopt and mature AI-driven capabilities.
AI identity management is another risk that should be thoughtfully considered. Identifying, establishing and managing the machine identities of AI agents will become more complex as more agents are deployed and used across enterprises. The ephemeral nature of AI models and model components, which are spun up and torn down repeatedly under varying circumstances, will create challenges in maintaining these model IDs. Model identities are needed to monitor the activity and behavior of agents from both a security and a trust perspective. If they are not implemented and monitored properly, detecting potential issues (performance, security, etc.) can be very difficult.
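To make the idea of ephemeral machine identities concrete, here is a minimal, hypothetical sketch (Python standard library only) of one way short-lived agent identities could be issued and verified: each agent instance receives a signed, expiring token whose claims tie it back to the workload that spawned it, so its activity can be attributed and monitored. All names, TTLs and key handling are illustrative assumptions, not a reference to any specific product.

```python
# Illustrative sketch: short-lived, signed identities for ephemeral AI agent instances.
# Names (issue_agent_identity, AGENT_ID_TTL, etc.) are hypothetical.
import base64
import hashlib
import hmac
import json
import time
import uuid

SIGNING_KEY = b"replace-with-a-managed-secret"   # in practice, pulled from a vault/KMS
AGENT_ID_TTL = 900                               # seconds; deliberately short-lived

def issue_agent_identity(agent_role: str, parent_workload: str) -> str:
    """Create a signed, expiring identity token for a newly spun-up agent."""
    claims = {
        "agent_id": str(uuid.uuid4()),       # unique per agent instance
        "role": agent_role,                  # e.g. "triage-agent"
        "parent": parent_workload,           # workload that spawned the agent
        "iat": int(time.time()),
        "exp": int(time.time()) + AGENT_ID_TTL,
    }
    body = base64.urlsafe_b64encode(json.dumps(claims).encode())
    sig = hmac.new(SIGNING_KEY, body, hashlib.sha256).hexdigest()
    return f"{body.decode()}.{sig}"

def verify_agent_identity(token: str) -> dict | None:
    """Return the claims if the token is authentic and unexpired, else None."""
    body, _, sig = token.rpartition(".")
    expected = hmac.new(SIGNING_KEY, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None                          # tampered or foreign token
    claims = json.loads(base64.urlsafe_b64decode(body.encode()))
    return claims if claims["exp"] > time.time() else None
```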
How concerned should we be about data poisoning attacks in AI training pipelines, and what are the best prevention strategies?
Data poisoning is one of several ways to influence or manipulate AI models within the model development lifecycle. Poisoning typically occurs when a bad actor injects harmful data into the training set. However, it's important to note that beyond explicit adversarial actors, data poisoning can occur due to mistakes or systemic issues in data generation. As organizations become more data hungry and look for usable data in more places (e.g., outsourced manual annotation, purchased or generated synthetic data sets, etc.), the potential for unintentionally poisoning training data grows, and it may not always be easily diagnosed.
Targeting training pipelines is a primary attack vector used by adversaries for both subtle and overt influence. Manipulation of AI models can result in outcomes that include false positives, false negatives, and other, more subtle covert influences that can alter AI predictions.
Prevention strategies span technical, procedural and architectural solutions. Procedural strategies include data validation/sanitization and trust assessments; technical strategies include using security enhancements with AI techniques like federated learning; architectural strategies include implementing zero-trust pipelines and robust monitoring/alerting that facilitates anomaly detection. These models are only as good as their data, even when an organization is using the latest and greatest tools, so data poisoning can become an Achilles' heel for the unprepared.
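As a simple illustration of the data validation and anomaly detection strategies mentioned above, the sketch below screens a training batch before ingestion, combining a provenance (trust) check with a basic statistical outlier test. The record schema, trust tiers and threshold are assumptions for the example; production pipelines would rely on far more robust statistical and provenance controls.

```python
# Illustrative pre-ingestion screen for a training pipeline (not a complete defense).
from statistics import mean, stdev

TRUSTED_SOURCES = {"internal-curated", "vetted-vendor"}   # hypothetical trust tiers

def screen_training_batch(records, z_threshold=4.0):
    """Split a batch into (accepted, quarantined) records.

    Each record is assumed to look like:
      {"source": str, "feature": float, "label": str}
    """
    accepted, quarantined = [], []

    # 1. Provenance check: records from unvetted sources are quarantined for review.
    vetted = []
    for r in records:
        (vetted if r["source"] in TRUSTED_SOURCES else quarantined).append(r)

    # 2. Simple anomaly screen: flag extreme outliers in a numeric feature,
    #    which can indicate injected or corrupted samples.
    if len(vetted) >= 2:
        mu = mean(r["feature"] for r in vetted)
        sigma = stdev(r["feature"] for r in vetted)
        for r in vetted:
            z = abs(r["feature"] - mu) / sigma if sigma else 0.0
            (quarantined if z > z_threshold else accepted).append(r)
    else:
        accepted.extend(vetted)

    return accepted, quarantined
```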
In what ways can malicious actors manipulate AI models post-deployment, and how can enterprises detect tampering early?
Access to AI models post-deployment is typically achieved through an Application Programming Interface (API), through an application on an embedded system, and/or via a port/protocol to an edge device. Early detection requires early work in the Software Development Lifecycle (SDLC): understanding the relevant model manipulation techniques as well as the prioritized threat vectors in order to plan methods for detection and protection. Model manipulation techniques include API hijacking, manipulation of runtime memory spaces, and slow/gradual poisoning via model drift. Given these methods of manipulation, early detection strategies may include using endpoint telemetry/monitoring (via Endpoint Detection and Response and Extended Detection and Response), implementing secure inference pipelines (e.g., confidential computing and Zero Trust principles), and enabling model watermarking/model signing.
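As a simple illustration of the model signing idea, the sketch below signs a serialized model artifact and verifies that signature before the artifact is loaded for inference, making tampering with the file detectable. It is a deliberately minimal, symmetric-key example using the Python standard library; the file names and key handling are hypothetical, and real deployments would typically use asymmetric signatures and managed keys.

```python
# Illustrative sketch: sign a model artifact at build time, verify before inference.
import hashlib
import hmac
from pathlib import Path

SIGNING_KEY = b"replace-with-key-from-kms"   # hypothetical; never hard-code in practice

def sign_model(model_path: str) -> str:
    """Produce an HMAC-SHA256 signature over the serialized model artifact."""
    artifact = Path(model_path).read_bytes()
    return hmac.new(SIGNING_KEY, artifact, hashlib.sha256).hexdigest()

def verify_model(model_path: str, expected_signature: str) -> bool:
    """Recompute the signature and compare before the model is loaded."""
    return hmac.compare_digest(sign_model(model_path), expected_signature)

# Example usage (paths are placeholders):
# signature = sign_model("model.onnx")          # stored alongside the artifact
# assert verify_model("model.onnx", signature)  # checked by the inference service
```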
Prompt injection is a family of model attacks that occur post-deployment and can be used for various purposes, including extracting data in unintended ways, revealing system prompts not meant for normal users, and inducing model responses that may cast an organization in a negative light. There are a number of guardrail tools available to help mitigate the risk of prompt injection, but as with the rest of cyber, this is an arms race where attack techniques and defensive countermeasures are continually being updated.
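For illustration only, here is a naive, pattern-based prompt-injection check of the kind a guardrail layer might apply before a prompt reaches the model. Real guardrail products combine classifiers, heuristics and policy engines; this regex list is purely an example, easy to evade, and the handler names in the usage comment are hypothetical.

```python
# Naive input guardrail sketch: flag prompts matching known injection phrasings.
import re

INJECTION_PATTERNS = [
    r"ignore (all|any|previous) (instructions|prompts)",
    r"reveal (your|the) system prompt",
    r"disregard your (rules|guidelines|guardrails)",
]

def looks_like_injection(user_prompt: str) -> bool:
    """Return True if the prompt matches a known injection phrasing."""
    lowered = user_prompt.lower()
    return any(re.search(p, lowered) for p in INJECTION_PATTERNS)

# Example (handlers are hypothetical):
# if looks_like_injection(prompt):
#     log_and_block(prompt)
# else:
#     send_to_model(prompt)
```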
How do traditional cybersecurity frameworks fall short in addressing the unique risks of AI systems?
We typically associate 'cybersecurity framework' with guidance and standards, e.g., NIST, ISO, MITRE, etc. Some of the organizations behind these have published updated guidance specific to protecting AI systems, which is very helpful.
AI doesn't render these frameworks ineffective; you still need to address all the traditional domains of cybersecurity, but you may need to update your processes and programs (e.g., your SDLC) to address the nuances associated with AI workloads. Embedding and automating (where possible) controls to protect against the nuanced threats described above is the most efficient and effective way forward.
At a tactical level, it's worth mentioning that the full range of possible inputs and outputs is often vastly larger than for non-AI applications, which creates a problem of scale for traditional penetration testing and rules-based detections, hence the focus on automation.
What key elements should be included in a cybersecurity strategy specifically designed for organizations deploying generative AI or large language models?
When developing a cybersecurity strategy for deploying GenAI or large language models (LLMs), there is no one-size-fits-all approach. Much depends on the organization's overall business objectives, IT strategy, industry focus, regulatory footprint, risk tolerance, etc., as well as the specific AI use cases under consideration. An internal-use-only chatbot carries a very different risk profile than an agent that could impact health outcomes for patients, for example.
That said, there are fundamentals that every organization should address:
- Conduct a readiness assessment: this establishes a baseline of current capabilities and identifies potential gaps in light of prioritized AI use cases. Organizations should identify where existing controls can be extended to address the nuanced risks associated with GenAI, and where new technologies or enhanced processes are needed.
- Establish an AI governance process: this may be net new within an organization or a modification of current risk management programs. It should include defining enterprise-wide AI enablement functions and pulling in stakeholders from across the business, IT, product, risk, cybersecurity, etc. as part of the governance structure. It should also cover defining or updating relevant policies (acceptable use, cloud security, third-party technology risk management, etc.) and establishing L&D requirements to support AI literacy and AI security/safety throughout the organization.
- Establish a trusted AI architecture: with the stand-up of AI/GenAI platforms and experimentation sandboxes, existing technology as well as new solutions (e.g., AI firewalls/runtime security, guardrails, model lifecycle management, enhanced IAM capabilities, etc.) will need to be integrated into development and deployment environments in a repeatable, scalable fashion.
- Enhance the SDLC: organizations should build tight integrations between AI developers and the risk management teams working to protect, secure and build trust into AI solutions. This includes establishing a uniform, standard set of secure software development practices and control requirements, in partnership with the broader AI development and adoption teams.
Can you explain the concept of an "AI firewall" in simple terms? How does it differ from traditional network firewalls?
An AI firewall is a security layer designed to monitor and control the inputs and outputs of AI systems (especially large language models) in order to prevent misuse, protect sensitive data, and ensure responsible AI behavior. Unlike traditional firewalls, which protect networks by filtering traffic based on IP addresses, ports, and known threats, AI firewalls focus on understanding and managing natural language interactions. They block things like toxic content, data leakage, prompt injection, and unethical use of AI by applying policies, context-aware filters, and model-specific guardrails. In essence, while a traditional firewall protects your network, an AI firewall protects your AI models and their outputs.
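Below is a hypothetical sketch of the "AI firewall" pattern just described: a thin layer that inspects the natural-language input and output of a model call and applies content-aware policies, here blocking a disallowed topic on the way in and redacting email/SSN-like strings on the way out. The policies, patterns and the call_model parameter are assumptions for illustration, not any specific product's behavior.

```python
# Content-aware "AI firewall" sketch: inbound topic policy + outbound redaction.
import re
from typing import Callable

BLOCKED_TOPICS = ("malware source code", "credential harvesting")   # example input policy
PII_PATTERNS = [
    r"\b[\w.+-]+@[\w-]+\.[\w.]+\b",          # email addresses
    r"\b\d{3}-\d{2}-\d{4}\b",                # US SSN-like strings
]

def ai_firewall(prompt: str, call_model: Callable[[str], str]) -> str:
    # Inbound policy: refuse prompts that touch disallowed topics.
    if any(topic in prompt.lower() for topic in BLOCKED_TOPICS):
        return "Request blocked by policy."

    response = call_model(prompt)

    # Outbound policy: redact data that should never leave the boundary.
    for pattern in PII_PATTERNS:
        response = re.sub(pattern, "[REDACTED]", response)
    return response

# Example (my_llm_client is a hypothetical callable that sends a prompt to an LLM):
# answer = ai_firewall("Summarize this ticket", my_llm_client)
```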
Are there any current industry standards or emerging protocols that govern the use of AI-specific firewalls or guardrails?
Model Context Protocol (MCP) is not a universal standard but is gaining traction across the industry to help address the growing configuration burden on enterprises that need to manage a diversity of AI/GenAI solutions. MCP governs how AI models exchange information (including learning), inclusive of integrity and verification. We can think of MCP as the Transmission Control Protocol (TCP)/Internet Protocol (IP) stack for AI models, which is particularly useful in centralized, federated, and distributed use cases. MCP is presently a conceptual framework that is realized through various tools, research, and projects.
The space is moving quickly, and we can expect it to shift quite a bit over the next few years.
How is AI transforming the field of threat detection and response today compared to just five years ago?
We have seen commercial security operations center (SOC) platforms modernizing to varying degrees, using massive, high-quality data sets together with advanced AI/ML models to improve detection and classification of threats. Additionally, they are leveraging automation, workflow and auto-remediation capabilities to reduce the time from detection to mitigation. Lastly, some have introduced copilot capabilities to further support triage and response.
Additionally, agents are being developed to fill select roles within the SOC. As a practical example, we have built a 'Digital Analyst' agent for deployment in our own managed services offering. The agent serves as a level-one analyst, triaging inbound alerts, adding context from threat intel and other sources, and recommending response steps (based on extensive case history) for our human analysts, who then review, modify if needed, and take action.
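For readers unfamiliar with the pattern, here is a drastically simplified sketch of the level-one triage flow described above (it is not the actual 'Digital Analyst' agent). The enrichment source, scoring rules and data shapes are hypothetical placeholders meant only to show the enrich-then-recommend-then-hand-off structure.

```python
# Simplified alert-triage pattern: enrich an alert, recommend a response, hand off to a human.
from dataclasses import dataclass, field

THREAT_INTEL = {"198.51.100.7": "known C2 infrastructure"}    # hypothetical intel feed

@dataclass
class TriagedAlert:
    alert: dict
    context: list = field(default_factory=list)
    severity: str = "low"
    recommendation: str = "Close as benign"

def triage(alert: dict) -> TriagedAlert:
    """Enrich an inbound alert and recommend a response for human review."""
    result = TriagedAlert(alert=alert)

    # Enrich with threat intelligence and adjust the recommendation accordingly.
    intel = THREAT_INTEL.get(alert.get("src_ip", ""))
    if intel:
        result.context.append(intel)
        result.severity = "high"
        result.recommendation = "Isolate host and escalate to IR"

    return result   # a human analyst reviews, modifies if needed, and acts

# Example:
# print(triage({"src_ip": "198.51.100.7", "rule": "outbound beaconing"}))
```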
How do you see the relationship between AI and cybersecurity evolving over the next 3-5 years? Will AI be more of a risk or a solution?
As AI evolves over the next 3-5 years, it will help cybersecurity, but at the same time it will also introduce risks. AI will expand the attack surface and create new challenges from a defensive perspective. Additionally, adversarial AI is going to increase the viability, speed and scale of attacks, which will create further challenges. On the flip side, leveraging AI in the business of cybersecurity presents significant opportunities to improve the effectiveness, efficiency, agility and speed of cyber operations across most domains, ultimately creating a 'fight fire with fire' scenario.