Agent autonomy without guardrails is an SRE nightmare

João Freitas is GM and VP of engineering for AI and automation at PagerDuty

As AI use continues to evolve in large organizations, leaders are increasingly searching for the next development that can yield major ROI. The latest wave of this trend is the adoption of AI agents. However, as with any new technology, organizations must ensure they adopt AI agents responsibly, in a way that enables both speed and security.

More than half of organizations have already deployed AI agents to some extent, with more expecting to follow suit in the next two years. But many early adopters are now reevaluating their approach. Four in ten tech leaders regret not establishing a stronger governance foundation from the start, which suggests they adopted AI rapidly but with room to improve on the policies, rules and best practices designed to ensure the responsible, ethical and legal development and use of AI.

As AI adoption accelerates, organizations must find the right balance between exposure to risk and the implementation of guardrails that ensure AI use is secure.

Where do AI agents create potential risks?

There are three main areas of consideration for safer AI adoption.

The first is shadow AI: employees using unauthorized AI tools without express permission, bypassing approved tools and processes. IT should create the processes needed for experimentation and innovation to introduce more efficient ways of working with AI. While shadow AI has existed as long as AI tools themselves, AI agent autonomy makes it easier for unsanctioned tools to operate outside the purview of IT, which can introduce new security risks.

Second, organizations must close gaps in AI ownership and accountability to prepare for incidents or processes gone wrong. The strength of AI agents lies in their autonomy. However, if agents act in unexpected ways, teams must be able to determine who is responsible for addressing any issues.

The third risk arises when there is a lack of explainability for the actions AI agents have taken. AI agents are goal-oriented, but how they accomplish their goals can be unclear. AI agents should have explainable logic underlying their actions so that engineers can trace and, if needed, roll back actions that may cause issues with existing systems.

While none of these risks should delay adoption, being aware of them can help organizations better ensure their security.

The three guidelines for responsible AI agent adoption

Once organizations have identified the risks AI agents can pose, they should implement guidelines and guardrails to ensure safe usage. By following these three steps, organizations can minimize those risks.

1: Make human oversight the default 

AI agency continues to evolve at a rapid pace. However, we still need human oversight when AI agents are given the capability to act, make decisions and pursue a goal that may impact key systems. A human must be in the loop by default, especially for business-critical use cases and systems. The teams that use AI must understand the actions it can take and where they may need to intervene. Start conservatively and, over time, increase the level of agency given to AI agents.

At the same time, operations teams, engineers and security professionals must understand the role they play in supervising AI agents' workflows. Each agent should be assigned a specific human owner for clearly defined oversight and accountability. Organizations must also allow any human to flag or override an AI agent's behavior when an action has a negative outcome.

When considering tasks for AI agents, organizations should understand that, while traditional automation is good at handling repetitive, rule-based processes with structured data inputs, AI agents can handle far more complex tasks and adapt to new information in a more autonomous way. This makes them an appealing solution for all kinds of tasks. But as AI agents are deployed, organizations should control what actions the agents can take, particularly in the early stages of a project. Teams working with AI agents must have approval paths in place for high-impact actions so that agent scope doesn't extend beyond expected use cases, minimizing risk to the broader system.
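As a concrete illustration, the minimal sketch below shows one way such an approval path might work: high-impact actions are routed to the agent's human owner before they run, while routine actions proceed. The action names, risk list and approval hook are hypothetical assumptions for illustration, not any particular platform's API.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical list of actions considered high-impact for this example.
HIGH_IMPACT = {"restart_service", "modify_dns", "delete_resource"}

@dataclass
class AgentAction:
    name: str          # e.g. "restart_service"
    target: str        # the system the action touches
    requested_by: str  # the agent's identity
    owner: str         # the human accountable for this agent

def execute_with_approval(action: AgentAction,
                          run: Callable[[], None],
                          request_human_approval: Callable[[AgentAction], bool]) -> None:
    """Run low-impact actions directly; route high-impact ones to the agent's owner first."""
    if action.name in HIGH_IMPACT:
        if not request_human_approval(action):
            # Blocked until the owner signs off; nothing touches the system.
            print(f"Held: {action.name} on {action.target} awaits approval from {action.owner}")
            return
    run()
```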

2: Bake in security 

The introduction of new tools shouldn't expose a system to new security risks.

Organizations should consider agentic platforms that comply with high security standards and are validated by enterprise-grade certifications such as SOC 2, FedRAMP or equivalent. Further, AI agents shouldn't be allowed free rein across an organization's systems. At a minimum, the permissions and security scope of an AI agent should be aligned with the scope of its owner, and any tools added to the agent shouldn't allow for extended permissions. Limiting an AI agent's access to a system based on its role also helps deployments run smoothly. Keeping complete logs of every action taken by an AI agent can help engineers understand what happened in the event of an incident and trace back the issue.
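To make the permission point concrete, here is a minimal sketch of an agent whose permissions can never exceed its owner's and whose every attempted action, allowed or denied, is appended to an audit log. The permission names, class and JSONL file are assumptions for illustration, not a specific product's interface.

```python
import json
import time

# Hypothetical permission set held by the agent's human owner.
OWNER_PERMISSIONS = {"read_metrics", "acknowledge_incident", "restart_service"}

class ScopedAgent:
    def __init__(self, agent_id: str, owner: str, granted: set, audit_path: str):
        # The agent can only hold permissions its owner also has.
        self.permissions = set(granted) & OWNER_PERMISSIONS
        self.agent_id, self.owner, self.audit_path = agent_id, owner, audit_path

    def perform(self, action: str, target: str) -> bool:
        allowed = action in self.permissions
        self._log(action, target, allowed)
        if allowed:
            pass  # invoke the underlying tool here
        return allowed

    def _log(self, action: str, target: str, allowed: bool) -> None:
        # Append-only audit trail: every attempted action, allowed or denied.
        entry = {"ts": time.time(), "agent": self.agent_id, "owner": self.owner,
                 "action": action, "target": target, "allowed": allowed}
        with open(self.audit_path, "a") as f:
            f.write(json.dumps(entry) + "\n")
```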

3: Make outputs explainable 

AI use in an organization must never be a black box. The reasoning behind any action should be surfaced so that any engineer who looks into it can understand the context the agent used for decision-making and access the traces that led to those actions.

Inputs and outputs for each action must be logged and accessible. This will help organizations establish a firm overview of the logic underlying an AI agent's actions, providing significant value in the event anything goes wrong.
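One minimal way to capture this, sketched below with assumed field names rather than any particular platform's schema, is a per-action trace record that stores the inputs the agent saw, a short summary of its reasoning and the output of the action, keyed by a trace ID engineers can use to investigate or roll back.

```python
from dataclasses import dataclass, asdict, field
from datetime import datetime, timezone
import json
import uuid

@dataclass
class DecisionTrace:
    agent_id: str
    goal: str               # what the agent was asked to achieve
    inputs: dict            # the context the agent saw (alerts, metrics, prompts)
    reasoning_summary: str  # the agent's explanation for choosing this action
    action: str
    output: dict            # what the action returned or changed
    trace_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

def record(trace: DecisionTrace, path: str = "agent_traces.jsonl") -> str:
    """Append the trace as a JSON line; the trace_id correlates the action for later review."""
    with open(path, "a") as f:
        f.write(json.dumps(asdict(trace)) + "\n")
    return trace.trace_id
```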

Security underpins AI agents' success

AI agents offer an enormous opportunity for organizations to speed up and improve their existing processes. However, if they don't prioritize security and strong governance, they may expose themselves to new risks.

As AI agents become more common, organizations must ensure they have systems in place to measure how the agents perform and the ability to take action when they create problems.

