These protocols will help AI agents navigate our messy lives

-

What should these protocols say about security?

Researchers and developers still don’t really understand how AI models work, and latest vulnerabilities are being discovered on a regular basis. For chatbot-style AI applications, malicious attacks could cause models to do all types of bad things, including regurgitating training data and spouting slurs. But for AI agents, which interact with the world on someone’s behalf, the probabilities are far riskier. 

For instance, one AI agent, made to read and send emails for somebody, has already been shown to be vulnerable to what’s referred to as an indirect prompt injection attack. Essentially, an email might be written in a way that hijacks the AI model and causes it to malfunction. Then, if that agent has access to the user’s files, it might be instructed to send private documents to the attacker. 

Some researchers imagine that protocols like MCP should prevent agents from carrying out harmful actions like this. Nonetheless, it doesn’t in the intervening time. “Mainly, it doesn’t have any security design,” says Zhaorun Chen, a  University of Chicago PhD student who works on AI agent security and uses MCP servers. 

Bruce Schneier, a security researcher and activist, is skeptical that protocols like MCP will have the opportunity to do much to cut back the inherent risks that include AI and is worried that giving such technology more power will just give it more ability to cause harm in the actual, physical world. “We just don’t have good answers on the way to secure these items,” says Schneier. “It’s going to be a security cesspool really fast.” 

Others are more hopeful. Security design might be added to MCP and A2A much like the way in which it’s for web protocols like HTTPS (though the character of attacks on AI systems could be very different). And Chen and Anthropic imagine that standardizing protocols like MCP and A2A can assist make it easier to catch and resolve security issues at the same time as is. Chen uses MCP in his research to check the roles different programs can play in attacks to higher understand vulnerabilities. Chu at Anthropic believes that these tools could let cybersecurity corporations more easily cope with attacks against agents, because it’ll be easier to unpack who sent what. 

How open should these protocols be?

Although MCP and A2A are two of the preferred agent protocols available today, there are many others within the works. Large corporations like Cisco and IBM are working on their very own protocols, and other groups have put forth different designs like Agora, designed by researchers on the University of Oxford, which upgrades an agent-service communication from human language to structured data in real time.

Many developers hope there could eventually be a registry of protected, trusted systems to navigate the proliferation of agents and tools. Others, including Chen, want users to have the opportunity to rate different services in something like a Yelp for AI agent tools. Some more area of interest protocols have even built blockchains on top of MCP and A2A in order that servers can show they will not be just spam. 

ASK ANA

What are your thoughts on this topic?
Let us know in the comments below.

0 0 votes
Article Rating
guest
0 Comments
Oldest
Newest Most Voted
Inline Feedbacks
View all comments

Share this article

Recent posts

0
Would love your thoughts, please comment.x
()
x