Cyberattacks by AI agents are coming

-

Agents are also significantly smarter than the sorts of bots which are typically used to hack into systems. Bots are easy automated programs that run through scripts, in order that they struggle to adapt to unexpected scenarios. Agents, alternatively, are able not only to adapt the best way they engage with a hacking goal but additionally to avoid detection—each of that are beyond the capabilities of limited, scripted programs, says Volkov. “They’ll take a look at a goal and guess the most effective ways to penetrate it,” he says. “That type of thing is out of reach of, like, dumb scripted bots.”

Since LLM Agent Honeypot went live in October of last yr, it has logged greater than 11 million attempts to access it—the overwhelming majority of which were from curious humans and bots. But amongst these, the researchers have detected eight potential AI agents, two of which they’ve confirmed are agents that appear to originate from Hong Kong and Singapore, respectively. 

“We’d guess that these confirmed agents were experiments directly launched by humans with the agenda of something like ‘Exit into the web and take a look at and hack something interesting for me,’” says Volkov. The team plans to expand its honeypot into social media platforms, web sites, and databases to draw and capture a broader range of attackers, including spam bots and phishing agents, to research future threats.  

To find out which visitors to the vulnerable servers were LLM-powered agents, the researchers embedded prompt-injection techniques into the honeypot. These attacks are designed to alter the behavior of AI agents by issuing them latest instructions and asking questions that require humanlike intelligence. This approach wouldn’t work on standard bots.

ASK ANA

What are your thoughts on this topic?
Let us know in the comments below.

0 0 votes
Article Rating
guest
0 Comments
Oldest
Newest Most Voted
Inline Feedbacks
View all comments

Share this article

Recent posts

0
Would love your thoughts, please comment.x
()
x