AI is already making online crimes easier. It could get much worse.


Cherepanov and Strýček were confident that their discovery, which they dubbed PromptLock, marked a turning point in generative AI, showing how the technology could be exploited to create highly flexible malware attacks. They published a blog post declaring that they'd uncovered the first example of AI-powered ransomware, which quickly became the subject of widespread global media attention.

But the threat wasn't quite as dramatic as it first appeared. The day after the blog post went live, a team of researchers from New York University claimed responsibility, explaining that the malware was not, in fact, a full attack let loose in the wild but a research project, designed merely to prove it was possible to automate each step of a ransomware campaign, which, they said, they had.

PromptLock may have turned out to be an academic project, but actual bad guys are using the latest AI tools. Just as software engineers are using artificial intelligence to help write code and check for bugs, hackers are using these tools to reduce the time and effort required to orchestrate an attack, lowering the barrier for less experienced attackers to try something out.

The likelihood that cyberattacks will become more common and more effective over time is not a distant possibility but "a sheer reality," says Lorenzo Cavallaro, a professor of computer science at University College London.

Some in Silicon Valley warn that AI is on the verge of being able to perform fully automated attacks. But most security researchers say this claim is overblown. "For some reason, everyone is just focused on this malware idea of, like, AI superhackers, which is just absurd," says Marcus Hutchins, who is principal threat researcher at the security company Expel and famous in the security world for stopping a massive global ransomware attack called WannaCry in 2017.

Instead, experts argue, we should be paying closer attention to the much more immediate risks posed by AI, which is already speeding up and increasing the volume of scams. Criminals are increasingly exploiting the latest deepfake technologies to impersonate people and swindle victims out of vast sums of money. These AI-enhanced cyberattacks are only set to get more frequent and more destructive, and we need to be ready.

Spam and beyond

Attackers began adopting generative AI tools almost immediately after ChatGPT exploded onto the scene at the end of 2022. These efforts began, as you might imagine, with the creation of spam, and a lot of it. Last year, a report from Microsoft said that in the year leading up to April 2025, the company had blocked $4 billion worth of scams and fraudulent transactions, "many likely aided by AI content."

At least half of spam email is now generated using LLMs, according to estimates by researchers at Columbia University, the University of Chicago, and Barracuda Networks, who analyzed nearly 500,000 malicious messages collected before and after the launch of ChatGPT. They also found evidence that AI is increasingly being deployed in more sophisticated schemes. They looked at targeted email attacks, which impersonate a trusted figure in order to trick an employee within an organization out of funds or sensitive information. By April 2025, they found, at least 14% of these targeted email attacks were generated using LLMs, up from 7.6% in April 2024.
