Home Artificial Intelligence We’re hurtling toward a glitchy, spammy, scammy, AI-powered web

We’re hurtling toward a glitchy, spammy, scammy, AI-powered web

2
We’re hurtling toward a glitchy, spammy, scammy, AI-powered web

I agree with critics of the letter who say that worrying about future risks distracts us from the very real harms AI is already causing today. Biased systems are used to make decisions about people’s lives that trap them in poverty or result in wrongful arrests. Human content moderators should sift through mountains of traumatizing AI-generated content for less than $2 a day. Language AI models use a lot computing power that they continue to be huge polluters. 

However the systems which are being rushed out today are going to cause a distinct form of havoc altogether within the very near future. 

I just published a story that sets out a number of the ways AI language models might be misused. I even have some bad news: It’s stupidly easy, it requires no programming skills, and there aren’t any known fixes. For instance, for a sort of attack called indirect prompt injection, all you’ll want to do is hide a prompt in a cleverly crafted message on an internet site or in an email, in white text that (against a white background) shouldn’t be visible to the human eye. When you’ve done that, you may order the AI model to do what you would like. 

Tech corporations are embedding these deeply flawed models into all kinds of products, from programs that generate code to virtual assistants that sift through our emails and calendars. 

In doing so, they’re sending us hurtling toward a glitchy, spammy, scammy, AI-powered web. 

Allowing these language models to tug data from the web gives hackers the power to show them into “a super-powerful engine for spam and phishing,” says Florian Tramèr, an assistant professor of computer science at ETH Zürich who works on computer security, privacy, and machine learning.

Let me walk you thru how that works. First, an attacker hides a malicious prompt in a message in an email that an AI-powered virtual assistant opens. The attacker’s prompt asks the virtual assistant to send the attacker the victim’s contact list or emails, or to spread the attack to every body within the recipient’s contact list. Unlike the spam and scam emails of today, where people should be tricked into clicking on links, these recent sorts of attacks can be invisible to the human eye and automatic. 

This can be a recipe for disaster if the virtual assistant has access to sensitive information, comparable to banking or health data. The power to vary how the AI-powered virtual assistant behaves means people may very well be tricked into approving transactions that look close enough to the actual thing, but are literally planted by an attacker.  

2 COMMENTS

LEAVE A REPLY

Please enter your comment!
Please enter your name here