This tool strips away anti-AI protections from digital art


To be clear, the researchers behind LightShed aren’t attempting to steal artists’ work. They simply don’t want people to get a false sense of security. “You will not be sure if companies have methods to delete these poisons but will never tell you,” says Hanna Foerster, a PhD student at the University of Cambridge and the lead author of a paper on the work. And if they do, it may be too late to fix the problem.

AI models work, in part, by implicitly creating boundaries between what they perceive as different categories of images. Glaze and Nightshade change enough pixels to push a given piece of art over this boundary without affecting the image’s quality, causing the model to see it as something it’s not. These almost imperceptible changes are called perturbations, and they mess up the AI model’s ability to understand the artwork.
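To make the idea of a perturbation concrete, here is a minimal, hypothetical sketch of how small, bounded pixel changes can be optimized to push an image toward a different category in a standard classifier. This is not Glaze’s or Nightshade’s actual method; the pretrained model, the label argument, and the epsilon budget are all illustrative placeholders, and it assumes PyTorch and torchvision are installed.

```python
import torch
import torch.nn.functional as F
from torchvision import models

# Hypothetical setup: a standard pretrained classifier stands in for the
# image-understanding model being fooled.
model = models.resnet18(weights="DEFAULT").eval()

def perturb(image, wrong_label, epsilon=4 / 255):
    """Nudge `image` (a 1x3xHxW tensor with values in [0, 1]) toward
    `wrong_label` while keeping every pixel within `epsilon` of its
    original value, i.e. a visually near-imperceptible change."""
    adv = image.clone().requires_grad_(True)
    loss = F.cross_entropy(model(adv), torch.tensor([wrong_label]))
    grad, = torch.autograd.grad(loss, adv)
    with torch.no_grad():
        # Step against the gradient so the model's prediction drifts
        # toward the wrong category, then clip back into valid range.
        adv = (adv - epsilon * grad.sign()).clamp(0, 1)
    return adv.detach()
```

The point of the sketch is only that a tiny, bounded change per pixel is enough to move an image across the model’s learned boundary while looking essentially unchanged to a person.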

Glaze makes models misunderstand style (e.g., interpreting a photorealistic painting as a cartoon). Nightshade instead makes the model see the subject incorrectly (e.g., interpreting a cat in a drawing as a dog). Glaze is used to defend an artist’s individual style, whereas Nightshade is used to attack AI models that crawl the internet for art.

Foerster worked with a team of researchers from the Technical University of Darmstadt and the University of Texas at San Antonio to develop LightShed, which learns how to see where tools like Glaze and Nightshade splash this kind of digital poison onto art so that it can effectively clean it off. The team will present its findings at the Usenix Security Symposium, a leading global cybersecurity conference, in August.

The researchers trained LightShed by feeding it pieces of art with and without Nightshade, Glaze, and other similar programs applied. Foerster describes the process as teaching LightShed to reconstruct “just the poison on poisoned images.” Identifying a cutoff for how much poison will actually confuse an AI makes it easier to “wash” just the poison off.
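As a rough illustration of that training setup, here is a hypothetical sketch assuming pairs of the same artwork with and without poisoning are available. The tiny convolutional network and function names are placeholders, not LightShed’s actual architecture or code: the network is trained to predict only the perturbation, which can then be subtracted to “wash” an image.

```python
import torch
import torch.nn as nn

class PoisonEstimator(nn.Module):
    """Tiny stand-in for a real image-to-image network."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),  # output: estimated perturbation
        )

    def forward(self, x):
        return self.net(x)

def train_step(model, optimizer, poisoned, clean):
    # The target is "just the poison": the difference between the
    # poisoned and clean versions of the same artwork.
    target_poison = poisoned - clean
    predicted_poison = model(poisoned)
    loss = nn.functional.mse_loss(predicted_poison, target_poison)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

def wash(model, poisoned):
    # "Washing" a new image: subtract the estimated perturbation.
    with torch.no_grad():
        return (poisoned - model(poisoned)).clamp(0, 1)
```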

LightShed is incredibly effective at this. While other researchers have found simple ways to subvert poisoning, LightShed appears to be more adaptable. It can even apply what it has learned from one anti-AI tool, such as Nightshade, to others like Mist or MetaCloak without ever seeing them ahead of time. While it has some trouble performing against small doses of poison, those are less likely to destroy the AI models’ ability to understand the underlying art, making it a win-win for the AI, or a lose-lose for the artists using these tools.

Around 7.5 million people, many of them artists with small and medium-size followings and fewer resources, have downloaded Glaze to protect their art. Those using tools like Glaze see it as an important technical line of defense, especially when the state of regulation around AI training and copyright is still up in the air. The LightShed authors see their work as a warning that tools like Glaze aren’t permanent solutions. “It might need a few more rounds of trying to come up with better ideas for protection,” says Foerster.

The creators of Glaze and Nightshade appear to agree with that sentiment: The website for Nightshade warned that the tool wasn’t future-proof before work on LightShed ever began. And Shan, who led research on both tools, still believes defenses like his have meaning even if there are ways around them.
