
These latest tools could help protect our pictures from AI


While nonconsensual deepfake porn has been used to torment women for years, the latest generation of AI makes it a much bigger problem. These systems are far easier to use than previous deepfake tech, and they can generate images that look completely convincing.

Image-to-image AI systems, which let people edit existing images using generative AI, "can be very high quality … because it's basically based off of an existing single high-res image," Ben Zhao, a computer science professor at the University of Chicago, tells me. "The result that comes out of it is the same quality, has the same resolution, has the same level of details, because oftentimes [the AI system] is just moving things around."

You can imagine my relief when I learned about a new tool that could help people protect their images from AI manipulation. PhotoGuard was created by researchers at MIT and works like a protective shield for photos. It alters them in ways that are imperceptible to us but stop AI systems from tinkering with them. If someone tries to edit an image that has been "immunized" by PhotoGuard using an app based on a generative AI model such as Stable Diffusion, the result will look unrealistic or warped. Read my story about it.
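The core idea behind this kind of "immunization" can be sketched in a few lines. This is a toy illustration of the general adversarial-perturbation approach, not MIT's actual code: a random linear map stands in for a real diffusion model's image encoder, and projected gradient ascent finds a tiny, bounded change to the image that pushes its encoding far away, so downstream edits come out distorted.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(16, 64))            # stand-in "encoder" weights (assumption)

def encode(x):
    # Stand-in for a generative model's image encoder.
    return W @ x

image = rng.uniform(0.0, 1.0, size=64)   # flattened toy "image" in [0, 1]
eps = 0.03                               # L-infinity budget: imperceptible change
step = 0.01
delta = np.zeros_like(image)

for _ in range(50):
    # Gradient of ||encode(x + delta) - encode(x)||^2 w.r.t. delta is
    # 2 * W^T (W delta) for a linear encoder; ascend to maximize the shift.
    grad = 2.0 * W.T @ (W @ delta)
    if np.allclose(grad, 0.0):           # degenerate start at delta = 0
        grad = rng.normal(size=delta.shape)
    delta = np.clip(delta + step * np.sign(grad), -eps, eps)

immunized = np.clip(image + delta, 0.0, 1.0)
pixel_change = np.max(np.abs(immunized - image))
latent_shift = np.linalg.norm(encode(immunized) - encode(image))
print(f"max pixel change: {pixel_change:.3f}")   # stays within the tiny budget
print(f"encoder shift:    {latent_shift:.3f}")   # comparatively large
```

The point of the sketch is the asymmetry: the per-pixel change is capped at an imperceptible level, while the model-facing representation moves substantially, which is what makes later AI edits come out wrong.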

Another tool that works in a similar way is called Glaze. But rather than protecting people's photos, it helps artists prevent their copyrighted works and artistic styles from being scraped into training data sets for AI models. Some artists have been up in arms ever since image-generating AI models like Stable Diffusion and DALL-E 2 entered the scene, arguing that tech companies scrape their intellectual property and use it to train such models without compensation or credit.

Glaze, which was developed by Zhao and a team of researchers at the University of Chicago, helps them address that problem. Glaze "cloaks" images, applying subtle changes that are barely noticeable to humans but prevent AI models from learning the features that define a particular artist's style.
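Cloaking differs from immunization in that it is targeted: rather than pushing an image's features anywhere, it nudges them toward a decoy style, so a model trained on the cloaked artwork learns the wrong thing. The following toy sketch assumes a random linear map as a stand-in style-feature extractor and is not the actual Glaze implementation:

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(8, 32))             # stand-in extractor weights (assumption)

def style_features(x):
    # Stand-in for a model's style-feature extractor.
    return W @ x

artwork = rng.uniform(0.0, 1.0, size=32)  # the artist's flattened toy image
decoy = rng.uniform(0.0, 1.0, size=32)    # an image in a different, decoy style
target = style_features(decoy)

eps, step = 0.05, 0.01                    # small perturbation budget
delta = np.zeros_like(artwork)
for _ in range(100):
    # Descend on ||style_features(artwork + delta) - target||^2; for a linear
    # extractor the gradient is 2 * W^T (W (artwork + delta) - target).
    grad = 2.0 * W.T @ (W @ (artwork + delta) - target)
    delta = np.clip(delta - step * np.sign(grad), -eps, eps)

cloaked = np.clip(artwork + delta, 0.0, 1.0)
before = np.linalg.norm(style_features(artwork) - target)
after = np.linalg.norm(style_features(cloaked) - target)
print(f"style distance to decoy, before: {before:.3f}  after: {after:.3f}")
```

Within a barely visible pixel budget, the cloaked image's style features sit measurably closer to the decoy style than the original's did, which is the property that confuses a model scraping the image for training.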

Zhao says Glaze corrupts AI models' image generation processes, preventing them from spitting out an infinite number of images that look like work by particular artists.

PhotoGuard has a demo online that works with Stable Diffusion, and artists will soon have access to Glaze. Zhao and his team are currently beta testing the system and will allow a limited number of artists to sign up to use it later this week.

But these tools are neither perfect nor enough on their own. You could still take a screenshot of an image protected with PhotoGuard and use an AI system to edit it, for example. And while they prove that there are neat technical fixes to the problem of AI image editing, they're worthless on their own unless tech companies start adopting tools like them more widely. Right now, our images online are fair game to anyone who wants to abuse or manipulate them using AI.
