
This latest tool could protect your pictures from AI manipulation


The tool, called PhotoGuard, works like a protective shield by altering photos in tiny ways that are invisible to the human eye but prevent them from being manipulated. If someone tries to use an editing app based on a generative AI model such as Stable Diffusion to manipulate an image that has been “immunized” by PhotoGuard, the result will look unrealistic or warped.

Right now, “anyone can take our image, modify it however they want, put us in very bad-looking situations, and blackmail us,” says Hadi Salman, a PhD researcher at MIT who contributed to the research. It was presented at the International Conference on Machine Learning this week.

PhotoGuard is “an attempt to solve the problem of our images being manipulated maliciously by these models,” says Salman. The tool could, for example, help prevent women’s selfies from being made into nonconsensual deepfake pornography.

The need to find ways to detect and stop AI-powered manipulation has never been more urgent, because generative AI tools have made it quicker and easier to do than ever before. In a voluntary pledge with the White House, leading AI companies such as OpenAI, Google, and Meta committed to developing such methods in an effort to prevent fraud and deception. PhotoGuard is complementary to another of these techniques, watermarking: it aims to stop people from using AI tools to tamper with images in the first place, whereas watermarking uses similar invisible signals to allow people to detect AI-generated content once it has been created.

The MIT team used two different techniques to stop images from being edited using the open-source image generation model Stable Diffusion. 

The first technique is known as an encoder attack. PhotoGuard adds imperceptible signals to the image so that the AI model interprets it as something else. For example, these signals could cause the AI to categorize an image of, say, Trevor Noah as a block of pure gray. As a result, any attempt to use Stable Diffusion to edit Noah into other situations would look unconvincing.
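In practice, this kind of attack is usually set up as a small optimization problem: find an imperceptible perturbation that pushes the model’s internal representation of the photo toward that of a blank image. The sketch below illustrates the general idea only; it is not the authors’ code, and the tiny stand-in encoder, the “gray” target latent, and the budget values are all assumptions made for illustration.

```python
# Rough sketch of an "encoder attack" in the spirit of PhotoGuard (not the authors'
# code). A tiny stand-in network plays the role of the image encoder; in practice
# the target would be the latent encoder of a model such as Stable Diffusion.
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Stand-in for the model's image encoder (assumption for illustration only).
encoder = torch.nn.Sequential(
    torch.nn.Conv2d(3, 4, kernel_size=8, stride=8),
    torch.nn.Flatten(),
)

image = torch.rand(1, 3, 64, 64)                    # the photo to "immunize"
target_latent = torch.zeros_like(encoder(image))    # latent of a blank/gray image (assumed target)

epsilon, step_size, steps = 8 / 255, 1 / 255, 40    # assumed imperceptibility budget
delta = torch.zeros_like(image, requires_grad=True)

for _ in range(steps):
    # Push the latent of (image + delta) toward the "gray" target latent.
    loss = F.mse_loss(encoder(image + delta), target_latent)
    loss.backward()
    with torch.no_grad():
        delta -= step_size * delta.grad.sign()            # signed gradient step
        delta.clamp_(-epsilon, epsilon)                    # keep the change invisible
        delta.copy_((image + delta).clamp(0, 1) - image)   # keep pixel values valid
    delta.grad.zero_()

immunized = (image + delta).detach()  # looks unchanged to us; the model "sees" gray
```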

The second, more effective technique is called a diffusion attack. It disrupts the way the AI model generates images, essentially by encoding the image with secret signals that alter how it is processed by the model. By adding these signals to an image of Trevor Noah, the team managed to manipulate the diffusion model to ignore its prompt and generate the image the researchers wanted. As a result, any AI-edited images of Noah would just look gray.
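Conceptually the optimization loop is the same as for the encoder attack, but the loss is measured on the output of the whole prompt-conditioned editing process rather than on the encoder alone, which is what makes it both more effective and far more expensive, since gradients have to flow back through the generation steps. The sketch below is again a rough illustration with stand-in components, not the authors’ implementation; the `edit_pipeline` placeholder simply represents “run the AI edit on this image.”

```python
# Rough sketch of the end-to-end "diffusion attack" idea (again, not the authors'
# code). `edit_pipeline` is a differentiable stand-in for the full prompt-conditioned
# diffusion editing process; the real attack would backpropagate through the model's
# denoising steps, which is far more costly.
import torch
import torch.nn.functional as F

torch.manual_seed(0)

edit_pipeline = torch.nn.Sequential(        # stand-in for "run the AI edit on this image"
    torch.nn.Conv2d(3, 16, 3, padding=1),
    torch.nn.ReLU(),
    torch.nn.Conv2d(16, 3, 3, padding=1),
)

image = torch.rand(1, 3, 64, 64)            # photo to immunize
gray = torch.full_like(image, 0.5)          # the output the edits should collapse to

epsilon, step_size, steps = 8 / 255, 1 / 255, 40    # assumed perturbation budget
delta = torch.zeros_like(image, requires_grad=True)

for _ in range(steps):
    # Force whatever the editing pipeline produces from (image + delta) to look gray.
    loss = F.mse_loss(edit_pipeline(image + delta), gray)
    loss.backward()
    with torch.no_grad():
        delta -= step_size * delta.grad.sign()
        delta.clamp_(-epsilon, epsilon)
        delta.copy_((image + delta).clamp(0, 1) - image)
    delta.grad.zero_()

immunized = (image + delta).detach()
```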

The work is “a good combination of a tangible need for something with what can be done right now,” says Ben Zhao, a computer science professor at the University of Chicago, who developed a similar protective method called Glaze that artists can use to prevent their work from being scraped into AI models.
