Using AI to guard against AI image manipulation

As we enter a new era in which technologies powered by artificial intelligence can craft and manipulate images with a precision that blurs the line between reality and fabrication, the specter of misuse looms large. Recently, advanced generative models such as DALL-E and Midjourney, celebrated for their impressive precision and user-friendly interfaces, have made the production of hyper-realistic images relatively effortless. With the barriers to entry lowered, even inexperienced users can generate and manipulate high-quality images from simple text descriptions, ranging from innocent image alterations to malicious changes. Techniques like watermarking offer a promising solution, but misuse requires a preemptive (rather than only post hoc) measure.

In the quest to create such a measure, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) developed “PhotoGuard,” a technique that uses perturbations (minuscule alterations in pixel values, invisible to the human eye but detectable by computer models) to effectively disrupt a model’s ability to manipulate the image.

PhotoGuard uses two different “attack” methods to generate these perturbations. The more straightforward “encoder” attack targets the image’s latent representation within the AI model, causing the model to perceive the image as a random entity. The more sophisticated “diffusion” attack defines a target image and optimizes the perturbations to make the final image resemble the target as closely as possible.

“Consider the possibility of fraudulent propagation of fake catastrophic events, like an explosion at a major landmark. This deception can manipulate market trends and public sentiment, but the risks are not limited to the public sphere. Personal images can be inappropriately altered and used for blackmail, leading to significant financial implications when executed on a large scale,” says Hadi Salman, an MIT graduate student in electrical engineering and computer science (EECS), affiliate of MIT CSAIL, and lead author of a new paper about PhotoGuard.

“In more extreme scenarios, these models could simulate voices and images for staging false crimes, inflicting psychological distress and financial loss. The swift nature of these actions compounds the problem. Even when the deception is eventually uncovered, the damage, whether reputational, emotional, or financial, has often already happened. This is a reality for victims at all levels, from individuals bullied at school to society-wide manipulation.”

PhotoGuard in practice

An AI model views an image differently from how humans do. It sees an image as a complex set of mathematical data points that describe every pixel’s color and position; this is the image’s latent representation. The encoder attack introduces minor adjustments into this mathematical representation, causing the AI model to perceive the image as a random entity. Consequently, any attempt to manipulate the image using the model becomes nearly impossible. The changes introduced are so minute that they are invisible to the human eye, preserving the image’s visual integrity while ensuring its protection.
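To make this concrete, here is a minimal sketch of what an encoder-style attack could look like in PyTorch, assuming a latent-diffusion VAE from the diffusers library and a simple projected-gradient loop; the checkpoint name, step sizes, and choice of target latent are illustrative assumptions rather than the paper’s exact recipe.

```python
import torch
import torch.nn.functional as F
from diffusers import AutoencoderKL

# Illustrative checkpoint; any latent-diffusion VAE encoder behaves similarly (assumption).
vae = AutoencoderKL.from_pretrained(
    "stabilityai/stable-diffusion-2-1", subfolder="vae"
).eval()
for p in vae.parameters():
    p.requires_grad_(False)

def encoder_attack(image, target_latent, eps=0.06, step=0.01, iters=100):
    """Find an imperceptible perturbation (bounded by `eps`) that pushes the
    image's latent representation toward `target_latent` (e.g., the latent of
    a gray image), so the editing model no longer 'sees' the real photo.
    `image` is a tensor of shape (1, 3, H, W) scaled to [-1, 1]."""
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(iters):
        latent = vae.encode(image + delta).latent_dist.mean
        loss = F.mse_loss(latent, target_latent)   # distance to the decoy latent
        loss.backward()
        with torch.no_grad():
            delta -= step * delta.grad.sign()      # signed gradient step (PGD-style)
            delta.clamp_(-eps, eps)                # keep the change invisible
            delta.grad.zero_()
    return (image + delta).clamp(-1, 1).detach()
```

A simple choice of target, for instance, is the encoding of a uniform gray image; the small L-infinity budget is what keeps the perturbation imperceptible.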

The second and decidedly more intricate “diffusion” attack strategically targets the entire diffusion model end-to-end. This involves determining a desired target image, and then initiating an optimization process aimed at closely aligning the generated image with this preselected target.

In implementation, the team created perturbations within the input space of the original image. These perturbations are then applied to the images during the inference stage, offering a robust defense against unauthorized manipulation.
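A hedged sketch of the diffusion-style attack follows the same pattern, except the loss is computed on the output of the full editing pipeline rather than on the encoder’s latent. Here `edit_with_diffusion` is a placeholder for a differentiable run of a prompt-conditioned diffusion edit with a small, fixed number of denoising steps; it is not a real diffusers API, and all hyperparameters are illustrative.

```python
import torch
import torch.nn.functional as F

def diffusion_attack(image, target_image, prompt, edit_with_diffusion,
                     eps=0.06, step=0.01, iters=50):
    """Optimize a tiny perturbation so that *editing* the immunized image
    yields something close to `target_image` (for instance, a flat gray
    picture) instead of a realistic manipulation of the original photo.
    `edit_with_diffusion(image, prompt)` must be differentiable end-to-end;
    using only a handful of denoising steps keeps memory manageable."""
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(iters):
        edited = edit_with_diffusion(image + delta, prompt)  # run the whole pipeline
        loss = F.mse_loss(edited, target_image)              # pull the edit toward the decoy
        loss.backward()
        with torch.no_grad():
            delta -= step * delta.grad.sign()
            delta.clamp_(-eps, eps)
            delta.grad.zero_()
    return (image + delta).clamp(-1, 1).detach()
```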

“The progress in AI that we are witnessing is truly breathtaking, but it enables beneficial and malicious uses of AI alike,” says MIT professor of EECS and CSAIL principal investigator Aleksander Madry, who is also an author on the paper. “It’s thus urgent that we work towards identifying and mitigating the latter. I view PhotoGuard as our small contribution to that essential effort.”

The diffusion attack is more computationally intensive than its simpler sibling, and requires significant GPU memory. The team says that approximating the diffusion process with fewer steps mitigates the problem, thus making the technique more practical.

To better illustrate the attack, consider an art project. The original image is a drawing, and the target image is another drawing that is completely different. The diffusion attack is like making tiny, invisible changes to the first drawing so that, to an AI model, it begins to resemble the second drawing. To the human eye, however, the original drawing remains unchanged.

By doing this, any AI model attempting to alter the original image will now inadvertently make changes as if dealing with the target image, thereby protecting the original image from intended manipulation. The result is an image that remains visually unaltered for human observers, yet is protected against unauthorized edits by AI models.

For a real example with PhotoGuard, consider an image with multiple faces. You could mask any faces you don’t want modified, and then prompt with “two men attending a wedding.” Upon submission, the system will adjust the image accordingly, creating a plausible depiction of two men participating in a wedding ceremony.

Now, consider safeguarding the image from being edited: adding perturbations to the image before uploading it can immunize it against modifications. In this case, the final output will lack realism compared with the original, non-immunized image.
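A rough sketch of that comparison using an off-the-shelf inpainting pipeline from diffusers; the checkpoint and file names are placeholders, and the immunized file is assumed to be the output of one of the attacks sketched above.

```python
from diffusers import StableDiffusionInpaintPipeline
from PIL import Image

# Placeholder checkpoint; any diffusion inpainting model illustrates the same point.
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-inpainting"
)

original = Image.open("group_photo.png")              # hypothetical input photo
immunized = Image.open("group_photo_immunized.png")   # same photo after adding perturbations
mask = Image.open("edit_region_mask.png")             # white where the model may repaint

prompt = "two men attending a wedding"
realistic = pipe(prompt=prompt, image=original, mask_image=mask).images[0]
degraded = pipe(prompt=prompt, image=immunized, mask_image=mask).images[0]
# The second result should look noticeably less realistic than the first.
```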

All hands on deck

Key allies in the fight against image manipulation are the creators of the image-editing models, says the team. For PhotoGuard to be effective, an integrated response from all stakeholders is necessary. “Policymakers should consider implementing regulations that mandate companies to protect user data from such manipulations. Developers of these AI models could design APIs that automatically add perturbations to users’ images, providing an added layer of protection against unauthorized edits,” says Salman.

Despite PhotoGuard’s promise, it is not a panacea. Once an image is online, individuals with malicious intent could attempt to reverse engineer the protective measures by applying noise, cropping, or rotating the image. However, there is plenty of previous work from the adversarial examples literature that could be applied here to implement robust perturbations that resist common image manipulations.
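One standard idea from that literature is expectation over transformations (EOT): optimize the perturbation while randomly cropping, rotating, and adding noise, so the protection survives those operations. A minimal sketch, assuming `attack_loss` is one of the hypothetical objectives from the earlier sketches (for example, the latent distance used in the encoder attack):

```python
import torch
import torchvision.transforms as T

# Transformations an adversary might apply to strip the protection.
random_transforms = T.Compose([
    T.RandomResizedCrop(512, scale=(0.8, 1.0)),
    T.RandomRotation(degrees=10),
])

def robust_perturbation(image, attack_loss, eps=0.06, step=0.01,
                        iters=100, samples=4):
    """EOT-style optimization: average the attack objective over random
    transformations so the perturbation still works after cropping,
    rotation, or added noise."""
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(iters):
        loss = 0.0
        for _ in range(samples):
            transformed = random_transforms(image + delta)
            transformed = transformed + 0.01 * torch.randn_like(transformed)
            loss = loss + attack_loss(transformed)
        (loss / samples).backward()
        with torch.no_grad():
            delta -= step * delta.grad.sign()
            delta.clamp_(-eps, eps)
            delta.grad.zero_()
    return (image + delta).detach()
```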

“A collaborative approach involving model developers, social media platforms, and policymakers presents a robust defense against unauthorized image manipulation. Working on this pressing issue is of paramount importance today,” says Salman. “And while I’m glad to contribute towards this solution, much work is needed to make this protection practical. Companies that develop these models need to invest in engineering robust immunizations against the possible threats posed by these AI tools. As we tread into this new era of generative models, let’s strive for potential and protection in equal measure.”

“The prospect of using attacks on machine learning to protect us from abusive uses of this technology is very compelling,” says Florian Tramèr, an assistant professor at ETH Zürich. “The paper has a nice insight that the developers of generative AI models have strong incentives to provide such immunization protections to their users, which could even become a legal requirement in the future. However, designing image protections that effectively resist circumvention attempts is a challenging problem: Once the generative AI company commits to an immunization mechanism and people start applying it to their online images, we need to ensure that this protection will work against motivated adversaries who might even use better generative AI models developed in the near future. Designing such robust protections is a hard open problem, and this paper makes a compelling case that generative AI companies should be working on solving it.”

Salman wrote the paper alongside fellow lead authors Alaa Khaddaj and Guillaume Leclerc MS ’18, as well as Andrew Ilyas ’18, MEng ’18; all three are EECS graduate students and MIT CSAIL affiliates. The team’s work was partially done on the MIT Supercloud compute cluster, supported by U.S. National Science Foundation grants and Open Philanthropy, and based upon work supported by the U.S. Defense Advanced Research Projects Agency. It was presented at the International Conference on Machine Learning this July.
