Meet PhotoGuard: The Ultimate Shield Against Unauthorised Image Manipulation

Is AI Copyright Really Necessary?

From romantic poems to Salvador Dalí-inspired images, generative AI can now do it all. And it can do it so well that it is often impossible to differentiate between AI-generated and human-made artworks. Ever since the Turing Test set the standard for successful AI as the ability to mimic humans so convincingly that the two become indistinguishable, technology imitating humans has been a major topic of public debate. The community has long tried to distinguish text written by humans from text generated by AI, amid the risk of possible misuse of the technology.

MIT’s Computer Science & Artificial Intelligence Laboratory has come up with a solution for this.

MIT Has A Solution: PhotoGuard

Scientists from MIT CSAIL have created a new AI tool called “PhotoGuard” that aims to stop unauthorised changes to images by generative models such as DALL-E and Midjourney, protecting photos from manipulation without the owner’s consent.

PhotoGuard leverages “adversarial perturbations”: minuscule alterations in pixel values that are invisible to the human eye but picked up by computer models. These perturbations disrupt an AI model’s ability to manipulate the image effectively. PhotoGuard uses two attack methods to generate them.

The “encoder” attack targets the latent representation that a latent diffusion model (LDM) computes for the image, causing the model to perceive the image as random. The goal is to disrupt the LDM’s process of encoding the input image into a latent vector, which it then uses to generate a new image. The researchers achieve this by solving an optimization problem using projected gradient descent (PGD). The resulting small, imperceptible perturbations added to the original image cause the LDM to generate an irrelevant or unrealistic output.
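The exact setup is in the paper, but a minimal PGD sketch of the idea in PyTorch might look like the following. The `encoder` callable (standing in for an LDM’s VAE encoder that maps a [0, 1] image tensor to a latent), the random target latent, and the perturbation budget `eps` are illustrative assumptions rather than PhotoGuard’s actual parameters.

```python
import torch
import torch.nn.functional as F

def encoder_attack(image, encoder, eps=16/255, step_size=2/255, iters=200):
    """PGD sketch: nudge `image` so the encoder maps it to a meaningless latent.

    Assumes `image` is a [0, 1] float tensor and `encoder(x)` returns the
    latent tensor an LDM would edit from. Not the paper's exact recipe.
    """
    x_orig = image.clone().detach()
    with torch.no_grad():
        # A random latent as the destination: the model should "see" noise.
        target_latent = torch.randn_like(encoder(x_orig))
    x_adv = x_orig.clone()
    for _ in range(iters):
        x_adv.requires_grad_(True)
        loss = F.mse_loss(encoder(x_adv), target_latent)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv - step_size * grad.sign()             # move toward the target latent
            x_adv = x_orig + (x_adv - x_orig).clamp(-eps, eps)  # keep the change imperceptible (L-inf ball)
            x_adv = x_adv.clamp(0.0, 1.0)                       # stay a valid image
    return x_adv.detach()
```

The perturbed image looks unchanged to a viewer, but an editing model that starts from its latent now works from something close to noise.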

The “diffusion” attack, on the other hand, defines a target image and optimizes the perturbations so that the final edited image closely resembles that target. This attack is more complex: it disturbs not only the encoder but the full diffusion process, including the text-prompt conditioning. The goal is to force the model to produce a specific target image (e.g., random noise or a gray image) by solving another optimization problem with PGD, nullifying the effect of both the immunized image and the text prompt.
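A corresponding sketch of the end-to-end attack is below. It assumes `edit_pipeline(x, prompt)` is a differentiable function that runs the whole encode–diffuse–decode edit and returns an image tensor, which is a simplification of what the paper actually backpropagates through.

```python
import torch
import torch.nn.functional as F

def diffusion_attack(image, edit_pipeline, target_image, prompt,
                     eps=16/255, step_size=1/255, iters=50):
    """PGD sketch of the end-to-end ("diffusion") attack.

    Optimizes a perturbation so that editing the immunized image lands
    near `target_image` (e.g. a plain gray image). Assumes `edit_pipeline`
    is differentiable end to end; in practice this is far more expensive
    than the encoder attack.
    """
    x_orig = image.clone().detach()
    x_adv = x_orig.clone()
    for _ in range(iters):
        x_adv.requires_grad_(True)
        edited = edit_pipeline(x_adv, prompt)          # full diffusion edit, kept differentiable
        loss = F.mse_loss(edited, target_image)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv - step_size * grad.sign()
            x_adv = x_orig + (x_adv - x_orig).clamp(-eps, eps)  # imperceptible budget
            x_adv = x_adv.clamp(0.0, 1.0)
    return x_adv.detach()

# Example target: force edits toward a flat gray image.
# target_image = torch.full_like(image, 0.5)
```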

Hadi Salman, lead author of the paper and a PhD student at MIT, told AIM, “In essence, PhotoGuard’s mechanism of adversarial perturbations adds a layer of protection to images, making them immune to manipulation by diffusion models.” By repurposing these imperceptible modifications of pixels, PhotoGuard safeguards images from being tampered with by such models.

For example, consider an image with multiple faces. You could mask the faces you don’t want modified and then prompt the model with “two men attending a wedding.” Upon submission, the system adjusts the image accordingly, creating a plausible depiction of two men at a wedding ceremony. Now consider safeguarding the image from being edited: adding perturbations to it before upload immunizes it against modification, so the final output lacks realism compared with an edit of the original, non-immunized image.
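To make that workflow concrete, here is a rough sketch using a standard diffusers inpainting pipeline. The file names, the checkpoint choice, and the pre-immunized image are illustrative assumptions, not the researchers’ demo code.

```python
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

# Hypothetical inputs: a group photo, a mask covering the regions to edit,
# and the same photo after adding PhotoGuard-style perturbations
# (e.g. produced by a PGD routine like the sketches above).
original = Image.open("group_photo.png").convert("RGB")
mask = Image.open("edit_mask.png").convert("L")
immunized = Image.open("group_photo_immunized.png").convert("RGB")

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")

prompt = "two men attending a wedding"
edit_plain = pipe(prompt=prompt, image=original, mask_image=mask).images[0]
edit_guarded = pipe(prompt=prompt, image=immunized, mask_image=mask).images[0]
# If the immunization works, edit_guarded should look noticeably less realistic.
```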

“I would be skeptical of AI’s ability to supplant human creativity. I expect that in the long-run AI will become just another (powerful) tool in the hands of designers to boost productivity of individuals to articulate their thoughts better without technical barriers,” concluded Salman.

Decoding the Problem

The recent Senate discussion around AI regulation has turned the spotlight on the pressing issues of copyright and artist incentivisation. Senior executives from OpenAI, HuggingFace, and Meta, among others, have testified before the US Congress about the potential dangers of AI and suggested the creation of a new government agency that could license large AI models, revoke permits for non-compliance, and set safety protocols.

The major impetus behind this plea for regulation stems from concerns about copyright infringement. It began when a group of artists filed a lawsuit against the companies behind image generators, including Stability AI, Midjourney, and DeviantArt, seeking compensation for damages caused by these companies using their art without credit.

AI-generated content initially faced opposition from stock image companies like Shutterstock and Getty, as well as from artists who see it as a threat to their intellectual property, but most of them have since come on board through partnerships. Adobe’s Firefly, a generative image maker designed for “safe commercial use,” comes with IP indemnification to safeguard users from legal issues related to its use; it is built on NVIDIA’s Picasso, which is trained on licensed images from Getty Images and Shutterstock. Shutterstock has also partnered with DALL-E creator OpenAI to provide training data, and now offers full indemnification to enterprise customers who use generative AI images on its platform, protecting them against potential legal claims related to the images’ usage. Google, Microsoft, and OpenAI have also started watermarking AI-generated content with the aim of mitigating copyright issues.
