As generative AI systems rise, chatbots can now edit and create images, with Shutterstock and Adobe at the forefront. Concerns over image manipulation and art theft persist, but watermarking and MIT CSAIL's 'PhotoGuard' offer solutions.

(Photo: Hadi Salman YouTube Channel)
We developed a tool to immunize images against (malicious) manipulation via AI-powered image editing. The tool adds imperceptible perturbations to images to prevent the AI model from performing realistic edits.

Offering Solutions to Image Manipulation, Artwork Theft

Innovative changes are emerging across the tech industry as generative AI systems like DALL-E and Stable Diffusion advance. Chatbots are gaining sophisticated image-editing abilities, with Shutterstock and Adobe leading the way.

Despite this progress, Engadget reported that concerns over unauthorized manipulation and copyright theft remain. Watermarking methods mitigate some of the risk, while MIT CSAIL's 'PhotoGuard' offers direct protection against unauthorized image edits.

PhotoGuard employs a technique that introduces subtle alterations to specific pixels in an image, disrupting the AI's comprehension while remaining invisible to humans. This 'encoder attack' misleads the AI's understanding of the image, causing it to misinterpret the content it is shown.

With the "encoder" attack method, the AI's understanding of the input image is manipulated so that it interprets the image as something entirely unrelated. The "diffusion" attack method, by contrast, camouflages an image as a distinct one from the perspective of the AI.
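As a rough illustration only (not PhotoGuard's released code), the encoder attack can be sketched as a small optimization loop: nudge the image so a differentiable encoder maps it close to a harmless decoy, while keeping the change below a visibility threshold. The function names, the gray decoy image, and all constants below are illustrative assumptions.

```python
# Illustrative sketch only -- not PhotoGuard's actual implementation. Assumes a
# differentiable image encoder (e.g., the VAE encoder of a latent diffusion
# model) and a PGD-style optimization loop; all names and numbers are made up.
import torch
import torch.nn.functional as F

def encoder_attack(image, encoder, steps=200, eps=8 / 255, step_size=1 / 255):
    """image: float tensor in [0, 1], shape (1, 3, H, W)."""
    # Latent of a plain gray decoy: the attack pushes the real image's latent toward it.
    target_latent = encoder(torch.full_like(image, 0.5)).detach()
    delta = torch.zeros_like(image, requires_grad=True)  # imperceptible perturbation

    for _ in range(steps):
        loss = F.mse_loss(encoder(image + delta), target_latent)
        loss.backward()
        with torch.no_grad():
            delta -= step_size * delta.grad.sign()            # step toward the decoy latent
            delta.clamp_(-eps, eps)                           # L-infinity bound keeps the change invisible
            delta.copy_((image + delta).clamp(0, 1) - image)  # keep pixel values valid
        delta.grad.zero_()

    # The "immunized" image looks unchanged to a person, but the encoder now
    # reads it roughly as the gray decoy, so AI edits come out unrealistic.
    return (image + delta).clamp(0, 1).detach()
```

In a sketch like this, the perturbation budget (eps) trades off how invisible the change is against how strongly it disrupts the model's edits.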

DGT Reviews reported that, by fine-tuning the perturbations toward a chosen target image, any modifications the AI attempts are effectively applied to that forged target instead, leading to the generation of an unrealistic and deceptive image.


Diffusion Attack

The "diffusion" attack method is a highly sophisticated and computationally intensive technique that disguises an image as different, fooling AI systems into perceiving it as the target image. By fine-tuning the perturbations within the image, it mimics the appearance of the desired target. 

Any alterations or edits an AI makes to these "protected" images are actually applied to the forged "target" image, resulting in an unrealistic and manipulated output. MIT doctoral student and lead author Hadi Salman stated, "The encoder attack makes the model think that the input image (to be edited) is some other image (e.g. a gray image)."

In the case of the diffusion attack, TS2 reported that it compels the diffusion model to apply edits aimed at resembling a specific target image, which could even be an arbitrary or random one. However, the technique is not infallible, as malicious actors could attempt to reverse-engineer the protected image.

A collaborative effort is essential to effectively address the potential threats posed by AI tools. This entails bringing together model developers, social media platforms, and policymakers to establish a strong defense system. While some progress has been achieved, further extensive work is necessary to make this protection practical and efficient. 

It is crucial for companies to invest in developing robust immunization strategies to ensure they are adequately equipped to counter the potential risks associated with these AI tools.

Related Article: Spotting AI-Generated Images: Here's How Google's 'About This Image' Tech Works
