Digital Mirage: Detecting AI-Generated Images

It may not come as a surprise, but generative AI has been revolutionizing the creative process for some time now. Solutions like Midjourney, DALL-E, and Stable Diffusion, as well as Adobe Firefly and Canva (see below), have democratized access to image generation.

Applications such as Reface (see below) let users insert their own faces into popular content (music videos, iconic movie scenes, and so on), and they go further still: AI avatars, face mixes, and similar features are generated from a simple facial photo. These applications are popular and fun, letting users fantasize about a dream life for a moment.

Reface App (Android / iOS)

Unable to resist, I offer you a small anthology on the theme of Halloween, with the starring role played by yours truly, obviously! How many movies and series will you recognize?

Beyond my obvious talent for acting, these applications raise numerous ethical issues: copyright law, image rights, the use of synthetic images for fraudulent purposes, and many others. While generative AI unlocks a myriad of opportunities, it also introduces hurdles, especially when it comes to identifying content produced by AI.

Deepfake: Blurring the Lines of Reality

Identifying AI-generated content is crucial to maintaining the authenticity of disseminated images. A key aspect is combating misinformation: such content can be used to sow confusion and spread false information. The term deepfake has been in use for several years now to describe media content synthesized with AI. In a world where information circulates faster than ever and where images carry enormous weight, deepfakes can serve a variety of ambitions, some more commendable than others.

In the past, deepfakes have been used to tarnish the reputations of famous individuals, politicians, and artists. Sometimes, the repercussions of a deepfake can be even more severe. A recent example that comes to mind is an image of a baby linked to content about the Israeli-Palestinian conflict, which has since been flagged as AI-generated (note the baby’s fingers). The point here is not to discuss the conflict, but to demonstrate that such manipulation can also have serious political implications.

AI-generated image of a baby under the rubble in times of war

This brings us to another crucial aspect: public awareness. Much like the advent of the Internet in the early 2000s, generative AI is reshaping our access to information. The issue of information authenticity remains as relevant as ever. But what tools can assist us in detecting these contents?

Watermarking: Unmasking AI-Generated Images

Today, leading AI companies like OpenAI, Meta, and Google have made a commitment to the White House (July 2023) to develop watermarking tools that fight misinformation and the misuse of AI-generated content. But what is watermarking?

Watermarking is a method employed to verify and safeguard the integrity of various types of media. The origins of watermarking can be traced back to Italy, where marks were created during the paper-making process for identification purposes. As digital media emerged, watermarking advanced into a technique that involves embedding hidden markers in signals that can tolerate noise, such as audio, video, or image data. In the present day, watermarking serves a multitude of purposes including content authentication, security measures, copyright protection, and identifying ownership.
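The core idea of embedding a hidden marker in pixel data can be sketched with a toy least-significant-bit (LSB) scheme. This is purely illustrative and is not how production systems like SynthID work (they use far more robust, imperceptible techniques); the function names and toy data below are my own:

```python
# Toy LSB watermarking on a grayscale image represented as a flat
# list of 0-255 pixel values. Illustrative only: real watermarking
# schemes must survive compression, cropping, and other noise.

def embed_watermark(pixels, bits):
    """Hide each watermark bit in the least-significant bit of one pixel."""
    marked = list(pixels)
    for i, bit in enumerate(bits):
        marked[i] = (marked[i] & ~1) | bit  # overwrite the lowest bit
    return marked

def extract_watermark(pixels, length):
    """Read the hidden bits back from the first `length` pixels."""
    return [p & 1 for p in pixels[:length]]

image = [200, 13, 77, 146, 90, 251, 34, 8]   # toy 8-pixel "image"
mark = [1, 0, 1, 1, 0, 1]                    # watermark payload
stamped = embed_watermark(image, mark)

print(extract_watermark(stamped, len(mark)))  # -> [1, 0, 1, 1, 0, 1]
# Each pixel changes by at most 1 out of 255, so the mark is invisible.
print(max(abs(a - b) for a, b in zip(image, stamped)))  # -> 1
```

Because only the lowest bit of each pixel is touched, the image looks identical to the eye; the trade-off is that such a naive mark is easily destroyed by re-encoding, which is why modern schemes are far more sophisticated.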

For AI-created images, Google's SynthID represents a substantial advance in watermarking technology. It embeds an invisible digital watermark directly into the pixels of AI-generated images. This watermark can be used to confirm whether an image, or even a portion of one, was produced by a specific AI model. SynthID is integrated into Google's Imagen.

Illustration of the imperceptible nature of the watermark applied to AI-generated images.

These same companies are also working on AI capable of recognizing AI-generated content, though such detectors are still under development. Even so, this confirms the commitment of these giants to promoting transparency and accountability in generative AI. The field is moving fast, and regulation is struggling to keep pace with so much innovation.

In conclusion, it is important to maintain a critical mindset. Appearances can sometimes be deceiving, and questioning sources is more relevant than ever!
