A compelling new study from Germany critiques the EU AI Act’s definition of the term ‘deepfake’ as overly vague, particularly in the context of digital image manipulation. The authors argue that the Act’s emphasis on content that resembles real people or events – yet may be fake – lacks clarity.
They also highlight that the Act’s exceptions for ‘standard editing’ (i.e., supposedly minor AI-aided modifications to photographs) fail to consider both the pervasive influence of AI in consumer applications and the subjective nature of artistic conventions that predate the advent of AI.
Imprecise legislation on these issues gives rise to two key risks: a ‘chilling effect,’ where the law’s broad interpretive scope stifles innovation and the adoption of new systems; and a ‘scofflaw effect,’ where the law is disregarded as overreaching or irrelevant.
In either case, vague laws effectively shift the responsibility of creating practical legal definitions onto future court rulings – a cautious and risk-averse approach to law-making.
AI-based image-manipulation technologies remain notably ahead of legislation’s capacity to address them. One noteworthy example of the growing elasticity of the concept of AI-driven ‘automatic’ post-processing, the paper observes, is the ‘Scene Optimizer’ function in recent Samsung cameras, which can replace a user’s photo of the moon (a difficult subject) with an AI-driven, ‘refined’ image:
Sources (clockwise from top-left): https://arxiv.org/pdf/2412.09961; https://www.samsung.com/uk/support/mobile-devices/how-galaxy-cameras-combine-super-resolution-technologies-with-ai-to-produce-high-quality-images-of-the-moon/; https://www.reddit.com/r/Android/comments/11nzrb0/samsung_space_zoom_moon_shots_are_fake_and_here/
In the lower-left of the image above, we see two images of the moon. The one on the left is a photo taken by a Reddit user, who deliberately blurred and downscaled the image.
To its right we see a photo of that same degraded image, taken with a Samsung camera with AI-driven post-processing enabled. The camera has automatically ‘augmented’ the recognized ‘moon’ object, even though it was not the real moon.
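The degradation step of that experiment is simple to reproduce. Below is a minimal Python sketch (filenames are hypothetical) that blurs and downscales a source photo in the same spirit as the Reddit test, so that any crisp crater detail in a subsequent camera shot of the result must have been synthesized rather than captured:

```python
from PIL import Image, ImageFilter

# Load a high-resolution moon photo (hypothetical filename).
original = Image.open("moon_original.jpg")

# Blur and aggressively downscale, discarding the fine crater detail,
# much as the Reddit user did before re-photographing the result on a monitor.
degraded = (
    original
    .filter(ImageFilter.GaussianBlur(radius=8))
    .resize((170, 170), Image.LANCZOS)
)
degraded.save("moon_degraded.png")
```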
The paper levels deeper criticism at the Best Take feature incorporated into Google’s recent smartphones – a controversial AI feature that edits together the ‘best’ parts of a group photo, scanning multiple seconds of a photography sequence so that smiles are shuffled forward or backward in time as needed – and no one is shown mid-blink.
The paper contends that this kind of composite process has the potential to misrepresent events:
The new paper comes from two researchers at the Computational Law Lab at the University of Tübingen, and from Saarland University.
Old Tricks
Manipulating time in photography is much older than consumer-level AI. The new paper’s authors note the existence of much older techniques that could be argued to be ‘inauthentic’, such as the concatenation of multiple sequential exposures into a High Dynamic Range (HDR) photo, or a ‘stitched’ panoramic photo.
Indeed, some of the oldest and most amusing photographic fakes were traditionally created by school-children running from one end of a school group to the other, ahead of the trajectory of the special panoramic cameras that were once used for sports and school group photography – enabling the pupil to appear twice in the same image:
![The temptation to trick panoramic cameras during group photos was too much to resist for many students, who were willing to risk a bad session at the head's office in order to 'clone' themselves in school photos. Source: https://petapixel.com/2012/12/13/double-exposure-a-clever-photo-prank-from-half-a-century-ago/](https://www.unite.ai/wp-content/uploads/2024/12/school-photo.jpg)
Source: https://petapixel.com/2012/12/13/double-exposure-a-clever-photo-prank-from-half-a-century-ago/
Unless you take a photo in RAW mode, which essentially dumps the camera’s sensor data to a very large file without any kind of interpretation, it’s likely that your digital photos are not completely authentic. Camera systems routinely apply ‘improvement’ algorithms such as image sharpening and white balance by default – and have done so since the origins of consumer-level digital photography.
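As a rough illustration of what such default processing involves, the sketch below (Python, with an assumed input file) applies an unsharp-mask pass and a crude grey-world white balance – simplified stand-ins for the proprietary pipelines that cameras run automatically:

```python
import numpy as np
from PIL import Image, ImageFilter

img = Image.open("holiday_snap.jpg").convert("RGB")  # hypothetical input file

# Sharpening: a typical in-camera 'unsharp mask' pass.
sharpened = img.filter(ImageFilter.UnsharpMask(radius=2, percent=120))

# Crude 'grey-world' white balance: scale each channel so its mean
# matches the overall mean brightness of the image.
arr = np.asarray(sharpened).astype(np.float32)
channel_means = arr.reshape(-1, 3).mean(axis=0)
arr *= channel_means.mean() / channel_means
balanced = Image.fromarray(np.clip(arr, 0, 255).astype(np.uint8))

balanced.save("holiday_snap_processed.jpg")
```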
The authors of the new paper argue that even these older kinds of digital photo augmentation do not represent ‘reality’, since such methods are designed to make photos more pleasing, not more ‘real’.
The study suggests that the EU AI Act, even with later amendments such as recitals 123–27, places all photographic output within a framework unsuited to the context in which photos are produced these days, as opposed to the (nominally objective) nature of security camera footage or forensic photography. Most images addressed by the AI Act are likely to originate in contexts where manufacturers and online platforms actively encourage creative photo interpretation, including the use of AI.
The researchers suggest that photos ‘have never been an objective depiction of reality’. Considerations such as the camera’s location, the chosen depth of field, and lighting decisions all contribute to making a photograph deeply subjective.
The paper observes that routine ‘clean-up’ tasks – such as removing sensor dust or unwanted power lines from an otherwise well-composed scene – were only semi-automated before the rise of AI: users had to manually select a region or initiate a process to achieve their desired result.
Today, these operations are often triggered by a user’s text prompt, most notably in tools like Photoshop. At the consumer level, such features are increasingly automated with little or no user input – an outcome that is seemingly regarded by manufacturers and platforms as ‘obviously desirable’.
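A minimal sketch of what that prompt-driven workflow looks like in code, using the open-source diffusers library (the model checkpoint and filenames are illustrative assumptions, not taken from the paper):

```python
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

# Illustrative checkpoint; any inpainting-capable model would serve.
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")

image = Image.open("street_scene.png").convert("RGB").resize((512, 512))
# White pixels in the mask mark the region to be rewritten (e.g. power lines).
mask = Image.open("power_lines_mask.png").convert("RGB").resize((512, 512))

# A single text prompt replaces the old manual select-and-clone workflow.
result = pipe(prompt="clear blue sky", image=image, mask_image=mask).images[0]
result.save("street_scene_cleaned.png")
```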
The Diluted Meaning of ‘Deepfake’
A central challenge for legislation around AI-altered and AI-generated imagery is the ambiguity of the term ‘deepfake’, which has had its meaning notably extended over the past two years.
Originally the term applied only to video output from autoencoder-based systems such as DeepFaceLab and FaceSwap, both derived from anonymous code posted to Reddit in late 2017.
From 2022, the advent of Latent Diffusion Models (LDMs) such as Stable Diffusion and Flux, as well as text-to-video systems such as Sora, would also allow identity-swapping and customization, at improved resolution, versatility and fidelity. Now it was possible to create diffusion-based models that could depict celebrities and politicians. Since the term ‘deepfake’ was already a headline-garnering asset for media producers, it was extended to cover these systems.
Later, in both the media and the research literature, the term came to cover a still wider range of synthetic output. By this point, the original meaning of ‘deepfake’ was all but lost, while its extended meaning was constantly evolving, and increasingly diluted.
But because the word was so incendiary and galvanizing, and was by now a powerful political and media touchstone, it proved impossible to give up. It attracted readers to websites, funding to researchers, and attention to politicians. This lexical ambiguity is the main focus of the new research.
As the authors observe, Article 3(60) of the EU AI Act outlines four conditions that define a ‘deepfake’.
1: True Moon
Firstly, the content must be AI-generated or manipulated, i.e., either created from scratch using AI (generation) or altered from existing data (manipulation). The paper highlights the difficulty of distinguishing between ‘acceptable’ image-editing outcomes and manipulative deepfakes, given that digital photos are, in any case, never true representations of reality.
The paper contends that a Samsung-generated moon is arguably authentic, since the moon is unlikely to change its appearance, and since the AI-generated content, trained on real lunar images, is therefore likely to be accurate.
However, the authors also note that since the Samsung system has been shown to generate an ‘enhanced’ image of the moon in a case where the source image was not the moon itself, this could be considered a ‘deepfake’.
It would be impractical to draw up a comprehensive list of differing use-cases around this kind of functionality. Therefore the burden of definition seems to pass, once more, to the courts.
2: TextFakes
Secondly, the content must be image, audio or video content. Text content, while subject to other transparency obligations, is not considered a deepfake under the AI Act. This is not covered in any detail in the new study, though it can have a notable bearing on the effectiveness of deepfakes (see below).
3: Real World Problems
Thirdly, the content must resemble existing persons, objects, places, entities or events. This condition establishes a connection to the real world, meaning that purely fabricated imagery, even if photorealistic, would not qualify as a deepfake. Recital 134 of the EU AI Act emphasizes the ‘resemblance’ aspect by adding the word ‘appreciably’ (an apparent deferral to subsequent legal judgements).
The authors, citing earlier work, consider whether an AI-generated face need belong to a real person, or whether it need only be adequately similar to a real person, in order to satisfy this definition.
For instance, how can one determine whether a sequence of photorealistic images depicting the politician Donald Trump has the intent of impersonation, if the images (or appended texts) do not specifically mention him? Facial recognition? User surveys? A judge’s definition of ‘common sense’?
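For what the ‘facial recognition’ option might look like in practice, here is a minimal sketch using the open-source face_recognition library (the filenames and the 0.6 threshold are illustrative assumptions, not drawn from the paper):

```python
import face_recognition  # assumed installed; any face-embedding library works similarly

# A reference photo of the real person, and the suspected synthetic image
# (both filenames are hypothetical).
known = face_recognition.load_image_file("reference_person.jpg")
candidate = face_recognition.load_image_file("suspected_fake.jpg")

known_enc = face_recognition.face_encodings(known)[0]
candidate_enc = face_recognition.face_encodings(candidate)[0]

# Euclidean distance between 128-d face embeddings; 0.6 is the library's
# conventional match threshold – but whether a given distance amounts to
# 'appreciable' resemblance is exactly what such a number cannot settle.
distance = face_recognition.face_distance([known_enc], candidate_enc)[0]
print(f"embedding distance: {distance:.3f} -> match: {distance < 0.6}")
```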
Returning to the ‘TextFakes’ issue (see above), words often constitute a significant part of the act of a deepfake. For instance, it is possible to take an (unaltered) image or video of one person, and claim, in a caption or a social media post, that it depicts a different person (assuming the two people bear a resemblance).
In such a case, the media itself is untouched, and the result may be strikingly effective – but does such a low-tech approach also constitute a ‘deepfake’?
4: Retouch, Remodel
Finally, the content must falsely appear to a person to be authentic or truthful. This condition emphasizes the perception of human viewers: content that would only be recognized as representing a real person or object by an algorithm would not be considered a deepfake.
Of all the conditions in 3(60), this one most obviously defers to the later judgment of a court, since it does not allow for any interpretation via technical or mechanized means.
There are clearly some inherent difficulties in reaching consensus on such a subjective stipulation. The authors observe, for instance, that different people, and different types of people (such as children and adults), may be variously disposed to believe in a particular deepfake.
The authors further note that the advanced AI capabilities of tools like Photoshop challenge traditional definitions of ‘deepfake’. While these systems may include basic safeguards against controversial or prohibited content, they dramatically expand the concept of ‘retouching’: users can now add or remove objects in a highly convincing, photorealistic manner, achieving a professional level of authenticity that redefines the boundaries of image manipulation.
The authors state:
Taking Exception
The EU AI Act contains exceptions which, the authors argue, may be very permissive. Article 50(2), they state, offers an exception in cases where the majority of an original source image is not altered. The authors note:
The researchers provide the example of adding a hand-gun to the photo of a person who is pointing at someone. By adding the gun, one is changing as little as 5% of the image; yet the semantic significance of the modified portion is considerable. It seems, therefore, that this exception does not take account of any ‘commonsense’ understanding of the effect a small detail can have on the overall meaning of an image.
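To see how little such a threshold captures, the sketch below (Python, with hypothetical before/after files) computes the fraction of altered pixels – a number that says nothing about what the alteration means:

```python
import numpy as np
from PIL import Image

# Hypothetical before/after images of the same scene, identical except
# for a small inserted object (e.g. a hand-gun).
before = np.asarray(Image.open("pointing_original.png").convert("RGB"))
after = np.asarray(Image.open("pointing_edited.png").convert("RGB"))

# Fraction of pixels whose value changed at all.
changed = np.any(before != after, axis=-1)
print(f"{changed.mean():.1%} of pixels altered")

# A result like '5.0% of pixels altered' gives no hint that the edit
# turns an innocuous gesture into an apparent threat.
```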
Section 50(2) also allows exceptions for an ‘assistive function for standard editing’. Since the Act does not define what ‘standard editing’ means, even post-processing features as extreme as Google’s Best Take would appear to be protected by this exception, the authors observe.
Conclusion
The stated intention of the new work is to encourage interdisciplinary study around the regulation of deepfakes, and to act as a starting point for new dialogues between computer scientists and legal scholars.
However, the paper itself succumbs to tautology at several points: it frequently uses the term ‘deepfake’ as if its meaning were self-evident, while taking aim at the EU AI Act for failing to define what actually constitutes a deepfake.