
Why Big Tech’s watermarking plans are some welcome good news

On February 6, Meta said it is going to label AI-generated images on Facebook, Instagram, and Threads. When someone uses Meta’s AI tools to create images, the company will add visible markers to the image, as well as invisible watermarks and metadata in the image file. The company says its standards are in line with best practices laid out by the Partnership on AI, an AI research nonprofit.

Big Tech is also throwing its weight behind a promising technical standard that could add a “nutrition label” to images, video, and audio. Called C2PA, it’s an open-source internet protocol that relies on cryptography to encode details about the origins of a piece of content, or what technologists refer to as “provenance” information. The developers of C2PA often compare the protocol to a nutrition label, but one that says where content came from and who, or what, created it. Read more about it here.
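C2PA itself specifies a detailed manifest format, certificate chains, and tooling, but the core cryptographic idea is simple: sign a claim about where content came from, so that any later tampering with the claim or the content is detectable. Here is a minimal sketch of that idea in Python using the `cryptography` package; the claim fields and generator name are made up for illustration, and this is not the actual C2PA format.

```python
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The content creator (e.g., an AI image generator) holds a signing key.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

image_bytes = b"...raw image bytes..."  # stand-in for a real file

# A provenance claim: who or what made the content, tied to these exact
# bytes by a hash. (A real C2PA manifest is much richer than this.)
claim = {
    "generator": "example-image-model",  # hypothetical tool name
    "created": "2024-02-08T00:00:00Z",
    "content_sha256": hashlib.sha256(image_bytes).hexdigest(),
}
claim_bytes = json.dumps(claim, sort_keys=True).encode()
signature = private_key.sign(claim_bytes)

def verify(content: bytes, claim_bytes: bytes, signature: bytes) -> bool:
    """Check that neither the content nor the claim has been altered."""
    parsed = json.loads(claim_bytes)
    if hashlib.sha256(content).hexdigest() != parsed["content_sha256"]:
        return False  # the content was edited after signing
    try:
        public_key.verify(signature, claim_bytes)
        return True
    except InvalidSignature:
        return False  # the claim itself was tampered with

print(verify(image_bytes, claim_bytes, signature))      # True
print(verify(b"edited image", claim_bytes, signature))  # False
```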

On February 8, Google announced it is joining other tech giants such as Microsoft and Adobe on the steering committee of C2PA and will include its watermark, SynthID, in all AI-generated images in its new Gemini tools. Meta says it is also participating in C2PA. Having an industry-wide standard makes it easier for companies to detect AI-generated content, no matter which system it was created with.

OpenAI also announced new content provenance measures last week. It says it will add watermarks to the metadata of images generated with ChatGPT and DALL-E 3, its image-making AI. OpenAI says it will now include a visible label in images to signal they have been created with AI.
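To make “a watermark in the metadata” concrete, here is a toy sketch using Pillow that stores a provenance tag in a PNG text chunk. The tag name and value are hypothetical, and OpenAI’s real implementation follows the C2PA metadata standard rather than a bare text chunk; the sketch also previews the weakness discussed below.

```python
from PIL import Image, PngImagePlugin

img = Image.new("RGB", (64, 64), color="gray")  # stand-in for a generated image

# Write a provenance tag into a PNG text chunk: it travels with the
# file but is not part of the pixels.
meta = PngImagePlugin.PngInfo()
meta.add_text("ai_provenance", "generated-by:example-model")  # hypothetical tag
img.save("labeled.png", pnginfo=meta)

# The label reads back fine from the original file...
print(Image.open("labeled.png").text)  # {'ai_provenance': 'generated-by:example-model'}

# ...but re-encoding the pixels (which is effectively what a screenshot
# does) silently drops it.
Image.open("labeled.png").save("rescreened.png")
print(Image.open("rescreened.png").text)  # {}
```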

These methods are a promising start, but they’re not foolproof. Watermarks in metadata are easy to bypass by taking a screenshot of the image and just using that, while visible labels can be cropped or edited out. There is perhaps more hope for invisible watermarks like Google’s SynthID, which subtly changes the pixels of an image so that computer programs can detect the watermark but the human eye cannot. These are harder to tamper with. What’s more, there aren’t reliable ways to label and detect AI-generated video, audio, or even text.
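Google has not published how SynthID works, so as a stand-in, here is the classic least-significant-bit technique, a toy example of hiding a machine-readable signal in pixel values that the eye cannot see. Unlike SynthID, this naive scheme would not survive cropping, compression, or re-encoding.

```python
import numpy as np

def embed(pixels: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Hide watermark bits in the least significant bit of the red channel."""
    out = pixels.copy()
    red = out[..., 0]  # view into the red channel
    red.flat[: bits.size] = (red.flat[: bits.size] & 0xFE) | bits
    return out

def extract(pixels: np.ndarray, n_bits: int) -> np.ndarray:
    """Read the hidden bits back out."""
    return pixels[..., 0].flat[:n_bits] & 1

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
watermark = rng.integers(0, 2, size=256, dtype=np.uint8)

marked = embed(image, watermark)
assert np.array_equal(extract(marked, 256), watermark)

# The largest per-pixel change is 1 out of 255: invisible to the eye.
print(np.abs(marked.astype(int) - image.astype(int)).max())  # 1
```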

But there is still value in creating these provenance tools. As Henry Ajder, a generative-AI expert, told me a few weeks ago when I interviewed him about how to prevent deepfake porn, the point is to create a “perverse customer journey.” In other words, add barriers and friction to the deepfake pipeline in order to slow down the creation and sharing of harmful content as much as possible. A determined person will likely still be able to override these protections, but every little bit helps.

There are also many nontechnical fixes tech companies could introduce to prevent problems such as deepfake porn. Major cloud service providers and app stores, such as Google, Amazon, Microsoft, and Apple, could move to ban services that can be used to create nonconsensual deepfake nudes. And watermarks should be included in all AI-generated content across the board, even by smaller startups developing the technology.

What gives me hope is that alongside these voluntary measures we’re starting to see binding regulations, such as the EU’s AI Act and the Digital Services Act, which require tech companies to disclose AI-generated content and take down harmful content faster. There’s also renewed interest among US lawmakers in passing binding rules on deepfakes. And following AI-generated robocalls of President Biden telling voters not to vote, the US Federal Communications Commission announced last week that it was banning the use of AI in such calls.
