SynthID introduces additional information at the point of generation by changing the probability that tokens will be generated, explains Kohli.
To detect the watermark and determine whether text has been generated by an AI tool, SynthID compares the expected probability scores for words in watermarked and unwatermarked text.
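The idea of skewing token probabilities and then testing for that skew at detection time can be sketched with a toy "green list" watermark. This is a simplified illustration of the general logit-biasing approach, not SynthID's actual algorithm; the tiny vocabulary, the hash-based keying on the previous token, and the detection rule are all assumptions made for the example.

```python
import hashlib
import random

# Toy vocabulary standing in for a real model's token set.
VOCAB = ["the", "cat", "sat", "on", "a", "mat", "dog", "ran"]

def green_list(prev_token: str) -> set:
    # Deterministically derive a pseudo-random "green" half of the
    # vocabulary, keyed on the previous token. The generator favors
    # green tokens; the detector recomputes the same lists.
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    return set(rng.sample(VOCAB, len(VOCAB) // 2))

def detect(tokens: list) -> float:
    # Fraction of tokens that fall in the green list keyed on their
    # predecessor: roughly 0.5 for ordinary text, close to 1.0 for
    # text generated with a strong green-token bias.
    pairs = list(zip(tokens, tokens[1:]))
    hits = sum(1 for prev, tok in pairs if tok in green_list(prev))
    return hits / max(1, len(pairs))
```

A real system biases probabilities softly rather than forcing green tokens, and aggregates the statistic over many tokens so that the watermark is detectable without being visible to readers.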
Google DeepMind found that using the SynthID watermark didn't compromise the quality, accuracy, creativity, or speed of generated text. That conclusion was drawn from a large live experiment on SynthID's performance after the watermark was deployed in its Gemini products and used by hundreds of thousands of people. Gemini allows users to rate the quality of the AI model's responses with a thumbs-up or a thumbs-down.
Kohli and his team analyzed the scores for around 20 million watermarked and unwatermarked chatbot responses. They found that users didn't notice a difference in quality and usefulness between the two. The results of this experiment are detailed in a paper published today. Currently, SynthID for text only works on content generated by Google's models, but the hope is that open-sourcing it will expand the range of tools it's compatible with.
SynthID does have other limitations. The watermark was resistant to some tampering, such as cropping text and light editing or rewriting, but it was less reliable when AI-generated text had been heavily rewritten or translated from one language into another. It is also less reliable in responses to prompts asking for factual information, such as the capital city of France. This is because there are fewer opportunities to adjust the probability of the next possible word in a sentence without changing facts.
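The factual-prompt limitation can be made concrete with a toy calculation: when the model's next-token distribution is sharply peaked on one correct answer, biasing "green" tokens barely moves the output, whereas a flat distribution shifts substantially. The bias function, the `delta` value, and the example distributions below are illustrative assumptions, not SynthID's actual mechanism.

```python
import math

def apply_green_bias(probs: dict, green: set, delta: float = 2.0) -> dict:
    # Add a logit bias `delta` to green tokens, then renormalize.
    logits = {t: math.log(p) + (delta if t in green else 0.0)
              for t, p in probs.items()}
    z = sum(math.exp(v) for v in logits.values())
    return {t: math.exp(v) / z for t, v in logits.items()}

# A factual prompt ("What is the capital of France?") yields a
# peaked, low-entropy distribution...
factual = {"Paris": 0.99, "Lyon": 0.01}
# ...while an open-ended creative prompt yields a flatter one.
creative = {"cat": 0.25, "dog": 0.25, "fox": 0.25, "owl": 0.25}

biased_factual = apply_green_bias(factual, green={"Lyon"})
biased_creative = apply_green_bias(creative, green={"dog", "owl"})
```

In the peaked case, "Paris" still dominates after the bias, so almost every generation emits the same non-green token and leaves little statistical signal; in the flat case, the green tokens absorb most of the probability mass, giving the detector something to measure.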