Hany Farid, a professor at UC Berkeley who specializes in digital forensics but wasn’t involved in the Microsoft research, says that if the industry adopted the company’s blueprint, it would be meaningfully harder to deceive the public with manipulated content. Sophisticated individuals or governments can work to bypass such tools, he says, but the new standard could eliminate a significant slice of misleading material.
“I don’t think it solves the problem, but I think it takes a nice big chunk out of it,” he says.
Still, there are reasons to see Microsoft’s approach as an example of somewhat naïve techno-optimism. There’s growing evidence that people are swayed by AI-generated content even when they know it is fake. And in a recent study of pro-Russian AI-generated videos about the war in Ukraine, comments pointing out that the videos were made with AI received far less engagement than comments treating them as real.
“Are there people who, no matter what you tell them, are going to believe what they believe?” Farid asks. “Yes.” But, he adds, “there are a vast majority of Americans and citizens around the world who I do think want to know the truth.”
That desire has not exactly led to urgent action from tech companies. Google began adding a watermark to content generated by its AI tools in 2023, which Farid says has been helpful in his investigations. Some platforms use C2PA, a provenance standard Microsoft helped launch in 2021. But the full suite of changes that Microsoft suggests, powerful as they are, might remain only suggestions if they threaten the business models of AI companies or social media platforms.
