It has been reported that the developer who cloned U.S. President Joe Biden's voice with artificial intelligence (AI) used technology from startup Eleven Labs. The company has suspended the developer's account.
In addition, the issue of fake images of singer Taylor Swift, which has turned America upside down in the past few days, has led to calls for the White House to prepare legislation. Amid claims that Microsoft (MS) technology was used to create them, Microsoft CEO Satya Nadella criticized this as "an alarming and terrible thing."
Bloomberg reported on the 27th (local time) that Eleven Labs suspended the account of a creator believed to have copied and distributed President Biden's voice and is investigating.
Earlier this month, voters in New Hampshire, USA, received a call in President Biden's voice urging them not to vote in the primary. The state said, "This is an AI-generated voice intended to impersonate the president, and it appears to be an illegal attempt to interfere with the New Hampshire presidential primary and suppress voters."
According to Pindrop Security, a voice fraud detection specialist that analyzed the audio files, Eleven Labs technology was used in the files, and Eleven Labs said it was aware of this and was investigating.
Eleven Labs said it could not comment on specific cases, but said in a statement, "We are committed to preventing misuse of our audio AI tools, and we take instances of misuse very seriously."
Eleven Labs, a leading startup in the voice AI field, became a unicorn on the 22nd with a corporate value of $1.1 billion (roughly 1.5 trillion won) just two years after its founding.
In addition, the issue of Taylor Swift's deepfake images, which caused controversy by spreading through X (Twitter) last week, is growing into a social problem. Following the outrage of many of Swift's fans and demands to prevent a recurrence, White House press secretary Karine Jean-Pierre issued a statement on the 26th urging Congress to enact legislation to protect people from deepfake pornography.
A media outlet called 404 Media claimed that the deepfakes came from a porn production community that recommends using the Microsoft Designer image generator. OpenAI's 'DALL-E' is integrated into Designer.
The reasoning is that a method for neutralizing Designer's guardrails by adjusting prompts is being shared in this community. However, it was also noted that this fact does not prove the deepfakes were made with MS technology.
Regarding this, Microsoft CEO Satya Nadella said in an interview with NBC Nightly News, "This is an alarming and terrible thing," adding, "AI companies must move quickly to establish better guardrails."
Meanwhile, it is also known that related searches were blocked a day after the images were released.
Reporter Park Chan cpark@aitimes.com