
A phishing scam using voice generation AI…


(Photo = Shutterstock)

Phone phishing scams that use artificial intelligence (AI) to synthesize the voices of family members and demand money are reportedly rampant. Long-standing concerns have become reality: voice-generating AI, which can clone a voice from just 30 seconds of audio, is being abused for crime.

The Washington Post (WP) reported one such case on the 5th (local time).

According to the report, the parents of Benjamin Perkin, 39, of Canada, received a call claiming that their son had killed an American diplomat in a car accident, was in jail, and needed money. They wired $15,000 (about 20 million won).

Perkin learned what had happened when he called his parents that night. His parents had found the call strange, but they could not shake the feeling that they were talking to their real son.

Perkin said he had posted a YouTube video about his snowmobiling hobby, but it was unclear where the scammers had obtained his voice. The family filed a report with Canadian police but did not get their money back.

According to Federal Trade Commission (FTC) data, there were 36,000 cases of fraud last year in which scammers impersonated friends or family. Of these, more than 5,100 were phone scams, with losses exceeding $11 million (roughly 14.3 billion won).

Phone fraud is a crime that has never been fully stamped out, and it has recently become more sophisticated thanks to advances in voice-generating AI tools. Inexpensive online AI tools can turn a simple audio file into a cloned voice, letting scammers speak as they type.

Vishing (voice + phishing) refers to a fraud technique that mimics a person's voice. (Photo = Shutterstock)

Hany Farid, a professor at the University of California, Berkeley, explains that AI voice generation software analyzes the traits that make a person's voice unique, such as age, gender, and accent, then searches a vast voice database for similar voices and predicts speech patterns.

It then reconstructs the pitch, timbre, and pronunciation of the person's voice to produce a result that closely resembles the real thing. The audio samples needed to generate these voices can come from short videos on YouTube, podcasts, commercials, TikTok, Instagram, or Facebook.

Professor Farid said, "Just a year ago, replicating a human voice required large amounts of audio. Now a voice can be reproduced from only 30 seconds of audio."

Companies like Eleven Labs, an AI voice generation startup founded last year, turn short voice samples into synthesized voices for between $5 and $330 per month. A free trial is also available.

Following a series of abuses in phone scams, the company said it is developing and integrating safeguards that restrict voice generation for free users and detect generated voices.

Will Maxson of the FTC told the Post that if you receive a call saying a family member is in an emergency and needs money, you should verify it through another channel, such as calling the person directly. His advice is to keep in mind that phone fraud techniques have evolved enough to fool people.

Professor Farid said that when harm is caused by voice-generating AI tools, the courts should hold the companies that developed them accountable.

Jeong Byeong-il, reporter jbi@aitimes.com
