
Humans may be more likely to believe disinformation generated by AI


That credibility gap, while small, is concerning given that the problem of AI-generated disinformation seems poised to grow significantly, says Giovanni Spitale, the researcher at the University of Zurich who led the study, which appeared in Science Advances today.

“The fact that AI-generated disinformation is not only cheaper and faster, but also more effective, gives me nightmares,” he says. He believes that if the team repeated the study with the latest large language model from OpenAI, GPT-4, the difference would be even bigger, given how much more powerful GPT-4 is.

To test our susceptibility to different kinds of text, the researchers chose common disinformation topics, including climate change and covid. Then they asked OpenAI’s large language model GPT-3 to generate 10 true tweets and 10 false ones, and collected a random sample of both true and false tweets from Twitter.

Next, they recruited 697 people to complete an online quiz judging whether tweets were generated by AI or collected from Twitter, and whether they were accurate or contained disinformation. They found that participants were 3% less likely to believe human-written false tweets than AI-written ones.

The researchers are unsure why people may be more likely to believe tweets written by AI. But the way in which GPT-3 orders information could have something to do with it, according to Spitale.

“GPT-3’s text tends to be a bit more structured when compared to organic [human-written] text,” he says. “But it’s also condensed, so it’s easier to process.”

The generative AI boom puts powerful, accessible AI tools in the hands of everyone, including bad actors. Models like GPT-3 can generate incorrect text that appears convincing, which could be used to produce false narratives quickly and cheaply for conspiracy theorists and disinformation campaigns. The weapons to fight the problem, AI text-detection tools, are still in the early stages of development, and many are not entirely accurate.

OpenAI is aware that its AI tools could be weaponized to produce large-scale disinformation campaigns. Although this violates its policies, it released a report in January warning that it’s “all but impossible to ensure that large language models are never used to generate disinformation.” OpenAI did not immediately respond to a request for comment.
