The people paid to train AI are outsourcing their work… to AI


No wonder some of them may be turning to tools like ChatGPT to maximize their earning potential. But how many? To find out, a team of researchers from the Swiss Federal Institute of Technology (EPFL) hired 44 people on the gig work platform Amazon Mechanical Turk to summarize 16 extracts from medical research papers. Then they analyzed the responses using an AI model they’d trained themselves that looks for telltale signals of ChatGPT output, such as lack of variety in word choice. They also extracted the workers’ keystrokes in a bid to work out whether they’d copied and pasted their answers, an indicator that the responses had been generated elsewhere.
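The paper’s detector is a trained model, but the “lack of variety in word choice” signal it relies on can be illustrated with a much cruder proxy. The sketch below is a hypothetical illustration, not the EPFL team’s method: it computes a type-token ratio (unique words divided by total words) for each summary and flags the ones whose vocabulary is unusually repetitive relative to the rest of the batch.

```python
import re
import statistics

def type_token_ratio(text: str) -> float:
    """Crude lexical-diversity signal: unique words / total words.
    Lower values mean more repetitive word choice."""
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return 0.0
    return len(set(words)) / len(words)

def flag_low_diversity(summaries: list[str], z_cutoff: float = -1.0) -> list[int]:
    """Return indices of summaries whose lexical diversity sits well
    below the batch average -- a weak hint, not proof, of machine text."""
    ratios = [type_token_ratio(s) for s in summaries]
    if len(ratios) < 2:
        return []
    mean = statistics.mean(ratios)
    stdev = statistics.stdev(ratios)
    if stdev == 0.0:
        return []
    return [i for i, r in enumerate(ratios) if (r - mean) / stdev < z_cutoff]
```

A single statistic like this is far too noisy to be conclusive on its own, which is presumably why the researchers trained a classifier on many such signals and paired it with the keystroke data: text that is pasted in wholesale, with no typing, is a strong hint that it was composed somewhere else.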

They estimated that somewhere between 33% and 46% of the workers had used AI models like OpenAI’s ChatGPT. That share is likely to climb even higher as ChatGPT and other AI systems become more powerful and more easily accessible, according to the authors of the study, which has been shared on arXiv and has yet to be peer-reviewed.

“I don’t think it’s the end of crowdsourcing platforms. It just changes the dynamics,” says Robert West, an assistant professor at EPFL, who coauthored the study.

Using AI-generated data to train AI could introduce further errors into already error-prone models. Large language models regularly present false information as fact. If they generate incorrect output that is itself used to train other AI models, the errors can be absorbed by those models and amplified over time, making it increasingly difficult to work out their origins, says Ilia Shumailov, a junior research fellow in computer science at Oxford University, who was not involved in the project.

Even worse, there’s no simple fix. “The problem is, when you’re using artificial data, you acquire the errors from the misunderstandings of the models and statistical errors,” he says. “You need to make sure that your errors are not biasing the output of other models, and there’s no simple way to do that.”

The study highlights the need for new ways to check whether data has been produced by humans or by AI. It also highlights one of the problems with tech companies’ tendency to rely on gig workers to do the vital work of tidying up the data fed to AI systems.

“I don’t think everything will collapse,” says West. “But I think the AI community will have to investigate closely which tasks are most prone to being automated and to work on ways to prevent this.”
