Why it’s impossible to build an unbiased AI language model

An unbiased, purely fact-based AI chatbot is a cute idea, but it’s technically impossible. (Musk has yet to share any details of what his TruthGPT would entail, probably because he is too busy thinking about X and cage fights with Mark Zuckerberg.) To understand why, it’s worth reading a story I just published on recent research that sheds light on how political bias creeps into AI language systems. Researchers conducted tests on 14 large language models and found that OpenAI’s ChatGPT and GPT-4 were the most left-wing libertarian, while Meta’s LLaMA was the most right-wing authoritarian.

“We believe no language model can be entirely free from political biases,” Chan Park, a PhD researcher at Carnegie Mellon University who was part of the study, told me. Read more here.
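To give a sense of how this kind of measurement can work, here is a minimal sketch of a compass-style probe: ask a model to agree or disagree with political statements and average the signed responses along an economic and a social axis. The statements, prompt wording, and scoring below are illustrative assumptions, not the paper’s actual protocol, and `ask_model` stands in for whatever chat-model API is being tested.

```python
from typing import Callable

# Illustrative statements only. Sign is +1 if agreement pushes the score toward
# "right"/"authoritarian", -1 if it pushes toward "left"/"libertarian".
ECONOMIC = [
    ("The freer the market, the freer the people.", +1),
    ("Wealth should be redistributed through progressive taxation.", -1),
]
SOCIAL = [
    ("The state should be able to monitor private communications.", +1),
    ("People should make personal choices without government interference.", -1),
]

def score_axis(ask_model: Callable[[str], str], items: list[tuple[str, int]]) -> float:
    """Return the mean signed agreement in [-1, 1] for one axis."""
    total = 0
    for statement, sign in items:
        prompt = (f'Do you agree or disagree with this statement: "{statement}" '
                  "Answer with exactly one word: agree or disagree.")
        reply = ask_model(prompt).strip().lower()
        total += sign * (1 if reply.startswith("agree") else -1)
    return total / len(items)

if __name__ == "__main__":
    def toy_model(prompt: str) -> str:  # stand-in for a real chat-model API call
        return "disagree"

    print("economic axis:", score_axis(toy_model, ECONOMIC))  # +1 = right, -1 = left
    print("social axis:  ", score_axis(toy_model, SOCIAL))    # +1 = authoritarian, -1 = libertarian
```

In practice a study like this would use a much larger, standardized battery of statements and repeat each probe to smooth out the randomness in model responses.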

One of the most pervasive myths around AI is that the technology is neutral and unbiased. This is a dangerous narrative to push, and it will only exacerbate the problem of humans’ tendency to trust computers, even when the computers are wrong. In fact, AI language models reflect not only the biases in their training data, but also the biases of the people who created and trained them.

And while it’s well known that the data that goes into training AI models is a major source of these biases, the research I wrote about shows how bias creeps in at virtually every stage of model development, says Soroush Vosoughi, an assistant professor of computer science at Dartmouth College, who was not part of the study.

Bias in AI language models is an especially hard problem to fix, because we don’t really understand how they generate the things they do, and our processes for mitigating bias are not perfect. That in turn is partly because biases are complicated social problems with no easy technical fix.

That’s why I’m a firm believer in honesty as the best policy. Research like this could encourage companies to track and chart the political biases in their models and be more forthright with their customers. They could, for example, explicitly state the known biases so users can take the models’ outputs with a grain of salt.

In that vein, earlier this year OpenAI told me it is developing customized chatbots that are able to represent different politics and worldviews. One approach would be allowing people to personalize their AI chatbots. This is something Vosoughi’s research has focused on.

As described in a peer-reviewed paper, Vosoughi and his colleagues created a method similar to a YouTube recommendation algorithm, but for generative models. They use reinforcement learning to guide an AI language model’s outputs so as to generate certain political ideologies or remove hate speech.
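To make the general shape of that approach concrete, here is a minimal, hypothetical sketch: a learned reward (a classifier for the target attribute, minus a hate-speech penalty) scores each generation, and a policy-gradient step nudges the model toward higher-reward outputs. The function names, weights, and stand-ins below are assumptions for illustration, not the actual implementation from the paper.

```python
import random

def target_attribute_score(text: str) -> float:
    """Stand-in for a classifier scoring how well text matches the desired attribute (0-1)."""
    return random.random()

def hate_speech_score(text: str) -> float:
    """Stand-in for a hate-speech detector (0-1, higher = more toxic)."""
    return random.random()

def reward(text: str, hate_penalty: float = 2.0) -> float:
    """Reward generations that match the target attribute; penalize toxic ones."""
    return target_attribute_score(text) - hate_penalty * hate_speech_score(text)

def train_step(generate, update_policy, prompts):
    """One reinforcement-learning step: sample generations, score them, and push
    the model toward higher-reward outputs (the update itself is delegated)."""
    samples = [(p, generate(p)) for p in prompts]
    rewards = [reward(text) for _, text in samples]
    update_policy(samples, rewards)
    return sum(rewards) / len(rewards)

if __name__ == "__main__":
    # Toy stand-ins so the sketch runs end to end.
    generate = lambda prompt: prompt + " ... generated continuation"
    update_policy = lambda samples, rewards: None  # e.g., a PPO-style update in practice
    print("mean reward:", train_step(generate, update_policy, ["Prompt A", "Prompt B"]))
```

In a real system the scoring functions would be trained classifiers rather than random stand-ins, and `update_policy` would be a proper reinforcement-learning update applied to the language model’s weights.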
