AI language models are rife with political biases

The researchers asked language models where they stand on various topics, such as feminism and democracy. They used the answers to plot the models on a graph known as a political compass, and then tested whether retraining the models on even more politically biased training data changed their behavior and their ability to detect hate speech and misinformation (it did). The research is described in a peer-reviewed paper that won the best paper award at the Association for Computational Linguistics conference last month.

As AI language models are rolled out into services used by millions of people, understanding their underlying political assumptions and biases could not be more important. That's because they have the potential to cause real harm. A chatbot offering health-care advice might refuse to give information on abortion or contraception, or a customer service bot might start spewing offensive nonsense.

Since the success of ChatGPT, OpenAI has faced criticism from right-wing commentators who claim the chatbot reflects a more liberal worldview. The company says it is working to address those concerns, and in a blog post it says it instructs its human reviewers, who help fine-tune the AI model, not to favor any political group. "Biases that nevertheless may emerge from the process described above are bugs, not features," the post says.

Chan Park, a PhD researcher at Carnegie Mellon University who was part of the study team, disagrees. "We believe no language model can be entirely free from political biases," she says.

Bias creeps in at every stage

To reverse-engineer how AI language models pick up political biases, the researchers examined three stages of a model’s development. 

In the first step, they asked 14 language models to agree or disagree with 62 politically sensitive statements. This helped them identify the models' underlying political leanings and plot them on a political compass. To the team's surprise, the AI models turned out to have distinctly different political tendencies, Park says.
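The paper's own probing code is not reproduced here, but a minimal sketch of the idea might look like the following. The statements, the axis assignments, and the `ask_model` helper are hypothetical placeholders for illustration, not the authors' actual prompts or scoring scheme.

```python
# Hypothetical sketch: probe a model with politically sensitive statements
# and accumulate scores on the two axes of a "political compass".

def ask_model(statement: str) -> str:
    """Placeholder for a call to a language model; should return 'agree' or 'disagree'."""
    raise NotImplementedError

STATEMENTS = [
    # (statement, axis, score applied if the model agrees)
    ("The wealthy should pay higher taxes.", "economic", -1),        # agree -> economically left
    ("Government surveillance keeps citizens safe.", "social", +1),  # agree -> more authoritarian
]

scores = {"economic": 0, "social": 0}
for statement, axis, direction in STATEMENTS:
    answer = ask_model(f"Respond only with 'agree' or 'disagree': {statement}")
    scores[axis] += direction if answer.strip().lower().startswith("agree") else -direction

print(scores)  # the (economic, social) pair becomes one point on the compass
```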

The researchers found that BERT models, AI language models developed by Google, were more socially conservative than OpenAI's GPT models. Unlike GPT models, which predict the next word in a sentence, BERT models predict parts of a sentence using the surrounding information within a piece of text. Their social conservatism might arise because older BERT models were trained on books, which tended to be more conservative, while the newer GPT models are trained on more liberal internet texts, the researchers speculate in their paper.
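To make that distinction concrete, here is a minimal sketch using the Hugging Face `transformers` library; the model names and the example sentence are illustrative assumptions, not taken from the study. A BERT-style model fills in a masked token using context on both sides, while a GPT-style model continues a prompt one word at a time, left to right.

```python
from transformers import pipeline

# BERT-style (masked) model: predicts the [MASK] token from surrounding context.
fill_mask = pipeline("fill-mask", model="bert-base-uncased")
print(fill_mask("The government should [MASK] taxes on the wealthy."))

# GPT-style (causal) model: predicts the next words left to right.
generate = pipeline("text-generation", model="gpt2")
print(generate("The government should", max_new_tokens=5))
```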

AI models also change over time as tech companies update their data sets and training methods. GPT-2, for example, expressed support for "taxing the rich," while OpenAI's newer GPT-3 model did not.
