
What if we could just ask AI to be less biased?


Last week, I published a story about new tools developed by researchers at AI startup Hugging Face and the University of Leipzig that let people see for themselves what kinds of inherent biases AI models have about different genders and ethnicities.

Although I’ve written quite a bit about how our biases are reflected in AI models, it still felt jarring to see exactly how pale, male, and stale the humans of AI are. That was particularly true for DALL-E 2, which generates white men 97% of the time when given prompts like “CEO” or “director.”

And the bias problem runs even deeper than you might think, into the broader world created by AI. These models are built by American firms and trained on North American data, so when they’re asked to generate even mundane everyday items, from doors to houses, they create objects that look American, Federico Bianchi, a researcher at Stanford University, tells me.

As the world becomes increasingly filled with AI-generated imagery, we’re going to mostly see images that reflect America’s biases, culture, and values. Who knew AI could end up being a serious instrument of American soft power?
So how can we address these problems? A lot of work has gone into fixing biases in the data sets AI models are trained on. But two recent research papers propose interesting new approaches.

What if, instead of making the training data less biased, you could simply ask the model to give you less biased answers?

A team of researchers at the Technical University of Darmstadt, Germany, and AI startup Hugging Face developed a tool called Fair Diffusion that makes it easier to tweak AI models to generate the kinds of images you want. For example, you could generate stock photos of CEOs in different settings and then use Fair Diffusion to swap out the white men in the images for women or people of different ethnicities.

As the Hugging Face tools show, AI models that generate images on the basis of image-text pairs in their training data default to very strong biases about professions, gender, and ethnicity. The German researchers’ Fair Diffusion tool relies on a technique they developed called semantic guidance, which allows users to guide how the AI system generates images of people and edit the results.
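To give a rough sense of what "guiding" a generation means, here is a conceptual sketch of the idea behind semantic guidance: at each denoising step, the model's noise prediction is nudged not only toward the text prompt (standard classifier-free guidance) but also toward, or away from, an extra edit concept such as "female CEO". This is a simplified numpy illustration, not the actual Fair Diffusion code; the function name, the scale values, and the update rule are illustrative assumptions.

```python
import numpy as np

def guided_noise(eps_uncond, eps_text, eps_edit,
                 guidance_scale=7.5, edit_scale=5.0, reverse=False):
    """Conceptual sketch of semantic guidance for a diffusion model.

    eps_uncond: noise predicted with no prompt
    eps_text:   noise predicted with the main text prompt
    eps_edit:   noise predicted with the edit concept (e.g. "female CEO")
    """
    # Standard classifier-free guidance: push toward the text prompt.
    guidance = guidance_scale * (eps_text - eps_uncond)
    # Direction of the edit concept in noise space.
    edit_dir = eps_edit - eps_uncond
    if reverse:
        # Move away from the concept instead of toward it.
        edit_dir = -edit_dir
    return eps_uncond + guidance + edit_scale * edit_dir

# Toy noise predictions; in a real pipeline these come from the U-Net.
rng = np.random.default_rng(0)
e_u, e_t, e_e = (rng.normal(size=(4, 4)) for _ in range(3))
out = guided_noise(e_u, e_t, e_e)
```

In the real system the edit term is applied per denoising step and masked to the image regions most associated with the concept, which is why the edits stay localized, but the core idea is this extra steering term.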

The AI system stays very close to the original image, says Kristian Kersting, a computer science professor at TU Darmstadt who worked on the project.
