
These new tools let you see for yourself how biased AI image models are


One theory as to why that might be is that nonbinary brown people may have had more visibility in the press recently, meaning their images end up in the data sets the AI models use for training, says Jernite.

OpenAI and Stability.AI, the company that built Stable Diffusion, say that they have introduced fixes to mitigate the biases ingrained in their systems, such as blocking certain prompts that seem likely to generate offensive images. However, these new tools from Hugging Face show how limited those fixes are.
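To make that kind of fix concrete, here is a minimal sketch of a prompt blocklist, the general technique the companies describe. The `BLOCKED_TERMS` set and the `is_prompt_allowed` helper are assumptions for this illustration, not OpenAI's or Stability.AI's actual filtering code.

```python
# Hypothetical sketch of prompt-level filtering, the kind of mitigation
# described above. The blocked terms and function name are illustrative
# assumptions, not either company's actual implementation.
BLOCKED_TERMS = {"nudity", "gore"}  # placeholder terms for the sketch

def is_prompt_allowed(prompt: str) -> bool:
    """Return False if the prompt contains any blocked term (case-insensitive)."""
    lowered = prompt.lower()
    return not any(term in lowered for term in BLOCKED_TERMS)

for prompt in ["a photo of a doctor", "a photo of gore"]:
    print(prompt, "->", "allowed" if is_prompt_allowed(prompt) else "blocked")
```

Keyword filters like this are easy to sidestep with rephrased prompts, and they do nothing about biases baked into the training data itself, which is part of why such fixes only go so far.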

A spokesperson for Stability.AI told us that the company trains its models on “data sets specific to different countries and cultures,” adding that this should “serve to mitigate biases caused by overrepresentation in general data sets.”

A spokesperson for OpenAI didn’t comment on the tools specifically, but pointed us to a blog post explaining how the company has added various techniques to DALL-E 2 to filter out bias and sexual and violent images. 

Bias is becoming a more urgent problem as these AI models become more widely adopted and produce ever more realistic images. They’re already being rolled out in a slew of products, such as stock photos. Luccioni says she is worried that the models risk reinforcing harmful biases on a large scale. She hopes the tools she and her team have created will bring more transparency to image-generating AI systems and underscore the importance of making them less biased. 

Part of the problem is that these models are trained on predominantly US-centric data, which means they mostly reflect American associations, biases, values, and culture, says Aylin Caliskan, an associate professor at the University of Washington who studies bias in AI systems and was not involved in this research.  

“What ends up happening is the thumbprint of this online American culture … that’s perpetuated across the world,” Caliskan says. 

Caliskan says Hugging Face’s tools will help AI developers better understand and reduce biases in their AI models. “When people see these examples directly, I believe they’ll be able to understand the significance of these biases better,” she says. 
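For readers who want to try a similar comparison themselves, here is a minimal sketch using Hugging Face’s open-source diffusers library to generate images of the same profession paired with different adjectives, the kind of side-by-side probing the bias-explorer tools support. The model ID, prompt template, and adjective list are assumptions for illustration, not the tools’ actual code.

```python
# Minimal sketch of comparing Stable Diffusion outputs across prompts,
# in the spirit of Hugging Face's bias-explorer tools. The model ID,
# prompt template, and file names are illustrative assumptions.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # assumed checkpoint; any SD model works
    torch_dtype=torch.float16,
).to("cuda")

profession = "CEO"
for adjective in ["determined", "compassionate"]:
    prompt = f"a photo of a {adjective} {profession}"
    image = pipe(prompt, num_inference_steps=25).images[0]
    image.save(f"{adjective}_{profession}.png")  # inspect the outputs side by side
```

Generating several images per prompt and comparing who the model depicts for each adjective-profession pair makes the skews the article describes visible at a glance.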
