These latest tools could make AI vision systems less biased

Traditionally, skin-tone bias in computer vision is measured using the Fitzpatrick scale, which ranks skin from light to dark. The scale was originally developed to characterize how white skin tans, but it has since been widely adopted as a proxy for ethnicity, says William Thong, an AI ethics researcher at Sony. It is used to measure bias in computer systems by, for instance, comparing how accurate AI models are for people with light skin and people with dark skin.

But describing people’s skin with a one-dimensional scale is misleading, says Alice Xiang, the global head of AI ethics at Sony. By sorting people into groups on this coarse scale, researchers miss biases that affect, for example, Asian people, who are underrepresented in Western AI data sets and can fall into both light-skinned and dark-skinned categories. The scale also fails to account for the fact that people’s skin tones change. For instance, Asian skin becomes darker and more yellow with age, while white skin becomes darker and redder, the researchers point out.

Thong and Xiang’s team developed a tool—shared exclusively with MIT Technology Review—that expands the skin-tone scale into two dimensions, measuring both skin color (from light to dark) and skin hue (from red to yellow). Sony is making the tool freely available online.
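The article does not give Sony’s exact formulas, but a standard way to get a two-dimensional measurement like this is to convert a skin pixel’s RGB value into the CIELAB color space, then read off the lightness L* (the light-to-dark axis) and the hue angle of a* and b* (the red-to-yellow axis). The sketch below illustrates that idea under those assumptions; the conversion constants are the usual sRGB/D65 ones, and the function names are illustrative, not Sony’s API.

```python
import math

def srgb_to_lab(r, g, b):
    """Convert an sRGB color (0-255 per channel) to CIELAB (L*, a*, b*)."""
    def linearize(c):
        c /= 255.0
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

    rl, gl, bl = linearize(r), linearize(g), linearize(b)
    # Linear RGB -> XYZ, D65 reference white
    x = 0.4124 * rl + 0.3576 * gl + 0.1805 * bl
    y = 0.2126 * rl + 0.7152 * gl + 0.0722 * bl
    z = 0.0193 * rl + 0.1192 * gl + 0.9505 * bl
    xn, yn, zn = 0.95047, 1.0, 1.08883  # D65 white point

    def f(t):
        # Cube root with a linear segment near zero, per the CIELAB definition
        return t ** (1 / 3) if t > (6 / 29) ** 3 else t / (3 * (6 / 29) ** 2) + 4 / 29

    fx, fy, fz = f(x / xn), f(y / yn), f(z / zn)
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

def skin_tone_coordinates(r, g, b):
    """Return (lightness, hue angle in degrees): two axes of a 2-D skin-tone scale."""
    lightness, a_star, b_star = srgb_to_lab(r, g, b)
    # Hue angle near 0 deg is redder; closer to 90 deg is yellower
    return lightness, math.degrees(math.atan2(b_star, a_star))
```

With a measurement like this, two people whose skin is equally light or dark can still be distinguished by hue, which is exactly the dimension a one-axis scale throws away.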

Thong says he was inspired by the Brazilian artist Angélica Dass, whose work shows that people from similar backgrounds can have a huge variety of skin tones. But representing the full range of skin tones is not a novel idea: the cosmetics industry has been using a similar approach for years.

“For anyone who has had to select a foundation shade … you know the importance of not only whether someone’s skin tone is light or dark, but also whether it’s warm toned or cool toned,” says Xiang.

Sony’s work on skin hue “offers insight into a missing component that people have been overlooking,” says Guha Balakrishnan, an assistant professor at Rice University who has studied biases in computer vision models.

Measuring bias

Right now, there is no single standard way for researchers to measure bias in computer vision, which makes it harder to compare systems against one another.

To make bias evaluations more streamlined, Meta has developed a new way to measure fairness in computer vision models, called Fairness in Computer Vision Evaluation (FACET), which can be used across a range of common tasks such as classification, detection, and segmentation. Laura Gustafson, an AI researcher at Meta, says FACET is the first fairness evaluation to span many different computer vision tasks, and that it incorporates a broader range of fairness metrics than other bias tools.
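The core move behind a benchmark like FACET is disaggregation: instead of one accuracy number, a model’s performance is computed separately per demographic group and the gaps are reported. The sketch below shows one common metric of this kind, per-group recall and the max-min recall gap, on hypothetical labeled records; it is an illustration of the general technique, not FACET’s actual API or metric set.

```python
from collections import defaultdict

def recall_by_group(records):
    """Per-group recall from (group, y_true, y_pred) records with binary labels."""
    tp = defaultdict(int)  # true positives per group
    fn = defaultdict(int)  # false negatives per group
    for group, y_true, y_pred in records:
        if y_true == 1:
            if y_pred == 1:
                tp[group] += 1
            else:
                fn[group] += 1
    return {g: tp[g] / (tp[g] + fn[g]) for g in set(tp) | set(fn)}

def recall_gap(recalls):
    """Worst-case disparity: difference between best- and worst-served groups."""
    return max(recalls.values()) - min(recalls.values())
```

A gap near zero suggests the model serves all groups similarly on this task; a large gap flags which groups are underserved and by how much.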
