
Joy Buolamwini: “We’re giving AI firms a free pass”

I can tell Buolamwini finds the cover amusing. She takes a picture of it. Times have changed quite a bit since 1961. In her recent memoir, Buolamwini shares her life story. In some ways she embodies how far tech has come since then, and how much further it still has to go. 

Buolamwini is best known for a pioneering paper she co-wrote with AI researcher Timnit Gebru in 2017, called “Gender Shades,” which exposed how commercial facial recognition systems often failed to recognize the faces of Black and brown people, especially Black women. Her research and advocacy led companies such as Google, IBM, and Microsoft to improve their software so it would be less biased, and to back away from selling their technology to law enforcement. 

Now, Buolamwini has a new goal in sight. She is calling for a radical rethink of how AI systems are built. Buolamwini tells MIT Technology Review that, amid the current AI hype cycle, she sees a very real risk of letting technology companies write the rules that apply to them, repeating the very mistake, she argues, that has previously allowed biased and oppressive technology to thrive.

“What concerns me is we’re giving so many firms a free pass, or we’re applauding the innovation while turning our head [away from the harms],” Buolamwini says. 

A particular concern, says Buolamwini, is the basis upon which we’re building today’s sparkliest AI toys, so-called foundation models. Technologists envision these multifunctional models serving as a springboard for many other AI applications, from chatbots to automated movie-making. They’re built by scraping masses of data from the internet, inevitably including copyrighted content and personal information. Many AI companies are currently being sued by artists, music companies, and writers, who claim their intellectual property was taken without consent. 

The current modus operandi of today’s AI companies is unethical, a form of “data colonialism,” Buolamwini says, with a “full disregard for consent.”  

“What’s out there for the taking, if there aren’t laws, it’s just pillaged,” she says. As an author, Buolamwini says, she fully expects her book, her poems, her voice, and her op-eds, even her PhD dissertation, to be scraped into AI models. 

“Should I find that any of my work has been utilized in these systems, I will certainly speak up. That’s what we do,” she says.  
