
People shouldn’t pay such a high price for calling out AI harms

The G7 has just agreed a (voluntary) code of conduct that AI companies should abide by, as governments seek to reduce the harms and risks created by AI systems. And later this week, the UK will be filled with AI movers and shakers attending the UK government’s AI Safety Summit, an attempt to come up with global rules on AI safety. 

Taken together, these events suggest that the narrative pushed by Silicon Valley about the “existential risk” posed by AI is becoming increasingly dominant in public discourse.

That is concerning, because focusing on fixing hypothetical harms that may emerge in the future takes attention away from the very real harms AI is causing today. “Existing AI systems that cause demonstrated harms are more dangerous than hypothetical ‘sentient’ AI systems because they’re real,” writes Joy Buolamwini, a renowned AI researcher and activist, in her new memoir Unmasking AI: My Mission to Protect What Is Human in a World of Machines. Read more of her thoughts in an excerpt from her book, out tomorrow. 

I had the pleasure of talking with Buolamwini about her life story and what concerns her in AI today. Buolamwini is an influential voice in the field. Her research on bias in facial recognition systems made companies such as IBM, Google, and Microsoft change their systems and back away from selling their technology to law enforcement. 

Now, Buolamwini has a new goal in sight. She is calling for a radical rethink of how AI systems are built, starting with more ethical, consensual data collection practices. “What concerns me is we’re giving so many companies a free pass, or we’re applauding the innovation while turning our head [away from the harms],” Buolamwini told me. Read my interview with her. 

While Buolamwini’s story is in some ways an inspirational tale, it is also a warning. Buolamwini has been calling out AI harms for the better part of a decade, and she has done some impressive things to bring the subject into the public consciousness. What really struck me was the toll speaking up has taken on her. In the book, she describes having to check herself into the emergency room for severe exhaustion after trying to do too many things at once—pursuing advocacy, founding her nonprofit organization the Algorithmic Justice League, attending congressional hearings, and writing her PhD dissertation at MIT. 

She is not alone. Buolamwini’s experience tracks with a piece I wrote almost exactly a year ago about how responsible AI has a burnout problem.  

Partly thanks to researchers like Buolamwini, tech companies face more public scrutiny over their AI systems. Companies realized they needed responsible AI teams to ensure that their products are developed in ways that mitigate potential harm. These teams evaluate how our lives, societies, and political systems are affected by the way these systems are designed, developed, and deployed. 
