We need to focus on the AI harms that already exist

One problem with minimizing existing AI harms by saying hypothetical existential harms are more important is that it shifts the flow of valuable resources and legislative attention. Companies that claim to fear existential risk from AI could show a genuine commitment to safeguarding humanity by not releasing the AI tools they claim could end humanity.

I am not opposed to preventing the creation of lethal AI systems. Governments concerned with lethal uses of AI can adopt the protections long championed by the Campaign to Stop Killer Robots to ban lethal autonomous systems and digital dehumanization. The campaign addresses potentially fatal uses of AI without making the hyperbolic jump that we are on a path to creating sentient systems that will destroy all humankind.

Though it is tempting to view physical violence as the ultimate harm, doing so makes it easy to forget the pernicious ways our societies perpetuate structural violence. The Norwegian sociologist Johan Galtung coined this term to describe how institutions and social structures prevent people from meeting their fundamental needs and thus cause harm. Denial of access to health care, housing, and employment through the use of AI perpetuates individual harms and generational scars. AI systems can kill us slowly.

Given what my “Gender Shades” research revealed about algorithmic bias from some of the leading tech companies in the world, my concern is about the immediate problems and emerging vulnerabilities with AI, and whether we can address them in ways that would also help create a future where the burdens of AI do not fall disproportionately on the marginalized and vulnerable. AI systems with subpar intelligence that lead to false arrests or flawed diagnoses need to be addressed now.

When I think of x-risk, I think of the people being harmed now and those who are at risk of harm from AI systems. I think about the risk and reality of being “excoded.” You can be excoded when a hospital uses AI for triage and leaves you without care, or uses a clinical algorithm that precludes you from receiving a life-saving organ transplant. You can be excoded when you are denied a loan based on algorithmic decision-making. You can be excoded when your résumé is automatically screened out and you are denied the opportunity to compete for the remaining jobs that are not replaced by AI systems. You can be excoded when a tenant-screening algorithm denies you access to housing. All of these examples are real. No one is immune from being excoded, and those already marginalized are at greater risk.

This is why my research cannot be confined to industry insiders, AI researchers, or even well-meaning influencers. Yes, academic conferences are important venues. For many academics, presenting published papers is the capstone of a particular research exploration. For me, presenting “Gender Shades” at New York University was a launching pad. I felt motivated to put my research into action—beyond talking shop with AI practitioners, beyond academic presentations, beyond private dinners. Reaching academics and industry insiders is simply not enough. We need to ensure that everyday people at risk of experiencing AI harms are part of the fight for algorithmic justice.

