
Can ‘we the people’ keep AI in check?

Technologist and researcher Aviv Ovadya isn't sure that generative AI can be governed, but he thinks the most plausible means of keeping it in check might just be entrusting those who will be impacted by AI to collectively decide on the ways to curb it.

That means you; it means me. It's the power of large networks of individuals to problem-solve faster and more equitably than a small group of people can alone (including, say, in Washington). This isn't naively relying on the wisdom of the crowds — which has been shown to be problematic — but using so-called deliberative democracy, an approach that involves selecting people through sortition to be representative (such that everyone in the population being impacted has an equal probability of being chosen), and providing them with an environment that enables them to deliberate effectively and make smart decisions. That means compensation for their time, access to experts and stakeholders, and neutral facilitation.
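
For the technically curious: at its core, sortition is a stratified lottery. The sketch below, in Python, is purely illustrative; the roster, the age brackets and the `sortition_panel` helper are all hypothetical, not drawn from any real civic-lottery software. It allocates seats to each stratum in proportion to that stratum's share of the population and gives every individual within a stratum an equal chance of selection. Real civic lotteries add layers this skips, such as opt-in recruitment pools and multi-attribute fairness constraints.

```python
import random
from collections import defaultdict

def sortition_panel(population, stratum_of, panel_size, seed=None):
    """Draw a panel by stratified lottery: each stratum receives seats in
    proportion to its share of the population, and every individual within
    a stratum has an equal probability of being chosen."""
    rng = random.Random(seed)
    strata = defaultdict(list)
    for person in population:
        strata[stratum_of(person)].append(person)

    panel = []
    for members in strata.values():
        # Proportional seat allocation; rounding can shift the final
        # panel size by a seat or two, which a sketch can tolerate.
        seats = round(panel_size * len(members) / len(population))
        panel.extend(rng.sample(members, min(seats, len(members))))
    return panel

# Toy roster of 10,000 people with a made-up age attribute; a real civic
# lottery would start from census or voter-roll data.
roster = [
    {"id": i, "age": random.choice(["18-34", "35-54", "55+"])}
    for i in range(10_000)
]
panel = sortition_panel(roster, lambda p: p["age"], panel_size=25, seed=7)
print(f"{len(panel)} panelists drawn by lot")
```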

It's already happening in many fields, including scientific research, business, politics and social movements. In Taiwan, for example, civic-minded hackers in 2015 formed a platform — "virtual Taiwan" — that "brings together representatives from the public, private and social sectors to debate policy solutions to problems primarily related to the digital economy," as Taiwan's digital minister, Audrey Tang, explained in The New York Times in 2019. Since then, vTaiwan, as it's known, has tackled dozens of issues by "relying on a mix of online debate and face-to-face discussions with stakeholders," Tang wrote at the time.

A similar initiative is Oregon's Citizens' Initiative Review, which was signed into law in 2011 and informs the state's voting population about ballot measures through a citizen-driven deliberative process. Roughly 20 to 25 citizens who are representative of the entire Oregon electorate are brought together to debate the merits of an initiative; they then collectively write a statement about that initiative that's sent out to the state's other voters so they can make better-informed decisions on election days.

These deliberative processes have also successfully helped address issues in Australia (water policy), Canada (electoral reform), Chile (pensions and healthcare) and Argentina (housing, land ownership), amongst other places.

"There are obstacles to making this work" as it relates to AI, acknowledges Ovadya, who's affiliated with Harvard's Berkman Klein Center and whose work increasingly centers on the impacts of AI on society and democracy. "But empirically, this has been done on every continent around the world, at every scale" and the "faster we can get some of this stuff in place, the better," he notes.

Letting large cross sections of people decide on acceptable guidelines for AI may sound outlandish to some, maybe even impossible.

Yet Ovadya isn't alone in thinking the answer is largely rooted in society. Mira Murati, the chief technology officer of the prominent AI startup OpenAI, told Time magazine in a recent interview, "[W]e're a small group of people and we need a ton more input in this system and a lot more input that goes beyond the technologies — definitely regulators and governments and everyone else."

Murati isn't worried that government involvement will slow innovation, or that it's too early for policymakers and regulators to get involved, she told the outlet when asked about these things. On the contrary, as OpenAI has been saying for years, the time for action is today, not tomorrow. "It's very important for everyone to start getting involved given the impact these technologies are going to have," she said.

For now, OpenAI is taking a self-governing approach, instituting and revisiting guidelines for the safe use of its tech and pushing out new iterations in dribs and drabs.

The European Union has meanwhile been drafting a regulatory framework — the AI Act — that's making its way through the European Parliament and aims to become a global standard. The law would assign applications of AI to three risk categories: applications and systems that create an "unacceptable risk"; "high-risk applications" that would be subject to specific legal requirements; and applications not explicitly banned or listed as high-risk that would largely be left unregulated.

The U.S. Department of Commerce has also drafted a voluntary framework meant as guidance for companies, yet amazingly, there remains no regulation — zilch — even though it's sorely needed. (In addition to OpenAI, tech behemoths like Microsoft and Google — burned by earlier releases of their own AI that backfired — are very publicly racing again to roll out AI-infused products and applications. Like OpenAI, they're also trying to figure out their own tweaks and guardrails.)

A kind of World Wide Web consortium, an international organization created in 1994 to set standards for the World Wide Web, would seemingly make sense. Indeed, Murati told Time that "different voices, like philosophers, social scientists, artists, and people from the humanities" should be brought together to answer the many "ethical and philosophical questions that we need to consider."

Newer tools that help people vote on issues could also potentially help. OpenAI CEO Sam Altman is also a co-founder, for example, of a retina-scanning company in Berlin called WorldCoin that wants to make it easy to verify someone's identity. Questions have been raised about the privacy and security implications of WorldCoin's biometric approach, but its potential applications include distributing a global universal basic income, as well as empowering new forms of digital democracy.

Either way, Ovadya is busily trying to persuade all the major AI players that collective intelligence is the way to quickly create boundaries around AI while also giving them needed credibility. Take OpenAI, says Ovadya. "It's getting some flak right now from everyone," including over its perceived liberal bias. "It would be helpful [for the company] to have a really concrete answer" about how it establishes its future policies.

Ovadya similarly points to Stability.AI, the open-source AI company whose CEO, Emad Mostaque, has repeatedly suggested that Stability is more democratic than OpenAI because it is available everywhere, whereas OpenAI is available only in countries right now where it can provide "safe access."

Says Ovadya, "Emad at Stability says he's 'democratizing AI.' Well, wouldn't it be nice to actually be using democratic processes to figure out what people really want?"
