
Generative AI risks concentrating Big Tech’s power. Here’s how to stop it

If regulators don’t act now, the generative AI boom will concentrate Big Tech’s power even further. That’s the central argument of a new report from the research institute AI Now. And it makes sense. To understand why, consider that the current AI boom depends on two things: large amounts of data, and enough computing power to process it.

Both of these resources are only really available to big tech companies. And although some of the most exciting applications, such as OpenAI’s chatbot ChatGPT and Stability.AI’s image-generation AI Stable Diffusion, are created by startups, those startups depend on deals with Big Tech that give them access to its vast data and computing resources.

“A couple of big tech firms are poised to consolidate power through AI, rather than democratize it,” says Sarah Myers West, managing director of the research nonprofit AI Now Institute.

Right now, Big Tech has a chokehold on AI. But Myers West believes we’re actually at a watershed moment. It’s the start of a new tech hype cycle, and that means lawmakers and regulators have a unique opportunity to ensure that the next decade of AI technology is more democratic and fair.

What separates this tech boom from previous ones is that we have a better understanding of all the catastrophic ways AI can go awry. And regulators everywhere are paying close attention.

China just unveiled a draft bill on generative AI calling for more transparency and oversight, while the European Union is negotiating the AI Act, which will require tech companies to be more transparent about how generative AI systems work. The EU is also planning a bill to make tech companies liable for AI harms.

The US has traditionally been reluctant to regulate its tech sector. But that’s changing. The Biden administration is seeking input on ways to oversee AI models such as ChatGPT, for example by requiring tech companies to produce audits and impact assessments, or by requiring AI systems to meet certain standards before they’re released. It’s one of the most concrete steps the administration has taken to curb AI harms.

Meanwhile, Federal Trade Commission (FTC) chair Lina Khan has also highlighted Big Tech’s advantage in data and computing power, and has vowed to ensure competition in the AI industry. The agency has dangled the specter of antitrust investigations and crackdowns on deceptive business practices.

This new focus on the AI sector is partly influenced by the fact that many members of the AI Now Institute, including Myers West, have spent time at the FTC to bring technical expertise to the agency.

Myers West says her stint there taught her that AI regulation doesn’t have to start from a blank slate. Instead of waiting for AI-specific regulations, such as the EU’s AI Act, which will take years to put into place, regulators should ramp up enforcement of existing data protection and competition laws.

Because AI as we know it today is largely dependent on massive amounts of data, data policy is also artificial intelligence policy, says Myers West.

Case in point: ChatGPT has faced intense scrutiny from European and Canadian data protection authorities, and it has been blocked in Italy for allegedly scraping personal data off the web illegally and misusing personal data.

The call for regulation is not just coming from government officials. Something interesting has happened: after decades of fighting regulation tooth and nail, today most tech companies, including OpenAI, claim they welcome it.

The big question everyone’s still fighting over is how AI should be regulated. Tech companies claim they support regulation, but they’re still pursuing a “release first, ask questions later” approach when it comes to launching AI-powered products. They are rushing to release image- and text-generating AI models as products despite these models’ major flaws: they make up nonsense, perpetuate harmful biases, infringe copyright, and contain security vulnerabilities.

The White House’s proposal to tackle AI accountability with measures that apply after a product has launched, such as algorithmic audits, is not enough to mitigate AI harms, AI Now’s report argues. Stronger, swifter action is needed to ensure that companies first prove their models are fit for release, Myers West says.

“We should be very wary of approaches that don’t put the burden on companies. There are a lot of approaches to regulation that essentially put the onus on the broader public and on regulators to root out AI-enabled harms,” says Myers West.

And importantly, Myers West says, regulators need to take action swiftly.

“There need to be consequences when [tech companies] violate the law.”

Deeper Learning

How AI is helping historians better understand our past

This is cool. Historians have started using machine learning to examine historical documents smudged by centuries spent in mildewed archives. They’re using these techniques to restore ancient texts, and they’re making significant discoveries along the way.

Connecting the dots: Historians say the application of modern computer science to the distant past helps draw broader connections across the centuries than would otherwise be possible. But there is a risk that these computer programs introduce distortions of their own, slipping bias or outright falsifications into the historical record. Read more from Moira Donovan here.

Bits and Bytes

Google is overhauling Search to compete with AI rivals  
Threatened by Microsoft’s relative success with AI-powered Bing search, Google is building a new search engine that uses large language models, and it is upgrading its existing search engine with AI features. It hopes the new search engine will offer users a more personalized experience. (The New York Times)

Elon Musk has created a new AI company to rival OpenAI
Over the past few months, Musk has been trying to hire researchers to join his new AI venture, X.AI. Musk was one of OpenAI’s co-founders, but he was ousted in 2018 after a power struggle with CEO Sam Altman. Musk has accused OpenAI’s chatbot ChatGPT of being politically biased, and he has said he wants to create “truth-seeking” AI models. What does that mean? Your guess is as good as mine. (The Wall Street Journal)

Stability.AI is at risk of going under
Stability.AI, the creator of the open-source image-generating AI model Stable Diffusion, just released a new version of the model that is slightly more photorealistic. But the business is in trouble. It’s burning through cash fast, struggling to generate revenue, and staff are losing faith in the company’s CEO. (Semafor)

Meet the world’s worst AI program
The bot on Chess.com, depicted as a turtleneck-wearing Bulgarian man with bushy eyebrows, a thick beard, and a slightly receding hairline, is designed to be absolutely awful at chess. While other AI bots are programmed to dazzle, Martin is a reminder that even dumb AI systems can still surprise, delight, and teach us things. (The Atlantic)
