How do you solve a problem like out-of-control AI? 


Last week Google revealed it is going all in on generative AI. At its annual I/O conference, the company announced it plans to embed AI tools into virtually all of its products, from Google Docs to coding and online search. (Read my story here.) 

Google’s announcement is a big deal. Billions of people will now get access to powerful, cutting-edge AI models to help them do all kinds of tasks, from generating text to answering questions to writing and debugging code. As MIT Technology Review’s editor in chief, Mat Honan, writes in his analysis of I/O, it is clear AI is now Google’s core product. 

Google’s approach is to introduce these new features into its products gradually. But it will most likely be only a matter of time before things start to go awry. The company has not solved any of the common problems with these AI models. They still make stuff up. They are still easy to manipulate into breaking their own rules. They are still vulnerable to attacks. There is very little stopping them from being used as tools for disinformation, scams, and spam. 

Because these kinds of AI tools are relatively new, they still operate in a largely regulation-free zone. But that doesn’t feel sustainable. Calls for regulation are growing louder as the post-ChatGPT euphoria wears off, and regulators are starting to ask tough questions about the technology. 

US regulators are trying to find a way to govern powerful AI tools. This week, OpenAI CEO Sam Altman will testify in the US Senate (after a cozy “educational” dinner with politicians the night before). The hearing follows a meeting last week between Vice President Kamala Harris and the CEOs of Alphabet, Microsoft, OpenAI, and Anthropic.

In a statement, Harris said the companies have an “ethical, moral, and legal responsibility” to ensure that their products are safe. Senator Chuck Schumer of New York, the majority leader, has proposed legislation to regulate AI, which could include a new agency to enforce the rules. 

“Everybody wants to be seen to be doing something. There’s a lot of social anxiety about where all this is going,” says Jennifer King, a privacy and data policy fellow at the Stanford Institute for Human-Centered Artificial Intelligence. 

Getting bipartisan support for a new AI bill will be difficult, King says: “It will depend on to what extent [generative AI] is being seen as a real, societal-level threat.” But the chair of the Federal Trade Commission, Lina Khan, has come out “guns blazing,” she adds. Earlier this month, Khan wrote an op-ed calling for AI regulation now, to prevent a repeat of the errors that arose from being too lax with the tech sector in the past. She signaled that in the US, regulators are more likely to use existing laws already in their tool kit to regulate AI, such as antitrust and commercial practices laws. 

Meanwhile, in Europe, lawmakers are edging closer to a final deal on the AI Act. Last week members of the European Parliament signed off on a draft regulation that calls for a ban on facial recognition technology in public places. It also bans predictive policing, emotion recognition, and the indiscriminate scraping of biometric data online. 

The EU is set to create more rules to constrain generative AI too, and the parliament wants companies creating large AI models to be more transparent. These measures include labeling AI-generated content, publishing summaries of the copyrighted data that was used to train the model, and establishing safeguards that would prevent models from generating illegal content.

But here’s the catch: the EU is still a long way from implementing rules on generative AI, and many of the proposed elements of the AI Act are not going to make it into the final version. There are still tough negotiations left between the parliament, the European Commission, and the EU member countries. It will be years until we see the AI Act in force.

While regulators struggle to get their act together, prominent voices in tech are starting to push the Overton window. Speaking at an event last week, Microsoft’s chief economist, Michael Schwarz, said that we should wait until we see “meaningful harm” from AI before we regulate it. He compared it to driver’s licenses, which were introduced after many dozens of people were killed in accidents. “There has to be at least a little bit of harm so that we see what is the real problem,” Schwarz said. 

This statement is outrageous. The harm caused by AI has been well documented for years. There has been bias and discrimination, AI-generated fake news, and scams. Other AI systems have led to innocent people being arrested, people being trapped in poverty, and tens of thousands of people being wrongfully accused of fraud. These harms are likely to grow exponentially as generative AI is integrated deeper into our society, thanks to announcements like Google’s. 

The question we should be asking ourselves is: How much harm are we willing to see? I’d say we’ve seen enough.

Deeper Learning

The open-source AI boom is built on Big Tech’s handouts. How long will it last?

New open-source large language models—alternatives to Google’s Bard or OpenAI’s ChatGPT that researchers and app developers can study, build on, and modify—are dropping like candy from a piñata. These are smaller, cheaper versions of the best-in-class AI models created by the big firms that (almost) match them in performance—and they’re shared for free.

The future of how AI is made and used is at a crossroads. On one hand, greater access to these models has helped drive innovation. It can also help catch their flaws. But this open-source boom is precarious. Most open-source releases still stand on the shoulders of giant models put out by big firms with deep pockets. If OpenAI and Meta decide to close up shop, a boomtown could become a backwater. Read more from Will Douglas Heaven.

Bits and Bytes

Amazon is working on a secret home robot with ChatGPT-like features
Leaked documents show plans for an updated version of the Astro robot that can remember what it’s seen and understood, allowing people to ask it questions and give it commands. But Amazon has to solve a lot of problems before these models are safe to deploy inside people’s homes at scale. (Insider) 

Stability AI has released a text-to-animation model 
The company that created the open-source text-to-image model Stable Diffusion has launched another tool that lets people create animations using text, image, and video prompts. Copyright problems aside, these tools could become powerful aids for creatives, and the fact that they’re open source makes them accessible to more people. It’s also a stopgap before the inevitable next step, open-source text-to-video. (Stability AI)

AI is getting sucked into culture wars—see the Hollywood writers’ strike
One of the disputes between the Writers Guild of America and Hollywood studios is whether people should be allowed to use AI to write film and TV scripts. With wearying predictability, the US culture-war brigade has stepped into the fray. Online trolls are gleefully telling striking writers that AI will replace them. (New York Magazine)

Watch: An AI-generated trailer for  … but make it Wes Anderson 
This was cute. 
