
4 trends that changed AI in 2023

Existential risk has become one of the biggest memes in AI. The hypothesis is that one day we will build an AI that is far smarter than humans, and this could lead to grave consequences. It’s an ideology championed by many in Silicon Valley, including Ilya Sutskever, OpenAI’s chief scientist, who played a pivotal role in ousting OpenAI CEO Sam Altman (and then reinstating him a few days later). 

But not everyone agrees with this idea. Meta’s AI leaders Yann LeCun and Joelle Pineau have said that these fears are “ridiculous” and the conversation about AI risks has become “unhinged.” Many other power players in AI, such as researcher Joy Buolamwini, say that focusing on hypothetical risks distracts from the very real harms AI is causing today. 

Nevertheless, the increased attention on the technology’s potential to cause extreme harm has prompted many important conversations about AI policy and animated lawmakers all over the world to take action. 

4. The days of the AI Wild West are over

Thanks to ChatGPT, everyone from the US Senate to the G7 was talking about AI policy and regulation this year. In early December, European lawmakers wrapped up a busy policy year when they agreed on the AI Act, which will introduce binding rules and standards on how to develop the riskiest AI more responsibly. It will also ban certain “unacceptable” applications of AI, such as police use of facial recognition in public places. 

The White House, meanwhile, introduced an executive order on AI, plus voluntary commitments from leading AI companies. Its efforts aimed to bring more transparency and standards for AI and gave a lot of freedom to agencies to adapt AI rules to fit their sectors. 

One concrete policy proposal that got a lot of attention was watermarks—invisible signals in text and images that can be detected by computers, in order to flag AI-generated content. These could be used to track plagiarism or help fight disinformation, and this year we saw research that succeeded in applying them to AI-generated text and images.
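
As a rough, hypothetical illustration of the general idea (not the actual scheme from any particular paper or product), one family of text watermarks works by nudging a model toward a pseudo-randomly chosen “green” set of tokens during generation; a detector then checks whether green tokens show up more often than chance would predict. The short Python sketch below shows only the detection side, with a made-up hashing rule and a simple z-score threshold standing in for the real statistics.

import hashlib
import math

GREEN_FRACTION = 0.5  # assumed fraction of the vocabulary treated as "green"

def is_green(prev_token: str, token: str) -> bool:
    # Deterministically assign each (previous token, token) pair to the green set,
    # seeded by the previous token so the split looks random but is reproducible.
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).hexdigest()
    return int(digest, 16) % 100 < GREEN_FRACTION * 100

def green_score(text: str) -> float:
    # Return a z-score: how far the observed count of green tokens
    # exceeds what unwatermarked (human) text would produce by chance.
    tokens = text.split()
    if len(tokens) < 2:
        return 0.0
    hits = sum(is_green(prev, tok) for prev, tok in zip(tokens, tokens[1:]))
    n = len(tokens) - 1
    expected = n * GREEN_FRACTION
    std = math.sqrt(n * GREEN_FRACTION * (1 - GREEN_FRACTION))
    return (hits - expected) / std

# Human text should score near zero; text from a generator that was biased
# toward green tokens would score well above it (e.g., a z-score over 4).
print(round(green_score("the quick brown fox jumps over the lazy dog"), 2))

Real systems use far more careful statistics and have to survive paraphrasing and editing, which is why making watermarks robust is still an open research problem.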

It wasn’t just lawmakers who were busy; lawyers were too. We saw a record number of lawsuits, as artists and writers argued that AI companies had scraped their intellectual property without their consent and with no compensation. In an exciting counter-offensive, researchers at the University of Chicago developed Nightshade, a new data-poisoning tool that lets artists fight back against generative AI by messing up training data in ways that could cause serious damage to image-generating AI models. There’s a resistance brewing, and I expect more grassroots efforts to shift tech’s power balance next year. 

Deeper Learning

Now we know what OpenAI’s superalignment team has been up to

OpenAI has announced the first results from its superalignment team, its in-house initiative dedicated to preventing a superintelligence—a hypothetical future AI that can outsmart humans—from going rogue. The team is led by chief scientist Ilya Sutskever, who was part of the group that just last month fired OpenAI’s CEO, Sam Altman, only to reinstate him a few days later.

Business as usual: Unlike many of the company’s announcements, this one heralds no big breakthrough. In a low-key research paper, the team describes a technique that lets a less powerful large language model supervise a more powerful one—and suggests that this might be a small step toward figuring out how humans might supervise superhuman machines. Read more from Will Douglas Heaven. 

Bits and Bytes

Google DeepMind used a large language model to solve an unsolvable math problem
In a paper published in Nature, the company says it is the first time a large language model has been used to discover a solution to a long-standing scientific puzzle—producing verifiable and valuable new information that did not previously exist. (MIT Technology Review)
