What does the future hold for generative AI?

Speaking at the “Generative AI: Shaping the Future” symposium on Nov. 28, the kickoff event of MIT’s Generative AI Week, keynote speaker and iRobot co-founder Rodney Brooks warned attendees against uncritically overestimating the capabilities of this emerging technology, which underpins increasingly powerful tools like OpenAI’s ChatGPT and Google’s Bard.

“Hype leads to hubris, and hubris leads to conceit, and conceit leads to failure,” cautioned Brooks, who is also a professor emeritus at MIT, a former director of the Computer Science and Artificial Intelligence Laboratory (CSAIL), and founder of Robust.AI.

“No one technology has ever surpassed everything else,” he added.

The symposium, which drew hundreds of attendees from academia and industry to the Institute’s Kresge Auditorium, was laced with messages of hope about the opportunities generative AI offers for making the world a better place, interspersed with cautionary tales about what could go wrong if these AI tools are not developed responsibly.

Generative AI is a term used to describe machine-learning models that learn to generate new material that looks like the data they were trained on. These models have exhibited some incredible capabilities, such as the ability to produce human-like creative writing, translate languages, generate functional computer code, or craft realistic images from text prompts.

In her opening remarks to launch the symposium, MIT President Sally Kornbluth highlighted several projects faculty and students have undertaken to use generative AI to make a positive impact in the world. For example, the work of the Axim Collaborative, an online education initiative launched by MIT and Harvard, includes exploring the educational aspects of generative AI to help underserved students.

The Institute also recently announced seed grants for 27 interdisciplinary faculty research projects centered on how AI will transform people’s lives across society.

In hosting Generative AI Week, MIT hopes not only to showcase this sort of innovation, but also to generate “collaborative collisions” among attendees, Kornbluth said.

Collaboration involving academics, policymakers, and industry will be critical if we are to safely integrate a rapidly evolving technology like generative AI in ways that are humane and help humans solve problems, she told the audience.

“I honestly cannot think of a challenge more closely aligned with MIT’s mission. It is a profound responsibility, but I have every confidence that we can face it, if we face it head on and if we face it as a community,” she said.

While generative AI holds the potential to help solve some of the planet’s most pressing problems, the emergence of these powerful machine-learning models has blurred the distinction between science fiction and reality, said CSAIL Director Daniela Rus in her opening remarks. It is no longer a matter of whether we can make machines that produce new content, she said, but how we can use these tools to enhance businesses and ensure sustainability.

“Today, we’ll discuss the possibility of a future where generative AI not only exists as a technological marvel, but stands as a source of hope and a force for good,” said Rus, who is also the Andrew and Erna Viterbi Professor in the Department of Electrical Engineering and Computer Science.

But before the discussion dove deeply into the capabilities of generative AI, attendees were first asked to ponder their humanity, as MIT Professor Joshua Bennett read an original poem.

Bennett, a professor in the MIT Literature Section and Distinguished Chair of the Humanities, was asked to write a poem about what it means to be human, and drew inspiration from his daughter, who was born three weeks ago.

The poem told of his experiences as a boy watching with his father and touched on the importance of passing traditions down to the next generation.

In his keynote remarks, Brooks set out to unpack some of the deep, scientific questions surrounding generative AI, as well as explore what the technology can tell us about ourselves.

To begin, he sought to dispel some of the mystery swirling around generative AI tools like ChatGPT by explaining the basics of how this large language model works. ChatGPT, for instance, generates text one word at a time by determining what the next word should be in the context of what it has already written. While a human might write a story by thinking about entire phrases, ChatGPT only focuses on the next word, Brooks explained.

ChatGPT 3.5 is built on a machine-learning model, GPT-3.5, that has 175 billion parameters and was exposed to billions of pages of text on the web during training. (The newest iteration, built on GPT-4, is even larger.) It learns correlations between words in this massive corpus of text and uses that knowledge to propose which word might come next when given a prompt.
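To make that one-word-at-a-time loop concrete, here is a minimal sketch in Python. A hard-coded bigram table of invented words and probabilities stands in for the billions of learned parameters, but the generation loop itself works the way Brooks describes: score the candidates for the next word given the current context, pick one, append it, and repeat.

```python
# Toy illustration of next-word generation. A real LLM uses a neural
# network to score candidates over its whole context window; here a
# hand-written bigram table (invented words and probabilities) stands
# in for the trained model.
import random

# P(next_word | current_word) for a tiny made-up vocabulary
BIGRAM = {
    "<start>": {"the": 0.6, "a": 0.4},
    "the":     {"robot": 0.5, "model": 0.5},
    "a":       {"robot": 0.7, "sonnet": 0.3},
    "robot":   {"writes": 0.6, "<end>": 0.4},
    "model":   {"writes": 1.0},
    "sonnet":  {"<end>": 1.0},
    "writes":  {"a": 0.5, "<end>": 0.5},
}

def generate(max_words=10):
    word, out = "<start>", []
    for _ in range(max_words):
        candidates = BIGRAM[word]
        # Sample the next word in proportion to its probability --
        # one word at a time, with no plan for the whole phrase.
        word = random.choices(list(candidates), weights=list(candidates.values()))[0]
        if word == "<end>":
            break
        out.append(word)
    return " ".join(out)

print(generate())
```

Replacing the lookup table with a deep network conditioned on billions of pages of training text is, loosely speaking, what separates this toy from a model like GPT-3.5.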

The model has demonstrated some incredible capabilities, such as the ability to write a sonnet about robots in the style of Shakespeare’s famous Sonnet 18. During his talk, Brooks showcased the sonnet he asked ChatGPT to write side-by-side with his own sonnet.

But while researchers still don’t fully understand exactly how these models work, Brooks assured the audience that generative AI’s seemingly incredible capabilities are not magic, and that doesn’t mean these models can do anything.

His biggest fears about generative AI don’t revolve around models that could someday surpass human intelligence. Rather, he is most worried about researchers who may throw away decades of excellent work that was nearing a breakthrough, just to jump on shiny new advancements in generative AI; venture capital firms that blindly swarm toward technologies that can yield the highest margins; or the possibility that a whole generation of engineers will forget about other forms of software and AI.

At the end of the day, those who believe generative AI can solve the world’s problems and those who believe it will only generate new problems have at least one thing in common: Both groups tend to overestimate the technology, he said.

“What’s the conceit with generative AI? The conceit is that it’s somehow going to lead to artificial general intelligence. By itself, it is not,” Brooks said.

Following Brooks’ presentation, a group of MIT faculty spoke about their work using generative AI and took part in a panel discussion about future advances, important but underexplored research topics, and the challenges of AI regulation and policy.

The panel consisted of Jacob Andreas, an associate professor in the MIT Department of Electrical Engineering and Computer Science (EECS) and a member of CSAIL; Antonio Torralba, the Delta Electronics Professor of EECS and a member of CSAIL; Ev Fedorenko, an associate professor of brain and cognitive sciences and an investigator at the McGovern Institute for Brain Research at MIT; and Armando Solar-Lezama, a Distinguished Professor of Computing and associate director of CSAIL. It was moderated by William T. Freeman, the Thomas and Gerd Perkins Professor of EECS and a member of CSAIL.

The panelists discussed several potential future research directions around generative AI, including the possibility of integrating perceptual systems, drawing on human senses like touch and smell, rather than focusing primarily on language and images. The researchers also spoke about the importance of engaging with policymakers and the public to ensure generative AI tools are produced and deployed responsibly.

“One of the big risks with generative AI today is the risk of digital snake oil. There is a huge risk of a lot of products going out that claim to do miraculous things, but in the long run could be very harmful,” Solar-Lezama said.

The morning session concluded with an excerpt from the 1925 science fiction novel “Metropolis,” read by senior Joy Ma, a physics and theater arts major, followed by a roundtable discussion on the future of generative AI. The discussion included Joshua Tenenbaum, a professor in the Department of Brain and Cognitive Sciences and a member of CSAIL; Dina Katabi, the Thuan and Nicole Pham Professor in EECS and a principal investigator in CSAIL and the MIT Jameel Clinic; and Max Tegmark, a professor of physics. It was moderated by Daniela Rus.

One focus of the discussion was the possibility of developing generative AI models that can go beyond what we can do as humans, such as tools that can sense someone’s emotions by using electromagnetic signals to understand how a person’s breathing and heart rate are changing.
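As a purely illustrative sketch of the signal-processing idea behind that kind of sensing (not the actual method used by any MIT system), the snippet below recovers a breathing rate from a synthetic chest-motion trace by finding the dominant low-frequency peak in its spectrum. The sampling rate, signal frequencies, and noise level are all invented for the example.

```python
# Toy example: estimate breathing rate from a noisy motion signal.
# Synthetic data only; real wireless-sensing systems are far more involved.
import numpy as np

fs = 20.0                      # samples per second (assumed)
t = np.arange(0, 60, 1 / fs)   # one minute of data
# Synthetic chest-motion trace: 0.25 Hz breathing (15 breaths/min)
# plus a faint 1.1 Hz heartbeat component and random noise.
signal = (np.sin(2 * np.pi * 0.25 * t)
          + 0.1 * np.sin(2 * np.pi * 1.1 * t)
          + 0.05 * np.random.randn(t.size))

# Frequency spectrum of the trace
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(signal.size, d=1 / fs)

# Strongest peak within a typical resting breathing band (0.1-0.5 Hz)
band = (freqs >= 0.1) & (freqs <= 0.5)
breathing_hz = freqs[band][np.argmax(spectrum[band])]
print(f"Estimated breathing rate: {breathing_hz * 60:.1f} breaths/min")
```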

But one key to integrating AI like this into the real world safely is to make sure we can trust it, Tegmark said. If we know an AI tool will meet the specifications we insist on, then “we no longer have to be afraid of building really powerful systems that go out and do things for us in the world,” he said.
