A model of virtuosity


A crowd gathered at the MIT Media Lab in September for a concert by musician Jordan Rudess and two collaborators. One of them, violinist and vocalist Camilla Bäckman, has performed with Rudess before. The other — an artificial intelligence model informally dubbed the jam_bot, which Rudess developed with an MIT team over the preceding several months — was making its public debut as a work in progress.

Throughout the show, Rudess and Bäckman exchanged the signals and smiles of experienced musicians finding a groove together. Rudess’ interactions with the jam_bot suggested a different and unfamiliar kind of exchange. During one duet inspired by Bach, Rudess alternated between playing a few measures and allowing the AI to continue the music in a similar baroque style. Each time the model took its turn, a range of expressions moved across Rudess’ face: bemusement, concentration, curiosity. At the end of the piece, Rudess admitted to the audience, “That is a combination of a whole lot of fun and really, really difficult.”

Rudess is an acclaimed keyboardist — the best of all time, according to one Music Radar magazine poll — known for his work with the platinum-selling, Grammy-winning progressive metal band Dream Theater, which embarks this fall on a 40th anniversary tour. He is also a solo artist whose latest album, “Permission to Fly,” was released on Sept. 6; an educator who shares his skills through detailed online tutorials; and the founder of the software company Wizdom Music. His work combines a rigorous classical foundation (he began his piano studies at The Juilliard School at age 9) with a genius for improvisation and an appetite for experimentation.

Last spring, Rudess became a visiting artist with the MIT Center for Art, Science and Technology (CAST), collaborating with the MIT Media Lab’s Responsive Environments research group on the creation of new AI-powered music technology. Rudess’ main collaborators in the venture are Media Lab graduate students Lancelot Blanchard, who researches musical applications of generative AI (informed by his own studies in classical piano), and Perry Naseck, an artist and engineer specializing in interactive, kinetic, light- and time-based media. Overseeing the project is Professor Joseph Paradiso, head of the Responsive Environments group and a longtime Rudess fan. Paradiso arrived at the Media Lab in 1994 with a CV in physics and engineering and a sideline designing and building synthesizers to explore his avant-garde musical tastes. His group has a tradition of investigating musical frontiers through novel user interfaces, sensor networks, and unconventional datasets.

The researchers set out to develop a machine learning model channeling Rudess’ distinctive musical style and technique. In a paper published online by MIT Press in September, co-authored with MIT music technology professor Eran Egozy, they articulate their vision for what they call “symbiotic virtuosity”: for human and computer to duet in real time, learning from each duet they perform together, and making performance-worthy new music in front of a live audience.

Rudess contributed the data on which Blanchard trained the AI model. Rudess also provided continuous testing and feedback, while Naseck experimented with ways of visualizing the technology for the audience.

“Audiences are used to seeing lighting, graphics, and scenic elements at many concerts, so we needed a platform to allow the AI to build its own relationship with the audience,” Naseck says. In early demos, this took the form of a sculptural installation with illumination that shifted each time the AI changed chords. During the concert on Sept. 21, a grid of petal-shaped panels mounted behind Rudess came to life through choreography based on the activity and upcoming generation of the AI model.

“If you see jazz musicians make eye contact and nod at each other, that gives the audience anticipation of what’s going to happen,” says Naseck. “The AI is effectively generating sheet music and then playing it. How do we show what’s coming next and communicate that?”

Naseck designed and programmed the structure from scratch at the Media Lab with help from Brian Mayton (mechanical design) and Carlo Mandolini (fabrication), drawing some of its movements from an experimental machine learning model developed by visiting student Madhav Lavakare that maps music to points moving in space. Able to spin and tilt its petals at speeds ranging from subtle to dramatic, the kinetic sculpture distinguished the AI’s contributions during the concert from those of the human performers, while conveying the emotion and energy of its output: swaying gently when Rudess took the lead, for example, or furling and unfurling like a blossom as the AI model generated stately chords for an improvised adagio. The latter was one of Naseck’s favorite moments of the show.
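As a rough illustration of how the model’s upcoming output might drive such choreography (the energy measure, the note format, and the tilt and spin ranges below are assumptions made for clarity, not the installation’s actual control code):

```python
# Rough illustration of driving kinetic panels from the AI's upcoming notes:
# reduce the next few generated notes to an "energy" value and map it to petal
# tilt and spin speed. Note format, mapping, and ranges are all assumptions.
def phrase_energy(upcoming_notes):
    """Crude 0..1 energy estimate from note density and loudness."""
    if not upcoming_notes:
        return 0.0
    density = min(len(upcoming_notes) / 32.0, 1.0)
    loudness = sum(n["velocity"] for n in upcoming_notes) / (127.0 * len(upcoming_notes))
    return 0.5 * density + 0.5 * loudness

def panel_targets(upcoming_notes, n_panels=16):
    """Map energy to per-panel tilt (degrees) and spin speed (revolutions per second)."""
    energy = phrase_energy(upcoming_notes)
    tilt = 10 + 70 * energy      # gentle sway at low energy, dramatic motion at high
    spin = 0.05 + 0.45 * energy
    return [{"panel": i, "tilt_deg": tilt, "spin_rps": spin} for i in range(n_panels)]
```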

“At the end, Jordan and Camilla left the stage and allowed the AI to fully explore its own direction,” he recalls. “The sculpture made this moment very powerful — it allowed the stage to remain animated and intensified the grandiose nature of the chords the AI played. The audience was clearly captivated by this part, sitting at the edges of their seats.”

“The goal is to create a musical visual experience,” says Rudess, “to show what’s possible and to up the game.”

Musical futures

As the starting point for his model, Blanchard used a music transformer, an open-source neural network architecture developed by MIT Assistant Professor Anna Huang SM ’08, who joined the MIT faculty in September.

“Music transformers work in a similar way to large language models,” Blanchard explains. “The same way that ChatGPT would generate the most probable next word, the model we have would predict the most probable next notes.”
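A minimal sketch of that next-note loop, assuming a MIDI-like token vocabulary and a generic autoregressive transformer (the `model` and `tokenizer` objects here are hypothetical placeholders, not the team’s actual code), could look like this:

```python
# Minimal sketch of autoregressive next-note prediction, analogous to how a
# language model predicts the next word. All object names are placeholders.
import torch

def continue_phrase(model, tokenizer, played_notes, max_new_tokens=64, temperature=1.0):
    """Given the notes just played, sample a probable continuation one token at a time."""
    tokens = torch.tensor(tokenizer.encode(played_notes)).unsqueeze(0)  # [1, T]
    for _ in range(max_new_tokens):
        logits = model(tokens)[:, -1, :]                  # scores for the next token only
        probs = torch.softmax(logits / temperature, dim=-1)
        next_token = torch.multinomial(probs, num_samples=1)
        tokens = torch.cat([tokens, next_token], dim=1)   # append and predict again
    return tokenizer.decode(tokens.squeeze(0).tolist())   # tokens back to playable notes
```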

Blanchard fine-tuned the model using Rudess’ own playing of elements from bass lines to chords to melodies, variations of which Rudess recorded in his New York studio. Along the way, Blanchard ensured the AI would be nimble enough to respond in real time to Rudess’ improvisations.
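A bare-bones sketch of such a fine-tuning step, assuming a standard next-token cross-entropy objective over the tokenized recordings (the model and data objects are placeholders, not the project’s code):

```python
# Bare-bones fine-tuning loop: continue training a pretrained music transformer
# on tokenized recordings (bass lines, chords, melodies). Names are placeholders.
import torch
import torch.nn.functional as F

def finetune(model, token_batches, epochs=3, lr=1e-5):
    optimizer = torch.optim.AdamW(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for batch in token_batches:                # batch: LongTensor [B, T] of note tokens
            inputs, targets = batch[:, :-1], batch[:, 1:]
            logits = model(inputs)                 # [B, T-1, vocab_size]
            loss = F.cross_entropy(logits.reshape(-1, logits.size(-1)), targets.reshape(-1))
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return model
```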

“We reframed the project,” says Blanchard, “in terms of musical futures that were hypothesized by the model and that were only being realized in the moment based on what Jordan was deciding.”
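One way to picture that reframing, strictly as an illustrative sketch (the function names and the similarity-based selection rule are assumptions, not the published system): the model speculatively generates several candidate continuations ahead of time, and only the one that best matches what the musician actually plays gets realized.

```python
# Illustrative sketch of "musical futures": hypothesize several continuations in
# advance, then realize whichever one best fits the live playing. The selection
# rule and all names are assumptions made for illustration.
def hypothesize_futures(generate, context_notes, n_futures=4):
    """Speculatively generate candidate continuations before they are needed."""
    return [generate(context_notes) for _ in range(n_futures)]

def realize_future(futures, live_notes, similarity):
    """When the musician commits to a direction, pick the candidate that fits best."""
    return max(futures, key=lambda future: similarity(future, live_notes))
```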

As Rudess puts it: “How can the AI respond — how can I have a dialogue with it? That’s the cutting-edge part of what we’re doing.”

Another priority emerged: “In the field of generative AI and music, you hear about startups like Suno or Udio that are able to generate music based on text prompts. Those are very interesting, but they lack controllability,” says Blanchard. “It was important for Jordan to be able to anticipate what was going to happen. If he could see the AI was going to make a decision he didn’t want, he could restart the generation or have a kill switch so that he can take control again.”

In addition to giving Rudess a screen previewing the musical decisions of the model, Blanchard built in different modalities the musician could activate as he plays — prompting the AI to generate chords or lead melodies, for example, or initiating a call-and-response pattern.
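Conceptually, that control layer might be sketched like this (the mode names, preview hook, and kill switch flag are assumptions made for illustration, not the actual interface):

```python
# Conceptual sketch of the performer-facing controls described above: a mode
# selector, a preview of the model's pending output, a restart, and a kill
# switch that hands control back to the human. Names are assumptions.
from enum import Enum, auto

class Mode(Enum):
    CHORDS = auto()             # AI generates chordal accompaniment
    LEAD = auto()               # AI generates lead melodies
    CALL_AND_RESPONSE = auto()  # AI answers each human phrase

class JamController:
    def __init__(self, generate):
        self.generate = generate    # callable: (live_notes, mode) -> candidate continuation
        self.mode = Mode.CHORDS
        self.muted = False
        self.pending = None

    def preview(self, live_notes):
        """Show the performer what the AI intends to play next."""
        self.pending = self.generate(live_notes, self.mode)
        return self.pending

    def restart(self, live_notes):
        """Discard an unwanted continuation and generate a fresh one."""
        return self.preview(live_notes)

    def kill_switch(self):
        """Silence the AI immediately and return control to the performer."""
        self.muted = True
        self.pending = None
```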

“Jordan is the mastermind of everything that’s happening,” he says.

What would Jordan do

Though the residency has wrapped up, the collaborators see many paths for continuing the research. For example, Naseck would like to experiment with more ways Rudess could interact directly with his installation, through features like capacitive sensing. “We hope in the future we’ll be able to work with more of his subtle motions and posture,” Naseck says.

While the MIT collaboration focused on how Rudess can use the tool to augment his own performances, it’s easy to imagine other applications. Paradiso recalls an early encounter with the technology: “I played a chord sequence, and Jordan’s model was generating the leads. It was like having a musical ‘bee’ of Jordan Rudess buzzing around the melodic foundation I was laying down, doing something like Jordan would do, but subject to the simple progression I was playing,” he recalls, his face echoing the delight he felt at the time. “You’re going to see AI plugins for your favorite musician that you can bring into your own compositions, with some knobs that let you control the particulars,” he posits. “It’s that kind of world we’re opening up with this.”

Rudess is also keen to explore educational uses. Because the samples he recorded to train the model were similar to ear-training exercises he’s used with students, he thinks the model itself could someday be used for teaching. “This work has legs beyond just entertainment value,” he says.

The foray into artificial intelligence is a natural progression of Rudess’ interest in music technology. “This is the next step,” he believes. When he discusses the work with fellow musicians, however, his enthusiasm for AI often meets with resistance. “I can have sympathy or compassion for a musician who feels threatened, I totally get that,” he allows. “But my mission is to be one of the people who moves this technology toward positive things.”

“At the Media Lab, it’s so important to think about how AI and humans come together for the benefit of all,” says Paradiso. “How is AI going to lift us all up? Ideally it will do what so many technologies have done — bring us into another vista where we’re more enabled.”

“Jordan is ahead of the pack,” Paradiso adds. “Once it’s established with him, people will follow.”

Jamming with MIT

The Media Lab first landed on Rudess’ radar before his residency because he wanted to try out the Knitted Keyboard created by another member of Responsive Environments, textile researcher Irmandy Wicaksono PhD ’24. From that moment on, “It has been a discovery for me, learning about the cool things that are happening at MIT in the music world,” Rudess says.

During two visits to Cambridge last spring (assisted by his wife, theater and music producer Danielle Rudess), Rudess reviewed final projects in Paradiso’s course on electronic music controllers, the syllabus for which included videos of his own past performances. He brought a new gesture-driven synthesizer called Osmose to a class on interactive music systems taught by Egozy, whose credits include the co-creation of the video game “Guitar Hero.” Rudess also provided tips on improvisation to a composition class; played GeoShred, a touchscreen musical instrument he co-created with Stanford University researchers, with student musicians in the MIT Laptop Ensemble and Arts Scholars program; and experienced immersive audio in the MIT Spatial Sound Lab. During his most recent trip to campus in September, he taught a masterclass for pianists in MIT’s Emerson/Harris Program, which provides a total of 67 scholars and fellows with support for conservatory-level musical instruction.

“I get a kind of rush whenever I come to the university,” Rudess says. “I feel the sense that, wow, all of my musical ideas and inspiration and interests have come together in this really cool way.”
