
The creative way forward for generative AI


Few technologies have shown as much potential to shape our future as artificial intelligence. Specialists in fields ranging from medicine to microfinance to the military are evaluating AI tools, exploring how these might transform their work and worlds. For creative professionals, AI poses a novel set of challenges and opportunities, particularly generative AI, which uses algorithms to transform vast amounts of data into new content.

The future of generative AI and its impact on art and design was the topic of a sold-out panel discussion on Oct. 26 at the MIT Bartos Theater. It was part of the annual meeting of the Council for the Arts at MIT (CAMIT), a group of alumni and other supporters of the arts at MIT, and was co-presented by the MIT Center for Art, Science, and Technology (CAST), a cross-school initiative for artist residencies and cross-disciplinary projects.

Introduced by Andrea Volpe, director of CAMIT, and moderated by Onur Yüce Gün SM ’06, PhD ’16, the panel featured multimedia artist and social science researcher Ziv Epstein SM ’19, PhD ’23; MIT professor of architecture and director of the SMArchS and SMArchS AD programs Ana Miljački; and artist and roboticist Alex Reben MAS ’10.


Panel Discussion: How Is Generative AI Transforming Art and Design?
Thumbnail image created using Google DeepMind AI image generator.
Video: Arts at MIT

The discussion centered on three themes: emergence, embodiment, and expectations.

Emergence

Moderator Onur Yüce Gün: In much of your work, what emerges is often a question, an ambiguity, and that ambiguity is inherent in the creative process in art and design. Does generative AI help you reach those ambiguities?

Ana Miljački: In the summer of 2022, the Memorial Cemetery in Mostar [in Bosnia and Herzegovina] was destroyed. It was a post-World War II Yugoslav memorial, and we wanted to find a way to uphold the values the memorial had stood for. We compiled video material from six different monuments and, with AI, created a nonlinear documentary, a triptych playing on three video screens, accompanied by a soundscape. With this project we fabricated an artificial memory, a way to seed those memories and values into the minds of people who never lived them. That is the kind of ambiguity that can be problematic in science, and one that is fascinating for artists and designers. It can also be a bit scary.

Ziv Epstein: There’s some debate over whether generative AI is a tool or an agent. But even if we call it a tool, we need to remember that tools are not neutral. Think about photography. When photography emerged, a lot of painters worried that it meant the end of art. But it turned out that photography freed up painters to do other things. Generative AI is, of course, a different kind of tool because it draws on an enormous quantity of other people’s work. There is already artistic and creative agency embedded in these systems. There are already ambiguities in how these existing works will be represented, and in which cycles and ambiguities we will perpetuate.

Alex Reben: I’m often asked whether these systems are actually creative, in the way that we’re creative. In my own experience, I’ve often been surprised by the outputs I create using AI. I see that I can steer things in a direction that parallels what I might have done alone but is different enough from it, amplified or altered or modified. So there are ambiguities. But we need to remember that the term AI is itself ambiguous. It refers to many different things.

Embodiment

Moderator: Most of us use computers on a daily basis, but we experience the world through our senses, through our bodies. Art and design create tangible experiences. We hear them, see them, touch them. Have we attained the same sensory interaction with AI systems?

Miljački: As long as we’re working in images, we’re working in two dimensions. But for me, at least in the project we did around the Mostar memorial, we were able to produce affect on a number of levels, levels that together produce something larger than a two-dimensional image moving in time. Through images and a soundscape we created a spatial experience in time, a rich sensory experience that goes beyond the two dimensions of the screen.

Reben: I guess embodiment for me means being able to interface with the world, interact with it, and modify it. In one of my projects, we used AI to generate a “Dali-like” image, and then turned it into a three-dimensional object, first with 3D printing, and then by casting it in bronze at a foundry. There was even a patina artist to finish the surface. I cite this example to show just how many humans were involved in the creation of this artwork at the end of the day. There were human fingerprints at every step.

Epstein: The question is, how do we embed meaningful human control into these systems, so that they could be more like, for example, a violin? A violin player has all sorts of causal inputs, physical gestures they can use to transform their artistic intention into outputs, into notes and sounds. Right now we’re far from that with generative AI. Our interaction is essentially typing a bit of text and getting something back. We’re basically yelling at a black box.

Expectations

Moderator: These new technologies are spreading so rapidly, almost like an explosion. And there are enormous expectations around what they can do. Instead of stepping on the gas here, I’d like to test the brakes and ask what these technologies are not going to do. Are there promises they won’t be able to fulfill?

Miljački: I’m hoping that we don’t go to “Westworld.” I understand we do need AI to solve complex computational problems. But I hope it won’t be used to replace thinking. Because as a tool AI is inherently nostalgic. It can only work with what already exists and then produce probable outcomes. And that means it reproduces all the biases and gaps in the archive it has been fed. In architecture, for example, that archive is made up of works by white male European architects. We have to figure out how not to perpetuate that kind of bias, but to question it.

Epstein: In a way, using AI now is like putting on a jetpack and a blindfold. You’re going really fast, but you don’t really know where you’re going. Now that this technology appears capable of doing human-like things, I think it’s a great opportunity for us to think about what it means to be human. My hope is that generative AI can be a kind of ontological wrecking ball, that it can shake things up in a really interesting way.

Reben: I know from history that it’s pretty hard to predict the future of technology. So trying to predict the negative, what won’t happen, with this new technology would be near impossible. If you look back at what we thought we’d have by now, at the predictions that were made, it’s quite different from what we actually have. I don’t think that anyone today can say for certain what AI won’t be able to do in the future. Just as we can’t say what science will be able to do, or humans. The best we can do, for now, is try to steer these technologies toward the future in a way that will be helpful.
