To excel at engineering design, generative AI must learn to innovate, study finds
ChatGPT and other deep generative models are proving to be uncanny mimics. These AI supermodels can churn out poems, finish symphonies, and create new videos and images by learning from millions of examples of previous works. These enormously powerful and versatile tools excel at generating new content that resembles everything they've seen before.

But as MIT engineers say in a new study, similarity isn't enough if you want to truly innovate in engineering tasks.

"Deep generative models (DGMs) are very promising, but also inherently flawed," says study author Lyle Regenwetter, a mechanical engineering graduate student at MIT. "The objective of these models is to mimic a dataset. But as engineers and designers, we often don't want to create a design that's already out there."

He and his colleagues make the case that if mechanical engineers want help from AI to generate novel ideas and designs, they will first have to refocus those models beyond "statistical similarity."

"The performance of a lot of these models is explicitly tied to how statistically similar a generated sample is to what the model has already seen," says co-author Faez Ahmed, assistant professor of mechanical engineering at MIT. "But in design, being different could be essential if you want to innovate."

In their study, Ahmed and Regenwetter reveal the pitfalls of deep generative models when they are tasked with solving engineering design problems. In a case study of bicycle frame design, the team shows that these models end up generating new frames that mimic previous designs but falter on engineering performance and requirements.

When the researchers presented the same bicycle frame problem to DGMs that they specifically designed with engineering-focused objectives, rather than only statistical similarity, these models produced more innovative, higher-performing frames.

The team's results show that similarity-focused AI models don't quite translate when applied to engineering problems. But, as the researchers also highlight in their study, with some careful planning of task-appropriate metrics, AI models could be an effective design "co-pilot."

"This is about how AI can help engineers be better and faster at creating innovative products," Ahmed says. "To do that, we have to first understand the requirements. This is one step in that direction."

The team's new study appeared recently online, and will be in the December print edition of the journal . The research is a collaboration between computer scientists at the MIT-IBM Watson AI Lab and mechanical engineers in MIT's DeCoDe Lab. The study's co-authors include Akash Srivastava and Dan Gutreund of the MIT-IBM Watson AI Lab.

Framing a problem

As Ahmed and Regenwetter write, DGMs are "powerful learners, boasting unparalleled ability" to process huge amounts of data. DGM is a broad term for any machine-learning model that is trained to learn a distribution of data and then use it to generate new, statistically similar content. The enormously popular ChatGPT is one kind of deep generative model known as a large language model, or LLM, which incorporates natural language processing capabilities to enable the app to generate realistic imagery and speech in response to conversational queries. Other popular models for image generation include DALL-E and Stable Diffusion.
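The core idea, that a generative model fits the distribution of its training data and then samples statistically similar content from it, can be shown with a deliberately tiny sketch. Here a fitted Gaussian stands in for a deep model, and the two "design parameters" are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "dataset" of designs, each a vector of two hypothetical
# parameters (e.g., tube length and tube angle).
data = rng.normal(loc=[50.0, 30.0], scale=[2.0, 1.0], size=(1000, 2))

# A minimal generative model: fit the empirical distribution,
# then sample new "designs" from it.
mean = data.mean(axis=0)
cov = np.cov(data, rowvar=False)
samples = rng.multivariate_normal(mean, cov, size=1000)

# The generated samples are statistically similar to the training
# data -- which is exactly the objective the study critiques.
print(np.allclose(samples.mean(axis=0), mean, atol=0.5))  # True
```

A deep generative model replaces the Gaussian with a learned neural network, but the training signal is the same: match the data distribution, not any engineering requirement.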

Thanks to their ability to learn from data and generate realistic samples, DGMs have been increasingly applied in multiple engineering domains. Designers have used deep generative models to draft new aircraft frames, metamaterial designs, and optimal geometries for bridges and cars. But for the most part, the models have mimicked existing designs, without improving on the performance of existing designs.

"Designers who are working with DGMs are kind of missing this cherry on top, which is adjusting the model's training objective to focus on the design requirements," Regenwetter says. "So, people end up generating designs that are very similar to the dataset."

In the new study, he outlines the main pitfalls in applying DGMs to engineering tasks, and shows that the fundamental objective of standard DGMs does not take into account specific design requirements. To illustrate this, the team invokes a simple case of bicycle frame design and demonstrates that problems can crop up as early as the initial learning phase. As a model learns from thousands of existing bike frames of various sizes and shapes, it might consider two frames of similar dimensions to have similar performance, when in reality a small disconnect in one frame, too small to register as a significant difference in statistical similarity metrics, makes the frame much weaker than the other, visually similar frame.
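To make that failure mode concrete, here is a toy numerical example (hypothetical frame parameters and an invented stiffness check, not the study's actual metrics): two frames can sit almost on top of each other in parameter space while one of them is structurally useless.

```python
import numpy as np

# Two hypothetical frames described by the same parameters except a
# 1 mm gap at a tube joint in frame_b: [length, angle, joint_gap_mm].
frame_a = np.array([50.0, 30.0, 0.0])
frame_b = np.array([50.0, 30.0, 1.0])

# By a typical statistical-similarity view (distance in parameter
# space), the two frames are nearly identical.
distance = np.linalg.norm(frame_a - frame_b)

def stiffness(frame):
    """Invented structural check: any joint gap breaks load transfer."""
    length, angle, gap = frame
    return 0.0 if gap > 0 else 1000.0 / length

print(distance)            # 1.0 -- tiny in parameter space
print(stiffness(frame_a))  # 20.0
print(stiffness(frame_b))  # 0.0 -- visually similar, much weaker
```

A similarity-driven model sees almost no difference between these frames; a structural analysis sees a complete failure.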

Beyond “vanilla”

An animation depicting transformations across common bicycle designs. 

Credit: Courtesy of the researchers

The researchers carried the bicycle example forward to see what designs a DGM would actually generate after learning from existing designs. They first tested a conventional "vanilla" generative adversarial network, or GAN, a model that has been widely used in image and text synthesis and is tuned simply to generate statistically similar content. They trained the model on a dataset of thousands of bicycle frames, including commercially manufactured designs and less conventional, one-off frames designed by hobbyists.

Once the model learned from the data, the researchers asked it to generate hundreds of new bike frames. The model produced realistic designs that resembled existing frames. But none of the designs showed significant improvement in performance, and some were even a bit inferior, with heavier, less structurally sound frames.

The team then carried out the same test with two other DGMs that were specifically designed for engineering tasks. The first model is one that Ahmed previously developed to generate high-performing airfoil designs. He built this model to prioritize statistical similarity as well as functional performance. When applied to the bike frame task, this model generated realistic designs that also were lighter and stronger than existing designs. But it also produced physically "invalid" frames, with components that didn't quite fit or overlapped in physically impossible ways.

"We saw designs that were significantly better than the dataset, but also designs that were geometrically incompatible because the model wasn't focused on meeting design constraints," Regenwetter says.

The last model the team tested was one that Regenwetter built to generate new geometric structures. This model was designed with the same priorities as the previous models, with the added ingredient of design constraints, prioritizing physically viable frames, for instance, with no disconnections or overlapping bars. This last model produced the highest-performing designs that were also physically feasible.
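One way to picture such an engineering-focused objective is a weighted sum that keeps the similarity term but rewards performance and heavily penalizes constraint violations. This is a sketch with made-up weights and scalar stand-ins, not the authors' actual loss:

```python
def design_loss(similarity_loss, performance, constraint_violations,
                w_sim=1.0, w_perf=0.5, w_con=10.0):
    """Toy design objective: lower is better.

    Keeps a statistical-similarity term, rewards engineering
    performance, and heavily penalizes invalid geometry.
    All weights are illustrative, not from the study.
    """
    return (w_sim * similarity_loss
            - w_perf * performance
            + w_con * constraint_violations)

# A valid, high-performing design scores lower (better) than a
# slightly more dataset-like design with broken geometry.
valid = design_loss(similarity_loss=0.3, performance=8.0,
                    constraint_violations=0)
invalid = design_loss(similarity_loss=0.1, performance=9.0,
                      constraint_violations=2)
print(valid < invalid)  # True
```

In a real DGM these terms would be differentiable and folded into the generator's training loss; the point is simply that similarity becomes one term among several rather than the whole objective.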

"We found that when a model goes beyond statistical similarity, it can come up with designs that are better than the ones that are already out there," Ahmed says. "It's a proof of what AI can do, if it is explicitly trained on a design task."

For instance, if DGMs can be built with other priorities, such as performance, design constraints, and novelty, Ahmed foresees "many engineering fields, such as molecular design and civil infrastructure, would greatly benefit. By shedding light on the potential pitfalls of relying solely on statistical similarity, we hope to encourage new pathways and strategies in generative AI applications outside multimedia."
