A “ChatGPT for spreadsheets” helps solve difficult engineering challenges faster

Many engineering challenges come down to the same headache: too many knobs to turn and too few chances to test them. Whether tuning a power grid or designing a safer vehicle, each evaluation can be costly, and there may be hundreds of variables that could matter.

Consider automotive safety design. Engineers must integrate thousands of components, and many design decisions can affect how a vehicle performs in a collision. Classical optimization tools can struggle to find the best combination.

MIT researchers developed a new approach that rethinks how a classic method, known as Bayesian optimization, can be used to solve problems with hundreds of variables. In tests on realistic engineering-style benchmarks, like power-system optimization, the approach found top solutions 10 to 100 times faster than widely used methods.

Their technique leverages a foundation model trained on tabular data that automatically identifies the variables that matter most for improving performance, repeating the process to home in on better and better solutions. Foundation models are huge artificial intelligence systems trained on vast, general datasets, which allows them to adapt to many different applications.

The researchers’ tabular foundation model doesn’t need to be continuously retrained as it works toward a solution, increasing the efficiency of the optimization process. The technique also delivers greater speedups for more complicated problems, so it could be especially useful in demanding applications like materials development or drug discovery.

“Modern AI and machine-learning models can fundamentally change the way engineers and scientists create complex systems. We came up with an algorithm that can not only solve high-dimensional problems, but is also reusable, so it can be applied to many problems without the need to start everything from scratch,” says Rosen Yu, a graduate student in computational science and engineering and lead author of a paper on this method.

Yu is joined on the paper by Cyril Picard, a former MIT postdoc and research scientist, and Faez Ahmed, associate professor of mechanical engineering and a core member of the MIT Center for Computational Science and Engineering. The research will be presented at the International Conference on Learning Representations.

Improving a proven method

When scientists want to solve a multifaceted problem but each evaluation of success is expensive, like crash-testing a car to see how good each design is, they often use a tried-and-true method called Bayesian optimization. This iterative method finds the best configuration for a complicated system by building a surrogate model that helps estimate what to explore next while accounting for the uncertainty of its predictions.
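The loop itself is straightforward to sketch. Below is a toy illustration, not the researchers’ implementation: a tiny Gaussian-process surrogate with an upper-confidence-bound acquisition rule, choosing from a fixed grid of candidate designs.

```python
import numpy as np

def rbf_kernel(A, B, length=0.3):
    # Squared-exponential kernel between the rows of A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * length ** 2))

def gp_posterior(X, y, Xs, noise=1e-6):
    # Gaussian-process posterior mean/variance at candidates Xs,
    # given observed evaluations (X, y). This is the surrogate model.
    K = rbf_kernel(X, X) + noise * np.eye(len(X))
    Ks = rbf_kernel(X, Xs)
    mu = Ks.T @ np.linalg.solve(K, y)
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks), axis=0)
    return mu, np.maximum(var, 0.0)

def bayes_opt(f, candidates, n_init=3, n_iter=10, beta=2.0, seed=0):
    rng = np.random.default_rng(seed)
    X = candidates[rng.choice(len(candidates), n_init, replace=False)]
    y = np.array([f(x) for x in X])
    for _ in range(n_iter):
        mu, var = gp_posterior(X, y, candidates)
        # Upper confidence bound: favor high predicted value
        # (exploitation) and high uncertainty (exploration).
        nxt = candidates[np.argmax(mu + beta * np.sqrt(var))]
        X = np.vstack([X, nxt])
        y = np.append(y, f(nxt))
    return X[np.argmax(y)], y.max()

# Toy "expensive" objective, maximized at x = 0.7.
f = lambda x: -(x[0] - 0.7) ** 2
cands = np.linspace(0.0, 1.0, 101).reshape(-1, 1)
best_x, best_y = bayes_opt(f, cands)
```

In real settings each call to `f` might be a crash simulation, which is why keeping the number of evaluations small matters so much.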

But the surrogate model must be retrained after each iteration, which can quickly become computationally intractable when the space of potential solutions is very large. In addition, scientists need to build a new model from scratch any time they want to tackle a different scenario.

To address both shortcomings, the MIT researchers used a generative AI system known as a tabular foundation model as the surrogate model inside a Bayesian optimization algorithm.

“A tabular foundation model is like a ChatGPT for spreadsheets. The input and output of these models are tabular data, which in the engineering domain is much more common to see and use than language,” Yu says.

Just like large language models such as ChatGPT, Claude, and Gemini, the model has been pretrained on a vast amount of tabular data. This makes it well-equipped to tackle a range of prediction problems. In addition, the model can be deployed as-is, without the need for any retraining.
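The no-retraining property comes from in-context learning: the pretrained model conditions on the rows it is handed at prediction time rather than updating any weights. A minimal sketch of that interface, with simple kernel regression standing in for the actual pretrained network (an assumed simplification, not the model used in the paper):

```python
import numpy as np

def in_context_predict(context_X, context_y, query_X, length=0.5):
    # Nadaraya-Watson kernel regression: there is no fitting step.
    # Predictions are computed directly from the context rows at
    # inference time, mimicking how a tabular foundation model
    # conditions on observed data instead of being retrained.
    d2 = ((query_X[:, None, :] - context_X[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2 * length ** 2))
    w /= w.sum(axis=1, keepdims=True)
    return w @ context_y

# Three observed (design, score) rows serve as the context.
X = np.array([[0.0], [0.5], [1.0]])
y = np.array([0.0, 1.0, 0.0])
pred = in_context_predict(X, y, np.array([[0.5]]))
```

Adding a new observation just means appending a row to the context, so each optimization step avoids the retraining cost that an ordinary surrogate would incur.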

To make their system more accurate and efficient for optimization, the researchers employed a technique that enables the model to identify the features of the design space that have the largest impact on the solution.

“A car might have 300 design criteria, but not all of them are the main driver of the best design if you are trying to increase some safety parameters. Our algorithm can smartly select the most critical features to focus on,” Yu says.

It does this by using the tabular foundation model to estimate which variables, or combinations of variables, most influence the outcome. It then focuses the search on those high-impact variables instead of wasting time exploring everything equally. For instance, if increasing the size of the front crumple zone improved the car’s safety rating, that feature likely played a role in the improvement.
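As a hypothetical illustration of the idea, a one-at-a-time sensitivity screen can rank variables by how much perturbing each one moves the objective. (This is a deliberate simplification: it spends extra objective evaluations, whereas the paper derives importance from the foundation model’s predictions precisely to avoid that cost.)

```python
import numpy as np

def screen_dimensions(f, dim, n_samples=64, top_k=2, seed=0):
    # Perturb one variable at a time around random base points and
    # rank dimensions by the average change in the objective.
    rng = np.random.default_rng(seed)
    base = rng.random((n_samples, dim))
    scores = np.zeros(dim)
    for d in range(dim):
        bumped = base.copy()
        bumped[:, d] = rng.random(n_samples)  # resample only dimension d
        scores[d] = np.mean([abs(f(a) - f(b))
                             for a, b in zip(base, bumped)])
    # Keep the dimensions whose perturbation moved the output most.
    return np.argsort(scores)[::-1][:top_k]

# 10-variable toy objective where only x0 and x3 actually matter.
f = lambda x: -(x[0] - 0.7) ** 2 - 2 * (x[3] - 0.2) ** 2
important = screen_dimensions(f, dim=10)
```

The subsequent search can then be restricted to the returned dimensions, shrinking a 10-variable problem to a 2-variable one.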

Bigger problems, better solutions

One of their biggest challenges was finding the best tabular foundation model for this task, Yu says. Then they had to connect it with a Bayesian optimization algorithm in such a way that it could identify the most prominent design features.

“Finding the most prominent dimension is a well-known problem in math and computer science, but coming up with a method that leveraged the properties of a tabular foundation model was a real challenge,” Yu says.

With the algorithmic framework in place, the researchers tested their method by comparing it to five state-of-the-art optimization algorithms.

On 60 benchmark problems, including realistic scenarios like power grid design and car crash testing, their method consistently found the best solution between 10 and 100 times faster than the other algorithms.

“As an optimization problem gains more and more dimensions, our algorithm really shines,” Yu adds.

But their method didn’t outperform the baselines on every problem, such as robotic path planning. This likely indicates that such scenarios were not well-represented in the model’s training data, Yu says.

In the future, the researchers want to study methods that could boost the performance of tabular foundation models. They also want to apply their technique to problems with thousands or even millions of dimensions, like the design of a naval ship.

“At a higher level, this work points to a broader shift: using foundation models not just for perception or language, but as algorithmic engines inside scientific and engineering tools, allowing classical methods like Bayesian optimization to scale to regimes that were previously impractical,” says Ahmed.

“The approach presented in this work, using a pretrained foundation model together with high-dimensional Bayesian optimization, is a creative and promising way to reduce the heavy data requirements of simulation-based design. Overall, this work is a practical and powerful step toward making advanced design optimization more accessible and easier to use in real-world settings,” says Wei Chen, the Wilson-Cook Professor in Engineering Design and chair of the Department of Mechanical Engineering at Northwestern University, who was not involved in this research.
