MIT News
The energy demands of generative AI are expected to continue increasing dramatically over the next decade.
For example, an April 2025 report from the International Energy Agency predicts that global electricity demand from data centers, which house the computing infrastructure to train and deploy AI models, will more than double by 2030, to around 945 terawatt-hours. While not all operations performed in a data center are AI-related, this total amount is slightly more than the energy consumption of Japan.
Moreover, an August 2025 analysis from Goldman Sachs Research forecasts that about 60 percent of the increasing electricity demands from data centers will be met by burning fossil fuels, increasing global carbon emissions by about 220 million tons. In comparison, driving a gas-powered car for 5,000 miles produces about 1 ton of carbon dioxide.
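To put that figure in perspective, a quick back-of-the-envelope sketch, using only the numbers quoted above, shows the added emissions equal roughly 220 million of those 5,000-mile drives:

```python
# Back-of-the-envelope check using only the figures quoted above.
added_emissions_tons = 220e6        # projected extra CO2 from data centers
tons_per_5000_mile_drive = 1        # one gas-powered car driven 5,000 miles

equivalent_drives = added_emissions_tons / tons_per_5000_mile_drive
equivalent_miles = equivalent_drives * 5_000

print(f"{equivalent_drives:,.0f} five-thousand-mile drives")  # 220,000,000
print(f"{equivalent_miles:,.0f} total miles driven")          # ~1.1 trillion
```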
These statistics are staggering, but at the same time, scientists and engineers at MIT and around the world are studying innovations and interventions to mitigate AI’s ballooning carbon footprint, from boosting the efficiency of algorithms to rethinking the design of data centers.
Considering carbon emissions
Talk of reducing generative AI’s carbon footprint is typically centered on “operational carbon” — the emissions produced by the powerful processors, known as GPUs, inside a data center. It often ignores “embodied carbon,” the emissions created by building the data center in the first place, says Vijay Gadepally, senior scientist at MIT Lincoln Laboratory, who leads research projects in the Lincoln Laboratory Supercomputing Center.
Constructing and retrofitting a data center, built from tons of steel and concrete and filled with air conditioning units, computing hardware, and miles of cable, consumes a huge amount of carbon. In fact, the environmental impact of building data centers is one reason companies like Meta and Google are exploring more sustainable building materials. (Cost is another factor.)
Plus, data centers are enormous buildings — the world’s largest, the China Telecom-Inner Mongolia Information Park, encompasses roughly 10 million square feet — with about 10 to 50 times the energy density of a normal office building, Gadepally adds.
“The operational side is only part of the story. Some things we are working on to reduce operational emissions may lend themselves to reducing embodied carbon, too, but we need to do more on that front in the future,” he says.
Reducing operational carbon emissions
When it comes to reducing the operational carbon emissions of AI data centers, there are many parallels with home energy-saving measures. For one, we can simply turn down the lights.
“Even if you have the worst lightbulbs in your house from an efficiency standpoint, turning them off or dimming them will always use less energy than leaving them running at full blast,” Gadepally says.
In the same fashion, research from the Supercomputing Center has shown that “turning down” the GPUs in a data center so they consume about three-tenths the energy has minimal impacts on the performance of AI models, while also making the hardware easier to cool.
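On NVIDIA hardware, this kind of “turning down” is typically done by lowering the GPU’s power cap. A minimal sketch of what that might look like, assuming administrative access; the 100-watt cap is an arbitrary example, not the setting used in the MIT research:

```python
import subprocess

# Illustrative only: lower the power cap on GPU 0 via nvidia-smi.
# Changing the limit usually requires administrator privileges.
TARGET_WATTS = 100

subprocess.run(
    ["nvidia-smi", "-i", "0", "--power-limit", str(TARGET_WATTS)],
    check=True,  # raises CalledProcessError if the command fails
)

# Read back the current limit to confirm the change took effect.
result = subprocess.run(
    ["nvidia-smi", "-i", "0",
     "--query-gpu=power.limit", "--format=csv,noheader"],
    capture_output=True, text=True, check=True,
)
print("Current power limit:", result.stdout.strip())
```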
Another strategy is to use less energy-intensive computing hardware.
Demanding generative AI workloads, such as training new reasoning models like GPT-5, often need many GPUs working simultaneously. The Goldman Sachs analysis estimates that a state-of-the-art system could soon have as many as 576 connected GPUs operating at once.
But engineers can sometimes achieve similar results by reducing the precision of computing hardware, perhaps by switching to less powerful processors that have been tuned to handle a specific AI workload.
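Reduced precision can also be chosen in software. As a rough sketch (not a description of any system named in this article), PyTorch can run a model’s forward pass in 16-bit bfloat16 arithmetic rather than 32-bit floats, trading a little numerical precision for less energy and memory traffic per operation:

```python
import torch

# A tiny stand-in model; any torch.nn.Module works the same way.
# Requires a CUDA-capable GPU.
model = torch.nn.Sequential(
    torch.nn.Linear(512, 512),
    torch.nn.ReLU(),
    torch.nn.Linear(512, 10),
).cuda()

inputs = torch.randn(32, 512, device="cuda")

# Run the forward pass in bfloat16 instead of float32: matrix multiplies
# execute in lower precision, using less energy and memory bandwidth.
with torch.autocast(device_type="cuda", dtype=torch.bfloat16):
    outputs = model(inputs)

print(outputs.dtype)  # torch.bfloat16
```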
There are also measures that boost the efficiency of training power-hungry deep-learning models before they are deployed.
Gadepally’s group found that about half the electricity used for training an AI model is spent to get the last 2 or 3 percentage points in accuracy. Stopping the training process early can save a lot of that energy.
“There might be cases where 70 percent accuracy is good enough for one particular application, like a recommender system for e-commerce,” he says.
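In code, this sort of “good enough” early stopping can be as simple as checking validation accuracy after each epoch and halting once a target is met. A minimal sketch, with a simulated accuracy curve standing in for real training:

```python
# Minimal "stop when it's good enough" early-stopping sketch.
# The accuracy curve is simulated so the example runs on its own;
# in real training it would come from evaluating the model each epoch.

TARGET_ACCURACY = 0.70   # application-specific "good enough" threshold
MAX_EPOCHS = 100

def validation_accuracy(epoch: int) -> float:
    """Stand-in for a real evaluation: accuracy climbs, then plateaus."""
    return 0.95 * (1 - 0.9 ** (epoch + 1))

for epoch in range(MAX_EPOCHS):
    accuracy = validation_accuracy(epoch)  # in practice: train, then evaluate
    if accuracy >= TARGET_ACCURACY:
        # Stop here rather than spend roughly half the training energy
        # chasing the last few percentage points of accuracy.
        print(f"Stopping at epoch {epoch}: accuracy {accuracy:.1%}")
        break
```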
Researchers can also take advantage of efficiency-boosting measures.
For instance, a postdoc in the Supercomputing Center realized the group might run a thousand simulations during the training process to pick the two or three best AI models for their project.
By building a tool that allowed them to avoid about 80 percent of those wasted computing cycles, they dramatically reduced the energy demands of training without any reduction in model accuracy, Gadepally says.
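The article doesn’t detail how that tool works, but the general pattern (score every candidate model on a short, cheap run and give full training only to the front-runners) can be sketched as follows, with random numbers standing in for the real evaluations:

```python
import random

random.seed(0)
NUM_CANDIDATES = 1000

# Stand-in scores from short, cheap "preview" training runs.
# In practice these would come from briefly training each candidate.
preview_scores = {c: random.random() for c in range(NUM_CANDIDATES)}

# Keep only the most promising 20 percent for full training,
# skipping roughly 80 percent of the expensive computing cycles.
ranked = sorted(preview_scores, key=preview_scores.get, reverse=True)
finalists = ranked[: NUM_CANDIDATES // 5]

def train_fully(candidate: int) -> float:
    """Stand-in for the full, expensive training run."""
    return preview_scores[candidate] + random.gauss(0, 0.05)

best_models = sorted(finalists, key=train_fully, reverse=True)[:3]
print("Best candidates after full training:", best_models)
```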
Leveraging efficiency improvements
Constant innovation in computing hardware, such as denser arrays of transistors on semiconductor chips, is still enabling dramatic improvements in the energy efficiency of AI models.
Even though energy efficiency improvements have been slowing for most chips since about 2005, the amount of computation that GPUs can do per joule of energy has been improving by 50 to 60 percent each year, says Neil Thompson, director of the FutureTech Research Project at MIT’s Computer Science and Artificial Intelligence Laboratory and a principal investigator at MIT’s Initiative on the Digital Economy.
“The still-ongoing ‘Moore’s Law’ trend of getting more and more transistors on chip still matters for a lot of these AI systems, since running operations in parallel is still very valuable for improving efficiency,” says Thompson.
Even more significant, his group’s research indicates that efficiency gains from new model architectures that can solve complex problems faster, consuming less energy to achieve the same or better results, are doubling every eight or nine months.
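Those two trends compound quickly. A small back-of-the-envelope calculation, assuming the hardware and algorithmic gains simply multiply (an assumption made here for illustration, not a claim from Thompson’s group):

```python
# Rough compounding of the two efficiency trends quoted above.
hardware_gain_per_year = 1.55      # midpoint of "50 to 60 percent" per year
algorithm_doubling_months = 8.5    # midpoint of "eight or nine months"

algorithm_gain_per_year = 2 ** (12 / algorithm_doubling_months)
combined_gain_per_year = hardware_gain_per_year * algorithm_gain_per_year

print(f"Algorithmic gain per year: ~{algorithm_gain_per_year:.1f}x")  # ~2.7x
print(f"Combined gain per year:    ~{combined_gain_per_year:.1f}x")   # ~4.1x
```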
Thompson coined the term “negaflop” to describe this effect. The same way a “negawatt” represents electricity saved due to energy-saving measures, a “negaflop” is a computing operation that doesn’t need to be performed due to algorithmic improvements.
These could be things like “pruning” away unnecessary components of a neural network or employing compression techniques that enable users to do more with less computation.
“If you need to use a really powerful model today to complete your task, in just a few years, you might be able to use a significantly smaller model to do the same thing, which would carry much less environmental burden. Making these models more efficient is the single most important thing you can do to reduce the environmental costs of AI,” Thompson says.
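Pruning, for instance, is supported directly in common deep-learning frameworks. A minimal PyTorch sketch, with the 50 percent sparsity level chosen arbitrarily for illustration:

```python
import torch
import torch.nn.utils.prune as prune

# A small stand-in layer; the same calls apply to layers of larger models.
layer = torch.nn.Linear(1024, 1024)

# Zero out the 50 percent of weights with the smallest magnitude.
# Each zeroed weight is a multiplication that no longer needs to happen
# (a "negaflop"), provided the runtime takes advantage of the sparsity.
prune.l1_unstructured(layer, name="weight", amount=0.5)
prune.remove(layer, "weight")  # make the pruned weights permanent

sparsity = (layer.weight == 0).float().mean().item()
print(f"Fraction of weights pruned: {sparsity:.0%}")
```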
Maximizing energy savings
While reducing the overall energy use of AI algorithms and computing hardware will cut greenhouse gas emissions, not all energy is the same, Gadepally adds.
“The amount of carbon emissions in 1 kilowatt hour varies quite significantly, even just during the day, as well as over the month and year,” he says.
Engineers can take advantage of these variations by leveraging the flexibility of AI workloads and data center operations to maximize emissions reductions. For instance, some generative AI workloads don’t need to be performed in their entirety at the same time.
Splitting computing operations so some are performed later, when more of the electricity fed into the grid is from renewable sources like solar and wind, can go a long way toward reducing a data center’s carbon footprint, says Deepjyoti Deka, a research scientist at the MIT Energy Initiative.
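One way to picture this kind of carbon-aware scheduling: given an hourly forecast of the grid’s carbon intensity, run deferrable jobs in the cleanest hours. The sketch below uses invented numbers purely for illustration:

```python
# Carbon-aware scheduling sketch with invented hourly numbers.
# grid_intensity[h] = grams of CO2 per kilowatt-hour forecast for hour h.
grid_intensity = [
    420, 410, 400, 390, 380, 350,   # overnight
    300, 250, 200, 160, 140, 130,   # solar ramps up
    120, 125, 140, 180, 240, 320,   # afternoon into evening peak
    400, 430, 450, 440, 430, 425,
]

deferrable_hours = 6   # hours of flexible work that can run any time today
job_draw_kw = 500      # assumed average power draw of the flexible job

# Pick the cleanest hours of the day for the flexible workload.
cleanest = sorted(range(24), key=lambda h: grid_intensity[h])[:deferrable_hours]
print("Run deferrable work during hours:", sorted(cleanest))

# Rough emissions comparison against starting the same work at midnight.
shifted_kg = sum(grid_intensity[h] for h in cleanest) * job_draw_kw / 1000
naive_kg = sum(grid_intensity[h] for h in range(deferrable_hours)) * job_draw_kw / 1000
print(f"Estimated CO2: {shifted_kg:.0f} kg shifted vs {naive_kg:.0f} kg naive")
```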
Deka and his team are also studying “smarter” data centers where the AI workloads of multiple companies using the same computing equipment are flexibly adjusted to improve energy efficiency.
“By looking at the system as a whole, our hope is to minimize energy use as well as dependence on fossil fuels, while still maintaining reliability standards for AI companies and users,” Deka says.
He and others at MITEI are building a flexibility model of a data center that considers the differing energy demands of training a deep-learning model versus deploying that model. Their hope is to uncover the best strategies for scheduling and streamlining computing operations to improve energy efficiency.
The researchers are also exploring the use of long-duration energy storage units at data centers, which store excess energy for times when it is needed.
With these systems in place, a data center could use stored energy that was generated by renewable sources during a high-demand period, or avoid the use of diesel backup generators if there are fluctuations in the grid.
“Long-duration energy storage could be a game-changer here because we can design operations that really change the emissions mix of the system to rely more on renewable energy,” Deka says.
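A toy dispatch rule makes the idea concrete: charge the storage when the grid is clean, draw it down when the grid is dirty, and fall back to the grid (or a diesel generator) only when the battery is empty. All numbers below are invented:

```python
# Toy dispatch policy for a long-duration storage unit at a data center.
hourly_intensity = [120, 130, 150, 300, 420, 450, 430, 380, 200, 140]  # g CO2/kWh
CLEAN_THRESHOLD = 200      # treat the grid as "clean" at or below this level
capacity_kwh = 4000
charge_kwh = 0
hourly_load_kwh = 500      # data center demand each hour

for hour, intensity in enumerate(hourly_intensity):
    if intensity <= CLEAN_THRESHOLD:
        # Clean grid: serve the load from the grid and bank extra energy.
        charge_kwh = min(capacity_kwh, charge_kwh + hourly_load_kwh)
        source = "grid (clean), charging storage"
    elif charge_kwh >= hourly_load_kwh:
        # Dirty grid: serve the load from storage instead.
        charge_kwh -= hourly_load_kwh
        source = "storage"
    else:
        # Last resort: dirty grid power (or a diesel backup generator).
        source = "grid (dirty)"
    print(f"hour {hour}: {source}, state of charge {charge_kwh} kWh")
```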
In addition, researchers at MIT and Princeton University are developing a software tool for investment planning in the power sector, called GenX, which could be used to help companies determine the ideal place to locate a data center to minimize environmental impacts and costs.
Location can have a big impact on reducing a data center’s carbon footprint. For instance, Meta operates a data center in Lulea, a city on the coast of northern Sweden where cooler temperatures reduce the amount of electricity needed to cool computing hardware.
Thinking farther outside the box (way farther), some governments are even exploring the construction of data centers on the moon, where they could potentially be operated with nearly all renewable energy.
AI-based solutions
Currently, the expansion of renewable energy generation here on Earth isn’t keeping pace with the rapid growth of AI, which is one major roadblock to reducing its carbon footprint, says Jennifer Turliuk MBA ’25, a short-term lecturer, former Sloan Fellow, and former practice leader of climate and energy AI at the Martin Trust Center for MIT Entrepreneurship.
The local, state, and federal review processes required for new renewable energy projects can take years.
Researchers at MIT and elsewhere are exploring the use of AI to speed up the process of connecting new renewable energy systems to the power grid.
For instance, a generative AI model could streamline interconnection studies that determine how a new project will impact the power grid, a step that often takes years to complete.
And when it comes to accelerating the development and implementation of clean energy technologies, AI could play a major role.
“Machine learning is great for tackling complex situations, and the electrical grid is said to be one of the largest and most complex machines in the world,” Turliuk adds.
For instance, AI could help optimize the prediction of solar and wind energy generation or identify ideal locations for new facilities.
It could also be used to perform predictive maintenance and fault detection for solar panels or other green energy infrastructure, or to monitor the capacity of transmission wires to maximize efficiency.
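As a toy illustration of the fault-detection idea, here is a simple statistical baseline with invented numbers (a trained model would refine this, but the flagging logic is the same in spirit):

```python
# Toy fault detection for a small solar array: flag panels whose daily
# output falls well below the fleet median. All numbers are invented.
daily_output_kwh = {
    "panel_01": 4.1, "panel_02": 4.3, "panel_03": 4.0,
    "panel_04": 2.1,  # under-performer, perhaps shaded or faulty
    "panel_05": 4.2, "panel_06": 4.4,
}

outputs = sorted(daily_output_kwh.values())
median_kwh = outputs[len(outputs) // 2]

flagged = [p for p, kwh in daily_output_kwh.items() if kwh < 0.7 * median_kwh]
print("Panels needing inspection:", flagged)  # ['panel_04']
```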
By helping researchers gather and analyze huge amounts of data, AI could also inform targeted policy interventions aimed at getting the biggest “bang for the buck” from areas such as renewable energy, Turliuk says.
To help policymakers, scientists, and enterprises consider the multifaceted costs and benefits of AI systems, she and her collaborators developed the Net Climate Impact Score.
The score is a framework that can be used to help determine the net climate impact of AI projects, considering emissions and other environmental costs along with potential environmental benefits down the line.
At the end of the day, the most effective solutions will likely result from collaborations among companies, regulators, and researchers, with academia leading the way, Turliuk adds.
“Every day counts. We are on a path where the effects of climate change won’t be fully known until it is too late to do anything about it. This is a once-in-a-lifetime opportunity to innovate and make AI systems less carbon-intense,” she says.