Let’s Analyze OpenAI’s Claims About ChatGPT Energy Use


Sam Altman recently shared a concrete figure for the energy and water consumption of ChatGPT queries. According to his blog post, each query to ChatGPT consumes about 0.34 Wh of electricity (0.00034 kWh) and about 0.000085 gallons of water. That is akin to what a high-efficiency lightbulb uses in a couple of minutes, and roughly one-fifteenth of a teaspoon.
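As a quick sanity check of those comparisons, here is a back-of-the-envelope calculation in Python. The 10 W LED rating and the teaspoons-per-gallon conversion are my assumptions, not figures from the blog post:

```python
# Sanity-check Altman's comparisons for one ChatGPT query.
query_wh = 0.34          # energy per query (Wh), per the blog post
led_watts = 10           # assumed high-efficiency LED bulb rating
gallons = 0.000085       # water per query (gallons), per the blog post
tsp_per_gallon = 768     # US teaspoons per US gallon

led_minutes = query_wh / led_watts * 60
teaspoons = gallons * tsp_per_gallon

print(f"LED runtime: {led_minutes:.1f} min")     # ~2 minutes
print(f"Water: 1/{1 / teaspoons:.0f} teaspoon")  # ~1/15 teaspoon
```

Both comparisons check out: a 10 W bulb runs for about two minutes on 0.34 Wh, and 0.000085 gallons is close to one-fifteenth of a teaspoon.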

This is the first time OpenAI has publicly shared such data, and it adds a crucial data point to ongoing debates about the environmental impact of large AI systems. The announcement sparked widespread discussion, both supportive and skeptical. In this post I analyze the claim and unpack reactions on social media to look at the arguments on both sides.

What Supports the 0.34 Wh Claim?

Let's look at the arguments that lend credibility to OpenAI's number.

1. Independent estimates align with OpenAI’s number

A key reason some consider the figure credible is that it aligns closely with previous third-party estimates. In 2025, research institute Epoch.AI estimated that a single query to GPT-4o consumes roughly 0.0003 kWh of energy, closely matching OpenAI's own number. This assumes GPT-4o uses a mixture-of-experts architecture with 100 billion active parameters and a typical response length of 500 tokens. However, the estimate only accounts for the energy consumed by the GPU servers, and it does not incorporate power usage effectiveness (PUE), as is otherwise customary.
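As a rough illustration of how a FLOP-based estimate in this spirit can be constructed, here is a sketch. The GPU throughput, utilization, and power figures below are my own assumptions for illustration, not Epoch.AI's published methodology:

```python
# Crude FLOP-based energy estimate for one GPT-4o query.
active_params = 100e9    # assumed active parameters (mixture-of-experts)
output_tokens = 500      # typical response length
flops = 2 * active_params * output_tokens  # ~2 FLOPs per parameter per token

gpu_peak_flops = 1e15    # ~1 PFLOPS dense FP16, roughly an H100 (assumption)
utilization = 0.10       # assumed real-world inference utilization
gpu_power_w = 700        # assumed board power under load

seconds = flops / (gpu_peak_flops * utilization)
energy_wh = seconds * gpu_power_w / 3600
print(f"{energy_wh:.2f} Wh per query")  # same order of magnitude as 0.34 Wh
```

With these assumptions the sketch lands around 0.2 Wh, in the same ballpark as both Epoch.AI's 0.3 Wh and OpenAI's 0.34 Wh; the point is the order of magnitude, not the exact value.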

A recent academic study by Jegham et al. (2025) estimates that GPT-4.1 nano uses 0.000454 kWh, o3 uses 0.0039 kWh, and GPT-4.5 uses 0.030 kWh for long prompts (roughly 7,000 words of input and 1,000 words of output).

The agreement between these estimates and OpenAI's data point suggests that OpenAI's figure falls within a reasonable range, at least when focusing only on the stage where the model responds to a prompt (called "inference").

Image by the author

2. OpenAI's number might be plausible at the hardware level

It has been reported that OpenAI serves 1 billion queries per day. Let's consider the math behind how ChatGPT could serve that volume of queries. If that is true, and the energy per query is 0.34 Wh, then the total daily energy would be around 340 megawatt-hours, according to an industry expert. He speculates that this could mean OpenAI supports ChatGPT with about 3,200 servers (assuming Nvidia DGX A100). If 3,200 servers must handle 1 billion daily queries, then each server would have to handle around 3.6 prompts per second. If we assume one instance of ChatGPT's underlying LLM is deployed on each server, and that the average prompt results in 500 output tokens (roughly 375 words, according to OpenAI's rule of thumb), then each server would need to generate around 1,800 tokens per second. Is that realistic?
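The arithmetic can be reproduced in a few lines. All inputs are the assumptions stated in this section (1 billion daily queries, 3,200 DGX A100 servers, 500 output tokens per prompt), not data from OpenAI:

```python
# Back-of-the-envelope server math for ChatGPT inference.
queries_per_day = 1e9
wh_per_query = 0.34
servers = 3200
tokens_per_prompt = 500
seconds_per_day = 24 * 3600

daily_mwh = queries_per_day * wh_per_query / 1e6        # Wh -> MWh
prompts_per_server_s = queries_per_day / servers / seconds_per_day
tokens_per_server_s = prompts_per_server_s * tokens_per_prompt

print(f"Daily energy: {daily_mwh:.0f} MWh")
print(f"Per server: {prompts_per_server_s:.1f} prompts/s, "
      f"{tokens_per_server_s:.0f} tokens/s")
```

This yields 340 MWh per day and roughly 3.6 prompts (about 1,800 tokens) per server per second.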

Stojkovic et al. (2024) were able to achieve a throughput of 6,000 tokens per second from Llama-2-70B on an Nvidia DGX H100 server with 8 H100 GPUs.

However, Jegham et al. (2025) found that three different OpenAI models generated between 75 and 200 tokens per second on average, although it is unclear how they arrived at this.

So it seems we cannot reject the idea that 3,200 servers could be able to handle 1 billion daily queries.

Why some experts are skeptical

Despite the supporting evidence, many remain cautious or critical of the 0.34 Wh figure, raising several key concerns. Let's take a look at them.

1. OpenAI’s number might omit major parts of the system

I believe the number only includes the energy used by the GPU servers themselves, and not the rest of the infrastructure, such as data storage, cooling systems, networking equipment, firewalls, electricity conversion losses, or backup systems. This is a common limitation in energy reporting across tech companies.

For instance, Meta has also reported GPU-only energy numbers in the past. But in real-world data centers, GPU power is only part of the full picture.

2. Server estimates seem low in comparison with industry reports

Some commentators, such as GreenOps advocate Mark Butcher, argue that 3,200 GPU servers seems far too low to support all of ChatGPT's users, especially when you consider global usage, high availability, and other applications beyond casual chat (like coding or image analysis).

Other reports suggest that OpenAI uses tens or even hundreds of thousands of GPUs for inference. If that is true, the total energy use could be much higher than what the 0.34 Wh/query number implies.

3. Lack of detail raises questions

Critics, e.g. David Mytton, also point out that OpenAI's statement lacks basic context. For instance:

  • What exactly is an “average” query? A single query, or a full conversation?
  • Does this figure apply to only one model (e.g., GPT-3.5, GPT-4o) or an average across several?
  • Does it include newer, more complex tasks like multimodal input (e.g., analyzing PDFs or generating images)?
  • Is the water usage number direct (used for cooling servers) or indirect (from electricity sources like hydro power)?
  • What about carbon emissions? That depends heavily on the location and energy mix.

Without answers to these questions, it's hard to know how much trust to place in the number, or how to compare it to other AI systems.

Perspectives

Is big tech finally hearing our prayers?

OpenAI's disclosure comes in the wake of Nvidia's release of data about the embodied emissions of its GPUs, and Google's blog post about the life-cycle emissions of their TPU hardware. This might suggest that the companies are finally responding to the many calls that have been made for more transparency. Are we witnessing the dawn of a new era? Or is Sam Altman just playing tricks on us because it is in his financial interest to downplay the climate impact of his company? I'll leave that question as a thought experiment for the reader.

Inference vs training

Historically, the numbers that have been estimated and reported about AI's energy consumption have related to the training of AI models. And while the training stage can be very energy-intensive, over time, serving billions of queries (inference) can actually use more total energy than training the model in the first place. My own estimates suggest that training GPT-4 may have used around 50-60 million kWh of electricity. With 0.34 Wh per query and 1 billion daily queries, the energy used to answer user queries would surpass the energy use of the training stage after roughly 150-175 days. This lends credibility to the idea that inference energy is worth measuring closely.
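That crossover point can be sketched directly, using the midpoint of my own (rough) 50-60 million kWh training estimate:

```python
# When does cumulative inference energy overtake training energy?
training_kwh = 55e6          # midpoint of my 50-60 million kWh estimate
kwh_per_query = 0.34 / 1000  # 0.34 Wh -> kWh
queries_per_day = 1e9

daily_inference_kwh = kwh_per_query * queries_per_day  # 340,000 kWh/day
crossover_days = training_kwh / daily_inference_kwh
print(f"Inference overtakes training after ~{crossover_days:.0f} days")
```

At 340,000 kWh of inference per day, the crossover lands at around 160 days, or about 145-175 days across the full 50-60 million kWh range.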

Conclusion: A welcome first step, but far from the full picture

Just as we thought the debate about OpenAI's energy use had gotten old, the notoriously closed company stirs it up with the disclosure of this figure. Many are excited about the fact that OpenAI has now entered the debate about the energy and water use of its products, and hope that this is the first step towards greater transparency about the resource draw and climate impact of big tech. On the other hand, many are skeptical of OpenAI's figure, and for good reason: it was disclosed as a parenthesis in a blog post about an entirely different topic, and no context was given at all, as detailed above.

Though we might be witnessing a shift towards more transparency, we still need a lot of information from OpenAI in order to be able to critically assess their 0.34 Wh figure. Until then, it should be taken not just with a grain of salt, but with a handful.


That's it! I hope you enjoyed the article. Let me know what you think!

Follow me for more on AI and sustainability, and feel free to connect on LinkedIn.
