Powerful Language Models Are Supercharging Carbon Emissions, New Study Warns: What's the Solution?

Limit the use of high-capacity models.

Each time you type a prompt into ChatGPT or any other chatbot, a hidden issue comes along with it: the unseen climate cost.

A recent study in Frontiers in Communication finds that large language models (LLMs), the technology behind tools such as ChatGPT, could be accelerating global warming more than expected.

AI Emissions Everywhere

The study, carried out by researchers at Germany's Hochschule München University of Applied Sciences, tested 14 of the most widely used LLMs, ranging from 7 billion to 72 billion parameters.

The results were alarming: some AI models produce as much as 50 times more carbon emissions per query than others. In general, the more accurate and sophisticated a model, the more energy it consumes and the more it emits.

For instance, reasoning-oriented models such as GPT-4o typically produce longer, more deliberative responses, which require far more processing. That processing, in turn, consumes large amounts of electricity and water, two resources under growing strain in a warming world.

Thinking Tokens vs. Concise Responses

To understand why these emissions differ so widely, the study examined how LLMs process information. When you enter a prompt, an LLM breaks it down into tokens, small units of meaning.
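To see what tokenization looks like in practice, here is a minimal sketch using the open-source tiktoken library. It is purely illustrative: tiktoken is the tokenizer used by OpenAI models, and the open models in the study use their own tokenizers.

```python
# Minimal illustration of tokenization using the open-source tiktoken library.
# The models in the study use their own tokenizers; this only shows the idea.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # encoding used by several OpenAI models
tokens = enc.encode("Why do large language models emit so much carbon?")

print(tokens)                                # list of integer token IDs
print(len(tokens), "tokens")
print([enc.decode([t]) for t in tokens])     # the text piece each token represents
```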

Reasoning-capable models insert additional "thinking tokens" to perform more in-depth computations, enabling them to generate more considered responses.

But this extra deliberation comes at a cost. Reasoning models averaged 543.5 tokens per question, compared with 37.7 tokens for concise models. More tokens mean more energy used and more CO₂ released, Gizmodo writes.
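As a rough illustration of why token count matters, emissions per query scale with how many tokens a model generates. The sketch below is a back-of-envelope calculation, not the study's methodology; the per-token energy figure and grid carbon intensity are placeholder assumptions chosen only to show the arithmetic, while the token counts are the study's averages.

```python
# Back-of-envelope sketch: how token count drives per-query emissions.
# The two constants below are illustrative assumptions, NOT figures from the study.

ENERGY_PER_TOKEN_WH = 0.002   # assumed energy per generated token, in watt-hours
GRID_CO2_G_PER_WH = 0.4       # assumed grid carbon intensity, grams CO2 per watt-hour

def co2_grams_per_query(tokens_per_answer: float) -> float:
    """Estimate grams of CO2 for one answer, assuming emissions scale with tokens."""
    return tokens_per_answer * ENERGY_PER_TOKEN_WH * GRID_CO2_G_PER_WH

concise = co2_grams_per_query(37.7)     # average tokens for concise models (study figure)
reasoning = co2_grams_per_query(543.5)  # average tokens for reasoning models (study figure)

print(f"concise:   {concise:.3f} g CO2 per answer")
print(f"reasoning: {reasoning:.3f} g CO2 per answer")
print(f"ratio:     {reasoning / concise:.1f}x more emissions")  # ~14x, from token counts alone
```

Whatever constants you plug in, the ratio between the two models stays the same, which is why longer, more "thoughtful" answers carry a proportionally larger climate cost.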

For example, Cogito, a model with 70 billion parameters, achieved 84.9% accuracy on the study's 1,000-question benchmark. But it also emitted roughly three times more CO₂ than similarly sized models that gave faster, less precise answers.

Subject Complexity Also Drives Emissions

The study also found that the type of question asked matters. Questions that demand complex reasoning, such as those in philosophy or abstract algebra, produced six times more emissions than questions on simpler topics like geography or basic science.

This finding reinforces the idea that AI's environmental toll is not just a hardware issue, but also a usage issue.

Striking the Balance Between Accuracy and Sustainability

Maximilian Dauner, one of the study's authors, put it this way: "Currently, we see a clear accuracy-sustainability trade-off inherent in LLM technologies."

None of the models that kept emissions below 500 grams of CO₂ equivalent achieved more than 80% accuracy.

In other words, smarter AI may carry a steeper environmental price, unless developers find new ways to improve efficiency without sacrificing quality.

Smart AI Usage Is the Key

The research challenges us to be more discerning about what we ask of AI systems. If you routinely feed chatbots long paragraphs or entire blog posts, trimming those requests can lower your carbon footprint substantially. Save the high-capacity models for the tasks that really need them.
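One practical way to act on that advice is to route requests by difficulty, sending simple questions to a small, low-emission model and reserving the large one for genuinely hard prompts. The sketch below is hypothetical: the model names and the keyword heuristic are stand-ins, not part of the study or of any specific API.

```python
# Hypothetical sketch of "smart AI usage": route easy prompts to a small model
# and reserve the large, higher-emission model for genuinely hard questions.
# Model names and the difficulty heuristic are illustrative stand-ins.

HARD_TOPICS = ("prove", "derive", "abstract algebra", "philosophy", "optimize")

def pick_model(prompt: str) -> str:
    """Return the name of the model to use, based on a crude difficulty check."""
    looks_hard = len(prompt) > 400 or any(kw in prompt.lower() for kw in HARD_TOPICS)
    return "large-reasoning-model" if looks_hard else "small-concise-model"

if __name__ == "__main__":
    print(pick_model("What is the capital of France?"))                           # small-concise-model
    print(pick_model("Prove that every finite group of prime order is cyclic."))  # large-reasoning-model
```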
