
AI and The Environment: How to Balance Sustainability with Optimization

A robot stands on a rock in an outdoor space surrounded by grass. (Photo by: Zamani Sahudi).
Reading Time: 5 minutes

While AI offers numerous sustainability benefits, including but not limited to optimization, efficiency, and forecasting, it can have an adverse effect on the environment because of the energy, water, and carbon it requires. 

According to AI sustainability expert Sasha Luccioni, generating text with AI uses roughly 30 times more energy than simply extracting it from a source via a search engine.

This concern is especially relevant now, given the rapid adoption of GenAI and the growing regulatory and public scrutiny of AI over the past few years. 

Considering these concerns, let’s explore AI’s impact on sustainability initiatives. 

In this post, we’ll cover:

  • Why AI’s environmental impact is often invisible. 
  • Resource allocation and measurement: where the biggest gains are.
  • Tools, frameworks, and real-time data: turning insight into action.
  • Energy-saving techniques: what companies can do today. 
  • Governance, cooperation, and the role of policy. 
  • Conclusion.

Why AI’s Environmental Impact Is Often Invisible 

To begin, AI sustainability is not just a technical issue—it’s a systems issue.

Organizations often underestimate AI’s environmental impact for several reasons. 

First of all, cloud computing conceals physical infrastructure. While this makes cloud services easier for customers to use, it also makes it difficult to measure exactly how many resources are consumed. So, while energy and water costs can feel virtual, they’re very much real. 

Also, at scale, the emissions from the daily use of AI systems outweigh training emissions, so organizations often fail to account for all of the emissions AI produces. While training is a one-time event, inference happens every day, and employees rarely track it. This lack of accurate inference data also leads companies to underestimate how many emissions they really create. 

Finally, data centers affect water-stressed regions and draw on energy grids with different levels of carbon intensity. Data centers in water-stressed regions often rely on water-cooling technologies and disclose little about how much water they use, and the needs of tech companies are sometimes favored over the health of local communities. Meanwhile, grids that rely heavily on fossil fuels vary widely in how much carbon they expend per unit of power. The result is an unevenly distributed global environmental burden. 

All of these factors combined make measuring AI’s impact on the planet feel nearly impossible. 


Resource Allocation and Measurement: Where the Biggest Gains Are 

Considering these gaps in measurement, it’s crucial that we determine how to allocate our resources. 

To start, it’s important to consider whether to use edge or cloud deployment. 

Edge devices are physically closer to the user, while cloud services run on an expansive global network of remote servers. Deploying large AI models on edge (local) devices, rather than in the cloud, could cut costs and energy use, according to EY.

You also want to think about whether you need task-specific models or general-purpose LLMs. Task-specific models carry out a particular function efficiently, while general-purpose LLMs handle a broad range of tasks at a much higher resource cost. 

Also, consider whether you need image generation or text generation capabilities. Images take more resources to generate than text, and general-purpose AI applications consume more resources than smaller ones designed for particular tasks, according to EY. Large Language Models (LLMs) also use a great deal of power because they have more parameters, which allows them to learn patterns and relationships in data and expand their knowledge. The more knowledge they store, the more power they need, according to MIT computer scientist Noman Bashir in a Science News article. 
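To make the parameter-count point concrete: a common back-of-the-envelope rule holds that a transformer forward pass costs roughly 2 FLOPs per parameter per generated token. Under that assumption, and with an illustrative hardware-efficiency figure (both are placeholders, not sourced measurements), a minimal sketch shows how energy scales directly with model size:

```python
# Back-of-the-envelope sketch of why parameter count drives power use.
# The ~2 FLOPs per parameter per token rule of thumb and the hardware
# efficiency figure are illustrative assumptions, not measured values.

def inference_energy_joules(params: float, tokens: int,
                            flops_per_joule: float = 1e12) -> float:
    """Estimate the energy of one text-generation request.

    params          -- model parameter count
    tokens          -- number of tokens generated
    flops_per_joule -- assumed effective hardware efficiency
    """
    flops = 2 * params * tokens  # ~2 FLOPs per parameter per token
    return flops / flops_per_joule

small = inference_energy_joules(params=7e9, tokens=500)   # 7B-parameter model
large = inference_energy_joules(params=70e9, tokens=500)  # 70B-parameter model
print(f"7B model:  ~{small:.0f} J per request")
print(f"70B model: ~{large:.0f} J per request")
```

Even with made-up constants, the ratio is the takeaway: a 10x larger model costs roughly 10x the energy per request, which is why right-sizing the model matters.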

It’s important to note that bigger models don’t always produce better outcomes. In fact, a smaller model may be a better use of your resources if your solution targets a narrow domain or a smaller scale. In addition, it’s crucial to realize that overusing GenAI could increase your footprint without giving your business a positive ROI. 

Aside from navigating computing, model, and deployment choices, one must consider the measurement gap often present in sustainability models. 

Companies are already taking strides to minimize their environmental impacts through frameworks such as the European Commission’s Corporate Sustainability Reporting Directive, the Carbon Disclosure Project, and the Global Reporting Initiative. 

Furthermore, AI-specific policies such as the EU AI Act and the National Institute of Standards and Technology framework have also taken shape. 

What’s lacking in these policies, however, is the measurement and operational frameworks needed to quantify our environmental impact, according to EY. 

To solve this measuring problem, EY suggests the following:

  • Calculating carbon emissions from counting operations and model parameters or tokens. 
  • Measuring electricity usage based on hardware efficiency, using data provided by cloud providers. 
  • Measuring water usage. (This is especially important since AI systems require vast amounts of water for cooling, which leaves areas already vulnerable to water loss under greater stress.)
  • Determining the company’s total carbon footprint by calculating direct emissions while also noting the challenge of determining indirect emissions.  

These measurements may not be perfect at first; the sheer amount of water it takes to cool data centers, coupled with the emissions generated by training models, makes energy usage challenging to calculate. However, with persistence and precision, companies can get closer to accurately measuring their footprint. 
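The suggestions above can be combined into a rough first-pass estimator. The sketch below is hypothetical: the per-token energy figure, grid carbon intensity, and water-per-kWh factor are placeholder constants, not published numbers, and a real estimate would substitute values from your cloud provider and regional grid.

```python
# Hypothetical sketch combining the measurement ideas above: energy from
# token counts, carbon from grid intensity, water from a cooling factor.
# All three constants are illustrative placeholders, not sourced figures.

ENERGY_PER_1K_TOKENS_KWH = 0.002  # assumed energy per 1,000 tokens
GRID_CARBON_KG_PER_KWH = 0.4      # assumed grid carbon intensity
WATER_LITERS_PER_KWH = 1.8        # assumed data-center cooling water use

def estimate_footprint(tokens: int) -> dict:
    """Return a rough energy / carbon / water estimate for a workload."""
    energy_kwh = tokens / 1000 * ENERGY_PER_1K_TOKENS_KWH
    return {
        "energy_kwh": energy_kwh,
        "carbon_kg": energy_kwh * GRID_CARBON_KG_PER_KWH,
        "water_liters": energy_kwh * WATER_LITERS_PER_KWH,
    }

# e.g. a month of inference traffic totaling 5 million tokens
print(estimate_footprint(5_000_000))
```

Note that this covers only direct usage; as EY points out, indirect (supply-chain and embodied) emissions are much harder to pin down.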

Tools, Frameworks, and Real-Time Data: Turning Insight Into Action

The good news is that organizations are already making headway in their sustainability efforts. ESG Dive advises companies to use real-time data to make informed decisions about energy investments, supply chain operations, and long-term sustainability efforts. Furthermore, they note that AI makes sustainability data more accessible: it allows operators and non-environmental professionals to see complex scenarios through intuitive interfaces, and it can quantify the value of opportunities such as low-carbon products. 

ESG Dive also notes that a 2024 PwC survey indicated that while 63% of top-performing companies cited “leveraging AI” as a reason for increasing cloud budgets, 34% of top-performing companies that increased cloud budgets did so for sustainability reasons. This is good news for companies aspiring to be environmentally friendly. 

Returning to Luccioni: she created a tool called CodeCarbon that runs alongside AI systems and measures their energy use and carbon emissions. She says a tool like this can help leaders make smarter choices about which models to use and deploy models powered by renewable energy. 

CodeCarbon isn’t the only sustainability-focused tool out there; Hugging Face runs a leaderboard called AI Energy Score that rates models based on how much energy they use across 10 different tasks, such as image and text generation, according to Science News.

Throughout this process, it’s important to benchmark AI carbon, water, and energy use in order to look objectively at the technology’s performance gaps, savings, and risks. 


Energy-Saving Techniques: What Companies Can Do Today 

While measurement tools are being built, we can take steps in the meantime to ensure we’re using less energy. Bashir suggests using AI less during the daytime and summer, when power demand increases and cooling systems work harder.

Also, Maximilian Dauner from the Munich University of Applied Sciences in Germany strongly recommends using no more words than necessary in AI prompts, since every extra word consumes more power.

In addition, think about choosing the right model for the size of your task. This will ensure you’re not using any more energy than necessary.

Lastly, align your workloads with green energy availability. This can help you stay on a more sustainable path for your business in the long term.
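The idea of aligning workloads with green energy can be sketched as carbon-aware scheduling: given an hourly carbon-intensity forecast for your grid, run deferrable jobs in the cleanest window. The forecast values below are made up for illustration; in practice they would come from your utility or a grid-data provider.

```python
# Minimal sketch of carbon-aware scheduling: given a (hypothetical) hourly
# carbon-intensity forecast, pick the cleanest hour for a deferrable AI job.
# The forecast numbers are illustrative, not real grid data.

forecast = {                   # hour of day -> gCO2 per kWh (assumed)
    0: 220, 3: 180, 6: 150,    # overnight / early morning: lower demand
    9: 300, 12: 260, 15: 340,  # midday / afternoon
    18: 410, 21: 310,          # evening peak
}

def greenest_hour(forecast: dict) -> int:
    """Return the hour with the lowest forecast carbon intensity."""
    return min(forecast, key=forecast.get)

hour = greenest_hour(forecast)
print(f"Schedule deferrable batch/training jobs around {hour:02d}:00 "
      f"({forecast[hour]} gCO2/kWh)")
```

This echoes Bashir's daytime-and-summer advice: the same job costs less carbon simply by running when and where the grid is cleaner.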

By shifting AI use to off-peak hours and seasons, keeping prompts concise, and choosing right-sized models and workloads, you’ll be well on your way to reducing your company’s carbon footprint.


Governance, Cooperation, and the Role of Policy 

Considering all of these efforts, we must make sure that we share these standards both nationally and internationally. 

Implementing cross-border standards will help everyone contribute to combating climate change. 

As for internal AI governance, AI policies should include sustainability criteria to show employees and stakeholders that the company is keeping sustainability top-of-mind. Also, conducting planetary impact reviews along with the company’s regular performance reviews could be an effective way to inform employees of how many resources they’re using and how they can work more sustainably.


Conclusion: A Planet-First Approach to AI Innovation 

In conclusion, it’s possible to use AI sustainably; you just have to be strategic about it. Weighing AI’s costs against its opportunities and making decisions with a planet-first mindset will be key to using it responsibly, as will measuring AI use accurately and reporting it transparently. In addition, practicing restraint when AI isn’t necessary will benefit both the planet and your profits.

Overall, sustainable AI will give your business an edge in the market, and the early adopters of sustainable practices will spearhead industry norms.