The Unseen Cost of Intelligence

A field guide to AI’s environmental footprint — and why it belongs on every sustainability agenda

Most organizations that take sustainability seriously have invested considerable effort in understanding where their environmental costs arise. They track emissions. They report against recognized frameworks. They set targets, publish results, and face scrutiny when the numbers fall short.

And then many of those same organizations have, over the last two or three years, started deploying artificial intelligence at scale — in their customer service operations, procurement workflows, HR processes, and product development pipelines. Not as an experiment, but as part of their operational infrastructure.

Hardly anyone has asked about the environmental cost of that infrastructure.

This is the focus of this series. Not whether AI is useful — which it clearly is — but whether our current methods of deployment align with the environmental commitments that sustainability-minded organizations have publicly and sincerely made. The answer, in most cases, is: we do not yet know, because we have not been measuring.

This article outlines the full scope of the problem: what the footprint includes, why it has remained hidden for so long, and what the seven subsequent articles in this series will explore in detail. Think of it as a field guide to a territory that most sustainability practitioners have not yet navigated.

Why This Gap Exists

Most organizations use sustainability reporting frameworks like GRI, ESRS, the German DNK, and sector-specific standards, which were created before AI became a key operational factor. These frameworks focus on physical processes such as manufacturing, transportation, energy procurement, waste management, and real estate. Digital infrastructure is mainly recognized as IT energy consumption, usually seen as a fixed overhead rather than a variable cost that changes with algorithmic activity.

AI has quickly moved from the IT department to the center of operations, outpacing governance structures. Sustainability reporting teams often lack visibility into which AI systems are running, their scale, or infrastructure. Conversely, those deploying AI are usually not considering carbon accounting. This creates a structural gap between two organizational functions that have not yet been encouraged to communicate.

This is worsened by opacity in the AI supply chain. The main cloud AI providers do not release per-query energy data. Model sizes and training expenses are seldom disclosed in detail. Estimating the footprint of an AI-as-a-service deployment requires judgment due to the lack of direct measurement, which is why most organizations have chosen not to estimate at all.

The gap is mainly organizational, not technical: it’s a disconnect between those responsible for sustainability reporting and those handling AI deployment, without any governance link connecting them.

What the Footprint Consists Of

Before examining the seven dimensions this series will cover, it’s useful to define the overall nature of the problem. The environmental footprint of AI has three structural components, each with a different magnitude and set of mitigation strategies.

Energy consumption is the most visible component. Large language models demand substantial computing power at two distinct stages: training, which is intensive but happens once per model version, and inference — the process of generating responses — which is ongoing and scales directly with usage. A widely cited study from the University of Massachusetts Amherst found that training a single large transformer model can produce CO₂ emissions comparable to the lifetime footprint of multiple cars. More importantly for organizations deploying AI at scale, inference costs accumulate: a single query to a frontier LLM uses roughly ten times the energy of a typical web search, and such queries number in the billions daily across the global user base of major AI services.
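The scale of that accumulation is easy to sketch. The figures below are illustrative assumptions only — per-query energy and grid carbon intensity vary widely by model, provider, and region, and none of these numbers come from vendor measurements:

```python
# Back-of-envelope estimate of annual CO2e from inference alone.
# Every constant here is an illustrative assumption, not measured data.

ENERGY_PER_QUERY_WH = 3.0       # assumed Wh per LLM query (~10x a web search)
GRID_INTENSITY_G_PER_KWH = 350  # assumed grid carbon intensity, gCO2e/kWh

def annual_inference_emissions_kg(queries_per_day: float) -> float:
    """Rough annual CO2e in kg from inference, under the assumptions above."""
    kwh_per_year = queries_per_day * 365 * ENERGY_PER_QUERY_WH / 1000
    return kwh_per_year * GRID_INTENSITY_G_PER_KWH / 1000

# Example: an organization running 50,000 queries per day
print(round(annual_inference_emissions_kg(50_000), 1))  # prints 19162.5
```

Even under these modest assumptions, a mid-sized deployment lands in the tens of tonnes of CO₂e per year — well above the threshold most organizations use to decide what is worth tracking.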

Water consumption is less often discussed but no less significant. Data centers cool their hardware — especially for AI workloads — largely through water-based thermal management systems. As AI infrastructure grows, so does the water demand of data centers: several major providers have reported year-over-year increases amounting to hundreds of millions of liters. For organizations in regions already experiencing water stress, or that track water-related sustainability indicators, this factor warrants explicit consideration.
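The water dimension can be sketched with the same back-of-envelope approach, using WUE (water usage effectiveness), a standard data-center metric expressed in liters of water per kWh of IT energy. The WUE value below is an illustrative assumption — actual values vary widely by site, climate, and cooling design:

```python
# Rough estimate of annual on-site cooling water for a given IT energy draw.
# WUE (water usage effectiveness) is a real industry metric; the value used
# here is an illustrative assumption, not a measured figure.

ASSUMED_WUE_L_PER_KWH = 1.8  # liters of water per kWh of IT energy

def annual_water_liters(kwh_per_year: float,
                        wue: float = ASSUMED_WUE_L_PER_KWH) -> float:
    """Rough annual cooling-water consumption in liters."""
    return kwh_per_year * wue

# Example: a workload drawing 55,000 kWh per year
print(round(annual_water_liters(55_000)))  # prints 99000
```

Roughly 100,000 liters per year for a mid-sized inference workload, under these assumptions — small against a provider's total, but exactly the kind of figure a water-stress indicator is meant to capture.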

Hardware and manufacturing form the third, least visible component. The chips that power AI workloads require rare-earth materials, energy-intensive production methods, and generate electronic waste at the end of their life. These costs occur before any inference call is made and are Scope 3 costs that are rarely linked to the AI systems that rely on them. As AI infrastructure grows and hardware update cycles accelerate, this upstream impact becomes increasingly important.

Together, these three components create a significant environmental cost for any organization using AI at operational scale. The issue isn’t whether the cost exists, but whether organizations are accounting for it—and whether they are applying the same governance discipline to algorithmic resource use as they do for energy procurement, travel, or supply chain emissions.

The Values-Practice Gap

For organizations that have made genuine sustainability commitments — B-Corps, Gemeinwohl-Ökonomie members, ESG-committed companies with public reporting requirements — this blind spot is more than a technical oversight. It is a consistency issue.

An organization that carefully manages its Scope 2 electricity use, audits its supply chain for sustainability risks, and publishes a detailed environmental report aligned with CSRD, but also runs enterprise AI workloads without any environmental accounting, has an internal contradiction that informed stakeholders will eventually notice. This is not hypothetical: the intersection of AI governance and ESG reporting is an emerging focus of analyst and investor scrutiny, and organizations that cannot answer basic questions about their AI-related environmental impact will find that gap increasingly difficult to justify.

The more productive framing, however, treats this as an opportunity rather than a risk. Organizations that establish strong AI environmental governance now — before it is required — will gain a credibility edge that is hard to replicate later. This mirrors the pattern seen with supply chain sustainability over the past decade: early adopters who built systematic approaches before regulatory pressure arrived now enjoy a structural advantage over those who waited.

What This Series Covers

This article serves as the introduction. Each of the following seven articles explores one aspect of the issue in detail — offering practical frameworks, specific guidance, and, where applicable, the regulatory context that will make these questions essential for European organizations in upcoming reporting periods.

1. The Unseen Cost of Intelligence [this article] (Overview & series map)
2. From Chip to Query: The Full Lifecycle Footprint of a Large Language Model (Training · Inference · Hardware)
3. What Gets Measured Gets Managed: A Framework for Quantifying Algorithmic Resource Use (Metrics & measurement)
4. Right-Sizing Intelligence: The Proportionality Principle in AI Deployment (Model selection & efficiency)
5. Tokens Have a Price: Prompt Engineering, Caching, and Carbon-Aware Computing (Workflow optimisation)
6. Holding Vendors Accountable: What to Ask Your AI Provider About Environmental Impact (Procurement & supply chain)
7. Closing the Reporting Gap: How to Account for AI in Your Sustainability Disclosure (CSRD · ESRS · Materiality)
8. From Blind Spot to Strategic Advantage: AI Environmental Governance as Competitive Differentiator (Governance · ISO 42001 · EU AI Act)
The series is meant to be read in order, but each article is standalone. If your organization has already moved past the awareness stage and needs to focus on a specific question — such as how to measure, how to choose models, or how to disclose — you can go directly to the relevant article. The overview you’re reading now will serve as a reference point for the larger context.

Why Now

There is a tendency to view AI’s environmental footprint as a future issue — something to handle once the technology matures, standards are stable, and measurement tools improve. This is a common pattern in sustainability: the same argument was made about supply chain emissions reporting, water risk disclosure, and Scope 3 accounting in general. In each case, organizations that delayed found themselves managing a compliance transition under tight deadlines rather than developing a strategic capability at their own pace.

The EU AI Act, which took effect in August 2024, currently does not mandate environmental disclosure for AI systems — but it sets a precedent for systematic documentation of AI impacts, and the CSRD’s materiality-driven approach to reporting creates a path through which AI-related environmental costs will increasingly be included in mandatory disclosures. The regulatory direction is clear, even if the specific requirements are still developing.

More fundamentally: organizations that genuinely believe their AI use should reflect their sustainability values do not need a regulatory mandate to start asking the question. They need a framework for what to measure, how to act on what they find, and how to communicate it credibly. That is what this series offers.

The next article starts with the physics: the full lifecycle footprint of a large language model, from the manufacture of the chips it runs on to its final inference call in production.
