The Myth of the Megawatt
The data center industry has a unit of measurement problem. Everything gets sold in megawatts. Lease a campus — the number is in megawatts. Evaluate a site — the pitch is megawatts. Read an earnings call — the growth is in megawatts. It has become the default language for an industry building the most complex facilities in modern construction.
The problem is that a megawatt is not a megawatt.
A megawatt from a utility feed with a four-nines uptime SLA, backed by redundant substations and dual feeds, is a fundamentally different product from a megawatt delivered over a single utility connection, through a two-year-old transformer, to a greenfield site at the edge of a constrained grid. Both are a megawatt. Both will show up identically in a pipeline report. But the first one will keep your GPUs running through a grid disturbance. The second one might not.
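It is worth pausing on what "four nines" actually buys. The arithmetic below is purely illustrative (no specific SLA is being quoted), but it shows how much downtime each availability tier still permits per year:

```python
# Illustrative arithmetic only: the downtime budget implied by each
# availability tier. No specific SLA is being quoted here.
minutes_per_year = 365.25 * 24 * 60

for label, availability in [("two nines", 0.99), ("three nines", 0.999),
                            ("four nines", 0.9999), ("five nines", 0.99999)]:
    allowed_downtime = minutes_per_year * (1 - availability)
    print(f"{label} ({availability}): {allowed_downtime:,.1f} minutes/year")

# Four nines still permits roughly 53 minutes of outage per year. For a
# synchronized GPU cluster, any one of those minutes can be the expensive one.
```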
The industry has been able to get away with this ambiguity because traditional cloud workloads were relatively tolerant of imperfection. A web server that experiences a brief power disturbance and fails over to a backup does not lose its state. The user refreshes the page and life goes on. An AI training run is different. A cluster of 4,000 GPUs working in parallel on a training job that has been running for six weeks represents an investment of millions of dollars in compute time. A power event that disrupts that cluster — even for seconds — can corrupt a checkpoint, lose days of progress, and cost the operator an amount of money that makes the difference between a good quarter and a bad one.
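To make the stakes concrete, here is a back-of-envelope sketch. The GPU count comes from the example above; the hourly rate, checkpoint age, and restart overhead are hypothetical assumptions chosen only for illustration, not quoted market figures:

```python
# Back-of-envelope cost of a single power event on a large training run.
# gpu_count follows the example above; every other figure is a hypothetical
# assumption for illustration, not a quoted market rate.

gpu_count = 4_000            # GPUs in the cluster
gpu_hour_cost = 2.50         # assumed all-in $/GPU-hour
days_since_checkpoint = 3    # assumed progress lost to a corrupted checkpoint
restart_overhead_hours = 12  # assumed time to validate state and resume

lost_hours = days_since_checkpoint * 24 + restart_overhead_hours
lost_gpu_hours = gpu_count * lost_hours

print(f"Lost GPU-hours: {lost_gpu_hours:,}")                        # 336,000
print(f"Approximate cost: ${lost_gpu_hours * gpu_hour_cost:,.0f}")  # $840,000
```

Even with conservative assumptions, a seconds-long disturbance translates into six or seven figures of lost compute, before counting the schedule slip.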
This is why the quality of the megawatt matters as much as the quantity. And it is why the teams that understand the full electrical stack — from generation to distribution to the last bus bar before the rack — have an advantage that does not show up in a megawatt count.
The Numbers Everyone Cites, and What They Miss
You have seen the projections. The U.S. needs 100-plus gigawatts of data center capacity by 2030. The grid interconnection queue is four to seven years in most major markets. Capital commitments exceed a trillion dollars. These numbers are real and they are useful for understanding the scale of the buildout. But they obscure the more important question, which is not “how much capacity” but “what kind.”
Not all of that 100 gigawatts needs to be the same product. Some of it will serve inference workloads — running trained models in production — which are more distributed, more tolerant of modest power quality variation, and more flexible in where they can be located. Some of it will serve training workloads — building the next generation of models — which are concentrated, latency-sensitive, power-hungry, and completely intolerant of interruption. The electrical infrastructure required for these two use cases is materially different, but the industry often plans for both as if they are the same thing.
A training facility needs power that is not just available but stable. Voltage sags, frequency deviations, harmonic distortion — these are measurements that most commercial tenants never think about but that can materially affect GPU cluster performance. A facility designed for cloud workloads may have power quality that is perfectly adequate for running web applications but insufficient for sustained, high-density AI training. The physical equipment is different. The redundancy architecture is different. The monitoring and response protocols are different.
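As a sketch of what continuous power-quality screening looks like in practice, the following checks one measurement sample against simplified thresholds. The thresholds, names, and values are illustrative assumptions; a real facility would work from the ITIC/CBEMA tolerance curve, IEEE 519 harmonic limits, and its utility's frequency bands:

```python
# Minimal sketch of power-quality screening for a training facility.
# Thresholds are illustrative placeholders, not a standard.

NOMINAL_VOLTAGE = 480.0   # volts, a typical North American distribution level
NOMINAL_FREQ = 60.0       # hertz

def screen_sample(voltage: float, freq: float, thd_pct: float) -> list[str]:
    """Return power-quality flags for one measurement sample."""
    flags = []
    if voltage < 0.90 * NOMINAL_VOLTAGE:
        flags.append(f"voltage sag: {voltage:.0f} V (below 90% of nominal)")
    if abs(freq - NOMINAL_FREQ) > 0.5:
        flags.append(f"frequency deviation: {freq:.2f} Hz")
    if thd_pct > 5.0:
        flags.append(f"harmonic distortion: {thd_pct:.1f}% THD (above 5%)")
    return flags

# A sample a web tenant would never notice but a GPU cluster might:
print(screen_sample(voltage=425.0, freq=59.92, thd_pct=6.2))
```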
The developers who understand this distinction are building facilities that are purpose-designed for the workload, not repurposed from the last generation of data center design. The ones who do not understand it are building buildings that technically deliver the contracted megawatts but create operational headaches for the tenant from day one.
Why “Available” Does Not Mean “Ready”
There is a particular form of optimism in this industry that confuses power availability with power readiness. A site might have 200 megawatts “available” in the sense that the local utility has allocated that capacity in its system plan. But available capacity and deliverable capacity are not the same thing.
Deliverable capacity means the physical infrastructure exists to move that power from the generation source to the point of use at the required voltage, with the required redundancy, at the required quality, on the required timeline. In many cases, the gap between what is “available” on a utility’s planning map and what can actually be delivered to a switchgear lineup in a building is measured in years and hundreds of millions of dollars of infrastructure investment.
The substation needs to be built or upgraded. The transmission lines need to be rated for the load. The interconnection equipment needs to be ordered, manufactured, delivered, installed, and tested. Each of these steps has its own timeline, its own supply chain constraints, and its own permitting requirements. A utility that says “we can serve 200 megawatts” is often saying “we can serve 200 megawatts after we complete $150 million in upgrades that will take 36 months” — which is a very different statement than what ends up in the marketing material.
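A rough way to see why these timelines compound is to treat the upgrade as a small dependency graph and take the critical path. Everything below, step names and durations alike, is a hypothetical illustration, not a real utility's schedule:

```python
# Why "available" is not "deliverable": each upgrade step has its own lead
# time, and only some of them can run in parallel. All durations are
# hypothetical placeholders for illustration.

from datetime import date, timedelta

steps_months = {
    "system impact study": 9,
    "substation design and permitting": 8,
    "transformer procurement (long-lead)": 24,  # often the pacing item
    "transmission line re-rating": 12,
    "install, test, and energize": 6,
}

# Naive model: the study gates everything, then the long-lead items run in
# parallel, then installation. Real projects have messier dependency graphs.
serial = steps_months["system impact study"] + steps_months["install, test, and energize"]
parallel = max(
    steps_months["substation design and permitting"]
    + steps_months["transformer procurement (long-lead)"],
    steps_months["transmission line re-rating"],
)
total_months = serial + parallel
print(f"Earliest energization: ~{total_months} months "
      f"(around {date.today() + timedelta(days=total_months * 30):%Y-%m}).")
```

Under even these generous assumptions, the "available" 200 megawatts is roughly four years away, and a single slipped step pushes the whole date.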
The experienced operators in this market have learned to pressure-test every claim about power availability. They ask to see the system impact study. They ask about the condition and age of the existing substation equipment. They ask what other loads are being planned in the same service territory that might compete for the same capacity. They ask what happens if the utility’s upgrade timeline slips by six months. These are not paranoid questions. They are the questions that separate projects that deliver from projects that announce.
The Cooling Equation Nobody Talks About
Power is only half the thermodynamic equation. Every megawatt that goes into a GPU comes back out as heat. And the way you reject that heat determines whether your facility actually operates at the density you designed for.
The shift to liquid cooling in AI facilities is well-documented. What is less discussed is how profoundly it changes the mechanical engineering, the construction sequence, the commissioning process, and the operational profile of the facility. Air-cooled data centers are relatively forgiving environments. The airflow dynamics are well-understood. The equipment is commodity. The failure modes are predictable. Liquid-cooled facilities are a different discipline. You are running fluid through piping networks at pressures and temperatures that require mechanical engineering precision, leak detection systems, water treatment protocols, and maintenance procedures that are more akin to an industrial process plant than to a traditional data center.
A facility that is designed for 80 kilowatts per rack but whose cooling system cannot actually reject that heat load at peak ambient temperature is not an 80-kilowatt-per-rack facility. It is whatever its cooling system can sustain under real-world conditions. And real-world conditions include the hottest day of the year, not the annual average that the design engineer used in the modeling.
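The underlying arithmetic is a simple heat balance, Q = ṁ · c_p · ΔT. A minimal sketch, using the 80-kilowatt rack from above and assumed coolant temperatures:

```python
# Rack-level heat rejection sanity check: every kilowatt in is a kilowatt of
# heat out. The rack density comes from the text; the coolant temperatures
# are assumed values for illustration.

rack_load_kw = 80.0     # contracted IT load per rack
cp_water = 4.186        # kJ/(kg*K), specific heat of water
supply_temp_c = 30.0    # assumed coolant supply temperature
return_temp_c = 40.0    # assumed coolant return temperature
delta_t = return_temp_c - supply_temp_c

# Required mass flow: m_dot = Q / (c_p * delta_T), with kW = kJ/s
m_dot_kg_s = rack_load_kw / (cp_water * delta_t)
liters_per_min = m_dot_kg_s * 60  # ~1 kg of water per liter

print(f"Per-rack coolant flow: {m_dot_kg_s:.2f} kg/s (~{liters_per_min:.0f} L/min)")
# ~1.9 kg/s (~115 L/min) per rack. On the hottest day of the year, if the
# heat-rejection plant cannot hold the 30 C supply temperature, delta_T
# shrinks and the same loop can no longer carry 80 kW.
```

The sketch is the whole argument in miniature: the rack's rated density is only as real as the cooling plant's ability to hold its design temperatures at peak ambient.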
This is where the gap between paper specs and operational reality becomes expensive. A tenant that contracts for a specific power density and discovers that the facility cannot sustain it under load is not going to quietly accept reduced capacity. They are going to demand remediation, negotiate rent reductions, or leave. The cost of getting the cooling wrong is not a construction change order. It is a tenant relationship.
What the Smart Money Is Asking
The most sophisticated buyers in this market — the hyperscalers, the AI labs, the sovereign funds backing large-scale campuses — have gotten dramatically more rigorous in how they evaluate facilities over the last 18 months. The days of signing a lease based on a megawatt number and a delivery date are ending.
The questions have changed. They want to see the electrical one-line diagram and understand every point of potential failure between the source and the rack. They want to know who commissioned the facility, what testing protocols were used, and whether the commissioning team will be available for warranty support. They want to see the cooling system’s performance data under simulated full load, not just the manufacturer’s rated specs. They want to understand the operational team’s experience with the specific equipment and configurations in the building.
These buyers are not being difficult. They are being rational. They have learned, often through painful experience, that a megawatt on paper and a megawatt in practice are different things. They have had training runs fail because of power quality issues that were not caught during commissioning. They have had cooling systems that could not sustain design density during summer months. They have had facilities that passed every checklist but did not perform under the actual workload.
The market is bifurcating. There are facilities that are built to deliver — where the megawatt is real, the cooling is proven, the commissioning is rigorous, and the operational team knows what they are managing. And there are facilities that are built to lease — where the numbers look right on the spec sheet but the details behind them have not been stress-tested.
The first category will command premium rents, retain tenants, and compound their reputation. The second will compete on price, churn tenants, and spend their margin on remediation. The separation is already underway. The smart money can tell the difference. The question is whether your facility can survive the scrutiny.