January 21, 2026

The Data Center Industry Is Building Yesterday's Buildings for Tomorrow's Workloads

There is a building going up right now — probably within a hundred miles of wherever you are reading this — that will be obsolete the day it opens. Not because the construction is bad. Not because the design is flawed by the standards it was measured against. But because the standards changed while the building was being built.

The data center industry spent twenty years refining a formula. Standard rack densities. Standard cooling approaches. Standard power distribution architectures. Standard redundancy schemes. The formula worked beautifully for cloud computing, for enterprise IT, for colocation, for content delivery. It produced tens of thousands of facilities across the world that run the digital economy reliably and efficiently.

That formula does not work for what is coming next.

AI training workloads are not an incremental step up from cloud computing. They are a categorically different demand pattern that breaks assumptions the industry has been engineering around for two decades. The power per rack is not 10 percent higher. It is 800 percent higher. The heat rejection is not a tuning problem for existing cooling systems. It requires an entirely different thermodynamic approach. The network architecture is not a faster version of the old one. It is a fundamentally different topology designed for collective computation rather than independent tasks.

And yet the majority of data centers currently under construction were designed using the old playbook. Standard footprints. Standard MEP designs. Standard construction sequences. They will be perfectly good buildings for perfectly ordinary workloads. But they will not serve the workloads that are driving the trillion-dollar investment cycle the industry is riding right now.

The Density Wall

The clearest expression of this mismatch is power density. For most of the data center era, the industry operated in a comfortable band of 5 to 10 kilowatts per rack. The electrical distribution systems, the cooling systems, the structural loading, and the fire suppression systems were all engineered for this range. A building designed for 8 kilowatts per rack can reliably deliver 8 kilowatts per rack all day, every day, and that was enough.

AI training racks operate at 40 to 120 kilowatts. Some next-generation configurations are pushing toward 150. This is not a marginal increase that can be accommodated by upgrading a few components. It cascades through every system in the building.
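
To put the gap in concrete terms, here is a rough back-of-the-envelope sketch. The hall size and per-rack densities below are illustrative assumptions, not figures from any particular project.

```python
# Rough hall-level power comparison at legacy vs. AI training densities.
# Rack count and per-rack densities are illustrative assumptions only.

RACKS_PER_HALL = 1_000        # hypothetical data hall size

legacy_kw_per_rack = 8        # typical cloud/enterprise density
ai_kw_per_rack = 100          # mid-range AI training density

legacy_hall_mw = RACKS_PER_HALL * legacy_kw_per_rack / 1_000
ai_hall_mw = RACKS_PER_HALL * ai_kw_per_rack / 1_000

print(f"Legacy hall:      {legacy_hall_mw:.0f} MW")   # 8 MW
print(f"AI training hall: {ai_hall_mw:.0f} MW")       # 100 MW
print(f"Ratio:            {ai_hall_mw / legacy_hall_mw:.1f}x")
```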

The electrical distribution needs to handle more current per circuit, which means heavier conductors, larger bus ducts, and more switchgear — all of which take more space and weigh more. The floors need to support heavier equipment loads. The cooling system needs to reject heat at rates that overwhelm traditional air-handling approaches, which is why the industry is moving to liquid cooling — but liquid cooling introduces piping, pumps, coolant distribution units, and leak detection systems that the old building shells were never designed to accommodate.
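
As a minimal sketch of why the conductors and switchgear get bigger, the line current on a balanced three-phase feed scales directly with rack power. The 415 V distribution voltage and unity power factor below are assumptions chosen for illustration; real designs vary.

```python
import math

def three_phase_current_amps(power_kw: float, line_voltage: float = 415.0,
                             power_factor: float = 1.0) -> float:
    """Line current for a balanced three-phase load: I = P / (sqrt(3) * V_LL * PF)."""
    return power_kw * 1_000 / (math.sqrt(3) * line_voltage * power_factor)

# Assumed densities spanning legacy and AI training racks.
for rack_kw in (8, 40, 100, 150):
    print(f"{rack_kw:>4} kW rack -> ~{three_phase_current_amps(rack_kw):.0f} A per feed")
```

Roughly ten times the current per rack means the same scaling pressure lands on every busway, breaker, and switchgear frame upstream of it.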

You cannot retrofit a 10-kilowatt building to serve a 100-kilowatt workload. You can try, and some people are trying, and the results range from expensive to disappointing to dangerous. The structural, mechanical, and electrical changes required are so extensive that you are essentially building a new facility inside the shell of an old one, at a higher cost and lower quality than if you had started from scratch.

This matters because there is a significant amount of capital currently being deployed into facilities that are designed at legacy densities. Some of these projects were conceived two or three years ago, when the AI training boom was less visible. Some are being built by developers whose design teams have not yet adapted to the new requirements. Some are deliberate bets on inference workloads, which do operate at lower densities — but even inference density assumptions are climbing faster than most forecasts predicted.

The question every developer and investor in this space should be asking is not "how many megawatts are we building" but "are we building the right kind of megawatt for the workload the market will demand when this building opens in 18 months?"

The Cooling Transition Is Harder Than Anyone Admits

The industry talks about the shift to liquid cooling as if it were a product swap. Remove the air handlers. Install the coolant distribution. Done. In practice, it is one of the most significant construction and operational transitions the data center industry has ever undertaken.

Liquid cooling systems introduce a set of engineering challenges that air-cooled facilities never had to contend with. Fluid dynamics at scale. Pressure management across hundreds of rack-level connections. Chemical treatment and water quality maintenance. Leak detection and containment in environments where a single failure can damage millions of dollars in GPU hardware. Maintenance procedures that require mechanical engineering skill sets that most data center operations teams have never needed.
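
To give those fluid dynamics a sense of scale, the flow needed to carry heat away follows from the basic relation m_dot = P / (c_p * dT). The sketch below assumes water-like coolant properties and a 10 K temperature rise across the rack; both are illustrative assumptions, not design values.

```python
# Coolant flow needed to carry away rack heat, from m_dot = P / (c_p * dT).
# Water-like properties and a 10 K rise are assumptions for illustration.

SPECIFIC_HEAT = 4186.0   # J/(kg*K), water
DENSITY = 997.0          # kg/m^3, water near room temperature
DELTA_T = 10.0           # K, supply-to-return temperature rise

def coolant_flow_lpm(rack_kw: float) -> float:
    """Volumetric coolant flow in litres per minute for a given rack load."""
    mass_flow_kg_s = rack_kw * 1_000 / (SPECIFIC_HEAT * DELTA_T)
    return mass_flow_kg_s / DENSITY * 1_000 * 60

for rack_kw in (8, 40, 100, 150):
    print(f"{rack_kw:>4} kW rack -> ~{coolant_flow_lpm(rack_kw):.0f} L/min")
```

Every one of those litres per minute moves through pumps, manifolds, and quick-disconnects that the operations team now has to maintain.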

The construction sequence changes too. In an air-cooled building, the mechanical and IT infrastructure are largely independent systems that get installed by different trades on different timelines. In a liquid-cooled building, the cooling infrastructure is physically integrated with the IT equipment at the rack level. The piping runs to every rack. The connections must be precise. The testing must verify every joint in a system that may have thousands of connection points across a large facility. A single leak in a poorly made connection can take down an entire row of compute.

This is not an argument against liquid cooling. It is clearly the right approach for high-density AI workloads. It is an argument that the transition is a construction and operations transformation, not a component upgrade. The facilities that will perform well are the ones where the mechanical design, the construction methodology, and the commissioning process were all conceived around liquid cooling from the beginning — not adapted from an air-cooled template.

The Network Is Part of the Building Now

In a traditional data center, the network is infrastructure that lives inside the building but is largely independent of it. The building provides power and cooling and physical security. The tenant brings their own network equipment and configures it to their needs. The building's physical layout has minimal impact on network performance.

In an AI training facility, the network is the building. The physical arrangement of GPU racks relative to each other directly affects training performance. The length of fiber and copper runs between nodes is a performance variable measured in nanoseconds. A cluster of 4,000 GPUs performing a collective computation needs to exchange data between nodes at speeds and latencies that are dictated by physics — the speed of light in fiber, the propagation delay in copper, the switching latency in the network fabric.
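
That physics is easy to quantify. Light in single-mode fiber travels at roughly c divided by the group index, so every metre of run adds about five nanoseconds each way. The index value and run lengths in the sketch below are assumptions for illustration.

```python
# One-way propagation delay in optical fiber: delay = length * n / c.
# The group index and run lengths are illustrative assumptions.

C_VACUUM = 299_792_458.0   # speed of light in vacuum, m/s
FIBER_INDEX = 1.468        # typical group index for single-mode fiber

def fiber_delay_ns(length_m: float) -> float:
    """One-way propagation delay in nanoseconds for a fiber run of length_m metres."""
    return length_m * FIBER_INDEX / C_VACUUM * 1e9

for length_m in (2, 10, 50, 150):
    print(f"{length_m:>4} m run -> ~{fiber_delay_ns(length_m):.0f} ns one-way")
```

Switch hops add their own latency on top of the propagation delay, which is why rack placement and network topology have to be decided together.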

This means the building's floor plan, the rack layout, the cable routing, even the aisle dimensions are network engineering decisions as much as they are architectural ones. A facility that was designed with a traditional data center floor plan — standard hot aisle/cold aisle configuration, modular rack placement, flexible cable tray routing — may not be physically capable of supporting the network topology that an AI training cluster requires without significant modification.

The best AI facilities are being co-designed with the compute architecture. The network team and the building team work from the same model. The rack positions are determined by network topology requirements, not by structural column spacing. The cable pathways are designed for specific fiber run lengths that the network architecture demands. This is a level of integration between IT and facilities that the industry has never had to manage, and it requires a different kind of team — one that understands both the computation and the construction.

The Cost of Getting It Wrong

A data center that cannot serve the workload the market demands is not a minor investment loss. It is a stranded asset. The tenant who needs 80 kilowatts per rack is not going to lease a building designed for 10, no matter how nice the building is or how good the location is. The workload requirements are non-negotiable. The physics do not compromise.

The market will sort this out, as markets always do. Facilities that were designed for the current generation of AI workloads will fill and command premium economics. Facilities that were designed for the previous generation will compete for a shrinking pool of lower-density tenants, at lower rents, with thinner margins. Some will find a home. Some will not.

The trillion-dollar buildout currently underway will produce both outcomes. The question is not whether enough capacity gets built. It is whether the right capacity gets built. And the answer to that question depends entirely on whether the teams doing the building understand the workload they are building for — not the workload they built for last time.