The Platform


Out with the old, in with the new, and data centers are no exception.

The digital landscape is constantly evolving, and nowhere is this more apparent than in the realm of data centers. These technological nerve centers, constructed a mere half-generation ago, now find themselves on the brink of irrelevance. Industry veterans note a tectonic shift: for decades, the blueprint for data centers remained static, but today’s rapidly advancing technology has rendered those once cutting-edge designs anachronistic.

As one industry veteran observed: “Data center designs were stable for about the last two decades. If you built one twenty years ago, the one someone else built ten years later would have had pretty much the same design concepts.”

We are now witnessing an inflection point marked by the integration of artificial intelligence and neural networks into corporate enterprises, upending the traditional data center paradigm. The infrastructure designed to power yesterday’s technology cannot satiate the voracious appetite of these new computational behemoths. Consequently, the specifications for contemporary data centers have evolved dramatically, necessitating a reimagining of every aspect, from power delivery and storage density to server configurations and cooling capabilities.

The advent of generative AI data centers demands an unprecedented scale of power, dwarfing the needs of their predecessors by an order of magnitude. Such an escalation is emblematic of the broader technological upheaval that casts a shadow over legacy data centers, rendering them unsuitable for retrofitting to meet this new generation’s exigencies.

The depth of redesign required to accommodate the manifold increase in power and the intensified demands on every facet of design—from cooling to structural integrity—is beyond the reach of simple upgrades.

The dramatic upsurge in power usage is most palpably felt in the deployment of GPUs for specific, power-intensive tasks, marking a departure from the past and transforming data center design into a distinct scientific discipline. With the injection of such power comes a cascade of design considerations: power density, rack space, and cooling systems must all be reconceived to contend with the resultant heat production.
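The scale of the cooling problem can be seen with a back-of-envelope calculation using the standard sensible-heat airflow relation (CFM = BTU/hr ÷ (1.08 × ΔT), with 1 W ≈ 3.412 BTU/hr). The rack power figures below are illustrative assumptions, not vendor specifications:

```python
# Illustrative sketch: why dense GPU racks outstrip traditional air cooling.
# Rack wattages are assumed round numbers for the example, not measured specs.

def airflow_cfm(heat_watts: float, delta_t_f: float = 20.0) -> float:
    """Airflow (cubic feet per minute) needed to remove heat_watts of heat
    at a given inlet-to-outlet temperature rise (degrees F), using the
    sensible-heat relation CFM = BTU/hr / (1.08 * dT)."""
    btu_per_hr = heat_watts * 3.412
    return btu_per_hr / (1.08 * delta_t_f)

legacy_rack_w = 6_000    # assumed legacy rack draw (~6 kW)
gpu_rack_w = 60_000      # assumed dense GPU rack draw (~60 kW)

for label, watts in [("legacy", legacy_rack_w), ("GPU", gpu_rack_w)]:
    print(f"{label} rack: {watts / 1000:.0f} kW -> "
          f"{airflow_cfm(watts):,.0f} CFM of cooling air")
```

A tenfold jump in rack power means a tenfold jump in required airflow, which is why designers turn to liquid cooling rather than simply adding more air handlers.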

Historical “Best Practices,” once the gospel of data center design, now face obsolescence. The tried-and-true methodologies that governed data center construction for two decades are inadequate for the contemporary suite of challenges. Today’s data center must not only accommodate a heavier array of equipment but also implement innovative cooling methods, such as liquid cooling, to dissipate the intense heat generated by densely packed GPUs.

The physical infrastructure, too, must evolve. The load-bearing capacity of flooring, once guided by established rules of thumb, now requires reevaluation to support the increased weight of today’s equipment. These modifications are not trivial and cannot be appended to existing structures designed under a different set of assumptions.
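The floor-loading point can also be made with simple arithmetic. The rack weights, footprint, and raised-floor rating below are assumptions chosen to illustrate the order of magnitude, not engineering values for any real facility:

```python
# Illustrative floor-load check: all figures here are assumed for the sketch.

G = 9.81  # gravitational acceleration, m/s^2

def floor_load_kpa(rack_kg: float, footprint_m2: float) -> float:
    """Uniform load a rack imposes on the floor beneath it, in kilopascals."""
    return rack_kg * G / footprint_m2 / 1000.0

footprint_m2 = 0.6 * 1.2  # assumed standard rack footprint (0.6 m x 1.2 m)

legacy_kpa = floor_load_kpa(600, footprint_m2)   # assumed ~600 kg legacy rack
dense_kpa = floor_load_kpa(1600, footprint_m2)   # assumed ~1600 kg dense rack

raised_floor_rating_kpa = 12.0  # assumed rating of an older raised floor

print(f"legacy rack: {legacy_kpa:.1f} kPa, dense rack: {dense_kpa:.1f} kPa, "
      f"floor rated for {raised_floor_rating_kpa} kPa")
```

Under these assumptions the legacy rack sits comfortably within the floor rating while the dense rack exceeds it, which is why structural capacity must be reassessed rather than assumed.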

An additional imperative is the uncompromising requirement for full redundancy in both power and broadband connectivity, with diverse routing being non-negotiable. The Nine Rs of intelligent infrastructure—reliability, redundancy, reduced operating costs (green), robust design, resiliency, routing, resistance-to-attacks (physical and cyber), rigorous regular testing, and refinement—constitute the new canon for modern data center design.

For companies venturing into the data center market, the wisdom of constructing a new facility tailored to current and future specifications, rather than repurposing an antiquated one, becomes apparent. The investment calculus must consider not just the present, but the projected life span of the asset, factoring in the risk of premature obsolescence due to infrastructural deficiencies.

The discourse on retrofitting older data centers for GenAI applications persists, yet the reality suggests that such endeavors might be tantamount to fiscal folly. The financial outlay required for such comprehensive overhauls could very well warrant starting afresh.

As for the specialized real estate sector, caution is advised. Some entities are attempting to offload outdated data centers incapable of supporting GenAI applications. Prospective buyers should be wary of discounted offers for such facilities. To echo an industry aphorism: The veneer of a bargain cannot disguise the inherent shortcomings of obsolete infrastructure.

James Carlini is a strategist for mission critical networks, technology, and intelligent infrastructure. Since 1986, he has been president of Carlini and Associates. Besides being an author, keynote speaker, and strategic consultant on large mission critical networks including the planning and design for the Chicago 911 center, the Chicago Mercantile Exchange trading floor networks, and the international network for GLOBEX, he has served as an adjunct faculty member at Northwestern University.