Five technology trends shaping data centre development
The Vertiv Frontiers report details technology trends driving data centre development and the macro forces impacting the industry.
The macro factors include: extreme densification accelerated by AI and HPC workloads; gigawatt scaling at speed (data centres are now being deployed rapidly and at unprecedented scale); data centres as a unit of compute (the AI era requires facilities to be built and operated as a single system); and silicon diversification (data centre infrastructure must adapt to an increasing range of chips and compute).
Meanwhile, the technology trends the report identifies include:
A shift to DC power architecture. Most current data centres still rely on hybrid AC/DC power distribution from the grid to the IT racks, which involves three to four conversion stages, each adding losses. This approach is under strain as power densities rise, driven largely by AI workloads. Shifting to higher voltage DC architectures sharply reduces current, conductor size, and the number of conversion stages, while centralising power conversion at the room level. Hybrid AC/DC systems remain pervasive, but as full DC standards and equipment mature, higher voltage DC is likely to become more prevalent as rack densities increase. On-site generation and microgrids will also drive adoption of higher voltage DC.
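To make the current reduction concrete, here is a minimal sketch of the underlying arithmetic (all figures are illustrative assumptions, not from the report): for a fixed power draw, current falls as distribution voltage rises, and conductor size scales with current.

# Rough illustration (hypothetical figures): current needed to deliver
# the same rack power at different distribution voltages.
# DC: I = P / V.  Three-phase AC: I = P / (sqrt(3) * V * pf).
import math

rack_power_w = 132_000                 # assumed ~132 kW AI rack
ac_voltage, power_factor = 415, 0.95   # assumed three-phase AC feed
dc_voltage = 800                       # assumed +/-400 V (800 V) DC bus

i_ac = rack_power_w / (math.sqrt(3) * ac_voltage * power_factor)
i_dc = rack_power_w / dc_voltage

print(f"415 V three-phase AC: {i_ac:.0f} A")  # ~193 A line current
print(f"800 V DC:             {i_dc:.0f} A")  # ~165 A

Lower current permits thinner busbars and cabling, and pushing DC voltages higher still compounds the saving, which is the engineering logic behind the trend.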
The use of private on-premises inference centres by regulated industries. How data centres are delivered will depend on each organisation's specific requirements and conditions, says the report. While this affects businesses of all types, highly regulated industries such as finance, defence, and healthcare may need to maintain private or hybrid AI environments via on-premises data centres due to data residency, security, or latency requirements.
Increasing use of on-site energy generation. For decades, short-term on-site generation capacity has been essential to the resiliency of most standalone data centres. Now, widespread power availability challenges are creating the conditions for extended energy autonomy, especially at AI data centres, says the report. Investment in on-site power generation, via natural gas turbines and other technologies, has several intrinsic benefits but is primarily driven by power availability challenges. Technology strategies such as Bring Your Own Power (and Cooling) are likely to be part of ongoing energy autonomy plans.
Advanced data centre design employing digital tools. Increasingly dense AI workloads and more powerful GPUs create demand to deploy complex AI factories quickly. Using AI-based tools, data centres can be mapped and specified virtually via digital twins. The IT and critical digital infrastructure can then be integrated, often as prefabricated modular designs, and deployed as units of compute, reducing time-to-token by up to 50%, according to the report. This approach will be important to efficiently achieving the gigawatt-scale buildouts required for future AI advancements.
Swifter adoption of liquid cooling. The report finds AI workloads and infrastructure have accelerated the adoption of liquid cooling, which has become mission-critical for a growing number of operators. Conversely, AI can also be used to refine and optimise liquid cooling itself. In conjunction with additional monitoring and control systems, AI has the potential to make liquid cooling smarter and more robust by predicting potential failures and managing fluids and components effectively. This should improve reliability and uptime for high-value hardware and the data and workloads it carries.
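As a flavour of what such predictive monitoring could look like, here is a minimal sketch (not from the report; all names, signals, and thresholds are hypothetical) that flags coolant-loop readings drifting sharply from their recent baseline:

# Minimal sketch of anomaly detection on coolant-loop telemetry.
# A real system would use richer models; this z-scores each reading
# against a rolling baseline and flags sharp deviations.
from collections import deque
import statistics

class CoolantLoopMonitor:
    def __init__(self, window: int = 60, threshold: float = 3.0):
        self.history = deque(maxlen=window)  # recent normal readings
        self.threshold = threshold           # z-score alarm level

    def check(self, pump_pressure_kpa: float) -> bool:
        """Return True if the reading looks anomalous vs. the baseline."""
        anomalous = False
        if len(self.history) >= 10:
            mean = statistics.fmean(self.history)
            stdev = statistics.pstdev(self.history)
            if stdev > 0 and abs(pump_pressure_kpa - mean) / stdev > self.threshold:
                anomalous = True
        if not anomalous:
            self.history.append(pump_pressure_kpa)  # learn only from normal data
        return anomalous

# Example: steady readings, then a sudden pressure drop (possible leak or pump fault)
monitor = CoolantLoopMonitor()
for reading in [201.0, 199.5, 200.2] * 5 + [150.0]:
    if monitor.check(reading):
        print(f"ALERT: pump pressure {reading} kPa deviates from baseline")

In practice such alerts would feed maintenance scheduling and control loops, which is where the reliability and uptime gains the report anticipates would come from.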
