For many years, traditional data centers were vast halls of servers in which power and cooling were secondary considerations. The rise of generative AI has transformed these facilities into AI factories, flipping the architectural script. Power infrastructure, once an afterthought, is becoming the first factor that dictates the size, location, and feasibility of new deployments.
We’re at a critical inflection point where the industry can no longer rely on incremental improvements; a fundamental architectural shift is required. This new blueprint must be more efficient, more scalable, and capable of managing the power demands of modern AI.
The answer involves a two-pronged approach: implementing an 800 volt direct current (VDC) power distribution system alongside integrated, multi-timescale energy storage. This isn’t just about keeping the lights on; it’s about building the foundation for the future of computing.
Rising power demands of AI workloads
For years, a major advance in processor technology meant a roughly 20% rise in power consumption. Today, that predictable curve has been shattered. The driving force is the relentless pursuit of performance, enabled by high-bandwidth interconnects like NVIDIA NVLink, which allow thousands of GPUs to operate as a single, monolithic processor.
To achieve the low latency and high bandwidth required, these connections depend on copper cabling. However, copper’s effective reach is limited, creating what might be called a performance-density trap: to build a more powerful AI system, you must pack more GPUs into a smaller physical space. This architectural necessity directly links performance to power density.
The leap from the NVIDIA Hopper to the NVIDIA Blackwell architecture is a prime example. While individual GPU power consumption (TDP) increased by 75%, the growth of the NVLink domain to a 72-GPU system resulted in a 3.4x increase in rack power density. The payoff was a staggering 50x increase in performance, but it also put racks on a path from tens of kilowatts to well over 100, with a megawatt per rack now on the horizon. Delivering this level of power at traditional low voltages, such as 54 VDC, is physically and economically impractical: the immense current required would result in high resistive losses and necessitate an unsustainable volume of copper cabling.
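The arithmetic behind this claim is straightforward. A minimal sketch, using an illustrative 1 MW rack and holding conductor resistance fixed (the resistance value cancels out of the ratio), shows how current and I²R loss scale as distribution voltage drops:

```python
# Sketch: why low-voltage distribution breaks down at megawatt racks.
# The 1 MW rack figure is the horizon mentioned above; everything else
# follows from I = P / V and loss ∝ I² · R.

def bus_current(power_w: float, voltage_v: float) -> float:
    """DC current drawn at a given distribution voltage (I = P / V)."""
    return power_w / voltage_v

rack_power = 1_000_000  # 1 MW rack
for v in (54, 800):
    print(f"{v:>4} VDC -> {bus_current(rack_power, v):,.0f} A")

# For the same conductor resistance, resistive loss scales with the
# square of the current ratio.
loss_ratio = (bus_current(rack_power, 54) / bus_current(rack_power, 800)) ** 2
print(f"I²R loss ratio, 54 V vs 800 V: ~{loss_ratio:.0f}x")
```

At 54 VDC a 1 MW rack would draw over 18,000 A versus 1,250 A at 800 VDC, so for the same copper, resistive losses would be roughly two orders of magnitude higher.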
The volatility challenge of synchronous workloads
Beyond sheer density, AI workloads introduce a second, equally formidable challenge: volatility. Unlike a conventional data center running thousands of uncorrelated tasks, an AI factory operates as a single, synchronous system. When training a large language model (LLM), thousands of GPUs execute cycles of intense computation, followed by periods of data exchange, in near-perfect unison.
This creates a facility-wide power profile characterized by massive and rapid load swings. The volatility challenge has been documented in joint research by NVIDIA, Microsoft, and OpenAI on power stabilization for AI training data centers, which shows how synchronized GPU workloads can cause grid-scale oscillations.
The power draw of a rack can swing from an “idle” state of around 30% to 100% utilization and back again in milliseconds. This forces engineers to oversize components to handle the peak current rather than the average, driving up costs and footprint. When aggregated across an entire data hall, these volatile swings, representing hundreds of megawatts ramping up and down in seconds, pose a major threat to the stability of the utility grid, making grid interconnection a primary bottleneck for AI scaling.
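A back-of-the-envelope sketch makes the oversizing penalty concrete. The rack power, duty cycle, and rack count below are illustrative assumptions, not figures from any specific deployment:

```python
# Illustrative sketch: components must be sized for peak draw, not average.
# All numbers here are assumptions chosen for illustration.

peak_kw_per_rack = 132   # a ~130 kW-class rack (assumed)
idle_fraction = 0.30     # the ~30% "idle" floor described above
compute_duty = 0.5       # fraction of time at full power (assumed)

# Time-weighted average draw over the compute/exchange cycle.
avg_kw = peak_kw_per_rack * (compute_duty + (1 - compute_duty) * idle_fraction)
oversize = peak_kw_per_rack / avg_kw
print(f"average draw: {avg_kw:.1f} kW, peak-to-average sizing: {oversize:.2f}x")

# If racks swing in unison, the aggregate step seen by the grid is large.
racks = 1000             # a data-hall-scale deployment (assumed)
swing_mw = racks * peak_kw_per_rack * (1 - idle_fraction) / 1000
print(f"aggregate synchronized swing: ~{swing_mw:.0f} MW")
```

Even under these modest assumptions, the power train must be built roughly 1.5x larger than the average load requires, and a single synchronized hall produces a near-100 MW step at the utility interconnection.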
A new power delivery architecture
Addressing this multifaceted crisis requires a multifaceted solution. The proposed architectural blueprint is a dual-pronged strategy that tackles both scale and volatility: transitioning to 800 VDC power distribution, coupled with the deep integration of energy storage.
Benefits of 800 VDC
The most effective way to combat the challenges of high-power distribution is to increase the voltage. Transitioning from a conventional 415 or 480 VAC three-phase system to an 800 VDC architecture offers significant advantages, including:
Native 800 VDC end-to-end integration
Generating 800 VDC at the facility level and delivering it directly to 800 VDC compute racks eliminates redundant conversions, improving overall power efficiency. This architecture supports high-density GPU clusters, unlocks higher performance per GPU, and enables more GPUs per AI Factory, driving greater compute throughput and revenue potential for partners. It also ensures future scalability beyond 1 MW per rack and seamless interoperability across the AI Factory power ecosystem.
Reduced copper and cost
With 800 VDC, the same wire gauge can carry 157% more power than at 415 VAC. A simpler three-wire setup (POS, RTN, PE), instead of four conductors for AC, means fewer conductors and smaller connectors. This reduces copper use, lowers material and installation costs, and eases cable management, which is critical as rack power inlets scale toward megawatt levels.
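A simplified sketch of the copper argument follows. It uses a crude proxy (total conductor cross-section scales with the sum of conductor currents at a fixed current density), assumes unity power factor, and sizes the AC neutral to phase current; it deliberately ignores AC ampacity derating and skin effects, so it does not attempt to reproduce the 157% figure above, only to illustrate the direction and rough magnitude of the saving:

```python
# Hedged sketch: relative copper cross-section to deliver 1 MW over
# 415 VAC three-phase (4 conductors) vs 800 VDC (POS/RTN pair; PE is
# ignored since it is not sized for load current). Illustrative only.
import math

P = 1_000_000   # watts delivered
PF = 1.0        # assumed unity power factor for simplicity

i_ac = P / (math.sqrt(3) * 415 * PF)   # current per phase conductor
i_dc = P / 800                          # current in POS and RTN

# Copper proxy: sum of conductor currents (cross-section ∝ current
# at a fixed current density).
copper_ac = 4 * i_ac   # 3 phases + neutral, all sized to phase current
copper_dc = 2 * i_dc
print(f"copper proxy, AC: {copper_ac:,.0f}  DC: {copper_dc:,.0f}")
print(f"DC copper relative to AC: {copper_dc / copper_ac:.2f}")
```

Even this crude model cuts the conducting copper by more than half; real-world ampacity tables and derating push the advantage further.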
Improved efficiency
A native DC architecture eliminates the multiple, inefficient AC-to-DC conversion steps found in traditional systems, where end-to-end efficiency can fall below 90%. This streamlined power path boosts efficiency and reduces waste heat.
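Chained conversion losses multiply, which is why several individually efficient stages still land below 90% end to end. The per-stage efficiencies below are assumed round numbers for illustration, not vendor figures:

```python
# Illustrative sketch of how per-stage conversion losses compound.
# Stage efficiencies are assumptions, not measured values.

def chain_efficiency(stages):
    """End-to-end efficiency is the product of the stage efficiencies."""
    eff = 1.0
    for e in stages:
        eff *= e
    return eff

# Traditional AC path: MV transformer -> double-conversion UPS ->
# rack PSU (AC -> 54 VDC) -> on-board DC-DC stages.
ac_chain = [0.99, 0.94, 0.96, 0.98]
# Native DC path: one facility-level MV AC -> 800 VDC conversion ->
# single on-board step-down.
dc_chain = [0.98, 0.97]

print(f"AC chain: {chain_efficiency(ac_chain):.1%}")
print(f"DC chain: {chain_efficiency(dc_chain):.1%}")
```

With these assumed numbers the traditional chain lands near 87.5% while the native DC path stays above 95%; at a 100 MW facility that gap is several megawatts of waste heat.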
Simplified and more reliable architecture
A DC distribution system is inherently simpler, with fewer components such as transformers and phase-balancing equipment. This reduction in complexity means fewer potential points of failure and increased overall system reliability.
This isn’t uncharted territory. The electric vehicle and utility-scale solar industries have already embraced 800 VDC or higher to improve efficiency and power density, creating a mature ecosystem of components and best practices that can be adapted for the data center.
Reducing the swings with multi-timescale energy storage
While 800 VDC solves the efficiency-at-scale problem, it doesn’t address workload volatility. For that, energy storage must be treated as an essential, active component of the power architecture, not just a backup system. The goal is to create a buffer, in effect a low-pass filter, that decouples the chaotic power demands of the GPUs from the stability requirements of the utility grid.
Because power fluctuations occur across a wide spectrum of timescales, a multi-layered strategy is required, using:
- Short-duration storage (milliseconds to seconds): High-power capacitors and supercapacitors placed near the compute racks. They react quickly to absorb high-frequency power spikes and fill the transient valleys created by LLM workload idle periods.
- Long-duration storage (seconds to minutes): Large, facility-level battery energy storage systems (BESS) positioned at the utility interconnection. They manage slower, larger-scale power shifts, such as the ramp-up and ramp-down of entire workloads, and provide ride-through capability during transfers to backup generators.
The 800 VDC architecture is a key enabler for this strategy. Today, data center energy storage is connected to match the AC power delivery path. Moving to 800 VDC makes it easier to integrate storage at the most appropriate location.
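The low-pass-filter role of storage can be sketched in a few lines. This is a toy first-order filter over a normalized 30%/100% square-wave load, with time constants and levels chosen purely for illustration; the storage trace is simply the residual between the raw load and the smoothed grid draw:

```python
# Minimal sketch of storage as a low-pass filter: a square-wave GPU load
# is smoothed into a steadier grid draw, and storage absorbs the
# difference. All time constants and power levels are illustrative.

def simulate(load, tau_steps):
    """First-order low-pass of the load; storage supplies the residual."""
    grid, storage = [], []
    g = load[0]
    alpha = 1.0 / tau_steps
    for p in load:
        g += alpha * (p - g)     # grid draw tracks the load slowly
        grid.append(g)
        storage.append(p - g)    # positive: discharging, negative: charging
    return grid, storage

# 100%/30% square wave, mimicking the compute/idle cycle described above.
load = ([1.0] * 50 + [0.3] * 50) * 4   # normalized rack power
grid, storage = simulate(load, tau_steps=200)

swing_load = max(load) - min(load)
swing_grid = max(grid) - min(grid)
print(f"load swing: {swing_load:.2f}, grid swing: {swing_grid:.2f}")
```

Even this crude filter cuts the swing seen by the grid roughly in half; in practice the supercapacitor and BESS layers apply the same idea at their respective timescales.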
800 VDC power distribution in next-generation AI factories


Next-generation AI factories will transition from today’s AC distribution to an 800 VDC distribution model. Today’s architecture involves multiple power conversion stages. Utility-supplied medium voltage (e.g., 35 kVAC) is stepped down to low voltage (e.g., 415 VAC). This power is then conditioned by an AC UPS and distributed to compute racks via PDUs and busways. Inside each rack, multiple PSUs convert the 415 VAC to 54 VDC, which is then distributed to individual compute trays for further DC-to-DC conversion.
The future vision centralizes all AC-to-DC conversion at the facility level, establishing a native DC data center. In this approach, medium-voltage AC is converted directly to 800 VDC by large, high-capacity power conversion systems, and the 800 VDC is distributed throughout the data hall to the compute racks. This architecture streamlines the power train by eliminating layers of AC switchgear, transformers, and PDUs. It maximizes white space for revenue-generating compute, simplifies the overall system, and provides a clean, high-voltage DC backbone for direct integration of facility-level energy storage.
The transition to a fully realized 800 VDC architecture will occur in phases, giving the industry time to adapt and the component ecosystem time to mature.


The NVIDIA MGX architecture will evolve with the upcoming NVIDIA Kyber rack architecture, which is designed to use this new 800 VDC architecture (see Figure 2). Power is distributed at high voltage directly to each compute node, where a late-stage, high-ratio 64:1 LLC converter efficiently steps it down to 12 VDC immediately adjacent to the GPU. This single-stage conversion is more efficient and occupies 26% less area than traditional multi-stage approaches, freeing up valuable real estate near the processor.
The path forward: a call for collaboration
This transformation can’t be achieved in a vacuum; it requires urgent, focused, industry-wide collaboration. Organizations like the Open Compute Project (OCP) provide a vital forum for developing the open standards that ensure interoperability, accelerate innovation, and reduce costs for the entire ecosystem. The industry must align on common voltage ranges, connector interfaces, and safety practices for 800 VDC environments.
To accelerate adoption, NVIDIA is collaborating with key industry partners across the data center electrical ecosystem, including:
- Silicon providers: AOS, Analog Devices, Efficient Power Conversion, Infineon Technologies, Innoscience, MPS, Navitas, onsemi, Power Integrations, Renesas, Richtek, ROHM, STMicroelectronics, Texas Instruments.
- Power system components: Bizlink, Delta, Flex, Lead Wealth, LITEON, Megmeet.
- Data center power systems: ABB, Eaton, GE Vernova, Heron Power, Hitachi Energy, Mitsubishi Electric, Schneider Electric, Siemens, Vertiv.
We’re publishing the technical whitepaper 800 VDC Architecture for Next-Generation AI Infrastructure and presenting details at the 2025 OCP Global Summit. Any company interested in supporting the 800 VDC architecture can contact us for more information.
