The AI boom reshaping the computing landscape is poised to scale even faster in 2026. As breakthroughs in model capability and computing power drive rapid growth, enterprise data centers are being pushed beyond the bounds of conventional server and rack architectures. This is creating new pressures on power budgets, thermal envelopes, and facility space.
The NVIDIA MGX modular reference architecture provides forward-looking designs that enable faster time-to-market (TTM) with standardized building blocks. MGX helps system partners integrate fast-evolving technologies and deliver the flexible, energy-efficient platforms modern AI data centers require.
This post explores the next evolution of the MGX modular reference architecture: a 6U (800 mm) chassis configuration designed specifically for the next generation of accelerated compute and networking platforms, including the new liquid-cooled variant of the NVIDIA RTX PRO 6000 Blackwell Server Edition GPU.
Flexible, future-proof design with enhanced serviceability
Forward-looking compatibility and flexibility are core design principles of the MGX 6U platform. It features a single chassis that can span multiple computing generations and workload profiles. It's designed to support today's most powerful computing platforms while offering future-proof compatibility, reducing the need for disruptive redesigns over time.
Partners can design these systems with multiple MGX-based host-processor modules (HPMs), including x86 platforms and the next-generation NVIDIA Vera CPU. This allows partners to standardize on a single server design while supporting multiple CPU architectures and workload requirements.
Finally, the larger chassis volume creates accessible service pathways for maintenance. Key components like network cards, power supplies, and other field-replaceable units are easy to reach. This simplifies serviceability and reduces operational overhead when managing rack-scale infrastructure.
Sustainable, efficient computing with liquid-cooled NVIDIA RTX PRO Server
The MGX 6U design is the foundation for the next wave of accelerated computing platforms, starting with a new liquid-cooled NVIDIA RTX PRO Server. This new RTX PRO Server configuration will feature eight of the latest liquid-cooled RTX PRO 6000 Blackwell Server Edition GPUs, along with advanced AI networking capability delivered by NVIDIA BlueField-3 DPUs and NVIDIA ConnectX-8 SuperNICs with built-in PCIe Gen 6 switches (Figure 1).


With a compact, single-slot, liquid-cooled form factor, the RTX PRO 6000 Blackwell delivers breakthrough performance for powering AI factories and accelerating demanding enterprise AI workloads with improved thermal efficiency. It can run the full suite of NVIDIA enterprise software, including NVIDIA AI Enterprise, NVIDIA Omniverse, NVIDIA vGPU, and NVIDIA Run:ai. It provides a universal data center platform for building and deploying the next generation of AI-enabled applications, from agentic AI and physical AI to scientific computing, simulation, graphics, and video.
Moreover, the RTX PRO 6000 Blackwell Server Edition GPU is validated by more than 50 leading enterprise ISVs spanning engineering, scientific computing, and professional visualization applications, as well as the most widely adopted orchestration, management, and AI ops platforms.


High-performance AI networking with NVIDIA ConnectX
Network performance is crucial to maximizing the performance of AI workloads at scale. The MGX 6U reference design supports ConnectX-8 AI networking today and will support ConnectX-9 when it becomes available, delivering Ethernet and InfiniBand connectivity options to meet diverse data center and workload requirements.
The liquid-cooled RTX PRO Server, based on the MGX 6U configuration, features a streamlined system architecture that includes the latest-generation ConnectX-8 SuperNICs with integrated PCIe Gen 6 switches.
Built for AI workloads, ConnectX-8 with integrated PCIe Gen 6 switches supports up to 400 Gb/s of network bandwidth per RTX PRO 6000 Blackwell GPU (based on a 2:1 GPU-to-NIC ratio).
In addition to streamlining the design and reducing server complexity versus systems with dedicated PCIe switches, ConnectX-8 effectively doubles per-GPU network bandwidth. This helps remove I/O bottlenecks and speeds data movement between GPUs, NICs, and storage, resulting in up to 2x higher NCCL all-to-all performance and more scalable multi-GPU, multi-node workloads across AI factories.
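To make the topology concrete, here is a back-of-envelope sketch of how the 400 Gb/s per-GPU figure follows from the 2:1 GPU-to-NIC ratio. The 800 Gb/s per-NIC line rate is an assumption based on the ConnectX-8 SuperNIC's 800 Gb/s class; actual deployed configurations may differ.

```python
# Illustrative bandwidth math for the liquid-cooled RTX PRO Server.
# The per-NIC line rate below is an assumption, not a published spec
# for this specific server configuration.
num_gpus = 8               # RTX PRO 6000 Blackwell GPUs per server
gpu_to_nic_ratio = 2       # 2:1 ratio: two GPUs share one SuperNIC
nic_bandwidth_gbps = 800   # assumed ConnectX-8 SuperNIC line rate (Gb/s)

num_nics = num_gpus // gpu_to_nic_ratio
per_gpu_bandwidth_gbps = nic_bandwidth_gbps / gpu_to_nic_ratio

print(f"SuperNICs per server: {num_nics}")                       # 4
print(f"Per-GPU bandwidth: {per_gpu_bandwidth_gbps:.0f} Gb/s")   # 400 Gb/s
```

Under these assumptions, four SuperNICs serve eight GPUs, and each GPU's share of its NIC works out to the 400 Gb/s cited above.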
AI runtime security and infrastructure acceleration with NVIDIA BlueField
As accelerated infrastructure grows in scale and complexity, securing every layer of the system becomes essential. The MGX 6U design features NVIDIA BlueField data processing units (DPUs) to bring zero-trust security and infrastructure acceleration directly into the data center layer. The BlueField processor offloads and accelerates functions such as line-rate encryption, micro-segmentation, and real-time threat detection, enforcing least-privilege access while freeing the host's computing resources (GPU/CPU) to focus on AI and other modern workloads.
By isolating control and management planes in hardware, BlueField enables organizations to protect AI pipelines from emerging threats while accelerating networking, storage, and virtualization services. Enterprises can further extend these capabilities by deploying validated BlueField-accelerated applications from leading software providers, enhancing both infrastructure efficiency and cybersecurity coverage. This combination helps ensure that RTX PRO Server deployments can scale securely, with consistent performance and policy enforcement across every node in the AI factory.
Building future-ready AI factories
As NVIDIA Blackwell and future GPU generations continue to push beyond traditional computing boundaries, the NVIDIA MGX modular architecture ensures AI factories can evolve with silicon innovations. For ecosystem partners building the next generation of accelerated computing platforms, MGX reduces engineering costs, shortens time to market, and delivers multigenerational compatibility while ensuring optimal performance and efficiency for enterprises deploying AI workloads at scale.
Systems featuring the liquid-cooled NVIDIA RTX PRO 6000 Blackwell Server Edition GPU, along with liquid-cooled RTX PRO Servers based on the MGX 6U configuration, are expected to arrive from global system builders in the first half of 2026.
