NVIDIA Unveils MGX Server Specification to Enable Versatile Accelerated Computing for Data Centers

In its COMPUTEX opening keynote, NVIDIA unveiled the NVIDIA MGX server specification, designed to meet the diverse and evolving accelerated computing needs of data centers worldwide. The MGX specification gives system manufacturers a modular reference architecture that lets them build more than 100 server variations efficiently and cost-effectively. With its flexibility and scalability, the MGX platform supports a wide range of applications, including AI, high-performance computing, and Omniverse.


Reduced Development Time and Costs

By leveraging MGX, manufacturers can cut development costs by up to 75% and reduce development time to just six months, a two-thirds reduction compared with conventional approaches.

This streamlined process ensures accelerated deployments and empowers businesses to quickly adapt to their specific computing requirements.

Collaboration with Industry Leaders

NVIDIA has collaborated with leading system manufacturers, including ASRock Rack, ASUS, GIGABYTE, Pegatron, QCT, and Supermicro, who have committed to adopting the MGX server specification.

  • QCT and Supermicro will be the first to market with MGX designs, which are set to debut in August.
  • Supermicro’s ARS-221GL-NR system will incorporate the NVIDIA Grace™ CPU Superchip.
  • QCT’s S74G-2U system will leverage the NVIDIA GH200 Grace Hopper Superchip.

Unleashing the Power of MGX

The modular design of the MGX specification empowers system manufacturers to build servers that cater to unique workloads, such as HPC, data science, large language models, edge computing, graphics and video, enterprise AI, and design and simulation.

Whether it’s AI training or 5G applications, MGX facilitates the efficient handling of multiple tasks on a single machine.

Addressing Diverse Computing Needs

NVIDIA designed the MGX server specification with today's central data center challenge in mind: meeting ever-increasing compute demands while reducing both carbon emissions and costs.

The MGX specification takes this a step further by enabling system manufacturers to meet each customer’s unique budget, power delivery, thermal design, and mechanical requirements. This flexibility ensures that data centers can maximize their capabilities while minimizing their environmental impact.

Versatile Form Factors and Compatibility

MGX supports various form factors, including 1U, 2U, and 4U chassis, which can be either air or liquid cooled.

  • Compatible with a wide range of NVIDIA GPUs, including H100, L40, and L4 models.
  • Supports different CPUs, such as NVIDIA Grace CPU Superchip, GH200 Grace Hopper Superchip, and x86 CPUs.
  • Networking capabilities are enhanced with NVIDIA BlueField®-3 DPU and ConnectX®-7 network adapters.
  • Designed to seamlessly integrate with current and future generations of NVIDIA hardware.

Driving Acceleration with Software

The MGX specification is supported by NVIDIA’s comprehensive software stack, which enables developers and enterprises to build and accelerate AI, HPC, and other applications. NVIDIA AI Enterprise is the software layer of the NVIDIA AI platform that:

  • Offers access to over 100 frameworks, pretrained models, and development tools for AI and data science.
  • Enables businesses to leverage the full potential of both hardware and software.
  • Helps accelerate AI and data science tasks and drives innovation.
  • Helps businesses achieve optimal performance through the combined power of MGX and NVIDIA AI Enterprise.

As the industry embraces this technology, the era of versatile and efficient data centers moves closer to reality. Alongside the unveiling of the MGX server specification, NVIDIA founder and CEO Jensen Huang introduced a notable development for the gaming industry: NVIDIA Avatar Cloud Engine (ACE) for Games, a custom AI model foundry service aimed at transforming the way non-playable characters (NPCs) interact with players.

Source: NVIDIA Newsroom
