ASRock Rack Unveils AI Servers Powered by NVIDIA Blackwell Accelerators
ASRock Rack, a leading provider of server solutions, has announced a range of AI systems equipped with NVIDIA Blackwell GB200, B200, and B100 accelerators. These servers are designed for resource-intensive artificial intelligence (AI) and high-performance computing (HPC) workloads. ASRock Rack showcased the systems, including models featuring advanced liquid cooling, at a recent event.
NVIDIA Blackwell-based Systems: The newly introduced products, built on the NVIDIA Blackwell architecture, span several offerings. The NVIDIA GB200 NVL72 ORV3 rack system stands out for its liquid cooling, which handles the thermal demands of densely packed accelerators. ASRock Rack also presented the 6U8X-GNR2/DLC NVIDIA HGX B200 server, which uses direct-to-chip liquid cooling and supports up to eight NVIDIA HGX B200 accelerators in a 6U form factor.
Rounding out the lineup, the air-cooled 6U8X-EGS2 NVIDIA HGX B100 server accommodates up to eight NVIDIA HGX B100 accelerators. All of the new ASRock Rack NVIDIA HGX servers support up to eight NVIDIA BlueField-3 SuperNIC DPUs for enhanced networking.
NVIDIA MGX Modular Architecture: ASRock Rack also showcased systems based on the NVIDIA MGX modular architecture, notably the 4UMGX-GNR2 server in a 4U form factor. This dual-socket server can hold eight FHFL accelerators and provides five FHHL PCIe 5.0 x16 slots plus one HHHL PCIe 5.0 x16 slot; these slots support the NVIDIA BlueField-3 DPU and NVIDIA ConnectX-7 NIC. The server also includes 16 hot-swappable E1.S (PCIe 5.0 x4) drive bays for storage.
Commitment to AI and HPC: Weishi Sa, President of ASRock Rack, underscored the company’s focus on demanding workloads. “We’ve introduced data center solutions based on NVIDIA Blackwell architecture for the most demanding workloads in large language model (LLM) training and generative AI. We intend to continue to expand the family of these servers,” stated Sa.
ASRock Rack’s commitment to pushing the boundaries of AI and HPC was further evident at Computex 2024, where the company showcased additional systems powered by NVIDIA accelerators. Among them was the MECAI-GH200 model, which holds the distinction of being the world’s most compact server equipped with the NVIDIA GH200 superchip at the time of its announcement.
With the introduction of AI servers powered by NVIDIA Blackwell accelerators, ASRock Rack has strengthened its position as a frontrunner in advanced solutions for AI and HPC workloads. The combination of cutting-edge accelerators, liquid cooling, and modular architecture sets these servers apart, offering high performance and flexibility for data centers and enterprises engaged in resource-intensive tasks such as LLM training and generative AI.
As the demand for powerful AI and HPC solutions continues to grow, ASRock Rack’s commitment to innovation and its partnership with NVIDIA position the company to meet the evolving needs of the industry. The unveiling of these groundbreaking servers marks a significant milestone in the advancement of AI and HPC technologies, paving the way for further breakthroughs and discoveries in these fields.