NVIDIA Blackwell Sets New Benchmark Records, Powers Rise of AI Factories
NVIDIA’s new GB200 NVL72 system, built on the Blackwell architecture, has delivered record-breaking results in the MLPerf Inference v5.0 benchmarks, showcasing unmatched AI throughput. This marks the debut of the GB200 NVL72 — a rack-scale, multi-GPU system engineered for high-performance inference in next-gen AI factories.
The GB200 NVL72 achieved up to 30x higher throughput on the Llama 3.1 405B benchmark than prior-generation Hopper-based systems, enabled by massive NVLink interconnect bandwidth and an optimized software stack. NVIDIA’s DGX B200 system also stood out in the newly introduced Llama 2 70B Interactive test, tripling the performance of its predecessor while meeting the test’s tighter real-world latency constraints.
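The Interactive variant of the Llama 2 70B test tightens per-user latency limits, chiefly time to first token (TTFT) and time per output token (TPOT). As a rough illustration of what those two metrics mean in practice, the sketch below measures both against any OpenAI-compatible inference endpoint. The endpoint URL and model name are placeholders, and this is not the official MLPerf harness.

import time
from openai import OpenAI  # pip install openai; works with any OpenAI-compatible server

# Hypothetical endpoint and model name -- substitute your own deployment.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

start = time.perf_counter()
first_token_time = None
chunk_times = []

stream = client.chat.completions.create(
    model="llama-2-70b",  # placeholder model identifier
    messages=[{"role": "user", "content": "Summarize the history of GPUs."}],
    max_tokens=256,
    stream=True,  # stream chunks so TTFT is observable
)

for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        now = time.perf_counter()
        if first_token_time is None:
            first_token_time = now - start  # time to first token (TTFT)
        chunk_times.append(now)

# Time per output token (TPOT): average gap between successive streamed chunks.
if first_token_time is not None and len(chunk_times) > 1:
    tpot = (chunk_times[-1] - chunk_times[0]) / (len(chunk_times) - 1)
    print(f"TTFT: {first_token_time * 1000:.0f} ms, TPOT: {tpot * 1000:.1f} ms")

The interactive scenario holds submissions to tighter TTFT and TPOT ceilings than the standard server scenario while still scoring throughput, which is what makes tripled performance under those constraints notable.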
The NVIDIA Hopper platform continues to gain value through ongoing software optimization, now delivering 1.6x higher throughput on earlier benchmarks such as Llama 2 70B. This reflects NVIDIA’s ability to keep improving results through software alone as models grow in size and complexity.
NVIDIA’s AI factories — purpose-built data centers designed for training, fine-tuning, and inferencing large AI models — are becoming central to enterprise and national AI infrastructure. Powered by full-stack NVIDIA technology, AI factories integrate cutting-edge compute, high-speed networking, data orchestration, and robust software frameworks.
NVIDIA's comprehensive ecosystem supports both on-premises and cloud-based AI factory deployments through its DGX SuperPOD and DGX Cloud platforms. These solutions deliver the performance, flexibility, and scalability enterprises need to meet the exponential demand for AI.
Countries including India, Japan, and Norway are already building AI factories to drive innovation and economic transformation. As enterprises increasingly move toward AI reasoning at scale, NVIDIA’s infrastructure and partner network are helping them manufacture intelligence with greater speed, efficiency, and impact.
2025-04-02