NVIDIA Collaborates With SoftBank Corp. to Power SoftBank’s Next-Gen Data Centers
COMPUTEX—NVIDIA and SoftBank Corp. today announced they are collaborating on a pioneering platform for generative AI and 5G/6G applications, based on the NVIDIA GH200 Grace Hopper™ Superchip, which SoftBank plans to roll out at new, distributed AI data centers across Japan.
Paving the way for rapid, worldwide deployment of generative AI applications and services, SoftBank, in collaboration with NVIDIA, will build data centers that can host generative AI and wireless applications on a multi-tenant common server platform, reducing costs and energy consumption.
The platform will use the new NVIDIA MGX™ reference architecture with Arm Neoverse-based GH200 Superchips and is expected to improve performance, scalability and resource utilization of application workloads.
“As we enter an era where society coexists with AI, the demand for data processing and electricity requirements will rapidly increase. SoftBank will provide next-generation social infrastructure to support the super-digitalized society in Japan,” said Junichi Miyakawa, president and CEO of SoftBank Corp. “Our collaboration with NVIDIA will help our infrastructure achieve a significantly higher performance with the utilization of AI, including optimization of the RAN. We expect it can also help us reduce energy consumption and create a network of interconnected data centers that can be used to share resources and host a range of generative AI applications.”
“Demand for accelerated computing and generative AI is driving a fundamental change in the architecture of data centers,” said Jensen Huang, founder and CEO of NVIDIA. “NVIDIA Grace Hopper is a revolutionary computing platform designed to process and scale out generative AI services. As with other visionary initiatives in its past, SoftBank is leading the world in creating a telecom network built to host generative AI services.”
The new data centers will be more evenly distributed across SoftBank's footprint than those used in the past and will handle both AI and 5G workloads. This will allow them to operate closer to peak capacity, with low latency and at substantially lower overall energy costs.
SoftBank is exploring the creation of 5G applications for autonomous driving, AI factories, augmented and virtual reality, computer vision and digital twins.
Virtual RAN With Record-Breaking Throughput
NVIDIA Grace Hopper and NVIDIA BlueField®-3 data processing units will accelerate the software-defined 5G vRAN, as well as generative AI applications, without bespoke hardware accelerators or specialized 5G CPUs. Additionally, the NVIDIA Spectrum Ethernet switch with BlueField-3 will deliver a highly precise timing protocol for 5G.
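The "highly precise timing protocol" for 5G fronthaul is typically IEEE 1588 Precision Time Protocol (PTP), which the Spectrum switch and BlueField-3 can timestamp in hardware. As a minimal sketch of what that synchronization involves, the Python snippet below shows the standard PTP offset and mean-path-delay calculation from the protocol's four timestamps; the timestamp values are hypothetical, and this illustrates the protocol math only, not NVIDIA or SoftBank code.

```python
# Minimal sketch of the IEEE 1588 (PTP) offset/delay calculation that
# underpins precise 5G timing. All timestamp values are hypothetical.

def ptp_offset_and_delay(t1: int, t2: int, t3: int, t4: int) -> tuple[int, int]:
    """Compute clock offset and mean path delay (nanoseconds) from the
    four PTP timestamps:
      t1: Sync sent by the master      t2: Sync received by the slave
      t3: Delay_Req sent by the slave  t4: Delay_Req received by the master
    """
    offset = ((t2 - t1) - (t4 - t3)) // 2
    mean_path_delay = ((t2 - t1) + (t4 - t3)) // 2
    return offset, mean_path_delay

# Hypothetical timestamps (ns): the slave clock runs ~150 ns ahead of the master
# over a ~500 ns path.
offset, delay = ptp_offset_and_delay(t1=1_000_000, t2=1_000_650,
                                     t3=1_002_000, t4=1_002_350)
print(f"offset = {offset} ns, mean path delay = {delay} ns")
```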
The solution achieves breakthrough 5G speed on an NVIDIA-accelerated 1U MGX-based server design, delivering industry-leading downlink capacity of 36 Gbps, based on publicly available data on 5G accelerators. Operators have struggled to deliver such high downlink capacity using industry-standard servers.
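For a sense of scale, the sketch below uses the 3GPP TS 38.306 peak-data-rate formula to estimate how many fully loaded 100 MHz cells 36 Gbps of aggregate downlink could correspond to. Every per-cell parameter (layers, modulation, TDD duty cycle, overhead) is an illustrative assumption, not a figure from the announcement.

```python
# Back-of-envelope: how many fully loaded 5G NR cells could 36 Gbps of
# aggregate downlink represent? Uses the 3GPP TS 38.306 peak-rate formula.
# All per-cell parameters are illustrative assumptions.

def nr_peak_rate_gbps(layers=4, mod_order=8, scaling=1.0, code_rate=948/1024,
                      n_prb=273, numerology=1, overhead=0.14, dl_duty=0.75):
    """Approximate NR downlink peak rate for one carrier, in Gbps."""
    symbol_duration = 1e-3 / (14 * 2 ** numerology)           # seconds
    rate = (layers * mod_order * scaling * code_rate
            * (n_prb * 12) / symbol_duration * (1 - overhead) * dl_duty)
    return rate / 1e9

per_cell = nr_peak_rate_gbps()   # 100 MHz TDD carrier, 4 layers, 256-QAM
cells = 36 / per_cell
print(f"~{per_cell:.2f} Gbps per cell -> roughly {cells:.0f} cells for 36 Gbps")
```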
New Reference Architecture
NVIDIA MGX is a modular reference architecture that enables system manufacturers and hyperscale customers to quickly and cost-effectively build over a hundred different server variations to suit a wide range of AI, HPC and NVIDIA Omniverse™ applications.
By incorporating NVIDIA Aerial™ software for high-performance, software-defined, cloud-native 5G networks, these MGX-based servers will act as 5G base stations that allow operators to dynamically allocate compute resources and achieve 2.5x better power efficiency than competing products.
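The architectural idea behind this dynamic allocation is that a shared GPU pool can shift between vRAN signal processing and AI workloads as radio traffic varies over the day. The sketch below illustrates that scheduling concept in the abstract; it is not the NVIDIA Aerial API, and all names, units and thresholds are hypothetical.

```python
# Conceptual sketch of dynamic compute allocation between 5G vRAN and
# generative AI workloads on a shared GPU pool. This is NOT the NVIDIA
# Aerial API; all names, units and thresholds are hypothetical.

from dataclasses import dataclass

@dataclass
class GpuPool:
    total: int          # GPUs in the server
    ran_reserved: int   # GPUs currently pinned to vRAN processing

    @property
    def ai_available(self) -> int:
        return self.total - self.ran_reserved

def rebalance(pool: GpuPool, cell_load: float, min_ran: int = 1) -> GpuPool:
    """Scale the vRAN GPU share with radio load (0.0-1.0); the remainder is
    released to AI tenants. Always keep a minimum RAN reservation."""
    ran = max(min_ran, round(pool.total * cell_load))
    return GpuPool(total=pool.total, ran_reserved=min(ran, pool.total))

pool = GpuPool(total=8, ran_reserved=8)
for hour, load in [(3, 0.10), (9, 0.45), (20, 0.90)]:   # hypothetical daily load profile
    pool = rebalance(pool, load)
    print(f"{hour:02d}:00  load={load:.0%}  RAN GPUs={pool.ran_reserved}  AI GPUs={pool.ai_available}")
```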
“The future of generative AI requires high-performance, energy-efficient compute like that of the Arm Neoverse-based Grace Hopper Superchip from NVIDIA,” said Rene Haas, CEO of Arm. “Combined with NVIDIA BlueField DPUs, Grace Hopper enables the new SoftBank 5G data centers to run the most demanding compute- and memory-intensive applications and bring exponential efficiency gains to software-defined 5G and AI on Arm.”