- Nvidia is working with Japan’s SoftBank on advanced edge cloud platforms
- The GPU giant believes it has the answer to telco edge function multi-tenancy where 5G is just another software function running in an ‘AI factory’
- It is claiming to have immediate answers to virtual and Open RAN challenges and clearly has Intel in its sights
- But is SoftBank’s eagerness an accurate reflection of current telco requirements?
With the interest in AI at fever pitch right now, Nvidia is seeking to capitalise on multiple new business opportunities, and one of those is in the radio access network (RAN), where the graphics processing unit (GPU) giant has clearly spied the ideal opportunity to displace Intel from its current position as the primary supplier of semiconductor hardware for virtualised telecom applications.
Its pitch? Nvidia has the “accelerated compute” hardware for server architectures that can simultaneously support AI and software-defined 5G functionality in a way that delivers enhanced performance, flexibility, agility and resource efficiency. Not only that, it has – as reported earlier this week – a major network operator, Japan’s SoftBank, that shares its enthusiasm and which is preparing to deploy such an architecture – see Nvidia and SoftBank team on GenAI, 5G/6G platform.
However, while SoftBank is ready to embrace a next-generation edge cloud-oriented platform that can simultaneously support multiple applications, are other network operators ready to do so? Nvidia’s proposition is compelling but, like many compelling offers in the telecom sector, it might just be ahead of its time and be dependent on the acceleration of a few industry trends – edge computing, virtual and Open RAN – that are currently lagging expectations in terms of the pace and size of deployments.
For Ronnie Vasishta, senior VP of telecom at Nvidia, such trends, just like the uptake and use of AI tools, are absolutely happening and need the best possible supporting technology foundations – and those foundations won’t come from traditional telecom vendors or central processing unit (CPU) vendors (Intel was never mentioned by name during Vasishta’s briefing, it should be noted).
“The CPU has been around for decades and has been open and adaptable but now it is being replaced by accelerated compute systems,” opined Vasishta, and those accelerated compute systems come from Nvidia in the form of its integrated combination of GPU and data processing unit (DPU) elements that have been designed to work together and can then run domain-specific software stacks, whether for robotics or for telecom. (The DPU is a system-on-chip component that combines a multi-core CPU, a network interface card and “flexible and programmable acceleration engines that offload and improve applications performance for AI and machine learning, zero-trust security, telecom and storage, among others,” according to Nvidia).
“All the datacentres are using this accelerated compute architecture – AI is the tipping point, as GenAI is enabling software to become transformational,” stated Vasishta. That the datacentre giants are turning to Nvidia’s hardware is evidenced by the vendor’s recent earnings report – see GenAI fever gives Nvidia a boost.
But what about telecom networks? Vasishta doesn’t believe they’re fit for purpose any longer.
“Traditional telecom networks are built for a single purpose: To run networks and meet peak demand so they are over-provisioned,” he noted – that much is admitted by the operators, as we heard during the recent DSP Leaders World Forum – see BT’s Watson: Telcos need to address the ‘overprovision paradox’.
“As GenAI gets used more, so peak demand will grow and this will lead to [even greater over-provisioning and] under-utilisation,” he added.
The answer, in Nvidia’s view, is full virtualisation, something the industry has been talking about for around a decade and which is still very much a work in progress.
“The Nvidia architecture enables 5G to become not only virtualised but also software-defined… That means 5G becomes a software-defined overlay in a datacentre” with an orchestrated software stack that eliminates the overprovision paradox, according to the Nvidia executive.
“Even if the datacentre is only running the RAN at 25% utilisation, the rest can be used for AI processing, and if demand increases for 5G then the resources get re-assigned,” boasted Vasishta.
Well that certainly ties in with many PowerPoint visions, but…
“It’s easy to say but hard to do, but can be achieved using a combination of CPU, GPU and DPU,” said Vasishta, and with that architecture, “5G becomes a workload in an AI factory”.
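To make the resource-sharing idea a little more concrete, here is a minimal, purely illustrative sketch of the kind of policy Vasishta is describing – a pooled accelerator cluster in which the RAN always gets priority and any spare capacity is backfilled with AI jobs. The class names, policy and numbers are our own assumptions, not Nvidia’s orchestration software.

```python
# Illustrative only: a toy scheduler for a pooled accelerator cluster in which
# RAN (5G) workloads are prioritised and spare capacity is backfilled with AI jobs.
# The class and the rebalancing policy are hypothetical, not Nvidia's software.

from dataclasses import dataclass


@dataclass
class PoolState:
    total_gpus: int      # accelerators available in the edge datacentre
    ran_gpus: int = 0    # currently assigned to 5G/RAN functions
    ai_gpus: int = 0     # currently assigned to AI inference/training jobs


def rebalance(pool: PoolState, ran_demand: int) -> PoolState:
    """Give the RAN whatever it needs (up to the pool size), backfill the rest with AI."""
    ran = min(ran_demand, pool.total_gpus)   # RAN always wins contention
    ai = pool.total_gpus - ran               # everything left over runs AI workloads
    return PoolState(pool.total_gpus, ran_gpus=ran, ai_gpus=ai)


if __name__ == "__main__":
    pool = PoolState(total_gpus=16)

    # Quiet period: the RAN only needs ~25% of the pool, AI soaks up the rest.
    pool = rebalance(pool, ran_demand=4)
    print(pool)   # PoolState(total_gpus=16, ran_gpus=4, ai_gpus=12)

    # Traffic spike: RAN demand grows, AI capacity is reclaimed in real time.
    pool = rebalance(pool, ran_demand=12)
    print(pool)   # PoolState(total_gpus=16, ran_gpus=12, ai_gpus=4)
```

The real orchestration problem – isolating latency-sensitive RAN functions from bursty AI jobs on shared silicon – is, of course, far harder than this toy rebalancing loop suggests.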
But most of today’s 5G networks are not built and designed like that. And although virtual RAN (vRAN) and Open RAN are being deployed in pockets around the world, they remain very much a minority play: according to research house Dell’Oro, early 2023 saw a slowdown in vRAN and Open RAN market growth.
Vasishta, though, is very trend focused.
“Current RAN [systems] perform well and are optimised. But the majority of RAN compute workloads are now moving to server architectures,” he claimed, which is very arguable, unless “now” has a very long timeline.
What is closer to the truth, though, is that current vRAN and Open RAN deployments require “purpose-built accelerators to get up to speed. But single-purpose accelerator deployments eliminate cloud economics as they are used only for the RAN – there are no good economics because it is single provisioned,” noted Vasishta, adding that the Nvidia architecture delivers integrated in-line acceleration by design and is also multi-purpose – “the same accelerator is used for AI” as well as to support 5G, including Open RAN specifications. In addition, “the same hardware [is] used for public cloud, private cloud and distributed cloud,” added the Nvidia man, who then took a giant leap of faith.
“So you are essentially getting 5G for free on top of those AI workloads!”
That’s not such an easy claim to sustain, though it’s true that the model doesn’t require a separate hardware investment for 5G (nothing comes for free, as we have all learned over the years…).
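To see the shape of the economics argument – and only as a hypothetical illustration, with invented cost and utilisation figures rather than any vendor or operator numbers – compare the effective cost of each usefully occupied accelerator-hour for a single-purpose RAN accelerator versus a pooled one that backfills idle time with AI work:

```python
# Back-of-the-envelope illustration only: all figures are assumptions made up for
# the sake of argument, not Nvidia, Intel or operator pricing.

def cost_per_utilised_hour(hourly_cost: float, utilisation: float) -> float:
    """Effective cost of each hour the hardware is doing useful work."""
    return hourly_cost / utilisation

single_purpose = cost_per_utilised_hour(hourly_cost=10.0, utilisation=0.25)  # RAN only, 25% busy
pooled = cost_per_utilised_hour(hourly_cost=10.0, utilisation=0.90)          # RAN + AI backfill

print(f"single-purpose accelerator: ${single_purpose:.2f} per useful hour")  # $40.00
print(f"pooled RAN + AI accelerator: ${pooled:.2f} per useful hour")         # $11.11
```

Under those assumed numbers, the pooled system’s cost per useful hour is roughly a quarter of the single-purpose one – which is the argument Nvidia is making, even if the real-world inputs (hardware prices, power, and whether the AI workloads actually materialise) are far messier.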
The other trend that adds fuel to Nvidia’s fire, but which is also not yet widespread, is the investment in distributed enhanced edge compute capabilities by telcos. In Vasishta’s view, “Telcos are busy building regional datacentres so they can offer cloud services. If these become AI factories, they can run AI and RAN on the same systems.”
There’s no doubt this is an interesting approach to next-generation communications networking architecture and it’s not hard to see how this fits in with the medium- and long-term visions of progressive telcos.
But the economics will face cold, hard analysis by the telcos, and even Vasishta admits that this approach doesn’t make sense unless the Nvidia-based architecture is being used as the supporting hardware for multiple applications, for the pooling of software stacks using shared compute resources. He admits that taking such an approach for a standalone distributed unit (DU) in a virtual/Open RAN deployment doesn’t make sense – “there are cheaper ways to do this,” he noted.
But, he also added, “operators are seeing the benefits of pooling compute architectures for multiple workloads. If you overlay 5G on top of AI, you get cost benefits,” he stated. “We are talking to telcos that have the power and racks at their sites and want to use those sites more efficiently with more computational pooling rather than single source [use case], such as a DU.”
And can these sites be used for pooling – for running AI and 5G functions with the ability to assign resources to 5G in real time as and when demand increases? Orchestration of stacks and resources is part of the Nvidia platform, claimed Vasishta, and applications can be prioritised “in most instances,” he added. Network operator executives will, of course, want to know about the instances when that can’t be achieved…
Nvidia is clearly making the most of its time in the limelight to hammer home a message that will certainly tie in with telco roadmaps. But its claims and comparisons with existing and alternative architectures, including Intel-based vRAN and Open RAN deployments, have not been independently assessed or cost-compared, and the economics will certainly come under intense scrutiny: Nvidia’s products are not cheap, and GPUs are known to be power hungry, neither of which will sit well with operators that have energy-efficiency boxes to tick.
So, does Nvidia have a credible story for telcos? Is this something that will chime with their network, capex, opex and strategic needs? Can Nvidia wrest the virtual RAN chip crown from Intel?
ABI Research’s senior research director, Dimitris Mavrakis, believes Nvidia might be onto something. “Even with high-end massive MIMO configurations, several functional splits and the availability of fibre allow the placement of the DU at a more centralised location, perhaps at an aggregation point or a base-station hotel,” rather than in very close physical proximity to the antennas and towers, noted the analyst in an email response to questions. “These are prime locations for edge datacentres and, as far as I am aware, operators are desperate to monetise them in the best way possible,” he added.
“When we started discussing telco edge, there was an argument that its success requires a ‘hero’ use case that makes the economics work. I believe that AI – and generative AI specifically – could well be this use case that brings telco edge assets into a much more prominent position. This is why I believe the Nvidia story is very interesting, but… the additional cost will need to be justified with the existence of a strong business case before large-scale deployments commence,” added Mavrakis, who expects to see network operators in South Korea follow in SoftBank’s footsteps and work with Nvidia.
“We have been doing some work in what we call ‘distributed connected computing’ – when the telecom network becomes a processing platform for a variety of use cases. This is ideal for AI applications, especially when inference needs to take place close to the device or user,” said the ABI Research man. “Both Intel and Nvidia are now trying to become leaders in this market,” he added.
Indeed, let’s not forget that Intel is not a bystander: Nvidia may have the edge (excuse the pun) currently with a proposition for pooled network edge resource management, but Intel is also well placed to work with partners and existing early vRAN and Open RAN users to develop and offer an alternative to Nvidia – in time. And it’s not just Intel, of course – AMD has aspirations here too and will not stand by and let the GPU specialist own the software-defined 5G infrastructure narrative.
GenAI fever is heating up a lot of interesting debates and industry developments – this new battle to define the future RAN architecture of choice looks like one that will command a lot of airtime in the coming few years.
- Ray Le Maistre, Editorial Director, TelecomTV