- Liberty Global held an AI-focused event in London this week
- The company brought in experts from Microsoft and EY to discuss all things AI
- They are all bullish that LLMs will ultimately result in a positive sustainability outcome as the focus shifts away from training
As the telco industry looks for ways to become more sustainable, including by using AI to optimise energy consumption, there are significant concerns about the vast amounts of energy needed to feed power-hungry large language models (LLMs). Still, experts from Liberty Global, Microsoft and EY are unfazed, bullish that the models will improve with time and deliver energy-efficiency gains nonetheless.
Earlier this week, Liberty Global hosted a panel discussion in London covering all things AI, including the telco group’s plans to use the technology and its impact on a number of areas.
Responding to a question from TelecomTV about whether the industry might end up in a paradoxical situation, in which the power needed to run AI exceeds any energy savings achieved through its use, Liberty Global CTO Enrique Rodriguez (pictured above, second from left) was optimistic that the balance will improve over time.
“I think today we’re still at a stage where the development of AI and, very specifically, LLMs” is still the dominant focus for the AI sector, rather than the use of AI applications, noted the CTO. “The LLMs happen to be very power-hungry, mainly on the development side… [but] as we start deploying them,” the pendulum will start to shift. “I expect the balance is going to change over the next few years where less and less of the pie goes into the generation side and a lot more of the pie goes into the usage side,” a shift that, he suggested, will put the emphasis on energy efficiency.
He cited an early test on the telco group’s mobile networks, which showed that within a week the company had cut its power usage by 10% using AI. “We have no other initiative that comes even close to saving 10% of power on a mobile network. So, I’m pretty optimistic that we’re going to see the other side of this,” Liberty Global’s tech top-dog explained.
According to the CTO, using AI can ultimately save power on mobile towers: “These are hardcore engineering problems where AI can provide a significant improvement.”
In terms of the impact of the growing demand for AI workloads on its network, Rodriguez said it is “not nearly as big a deal as certain people make it out to be”.
However, the impact is much more noticeable when it comes to datacentres – another of the company’s interests as it owns 50% of datacentre operator AtlasEdge.
“We are seeing the requirements for these datacentres increase dramatically and we are seeing that probably the first signs of an actual bottleneck are more on the power infrastructure than on pretty much everything else [together],” he stated.
Jed Griffiths, UK chief digital officer at Microsoft (pictured, third from left), agreed that the use of AI will ultimately have a positive impact on energy consumption. He noted that there is still a lot of optimisation to be done when it comes to both datacentre usage and telco network energy consumption. “There are not huge uses of automation and optimisation in the energy system at the moment – it’s still very traditional. There is a huge amount of scope there,” Griffiths added.
Harvey Lewis, partner at EY (pictured, far right), offered concrete figures for some of the LLMs available today, suggesting they consume substantial amounts of energy. “When you look at the rough estimates about the training of GPT-4 by OpenAI, it required about 25,000 Nvidia A100 GPUs [graphics processing units] to run for about 100 days, and the amount of electricity consumed in the training of that model would have powered 30 million UK homes for a year and a half,” noted Lewis.
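For readers who want to get a feel for those headline numbers, the GPU electricity implied by a 25,000-GPU, 100-day run can be sketched with simple arithmetic. The GPU count and duration are from Lewis’s remarks; the average draw of roughly 400W per A100 is an illustrative assumption on our part, not a figure from the panel, and it covers the accelerators only, not datacentre cooling or the rest of the infrastructure.

```python
# Back-of-envelope estimate of GPU electricity for the cited training run.
# GPU count and duration are from the article; the 400 W average draw
# per A100 is an assumption (accelerators only, no cooling/overhead).
gpus = 25_000
days = 100
avg_watts_per_gpu = 400  # assumed average draw per A100

hours = days * 24
energy_kwh = gpus * (avg_watts_per_gpu / 1000) * hours  # kW * h

print(f"Estimated GPU electricity: {energy_kwh / 1e6:.0f} GWh")
# → Estimated GPU electricity: 24 GWh
```

Real-world estimates vary widely depending on the assumed utilisation, hardware efficiency and datacentre overhead, which is why such figures are usually quoted as rough ranges.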
Echoing Rodriguez’s point that the pendulum will soon swing away from LLM training and towards inference, Lewis added: “If you put these things to work, the compute requirements are substantially lower. And if you think that these models are becoming more genuinely capable, we can put them to work in many different ways, so you’re seeing some benefits in the use of these expensive models.”
He was also bullish that the tech industry in general is getting “much, much better at training the right models for the right tasks” which, he explained, makes the algorithms “much more energy-efficient”.
“Last week, Microsoft released Phi-3, which is an open-source model – it is not a GPT-4 class model, but it’s not too far from it” and yet its carbon footprint is much smaller. One reason for that is the pre-selection of the data used to train the LLM. “If you curate the data when you train that model, you’re only training it on data that it needs to fulfil the task” and, as a result, “you get massive efficiency gains,” explained Lewis.
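The curation idea Lewis describes can be illustrated in miniature: filter the corpus down to task-relevant material before training, so the model processes far fewer tokens for the same capability. The toy corpus and the domain-tag relevance test below are our own illustrative assumptions, not a description of how Phi-3’s training data was actually selected.

```python
# Illustrative sketch of data curation before training: keep only the
# documents relevant to the target task, shrinking the corpus (and hence
# the training compute) before any model sees it.
# The corpus and relevance test are toy assumptions, not Phi-3's pipeline.
corpus = [
    {"text": "How do I reset my router?", "domain": "telecom"},
    {"text": "Best pasta recipes for autumn", "domain": "cooking"},
    {"text": "Explain 5G network slicing", "domain": "telecom"},
    {"text": "Celebrity gossip roundup", "domain": "entertainment"},
]

# Curate: retain only documents relevant to the task (telecom support).
curated = [doc for doc in corpus if doc["domain"] == "telecom"]

print(f"Kept {len(curated)} of {len(corpus)} documents")
# → Kept 2 of 4 documents
```

In practice the relevance filter is far more sophisticated (quality classifiers, deduplication, synthetic-data generation), but the compute saving comes from the same basic move: a smaller, better-targeted training set.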
He concluded that there are constant advancements in this space, but “clearly, more work needs to be done”.
- Yanitsa Boyadzhieva, Deputy Editor, TelecomTV