- NFV has been too slow to arrive, so a new white paper describing a new basic framework has been produced
- It’s not a rip-it-up-and-start-again exercise, but a restructuring
- Integration is the real problem
What’s gone wrong with NFV? There have long been mutterings about its progress being too slow and its seeming inability to meet its own objectives. The first volley of charges was, amongst other things, that initial NFV conceptions were based on virtual machines and were therefore unable to meet resilience requirements.
As a result the movement switched its attention to cloud native and containers. Then open source came to be seen as an important way past vendor lock-in, but that seemed to slow development even further.
When challenged, the NFV founders (who authored the 2012 white paper that kicked the movement off) pointed out that these things always take more time than initially thought and, given the scope and complexity of NFV, it wasn’t surprising that virtualized networks hadn’t sprung up fully formed. It would be more like 10 years from start to finish before all the issues were worked through, they said. That gives the movement until about 2022 to come good.
Will it?
Will NFV as it’s currently developing be ready to usher in ‘real’ 5G (the very high speed 5G with all the attached use cases) when required?
Some think not, so it’s not surprising that a new conception for how NFV should be organised has been introduced at this week's Open Networking Summit North America 2019 via, fittingly, a new white paper and website.
‘Lean NFV’ makes the case for a partial re-think. Not a rip-it-up-and-start-again, but a restructuring so that better ongoing integration of all the NFV elements can be engineered.
Its proponents argue that there is currently too much tight coupling, which makes innovation and automation very difficult. They say the real task is coordinating dozens or hundreds of components, something the current structure makes hard and which, it’s claimed, lies at the heart of NFV’s current problems.
To untangle this, the white paper first identifies the three main components of today’s conception:
The NFV manager: This is the entity that handles common lifecycle management tasks for both individual VNFs and end-to-end NFV service chains.
The computational infrastructure: This includes the compute resources (bare metal or virtualized) and the connectivity between them (provided by a physical or virtual fabric); the former is managed by a compute controller (e.g., OpenStack) and the latter by an SDN controller.
Virtualized Network Functions (VNFs): These can include both data plane and control plane components.
To strip out this complexity, the Lean NFV proponents want to add a fourth component to the three above: a key-value (KV) store that serves as a universal point of integration. As the white paper puts it:
“We believe that rather than standardizing on all-encompassing architectures, or adopting large and complicated codebases, the NFV movement should focus exclusively on simplifying these three points of integration, leaving all other aspects of NFV designs open for innovation. To this end, we advocate adding a fourth element to the NFV solution, a key-value (KV) store, that serves as a universal point of integration.”
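To make the idea concrete, here is a minimal, hypothetical sketch (in Python, with invented names and an in-memory stand-in for an etcd/Consul-style store) of how a KV store could act as that single point of integration: VNFs publish their config and health under well-known keys, and the NFV manager coordinates lifecycle actions by reading and writing the same store rather than calling each component’s private API. None of this is prescribed by the Lean NFV white paper; it simply illustrates the loose coupling being argued for.

```python
# Sketch only: a key-value store as the single integration point between
# NFV components. All names and key layouts are hypothetical, not part of
# any Lean NFV specification.

import json
import time


class KVStore:
    """In-memory stand-in for an etcd/Consul-style key-value store."""

    def __init__(self):
        self._data = {}

    def put(self, key, value):
        self._data[key] = json.dumps(value)

    def get(self, key):
        raw = self._data.get(key)
        return json.loads(raw) if raw is not None else None

    def keys(self, prefix):
        return [k for k in self._data if k.startswith(prefix)]


def vnf_register(kv, vnf_id, desired_config):
    """A VNF (or its wrapper) publishes its config and health under a well-known prefix."""
    kv.put(f"/vnf/{vnf_id}/config", desired_config)
    kv.put(f"/vnf/{vnf_id}/health", {"status": "running", "ts": time.time()})


def manager_reconcile(kv):
    """The NFV manager coordinates lifecycle by reading the KV store,
    not by calling each component's private API (loose coupling)."""
    for key in kv.keys("/vnf/"):
        if key.endswith("/health"):
            health = kv.get(key)
            if health["status"] != "running":
                vnf_id = key.split("/")[2]
                # Record the desired action; an infrastructure controller
                # watching this prefix would carry it out.
                kv.put(f"/actions/{vnf_id}", {"action": "restart"})


if __name__ == "__main__":
    kv = KVStore()
    vnf_register(kv, "firewall-01", {"image": "fw:1.2", "cpu": 2})
    manager_reconcile(kv)
    print(kv.get("/vnf/firewall-01/config"))
```

The point of the sketch is the shape of the interaction: components never call each other directly, so any of them can be swapped or scaled independently as long as it honours the shared key layout.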
There is much more to be explored on this issue. Expect to read more on Lean NFV next week.