Let’s look a bit deeper at what NFV is. It comprises virtual network functions (VNFs), which we are all familiar with, and we envision them running on commodity hardware. Hence, they need an infrastructure layer: compute and storage resources connected by an Ethernet switch. This layer is referred to as the network function virtualization infrastructure (NFVI).

To make the infrastructure shareable across the different VNFs, there is a need for virtualization (sharing) of interfaces, compute resources, storage resources, and network connectivity. This need is fulfilled by the hypervisor domain, which abstracts the underlying hardware and makes the VNFs portable. It manages and schedules the compute and storage resources among the different VNFs. To facilitate networking between the VNFs themselves and with the external network, it provides switching functionality, essentially the capability to build an overlay network.
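The overlay idea can be made concrete with a minimal sketch of VXLAN (RFC 7348), a tunneling scheme commonly used by hypervisor switches: each tenant Ethernet frame is prefixed with a small header carrying a 24-bit virtual network identifier (VNI), and the result is carried over the shared physical network. This sketch builds only the VXLAN shim itself, not the outer UDP/IP headers:

```python
import struct

VXLAN_PORT = 4789  # IANA-assigned UDP port for VXLAN


def vxlan_encap(inner_frame: bytes, vni: int) -> bytes:
    """Prepend a VXLAN header to a tenant Ethernet frame.

    In a real overlay the result is carried in a UDP/IP packet
    between hypervisor switches; only the VXLAN shim is built here.
    """
    if not 0 <= vni < 2**24:
        raise ValueError("VNI is a 24-bit identifier")
    # First word: the I flag (bit pattern 0x08 in the top byte) marks
    # the VNI field as valid; all reserved bits are zero.
    # Second word: the VNI sits in the upper 24 bits.
    header = struct.pack("!II", 0x08 << 24, vni << 8)
    return header + inner_frame


def vxlan_decap(packet: bytes) -> tuple:
    """Strip the VXLAN header; return (vni, inner_frame)."""
    flags, vni_field = struct.unpack("!II", packet[:8])
    assert flags >> 24 == 0x08, "I flag must be set"
    return vni_field >> 8, packet[8:]
```

Two VNFs on different hosts that share a VNI see each other as if attached to the same Ethernet segment, while frames with different VNIs stay isolated on the same physical wire.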

One of the attractive features of NFV is the ability to instantiate VNFs on demand and across geographies. The orchestration and management framework makes this possible by using the services of the NFVI and the hypervisor domain: it loads and monitors virtual machines and makes the necessary network interconnects.
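The load-and-monitor responsibility can be sketched as a toy management layer. This is an illustrative model only; the class and method names are hypothetical, and a real MANO stack would delegate the actual boot to the hypervisor domain (libvirt, OpenStack Nova, and similar):

```python
from enum import Enum


class VMState(Enum):
    RUNNING = "running"
    FAILED = "failed"


class Orchestrator:
    """Toy management layer: loads VNF instances and monitors them.

    Hypothetical sketch; a real orchestrator talks to the NFVI's
    virtualization layer rather than holding state in a dict.
    """

    def __init__(self):
        self.instances = {}

    def instantiate(self, name: str, region: str) -> None:
        # Stand-in for asking the NFVI in `region` to boot a VM
        # from the VNF image and wire up its network interconnects.
        self.instances[name] = {"region": region, "state": VMState.RUNNING}

    def monitor(self) -> list:
        """Return names of instances that need remedial action."""
        return [name for name, inst in self.instances.items()
                if inst["state"] is VMState.FAILED]
```

The on-demand, cross-geography property falls out of `instantiate` taking a region argument: the same image can be booted wherever capacity and demand dictate.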

Now, with the building blocks in place, what is left is the service. Typically, a single VNF realizes one given functionality; an end-to-end service may require a combination of VNFs. Software-defined networking (SDN) facilitates this by connecting the VNFs in a predefined sequence to realize an end-to-end service, something we commonly refer to as “service chaining”.

One of the key requirements of NFV is service fulfillment. To ensure that the new architecture meets the same level of service guarantee as today’s network, we need enhancements in the hypervisor domain. These enhancements are essential because the throughput, latency, and jitter requirements of a VNF are far different from those of a typical data-center application. We see new enhancements emerging in this area, such as user-mode poll drivers, SR-IOV, and DMA remapping. They help to increase throughput by offloading some of the functions from the compute domain or by optimizing them through software architectural changes such as poll drivers. One manifestation of this is the increased use of intelligent NICs as part of the NFVI, as they aid the offload process (e.g., SR-IOV).
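The poll-driver idea can be illustrated with a toy model: instead of sleeping until an interrupt signals packet arrival, a dedicated core busy-polls the NIC’s receive ring in bursts, avoiding interrupt and context-switch overhead. This is the pattern behind DPDK’s `rte_eth_rx_burst()`; the classes below are an illustrative stand-in, not the DPDK API:

```python
from collections import deque


class RxRing:
    """Toy stand-in for a NIC receive ring."""

    def __init__(self):
        self._ring = deque()

    def put(self, pkt: bytes) -> None:
        self._ring.append(pkt)

    def poll(self, burst: int = 32) -> list:
        """Dequeue up to `burst` packets without blocking.

        Non-blocking burst receive is the core idea behind
        user-mode poll drivers: no interrupt, no syscall.
        """
        out = []
        while self._ring and len(out) < burst:
            out.append(self._ring.popleft())
        return out


def poll_loop(ring: RxRing, max_iters: int) -> int:
    """Busy-poll the ring; a real poll-mode driver pins this loop
    to a dedicated core and runs it forever."""
    handled = 0
    for _ in range(max_iters):
        for _pkt in ring.poll():
            handled += 1  # process the packet inline, in user space
    return handled
```

The trade-off is explicit: the polling core burns 100% CPU even when idle, in exchange for predictable low latency and jitter, which is exactly the profile a packet-processing VNF needs and a typical data-center application does not.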

The orchestration and management of the NFV applications and the hypervisor layers is another key challenge to be solved. Here, the industry is leveraging its experience of deploying cloud infrastructure using software such as OpenStack or CloudStack to automate the deployment of VNFs, while addressing the service requirements by building the capability to choose and launch the appropriate NFVI to host the VNFs. These enhancements ensure that the end-to-end service quality metrics are met.
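The “choose the appropriate NFVI” step is essentially a constrained placement decision. A toy version, with hypothetical site attributes, might filter candidate sites by the VNF’s resource, latency, and offload requirements and pick the best survivor before asking OpenStack or CloudStack to boot the VM:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Site:
    """Candidate NFVI point of presence (attributes are illustrative)."""
    name: str
    free_vcpus: int
    latency_ms: float   # measured latency to the service endpoint
    sriov: bool         # intelligent-NIC offload available?


def place_vnf(sites: list, vcpus: int, max_latency_ms: float,
              need_sriov: bool = False) -> Optional[Site]:
    """Pick the lowest-latency NFVI site meeting the VNF's requirements.

    Toy sketch of the placement decision an orchestrator makes
    before delegating the actual launch to the cloud platform.
    """
    feasible = [s for s in sites
                if s.free_vcpus >= vcpus
                and s.latency_ms <= max_latency_ms
                and (s.sriov or not need_sriov)]
    return min(feasible, key=lambda s: s.latency_ms, default=None)
```

Returning `None` when no site qualifies is deliberate: at that point the orchestrator must either relax the request or report that the service-quality target cannot be met, rather than silently placing the VNF somewhere that violates it.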

However, like any new technology, NFV has teething troubles. The comfort factor here is that we are seeing a different way of doing things in which different ecosystem partners bring in their areas of expertise. Technology that is mature and has been used very effectively in other environments is being adapted to suit the new domain; it is not something entirely new and alien.

With the ecosystem guided by the ETSI NFV working group — which is working overtime to resolve all the challenges pertaining to standardization; implementation; testing and debugging; and performance — we expect that NFV will be adopted readily by most communication service providers very soon.