Cisco architects explain the benefits of NFV and the market drivers that are enabling its adoption. They start the journey towards NFV by analyzing how networks have evolved over the past decades. They also focus on building foundational knowledge of NFV by introducing its architectural framework and components.

Network functions virtualization (NFV) is a fast-emerging technology area that is heavily influencing the world of networking. It is changing the way networks are designed, deployed, and managed, transforming the networking industry towards a virtualization approach and moving away from customized hardware with prepackaged software.

This chapter walks you through the NFV journey and the market drivers behind it. It acquaints you with the concepts of NFV and examines the ongoing standardization efforts. It lays the foundation for understanding the networking industry's transition to NFV, explaining how the industry is evolving from a hardware-centric approach to a virtualized, software-based network approach in order to meet the needs of cloud-based services, which demand open, scalable, elastic, and agile networks.

The main topics covered in this chapter are:

Evolution from traditional network architecture to NFV

NFV standardization efforts and an overview of the NFV architectural framework

Benefits and market drivers behind NFV

The Evolution of Network Architecture

To appreciate the motivation and need behind the networking industry's fast adoption of NFV, it's helpful to take a look at the history of networking and the challenges that it faces today. Data communication networks and devices have evolved and improved over time, but while networks have become faster and more resilient with higher capacity, they still struggle to cope with the demands of a changing market. The networking industry is being driven by a new set of requirements and challenges brought forward by cloud-based services: the infrastructure needed to support those services and the demand to run them more efficiently. Mega-scale data centers hosting computing and storage, an exponential increase in data-enabled devices, and Internet of Things (IoT) applications are just some examples of areas where existing networks need improved throughput and latency.

This section examines traditional networks and networking devices and identifies the reasons they have been unable to cope with the new types of demands. It also takes a look at the way NFV brings a fresh perspective and different solution to these market-driven needs.

Traditional Network Architecture

The traditional phone network and perhaps even telegraph networks are examples of the earliest data transport networks. Early on, the design criteria and quality benchmarks by which networks were judged were latency, availability, throughput, and the capacity to carry data with minimal loss.

These factors directly influenced the development and requirements of the hardware and equipment used to transport the data (text and voice, in this case). Additionally, these hardware systems were built for very specific use cases, ran tightly coupled proprietary operating systems, and were meant to perform only their targeted functions. With the advent of modern data networks, the requirements and factors that influence network design and device efficiency stayed unchanged (for example, the network design should achieve the highest throughput with minimum latency and jitter over extended distances with minimal loss).

All the traditional networking devices were made for specific functions, and the data networks built from them were tailored and customized to meet these efficiency criteria effectively. The software or code running on these custom-designed hardware systems was tightly coupled to the hardware, closely integrated with the silicon, such as field-programmable gate arrays (FPGA) and application-specific integrated circuits (ASIC), and focused exclusively on performing the specific functions of the device.

Figure 1-1 illustrates some of the characteristics of traditional network devices deployed today.

With the exponential increase in bandwidth demand, heavily driven by video, mobile, and IoT applications, service providers are constantly looking for ways to expand and scale their network services, preferably without significant increase in costs. The characteristics of traditional devices present a bottleneck to this requirement and create many constraints that limit the scalability, deployment costs, and operational efficiency of the network. This situation forces the operators to consider alternatives that can remove the limitations. Let’s examine some of these limitations.

Flexibility Limitations

Vendors design and develop their equipment with a generic set of requirements and offer the functionality as a combination of specific hardware and software. The hardware and software are packaged as a unit and limited to the vendor’s implementation. This restricts the choices of feature combinations and hardware capabilities that can be deployed. The lack of flexibility and customization to meet fast-changing requirements results in inefficient use of resources.

Scalability Constraints

Physical network devices have scalability limitations in both hardware and software. The hardware requires power and space, which can become a constraint in densely populated areas. The lack of these resources may limit the hardware that can be deployed. On the software side, these traditional devices may not be able to keep up with the scale of changes in the data network, such as number of routes or labels. Each device is designed to handle a limited multi-dimensional scale, and once that ceiling is hit, the operator has a very limited set of options aside from upgrading the device.

Time-to-Market Challenges

As requirements grow and change over time, equipment isn't always able to keep up quickly. Service providers are often delayed in offering new services that address shifts in market requirements, because implementing a new service typically requires upgrading the networking equipment. This leads to complex decisions about the appropriate migration path, which may involve re-evaluating equipment, redesigning the network, or possibly qualifying new vendors that are better suited to the new needs. The result is a higher cost of ownership and a longer timeline for offering new services to customers, leading to lost business and revenue.

Manageability Issues

Monitoring tools employed in networks implement standardized monitoring protocols such as Simple Network Management Protocol (SNMP), NetFlow, and syslog for gathering device state and information. For monitoring vendor-specific parameters, however, relying on standard protocols may not suffice; for example, a vendor may use a nonstandard MIB (Management Information Base) or vendor-defined syslog messages. For this depth of monitoring and control, the management tools become very specific and tailored to the vendor's implementation. Whether these management tools are built in-house or offered directly by the vendors, it is sometimes not feasible to port them to a different vendor's devices.
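To make the point concrete, here is a minimal sketch (in Python, using only the standard library) of the kind of vendor-specific parsing a monitoring tool ends up doing. The message layout is modeled loosely on a common router syslog style; the sample message and field names are invented for illustration, not taken from any vendor's documentation.

```python
import re

# Hypothetical vendor-specific syslog line (invented for this sketch).
SAMPLE = ("<189>Jan 12 10:02:33 edge-rtr1 %LINEPROTO-5-UPDOWN: "
          "Line protocol on Interface Gi0/1, changed state to down")

# This pattern is tailored to one vendor's message layout. A second
# vendor's messages would need a different pattern, which is why
# monitoring tools end up tightly coupled to each vendor's implementation.
PATTERN = re.compile(
    r"%(?P<facility>[A-Z]+)-(?P<severity>\d)-(?P<mnemonic>[A-Z]+): (?P<text>.*)"
)

def parse_vendor_syslog(line):
    """Extract facility, severity, and mnemonic from one vendor's syslog format."""
    match = PATTERN.search(line)
    return match.groupdict() if match else None

parsed = parse_vendor_syslog(SAMPLE)
print(parsed["facility"], parsed["severity"], parsed["mnemonic"])
# LINEPROTO 5 UPDOWN
```

A tool built around this parser works only for devices emitting this exact format; pointing it at another vendor's gear means rewriting the pattern and everything that depends on the extracted fields.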

High Operational Costs

The operational costs are high because of the need to have highly trained teams for each vendor-specific system being deployed in the network. This also tends to lock the provider into a specific vendor, because switching to a different vendor would mean additional costs to retrain operational staff and revamp operational tools.

Migration Considerations

Devices and networks need to be upgraded or reoptimized over a period of time. This requires physical access and on-site personnel to deploy new hardware, reconfigure physical connectivity, and upgrade facilities at the site. This creates a cost barrier for migration and network upgrade decisions, slowing down the offering of new services.

Capacity Over-Provisioning

Short- and long-term network capacity demands are hard to predict, so networks are built with excess capacity and often run at less than 50% utilization. Underutilized, overprovisioned networks result in a lower return on investment.
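The cost of over-provisioning can be sketched with simple arithmetic. The figures below are purely illustrative and not drawn from any real network:

```python
# Illustrative (invented) numbers: capacity is provisioned for a worst-case
# forecast, but average traffic lands well below it.
provisioned_gbps = 100.0     # capacity purchased and deployed
average_demand_gbps = 40.0   # typical observed traffic

utilization = average_demand_gbps / provisioned_gbps
idle_share = 1.0 - utilization

print(f"Average utilization: {utilization:.0%}")  # 40%
print(f"Idle capacity: {idle_share:.0%}")         # 60% of the investment carries no traffic
```

In this hypothetical case, more than half of the capital spent on capacity sits idle most of the time, which is exactly the return-on-investment problem described above.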

Interoperability

For faster time to market and deployment, some vendors implement new networking functionality before it is fully standardized. In many cases these implementations are proprietary, creating interoperability challenges that require service providers to validate interoperability before deploying the functionality in a production environment.

Introducing NFV

In data centers, server virtualization is already a proven technology: stacks of independent server hardware systems have mostly been replaced by virtualized servers running on shared hardware.

NFV builds on this concept of server virtualization. It expands the concept beyond servers, widening the scope to include network devices. It also allows the ecosystem to manage, provision, monitor, and deploy these virtualized network entities.

The acronym NFV is used as a blanket term to reference the overall ecosystem that comprises the virtual network devices, the management tools, and the infrastructure that integrates these software pieces with computer hardware. However, NFV is more accurately defined as the method and technology that enables you to replace physical network devices performing specific network functions with one or more software programs executing the same network functions while running on generic computer hardware. One example is replacing a physical firewall appliance with a software-based virtual machine. This virtual machine provides the firewall functions, runs the same operating system, and has the same look and feel—but on non-dedicated, shared, and generic hardware.
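As a toy illustration of the idea, the sketch below reduces a "firewall" network function to ordinary software with no dedicated hardware in sight. The packet fields and rule format are invented for this example and do not correspond to any vendor's product or to a real VNF implementation:

```python
# Toy illustration only: a "firewall" network function expressed as plain
# software that could run on any generic compute host.
from dataclasses import dataclass

@dataclass
class Packet:
    src_ip: str
    dst_ip: str
    dst_port: int

def make_firewall(blocked_ports):
    """Return a filtering function: the 'network function' as pure software."""
    def allow(pkt: Packet) -> bool:
        # Permit the packet unless its destination port is on the block list.
        return pkt.dst_port not in blocked_ports
    return allow

fw = make_firewall(blocked_ports={23, 445})      # block telnet and SMB
print(fw(Packet("10.0.0.1", "10.0.0.2", 443)))   # True: HTTPS passes
print(fw(Packet("10.0.0.1", "10.0.0.2", 23)))    # False: telnet blocked
```

A real virtual firewall is, of course, far more than a port check, but the essential shift is the same: the function lives entirely in software and inherits its resources from whatever generic hardware hosts it.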

With NFV, the network functions can be implemented on any generic hardware that offers the basic resources for processing, storage, and data transmission. Virtualization has matured to the point that it can mask the physical device, making it possible to use commercial off-the-shelf (COTS) hardware to provide the infrastructure for NFV.

COTS

Commercial off-the-shelf (COTS) refers to any product or service that is developed and marketed commercially. COTS hardware refers to general-purpose computing, storage, and networking gear that is built and sold for any use case that requires these resources. It doesn't enforce the use of proprietary hardware or software.

In traditional network architecture, vendors are not concerned about the hardware on which their code will run, because that hardware is developed, customized, and deployed as dedicated equipment for the specific network function. They have complete control over both the hardware and the software running on the device. That allows the vendors flexibility to design the hardware and its performance factors based on the roles these devices will play in the network. For example, a device designed for the network core will have carrier-class resiliency built into it, while a device designed for the network edge will be kept simpler and will not offer high availability to keep its cost low. In this context, many of the capabilities of these devices are made possible with the tight integration of hardware and software. This changes with NFV.

In the case of virtualized network functions, it is not realistic to make assumptions about the capabilities of the underlying hardware, nor is it possible to integrate tightly with the bare hardware. NFV decouples the software from the hardware and promises the ability to use any commercially available hardware to implement a virtualized flavor of a very specific network function.

Virtualization of networks opens up new possibilities in how networks can be deployed and managed. The flexibility, agility, capital and operational cost savings, and scalability made possible by NFV spur innovation and enable new design paradigms and network architectures.