How Hardware Can Boost NFV Adoption

These days there’s clearly a massive amount of interest in all things relating to network function virtualization (NFV). But if historical trends hold, the availability of high-performance hardware will be a key driver of NFV adoption, providing a stronger platform for these applications.

The history of computing is one balanced between hardware and software. Time and again, hardware advances have proven to be a boon to software, because the hardware innovation can mitigate the overhead introduced by new software. NFV is not likely to be any different. As SDxCentral has been covering as part of its Business Insights series, virtualization introduces a performance penalty that must be solved with hardware.

“When you introduce network virtualization on top of the same physical switches in place you gain flexibility,” says Cliff Grossner, an industry analyst with IHS Technology. “But you also take a performance hit.”

To compensate for that performance hit, Grossner says more aggressive rollouts of network virtualization platforms in production environments will naturally occur as more robust hardware platforms to run this software become available.

“History has already shown that some virtualization functions always wind up getting embedded in silicon to improve performance,” adds Grossner.

Virtualization Looks for Speed

The good news is that there is now a raft of hardware technologies that promise to boost NFV deployments and help build out more robust software-defined data centers (SDDCs). This could benefit data centers for service providers and enterprises alike. A recent report released by Dell shows that 80 percent of IT and business decision makers are in the process of making the transition to an SDDC architecture. Vendors of all sizes are aiming to accelerate that shift via a wide variety of upgrades to existing network infrastructure.

For example, Netronome has announced that it is now making a 25G Ethernet version of its adapter that offloads the processing of virtual switches such as Open vSwitch (OVS) from the server. With samples scheduled to be available in September, Nick Tausanovitch, vice president of solution architecture and silicon product management for Netronome, says the goal is to make it simpler for organizations to embrace NFV applications, for example, without having to use up processor cores on a server to run them.

“There’s a 20X gain in efficiency when you offload OVS from the server,” says Tausanovitch.
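For readers curious what enabling this kind of offload looks like in practice, a minimal sketch follows. It assumes a NIC and driver that support OVS hardware offload via the kernel's TC flower interface; the bridge and port names are hypothetical, and the exact steps vary by vendor.

```shell
# Enable OVS hardware offload (requires a NIC/driver with TC flower support).
ovs-vsctl set Open_vSwitch . other_config:hw-offload=true

# Restart OVS so the setting takes effect (service name varies by distro).
systemctl restart openvswitch-switch

# Example: create a bridge and add an offload-capable port
# ("br0" and "eth1" are illustrative names, not from the article).
ovs-vsctl add-br br0
ovs-vsctl add-port br0 eth1

# Verify which datapath flows are actually offloaded to hardware.
ovs-appctl dpctl/dump-flows type=offloaded
```

With offload enabled, matching flows are programmed into the NIC and packets never consume host CPU cycles in the OVS datapath, which is where the efficiency gain comes from.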

Hybrid Hardware Approach for NFV

Paul Anderson, director of marketing for Array Networks, says that while Array Networks makes it possible to run its software on a server or on a dedicated appliance, the simple fact is that dedicated hardware makes it simpler to guarantee network service levels.

Rather than getting caught up in philosophical debates, Anderson suggests most organizations would be better off coming to the realization that they will wind up deploying advanced networking services on dedicated hardware as well as on commodity servers. In a public cloud, for example, deploying software may be the only real option, while on premises an IT organization is going to want higher performance to meet the demands of next-generation networking software.

“There’s always going to be a performance penalty for virtualization,” says Anderson.

Obviously, many providers of network virtualization and software-defined networking technologies are anxious to get their wares deployed sooner rather than later. But most networks today consist of a mix of physical switches and routers from different vendors that are already maxed out.

In fact, much of that networking gear is more than three years old. Network virtualization injects a much-needed layer of agility into those environments, making it possible to manage networks at a higher level of abstraction. But the simple fact of the matter is that all that network virtualization software needs to run somewhere. Every CPU cycle allocated to running it is a cycle that is not available to process packets.

While organizations as a whole may be excited by the prospect of their networks becoming easier to manage, almost none of them will be willing to sacrifice application performance to actually achieve it.

Michael Vizard is a contributing analyst and reporter for SDxCentral. Michael is a seasoned IT journalist, with nearly 30 years of experience writing and editing about enterprise IT issues. He formerly was editorial director for Ziff-Davis Enterprise, where he launched the company’s custom content division, and has also served as editor in chief for CRN and InfoWorld. He also has held editorial positions at PC Week, Computerworld and Digital Review.
