What It Takes to Build Next-Generation Data Centers

Joel Snyder, Ph.D., is a senior IT consultant with 30 years of practice. An internationally recognized expert in the areas of security, messaging and networks, Dr. Snyder is a popular speaker and author and is known for his unbiased and comprehensive tests of security and networking products. His clients include major organizations on six continents.

For most network architects, virtualization was the warning shot across the bow: Organizations shouldn’t build data center networks today the way they built them even five years ago, because the fundamental building blocks of enterprise applications have changed.

But virtualization is only one of the changes persuading data center managers to abandon the traditional core-distribution-edge architecture in favor of flatter and faster models. Some of the other trends pushing new requirements on data center architects are:

Jumps in server speed and density, virtualized or not, requiring burst speeds faster than 1 gigabit per second

Few network managers are in a position to rip and replace their data center network. But the installation of new storage and virtualization equipment does offer an opportunity to rethink data center design rather than bolt new equipment onto old structures.

The Four Requirements of Rethinking Data Center Support

Data center networks are being rearchitected as part of a broader transition to next-generation data centers, one that reimagines how applications and the facilities that host them are built. This change extends from power and cooling to servers, storage and networking. The push to rethink how networks support data centers is being driven by four key requirements:

Nonblocking (and high speed): As devices and storage systems generate microbursts of up to 40Gbps, a nonblocking switching architecture becomes critical to predictable application behavior and user satisfaction. Average server speeds may still float below 1Gbps in most data centers, but engineering for averages leaves no headroom for bursts and will degrade application performance. Server network connections are moving universally to 10 Gigabit Ethernet over the next few years.
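The nonblocking requirement comes down to simple arithmetic: a switch's uplink capacity must at least match the downlink capacity it aggregates. A minimal sketch of that calculation (the port counts and speeds below are illustrative assumptions, not figures from the article):

```python
def oversubscription_ratio(downlink_ports, downlink_gbps, uplink_ports, uplink_gbps):
    """Ratio of offered downlink bandwidth to available uplink bandwidth.

    A ratio of 1.0 or less means the switch is nonblocking for traffic
    headed toward the rest of the fabric.
    """
    return (downlink_ports * downlink_gbps) / (uplink_ports * uplink_gbps)

# A common access-switch profile: 48 x 10GbE server ports, 4 x 40GbE uplinks.
print(oversubscription_ratio(48, 10, 4, 40))   # 3.0 -- 3:1 oversubscribed

# Nonblocking needs 12 x 40GbE of uplink to carry the full 480G of downlink.
print(oversubscription_ratio(48, 10, 12, 40))  # 1.0
```

Engineering for the average would accept a high ratio; engineering for microbursts pushes the ratio toward 1:1.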

Lower latency: The movement away from edge-distribution-core toward spine-and-leaf architectures is the most significant change in current designs, cutting the number of switch hops between any two servers. The terminology has been around for a decade or more, but the technology is only now becoming widely available.
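The latency benefit is easiest to see by counting switch hops. A small illustrative comparison of a worst-case three-tier path against a two-tier spine-and-leaf path (the hop counts reflect the textbook topologies, used here as an assumption about a generic design):

```python
def three_tier_hops(same_edge_switch):
    """Worst-case switch hops between two servers in an edge-distribution-core design."""
    if same_edge_switch:
        return 1  # both servers on the same edge switch
    # edge -> distribution -> core -> distribution -> edge
    return 5

def leaf_spine_hops(same_leaf):
    """Switch hops between two servers in a two-tier spine-and-leaf fabric."""
    if same_leaf:
        return 1  # both servers on the same leaf
    # leaf -> spine -> leaf, identical for any pair of leaves
    return 3

print(three_tier_hops(same_edge_switch=False))  # 5
print(leaf_spine_hops(same_leaf=False))         # 3
```

Because every leaf-to-leaf path traverses exactly one spine, latency is not only lower but uniform, which is what makes application behavior predictable.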

Layer 2 flattening: Virtualization and virtual machine migration within and between data centers require Layer 2 extension so that a VM keeps its IP address when it moves. Traditional architectures that carve subnets to optimize routing need to be rebuilt to support the new requirements brought on by virtualization and high-speed data center interconnects.
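One widely used mechanism for this kind of Layer 2 extension (not named in the article) is a VXLAN-style overlay, which tunnels Ethernet frames over routed IP so the physical network stays Layer 3 while VMs see one flat Layer 2 segment. A minimal sketch of the 8-byte VXLAN header defined in RFC 7348:

```python
import struct

def vxlan_header(vni):
    """Build the 8-byte VXLAN header for a given 24-bit VXLAN Network Identifier.

    Layout per RFC 7348: flags byte (0x08 = valid-VNI bit) + 24 reserved bits,
    then the 24-bit VNI + 8 reserved bits.
    """
    if not 0 <= vni < 2**24:
        raise ValueError("VNI is a 24-bit value")
    return struct.pack(">II", 0x08000000, vni << 8)

hdr = vxlan_header(5001)
print(hdr.hex())  # 0800000000138900
```

The encapsulated frame rides inside UDP across the routed underlay, so a VM's MAC and IP addresses travel with it wherever the overlay segment reaches.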

High availability: Network managers are becoming serious about end-to-end high-availability designs, from dual-rail power to redundant network connections at every point in the data center. At the same time, the need for failover times measured in milliseconds, not seconds, is driving new protocols and approaches to high-availability switching and routing.

Requirements in the data center for higher security and more distributed management and control have added to the challenge. Network management and configuration control, long decoupled from daily server operations, are being pushed away from dedicated network teams and into the hands of server managers. The shift stems from virtual switching platforms and from aggressive development and operations teams trying to get network complications out of the way of their applications.

At the same time, administrators are reconsidering security. Traditional data center approaches that treat internal systems as trusted are being upended. The daily news of breach after breach of “secure” applications makes clear that the cost of security is far lower than the cost of insecurity.

This shift is being driven collectively by technical requirements, equipment replacement, virtualization, security and changing views of network management.