For most of the last four months I’ve been on the road, doing what I enjoy most: talking with customers about the benefits of a data center fabric and the QFabric architecture. That is the best part of my job, and I haven’t had this much fun in a long time.

It’s been particularly interesting to observe what a mind-expanding concept the QFabric architecture is for most customers. They quickly grasp the benefits of a flat, any-to-any fabric to interconnect their data center infrastructure, and as they virtualize their data centers, the combination of single-hop latency for all data flows and freedom from managing the physical locality of processing and data is compelling. The ability to manage the fabric as a single, logical entity while eliminating the need to run loop-prevention and multipath protocols such as Spanning Tree or TRILL is almost too good to be true.

I know it’s been a while since I last had a chance to share my thoughts on this site, but I couldn’t resist this one. Cisco recently announced its “End-to-End High-Performance Trading Fabric.” A couple of things about this announcement struck me as particularly interesting.

First, there doesn’t appear to be any real news here other than the fact that they released performance figures for their Nexus 3064 switch. It is quite unlike Cisco to publicize its performance data; low switch latency is not one of its strengths. Cisco has not published performance numbers for the Nexus 5548, and for good reason: the 5548’s performance on packet sizes of 512B and less is egregiously poor.
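To put that in perspective, here is a back-of-the-envelope sketch (my own illustration, not any vendor’s published data) of the raw wire time for frames on a 10GbE link. At trading-relevant packet sizes, serialization alone accounts for only tens to hundreds of nanoseconds, so any latency the switch adds on top is highly visible:

```python
# Back-of-the-envelope wire time for Ethernet frames on 10GbE.
# Illustrative only: real switch latency adds lookup, buffering,
# and SerDes overhead on top of these serialization floors.

LINE_RATE_BPS = 10e9  # 10GbE

def wire_time_ns(frame_bytes: int) -> float:
    """Nanoseconds to clock one frame onto the link, including
    the 8B preamble and 12B inter-frame gap."""
    return (frame_bytes + 8 + 12) * 8 / LINE_RATE_BPS * 1e9

for size in (64, 128, 256, 512, 1500):
    print(f"{size:>5}B frame: {wire_time_ns(size):7.1f} ns on the wire")
```

A 64B frame occupies the wire for well under 100 ns, so a switch that adds microseconds on small packets dominates the end-to-end latency budget.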

On March 30, Cisco announced an update to their data center “vision.” We are pleased to see that they have endorsed our view of the problem—that the legacy multi-tiered data center network architecture must change to enable a truly virtualized data center. This evolution will begin with the architecture, where hierarchical tree structures will give way to flat fabrics. We at Juniper have been promoting this vision for the last two years, and Cisco has now joined the chorus. Where we fundamentally differ is in our view of what constitutes a fabric and how one implements that fabric. Whereas Cisco is attempting to build a flatter, non-blocking network using their existing and evolving Nexus switches, we have focused on building a fabric by rethinking the concept of the switch.

"You
state, 'Thus the interfaces to servers and storage should track standards...',
but why stop at the interfaces to the servers and storage?” Robert wrote. “I
believe we should have standards throughout the entire network. I'm not saying
the other vendors are doing a better job at this, FabricPath isn't
standard-friendly either. At least with TRILL or SPB we have people working on
defining a standard, a standard which can be implemented by any vendor in order
to create an OPEN network fabric.”

This is an excellent question, and it speaks to the heart of the design of the QFabric™ switch. Now that we have gone public with the architecture, I can respond more fully. Robert’s question reflects his perception that the QFabric switch is a network and, like any network, he strongly believes it should be built out of standard interfaces. We at Juniper completely agree—networks should be open. However, this is where the confusion about the QFabric switch most often arises: the natural assumption that the QFabric switch is a network. It is not. The QFabric switch is, in fact, a switch.

Before the QFabric switch, if you wanted to scale a network beyond the capacity of a single switch, you were required to build a hierarchy of individual switches—the legacy tree structure we see in today’s data center networks. This approach, which adds cost, latency, and operational complexity, has been adopted by every other vendor attempting to build flatter, non-blocking networks in the data center.
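To make the latency cost of hierarchy concrete, here is a toy model (the per-hop figure is hypothetical, chosen only for illustration) comparing switch hops in a classic three-tier tree with a flat fabric that behaves like a single switch:

```python
# Toy hop-count model: three-tier tree (access -> aggregation -> core)
# versus a flat fabric that forwards any-to-any in a single hop.
# PER_HOP_NS is a hypothetical per-switch latency for illustration.

PER_HOP_NS = 2_000  # assume ~2 us per store-and-forward hop

def tree_hops(same_access: bool, same_agg: bool) -> int:
    """Switch hops between two servers in a three-tier tree."""
    if same_access:
        return 1   # both servers on the same access switch
    if same_agg:
        return 3   # access -> aggregation -> access
    return 5       # access -> agg -> core -> agg -> access

print("tree, best case :", tree_hops(True, True) * PER_HOP_NS, "ns")
print("tree, worst case:", tree_hops(False, False) * PER_HOP_NS, "ns")
print("flat fabric     :", 1 * PER_HOP_NS, "ns")
```

In the tree, latency depends on where the two servers happen to sit; in a flat fabric, every pair of ports sees the same single-hop latency, which is exactly what makes workload placement a non-issue.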

by andyingram on 02-28-2011 03:12 PM

After 30 years in this industry, I am starting to detect a pattern. Every so often, a new product comes to market that forever changes the industry—and changes what is possible. This is a rare event; yet in my career, I have had the privilege to personally participate in four such episodes.

The first time, I was a hardware product manager at HP when we rolled out the world’s first RISC microprocessor-based server. It was seven times faster than a DEC VAX 11/780 at a quarter of the cost. It was my job to convince customers to implement something called RISC in their data centers—ironically the most risk-averse part of the IT infrastructure. More than once, I had to explain that “really, it is spelled with a C, not a K.”

At the time, RISC was a revolutionary new architecture based on the concept that less is more. By simplifying the instruction set, it was possible to shrink more of the computer system onto a single piece of silicon. The result was dramatic: whereas the VAX implemented the system across 13 separate boards interconnected via a bus, we were able to implement the main parts of the server on a single board. Faster, cheaper, and more reliable, the child of Joel Birnbaum’s powerful vision was a remarkable achievement. It forever changed the way computers were built. It was the logical path forward.

At last the big day is here! Juniper has been talking about the power of a data center fabric for two years, since we first publicly disclosed the existence of Project Stratus. We’ve discussed why we were building a fabric and what it would mean to our customers. But we have intentionally been silent on the specifics of the architecture and the underlying technological magic.

That changes today. At simultaneous events around the world, we are sharing the secrets of QFabric, the fruit of the Stratus project, via webcast, and introducing the first component of the fabric, the QFX3500 Switch.

Roughly seven hundred years ago, a Franciscan friar named William of Ockham professed a concept that today is known as Occam’s Razor: when confronted with multiple alternatives, the simplest path is usually the correct one. That is the defining concept behind QFabric. Pradeep Sindhu, our founder and spiritual leader, looked at the data center network and saw a simpler, more correct path to solve this most difficult of network problems. We believe the path he chose can eventually transform every data center in the world.

It is funny how history repeats itself. The pattern is particularly noticeable when one has spent more than 30 years in an industry that changes at light speed; I’ve seen it happen multiple times.

Thirty-five years ago, when I was typing out my JCL punch cards for the mainframe, I needed to specify the parameters for managing the core memory used by my program. Today, operating systems fully manage the memory in a server. Essentially, the need to actively manage memory has been abstracted away by the O/S and is now transparent to users.

A decade ago, when it came to managing the data on disks, we were required to run a volume manager to specify how data was striped across multiple spindles. Then along came advanced filesystems such as ZFS, which now manage the volumes for us. Like memory management, volume management has become transparent.

The pace of change in the data center is brisk, to say the least. One of the most significant drivers of change is the broad adoption of server virtualization, which is designed to allow multiple applications to independently co-exist on the same physical server. There have been many different approaches to server virtualization in the past: “envelopes” in MVS (z/OS); Mainframe Domain Facility from Amdahl; Dynamic System Domains and Containers from Sun; and so forth. Today, the preferred solution is to use hypervisors to encapsulate applications and their operating system instances inside a virtual machine (VM).

It may seem like hypervisors such as VMware’s ESX have sprung out of nowhere. In fact, the hypervisor has been more than 45 years in the making and can be traced back to a 1964 R&D project at IBM’s Cambridge research facility running on a modified IBM System/360 Model 40 mainframe. Initially known as CP-40 and later as CP/CMS, it was eventually released as IBM’s first fully supported hypervisor in 1972 under the name VM/370. Although it remained in the shadow of MVS, VM/370 proved to be the O/S that customers would not let IBM kill off. Today, it is known as z/VM and runs on IBM’s z-series mainframes.

One of the keys to building a revolutionary new fabric architecture for the data center is not forcing unnecessary change on the rest of the data center. Data centers want to evolve gracefully. The goal is to unleash the promise of the modern data center without disrupting how the infrastructure connects to the fabric or how applications are implemented.

Thus the interfaces to servers and storage should track standards, so that existing servers and storage can connect while evolving to incorporate newer protocols such as Data Center Bridging (DCB), VEPA, and FCoE. It should also be possible to run existing applications on top of the fabric, taking advantage of its lower latency and greater agility to improve the user experience, without requiring any alterations to the applications themselves.

by andyingram on 02-07-2011 09:55 AM

I attended the Gartner Data Center conference in Las Vegas this past December. George Weiss and Andy Butler, two Gartner server analysts I have known and respected for some time, presented a concept they call “Fabric Computing.” They talked about a flexible fabric that could interconnect all the resources in the data center, all the way down to the processor cores, caches, and memory. The goal is to create a very flexible, elastic infrastructure on which a variety of applications can be easily provisioned and efficiently operated. This is the ideal infrastructure to enable cloud computing, as part of a private or public model.

Those who follow my posts know I have been talking about the power of fabrics in the data center for some time. Flat, any-to-any fabrics form the ideal network topology for the modern data center. I believe Gartner’s vision of fabric computing is on the verge of reality, albeit on a slightly modified basis. Rather than one fabric, there will be two fabrics, dictated by the speed of light and electrons.
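A rough sketch of why physics draws that line (the ~5 ns/m figure is the usual approximation for light in fiber, not a measured number):

```python
# Propagation delay is what separates the two fabrics: signals in
# optical fiber travel at roughly two-thirds of c, about 5 ns per
# meter. Approximate figures, for illustration only.

NS_PER_METER = 5.0  # one-way delay in fiber, ~5 ns/m

for meters in (2, 30, 100, 500):
    rtt_ns = 2 * meters * NS_PER_METER
    print(f"{meters:>4} m link: ~{rtt_ns:,.0f} ns round trip")
```

A fabric tying together cores, caches, and memory must live within a few meters, where round trips stay in the low nanoseconds; a data center fabric connecting servers and storage can span hundreds of meters, where microsecond round trips are acceptable. Hence two fabrics.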

by andyingram on 02-02-2011 09:50 AM

For years, the primary role of the data center network was to connect users to applications. Over time—even as these networks evolved from terminal networks (Bisync) to SNA, Token Ring, DECnet and, finally, to Ethernet—their role remained largely the same. And because there was typically a human at the far end of the network, a certain amount of latency could be tolerated.

With the evolution to SOA-based applications and shared storage, however, the data center network has taken on a new role, becoming an extension of the server and its memory hierarchy.
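To see where the network now sits in that hierarchy, consider the widely cited order-of-magnitude latency ladder below (approximate figures; the fabric-hop number is a hypothetical low-latency switch hop, not a measured product figure):

```python
# Approximate, widely cited order-of-magnitude latencies, showing
# where a network fabric hop lands in the server's memory hierarchy.

hierarchy_ns = {
    "L1 cache reference":            1,
    "DRAM reference":              100,
    "one fabric hop (hypoth.)":  2_000,
    "SSD random read":         100_000,
    "disk seek":            10_000_000,
}

for tier, ns in hierarchy_ns.items():
    print(f"{tier:>26}: {ns:>12,} ns")
```

A human at a terminal tolerates hundreds of milliseconds; a server waiting on remote storage or another service tier notices every microsecond, which is why the network’s latency budget has collapsed by several orders of magnitude.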