Consider these impressive stats shared by Cisco CTO and CSO Padmasree Warrior in her keynote last week at Cisco Live London:

50 billion “things”, including trees, vehicles, traffic signals and other devices, will be connected by 2020 (vs. roughly 1,000 connected devices in 1984)

More information was created in 2012 than in the previous 5,000 years combined!

Two-thirds of the world’s mobile data will be video by 2015.

These statistics may seem surprising, but they cannot be ignored by CIOs and others charged with managing IT infrastructure.

Impact on Enterprise and SP Infrastructure strategies

Further, these trends are not siloed and are certainly not happening in a vacuum. For example, Bring-Your-Own-Device (BYOD) and the exponential growth of video endpoints may be happening at the “access” layer, but they cause a ripple effect upstream in data center and cloud environments. Coupled with new application requirements, they are pushing CIOs across larger Enterprises and Service Providers to rapidly evolve their IT infrastructure strategies.

It is much the same with cloud infrastructure strategies. Even as Enterprises have aggressively pursued the journey to Private Cloud, their preference for hybrid clouds, where they can enjoy the “best of both worlds” of public and private, has grown as well. However, the move to hybrid clouds has been somewhat hampered by the challenges outlined in my previous blog: Lowering barriers to hybrid cloud adoption – challenges and opportunities.

The Fabric approach

To address many of these issues, Cisco has long advocated the concept of a holistic data center fabric, the heart of its Unified Data Center philosophy. Its fundamental premise, breaking down disparate technology silos across network, compute and storage, is what makes it so compelling. The Cisco Unified Fabric serves as the glue that binds these together.

As we continue to evolve this fabric, we’re making three industry-leading announcements today that help make the fabric more scalable, extensible and open.

Let’s talk about SCALING the fabric first:

Industry’s highest-density L2/L3 10G/40G switch: Building upon our previous announcement of redefining fabric scale, we are introducing the new Nexus 6000 family in two form factors, the 6004 and the 6001. We expect these switches to be positioned to meet increasing bandwidth demands, for spine/leaf architectures, and for 40G aggregation in fixed switching deployments. We expect the Nexus 6000 to complement Nexus 5500 and Nexus 7000 deployments; it should not be confused with the Catalyst 6500 or with the Nexus fabric interconnects.

The Nexus 6000 is built on Cisco custom silicon and delivers 1-microsecond port-to-port latency. It carries forward some of the architectural successes of the Nexus 3548, the industry’s lowest-latency switch, which we introduced last year. As in the past, Cisco’s ASICs have differentiated themselves from the lowest-common-denominator approach of merchant silicon by delivering both better performance and greater value through tight integration with the software stack.

The Nexus 5500, meanwhile, gains 40G expansion modules and is accompanied by a brand-new Fabric Extender, the 2248PQ, which also comes with 40G uplinks. Together with the 10G server interfaces, these help pair 10G server access with 40G server aggregation.
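That 10G-access/40G-aggregation pairing always implies an oversubscription trade-off. As a back-of-the-envelope illustration (assuming a leaf device with 48 10G host-facing ports and 4 40G uplinks, the port counts generally cited for the 2248PQ; substitute your own numbers), the ratio works out as follows:

```python
# Rough oversubscription math for a hypothetical leaf/FEX layer.
# Assumed port counts: 48 x 10G host-facing ports, 4 x 40G uplinks.
host_ports = 48          # 10G server-facing interfaces
host_speed_gbps = 10
uplinks = 4              # 40G uplinks toward the aggregation/spine layer
uplink_speed_gbps = 40

downstream = host_ports * host_speed_gbps   # 480 Gbps of access bandwidth
upstream = uplinks * uplink_speed_gbps      # 160 Gbps toward the fabric
oversubscription = downstream / upstream    # 3.0, i.e. a 3:1 ratio

print(f"{downstream}G down / {upstream}G up = {oversubscription:.0f}:1")
```

A 3:1 ratio is a common design point for server access; latency-sensitive or storage-heavy pods often aim lower by populating more uplinks.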

As a first step in making the physical Nexus switches services-ready in the data center, a new Network Analysis Module (NAM) on the Nexus 7000 brings performance analytics, application visibility and network intelligence. This is the first services module, with others to follow, and it brings parity with the new vNAM functionality.

Next, EXTENSIBILITY:

Industry’s simplest hybrid cloud solution: Over the last few years, we have introduced several technologies that build fabric extensibility. The Fabric Extender (FEX) solution is a popular way of extending the fabric to the server/VM, as are Data Center Interconnect technologies such as Overlay Transport Virtualization (OTV) and Locator/ID Separation Protocol (LISP), among others. Each has its benefits.

The Nexus 1000V InterCloud takes this to the next level by allowing the data center fabric to be extended into provider cloud environments in a secure, transparent manner, while preserving L4-7 services and policies. It is meant to lower the barriers to hybrid cloud deployment and is designed as a multi-hypervisor, multi-cloud solution. It is expected to ship in the summer timeframe, in 1H CY13.

This video does a good job of explaining the concepts of the Intercloud solution:

Cloud computing has evolved from the hype cycle of the last few years to become an integral part of Enterprise IT strategy as well as a fundamental service provider offering. The types of cloud constructs have evolved as well: public, private, hybrid and community clouds are the basic variants, with more sophisticated application-specific cloud offerings continuing to evolve.

While the journey to the private cloud continues and is relatively mature, at least in the more developed countries, and public cloud service offerings are becoming relatively ubiquitous, hybrid cloud offerings have seen comparatively modest uptake.

This is not because hybrid clouds lack allure or have few use-cases. Quite the opposite: there are several use-cases, all of which apply to real-world IT deployments today:

Workload migration: Seamless migration of workloads from the data center or private cloud to the public cloud for better capacity utilization.

Dev/QA operations: Testing new applications can require additional temporary capacity; an extensible hybrid cloud is quite appealing here, instead of investing in on-premises infrastructure.

Cloud-bursting: To handle the needs of bursty applications, temporary capacity allocation in public cloud environments can be extremely cost-effective, providing the convenience of “infrastructure-on-demand”.

If the use-cases are real and the benefits so apparent, why have Enterprises not gone all out to deploy more robust hybrid clouds? Why have only a few Enterprises and selected applications followed this model?

I can think of a few reasons. To make it real, let’s consider the use-case of migrating a virtual machine (VM) from the private cloud to a provider cloud as an example that illustrates some of the challenges.

