Towards an Open Data Center with an Interoperable Network
Part I – Why Standardize?

Standards have played a pivotal role throughout history. Just ask my 9th grade daughter. This story is going to sound like a digression, but bear with me…like most stories, there’s an important moral at the end that will save money for your IT organization.

This past week, my daughter learned that the economic unification of China between 247 and 221 BC was due, in part, to the standardization of weights and measures, including the length of ox cart axles (which facilitated transport of goods on the road systems). The history of technology contains many examples like this one, showing how standards are beneficial. They promote buying confidence by helping to future-proof purchases (no need to worry that your new ox cart won’t fit on the roads). They encourage competition and commoditization, which lowers capital expense (if all the ox carts are the same size, then I can buy the lowest priced cart that fits my needs). And they promote innovation and interoperability while avoiding confusion in the marketplace (does it matter if my ox cart is red or blue, as long as it fits on the road? Probably not. But if I can build a cart with the same axle width that can hold twice as much produce, then I’ve created meaningful innovation and differentiated myself from the other ox carts).

In the same way, a standardized approach to more modern commodities, like data center switches, makes sense too. Much has been written about how we can standardize on parts of the solution that have long development times, like silicon ASICs, and differentiate through those aspects which have faster turnaround, like software.

But what about the benefits of buying all your ox carts from the same place? Doesn’t this mean you can get lower prices from buying in bulk and having a good relationship with a single ox cart provider? Won’t you have to train your mechanics to fix only one kind of ox cart, using a common set of tools, and thus save training expenses? Surprisingly, the answer is no to all of these questions, according to a study conducted by Gartner Group, which examined a large number of customer deployments from major networking equipment vendors. For example, this study found that working with a single vendor actually costs a premium of up to 20% over multi-sourced environments, since that vendor isn’t constrained by competitive pricing. And since the tools to fix different types of ox carts (and network switches) are mostly common regardless of brand, there isn’t a need to increase staff or training.

In fact, according to this study, CIOs who don’t re-evaluate their single vendor networking choices aren’t living up to their fiduciary responsibilities. So check out this report for more details, and next time I’ll tell you how to distinguish between true industry standard networking implementations, and those who just want to take you for a ride in their ox cart.

Questions about how networking standards can save you money? Ask me through either my blog or Twitter feed.

Towards an Open Data Center with an Interoperable Network
Part III – What have we done so far?

In support of networking solutions for open, interoperable data center networks, IBM has taken the lead in creating a series of technical briefs known as ODIN (from my last blog post) which describe the issues facing these networks, and the standards that can be applied to address them (for the complete list of materials, see the IBM System Networking website). So what is ODIN, and why does it matter?

First, we should clearly state that ODIN is NOT a new industry standard, and does not compete with existing standards. Rather, the ODIN documents explain and interpret existing standards and describe best practices for incorporating them into a multi-vendor network. ODIN can be used to guide strategic planning discussions, help prepare a vendor-agnostic request for proposal (RFP), and clarify preferred technologies to optimize each aspect of the network design. Taking advantage of this material can promote buying confidence by letting administrators choose among multiple networking vendors and avoid incompatible offerings that lock them into a nonstandard architecture.

ODIN addresses best practices and interpretations of networking standards that are vital to efficient data center operations. These methods and standards facilitate the transition from discrete, special-purpose networks, each with its own management tool, to a converged, flattened network with a common set of management tools. They represent a proven approach that has been implemented by IBM and others within their own data centers using existing products, as well as through engagements with industry-leading clients worldwide. The first release of this material includes features such as:

Emerging standards for software-defined networking and OpenFlow, as well as network overlays such as Distributed Overlay Virtual Ethernet (DOVE)

Additional features and standards will be added over time. For now, ODIN has been endorsed by many leading industry companies and universities, and more are expected to participate in the future. Drop me a line if you’d like to know how your company or university can participate.

What do you think about an industry standard approach to networking? Give me your feedback below, or send a quick response to my Twitter feed.

Towards an Open Data Center with an Interoperable Network
Part II – What are we trying to fix?

Over the past several years, progressive data centers have undergone fundamental and profound architectural changes. Nowhere is this more apparent than in the data center network infrastructure. In this post, we’ll take a look at some of the problems with conventional networks, and next time we’ll introduce the fundamentals of an approach to deal with these issues.

Instead of under-utilized devices, multi-tier networks, and complex management environments, the modern data center is characterized by highly utilized servers running multiple VMs, flattened, lower latency networks, and automated, integrated management tools. Software defined network overlays are emerging which will greatly simplify the implementation of features such as dynamic workload provisioning, load balancing, redundant paths for high availability, and network reconfiguration. Cloud networks with multi-tenancy, resource pooling, and other features are becoming increasingly commonplace. Finally, to provide business continuity and backup/recovery of mission critical data, high bandwidth links between virtualized data center resources are extended across multiple data center locations.

Highly virtualized data centers offer greater resource utilization and lower costs. They can also simplify management if network issues such as latency, resilience, and multi-tenant support for public and private cloud environments are addressed. To realize the greatest benefits from virtualization, networks must be optimized to support high volumes of east-west traffic. This can be accomplished by flattening the network to a two-tier design, using Layer 2 domains to facilitate VM migration, and deploying network overlays to enable efficient virtual switches. While existing storage networks will likely continue in their present role for some time, the opportunity to converge networking and storage traffic is enabled by new lossless networking protocols that guarantee data frame delivery. Each of these exercises requires a non-trivial extension of the existing data network. Collectively, they present a daunting array of complex network infrastructure changes, with fundamental and far-reaching implications for the overall data center design.
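
To see why flattening matters for east-west traffic, here is a minimal sketch comparing worst-case switch hops in a classic three-tier design with a flattened two-tier (leaf-spine) design; the per-hop latency figure is an assumed placeholder for illustration, not a measurement from any particular product.

```python
# Rough comparison of east-west path length in a three-tier vs. two-tier fabric.
# The per-hop latency below is an illustrative assumption, not a measured value.

PER_HOP_LATENCY_US = 2.0  # assumed store-and-forward latency per switch, microseconds

def east_west_hops(tiers: int) -> int:
    """Worst-case switch hops between servers on different access/leaf switches.

    Three-tier: access -> aggregation -> core -> aggregation -> access = 5 switches.
    Two-tier:   leaf -> spine -> leaf = 3 switches.
    """
    return 2 * tiers - 1

for tiers, name in [(3, "three-tier"), (2, "two-tier leaf-spine")]:
    hops = east_west_hops(tiers)
    print(f"{name}: {hops} switch hops, ~{hops * PER_HOP_LATENCY_US:.0f} us of switch latency")
```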

The networking industry has responded to these changes with a bewildering array of standardized and proprietary solutions, making it difficult to determine the best course of action. IBM believes that the practical, cost-effective evolution of data networks must be based on open industry standards and end-to-end interoperability of multi-vendor solutions (for a few words on the importance of standards, see my last blog entry). That’s why IBM has recently published a series of technical briefs, endorsed by many industry-leading companies, that lay out a path towards an open data center with an interoperable network (which we’ll call by its acronym ODIN…after the ruler of Asgard in ancient Norse mythology. Coincidentally, his symbol, the valknut, looks a bit like a two-tier network topology).

Next time, we’ll give you an overview of the first series of ODIN documents and discuss why they’re important. Let me know the biggest problems in your network by responding to this post below, or, for shorter problems, on my Twitter feed.

If you haven’t guessed from the blatant pop culture reference in the title of my blog, I spent the first week of June at the IBM Edge storage conference (and I promise, if you keep reading, that I’ll refrain from making any puns on the Edge theme – despite the temptation to bring up a favorite Irish rock band hero). Anyway, it would hardly be appropriate to mention another band when Foreigner did such an awesome job rockin’ the conference. Who knew when I was growing up that the ’80s would produce the greatest rock ballads of all time?

Anyway, it’s been a great week at IBM Edge, hearing about all the latest advances in storage technology; in case you missed the talk on SVC Stretch Clusters as an example of the ODIN reference architecture, let me say a few words about it here. This will get a bit technical, but don’t worry…we’re not going to have a quiz at the end.

The problem we’re trying to solve is VM mobility over extended distance, and multi-site workload deployment across data centers. VM mobility not only improves availability of your applications, it’s a more efficient way to use limited storage resources. The most common reason for using this approach typically involves some form of business continuity or disaster avoidance/recovery solution, including planned events such as migrating from one data center to another or eliminating downtime due to scheduled maintenance. But given an increasingly global work force, there are other good reasons to explore VM mobility. Many clients are realizing that this approach provides load balancing and enhanced user performance across multiple time zones (the so-called “follow the sun” approach). Others are realizing that by moving workloads over distance, it’s possible to optimize the cost of power to run the data center; since the lowest cost electricity is available at night, this strategy is known as “follow the moon”.
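
To make the “follow the moon” idea concrete, here is a minimal, hypothetical scheduling sketch; the site names, UTC offsets, and tariff figures are invented for illustration and are not drawn from any IBM solution.

```python
# Toy illustration of "follow the moon" placement: pick the site where it is
# currently night (cheapest power). Site names, UTC offsets, and tariffs are
# made up for illustration only.
from datetime import datetime, timezone, timedelta

SITES = {
    "us-east":  {"utc_offset": -5, "night_rate": 0.06, "day_rate": 0.12},
    "eu-west":  {"utc_offset":  1, "night_rate": 0.08, "day_rate": 0.15},
    "ap-south": {"utc_offset":  8, "night_rate": 0.05, "day_rate": 0.11},
}

def current_rate(site: dict, now_utc: datetime) -> float:
    """Return the electricity rate in effect at this site right now."""
    local_hour = (now_utc + timedelta(hours=site["utc_offset"])).hour
    is_night = local_hour < 6 or local_hour >= 22
    return site["night_rate"] if is_night else site["day_rate"]

now = datetime.now(timezone.utc)
best = min(SITES, key=lambda name: current_rate(SITES[name], now))
print(f"Cheapest site to run the workload right now: {best}")
```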

IBM has announced a software bundle featuring SAN Volume Controller (SVC), which includes Stretch Clustering over long distance. This provides read/write access to storage volumes located far apart from each other, enabling data replication across multiple data centers. SVC works in concert with Tivoli Storage Productivity Center (TPC) to manage your storage, and integration with VMware products like vMotion and vCenter enables transparent migration of virtual machines and their corresponding data and applications.

Let’s consider two data centers separated by up to 300 km (supported in SVC 6.3), and interconnected by a traditional IP network such as the Internet or by dark optical fiber. This solution requires many of the features of an Open Datacenter with an Interoperable Network (ODIN), including lossless Ethernet fabrics, automated port profile migration, Layer 2 VLANs in each location, and an intersite Layer 2 VLAN supporting MPLS/VPLS (preferably with a 10G or 100G Ethernet line speed between sites, since the SAN infrastructure is likely running either 8G or 16G Fibre Channel). An SVC split cluster uses industry standard Fibre Channel links for both node-to-node communication and for host access to SVC nodes, so your production sites must be connected by Fibre Channel links or FC-IP.

Generally a business continuity solution will define one physical location as a failure domain, though this can vary depending on what you’re trying to protect against; a failure domain could also consist of a group of floors in a single building, or just different power domains in the same data center. In order for SVC to decide which storage nodes survive if we lose a failure domain, the solution uses a quorum disk (a management disk that contains a reserved area used exclusively for system management). At a minimum, you should have one active quorum disk on a separate power grid in one of your failure domains; up to three quorum disks can be configured with SVC, though only one is active at any given time. Metro mirroring is recommended for this type of solution; a maximum round trip delay of 80 ms is supported (note that routing is required, since the fabrics at each location are not merged).
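
Since the supported distances and delays above are set by physics as much as by the product, here is a quick back-of-the-envelope sketch (assuming roughly 5 µs/km of propagation delay in optical fiber and ignoring equipment and queuing delays) showing how site separation translates into round-trip delay against the 80 ms limit.

```python
# Back-of-the-envelope check of round-trip propagation delay between sites.
# Uses ~5 microseconds per km for light in optical fiber; equipment and
# queuing delays are ignored, so real deployments should add margin.

FIBER_DELAY_US_PER_KM = 5.0
MAX_RTT_MS = 80.0  # maximum supported round-trip delay cited above

def round_trip_ms(distance_km: float) -> float:
    """Round-trip propagation delay in milliseconds for a given site separation."""
    return 2 * distance_km * FIBER_DELAY_US_PER_KM / 1000.0

for km in (100, 300, 1000):
    rtt = round_trip_ms(km)
    ok = "within" if rtt <= MAX_RTT_MS else "exceeds"
    print(f"{km:>5} km: ~{rtt:.1f} ms round trip ({ok} the {MAX_RTT_MS:.0f} ms limit)")
```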

Connectivity between sites may take several forms. First, if the regular Internet provides sufficient quality of service (QoS) and meets your business objectives for recovery time, recovery point, etc., the IBM SVC uses industry standard protocols (FC-IP) in conjunction with a Brocade switch infrastructure to transport storage over distance. This is typically a low cost option, though you might require multiple circuits with load balancing (a so-called virtual trunk). Second, it’s possible to run a Brocade inter-switch link (ISL) between SVC nodes (with SVC 6.3.0 or higher). Brocade switches provide ISL options including consolidation of up to four ISLs at 4 Gbit/s each (creating a 16G trunk), or up to eight ISLs at 16 Gbit/s each (creating a 128G trunk). Buffer credit support for up to 250 km (nearly the SVC limit) is available. SVC supports SAN routing (including FC-IP links) for intercluster storage connections. Finally, note that you can connect multiple locations with optical fiber and use a variety of protocol-agnostic wavelength division multiplexing (WDM) products in this solution. This may provide better QoS or dedicated bandwidth for large applications. A 10G passive WDM option is available on some Brocade switches (with options such as in-flight compression and encryption), or a stand-alone WDM product can be employed (IBM has qualified many such solutions, including those from ODIN participants Adva, Ciena, and Huawei). Your local service provider may also offer a variety of managed service backup options using a combination of these features. Attachment of each SVC node to both local and remote SAN switches (without ISLs) is typically done in this case. Both the ISL and non-ISL approaches are known as split I/O groups.

IBM SVC storage management works in concert with vCenter through API plug-ins. This includes VADP (which provides data protection for snapshot backups at the VMware level rather than the LUN level, allowing you to concentrate on the value of the VM rather than the physical location of the associated data). Performance improvements can be achieved by offloading some functions to the storage hypervisor as well. The storage hypervisor includes a virtualization platform, controller, and management (TPC supports application-aware snapshots of your data through FlashCopy Manager). At the management level, IBM also allows the storage hypervisor components to be managed as plug-ins for vCenter. VM location can be managed through vCenter with Global Server Load Balancer (GSLB), which works in concert with a Brocade API plug-in. Further, vCenter is integrated with Brocade Application Resource Broker (ARB), which can report VM status back to a Brocade ADX switch. vCenter and GSLB manage both VM and IP profiles, performing intelligent load balancing to redirect traffic to the VM’s new location.

With this combination of ODIN best practices, IBM SVC, and Brocade SAN/FC-IP connectivity, your data can rest easy, wherever it happens to be (and so can you).

While I’ve been trying to enjoy the nice summer weather as much as anyone (even with teenagers, Disney World is simply awesome), the wheels of technology continue to push forward even during summer vacation. For example, IBM recently hosted the System x and PureSystems Technical University in San Francisco, California. With over 27 major sponsors and exhibitors ranging from Intel to QLogic, this was an event worth attending. As usual, my interest lies in all things related to data center networking, so I was pleased to see more content on IBM’s SAN Volume Controller (SVC) presented by one of our business partners, Brocade (although IBM invented SVC some time ago, Brocade was only recently qualified to support stretch clusters as part of this solution). Regular readers of my blog will recall that Brocade is among the endorsers of the Open Datacenter Interoperable Network (ODIN), and that the SVC Stretch Cluster solution was discussed previously when I presented at the IBM Edge storage conference in June. I’d like to mention a few additional features of storage networking using SVC that didn’t make it into my earlier blog, and try to segue from Disney World to World Wide Port Names (let me know how you think this works out).

If you missed this event and would like to follow along, the presentation from Brocade can be accessed at the IBM Tech University site; once you’ve created a login, just search for presentation evr51. You can also catch up on this solution through the IBM storage road show making its way around the country for the next month or so.

Multi-site storage deployments are useful for many applications. These include improved physical security, disaster avoidance/recovery, and increased uptime by moving workloads to different compute centers. The IBM SVC Stretch Cluster solution aligns your storage access needs with virtual machine mobility across extended distances. The actual distance depends on your latency requirements; since we can’t get around the speed of light limitations (yet), for typical applications IBM recommends 100 to 150 km or so, although the solution is qualified up to 300 km or more. SVC Stretch Clusters provide read/write access to storage volumes across multiple sites, and work in concert with Tivoli management products to ensure synchronous data replication. Also, SVC supports SAN routing with industry standard FC-IP links for intercluster communications and volume mirroring within split cluster groups. The underlying IP infrastructure complies with ODIN best practices, and includes Brocade offerings such as the MLXe switch to provide line rate 1, 10, and 100 Gbit/s Layer 2 connectivity based on MPLS and VPLS/VLL.

Digging down into the technology a bit further, Brocade supports the IBM 16 Gbit/s Fibre Channel adapters used in System X solutions; both single and dual port options are available, running over 1,000,000 IOPS per adapter. These adapters support features including SAO (application quality of service assignment), target rate limiting, boot over SAN, boot LUN discovery, NPIV, and switched N_ports. The IBM Flex systems include embedded offerings such as a 24 port or 48 port scalable SAN switch, also running 16 Gbit/s links with over 500,000 IOPS per port. The SAN switches used in SVC provide additional buffer credits to support long distance connectivity (half a dozen ports running up to 250 km without performance droop, with negligible droop up to 300 km or longer). To reduce the number of fibers required between sites and save cost when connecting two remote locations, you can consolidate up to four lower data rate links into a single inter-switch link at 16 Gbit/s, and then logically combine up to eight ISLs into a single high performance frame-based trunk.
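
As a rough illustration of why those extra buffer credits matter over distance, here is a simple first-principles sketch; the frame size and effective line rate are simplifying assumptions rather than vendor specifications, so treat the output as an order-of-magnitude estimate.

```python
# Rough estimate of buffer-to-buffer credits needed to keep a Fibre Channel
# link full over distance: enough frames must be "in flight" to cover the
# round-trip time. Frame size and effective line rate are simplifying
# assumptions, not vendor specifications.

FIBER_DELAY_US_PER_KM = 5.0
FRAME_BYTES = 2148            # maximum FC frame, roughly 2 KB
EFFECTIVE_RATE_GBPS = 14.025  # approximate payload rate of "16G" Fibre Channel

def credits_needed(distance_km: float) -> int:
    """Credits required to keep the link busy while acknowledgements return."""
    frame_time_us = FRAME_BYTES * 8 / (EFFECTIVE_RATE_GBPS * 1000)  # us per frame
    round_trip_us = 2 * distance_km * FIBER_DELAY_US_PER_KM
    return int(round_trip_us / frame_time_us) + 1

for km in (50, 250, 300):
    print(f"{km:>3} km at 16G FC: ~{credits_needed(km)} buffer credits to stay at line rate")
```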

When using the Brocade Fibre Channel adapters in a fabric, it’s possible to eliminate fabric reconfiguration when adding or replacing servers. You can also reduce the need to modify zones and LUN masking, since you can pre-provision fabric ports with virtual worldwide port names (WWPNs), along with your boot LUNs, fabric zones, and LUN masks. It’s easy to migrate virtual WWPNs within a switch, and map them to physical devices to help with asset management. Further, you can use diagnostic port features to non-intrusively verify that your ports, transceivers, and cables are in good working order, reducing fabric deployment and diagnostic times from days to a few hours or less (depending on the size of your fabric).
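
To show how pre-provisioned virtual WWPNs keep zoning stable across server swaps, here is a hypothetical sketch; the WWPN values, zone names, and data structures are invented for the example and are not a Brocade or IBM API.

```python
# Hypothetical illustration of pre-provisioning zoning against virtual WWPNs:
# zones are defined once against stable virtual names, and only the mapping
# from virtual WWPN to the physical adapter changes when a server is replaced.
# All names and values here are invented for the example.

zones = {
    "zone_boot_host01": ["50:06:0b:81:aa:00:00:01", "50:05:07:68:01:40:aa:01"],  # vWWPN, storage port
    "zone_boot_host02": ["50:06:0b:81:aa:00:00:02", "50:05:07:68:01:40:aa:01"],
}

# Mapping of virtual WWPNs to the physical adapters currently backing them.
virtual_to_physical = {
    "50:06:0b:81:aa:00:00:01": "adapter-serial-XYZ123",
    "50:06:0b:81:aa:00:00:02": "adapter-serial-XYZ456",
}

def replace_server(virtual_wwpn: str, new_adapter: str) -> None:
    """Swap the physical adapter behind a virtual WWPN; zoning stays untouched."""
    virtual_to_physical[virtual_wwpn] = new_adapter

replace_server("50:06:0b:81:aa:00:00:02", "adapter-serial-NEW789")
print(virtual_to_physical)
```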

If you’d prefer to connect multiple sites using wavelength multiplexing (such as the offerings from ODIN endorsers Adva, Ciena, or Huawei) you can run ISLs directly over a WDM network. I’ll have more to say about WDM solutions qualified by IBM in a future blog. For now, here’s a quick tip for configuring your Brocade switch fabric: if you want to run line rate 10 Gbit/s from the Brocade SAN switch directly over WDM, the first 8 ports on the FC16-32 or FC16-48 switches can be configured to operate at this data rate – you can save a slot in the DCX with this configuration. And remember that you can always logically partition the switches to isolate different traffic types, so you can connect storage resources in a PureFlex with a larger existing SAN that might be running your System z FICON traffic, and keep the two applications isolated from each other.

Your SVC Stretch Cluster solution complements the integrated compute power of PureFlex, and the two can co-exist in your data center. All the PureFlex resources are managed from one point with Flex System Manager (FSM), and the use of open industry standard protocols means that you’ll be getting the lowest possible hardware cost. Of course, you knew all that if you made it to PureSystems Technical University for your summer vacation, so you can get started saving money and improving storage performance right away. If you missed it, don’t worry…IBM will be offering more technical university events in the coming months, spread around the world, for not only PureSystems but many other brands as well. If you can attend, drop me a line & let me know how you liked it; I’ll keep everyone posted on the feedback through my blog & Twitter feed.

During InterOp 2012 in Las Vegas, IBM released a set of five technical briefs which lay out the path towards creating an Open Datacenter with an Interoperable Network (ODIN). This approach uses industry standards as the preferred means to address key issues in next generation data center networking. The response has been tremendous, and ODIN has been very well received across the industry. I've been posting a lot about this in my blog lately, but for your convenience here's the current list of everyone who's endorsed ODIN so far, in no particular order:

Juniper Networks noted in an endorsement from their Vice President of Global Alliances that there is an unprecedented array of technical challenges which ODIN will address, including cost effective scaling, highly virtualized data centers, and reliable delivery of data frames.

Brocade said that “using an approach like ODIN…facilitates the deployment of new technologies”.

Huawei said that “ODIN addresses best practices and interpretations of networking standards that are vital to efficient data center operation.” Also, Huawei Fellow Peter Ashwood-Smith shows an ODIN view of the future data center network in his webinar for Interop, entitled “How to prepare your infrastructure for the cloud using open standards.”

Extreme Networks said in their endorsement that “Having open, interoperable, and standard-based technologies can enhance (these) cost savings by allowing choice of best-of-breed technologies.”

NEC noted that software-defined networking (SDN) is part of ODIN, and has emerged as the preferred approach to solving Big Data and network bottleneck issues.

BigSwitch said in their blog “The Importance of Being Open” that “ODIN is a great example of how we need to maintain openness and interoperability in next generation networks”.

Adva Optical Networking, in their blog on "the missing piece in the cloud computing puzzle", talked about the role of ODIN in the wide area network, including dark fiber solutions, MPLS/GMPLS, and emerging trends using SDN to manage cloud computing and the WAN. They also cited recent SDN work with the OFELIA project in Europe as an example of ongoing work towards open standards in the WAN.

Ciena pointed out in a post from their CTO and Senior Vice-President that “the use of open standards has been one of the fundamental “change agents” in the networking industry”. These standards are “associated with encouraging creativity by enabling a diverse and rapidly expanding user group” and “generally support the most cost-effective scaling”. They called ODIN “a nearly ideal approach” and said that ODIN “is on its way to becoming industry best-practice for transforming data-centers”.

Marist College provided a university’s perspective, as their CIO noted that their support of ODIN was part of their broader efforts to ensure that the next generation of technology students are prepared for the challenges which await them. Marist also cited related work with their National Science Foundation funded lab for enterprise computing and their cloud computing computational resources.

Thanks to everyone for showing your support of open industry standards and the ODIN approach to data center networking. I’m honored and humbled by this strong show of support from so many industry leaders, and I’m very excited to be taking the first steps with all of you on this journey towards a more open, interoperable data center network. As we continue to develop more content for ODIN, both around new standards as well as deeper technical descriptions of reference architectures which implement the ODIN design principles, I’ll keep you posted on further activities with these and other companies.

Would you like to be next to endorse ODIN, and receive eternal fame and glory by being mentioned in my blog? Let me know where I can point to your endorsement, or drop me a line on my Twitter feed.

In addition to the many industry leading companies who have endorsed IBM's recently released technical briefs, describing an Open Datacenter with an Interoperable Network (ODIN), the first academic endorsement of ODIN has recently come from Marist College (Go Red Foxes!). In their endorsement, Marist notes that their support of ODIN was part of their broader efforts to ensure that the next generation of technology students are prepared for the challenges which await them. Marist also cited their related work with the National Science Foundation funded lab for enterprise computing, their network interoperability lab, and their cloud computing computational resources. Also commenting on ODIN as part of their Twitter feed were IBM Vice President Ross Mauri (a member of the Marist Board of Directors) and Marist Vice President and Chief Information Officer Bill Thirsk. I'm sure there will be opportunities for IBM and other ODIN supporters to work with colleges such as Marist on research and interoperability that will benefit the open design principles set forth in the ODIN documents.

I’m pleased to report that Juniper Networks has publicly endorsed the open data center interoperable network (ODIN) approach to designing data center networks. If you've been following this blog, then you know that on May 8, IBM released a set of technical briefs describing ODIN during the InterOp conference in Las Vegas. This approach to using industry standards as the preferred means to designing data center networks has been supported by Juniper, as discussed in this blog post from Liz King, Vice-President of Global Alliances. Many thanks to Juniper for their support of open networking standards; I’m sure we’ll have more to say about how these solutions should be designed in the near future.

In Part I of this post, we looked more closely at the networking under the covers of an IBM PureSystems platform. We found that a reasonably configured PureSystems solution could comfortably support a whole lot of VMs in the space of only a few racks (no, I’m not going to repeat the numbers here; check out my last post for more details). I also promised to explain why networking would drive the next big innovations on this platform.

This dense packing of compute power is exactly why the network will be so important to the future of this system. Before PureSystems, large amounts of servers and storage would have to be spread out across the data center; network latency and physical distance would ultimately limit performance. Now that multi-core processors, advanced storage technology, and other features have made it possible to fit this much processing power into a few racks, we can take full advantage of Ethernet running up to 40 Gbit/s and Fibre Channel running up to 16 Gbit/s to realize very high bandwidth and low latency over short distances.

Now, imagine what happens in a few years as these trends continue. When the network can run 100 Gbit/second or faster, it becomes the highest speed interconnect on the platform. We’ll be able to interconnect more processors (each of which will also be more powerful than they are today and will host more VMs), with negligible performance impact due to the network. Multi-processor systems on the order of several thousand physical processors could become economically viable for many users, not just the most advanced applications.
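
To put some rough numbers on what faster links mean for moving VMs and their data, here is a small sketch; the 32 GB memory image size and the 80% protocol efficiency are assumptions chosen only to illustrate the scaling, not figures from any IBM benchmark.

```python
# Simple illustration of how link speed changes the time to move a VM's
# memory image between hosts. The image size and protocol efficiency are
# assumptions for the example.

IMAGE_GB = 32
EFFICIENCY = 0.8  # assumed fraction of line rate usable after protocol overhead

for gbps in (10, 40, 100):
    seconds = IMAGE_GB * 8 / (gbps * EFFICIENCY)
    print(f"{gbps:>3} Gbit/s link: ~{seconds:.1f} s to copy a {IMAGE_GB} GB memory image")
```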

At the same time, storage is integral to PureSystems, not a separate add-on from another company. In the future, server to storage access technologies previously reserved only for high performance computing can begin to trickle down into more commercial integrated platforms. And future integrated systems, enabled by the network, could then reach levels of parallelism and performance far beyond what we know today; think of how video games have brought the equivalent of a graphics supercomputer into your living room at very low cost. With latency between servers and storage becoming a non-issue, these systems would be ideal for processing the type of gigantic data sets which are showing up in financial, health care, retail, transportation, and a host of other fields. All of this stems from the PureSystems being rolled out now, so you get not only the immediate benefits of this platform but a path forward into even more powerful computing applications as time goes on.

Of course, when this happens everyone will marvel at the incredible advances in multi-core processors, multi-thread software, and other fields. But let’s not forget the standards-based, high bandwidth, physical and virtual networks under the covers of these systems that will quietly be doing their part to revolutionize computing, yet again.

What do you think about the future of networks, or video games for that matter? Share your comments below, or respond to my Twitter feed.

Every few years, IBM announces some major innovation in the way computers are designed, used or deployed. You might remember the transition from CMOS to BiCMOS mainframes, copper-doped ASICs, or open source Linux for the enterprise. Each of these represented a major shift in the way we think about and use computational power to accomplish a huge variety of tasks. Recently, IBM announced its latest innovation, the PureSystems platform of integrated servers, storage and networking.

By now, you’ve probably seen at least some information about how PureSystems accelerates cloud deployments, simplifies the data center, and consolidates computing resources. But, I’m a networking guy, so my view of the world is a bit different. Much like the famous view of the world as seen from New York City, when I look at PureSystems, I see a lot of advanced servers, storage, and software hanging off the true technological marvel – the integrated data center network.

At the risk of appearing a bit single-minded, I’d like to talk about one of the unsung heroes of the PureSystems revolution, namely the networking technology that ties PureSystems together. And then I’d like to point out that not only is the network a key part of PureSystems, it’s got the potential to drive the next series of big innovations on this platform, and maybe even across the computing industry.

Let’s start with a quick review of the PureSystems network.

First, it’s designed for flexibility; you can choose a combination of networking protocols, including Fibre Channel (up to 16 Gbit/second), Ethernet (10 to 40 Gbit/second), or InfiniBand (QDR and FDR data rates). You can plug up to four switches into a PureSystems chassis, and link multiple chassis together using the 10, 40 or 10/40GbE IBM System Networking RackSwitch top of rack (TOR) switches. This lets you scale PureSystems from a single chassis, up through multi-rack systems (where a rack can hold up to 4 chassis).

PureSystems also supports a virtual Ethernet switch running in the hypervisor, the IBM Distributed Switch 5000v. IBM’s virtual switches, blade switches, and TORs all support industry standards including switch-resident IBM VMready with IEEE 802.1Qbg to enable VM migration (either between VMs on the same physical server, or across multiple physical servers).

And, this platform makes really good use of server virtualization; each chassis can hold up to 14 half-wide blade servers or 7 full-wide blade servers, running your choice of workloads on Linux, Windows, or AIX. Yes, I said AIX…you can plug either IBM Power microprocessor blades or Intel x86 blades into a PureSystems chassis. With around 160 servers in a four-rack system, even a moderately virtualized system can fit over 1,600 VMs quite comfortably. That’s a tremendous amount of compute power in a relatively small package, and it comes pre-integrated with a single system manager that lets you manage all the physical and virtual resources in the system (without any third party tools).
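
As a quick sanity check of that capacity claim, here is a small sketch of the arithmetic; the 10-VMs-per-server consolidation ratio is an assumption used for illustration, and the quoted 160-server figure reflects a mixed configuration rather than the theoretical maximum of half-wide blades.

```python
# Quick sanity check of the capacity math above. The VMs-per-server ratio is
# an assumed, moderate consolidation figure, not an IBM specification.

CHASSIS_PER_RACK = 4
HALF_WIDE_BLADES_PER_CHASSIS = 14
RACKS = 4
VMS_PER_SERVER = 10  # assumed moderate virtualization ratio

max_servers = RACKS * CHASSIS_PER_RACK * HALF_WIDE_BLADES_PER_CHASSIS
quoted_servers = 160  # figure cited in the text for a mixed configuration

print(f"Theoretical maximum: {max_servers} half-wide blades in {RACKS} racks")
print(f"Quoted configuration: {quoted_servers} servers -> "
      f"{quoted_servers * VMS_PER_SERVER} VMs at {VMS_PER_SERVER} VMs per server")
```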

Now that we know a bit about the networking technology inside PureSystems, why should we get excited about it? Tune in to Part II of my blog to find out! Meanwhile, let me know what you think about the importance of networking for integrated systems by commenting on my blog, or through my Twitter feed.

During a webinar presented at InterOp 2012 describing how to prepare your infrastructure for the cloud using open standards, Huawei indicated their support for the Open Datacenter Interoperable Network (ODIN) approach. Huawei joins a growing number of companies who recognize that the best path forward for next generation data centers lies in the use of open industry standards and protocols. You can read more about the importance of open standards and ODIN in my earlier blog posts or through my Twitter feed. Stay tuned for the latest news from InterOp and the world of data center networking!

As noted in a recent post on this blog, Huawei had included a mention of the Open Datacenter Interoperable Network (ODIN) in their InterOp Webinar on open standards for cloud networking. In addition, Huawei has now posted a more detailed endorsement of ODIN on their blog site. According to this site, "ODIN addresses best practices and interpretations of networking standards that are vital to efficient data center operation". For those of you who haven't reviewed the ODIN materials yet, they include a description of the transformation taking place in modern data center networks and how to best address these issues using open industry standards. Keep watching this space for more news on ODIN and other data center networking issues.

The list of companies endorsing IBM's recently announced Open Datacenter with an Interoperable Network (ODIN) continues to grow. Ciena is the most recent company to endorse ODIN, as noted in their blog post from their CTO and Senior Vice-President, Products and Technology, Steve Alexander. In this post, Ciena says that ODIN "looks to be a nearly ideal approach to allow the connect, compute, and store resources to be virtualized and operationally united for simplicity and scale". In fact, the use of industry standards to enable more tightly integrated solutions has been recently demonstrated in IBM's PureSystems offerings, which were announced on April 11; you can read more about PureSystems in my earlier blog posts. I'm very pleased that Ciena has endorsed the ODIN approach, and I'm sure we'll see more examples of this design approach in the coming months. Remember, let me know what you think about ODIN by commenting on this blog, or on my Twitter feed, and keep watching this site for the latest data center networking news.

I’m pleased to report that BTI has become the latest company to publicly endorse the Open Datacenter Interoperable Network (ODIN) approach to designing data center networks. As regular readers of my blog know, IBM has released a set of technical briefs describing ODIN, which provides an approach to using open industry standards to create next generation data center networks. I’ve written, podcasted, and been interviewed many times about ODIN, all of which is linked from my blog. This approach to using industry standards as the preferred means to designing data center networks has been endorsed in this post from Chandra Pandey, Vice-President of Platform Solutions at BTI. Many thanks for this support of open networking standards; I’m sure we’ll have more to say about how to create these solutions with IBM and BTI technology in the near future.

I’m pleased to report that Brocade has publicly endorsed the open data center interoperable network (ODIN) approach to designing data center networks. On May 8, IBM released a set of technical briefs describing ODIN during the InterOp conference in Las Vegas. This approach to using industry standards as the preferred means to designing data center networks is discussed further in Brocade's blog. Many thanks to Brocade for their support of open networking standards; I’m sure we’ll have more to say about how to build these solutions in the near future.