Every few years, IBM announces some major innovation in the way computers are designed, used or deployed. You might remember the transition from bipolar to CMOS mainframes, copper interconnects in ASICs, or open source Linux for the enterprise. Each of these represented a major shift in the way we think about and use computational power to accomplish a huge variety of tasks. Recently, IBM announced its latest innovation, the PureSystems platform of integrated servers, storage and networking.

By now, you’ve probably seen at least some information about how PureSystems accelerates cloud deployments, simplifies the data center, and consolidates computing resources. But I’m a networking guy, so my view of the world is a bit different. Much like the famous view of the world as seen from New York City, when I look at PureSystems, I see a lot of advanced servers, storage, and software hanging off the true technological marvel: the integrated data center network.

At the risk of appearing a bit single-minded, I’d like to talk about one of the unsung heroes of the PureSystems revolution, namely the networking technology that ties PureSystems together. And then I’d like to point out that not only is the network a key part of PureSystems, it’s got the potential to drive the next series of big innovations on this platform, and maybe even across the computing industry.

Let’s start with a quick review of the PureSystems network.

First, it’s designed for flexibility; you can choose a combination of networking protocols, including Fibre Channel (up to 16 Gbit/second), Ethernet (10 to 40 Gbit/second), or InfiniBand (QDR and FDR data rates). You can plug up to four switches into a PureSystems chassis, and link multiple chassis together using the 10 GbE, 40 GbE, or converged 10/40 GbE IBM System Networking RackSwitch top-of-rack (TOR) switches. This lets you scale PureSystems from a single chassis up through multi-rack systems (where a rack can hold up to four chassis).

PureSystems also supports a virtual Ethernet switch running in the hypervisor, the IBM Distributed Switch 5000v. IBM’s virtual switches, blade switches, and TORs all support industry standards, including switch-resident IBM VMready with IEEE 802.1Qbg, to enable migration of VMs either within the same physical server or across multiple physical servers.

And this platform makes really good use of server virtualization; each chassis can hold up to 14 half-wide blade servers or 7 full-wide blade servers, running your choice of workloads on Linux, Windows, or AIX. Yes, I said AIX…you can plug either IBM Power microprocessor blades or Intel x86 blades into a PureSystems chassis. With around 160 servers in a four-rack system, even a moderately virtualized system can fit over 1,600 VMs quite comfortably. That’s a tremendous amount of compute power in a relatively small package, and it comes pre-integrated with a single system manager that lets you manage all the physical and virtual resources in the system (without any third party tools).
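The arithmetic behind that VM count is simple enough to sketch. Here’s a back-of-the-envelope calculation using the figures above; the consolidation ratio of 10 VMs per server is my own illustrative assumption for a "moderately virtualized" system, not an official sizing figure:

```python
# Rough VM capacity for a multi-rack PureSystems setup.
# Assumption (illustrative, not an official sizing figure):
# a moderate consolidation ratio of 10 VMs per physical server.

def vm_capacity(servers: int, vms_per_server: int) -> int:
    """Total VMs a given number of physical servers can host."""
    return servers * vms_per_server

# ~4 racks x up to 4 chassis x up to 14 half-wide blades, allowing
# for bays used by switches and other components -> around 160 servers
servers = 160
total_vms = vm_capacity(servers, 10)
print(total_vms)  # 1600
```

Even at a conservative ratio, the numbers climb quickly, which is the point: density is what makes the network the critical shared resource.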

Now that we know a bit about the networking technology inside PureSystems, why should we get excited about it? Tune in to Part II of my blog to find out! Meanwhile, let me know what you think about the importance of networking for integrated systems by commenting on my blog, or through my Twitter feed.

In Part I of this post, we looked more closely at the networking under the covers of an IBM PureSystems platform. We found that a reasonably configured PureSystems solution could comfortably support a whole lot of VMs in the space of only a few racks (no, I’m not going to repeat the numbers here; check out my last post for more details). I also promised to explain why networking would drive the next big innovations on this platform.

This dense packing of compute power is exactly why the network will be so important to the future of this system. Before PureSystems, large numbers of servers and storage devices had to be spread out across the data center; network latency and physical distance would ultimately limit performance. Now that multi-core processors, advanced storage technology, and other features have made it possible to fit this much processing power into a few racks, we can take full advantage of Ethernet running up to 40 Gbit/s and Fibre Channel running up to 16 Gbit/s to realize very high bandwidth and low latency over short distances.
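To see why short distances matter so much, compare propagation delay to the time it takes to serialize a frame onto the wire. This sketch assumes the usual rule of thumb of roughly 5 microseconds per kilometer for light in fiber, and treats the nominal 40 Gbit/s as usable bandwidth:

```python
# Propagation delay vs. serialization time at short distances.
# Assumption: ~5 us per km one-way propagation in optical fiber.

FIBER_US_PER_KM = 5.0

def propagation_us(distance_km: float) -> float:
    """One-way fiber propagation delay in microseconds."""
    return distance_km * FIBER_US_PER_KM

def serialization_us(frame_bytes: int, gbps: float) -> float:
    """Time to put a frame on the wire, in microseconds."""
    return frame_bytes * 8 / (gbps * 1e3)

# In-rack link (~5 m) vs. across a large data center (~100 m):
print(propagation_us(0.005))         # 0.025 us
print(propagation_us(0.1))           # 0.5 us
# A 9 KB jumbo frame on a 40 Gbit/s link:
print(serialization_us(9000, 40.0))  # 1.8 us
```

Within a few racks, the fiber delay is a rounding error next to the serialization time, so the full 40 Gbit/s is effectively usable; stretch the same link across a campus and the balance starts to shift.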

Now, imagine what happens in a few years as these trends continue. When the network can run 100 Gbit/second or faster, it becomes the highest speed interconnect on the platform. We’ll be able to interconnect more processors (each more powerful than today’s, and each hosting more VMs) with negligible performance impact from the network. Multi-processor systems on the order of several thousand physical processors could become economically viable for many users, not just the most advanced applications.

At the same time, storage is integral to PureSystems, not a separate add-on from another company. In the future, server-to-storage access technologies previously reserved for high performance computing can begin to trickle down into more commercial integrated platforms. And future integrated systems, enabled by the network, could then reach levels of parallelism and performance far beyond what we know today; think of how video games have brought the equivalent of a graphics supercomputer into your living room at very low cost. With latency between servers and storage becoming a non-issue, these systems would be ideal for processing the types of gigantic data sets that are showing up in financial, health care, retail, transportation, and a host of other fields. All of this stems from the PureSystems platform being rolled out now, so you get not only the immediate benefits of this platform but also a path forward into even more powerful computing applications as time goes on.

Of course, when this happens everyone will marvel at the incredible advances in multi-core processors, multi-thread software, and other fields. But let’s not forget the standards-based, high bandwidth, physical and virtual networks under the covers of these systems that will quietly be doing their part to revolutionize computing, yet again.

What do you think about the future of networks, or of video games for that matter? Share your comments below, or respond to my Twitter feed.

While I’ve been trying to enjoy the nice summer weather as much as anyone (even with teenagers, Disney World is simply awesome), the wheels of technology continue to push forward even during summer vacation. For example, IBM recently hosted the System x and PureSystems Technical University in San Francisco, California. With over 27 major sponsors and exhibitors ranging from Intel to QLogic, this was an event worth attending. As usual, my interest lies in all things related to data center networking, so I was pleased to see more content on IBM’s SAN Volume Controller (SVC) presented by one of our business partners, Brocade (although IBM introduced SVC some time ago, Brocade was only recently qualified to support stretch clusters as part of this solution). Regular readers of my blog will recall that Brocade is among the endorsers of the Open Datacenter Interoperable Network (ODIN), and that the SVC Stretch Cluster solution was discussed previously when I presented at the IBM Storage Edge conference in June. I’d like to mention a few additional features of storage networking using SVC that didn’t make it into my earlier blog, and try to segue from Disney World to World Wide Port Names (let me know how you think this works out).

If you missed this event and would like to follow along, the presentation from Brocade can be accessed at the IBM Tech University site; once you’ve created a login, just search for presentation evr51. You can also catch up on this solution through the IBM storage road show making its way around the country for the next month or so.

Multi-site storage deployments are useful for many applications. These include improved physical security, disaster avoidance/recovery, and increased uptime by moving workloads to different compute centers. The IBM SVC Stretch Cluster solution aligns your storage access needs with virtual machine mobility across extended distances. The actual distance depends on your latency requirements; since we can’t get around speed-of-light limitations (yet), for typical applications IBM recommends 100 to 150 km or so, although the solution is qualified up to 300 km or more. SVC Stretch Clusters provide read/write access to storage volumes across multiple sites, and work in concert with Tivoli management products to ensure synchronous data replication. Also, SVC supports SAN routing with industry standard FC-IP links for intercluster communications and volume mirroring within split cluster groups. The underlying IP infrastructure complies with ODIN best practices, and includes Brocade offerings such as the MLXe switch to provide line rate 1, 10, and 100 Gbit/s Layer 2 connectivity based on MPLS and VPLS/VLL.
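Those distance recommendations fall directly out of the physics: a synchronous write can’t complete until the data has made a round trip to the remote site. A minimal sketch, assuming the common figure of ~5 microseconds per kilometer one-way in fiber:

```python
# Minimum latency added by synchronous replication over distance.
# Assumption: ~5 us per km one-way propagation in optical fiber;
# real deployments add switch, protocol, and storage overheads on top.

US_PER_KM = 5.0

def sync_write_rtt_us(distance_km: float) -> float:
    """Minimum added latency (us) for one synchronous round trip."""
    return 2 * distance_km * US_PER_KM

print(sync_write_rtt_us(100))  # 1000.0 us -> at least 1 ms per write
print(sync_write_rtt_us(300))  # 3000.0 us -> at least 3 ms per write
```

At 100 to 150 km the added millisecond or so is tolerable for most applications; by 300 km the floor has tripled, which is why the recommended and qualified distances differ.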

Digging down into the technology a bit further, Brocade supports the IBM 16 Gbit/s Fibre Channel adapters used in System x solutions; both single and dual port options are available, running over 1,000,000 IOPS per adapter. These adapters support features including application-level quality of service assignment, target rate limiting, boot over SAN, boot LUN discovery, NPIV, and switched N_ports. The IBM Flex systems include embedded offerings such as a 24 port or 48 port scalable SAN switch, also running 16 Gbit/s links with over 500,000 IOPS per port. The SAN switches used with SVC provide additional buffer credits to support long distance connectivity (half a dozen ports running up to 250 km without performance droop, with negligible droop up to 300 km or longer). To reduce the number of fibers required between sites and save cost when connecting two remote locations, you can consolidate up to four lower data rate links into a single inter-switch link (ISL) at 16 Gbit/s, and then logically combine up to eight ISLs into a single high performance frame-based trunk.
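Why do long links need extra buffer credits? A Fibre Channel port can only keep transmitting while it holds buffer-to-buffer credits, and each credit is tied up for a full round trip. This estimate uses rules of thumb (not Brocade’s exact sizing math): ~5 us/km one-way in fiber, full-size 2112-byte FC frames, and roughly 14 Gbit/s of usable line rate on a nominal 16 Gbit/s link:

```python
# Estimate buffer-to-buffer credits needed to keep a long FC link busy.
# Assumptions (rules of thumb, not vendor sizing figures):
#   ~5 us/km one-way fiber delay, 2112-byte full-size FC frames,
#   ~14 Gbit/s usable line rate on a nominal "16 Gbit/s" link.
import math

def credits_needed(distance_km: float, line_gbps: float = 14.0,
                   frame_bytes: int = 2112) -> int:
    frame_time_us = frame_bytes * 8 / (line_gbps * 1e3)  # per-frame wire time
    round_trip_us = 2 * distance_km * 5.0                # credit return delay
    return math.ceil(round_trip_us / frame_time_us)

print(credits_needed(250))  # on the order of 2,000 credits at 250 km
```

The takeaway is that credits scale linearly with both distance and link speed, which is why extended-distance ports on these switches carry much deeper credit pools than ordinary ports.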

When using the Brocade Fibre Channel adapters in a fabric, it’s possible to eliminate fabric reconfiguration when adding or replacing servers. You can also reduce the need to modify zones and LUN masking, since you can pre-provision fabric ports with virtual worldwide port names (WWPNs) and set up your boot LUN zones, fabric zones, and LUN masks in advance. It’s easy to migrate virtual WWPNs within a switch, and map them to physical devices to help with asset management. Further, you can use diagnostic port features to non-intrusively verify that your ports, transceivers, and cables are in good working order, reducing the fabric deployment and diagnostic times from days to a few hours or less (depending on the size of your fabric).
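The reason virtual WWPNs eliminate that rework is a simple level of indirection: zones and LUN masks reference stable virtual names, and only the virtual-to-physical mapping changes when hardware is swapped. A minimal sketch of the idea (all WWPNs and names here are hypothetical, purely to illustrate the data relationships):

```python
# Indirection behind virtual WWPN pre-provisioning (illustrative only).
# Zones reference stable virtual WWPNs, never physical adapters.
zones = {
    "boot_zone_blade01": [
        "50:00:00:00:00:00:00:01",  # hypothetical virtual WWPN
        "20:00:00:11:22:33:44:55",  # hypothetical storage port WWPN
    ],
}

# Only this mapping changes when a server or adapter is replaced:
vwwpn_to_physical = {"50:00:00:00:00:00:00:01": "adapter-serial-A"}

# Swap the physical adapter; zoning and LUN masking stay untouched.
vwwpn_to_physical["50:00:00:00:00:00:00:01"] = "adapter-serial-B"

print(zones["boot_zone_blade01"][0])  # zone entry is unchanged
```

Replacing a blade becomes a one-line mapping update instead of a zoning change, which is exactly the kind of operation you want when fabric edits require change-control windows.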

If you’d prefer to connect multiple sites using wavelength multiplexing (such as the offerings from ODIN endorsers Adva, Ciena, or Huawei), you can run ISLs directly over a WDM network. I’ll have more to say about WDM solutions qualified by IBM in a future blog. For now, here’s a quick tip for configuring your Brocade switch fabric: if you want to run line rate 10 Gbit/s from the Brocade SAN switch directly over WDM, the first 8 ports on the FC16-32 or FC16-48 switch blades can be configured to operate at this data rate, which saves a slot in the DCX. And remember that you can always logically partition the switches to isolate different traffic types, so you can connect storage resources in a PureFlex system with a larger existing SAN that might be running your System z FICON traffic, and keep the two applications isolated from each other.

Your SVC Stretch Cluster solution complements the integrated compute power of PureFlex, and both can co-exist in your data center. All the PureFlex resources are managed from one point with Flex System Manager (FSM), and the use of open industry standard protocols means that you’ll be getting the lowest possible hardware cost. Of course, you knew all that if you made it to PureSystems Technical University for your summer vacation, so you can get started saving money and improving storage performance right away. If you missed it, don’t worry…IBM will be offering more technical university events in the coming months, spread around the world, for not only PureSystems but many other brands as well. If you can attend, drop me a line and let me know how you liked it; I’ll keep everyone posted on the feedback through my blog and Twitter feed.