At SC11 in Seattle, IBM unveiled its new Blue Gene/Q supercomputer project, aimed at solving the most challenging problems facing engineers and scientists, such as predicting the path of hurricanes, analyzing the ocean floor to discover oil, simulating nuclear weapons performance, and decoding gene sequences. SC11 attendees could also challenge IBM’s Watson supercomputer to a game of Jeopardy! and see the most innovative network research projects in “programming the network” using OpenFlow.

OpenFlow allows the implementation of software-defined networking, enabling significant innovation in High Performance Computing, which is highly reliant on network infrastructure. At SC11, the SCinet Research Sandbox (SRS) gave researchers access to over 100 Gigabits per second of capacity to demonstrate the promise of OpenFlow on a software-programmable testbed network running on the SCinet infrastructure. For the first time, the SRS featured a 10 Gigabit Ethernet, multi-vendor OpenFlow network testbed providing OpenFlow capabilities for wide area networking. I am pleased to report that our OpenFlow-enabled IBM RackSwitch G8264 played a major role in this landmark demonstration.
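To make the “programming the network” idea concrete, here is a minimal sketch of the match/action model at the heart of OpenFlow: a controller installs flow entries (a match on header fields plus forwarding actions) in a switch's flow table, and the switch forwards packets by table lookup rather than by fixed, vendor-defined logic. The class names, field values, and port numbers below are purely illustrative and are not drawn from the SRS testbed configuration.

```python
from dataclasses import dataclass


@dataclass
class FlowEntry:
    """A simplified OpenFlow-style flow entry: match on header fields, apply actions."""
    match: dict      # e.g. {"ipv4_dst": "10.0.1.20", "tcp_dst": 988}
    actions: list    # e.g. [("output", 2)]
    priority: int = 0


class FlowTable:
    """A toy flow table: the controller adds entries; the switch looks packets up."""

    def __init__(self):
        self.entries = []

    def add_flow(self, entry: FlowEntry) -> None:
        # In a real deployment this step is the controller sending a flow-mod
        # message to the switch; here we simply store the entry locally.
        self.entries.append(entry)
        self.entries.sort(key=lambda e: e.priority, reverse=True)

    def lookup(self, packet_headers: dict) -> list:
        # The highest-priority entry whose match fields all agree with the packet wins.
        for entry in self.entries:
            if all(packet_headers.get(k) == v for k, v in entry.match.items()):
                return entry.actions
        return [("drop", None)]  # simplified table-miss behavior


# Example: steer traffic destined for one host (illustrative addresses) out port 2.
table = FlowTable()
table.add_flow(FlowEntry(match={"ipv4_dst": "10.0.1.20", "tcp_dst": 988},
                         actions=[("output", 2)], priority=100))
print(table.lookup({"ipv4_src": "10.0.0.10", "ipv4_dst": "10.0.1.20", "tcp_dst": 988}))
```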

As part of the SRS, Indiana University (IU) deployed a 100 Gigabit Ethernet network for its high-speed Lustre WAN between the IU data center in Indianapolis and the convention center in Seattle, using OpenFlow technology for path selection on OpenFlow-enabled IBM RackSwitch G8264 switches. IU’s Global Research Network Operations Center has extensive network expertise and is home to the recently announced Network Development and Deployment Initiative based on OpenFlow. IU deployed two Lustre filesystems at the ends of a 100Gb network connecting Bloomington, Indiana and the SC11 show floor. The IU demo ran real-world scientific applications designed to saturate this 100Gb link; at the saturation point, application traffic was dynamically rerouted over an alternative network using OpenFlow, tuning traffic based on need, priority and capacity.
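A rough sense of what that saturation-triggered path selection might look like in controller logic is sketched below. The threshold, port numbers, priority labels, and helper names are assumptions chosen for illustration; this is not IU's actual controller code.

```python
# Hypothetical sketch of OpenFlow-based path selection: when the primary 100Gb
# link approaches saturation, lower-priority flows are steered onto an alternate
# path by installing new match/action rules. All constants are illustrative.

SATURATION_THRESHOLD = 0.90   # reroute when the primary link exceeds 90% utilization
PRIMARY_PORT = 1              # switch port toward the primary 100GbE path
ALTERNATE_PORT = 2            # switch port toward the alternate path


def choose_output_port(link_utilization: float, flow_priority: str) -> int:
    """Pick an egress port for a flow based on current load and flow priority."""
    if link_utilization >= SATURATION_THRESHOLD and flow_priority == "bulk":
        return ALTERNATE_PORT   # shed bulk traffic to the alternate path
    return PRIMARY_PORT         # high-priority traffic stays on the primary path


def build_flow_rule(src_ip: str, dst_ip: str, out_port: int) -> dict:
    """Represent an OpenFlow-style rule as a simple match/action dictionary."""
    return {
        "match": {"ipv4_src": src_ip, "ipv4_dst": dst_ip},
        "actions": [{"type": "OUTPUT", "port": out_port}],
        "priority": 100,
    }


if __name__ == "__main__":
    # Example: the primary link is 95% utilized, so a bulk transfer between two
    # illustrative hosts gets a rule steering it out the alternate port.
    port = choose_output_port(link_utilization=0.95, flow_priority="bulk")
    print(build_flow_rule("10.0.0.10", "10.0.1.20", port))
```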

Also at SC11, IBM System Networking showcased our new smarter networking solutions, including the IBM System Networking RackSwitch G8316, a 40 Gigabit Ethernet (GbE) aggregation switch optimized for High Performance Computing and other applications requiring high bandwidth and low latency. The IBM System Storage SAN768B-2 and SAN384B-2 fabric backbones are among the industry's newest Fibre Channel switching infrastructure, providing reliable, scalable, high-performance foundations for private cloud storage and highly virtualized environments. The IBM System Storage SAN48B-5 SAN switch is designed to meet the demands of hyper-scale, private cloud storage environments by delivering 16 Gbps Fibre Channel technology and capabilities that support highly virtualized environments.

SC11 was a singular gathering of the High Performance Computing community, and it was a privilege for the IBM System Networking team to have the opportunity to meet with so many innovators in the engineering and science community.