I’ve spent the last few months working on IBM’s plans for next-generation data center fabric. It is a fascinating area, one ripe for innovation and some radical new thinking. When we were architecting on demand, and even before that, working on the Grid Toolbox, one of the interesting future options was InfiniBand (IB).

What made IB interesting was that you could put logic at either end of the IB connection, turning a standard IB connection into a custom switched connector by dropping your own code into the host channel adapter (HCA) or target channel adapter (TCA). Anyway, I’m getting off course. The point was that we could use an industry-standard protocol and connection to do some funky platform-specific things, like specific cluster support, quality-of-service assertion, or security delegation, without compromising the standard connection. This could be done between racks at the same speed and latency as between systems in the same rack, which could open up a whole new avenue of applications and would help to distribute work inside the enterprise, hence the Grid hookup. It never played out that way, for many reasons.

Over on the Cisco Data Center blog, Douglas Gourlay is considering changes to his “theory” of server disaggregation and network evolution – he theorises that over time everything will move to the network, including memory. Remember “the network is the computer”?

He goes on to speculate that “The faster and more capable the network the more disaggregated the server becomes. The faster and more capable a network is the more the network consolidates other network types”, and he wants time to sit down and “mull over if there is an end state”.

Well, nope, there isn’t an end state. First off, the dynamics of server design and environmental considerations mean that larger and larger centralized computers will still be in vogue for a long time to come. Take, for example, iDataPlex. It isn’t a single computer, but what is these days? In their own class are also the high-end POWER6-based 595 servers, again not really single servers but intended to multi-process, to virtualise, and so on. There is a definite trend towards row-scale computing, where additional capacity is dynamically enabled off a single set of infrastructure components; while you could argue these are distributed computers, just within the row, they are really composite computers.

As we start to see fabrics settle down and become true fabrics, rather than either storage/data connections or network connections, new classes of use and new classes of aggregated systems will be designed. This is what really changes the computing landscape: how systems are used, not how they are built. The idea that you can construct a virtual computer from a network was first discussed by former IBM guru Irving Wladawsky-Berger. His Internet computer illustration was legendary inside IBM, used and re-used in presentations throughout the late 1990s.

However, just like the client/server vision of the early ’90s, the distributed computing vision of the mid-’90s, Irving’s Internet computer of the late 1990s, and all those that came before and since, the real issue is how to use what you have, and what can be done better. That, for me, is the crux of the emerging world of 10Gb Ethernet, Converged Enhanced Ethernet, Fibre Channel over Ethernet, et al. Don’t take existing systems and merely break them apart and network them just because you can.

As data center fabrics allow low-latency, non-blocking, any-to-any and point-to-point communication, why force traffic through a massive switch-and-lift system to make that happen? Enabling storage to talk to tape, networks to access storage without going via a network switch or a server, and server-to-server, server-to-client, and device-to-device communication surely has some powerful new uses: live, dynamic streaming and analysis of all sorts of data without it having to pass through a server; appliances which dynamically vet, validate, and operate on packets as they pass from one point to another.
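To make the appliance idea concrete, here is a minimal sketch of an inline packet-processing pipeline. Everything in it (the `Packet` shape, the stage names, the `MAX_PAYLOAD` limit) is an illustrative assumption, not a real fabric API; the point is only the shape of the idea, that vet/validate/operate stages can sit in the data path between two endpoints, with no server involved.

```python
# Hypothetical sketch: an inline "appliance" that vets, validates, and
# operates on packets as they pass from one point to another.
# All names here are illustrative assumptions, not a real fabric API.

from dataclasses import dataclass
from typing import Callable, Optional

MAX_PAYLOAD = 1500  # illustrative MTU-style limit


@dataclass
class Packet:
    src: str
    dst: str
    payload: bytes


# A stage takes a packet and returns a (possibly modified) packet,
# or None to drop it.
Stage = Callable[[Packet], Optional[Packet]]


def vet(pkt: Packet) -> Optional[Packet]:
    # Drop traffic from sources we don't trust.
    return pkt if not pkt.src.startswith("untrusted") else None


def validate(pkt: Packet) -> Optional[Packet]:
    # Enforce a basic size invariant in flight.
    return pkt if len(pkt.payload) <= MAX_PAYLOAD else None


def operate(pkt: Packet) -> Optional[Packet]:
    # Example in-flight operation: tag the payload as inspected.
    return Packet(pkt.src, pkt.dst, b"INSPECTED:" + pkt.payload)


def appliance(pkt: Packet, stages: list[Stage]) -> Optional[Packet]:
    # Apply each stage in order; any stage may drop the packet.
    for stage in stages:
        result = stage(pkt)
        if result is None:
            return None
        pkt = result
    return pkt


if __name__ == "__main__":
    stages = [vet, validate, operate]
    ok = appliance(Packet("storage-a", "tape-b", b"block-42"), stages)
    bad = appliance(Packet("untrusted-x", "tape-b", b"block-43"), stages)
    print(ok)   # tagged packet passed through
    print(bad)  # None: dropped at the vetting stage
```

In a real fabric this logic would live in hardware or firmware on the adapter or in a fabric appliance, not in Python on a host; the sketch just shows how a storage-to-tape flow could be policed without ever touching a server.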

Since Douglas ended his post with a quote, I thought this apropos: “And each day I learn just a little bit more, I don’t know why but I do know what for, If we’re all going somewhere let’s get there soon, Oh this song’s got no title just words and a tune.” – Bernie Taupin

About & Contact

I'm Mark Cathcart, formerly a Senior Distinguished Engineer in Dell's Software Group; before that, Director of Systems Engineering in the Enterprise Solutions Group at Dell. Prior to that, I was an IBM Distinguished Engineer and member of the IBM Academy of Technology. I am a Fellow of the British Computer Society (bcs.org). I'm an information technology optimist.

I was a member of the Linux Foundation Core Infrastructure Initiative Steering committee. Read more about it here.