Archive for February, 2008

One of the topics I’m trying to close on at the moment is Power Systems security. I have my views on where I think we need to be, where the emerging technology challenges are, what the industry drivers are (yours and ours), and the competitive pressures.

If you want to comment or email me with your thoughts on Power Systems security, I’d like to hear them. What’s important, what’s not? Of course I’m interested in OS-related issues, whether AIX, i, or Linux on Power. I’m also interested in requirements that span all three, and that need to apply across the hardware and PowerVM.

Interested in mobility? Want your keys to move between systems with you? Not much good if you move the system but can’t read the data because you don’t have key authority. Is encryption in your Power Systems future? Is it OK to have it in software only, to have it as an offload engine, or does it need to run faster via acceleration? Do you have numbers or calculations on how many keys, what key sizes, etc.?

Let’s be clear though, we have plans and implementations in all these areas. What I’m interested in are your thoughts and requirements.

There’s an excellent analysis by Frank Dzubeck over on Network World today about the new Enterprise Data Center and that hoary old chestnut, latency. I don’t know who briefed Frank; it wasn’t me, and it wasn’t Jeff (we talked this afternoon and I asked). Since the article also covered the z10 announcement, I have a good idea though ;-)

Frank covers ensembles, data center utilization and some of the new data center fabric issues extremely well. He also makes the point, which I’d like folks to be clear about, that this isn’t the resurgence of the mainframe, or everything back to a central server.

We’ve grown used to indefinite waits, or unbelievably fast response times, from certain popular websites, but the emerging problem is latency in the data center: how to deliver service levels and response times in an increasingly rich and complex systems environment. It’s one thing to build a data center or server subsystem focused around a single business model, something like Amazon’s EC2 or S3, or Google’s search and query engines; it’s another to take a vast array of different vendors’ IT equipment, bought at different times for different business applications and services, integrate it all, and orchestrate it as business services. While MapReduce may or may not be as good as, or better than, a database, not everything is going to be run in this fashion.

Fibre Channel over Ethernet is going to happen, and 10Gb Ethernet opens up some real options in terms of both integrating systems and distributing services. It will be almost as fast to connect to another server as it is to talk between cores and processors within the same server. This disclosure from IBM Research today shows the way to the next generation of interconnected infrastructure: at 300 Gbit/second the bus goes optical, making the integration of rich data systems (video, VoIP, total encryption of data, secure key-based infrastructure services) with more traditional transactional systems a real possibility.

The opportunity isn’t to take the same old stuff and distribute it because the fabric is faster; it’s about better integrating systems and exploiting new ways of doing things: introducing a common event infrastructure, being more intelligent about WAN and application routing, having a publish/subscribe/consume model for the infrastructure, and genuinely opening it up and simplifying it.

Of course, there are lots of blanks to be filled in, but the new Enterprise Data Center is taking shape.

To net it out from my perspective though, there is a lot of good technology behind this, and an interesting direction summarized nicely starting on page 10 of the POV paper linked from the new data center page, or here.

What it lays out are the three main stages of adoption for the new data center: simplified, shared and dynamic. The Clabby Analytics paper, also linked from the new data center page or here, puts the three stages in a more consumable, practical tabular format.

They are really not new; many of our customers will have discussed them with us many times before. In fact, it’s no coincidence that the new Enterprise Data Center vision was launched the same day as the new IBM z10 mainframe. We started talking about these ideas when I worked for Enterprise Systems in 1999, and we formally laid the groundwork in the on demand strategy in 2003. In fact, I see the Clabby paper has used the on demand operating environment block architecture to illustrate the service patterns. Who’d have guessed.

Simplify: reduce costs for infrastructure, operations and management

Share: for rapid deployment of infrastructure, at any scale

Dynamic: respond to new business requests across the company and beyond

However, the new Enterprise Data Center isn’t based on a mainframe, z10 or otherwise. It’s about a style of computing: how to build, migrate and exploit a modern data center. Power Systems has some unique functions in both the Share and Dynamic stages, like partition mobility, with lots more to come.

For some further insight into the new data center vision, take a look at the presentation linked off my On a Clear day post from December.

We’ve announced another performance and benchmark record this week: an IBM WebSphere Application Server benchmark that involved more than 109,850 concurrent clients and produced 14,004.42 SPECjAppServer2004 JOPS@Standard (jAppServer Operations Per Second), which translates into more than 50 million business transactions over the course of the benchmark’s hour-long runtime. That’s a lot of clients, and a lot of transactions!
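The 50-million figure falls straight out of the published rate; a quick back-of-the-envelope check:

```python
# Sanity check: JOPS (jAppServer Operations Per Second) sustained over
# the benchmark's hour-long runtime gives the total operation count.
jops = 14004.42            # SPECjAppServer2004 JOPS@Standard result
runtime_seconds = 60 * 60  # one-hour run

total_operations = jops * runtime_seconds
print(f"{total_operations:,.0f}")  # prints 50,415,912
```

So "more than 50 million business transactions" is, if anything, understating it slightly.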

The performance run was completed on IBM POWER6 BladeCenter servers, each powered by two dual-core IBM® POWER6® 4.0 GHz processors, with IBM DB2 Universal Database v9.5 on a System p 595 running AIX.

We ran the test over 52 processors, 2 cores per processor, with SMT on. The software config included 26 WAS instances. Now, the issue here isn’t performance, and 26 instances isn’t so bad from a config and deployment perspective either. But wouldn’t it be better if you could bundle all that up into a couple of racks and use cloning, automatic deployment, recovery, scheduling etc., on an even more consolidated, energy-efficient platform?
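For the curious, the shape of that configuration works out as follows (simple arithmetic from the numbers above):

```python
# Benchmark configuration arithmetic from the run described above.
processors = 52
cores_per_processor = 2
was_instances = 26
smt_threads_per_core = 2   # POWER6 SMT2

total_cores = processors * cores_per_processor       # 104 cores
cores_per_instance = total_cores // was_instances    # 4 cores per WAS instance
hardware_threads = total_cores * smt_threads_per_core  # 208 threads with SMT on

print(total_cores, cores_per_instance, hardware_threads)  # prints 104 4 208
```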

Funnily enough, we are working on that. The IBM press release mentions IMPACT 2008; that might be good timing, but I won’t be there as I’m off to do the Machu Picchu thing at the start of April.

Prior to the new WebSphere+Power double-up, the 4Q2007 record was held by Oracle on an HP-UX Integrity Server Blade Cluster, with 10,519.43 JOPS over 24 server instances on 22 2-core processors; Sun also submitted a SPARC T5120 SPECjAppServer2004 benchmark with Sun Java System Application Server 9.1, running 6 nodes and 18 server instances on 48 cores over 6 chips, and only scored 8,439.36 JOPS.

You can read the full press release with links to SPEC and IMPACT 2008 here.

On Feb. 12th, Datamation announced their product of the year awards; the IBM Power Systems p570 server won enterprise server of the year, up against the IBM System x3950 M2 server, the HP MediaSmart Server, and the Dell PowerEdge 2970.

A couple of things from the “Monkmaster” this morning piqued my interest and deserved a post rather than a comment. First up was James’ post on “your Son’s IBM“. James discusses a recent theme of his around stackless stacks and simplicity. Next up came a tweeted link on cohesiveFT and their elastic server on demand.

These are very timely. I’ve been working on an effort here in Power Systems for the past couple of months with my ATSM, Meghna Paruthi, on our appliance strategy. These are, as always with me, one layer lower than the stuff James blogs on; I deal with plumbing. It’s a theme and topic I’ll return to a few times in the coming weeks as I’m just about to wrap up the effort. We are currently looking for some Independent Software Vendors (ISVs) who already package their offerings in VMware or Microsoft virtual appliance formats and either would like to do something similar for Power Systems, or alternatively have tried it and don’t think it would work for Power Systems.

Simple, easy-to-use software appliances which can be quickly and easily deployed into PowerVM logical partitions have a lot of promise. I’d like to have a marketplace of stackless, semi-or-total black box systems that can be deployed easily and quickly into a partition, using existing capacity or dynamic capacity upgrade on demand, to get the equivalent of cloud computing within a Power System. Given we can already run circa 200 logical partitions on a single machine, and are planning something in the region of 4x that for the p7-based servers with PowerVM, we need to do something about the infrastructure for creating, packaging, servicing, updating and managing them.

We’ve currently got six sorta-appliance projects in flight: one related to future data centers, one with WebSphere XD, one with DB2, a couple around security, and some ideas on entry-level soft appliances.

So far, OVF wrappers around the Network Installation Manager (aka NIM) look like the way to go for AIX-based appliances, with similar processes for i5/OS and Linux on Power appliances. However, there are a number of related issues about packaging, licensing, and inter- and intra-appliance communication that I’m looking for some input on. So, if you are an ISV, or a startup, or even an independent contractor who is looking at how to package software for Power Systems, please feel free to post here, or email; I’d love to engage.
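To give a flavour of the wrapper idea, here is a purely illustrative sketch of generating a minimal OVF-style envelope around a NIM mksysb image. The file names, IDs and element set are hypothetical, and a real OVF package carries far more metadata (disk sections, hardware resource allocations, signatures); this just shows the shape of the descriptor that deployment tooling would consume.

```python
# Hypothetical sketch only: a minimal OVF-style envelope wrapping an AIX
# NIM mksysb backup image. Names, IDs and values are illustrative, not a
# real Power Systems appliance format.
import xml.etree.ElementTree as ET

OVF_NS = "http://schemas.dmtf.org/ovf/envelope/1"

def build_envelope(image_file, image_size_bytes, appliance_name):
    env = ET.Element("Envelope", {"xmlns": OVF_NS})

    # References: the backup/disk images the appliance ships with
    refs = ET.SubElement(env, "References")
    ET.SubElement(refs, "File", {"id": "mksysb1",
                                 "href": image_file,
                                 "size": str(image_size_bytes)})

    # VirtualSystem: metadata the deployment tooling (NIM plus the
    # PowerVM management stack, say) would use to create an LPAR and
    # install the image into it
    vs = ET.SubElement(env, "VirtualSystem", {"id": appliance_name})
    info = ET.SubElement(vs, "Info")
    info.text = "AIX software appliance packaged as a NIM mksysb image"

    return ET.tostring(env, encoding="unicode")

print(build_envelope("webapp_aix61.mksysb", 2147483648, "demo-appliance"))
```

The attraction of this approach is that the appliance payload stays in a format NIM already understands, while the envelope gives a vendor-neutral description of what the image is and what it needs.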

Long-time friend, former IBM VM and LAN Systems Director, and now fellow Austin resident Art Olbert pointed me to this video. It’s the University of Manitoba holding a funeral procession for their mainframe system after some 47 years of service. Nothing on their web site says what they’ve replaced it with; I’ve emailed them and asked. Their web site is currently running on Apache on Linux, after migrating from Solaris some time in 2005. As always, Slashdot covers this with comments that range from the helpful to the absolutely bizarre.

Art is familiar with this type of stunt; he is lovingly remembered for blowing up an IBM mainframe at the announcement of the IBM LAN Server in the 1990s. Sorry Art, couldn’t avoid mentioning it :-) Ahh, the good old days.

About & Contact

I'm Mark Cathcart, Senior Distinguished Engineer in Dell's Software Group. I was formerly Director of Systems Engineering in the Enterprise Solutions Group at Dell, and an IBM Distinguished Engineer and member of the IBM Academy of Technology. I'm an information technology optimist.