Oh, now it’s legacy IT that’s dead. Huh?

I got a pingback from Dana Gardner’s ZDNet blog for my “Is SOA dead?” post. Rather than addressing the issue I raised yesterday, Dana just moved the goalposts, claiming “Legacy IT is dead“.

I agree with many of his comments, and Dana’s post goes on to ably prove the point of my earlier post, “Life is visceral“. I liked some of the fine flowing language, some of it almost poetic, especially this: “We need to stop thinking of IT as an attached appendage of each and every specific and isolated enterprise. Yep, 2000 fully operational and massive appendages for the Global 2000. All costly, hugely redundant, unique largely only in how complex and costly they are all on their own.” – whatever that means?

However, here is a reasonable challenge for anyone considering jumping to a complete services or cloud-services model: not migrating, not following a roadmap or architecture to get there, but, as Dana suggests, grasping the nettle and just doing it.

One of the simplest and easiest examples I’ve given before for why, as Dana would have it, “legacy systems” exist, is because there are some problems that just can NOT be split apart a thousand times, whose data can NOT be replicated into a million pieces.

Let’s agree: Google handles millions of queries per second, as do eBay and Amazon, well, probably. However, in the case of the odd Google query not returning anything, as opposed to returning no results, no one really cares or understands; they just click the browser refresh button and wait. It’s pretty much the same for Amazon: the product is there, you click buy. If every now and again there is only one item of a product left at an Amazon storefront, and someone else has bought it between the time you looked for it and decided to buy, you just suffer through the email saying the item will be back in stock in 5 days. After all, it would take longer than that to track down someone to discuss it with.

If you ran your banking or credit card systems this way, no one would much care when it came to queries. Sure, your partner is out shopping while you are home working on your investments. Your partner goes to get some cash, checks the balance, and the money is there. You want to transfer a large amount of money into a money market account; you check, and the amount is there too. You’ll transfer some more into the account overnight from your savings and loan, and you know your partner only ever uses credit, right? You both proceed. A real transactional system lets one of you proceed and makes the other fail, even if there is only a second, possibly less, between your transactions coming in.

In the Google model, this doesn’t matter; it’s all only queries. If your partner does a balance check a second or more after you’ve done a transfer and sees the wrong balance, it will only matter when they are embarrassed 20 seconds later, trying to use a balance that isn’t there anymore.
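The difference can be sketched in a few lines. This is a toy illustration, not any bank’s actual system: a single lock stands in for a real transaction manager, and the account, amounts, and function names are all made up. The point is that the balance check and the debit must happen as one atomic step, so two concurrent spends can never both succeed against the same money.

```python
import threading

class Account:
    """Toy transactional account: the balance check and debit are atomic."""
    def __init__(self, balance):
        self.balance = balance
        self._lock = threading.Lock()

    def debit(self, amount):
        # The check and the update happen under one lock, so two
        # concurrent debits can never both spend the same money.
        with self._lock:
            if self.balance >= amount:
                self.balance -= amount
                return True   # transaction committed
            return False      # transaction rejected

account = Account(100)
results = []

def spend(amount):
    results.append(account.debit(amount))

# You and your partner both try to spend the same $100 at once.
t1 = threading.Thread(target=spend, args=(100,))
t2 = threading.Thread(target=spend, args=(100,))
t1.start(); t2.start(); t1.join(); t2.join()

# Exactly one debit succeeds; the balance never goes negative.
print(sorted(results), account.balance)  # [False, True] 0
```

In the Google-style query model, by contrast, both of you would read the stale balance and both spends would appear to succeed, with the mess sorted out later.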

Of course, you can argue banks don’t work like that; they reconcile balances at the end of the day. You’ll care, though, when the exception balance charge kicks in because both transactions went through. Most bank systems are legacy systems from a different perspective, and should be dead. We, as customers, have been pushing for straight-through processing for years: why should I wait three days for a check to clear?

So you can’t have it both ways. Out of genuine professional understanding and interest, I’d like to see any genuine transaction-based systems that are largely or wholly services based, or that run in the cloud.

In order to do what Dana advocates and move off ALL legacy systems, those transaction systems need to cope with 1,000, and up to 2,000, transactions per second. Oh yeah, it’s not just banks that use “legacy IT”; there are airlines, travel companies, anywhere there is a finite product and an almost infinite number of customers.

Remember, Amazon, eBay and PayPal don’t do their own credit card processing as far as I’m aware; they are just merchants who pass the transaction on to a, err, legacy system.

Some background reading: a paper I was given early in my career, around the time I was advocating moving Chemical Bank, NY’s larger transaction systems to virtual machines, which we did. I was attending VM internals education at Amdahl in Columbia, MD, and one of the instructors thought I might find the paper useful.

The paper was written in 1984 by a team at Tandem Computers and Amdahl, including the late, great Jim Gray. Early on, they describe environments that supported 800 transactions per second. Yes, in 1984. These days, even in the current economic environment, 1,000tps is common and 2,000tps is table stakes.

And finally, since I’m all about context: I’m an employee of Dell; I started work there today. What is written here is my opinion, based on 34 years’ IT experience, much of it garnered at the sharp end, designing an I/O subsystem to support a large NY bank’s transactional, inter-bank transfer system, as well as being responsible for the world’s first virtualized credit card authorization system, etc. But I didn’t work for Dell, or for that matter IBM, then.

About & Contact

I'm Mark Cathcart, formerly a Senior Distinguished Engineer in Dell's Software Group; before that, Director of Systems Engineering in the Enterprise Solutions Group at Dell. Prior to that, I was an IBM Distinguished Engineer and member of the IBM Academy of Technology. I am a Fellow of the British Computer Society (bcs.org). I'm an information technology optimist.

I was a member of the Linux Foundation Core Infrastructure Initiative Steering committee. Read more about it here.