Mainframe Propeller Head

You’ve got to love the headline of this post over at the Mainframe Typepad blog: “Oracle Bids on Mainframe Tape Vendor.” Yeah… they’re a little more than a mainframe tape vendor. That “mainframe tape vendor” happens to be Sun Microsystems, the company IBM itself was considering buying.

The Platform engineers had also built technology that would offload some mainframe jobs like encryption and data analysis onto separate machines such as x86-based servers or even machines using IBM’s Cell chip. The idea was to speed up these jobs with more modern hardware and to create a path between industry-standard servers and mainframes.

Such technology could well appear in new mainframe systems due to arrive in late 2010 or early 2011, according to numerous people interviewed for my story.

This could tie into the System z keynote speech at Share, which was given by Karl Freund, a VP in the IBM System z group. In an interview after the keynote, Freund divulged some details. Think blade server as a mainframe LPAR:

“It will be like being able to treat those blade servers as if they were System z, from a systems management perspective,” he said. “Extending the z role to a heterogeneous environment.”

According to Freund, it would be like running a blade server as if it were another logical partition (LPAR) on your mainframe. Though mainframe hardware bears little comparison to an Intel blade, Freund said management will be easier: you handle one systems management console, and failover goes to the same sysplex rather than to a different backup platform.

“I don’t see the synergy at all. Maybe in the roll out we’ll see more. From an end user perspective it’s like merging matter and anti-matter. Which is which is open for debate.”

“I do see the synergy. When I read (and wrote) of the Sun Grid offering some years ago, the big question was – are we going back to centralized computing?”

“I have a biased view of the market. Don’t many shops have Sun and IBM on the floor with no mutual annihilation? At least all the library and VSM customers I’m concerned with have IBM on the floor. I wonder what that would do for Solaris on z?”

“I periodically mention an old meeting at the Palo Alto Science Center about a proposal to do a Sun machine product (by the people who would go on to form Sun). There were (at least) three different internal groups that claimed that what they were doing was better … and so IBM declined to do the Sun product. Note also that in the past decade or so, Sun acquired STK (StorageTek) … a mainframe-clone storage group.”

I have 45 years in mainframes (1964 to present) and worked for IBM for 30 of them. IBM will never open source z/OS, z/VM, or any of its other flagship products. I used IBM’s open source code in the ’60s and remember clearly when OCO was announced. You write of the possibility of essentially customer “innovation,” but innovation implies “hooks” into the nucleus and other key parts of the system loaded in the link pack areas. Yes, those hooks work on day 1, but on day 2, when IBM upgrades the source, maintenance becomes a nightmare, so much so that some customers refused to upgrade for years! Additionally, when things went awry, IBM was always blamed first, with demands that it find the bug ASAP, at no charge of course. No blame ever accrued to the purveyor of the modified hook. IBM was blamed for making the system “too complicated.”

Then there was the outright theft of intellectual property, i.e., the code in those flagship products. I can write long lists of what open systems don’t have, even after 15 years of “development.” Open system vendors like to use words like “virtualization” and “data sharing” that sound like they are doing the same thing as a mainframe, but when pushed they are forced to admit they are not “really” doing what those words imply. They have had at least 15 years to copy IBM, but so far no one has come close to the reliability and availability of mainframes, which have implemented true virtualization and data sharing for some 30 years now.

Don’t forget the mainframe hardware is tuned to mainframe software and vice-versa. The investment in that technology is now in the trillions of dollars – and I believe has the most patents for both hardware and software innovation in the world.

IBM is not going to give that away again!

Regards,
Dick Yezek

—

I think you let IBM off the hook too easily. I have been in the business for over 40 years as a systems programmer.

First off, I was a project manager at GUIDE for about 5 years when this was announced (IBM going OCO). There was a huge swell of anti-IBM feeling at that point, with people loudly criticizing IBM for its position. IBM finally responded with a promise that when OCO happened, documentation would be so good that you shouldn’t need source *AND* that all programming interfaces would be fully documented.

That somewhat allayed the criticism, and the resentment subsided into background distrust. At first IBM seemed to honor its promises; the documentation did get better.

Then, after about 5 years, the opposite started to happen. Information that had been freely available was cut back to the bone.

After about 10 years, IBM’s promises of better documentation went up in smoke. Case in point: IBM’s (new) COBOL compiler spit out messages that were *NOT* documented. IBM never put out a Messages and Codes manual for the thing. I had a field day making IBM look bad over this. Then the ultimate chutzpah: IBM came back and said the messages were self-documenting. Hahaha. Then IBM started making waves and actually charging users for documentation of the interfaces to IBM’s routines. I have heard of a $100K charge for a peek at a document that should have been open to everyone who bought the manual.

That was one case; the next is a classic. IBM wanted $150K for documentation so a vendor could write his product to sell to the “public.” IBM sold it to him, and then about 2 years later shipped a fix that changed the interface without telling the vendor what was happening. This is easily done in SMP/E with hold data. The vendor got a black eye because IBM did not tell people this fix would make the product stop running. I have heard that this product is European-based but has some users in the US as well.

IBM charging to look at documentation that should be available (at a nominal charge, i.e., cost) is not even close to keeping the promise made all those years ago. The newer people in the business do not have a clue how important source is. Example: 30 years ago there was a utility that was part of Logrec, IBM’s facility for reporting hardware & software errors. The utility that ran the report was extremely expensive CPU-wise (we were against the wall: the largest, fastest CPU available, running at 100 percent, 24/7). I went to the fiche, and after a few minutes of poking around I found why the thing was so CPU-intensive. There was a (4K?) table that, every time a record came in, was searched sequentially to find the entry to update, and it took forever. I sat down, thought about it, and figured out how to cut the run time to 30 seconds or less just by indexing into the table and adding to the field directly. I opened an APAR and sent the documentation in with my suggested fix. IBM refused to implement it. So I replaced the IBM module in question with my own.
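The optimization in that anecdote is worth spelling out. Here is a minimal sketch in Python (the original was of course an IBM assembler module; the table layout, keys, and field names below are invented for illustration) contrasting the sequential search that made the Logrec report so CPU-hungry with the direct-index update that fixed it:

```python
# Sketch of the fix described above: per-record sequential table search
# (O(n) per record) vs. direct indexing into the table (O(1) per record).
# Table structure and key range are hypothetical stand-ins.

def update_sequential(table, key, amount):
    # Original behavior: walk the table entry by entry until the slot
    # for this key is found.
    for entry in table:
        if entry["key"] == key:
            entry["count"] += amount
            return
    raise KeyError(key)

def update_indexed(table, index, key, amount):
    # Suggested fix: look up the slot once and add to the field directly.
    table[index[key]]["count"] += amount

# A small table mimicking the (4K?) table from the story.
table = [{"key": k, "count": 0} for k in range(4096)]
index = {entry["key"]: i for i, entry in enumerate(table)}

for rec in [17, 4095, 17]:
    update_sequential(table, rec, 1)      # slow path: scans up to 4096 entries
for rec in [17, 4095, 17]:
    update_indexed(table, index, rec, 1)  # fast path: one lookup per record

assert table[17]["count"] == 4 and table[4095]["count"] == 2
```

With millions of error records, the difference between scanning the whole table per record and a single indexed add is exactly the kind of thing that turns an hours-long run into 30 seconds.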

I know this is all water under the bridge, but people have let IBM off the hook too easily nowadays. It’s time to hold their feet to the fire over the promises made when they first announced OCO.

Greg Papadopoulos, Sun Microsystems chief technology officer, gave the keynote address at the AFCOM Data Center World conference in Las Vegas yesterday. And during it, he compared cloud computing to the mainframe.

What he said about cloud computing: “In some sense, it’s nothing new. This idea that we’re going to build large concentrations of computing, storage, and networking, and produce them as a service. That’s a great definition of mainframe computing. A bunch of 3270s hooked up as a professionally managed pool of resources.”

Though the idea that mainframe computing is similar to the cloud is not new (see “Cloud computing = mainframe rehash?”), it was interesting to hear it straight from a Sun executive.

Mainframe MIPS are like the stock market circa 2006, right? They never go down.

But while one going down leads to a nationwide financial crisis, the other going down might lead to a companywide standing ovation. Cutting mainframe MIPS means money that went toward exorbitant software licensing costs can now go into company coffers.

Next month, BMC Software will release the results of a MIPS reduction study it performed with mainframe users. Though the sample was admittedly small (20 members of BMC’s user advisory board), the software vendor hopes the results can serve as a blueprint for how users can actually reduce mainframe MIPS.

“We go into their shop and try to cut peak MIPS load,” said Mike Moser, product management director for BMC mainframe service management, at the Share mainframe user group conference in Austin last week. “We’re trying to put a quantification on how much capacity we can give you back on various things.”

Moser cited SQL tuning and capacity management as two of the areas where BMC can help users save money. As an example, he said capacity management is crucial to keeping costs under control.

“You don’t want to overbuy, because that’s wasted capacity,” he said. “But you don’t want to underbuy either, because then you run into emergency situations where you don’t have room to negotiate, and that could be costly. People can get fired over stuff like that.”
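Moser’s point about right-sizing comes down to where the money actually goes: IBM’s sub-capacity software charges are generally driven by the peak four-hour rolling average (R4HA) utilization, so cutting the peak, not the average, is what frees up budget. A minimal sketch, assuming hourly MSU samples and an invented per-MSU monthly cost (all numbers below are made up for illustration, not from the BMC study):

```python
# Hedged illustration: compute the peak four-hour rolling average (R4HA)
# of MSU consumption, the figure sub-capacity license charges typically
# key off. Utilization samples and cost-per-MSU are invented.

def peak_r4ha(hourly_msu, window=4):
    """Peak of the rolling `window`-hour average over hourly MSU samples."""
    averages = [
        sum(hourly_msu[i:i + window]) / window
        for i in range(len(hourly_msu) - window + 1)
    ]
    return max(averages)

# One day of hourly MSU consumption with a morning batch spike,
# before and after (hypothetical) SQL tuning flattens the spike.
before = [200, 210, 600, 640, 620, 580, 300, 250] + [200] * 16
after  = [200, 210, 450, 470, 460, 440, 300, 250] + [200] * 16

cost_per_msu = 100.0  # assumed $/MSU per month, purely illustrative
saved = (peak_r4ha(before) - peak_r4ha(after)) * cost_per_msu
print(f"peak R4HA before: {peak_r4ha(before):.0f} MSU, "
      f"after: {peak_r4ha(after):.0f} MSU, saving ${saved:,.0f}/month")
```

Note that the total daily consumption barely changes in this example; only the peak does, which is why capacity planners obsess over smoothing workload spikes rather than shrinking averages.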

The study will obviously (read: hopefully) have to balance the cost savings in MIPS reduction with the cost of bringing a BMC rep in to find the savings in the first place. Once the study comes out, we’ll report on it here with more details.

Oftentimes when I talk to CA users — whether they’re running a mainframe, distributed systems or most often, both — they’re not sure of everything they have. That’s one of the oft-cited criticisms of CA and other large third-party mainframe software vendors, that they’re so bulky and difficult to manage.

Over the past year, CA has taken steps to alleviate that problem with its so-called Mainframe 2.0 initiative. Forget the fact that putting 2.0 or 3.0 or whatever.0 on a product name or trend is passé and really doesn’t mean much anymore (another example: Cisco’s Data Center 3.0). So what is it? Basically, it is CA’s attempt to clear up all the CA software clutter in so many mainframe shops.

Scott Fagen, CA’s vice president of enterprise systems management, said that in May, CA will come out with a “CA mainframe software stack,” a group of 45 CA software products grouped together for easier purchase, installation, patches, maintenance, etc.

“You’ll be able to log on and have a view of all the products you have from CA,” Fagen said at the Share mainframe user group conference in Austin, Tex. last week. “It will remind people that the products are there and they are there to be used.”

Fagen added that CA will continue to offer its Mainframe Value Program, which is a free service for CA customers where a CA rep will come in and tell you about all the CA software you have, what you use and what you don’t, and produce a report on how to improve performance.

Some users might say that CA talking up Mainframe 2.0 is akin to scientists at Jurassic Park bragging about finally getting rid of the velociraptors. In other words, is CA fixing a problem that it created? But at least there is now an option out there for those users I talk to who are looking to make more sense of their CA software portfolio.

What if you could run your blade server as if it were an extension of your mainframe? The concept is there, the reality not too far away, according to IBM.

At the Share mainframe user group conference last week in Austin, I got a chance to meet with Karl Freund, IBM senior VP of strategy for System z. Freund gave the System z keynote at the show on dynamic infrastructure, laying out some future trends and directions for the platform. I missed the keynote but was able to talk to Freund later, where he outlined the vision that he spoke about in his presentation.

“It will be like being able to treat those blade servers as if they were System z, from a systems management perspective,” he said. “Extending the z role to a heterogeneous environment.”

A user last week asked me if IBM would ever come out with a mainframe in a 19″ rack. This looks like the company’s answer.

According to Freund, it would be like running a blade server as if it were another logical partition (LPAR) on your mainframe. Though mainframe hardware bears little comparison to an Intel blade, Freund said management will be easier: you handle one systems management console, and failover goes to the same sysplex rather than to a different backup platform.

Presumably this would make it easier for your front-end apps on your blade servers to communicate and get along with back-end database and ERP applications on the mainframe.

Freund wouldn’t divulge when the technology is coming out, but he did say it would arrive in bits and pieces. Customers interested in it can contact IBM and might be able to get more details under a non-disclosure agreement. And he said IBM is unveiling this future trend so that data centers out there can start planning for it.

And how to plan for it? Freund gave an example: Suppose a company is building out its SAP platform, the data for which is on System z. Let’s say it decides to use blade servers on the front end and is initially leaning toward blades from Hewlett-Packard, Dell, Sun or some other blade provider.

“Now they might decide to put it on BladeCenter because it will be supported,” he said. “They also may decide to put it on Linux because it will be supported on Linux and AIX first.”

Oh. So IBM is divulging this future mainframe technology so it can sell more BladeCenters?

There’s some hubbub at the Share conference in Austin this week about a session tomorrow morning from Mantissa Corp. The session is titled “x86 Virtualization Technology for System z.” During it, the company’s CEO, Gary Dennis, is expected to unveil technology that would allow Microsoft Windows to run on top of z/VM on a mainframe.

The company hasn’t divulged many details about the product, called z/VOS, though Dennis did say back in the fall that in the first quarter of this year the company planned on delivering a system that allows unaltered x86 operating systems, including Windows, to run under z/VM. In an email, he said that by using a desktop appliance running RDC (Remote Desktop Connection), users will be able to connect to their virtual Windows images running on an IBM mainframe.

He added that Mantissa believes a z10 mainframe could “comfortably run 2,000 copies of Windows desktop systems” while still running regular z/OS workloads.

Unfortunately I won’t be able to attend the session tomorrow morning as I’m leaving Austin today, but I just spoke to Dennis. He wouldn’t reveal any more details but said we could talk next week, so I hope to connect with him on Monday. In the meantime, here’s the full abstract from the Share session tomorrow morning:

Over the last decade IBM has quietly opened a world of virtualization possibilities through changes in the System z instruction set and advances in their chip technology. These changes have made possible x86 virtualization alternatives never imagined. Find out how you can leverage System z to achieve x86 virtualization goals faster and more cost effectively than you ever thought possible. Learn how you can deploy and manage native x86 Windows® and Linux images under z/VM. Gain an understanding of how you can simplify operations and more easily reach virtualization and cost containment goals through:

JIT deployment of virtualized x86 OS images

Reductions in deployment costs

Simplified image and deployment maintenance

Reduced power and space requirements

Learn more about z/VOS, the system that makes this virtualization alternative possible. Gain first-hand knowledge of:

AUSTIN — Earlier today Share held a conference session on the status of the event here in Austin and of the group overall.

There were only 636 full-week paid attendees at the show this spring, which was below their expectations (crowds did seem light in the sessions and on the floor compared to previous shows). Some of the 900 or so sessions had zero attendees, the board reported.

For the next show in Denver, the group will cut down on the number of sessions and rooms, and there will be no concurrently running sessions during either the general session keynote or any of the themed keynotes. They’re also considering blocking out a dedicated time, perhaps Tuesday afternoon, just for the trade show, to boost attendance there.

In Denver, the group will continue to favor sessions that bring in user experiences, which it seemed they had a lot more of this year in Austin. That’s a good sign. Share is also working on building some kind of online community on its Web site. One idea tossed around was to record conference sessions and make them available afterward as podcasts on the Share site.