Recently, I engaged in several discussions in various mainframe-related LinkedIn groups about the need to run up-to-date software to help companies cut costs. The comments I received in these discussions were both interesting and bothersome.

First, it’s interesting to see people confirm what I had considered merely anecdotal evidence from my own experience. Most respondents state that much of the new functionality and enhancements in the latest releases of the software solutions they use would help them run their mainframes more efficiently. But at the same time, the two reasons they give for delaying these upgrades are bothersome:

• Perceived product quality. Few commenters wanted to be on the front line when it came to using the latest and greatest. Many had previously had bad experiences with quality issues that ultimately proved expensive to resolve.
• Lack of resources. Many stated they simply didn’t have the time to install, test, and conduct quality assurance on those new releases. They lacked the staff to do this properly and decided they would migrate only if their vendor forced them to.

The first reason, quality, is understandable, especially on the mainframe. We’re expected to run a 24x7 operation where downtime simply isn’t acceptable. Mainframers have an “if it ain’t broke, don’t fix it” mentality, but in my honest opinion, we’ve gone too far in this regard because of the second reason raised: lack of resources.

This is a real problem, one I’ve blogged about many times. Companies (and their managers) have relied for far too long on the mainframe simply always being “on.” Outsourcing, staff reductions, and shrinking budgets for a platform that’s more complex than ever will take their toll sooner or later. Large system failures are more common than they were five to 10 years ago, and analysts are the first to claim this is understandable because the technology is “ancient,” “more than 20 years old,” “has existed for a very long time,” and so on. What they fail to see is that this isn’t a technology issue; it’s a management issue. The mainframe today is as reliable as it has been for decades and is built on technology as modern as any other in IT.

A 20,000 MIPS shop that spends 50 to 60 percent of its general processing power running DB2 can achieve an out-of-the-box savings of 5 percent (on the low side) simply by upgrading to DB2 10: that’s at least 500 MIPS. In times where every MIPS counts, this means a lot. These savings alone (or delayed investments in a MIPS upgrade, if you will) will buy you the tools you need to automate more of the migration process, so you’ll need fewer people to perform the upgrade. That’s time and money well spent, if you ask me.
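The back-of-the-envelope math above is worth making explicit. Here is a minimal sketch, using the article’s illustrative figures (20,000 MIPS, 50 percent of capacity on DB2, 5 percent CPU reduction); the function name and numbers are mine, not from any vendor sizing tool:

```python
# Rough estimate of MIPS freed by a software upgrade that cuts the
# CPU use of one workload. All figures are illustrative.
def mips_saved(total_mips: float, workload_share: float, cpu_reduction: float) -> float:
    """MIPS freed when a workload using workload_share of total capacity
    sees its CPU consumption drop by cpu_reduction."""
    return total_mips * workload_share * cpu_reduction

# 20,000 MIPS shop, 50% of capacity running DB2, 5% out-of-the-box savings
savings = mips_saved(20_000, 0.50, 0.05)
print(f"{savings:.0f} MIPS saved")  # → 500 MIPS saved
```

At the 60 percent end of the range the same 5 percent reduction frees 600 MIPS, which is why even a conservative estimate adds up quickly at this scale.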

The world has changed, and some of the outages I mentioned have made people aware that something needs to happen. We must use this momentum to make our point: A mainframe needs TLC, and that means running—and using—the most recent software possible, and making sure it’s operated by the right people with the right tools to get the job done.