The next steps for the mainframe

We discover the challenges and benefits of incorporating the modern mainframe into today’s IT infrastructure

By Graham Jarvis

August 25, 2010

CIO UK


The demise of the mainframe has been predicted since around 1981, but IBM is hoping to put an end to this discussion once and for all with the launch of its zEnterprise mainframe server.

The company’s latest mainframe is described by Mark Anzani, IBM’s vice president and CTO of System z, as a “new workload optimised system.” Reflecting on the new offering, he says: “It is going to shift the conversation towards who can offer the best deep management of the infrastructure, and also towards the company that can drive the broadest economic benefit.”

So the mainframe is here to stay, and Anzani believes that it’s now more of a question of cross-platform workload economics, claiming that customers now have a greater choice about the platforms that they employ. That’s because zEnterprise allows mainframe customers to centralise the management of workloads that operate on different platforms and architectures, such as Java and Linux. Its introduction is therefore certainly welcomed by many industry experts. There is definitely a hope that the new system will deliver what Anzani describes as “integrated value”.

IBM is working hard to address a key customer concern with its new offering to ensure that the mainframe has a future: the perceived high cost of running, maintaining and operating mainframes.

“We have delivered technologies to make the platform more efficient, using a broad set of workloads,” explains Anzani, before adding that the conversation now needs to turn towards where a particular workload or set of workloads fits best. In the past, workloads have been shifted onto mainframes and then away from them onto servers; customers need the ability to move them around whenever necessary, perhaps by distributing them between the two systems.

zEnterprise deserves praise

“Based on what I was discussing with IBM on the launch day of zEnterprise, I would say that it deserves to have a massive impact,” argues Clive Longbottom, head of research at analyst firm Quocirca.

“When you bring the zBX box together (the BladeCenter enclosure), you have an emerging cloud environment, and the software that comes with it makes it self-learning.” In other words, automated workload management can ensure that the right workload runs on the right platform.

Nevertheless, and even though zEnterprise’s capabilities are seen as the next big step in the ongoing evolution of the mainframe, Longbottom says that customers were certainly not overwhelmed by it. Someone listening to IBM’s sales pitch commented: “OK, it’s the next mainframe”. This lacklustre response implies that there needs to be a more persuasive argument about why people should migrate to the mainframe, or employ it as part of a hybrid IT architecture.

Mainframe misconceptions

The trouble is that many people still think of the mainframe as a technological dinosaur, and Longbottom believes that IBM’s main challenge is to attract the customers who avoided the platform for this reason. Yet the view that the mainframe belongs to the 1970s is disproven, according to Mark Settle, CIO of BMC Software: analyst firm Aberdeen Group has found that 70 per cent of the world’s data is still processed on the mainframe.

“Mainframes are often hiding under the covers; they are behind the scenes doing a lot of the supply chain rebalancing, the settlement processing in the financial work and wherever you have massive data volumes,” he explains.

Yet there are still die-hards in both the distributed systems and mainframe camps. Either way, it’s important to remember that the mainframe is “a critical component of most large data centres,” he emphasises, before adding that “there are a whole bunch of distributed systems that surround legacy mainframe systems to support activities like e-commerce, customer support and supply chain operations.”

Settle adds that the distributed systems are simply collecting and pre-processing transactional data “that is ultimately fed back into legacy mainframes.” BMC’s financial sector customers have commented that they see them as the only platform capable of securely and reliably processing vast amounts of data, and he argues that they are unlikely to change this viewpoint.

“Mainframes have historically delivered higher levels of utilisation than distributed systems environments,” Settle adds. So while recent advances in server virtualisation have enabled Wintel platforms to reach utilisation levels of 70 per cent, mainframes have routinely achieved 90 per cent.

However, these days it’s not necessarily a question of choosing servers within a distributed environment over mainframe systems. “There is certainly a stronger business case for a hybrid environment now than ever before, and that’s because the modern business operation demands the scalability of distributed and highly virtualised datacentres,” he explains, before commenting that “for many large organisations, such as those in finance or commerce, the sheer reliable processing power of the mainframe is equally critical.”

Although Settle says that he doesn’t know of any investments in mainframe-based cloud computing, he believes that IBM sees zEnterprise’s potential for it, and so is building its service offerings around distributed systems.

Professor Bryan Foss, an independent board-level advisor, thinks this is good news because businesses are “increasingly looking towards new operational funding models, including SaaS and Cloud, that provide speed and flexibility around the mainframe”. He suggests that CIOs want alternatives to the traditional server-versus-mainframe arguments, and that the coming of age of the cloud will significantly change this debate.

Evaluating ROI and TCO

While zEnterprise will reinvigorate this discussion, doing more with less money and getting the most out of legacy systems is a topic that simply isn’t going to go away. The recession has increased the pressure on organisations to squeeze the most out of their existing systems, and we won’t know the true impact of IBM’s new mainframe until some time after it first ships in September 2010. Mainframe customers may simply want to defer their upgrades, but the additional capabilities and functionality of zEnterprise might persuade them otherwise. If so, now is the time to evaluate the return on investment (ROI) and total cost of ownership (TCO) of their existing mainframe and distributed systems in order to uncover the benefits of zEnterprise.

Whether an organisation implements a mainframe, server or hybrid systems environment, ROI and TCO are two of the measures that will frequently be used before making a buying decision. Server hardware vendors will often claim that they are cheaper. In some cases they are right, but not always. Rich Ptak, managing partner of analyst firm Ptak, Noel & Associates, believes that the calculation of these measures represents one of the most significant problems.

“There has been a failure to accurately identify and allocate the costs associated with the mainframe versus distributed systems,” he argues. Quite often the capital and operating expenses “should have been distributed across the systems,” he reveals. Sometimes the incremental management and maintenance costs of maintaining the distributed systems network are ignored too.

Misallocated costs

Marcel den Hartog, EMEA mainframe marketing director at CA Technologies, says that working out the ROI calculations for the mainframe is quite easy as “you only get very few bills”. However, he agrees that it becomes very difficult when distributed expenses are involved. “What we are finding is that a lot of the costs are not in the right cost centre, and quite often the choices that are made are not with the right business reasons in mind,” he explains, before stressing that “the whole promise that distributed systems are cheaper and more flexible is simply not true, and a lot of the distributed tools from CA have the mainframe knowledge with them”.

According to BMC Software’s Settle, the right calculation of ROI and TCO “is in the eye of the beholder,” although he agrees that “there is some subjectivity in the way that costs and benefits are defined, and the calculations can be misleading”.

Quocirca’s Longbottom shares this view. “We tend to keep away from TCO and ROI because if you give me a spreadsheet I can prove that it is the lowest or highest expense,” he says. In order to find the right calculation for these metrics, he cites a need to know the baseline for them. “What is it that gives the best value to the organisation rather than the lowest cost?” he asks.

“So if a scaled-out distributed environment costs £100,000 but only provides you with £90,000 worth of value, it should be compared with a mainframe that costs £500,000 but which delivers £750,000 of value to the organisation.”
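Longbottom’s point can be put into simple arithmetic, using the illustrative figures he quotes (the function names here are this article’s own, not any analyst’s model):

```python
# Compare platforms on the net value they deliver, not on raw cost,
# using Longbottom's illustrative figures.

def net_value(cost, value):
    """Value delivered to the organisation minus total cost."""
    return value - cost

def roi(cost, value):
    """Simple ROI: net value as a fraction of the cost."""
    return (value - cost) / cost

distributed = {"cost": 100_000, "value": 90_000}
mainframe = {"cost": 500_000, "value": 750_000}

for name, p in (("distributed", distributed), ("mainframe", mainframe)):
    print(f"{name}: net value £{net_value(p['cost'], p['value']):,}, "
          f"ROI {roi(p['cost'], p['value']):+.0%}")
```

On these numbers the cheaper distributed environment actually destroys £10,000 of value, while the mainframe returns 50 per cent on its cost — which is Longbottom’s argument for baselining on value rather than lowest cost.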

When determining ROI and TCO, Settle suggests focusing on the skills required to maintain and support the different systems, on their flexibility in moving workloads between platforms, and on the costs or savings attributable to labour and power. One CIO he used to work with joked that servers tend to multiply like rabbits, and Settle comments that the mainframe has to expose itself to these new and modern platforms to remain relevant.

Nathaniel Briggs, CEO of web presence experts Synthetic Magic, measures the cost-per-transaction of the deployed technology, the financial resources required to support customers, the end-user alignment time averages, the average value per transaction and the business performance per outage. “In simple terms it’s about focusing on business today and business tomorrow,” he says.

Mark Anzani discloses that IBM has its own methodology for helping customers “to calculate the costs of running an entire computing architecture – not just the mainframe, and in my experience TCO-based methods are the most complete as you have to consider many factors: the number of users, transaction volumes, the cost of managing individual servers, licensing costs, and so on.”
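The factors Anzani lists can be sketched as a simple TCO roll-up. This is a hypothetical illustration of the kind of calculation he describes, not IBM’s actual methodology; the figures and factor weights are invented for the example:

```python
# Hypothetical TCO-style calculation across a whole architecture:
# per-server management cost plus fixed annual costs, then divided
# down to a cost per transaction. All figures are illustrative.

def total_cost_of_ownership(servers, cost_per_server, licensing, labour, power):
    """Sum per-server management costs with fixed annual costs."""
    return servers * cost_per_server + licensing + labour + power

def cost_per_transaction(tco, transactions):
    """Normalise TCO by annual transaction volume."""
    return tco / transactions

tco = total_cost_of_ownership(servers=40, cost_per_server=5_000,
                              licensing=300_000, labour=450_000, power=80_000)
print(f"TCO £{tco:,}; £{cost_per_transaction(tco, 10_000_000):.4f} per transaction")
```

Normalising by transaction volume is also what makes Briggs’s cost-per-transaction metric (below) comparable across a mainframe and a scaled-out server estate of very different headline costs.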

One consideration in particular deserves attention: reducing IT infrastructure and management complexity. It’s an important issue for CIOs because complexity equals cost, so they are focused on making IT as simple as they can, while ensuring that it is the needs of the business, not the technology, that drives their decision-making. More attention will therefore fall on the management of their systems, with the aim of ensuring that they deliver the greatest ROI or, as Anzani phrases it, economic benefit.

Costs remain ‘prohibitive’

Mainframe pricing nevertheless remains a hot issue, and one that puts people off it as a viable platform. “We have our own mainframe and distributed servers,” says Lacy Edwards, CEO of mission-critical mainframe experts NEON Enterprise Software. He argues that it’s not just the cost of the mainframe hardware that remains prohibitive to many people, but the high expense that is attributed to software licensing. “If you look at it from the customers’ perspective, that’s why they are comparing the different systems, and software is the key reason why they say the cost of running a mainframe is too high.”

In NEON’s view the mainframe will only grow if these high costs are addressed. Edwards says that IBM created the specialty processors in order to discourage people from moving away from the mainframe. “However, many customers were disappointed in the yield versus their promise,” he explains. So NEON developed zPrime in response to customer demand for a better and cheaper way to exploit the zIIPs and zAAPs, the specialty processors.

Edwards claims that zPrime, which has been described by some commentators as an ‘exploitative technology’, allows customers to make better use of the specialty processors. “We are currently responsible for customers wanting to use more specialty processors, but unfortunately IBM is unwilling to sell them to our customers,” he alleges, suggesting that this action could itself force people to move off the mainframe. That’s because traditional workloads are very costly, and so NEON feels duty-bound to help customers reduce these overheads.

“I am not aware of anything else that is like what zPrime is doing, and I have not personally seen any impact at this point on the way customers are making their purchases,” comments Anzani. “The only product where there are objections regarding its installation comes back to the difference of opinion between NEON and IBM regarding zPrime.” Although IBM usually welcomes the exploitation of the specialty processors, the company views any installation of zPrime as being unauthorised. NEON has therefore raised the question about whether customers need the authorisation of IBM to install ‘exploitative technologies’ like this.

Yet IBM does offer a free and approved API to allow software vendors to program improvements, and it usually welcomes any solution – even if it comes from a competitor – that helps to improve the efficiency of the mainframe. The question regarding zPrime is about whether its installation breaches IBM’s licensing agreements. Many don’t think so, but NEON’s complaints are being examined by the Department of Justice in the US, and by the European Commission. It is felt by some that IBM is abusing its dominant position, but Anzani argues that there is plenty of competition within the mainframe market from a workload management perspective.

The dispute is still putting some customers off purchasing zPrime. Jeff Cattle, head of computer services at fashion retailer JD Williams, says that his company took a look at it, but he delivers a warning to the warring parties: “We won’t look at zPrime again until IBM and NEON resolve their differences, and there is a long way to run on that one.” In the meantime his firm is “maximising the use of the features that IBM has already presented – the opportunities such as sub-capacity pricing.” He adds that JD Williams would not consider any product that would put the organisation in a defensive position against IBM, but he would be willing to evaluate similar solutions that fall within IBM’s guidelines.

Securing the future

As a mainframe customer, he would like the next steps for the mainframe to include more business-relevant applications that can be supported on this kind of system. His firm has an eye on Linux, and so it has purchased the new z196 (zEnterprise), which will be delivered next month. His commitment to the platform is illustrated by the fact that his company runs two mainframes: one running traditional CICS, the other hosting 50 websites on WebSphere under z/OS. The latter accounts for 40 per cent of the firm’s order values, delivering “£280m to us per year in sales,” he reveals. His next steps revolve around increasing workload capacity, and that’s where zEnterprise comes in: he says it has significantly increased zIIP and zAAP capacity, delivering performance without increasing the software price tag.

By reducing the cost of the mainframe and making sure that it can interoperate with other platforms, IBM can keep the mainframe around for some time to come. However, another step is being taken to secure its future. Many mainframe specialists are old hands who have worked on the platform for decades, and most industry experts agree that the younger generation needs to acquire their skills and knowledge before it’s too late. This in itself will move the mainframe a step forward, but there is also an ongoing job to educate, persuade and inform decision-making executives about the mainframe facts. The mainframe lives on. It is still relevant, but its associated costs must continue to fall.