Is your organization ready for DevOps? It should be coming to System z data centers almost any day now, riding in on newly announced IBM cloud-based DevOps services, software, and infrastructure designed to help large organizations develop and deliver quality software faster.

Launch of the Bluemix Garage in London

DevOps streamlines enterprise workflow by compressing the development, testing, and deployment process. It entails collaborative communication around the end-to-end enterprise workflow and incorporates a continuous feedback loop to expedite the process. DevOps evolved out of the Agile methodologies that emerged more than a decade ago.

Agile was intended to streamline the traditional waterfall IT development process by putting developers, business unit people, and deployment folks together to build, test, and deploy new applications fast. Agile teams would deliver agreed-upon, tested functionality within a month. Each deliverable was short, addressing only a subset of the total functionality. Each was followed by the next containing yet more functionality. In the process, previously delivered functionality might be modified or replaced by a new deliverable.

IBM is streamlining the process further by tapping into the collaborative power of the company’s cloud portfolio and business transformation experience to speed the delivery of software that supports new models of engagement. To be clear, IBM definitely is not talking about using DevOps with the organization’s systems of record—the core transaction systems that are the hallmark of the z and the heartbeat of the enterprise. The most likely candidates will be systems of engagement, systems of innovation, and analytics systems. These are systems that need to be delivered fast and will change frequently.

According to IBM, software-driven innovation has emerged as a primary way businesses create and deliver new value to customers. A survey of 400 business and IT executives by the IBM Institute for Business Value showed that businesses that are more effective at software delivery are also more profitable than their peers nearly 70 percent of the time. DevOps provides a way for businesses to remain competitive, applying lean and agile principles to software development to speed the delivery of software that meets new market requirements.

Agile represented a radical departure from the waterfall process, which called for developers to take a full set of business requirements, disappear for two years, and return with a finished application that worked right. Except that it often took longer for the developers to return with the code, and the application didn’t work as promised. By then the application was well over budget and late. System z shops know this well.

DevOps today establishes a continuous, iterative process flow between the development team and the deployment group and incorporates many Agile concepts, including the active involvement of the business people, frequent testing, and quick release cycles. As the IBM survey noted, DevOps was spurred by the rise of smartphones and mobile computing. Mobile users demand working functionality fast and expect frequent updates. Two-year release cycles were unacceptable; competitors would be out with newer and better apps long before. Even six-month release cycles seem unresponsive. This is one of the realities DevOps addresses. Another reality is extreme scaling, something z data centers understand.

According to IBM, the company’s new DevOps Innovation Services help address the challenge of scaling development, enabling enterprises to shorten their software delivery lifecycle. The hybrid cloud services combine IBM’s industry expertise from hundreds of organizational change and application development projects with the industry’s leading application development portfolio, especially Bluemix, IBM’s open DIY cloud PaaS platform. They also apply the flexibility of IBM’s enterprise-grade, hybrid cloud portfolio, which was recently ranked by Synergy Research Group as the leading hybrid and private cloud for the enterprise. These services are based on SoftLayer, IBM’s cloud infrastructure platform.

In a second DevOps-related announcement last month, IBM described an initiative to bring a greater level of control, security, and flexibility to cloud-based application development and delivery with a single-tenant version of Bluemix. The new initiative enables developers to build applications around their most sensitive data and deploy them in a dedicated cloud environment, helping them capture the benefits of cloud while avoiding the compliance, regulatory, and performance issues presented by public clouds. System z shops can appreciate this.

Major enterprise system vendors like IBM, EMC, Cisco, and Oracle are making noises about DevOps, but as far as solid initiatives go, IBM appears far ahead, especially with the two November announcements.

DancingDinosaur is Alan Radding, an independent IT analyst and writer. Follow DancingDinosaur on Twitter, @mainframeblog. Find more of his IT writing at Technologywriter.com and here.

Cloud computing, especially hybrid cloud computing, is going mainstream. The same is happening with the Internet of Things (IoT). For mainframe shops unsure of how to get there, IBM promises to speed the journey with two recent initiatives.

Let’s start with hybrid clouds and the z. As IBM describes it, enterprises will continue to derive value from their existing investments in IT infrastructure while looking to the cloud to bolster business agility. The upshot: organizations increasingly are turning to hybrid clouds to obtain the best of both worlds by linking on-premises IT infrastructure to the public cloud.

To that end, IBM has designed and tested various use cases around enterprise hybrid architecture involving System z and SoftLayer. These use cases focus on the relevant issues of security, application performance, and potential business cost.

One scenario introduces the cloud as an opportunity to enrich enterprise business services running on the z with external functionality delivered from the cloud.

Here a retail payment system is enriched with global functionality from a loyalty program that allows the consumer to accumulate points. It involves the z and its payment system, a cloud-based loyalty program, and the consumer using a mobile phone.
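
To make the division of labor concrete, here is a minimal sketch of the z-side hook, assuming a hypothetical loyalty REST endpoint and JSON contract; the endpoint, field names, point rule, and error handling are placeholders for illustration, not IBM’s actual interface. The key design point: the payment commits on the z, while loyalty enrichment is just a call out to the cloud.

```java
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

/** Sketch: after a payment clears on the z, notify a cloud-hosted
 *  loyalty service so the consumer accumulates points. */
public class LoyaltyEnrichment {

    // Hypothetical endpoint; a real deployment would use the loyalty
    // provider's actual URL plus proper authentication.
    private static final String LOYALTY_URL =
            "https://loyalty.example.com/api/points";

    public static void postPoints(String customerId, long amountCents) throws Exception {
        // One point per dollar: an invented rule for illustration
        String json = String.format("{\"customer\":\"%s\",\"points\":%d}",
                customerId, amountCents / 100);

        HttpURLConnection conn = (HttpURLConnection) new URL(LOYALTY_URL).openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Content-Type", "application/json");
        conn.setDoOutput(true);
        try (OutputStream out = conn.getOutputStream()) {
            out.write(json.getBytes(StandardCharsets.UTF_8));
        }

        // The payment already committed on the z; a loyalty failure should
        // be queued for retry, never roll back the transaction.
        if (conn.getResponseCode() != 200) {
            System.err.println("Loyalty update deferred for " + customerId);
        }
    }

    public static void main(String[] args) throws Exception {
        postPoints("C1024", 2599); // $25.99 purchase earns 25 points
    }
}
```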

The hybrid cloud allows the z data center to maintain control of key applications and data in order to meet critical business service level agreements and compliance requirements while tapping the public cloud for new capabilities, business agility, or rapid innovation and shifting expenditure from CAPEX to OPEX.

Since the z serves as the data backbone for many critical applications, it makes sense to connect on-premises System z infrastructure with an off-premises cloud environment. In its paper, IBM suggests the hybrid architecture should be designed in a way that gives businesses the flexibility to put their workloads and data where they make the most sense, mixing the right blend of public and private cloud services. And, of course, it also must ensure data security and performance. That’s why you want the z there.

To get started, check out the use cases IBM provides, like the one above. Already a number of organizations are trying the IBM hybrid cloud: Macy’s, Whirlpool, Daimler, and Sicoss Group. Overall, nearly half of IBM’s top 100 strategic outsourcing clients already are implementing cloud solutions with IBM as they transition to a hybrid cloud model.

And if hybrid cloud isn’t enough to keep you busy, it also is time to start thinking about the IoT. To make it easier, last month the company announced the IBM Internet of Things Foundation, an extension of Bluemix. Like Bluemix, this is a cloud service that, as IBM describes it, makes it possible for a developer to quickly extend an Internet-connected device such as a sensor or controller into the cloud, build an application alongside the device to collect the data, and send real-time insights back to the developer’s business. That data can be analyzed on the z too, using Hadoop on zLinux, which you read about here a few weeks ago.
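
For a feel of the device side of that pipeline, here is a rough sketch of a registered sensor publishing a reading over MQTT with the Eclipse Paho Java client. The organization, device identifiers, and token are placeholders you would get by registering the device with the IoT Foundation; the broker address, topic format, and token-auth username follow the conventions IBM has documented, but verify them against the current documentation.

```java
import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttConnectOptions;
import org.eclipse.paho.client.mqttv3.MqttMessage;

/** Sketch: a registered device publishing a reading to the IoT Foundation. */
public class SensorPublisher {
    public static void main(String[] args) throws Exception {
        // Placeholders; real values come from registering the device
        String org = "myorg", type = "thermostat", device = "t-001";

        MqttClient client = new MqttClient(
                "ssl://" + org + ".messaging.internetofthings.ibmcloud.com:8883",
                "d:" + org + ":" + type + ":" + device);

        MqttConnectOptions opts = new MqttConnectOptions();
        opts.setUserName("use-token-auth");                  // token-based auth convention
        opts.setPassword("device-auth-token".toCharArray()); // placeholder token
        client.connect(opts);

        // Device events are published on iot-2/evt/<event>/fmt/<format>
        String payload = "{\"d\":{\"tempF\":68.5}}";
        client.publish("iot-2/evt/status/fmt/json",
                new MqttMessage(payload.getBytes("UTF-8")));
        client.disconnect();
    }
}
```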

IoT should be nothing new to System z shops. DancingDinosaur discussed it this past summer here. Basically it’s the POS or ATM network on steroids, with orders of magnitude more complexity. IDC estimates that by 2020 there will be as many as 28 billion autonomous IoT devices installed, up from an estimated nine billion today.

Between the cloud, hybrid clouds, and IoT, z data centers will have a lot to keep them busy. But with IBM’s new initiatives in both areas you can get simple, highly secure and powerful application access to the cloud, IoT devices, and data. With the IoT Foundation you can rapidly compose applications, visualization dashboards and mobile apps that can generate valuable insights when linked with back office enterprise applications like those on the z.

DancingDinosaur is Alan Radding, a veteran IT writer/analyst. You can follow DancingDinosaur on Twitter, @mainframeblog. Also check out my other IT writing at Technologywriter.com and here.

Many IT professionals, especially younger ones, are clueless about the mainframe. Chris O’Malley, president of the mainframe business at Compuware, has met CIOs who are versed in everything about IT and have seemingly done everything there is to do with computers, but “they are not literate about the mainframe.” That means the mainframe never comes to mind. IBM could give away a zEnterprise for free, which it comes close to doing today through the System z Solution Edition program, and these CIOs would still ignore it. O’Malley wants to address that.

In response, Compuware has launched Mainframe Excellence 2025, a program that follows the path of the IBM System z Academic Initiative, though without that initiative’s extensive global involvement of colleges and universities. Compuware describes it as a generational call for strategic platform stewardship. “We’re also trying to debunk a lot of issues around the mainframe,” O’Malley continues.

Chris O’Malley, Pres. Mainframe, Compuware

Compuware refers to Mainframe Excellence 2025 as a manifesto, something of a call to arms for millennials to storm the IT gates and liberate IT management from enslavement to x86 computing. Somehow DancingDinosaur doesn’t see it happening exactly that way; it envisions coexistence and synergy.

Most of the Mainframe Excellence document goes over ground DancingDinosaur and many others have covered before. It is delightful, however, to see others refreshing the arguments. And, the document adds some interesting data. For instance, over 1.15 million CICS transactions are executed on System z every second of every day! That’s more than all Google searches, YouTube views, Facebook likes, and Twitter tweets combined.

It also pays homage to what it refers to as the mainframe’s culture of excellence. It characterizes this culture by rigorous adherence to a standard of excellence demonstrably higher than that associated with other platforms, notably x86. IT organizations actually expect, accept, and plan for problems and patches in other platforms (think Microsoft Patch Tuesday). Mainframe professionals, on the other hand, have zero-tolerance for downtime and system failures and the mainframe generally lives up to those high expectations.

Ironically, the document points out that the culture of excellence has created a certain chasm between mainframe professionals and the rest of IT. In fact, this ingrained zero-failure culture of the mainframe community—including both vendors and enterprise IT staffs—can sometimes put it at odds with the very spirit of innovation that allows the mainframe to deliver the repeated advances in price/performance and new capabilities that consistently produce tremendous value.

Among the manifesto’s recommendations: combat denial and hype in regard to non-mainframe platform capabilities, costs, and risks.

And Compuware’s final thought should give encouragement to all those who must respond to the mainframe-costs-too-much complaint: IT has a long history of underestimating real TCO and marginal costs for new platforms while overestimating their benefits. A more sober assessment of these platforms makes the strategic value and economic advantages of the mainframe much more evident by comparison.

Compuware certainly is on the right track with Mainframe Excellence 2025. DancingDinosaur would like, however, to see the company coordinate its efforts with the System z Academic Initiative, the Master the Mainframe effort, and the like.

DancingDinosaur is Alan Radding, a veteran IT writer/analyst. You can follow DancingDinosaur on Twitter, @mainframeblog. Also check out my other IT writing at Technologywriter.com and here.

On Wednesday IBM introduced what it describes as the industry’s first intelligent security portfolio for protecting people, data, and applications in the cloud. Not a single product but a set of products that taps a wide range of IBM’s cloud security, analytics, and services offerings. The portfolio dovetails with IBM’s end-to-end mainframe security solution as described at Enterprise2014 last month.

Cloud security certainly is needed. In a recent IBM CISO survey, 44% of security leaders said they expect a major cloud provider to suffer a significant security breach in the future, one that will drive a high percentage of customers to switch providers, not to mention the risks to their data and applications. Cloud security fears have long been one of the biggest impediments to organizations moving more data, applications, and processes to the cloud. These fears are further complicated by the fact that IT managers feel much of what their cloud providers do is beyond their control. An SLA only gets you so far.

The same survey found 86% of the leaders surveyed say their organizations are now moving to the cloud; of those, three-fourths see their cloud security budget increasing over the next 3-5 years.

As is typical of IBM when it identifies an issue and feels it has an edge, the company assembles a structured portfolio of tools, a handful of which were offered Wednesday. The portfolio includes versions of IBM’s own tools optimized for the cloud and tools and technologies IBM has acquired. Expect more cloud security tools to follow. Together the tools aim to manage access, protect data and applications, and enable visibility in the cloud.

For example, for access management IBM is bringing out Cloud Identity Services which onboards and handles users through IBM-hosted infrastructure. To safeguard access to cloud-deployed apps it is bringing a Cloud Sign-On service used with Bluemix. Through Cloud Sign-On developers can quickly add single-sign on to web and mobile apps via APIs. Another product, Cloud Access Manager, works with SoftLayer to protect cloud applications with pattern-based security, multi-factor authentication, and context-based access control. IBM even has a tool to handle privileged users like DBAs and cloud admins, the Cloud Privilege Identity Manager.

That’s a run-down of what was announced Wednesday. Expect the portfolio to grow.

Now let’s see how these map to what the z data center already can get with IBM’s End-to-End Security Solution for the Mainframe. For starters, security is built into every level of the System z structure: processor, hypervisor, operating system, communications, and storage.

In terms of security analytics, zSecure, Guardium, AppScan, and QRadar improve your security intelligence. Some of these tools are included in the new cloud security portfolio. Intelligence is collected from z/OS, RACF, CA ACF2, CA Top Secret, CICS, and DB2. The zSecure suite also helps address compliance challenges. In addition, InfoSphere Guardium Real-time Activity Monitoring handles activity monitoring, blocking and masking, and vulnerability assessment.

Of course the z brings its crypto coprocessor, Crypto Express4S, which complements the cryptographic capabilities of CPACF. There also is a new zEC12 coprocessor, the EP11 processor, amounting to a Crypto Express adapter configured with the Enterprise PKCS #11 (EP11) firmware, also called the CEX4P adapter. It provides hardware-accelerated support for crypto operations that are based on RSA’s PKCS #11 Cryptographic Token Interface Standard. Finally, the z supports the necessary industry standards, like FIPS 140-2 Level 4, to ensure multi-tenanted public and private cloud workloads remain securely isolated. So the cloud, at least, is handled to some extent.

The mainframe has long been considered the gold standard for systems security. Now it is being asked to take on cloud-oriented and cloud-based workloads while delivering the same level of unassailable security. Between IBM’s end-to-end mainframe security solution and the new intelligent (analytics-driven) security portfolio for the cloud enterprise shops now have the tools to do the job right.

And you will want all those tools because security presents a complex, multi-dimensional puzzle requiring different layers of integrated defense. It involves not only people, data, applications, and infrastructure but also mobility, on premise and off premise, structured, unstructured, and big data. This used to be called defense in depth, but with the cloud and mobility the industry is moving far beyond that.

DancingDinosaur is Alan Radding, a veteran IT analyst with well over 20 years covering IT and the System z. You can find more of my writing at Technologywriter.com and here. Also follow DancingDinosaur on Twitter, @mainframeblog.

With most of the over 1,100 respondents (91%) to BMC’s annual mainframe survey reporting that the mainframe remains a viable long-term platform for them, and a clear majority (60%) expecting to increase MIPS due to the normal growth of legacy applications and new application workloads, the z continues to remain well entrenched. Check out the results for yourself here.

Maybe even more reassurance comes from the almost half of respondents who reported that they expect the mainframe to attract and grow new workloads. Most likely these will be Java and Linux workloads, but one-third of the respondents listed cloud as a priority, jumping it up to sixth on the list of mainframe priorities. Mobile was cited as a priority by 27% of the respondents, followed by big data at 26%.

Apparently IBM’s steady promotion of cloud, mobile, and big data for the z over the past year is working. At Enterprise2014 IBM even made big news with real-time analytics and Hadoop on the z, along with a slew of related announcements.

That new workloads like cloud, mobile, and big data made it into the respondents’ top 10 IT priorities for the year didn’t surprise Jonathan Adams, BMC vice president/general manager for z solutions. The ease of developing in Java and its portability make it a natural for new workloads today, he noted.

In the survey IT cost reduction/optimization tops the list of IT priorities for 2014 by a large margin, 70% of respondents, followed by application availability, 52%. Rounding out the top five are application modernization with 48%, data privacy, 47%, and business/IT alignment, 44%. Outsourcing finished out the top 10 priorities with 16%.

When asked to look ahead in terms of MIPS growth, the large majority of respondents expected growth to continue or at least remain steady. Only 9% expected MIPS to decline and 6% expected to eliminate the mainframe. This number has remained consistent for years, noted Adams. DancingDinosaur periodically checks in with shops that announce plans to eliminate their mainframe and finds that a year later many have barely made any progress.

The top mainframe advantages shouldn’t surprise you: availability (53%); security (51%); centralized data serving (47%) and transaction throughput (42%). More interesting results emerged when the respondents addressed new workloads. The mainframe’s cloud role includes data access (33%), cloud management from Linux on z (22%) and dynamic test environments via self-service (15%). Surprisingly, when it comes to big data analytics, 34% report that the mainframe acts as their analytics engine. This wasn’t supposed to be the case, at least not until BigInsights and Hadoop on z gained more traction.

Meanwhile, 28% say they move data off platform for analytics, and 14% report they federate mainframe data to an off-platform analytics engine. Yet, more than 81% now incorporate the mainframe into their Big Data strategy, up from 70% previously. The non-finance industries are somewhat more likely to use the mainframe as the big data engine, BMC noted. Those concerned with cost should seriously consider doing their analytics on the z, where the data is. It is costly to keep moving data around.

In terms of mobility, making existing applications accessible for mobile ranked as the top issue, followed by developing new mobile applications and securing corporate data on mobile devices. Mobile processing increases for transaction volume came in at the bottom of mobility issues, but that will likely change when mobile transactions start impacting peak workload volumes and trigger increased costs. Again, those concerned about costs should consider IBM’s mobile transaction discount, which was covered by DancingDinosaur here in the spring.

Since cost reduction is such a big topic again, the survey respondents offered their cost reduction priorities. Reducing resource usage during peak led the list. Other cost reduction priorities included consolidating mainframe software vendors, exploiting zIIP and specialty engines (which have distinctly lower cost/MIPS), and moving workloads to Linux on z.

So, judging from the latest BMC survey, the mainframe is far from dead, though at least one IT consultant and commentator, John Appleby, recently suggested otherwise. That prediction has proven wrong so often that DancingDinosaur has stopped bothering to refute it.

BTW, change came to BMC last year in the form of an acquisition by a venture capital group. Adams reports that the new owners have already demonstrated a commitment to continued investment in mainframe technology products, and plans already are underway for next year’s survey.

DancingDinosaur is Alan Radding. You can follow him on Twitter, @mainframeblog. Or see more of his writing at Technologywriter.com or in wide-ranging blogs here.

Despite dumping its money-losing semiconductor business on GLOBALFOUNDRIES this week and a discouraging 3Q14 financial report, IBM appears determined to drive Moore’s Law. The law, which produced decades of price/performance gains for IT, will continue to deliver gains, but they won’t be solely silicon-based.

How badly do we need Moore’s Law to continue? Dr. Bernard Meyerson, IBM Fellow and VP of Innovation, put it this way in a keynote presentation at Enterprise2014: when it comes to data, there are 6-9 orders of magnitude of growth on the horizon. Today’s zEC12 processor, the fastest commercial processor out there, hasn’t a chance of keeping up for long.

Still, Meyerson isn’t writing off silicon: “Silicon transistors will dominate Information Technology for decades to come but contribute little to its progress,” he declared. To address the shortcomings of silicon, we will have to look to innovation and the resulting integrated solutions made up of specialized hardware, software, systems, architectures, and network functionality to compensate for lost technology benefits.

Ironically, data center managers actually may find themselves worrying about the slowness of the speed of light and the longer data paths that may result. His question to data center managers: Even at 300,000,000 meters/sec, is light fast enough to keep pace with technology? His answer: Not even close.

The answer lies in new innovative integrated solutions that include 3D integration, synaptic architectures, agile computing (autonomic acceleration), cognitive computing, neuromorphic systems, and more. In the near term he suggests data centers look into advances in FPGA (Field Programmable Gate Array) and GPU acceleration. In the new POWER8 systems, FPGAs leverage CAPI to avoid a lot of overhead and delay.

IBM researchers explore new semiconductor materials

The offloading of IBM’s global commercial semiconductor technology business to GLOBALFOUNDRIES, at a cost of $1.5 billion over three years, doesn’t signal an IBM retreat from the semiconductor business. The deal commits GLOBALFOUNDRIES as IBM’s exclusive server processor semiconductor technology provider for 22 nanometer (nm), 14nm, and 10nm semiconductors for the next 10 years. You can bet the upcoming generation of System z will use 22nm chips and 14nm and 10nm chips for subsequent revs of the z, or whatever they are calling it by then.

This is combined with IBM’s previously announced $3 billion investment over five years in semiconductor technology research to lead in the next generation of computing. Between that investment, the offloading of the semiconductor fabrication business this week, and the research described at Enterprise2014, you can be sure that IBM will stay involved in the CPU business. But one thing you should realize now: just throwing more silicon CPUs at the performance challenge, as IT has done for decades, will no longer work. Adds Meyerson: Brute force (more of the same) has run its course.

Full disclosure: DancingDinosaur is NOT a financial analyst. Still, the IBM 3Q14 financials released on Monday aren’t going to thrill IBM investors. Revenues were down across the board. Of most interest to DancingDinosaur were the results of IBM’s hardware group, STG, where revenues dropped. Specifically, revenues from Power Systems were down 12% compared with the 2013 period. Revenues from System x were down 10%, but that’s now Lenovo’s worry. Revenues from System z fell 35% compared with a year ago, while revenues from System Storage decreased 6%. The System z is due for a refresh in 2015, which will undoubtedly entail a significant price/performance gain plus whatever other goodies IBM loads on. This usually gives the z a revenue boost. Power just introduced some new POWER8 machines at Enterprise2014, which should result in revenue increases in upcoming quarters.

Maybe not so coincidentally, a piece on the blog IT Jungle titled 2020 Processor Technology Could Unite Power And Mainframe Chips, by Dan Burger, throws a few more possibilities at the processor question, especially as it pertains to IBM’s enterprise server platforms, Power and System z. Burger apparently was attending the same session as DancingDinosaur when Bernie Meyerson and others brought up the silicon question. He caught up with Ross Mauri, the general manager of System z, grabbing this interesting quote on the subject: “Now there will have to be investment in the next big thing and so it’s interesting to consider whether the Power chip and the mainframe CMOS chip will merge into one chip for both platforms,” adding “I don’t know what is next in that 2020 time frame based on the technologies we are looking at today—anything is possible.” BTW, 2020 is just a bit more than five years away; DancingDinosaur, who has been covering business and technology since 1975, may not even be retired by then.

DancingDinosaur is Alan Radding. You can follow him on Twitter, @mainframeblog. Or check out more of his technology writing at Technologywriter.com or here.

Subsequent sessions at IBM Enterprise2014 delved more deeply into big data, analytics, and real-time analytics. A particularly good series of sessions was offered by Karen Durward, an IBM InfoSphere software product manager specializing in System z data integration. As Durward noted, BigInsights is Apache Hadoop wrapped up to make it easier to use for general IT and business managers.

Specifically, the real-time analytics package for z includes IBM InfoSphere BigInsights for Linux on System z, which combines open-source Apache Hadoop with enhancements to make Hadoop System z enterprise-ready. The solution also includes IBM DB2 Analytics Accelerator (IDAA), which improves data security while delivering a 2000x faster response time for complex data queries.

In her Hadoop on z session, Durward started with the Hadoop framework, which consists of four components:

Common Core—the basic modules (libraries and utilities) on which all components are built

Hadoop Distributed File System (HDFS)—stores data on multiple machines to provide very high aggregate bandwidth across a cluster of machines

MapReduce—the programming model that supports high-volume data processing across the cluster

YARN—the resource management layer that schedules jobs and allocates cluster resources (the fourth module of the framework)

The typical Hadoop process sounds deceptively straightforward. Simply load data into an HDFS cluster, analyze the data in the cluster using MapReduce, write the resulting analysis back into the HDFS cluster. Then just read it.
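
To see what that cycle looks like in practice, here is the canonical MapReduce example, word count, sketched against the standard Apache Hadoop Java API. The mapper emits a (word, 1) pair per token and the reducer sums them; note how much scaffolding even this trivial analysis requires.

```java
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

/** The canonical word count: map emits (word, 1), reduce sums the counts. */
public class WordCount {

    public static class TokenMapper
            extends Mapper<Object, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        protected void map(Object key, Text value, Context ctx)
                throws IOException, InterruptedException {
            for (String token : value.toString().split("\\s+")) {
                word.set(token);
                ctx.write(word, ONE);   // one (word, 1) pair per token
            }
        }
    }

    public static class SumReducer
            extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context ctx)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable v : values) sum += v.get();
            ctx.write(key, new IntWritable(sum));  // totals written back to HDFS
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenMapper.class);
        job.setCombinerClass(SumReducer.class);
        job.setReducerClass(SumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));   // input in HDFS
        FileOutputFormat.setOutputPath(job, new Path(args[1])); // results to HDFS
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```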

Sounds easy enough until you try it. Then you need to deal with client nodes and name nodes, exchange metadata, and more. In addition, Hadoop is an evolving technology. Apache continues to add pieces to the environment in an effort to simplify it. For instance, Hive provides the Apache data warehouse framework, accessible using HiveQL, and HBase brings Apache’s Hadoop database. Writing MapReduce code is a challenge, so there is Pig, Apache’s high-level platform for generating Hadoop MapReduce programs, and the list goes on. In short, Hadoop is not easy, especially for IT groups accustomed to relational databases and SQL. That’s why you need tools like BigInsights. The table below is how Durward sees the Hadoop tool landscape.

| Software Needs | Other Hadoop Products | BigInsights |
|---|---|---|
| Open Source Apache Hadoop | Y | Y |
| Rich SQL on Hadoop (Big SQL) | some | Y |
| Tools for Business Users (BigSheets) | NA | Y |
| Advanced text analytics | NA | Y |
| In-Hadoop analytics | NA | Y |
| Rich developer tools | NA | Y |
| Enterprise workload & storage mgt. | NA | Y |
| Comprehensive suite | NA | Y |

In fact, you need more than BigInsights. “We don’t know how to look at unstructured data,” said Durward. That’s why IBM layers on tools like Big SQL, which helps you query Hadoop’s HBase using industry-standard SQL. You can migrate a relational table to HBase using Big SQL or connect Big SQL via JDBC to run business intelligence and reporting tools, such as Cognos, which also runs on Linux on z. Similarly, IBM offers BigSheets, a cloud application that performs ad hoc analytics at web scale on unstructured and structured content using the familiar spreadsheet format.
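
The practical payoff of Big SQL is that anything that speaks JDBC, Cognos included, can treat Hadoop like one more database. Here is a minimal sketch; the host, port, credentials, and sales table are invented for illustration, and the exact JDBC driver jar and URL format depend on your BigInsights release, so check its documentation.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

/** Sketch: querying Hadoop data through Big SQL over plain JDBC. */
public class BigSqlQuery {
    public static void main(String[] args) throws Exception {
        // Placeholder URL and credentials; assumes the Big SQL JDBC
        // driver jar for your BigInsights release is on the classpath.
        String url = "jdbc:bigsql://bihost.example.com:7052/default";

        try (Connection con = DriverManager.getConnection(url, "biadmin", "secret");
             Statement stmt = con.createStatement();
             // Ordinary SQL, even though the table lives in HBase/HDFS
             ResultSet rs = stmt.executeQuery(
                 "SELECT product_id, SUM(qty) AS total " +
                 "FROM sales GROUP BY product_id ORDER BY total DESC")) {
            while (rs.next()) {
                System.out.println(rs.getString("product_id")
                        + " -> " + rs.getLong("total"));
            }
        }
    }
}
```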

Lastly, Hadoop queries often produce free-form text, which requires text analytics to make sense of the results. Not surprisingly, IBM offers BigInsights Text Analytics, a fast, declarative rule-based information extraction (IE) system that extracts insights from unstructured content. This system consists of a fast, efficient runtime that exploits numerous optimization techniques across extraction programs written in Annotation Query Language (AQL), an English-like declarative language for rule-based information extraction.

Hadoop for the z is more flexible than z data center managers may think. You can merge Hadoop data with z transactional data sources and analyze it all together through BigInsights.

So how big will big data be on the z? DancingDinosaur thought it could scale to hundreds of terabytes, even petabytes. Not so. You should limit Hadoop on the z to moderate volumes—from hundreds of gigabytes to tens of terabytes, Durward advises, adding “after that it gets expensive.”

Still, there are many advantages to running Hadoop on the z. To begin, the z brings rock solid security, is fast to deploy, and, through BigInsights, brings an easy-to-use data ingestion process. It also has proven easy to set up and run, taking just a few hours, with conversions handled automatically. Lastly, the data never leaves the platform, which avoids the expense and delay of moving data between platforms. But maybe most importantly, by wrapping Hadoop in a set of familiar, comfortable tools and burying its awkwardness out of sight, BigInsights makes Hadoop something every z shop can leverage.

DancingDinosaur is Alan Radding. Follow this blog on Twitter, @mainframeblog. Check out my work at Technologywriter.com

DancingDinosaur can’t attend a mainframe conference without checking out at least one session on mainframe software pricing by David Chase, IBM’s mainframe pricing guru. At IBM Enterprise2014, which wraps up today, the topic of choice was software licensing for Linux middleware. It’s sufficiently complicated to merit an entire session.

In case you think Linux on z is not in your future, maybe you should think again. Linux is gaining momentum in even the largest z data centers. Start with IBM bringing new apps like InfoSphere, BigInsights (Hadoop), and OpenStack to z. Then there are apps from ISVs that just weren’t going to bring their offerings to z/OS. Together these are a telltale sign something is happening with Linux on z. And, the queasiness managers used to have about the open source nature of Linux has long been put to rest.

At some point, you will need to think about IBM’s software pricing for Linux middleware. Should you find yourself getting too lost in the topic, check out the links recommended by Chase.

To begin, software for Linux on z is priced differently than traditional mainframe software. With Linux on z you think in terms of IFLs. The quantity of IFLs represents the number of Linux engines subject to IBM’s IPLA-based pricing.

Also think in terms of Processor Value Units (PVUs) rather than MSUs. For pricing purposes, PVUs are analogous to MSUs although the values are different. A key point to keep in mind: distributed PVUs for Linux are not related to the System z IPLA value units used for z/VM products. As is typical of IBM, the two kinds of value units are NOT interchangeable.

Chase, however, provides a few ground rules:

Dedicated partition:

Processors are always allocated in whole increments

Resources are only moved between partitions “explicitly” (e.g. by an operator or a scheduled job)

Shared pool:

Pool of processors shared by partitions (including virtual machines)

System automatically dispatches processor resources between partitions as needed

Maximum license requirements:

Customer does not have to purchase more licenses for a product than the number of processors on the machine (e.g. maximum DB2 UDB licenses on a 12-way machine is 12)

Customer does not have to purchase more “shared pool” licenses for a product than the number of processors assigned to the shared pool (e.g. maximum of 7 MQSeries licenses for a shared pool with 7 processors). Note: This limit does not affect the additional licenses that might be required for dedicated partitions.

PVU ratings vary by processor technology (p, i, x, z, Sun, HP, AMD, etc.)—notice that the z is just one of many choices, not handled differently from the others

Number of processor cores which must be licensed (z calls them IFLs)

Price per PVU (constant per product, not different based upon technology)

Then it becomes a case of doing basic arithmetic. The formula: PVU rating per core x # of cores required x price per PVU = your total cost. Given this formula it is to your advantage to plan your Linux use to minimize IFLs and cores. You can’t do anything about the price per PVU.

Distributed PVUs are the basis for licensing middleware on IFLs and are determined by the type of machine processor. The zEC12, z196, and z10 are rated at 120 PVUs per core. All others are rated at 100 PVUs per core. For example, for distributed middleware running on Linux on z this works out to:

z114—1 IFL, 100 PVUs

z196—4 IFLs, 480 PVUs

zEC12—8 IFLs, 960 PVUs
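
Worked through in code, Chase’s arithmetic looks like the minimal sketch below. The per-core PVU ratings come straight from the figures above, while the $50-per-PVU price is an invented illustration, since actual prices vary by product.

```java
/** Sketch of the licensing arithmetic: PVU rating per core x IFLs x price per PVU. */
public class PvuCost {

    // Per-core PVU ratings from IBM's machine table
    static int pvuRatingPerCore(String machine) {
        switch (machine) {
            case "zEC12": case "z196": case "z10": return 120;
            default: return 100; // all other System z machines
        }
    }

    static double licenseCost(String machine, int ifls, double pricePerPvu) {
        int totalPvus = pvuRatingPerCore(machine) * ifls; // e.g., z196 x 4 IFLs = 480
        return totalPvus * pricePerPvu;
    }

    public static void main(String[] args) {
        double price = 50.0; // hypothetical $/PVU; actual prices vary by product
        System.out.println(licenseCost("z114", 1, price));  // 100 PVUs -> 5000.0
        System.out.println(licenseCost("z196", 4, price));  // 480 PVUs -> 24000.0
        System.out.println(licenseCost("zEC12", 8, price)); // 960 PVUs -> 48000.0
    }
}
```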

Also, distributed systems Linux middleware offerings are eligible for sub-capacity licensing. Specifically, sub-capacity licensing is available for all PVU-priced software offerings that run on:

UNIX (AIX, HP-UX, and Sun Solaris)

i5/OS, OS/400

Linux (System i, System p, System z)

x86 (VMware ESX Server, VMware GSX Server, Microsoft Virtual Server)

IBM’s virtualization technologies also are included in the Passport Advantage sub-capacity licensing offering, including LPAR, z/VM virtual machines in an LPAR, CPU Pooling support introduced in z/VM 6.3 APAR VM65418, and native z/VM (on machines which still support basic mode).

And in true z style, since this can seem more complicated than it should be, there are tools available to do the job. In fact, Chase doesn’t advise doing this without a tool. The current tool is the IBM License Metric Tool V9.0.1. You can find more details on it here.

If you are considering distributed Linux middleware software or are already wrestling with the pricing process, DancingDinosaur recommends you check out Chase’s links at the top of this piece. Good luck.

DancingDinosaur is Alan Radding. Follow DancingDinosaur on Twitter, @mainframeblog. You can check out more of my work at Technologywriter.com

Users have always been demanding about performance. But does the 5-minute rule noted by Tom Rosamilia in the opening keynote at IBM Enterprise2014 go too far? It now seems users expect companies to respond to, or at least acknowledge, their comments, questions, or problems within five minutes. That means companies need to monitor and analyze social media in real time and respond appropriately.

Building on client demand to integrate real-time analytics with consumer transactions, IBM yesterday announced new capabilities for its System z. Specifically, IBM is combining the transactional virtues of the z with big data analytic capabilities into a single, streamlined, end-to-end data system. This real-time integration of analytics and transaction processing can allow businesses to increase the value of a customer information profile with every interaction the customer makes. It also promises one way to meet the 5-minute rule, especially when a customer posts a negative comment on social media.

With the new integrated capability you can apply analytics to social sentiment and customer engagement data almost as the transactions are occurring. The goal is to gain real-time insights, which you can do on the mainframe because the data already is there and now the real-time analytics will be there too. There is no moving of data or logic. The mainframe already does this when used for fraud prevention. This becomes another case where the mainframe can enable organizations to achieve real-time insights and respond within five minutes. Compared to fraud analysis, the 5-minute expectation seems a luxury.

By incorporating social media into the real time analytic analysis on the mainframe you can gain an indication of how the business is performing in the moment, how you stack up to your competitors, and most importantly, meet the 5-minute response expectation. Since we’re talking about pretty public social sentiment data, you also could monitor your competitors’ social sentiment and analyze that to see how well they are responding.

And then there are the more traditional things you can do with the integration of analytics with transactional data to provide real-time, actionable insights on commercial transactions as they occur. For example you could take advantage of new opportunities to increase sales or prevent customer churn.

According to IBM, this is being driven by the rise of mobile and smartphones, numbering in the billions in a few years. The combination of massive amounts of data and consumers who are empowered with mobile access is creating a difficult challenge for businesses, IBM noted in the announcement. Consumers now expect an immediate response—the 5-minute rule—to any interaction, at any time, and through their own preferred channel of communication. Unfortunately, many businesses are trying to meet this challenge and deliver instantaneous, on-demand customer service with outdated IT systems that can only provide after-the-fact intelligence.

Said Ross Mauri, General Manager, System z, IBM Systems & Technology Group: “Off-loading operational data in order to perform analytics increases cost and complexity while limiting the ability of businesses to use the insights in a timely manner.” The better approach, he continued, is to turn to an end-to-end solution that makes analytics a part of the flow of transactions and allows companies to gain real time insights while improving their business performance with every transaction.

Of course, Mauri was referring specifically to the System z. However, Power Systems and especially the new POWER8 machines, which have a strong presence here at IBM Enterprise2014, can do it too. Speaker after speaker emphasized that the Power machines are optimized for lightning fast analytics, particularly real time analytics.

Still, this was a z announcement so IBM piled on a few more goodies for the z. These include new analytics capabilities for the mainframe to enable better data security and provide companies with the ability to integrate Hadoop big data with the z. Specifically, IBM is delivering:

New capabilities in Linux and the cloud for System z, such as IBM Elastic Storage for Linux on System z, which extends the benefits of Elastic Storage to the Linux environment on z servers, and IBM Cloud Manager with OpenStack for System z, which enables heterogeneous cloud management across System z, Power, and x86 environments.

Many of these pieces are available now. You can meet the 5-minute rule sooner than you may think.

Alan Radding is DancingDinosaur. Follow him on Twitter, @mainframeblog, or check out his website, Technologywriter.com

Just in time for IBM Enterprise2014, which starts on Monday in Las Vegas, IBM announced some new POWER8 systems and a slew of new capabilities. Much of this actually was first telegraphed earlier in September here, but now it is official. Expect the full unveiling at IBM Enterprise2014.

The new systems are the Power E870 and the Power E880. The E870 includes up to 80 POWER8 cores in nodes of 32 or 40 cores and as much as 4TB of memory. The Power E880 will scale up to 128 POWER8 cores and promises even more in the next rev. It also sports up to 16TB of memory, again with more coming. This should be more than sufficient to perform analytics on significant workloads and deliver insights in real time. The E880 also offers enterprise storage pools to absorb varying shifts in workloads and can handle up to 20 virtual machines per core.

Back in December, DancingDinosaur referred to the Power System 795 as a RISC mainframe. It clearly has been superseded by the POWER8 E880 in terms of sheer performance, although the E880 is architected primarily for data analytics. There has been no hint of a refresh of the Power 795, which hasn’t even gotten the POWER7+ chip yet. Only two sessions at Enterprise2014 address the Power System 795. Hmmm.

The new POWER8 machines boast some impressive benchmarks as of Sept. 12, 2014 (SAP SD 2-tier, SPECjbb2013, SPECint_rate2006, and SPECfp_rate2006). Specifically, IBM is boasting the fastest performing core in the industry: 1.96x or better than the best Intel Xeon Ivy Bridge and 2.29x better than the best Oracle SPARC. In each test the new POWER8 machine ran with less than 2/3 of the cores of the competing machine, 10 vs. 15 or 16 respectively.

In terms of value, IBM says the new POWER8 machines cost less than competing systems, delivering 1000 users per core, double its nearest competitor. When pressed by DancingDinosaur on its cost analysis, IBM experts explained they set up new Linux apps on an enterprise-class POWER8 system and priced out a comparably configured system from HP based on its published prices. For the new POWER8 systems IBM was able to hold the same price point, which turned out to be 30% less expensive for comparable power given the chip’s increased performance. By factoring in the increase in POWER8 performance and the unchanged price, IBM calculated it had the lowest cost for comparable performance. DancingDinosaur recommends you run your actual numbers.

The recent announcement also included the first fruits of the OpenPower Foundation, an accelerator from NVIDIA. The new GPU accelerator, integrated directly into the server, is aimed at larger users of big data analytics, especially those using NoSQL databases. The accelerator is incorporated into a new server, the Power System S824L, which includes up to 24 POWER8 cores, 1 TB of memory, and up to 2 NVIDIA K40 GPU accelerators. It also includes a bare metal version of Ubuntu Linux. IBM reports it extracts patterns from large amounts of data 8x faster for a variety of analytics, big data, and technical computing workloads.

Another new goodie, one based on OpenStack, is IBM Power Virtualization Center (PowerVC), billed as new advanced virtualization management that promises to simplify the creation and management of virtual machines on IBM Power Systems servers using PowerVM or PowerKVM hypervisors. By leveraging OpenStack, it should enable IBM Power Systems servers to integrate into a Software Defined Environment (SDE) and provide the foundation required for delivering Infrastructure as a Service (IaaS) in the cloud.

Finally, as part of the POWER8 announcements, IBM unveiled Power Enterprise Pools, a slick capacity-on-demand technology also called Power Systems Pools. It offers a highly resilient and flexible IT environment to support large-scale server consolidation and meet demanding business application requirements. Power Enterprise Pools allow for the aggregation of compute resources, including processors and memory, across a number of Power Systems machines. Previously available for the Power 780 and 795, the capability is now available on large POWER8 machines.

Am off to IBM Enterprise2014 this weekend. Hope to see you there. When not in sessions look for me wherever the bloggers hang out (usually where there are ample power outlets to recharge laptops and smartphones). Also find me at the three evenings of live performances: two country rock groups, Delta Rae and The Wild Feathers, and then Rock of Ages. Check out all three here.

Alan Radding is DancingDinosaur. You can follow this blog and more on Twitter, @mainframeblog. Also, find me at Technologywriter.com.

About DancingDinosaur author

Alan Radding, the author of DancingDinosaur, is a 20-year IT industry analyst and journalist covering mainframe, midrange, PC, web, and cloud computing. Feel welcome to check out his website -- http://www.technologywriter.com.