Posts Tagged ‘z13’

IBM’s Institute for Business Value (IBV) recently completed a massive study based on 12,000 interviews with c-suite executives at legacy companies. Not just the CEO and CIO but the COO, CFO, CMO, and more, including the CHO, the Chief Happiness Officer. Not sure what a CHO actually does, but if one had been around when DancingDinosaur was looking for a corporate job he might have stayed on the corporate track instead of pursuing the independent analyst/writer dream.

(unattributed IBM graphic)

IBV actually referred to the study as “Incumbents strike back.” The incumbents being the legacy businesses the c-suite members represent. In a previous c-suite IBV study two years ago, the respondents expressed concern about being overwhelmed and overrun by new upstart companies, the born-on-the-web newcomers. In many ways the execs at that time felt they were under attack.

Spurred by fear, the execs in many cases turned to a new strategy that takes advantage of what has always been their source of strength, although they often lacked the ways and means to exploit it: the huge amounts of data they have gathered and stored, for decades in some cases. With new cognitive systems now able to extract and analyze this legacy data and combine it with new data, they could actually beat some of the upstarts. Finally, they could respond like nimble, agile operations, not the lumbering dinosaurs they were so often portrayed as.

“Incumbents have become smarter about leveraging valuable data, honing their employees’ skills, and in some cases, acquired possible disruptors to compete in today’s digital age,” the study finds, according to CIO Magazine, which published excerpts from the study here. The report reveals 72 percent of surveyed CxOs claimed the next wave of disruptive innovation will be led by the incumbents who pose a significant competitive threat to new entrants and digital players. By comparison, the survey found only 22 percent of respondents believe smaller companies and start-ups are leading disruptive change. This presents a dramatic reversal from a similar but smaller IBV survey two years ago.

Making possible this reversal is not only growing awareness among c-level execs of the value of their organizations’ data and the need to use it to counter the upstarts, but new technologies, approaches like DevOps, easier-to-use dev tools, the increasing adoption of Linux, and mainframes like the z13, z14, and LinuxONE, which have been optimized for hybrid and cloud computing. Also driving this is the emergence of platform options as a business strategy.

The platform option may be the most interesting decision right now. To paraphrase Hamlet: to be (a platform for your industry) or not to be. That indeed is a question many legacy businesses will need to confront. When you look at platform business models, what is right for your organization? Will you create a platform for your industry or piggyback on another company’s platform? To decide, you first need to understand the dynamics of building and operating a platform.

The IBV survey team explored that question and found the respondents pretty evenly divided, with 54% reporting they won’t build a platform while the rest expect to build and operate one. This is not a question you can ruminate over endlessly like Hamlet. The advantage goes to those who get there first in their industry segment; as IBV noted, only a few will survive in any one segment. It may come down to how finely you can segment the market for your platform and still maintain a distinct advantage. As CIO reported, the IBV survey found 57 percent of disruptive organizations are adopting a platform business model.

Also rising in importance is the people-talent-skills issue. C-level execs have always given lip service to the importance of people, as in the cliché that people are our greatest asset. Based on the latest survey, it turns out skills are necessary but not sufficient. Skills must be accompanied by the right culture. As the survey found, companies that have the right culture in place are more successful; in that case, the skills are just an added adrenalin shot. Still, the execs put people skills in their top three. The IBV analysts conclude: people and talent are coming back. Guess we’re not all going to be replaced soon with AI or cognitive computing, at least not yet.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his work at technologywriter.com and here.

A global study sponsored by IBM and conducted by the Ponemon Institute found that the average cost of a data breach for companies surveyed has grown to $4 million, a 29 percent increase since 2013. With cybersecurity incidents continuing to climb (64 percent more security incidents in 2015 than in 2014), the costs are poised to grow.

z13–world’s most secure system

The z13, at least, is one way to keep security costs down. It comes with a cryptographic processor unit available on every core, enabled as a no-charge feature. It also provides EAL5+ support, a regulatory certification for LPARs that verifies the separation of partitions to further improve security, along with a dozen or so other built-in z13 security features. For a full list of z13 security features click here. There also is a Redbook, Ultimate Security with the IBM z13, here. A midsize z, the z13s, brings the benefits of mainframe security and mainframe computing to smaller organizations. You read about the z13s here on DancingDinosaur this past February.

As security threats become more complex, the researchers noted, the cost to companies continues to rise. For example, the study found that companies lose $158 per compromised record. Breaches in highly regulated industries were even more costly, with healthcare reaching $355 per record – a full $100 more than in 2013. And the number of records involved can run from the thousands to the millions.

Wow, why so costly? The researchers try to answer that too: leveraging an incident response team was the single biggest factor associated with reducing the cost of a data breach – saving companies nearly $400,000 on average (or $16 per record). In fact, response activities like incident forensics, communications, legal expenditures and regulatory mandates account for 59 percent of the cost of a data breach. Part of these high costs may be linked to the fact that 70 percent of U.S. security executives report they don’t even have incident response plans in place.

The process of responding to a breach is extremely complex and time consuming if not properly planned for. As described by the researchers, the process of responding to a breach consists of a minimum of four steps. Among the specified steps, a company must:

Work with IT or outside security experts to quickly identify the source of the breach and stop any more data leakage

Set up any necessary hotline support and credit monitoring services for affected customers

And not even included in the researchers’ list are tasks like inventorying and identifying the data records that have been corrupted or destroyed, remediating the damaged data, and validating it against the last known clean backup copy. Am surprised the costs aren’t even higher. Let’s not even talk about the PR damage or loss of customer goodwill. Now, aren’t you glad you have a z13?

That’s not even the worst of it. The study also found the longer it takes to detect and contain a data breach, the more costly it becomes to resolve. While breaches that were identified in less than 100 days cost companies an average of $3.23 million, breaches that were found after the 100-day mark cost over $1 million more on average ($4.38 million). The average time to identify a breach in the study was estimated at 201 days and the average time to contain a breach was estimated at 70 days. The cost of a z13 or even the lower cost z13s could justify itself by averting just one data breach.
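The study’s averages make for a simple back-of-the-envelope calculator. Here is a minimal Python sketch using only the figures quoted above (the per-record costs and the roughly $16-per-record incident-response saving); the function and parameter names are illustrative, not from the study:

```python
# Rough breach-cost estimator built from the Ponemon study averages quoted above.
PER_RECORD_COST = 158           # average cost per compromised record (USD)
PER_RECORD_HEALTHCARE = 355     # healthcare per-record cost (USD)
IR_TEAM_SAVING_PER_RECORD = 16  # average saving with an incident response team

def breach_cost(records, healthcare=False, has_ir_team=False):
    """Estimate total breach cost from the record count and study averages."""
    rate = PER_RECORD_HEALTHCARE if healthcare else PER_RECORD_COST
    if has_ir_team:
        rate -= IR_TEAM_SAVING_PER_RECORD
    return records * rate

# A 25,000-record breach lands right around the study's $4 million average.
print(breach_cost(25_000))                    # 3950000
print(breach_cost(25_000, has_ir_team=True))  # 3550000
```

The healthcare rate shows why regulated industries fare worse: the same 25,000-record breach more than doubles in cost.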

The researchers also found that companies with predefined Business Continuity Management (BCM) processes in place found and contained breaches more quickly, discovering breaches 52 days earlier and containing them 36 days faster than companies without BCM. Still, the cheapest solution is to avert breaches in the first place.

Not surprisingly, IBM is targeting the incident response business as an up-and-coming profit center. The company increased its investment in the incident response market with the recent acquisition of Resilient Systems, which just came out with an updated version that graphically displays the relationships between Indicators of Compromise (IOCs) and incidents in an organization’s environment. But the z13 is probably a better investment if you want to avoid data breaches in the first place.

Running Syncsort’s Ironstream and leveraging Splunk Enterprise, Medical Mutual of Ohio has now implemented mainframe security monitoring in real time through the Splunk® Enterprise platform. One goal is to help protect customer information stored in DB2 from unauthorized access. Syncsort’s Ironstream, a utility, collects and forwards z/OS log data, including security data, to Splunk Enterprise and Splunk Enterprise Security.

z/OS security data, courtesy of Syncsort

“We’ve always had visibility. Now we can get it faster, in real time directly from the mainframe,” said the insurer’s enterprise security supervisor. Previously, the company would do a conventional data transfer, which could take several hours. The new approach, sometimes referred to as a big iron-to-big data strategy, now delivers security log data in near real time. This enables the security team to correlate all the security data from across the enterprise to effectively and quickly gain visibility into user-authentication data and access attempts tracked on the mainframe. And they can do it without needing specialized expertise or different monitoring systems for z/OS.

Real-time analytics, including real-time predictive analytics, are increasingly attractive as solutions for the growing security challenges organizations are facing. These challenges are due, in large part, to the explosion of transaction activity driven by mobile computing and, soon, IoT and blockchain, most of which eventually finds its way to the mainframe. All of these present immediate security concerns and require fast, nearly instant security decisions. Even cloud usage, which one would expect to be mainstream in enterprises by now, often is curtailed due to security fears.

With the Ironstream and Splunk combination, Medical Mutual can see previously slow-to-access mainframe data alongside other security information it was already analyzing in Splunk Enterprise. Splunk Enterprise enables a consolidated enterprise-wide view of machine data collected across the business, which makes it possible to correlate events that might not raise suspicion alone but could be indicative of a threat when seen together.
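To make the correlation idea concrete, here is a toy Python sketch of the kind of cross-platform matching Splunk performs at scale: flag a user whose authentication failures appear on both the mainframe and a distributed platform within a short window. The event format and the five-minute window are invented for illustration; a real deployment would do this with Splunk searches over Ironstream-forwarded data, not hand-rolled code:

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical simplified events; in practice these come from Splunk's indexes
# of Ironstream-forwarded z/OS logs and Windows/distributed server logs.
events = [
    {"src": "zos",     "user": "jdoe",   "type": "auth_failure", "time": datetime(2016, 7, 1, 9, 0)},
    {"src": "windows", "user": "jdoe",   "type": "auth_failure", "time": datetime(2016, 7, 1, 9, 2)},
    {"src": "zos",     "user": "asmith", "type": "auth_failure", "time": datetime(2016, 7, 1, 14, 0)},
]

def correlated_users(events, window=timedelta(minutes=5)):
    """Flag users with auth failures on more than one platform inside the window."""
    by_user = defaultdict(list)
    for e in events:
        if e["type"] == "auth_failure":
            by_user[e["user"]].append(e)
    flagged = set()
    for user, evts in by_user.items():
        for a in evts:
            for b in evts:
                if a["src"] != b["src"] and abs(a["time"] - b["time"]) <= window:
                    flagged.add(user)
    return flagged

print(correlated_users(events))  # {'jdoe'}
```

Neither failure alone looks alarming; the same user failing on two platforms two minutes apart is exactly the pattern that only an enterprise-wide view can surface.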

The deployment proved to be straightforward. Medical Mutual’s in-house IT team set it up in a week with Syncsort answering deployment questions to assist. Although there are numerous tools to capture log data from the mainframe, the insurer chose to go with the Splunk-Ironstream combination because it already was using Splunk in house for centralized logging. Adding mainframe security logs was an easy step. “This was affordable and it saved us from having to learn another product,” the security supervisor added. Medical Mutual runs a z13, model 409 with Ironstream.

According to the announcement, having Ironstream leverage z/OS log data via Splunk Enterprise enables Medical Mutual to:

Track security events and data from multiple platforms including IBM z/OS mainframes, Windows and distributed servers and correlate the information in Splunk Enterprise for better security.

Diagnose and respond to high severity security issues more quickly since data from across the entire enterprise is being monitored in real time.

Provide monthly and daily reporting with an up-to-the-minute account of unusual user activity.

Real-time monitoring with analytics has proven crucial for security. You can actually detect fraud while it is taking place and before serious damage is done. It is much harder to recoup losses hours, days, or, as is often the case, months later.

The Splunk platform can handle massive amounts of data from different formats and indexes and decipher and correlate security events through analytics. Ironstream brings the ability to stream mainframe security data for even greater insights, and Ironstream’s low overhead keeps mainframe processing costs low.

To try the big iron-to-big data strategy organizations can download a free Ironstream Starter Edition and begin streaming z/OS Syslog data into Splunk solutions. Unlike typical technology trials, the Starter Edition is not time-limited and may be used in production at no charge. This includes access to the Ironstream applications available for download on Splunkbase.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

The z13 is the most powerful general purpose computer IBM has ever made. The key to capturing the maximum value from the z13, however, lies in how you plan, design, configure, and optimize your systems and software for everything from COBOL and Java to process parallelization and analytics. What you do in this regard will have significant impact on not only the price/performance you experience but on your success at achieving the business outcomes you are expecting.

IBM System z13

This really becomes a software configuration challenge. By tapping approximately 600 internal processors, IBM already has optimized the hardware, input, output, memory, and networking/communications about as much as they can be. Your job is to optimize the software you are running, which will require working closely with your ISV.

The place to start is by leveraging the z13’s new compiler technology, parallelism, zIIP and assist processors. This will enable you to save significant money while boosting workload performance. You will literally be doing more for less.

Similarly, in the not too distant past Moore’s Law would virtually guarantee a 15-20% price/performance gain automatically just by taking a new machine out of the box and plugging it in. That’s no longer the case. Now you will have to partner with your ISV to exploit advanced software to maximize the hardware payback and continue the ride along the favorable Moore’s Law price/performance slope.

Then look at the latest COBOL V5.x and its compiler on the z13. Out of the box it is better optimized than previous compilers. In general, the strategic value of COBOL V5.x comes from migrating high CPU usage programs as quickly as possible, effectively saving organizations considerable money by running optimized code.

Some organizations report a 15% on average reduction of CPU time, which adds up to significant savings in monthly CPU charges. How significant? Up to $150k less on a $1 million bill, with some reporting even higher percentage reductions producing even greater savings. Just migrate to COBOL V5.2 (or at least V5.1) to achieve the savings. In general, staying on the software curve with the latest releases of the OS, languages, and compilers with applications optimized for them is the best way to ensure your workloads are achieving top performance in the most cost-effective way.
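The savings math is straightforward. A hedged Python sketch, assuming (as a simplification) that monthly software charges scale roughly linearly with CPU time:

```python
def monthly_savings(monthly_cpu_bill, cpu_reduction=0.15):
    """Estimated monthly charge reduction from a given CPU-time reduction.
    Assumes charges scale linearly with CPU time, which is a simplification:
    real z software pricing depends on peak usage windows and contract terms."""
    return monthly_cpu_bill * cpu_reduction

# The $150k-on-$1M example quoted above, at the reported 15% average reduction.
print(monthly_savings(1_000_000))  # 150000.0
```

Sites reporting higher percentage reductions simply slide the `cpu_reduction` figure up; the linear assumption is the part to validate against your own billing.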

For example, the new z13 processor leverages a new Vector Facility for certain COBOL statements and expands the use of Decimal Floating Point Facility for packed decimal calculations. Well-structured, compute-intensive batch applications running on z13 and compiled with the Enterprise COBOL V5.2 compiler have shown CPU reduction usage of up to 14% over the same applications running on zEC12 (compiled with the GA release of Enterprise COBOL V5.1), according to IBM. The result: improved workload price/performance.
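Hardware decimal support matters because binary floating point cannot represent most decimal fractions exactly, which is unacceptable in financial calculations. A small Python illustration of the underlying numerical point, using Python’s decimal module as a stand-in for hardware decimal arithmetic (not z hardware itself):

```python
from decimal import Decimal

# Binary floating point cannot represent 0.10 exactly, so repeated additions
# drift -- the reason packed-decimal/DFP arithmetic exists for money.
binary_sum = sum(0.10 for _ in range(1000))
decimal_sum = sum(Decimal("0.10") for _ in range(1000))

print(binary_sum)   # close to, but not exactly, 100.0
print(decimal_sum)  # 100.00 -- exact
```

COBOL’s packed-decimal types give the exact behavior of the second line; the z13’s Decimal Floating Point Facility makes that exactness fast in hardware instead of in software emulation.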

Enterprise COBOL V5.2 also includes new features to improve programmability, developer productivity, and application modernization. Supporting JSON, for instance, will provide mobile applications easy access to data and the processing they need from business critical production applications written in COBOL.

The z13 and its z sisters, the latest dedicated Linux LinuxONE models, were designed and optimized from the start for cloud, mobile, and analytics. They were intended to run alongside traditional mainframe workloads, with z/OS or Linux running on the appropriate models.

Finally, plan to take advantage of the new assist processors and expanded memory capacity to further boost performance and lower cost. With the z13, there is a mandatory migration of all zAAP-enabled applications to zIIP. Expect the usage of the zIIP assist processors to surge when all those Java applications move from the zAAP. ISVs like Compuware should be able to help with this. In addition, if you enable SMT on the z13, you’ll immediately get more Java capacity. Applications that run under IBM WebSphere (WAS) on z/OS will benefit too.

The z13 and especially the LinuxONE are breaking new ground. IBM has established, in conjunction with the Linux Foundation, an Open Mainframe Project to support and advance ongoing open source Linux innovation on the mainframe. IBM also is breaking with its traditional mainframe pricing model by offering a pay-per-use option in the form of a fixed monthly payment with costs scaling up or down based on usage. It also offers per-core pricing with software licenses for designated cores. See DancingDinosaur here.

An upcoming DancingDinosaur will look at more of the enhancements being added to these machines, including some of the latest LinuxONE enhancements like support for Google’s Go language and Cloudant’s NoSQL services. The message: the new z System can take you to the places you will want to be in this emerging cloud-mobile-analytics era.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

The last z System that conformed to the expectations of Moore’s Law was the zEC12. IBM could boast that it had the fastest commercial processor available. The subsequent z13 didn’t match it in processor speed. The z13 chip runs a 22 nm core at 5 GHz, a half GHz slower than the zEC12, which ran its 32 nm core at 5.5 GHz. Did you even notice?

In 2007 an IBM scientist holds a 3-D integrated stacked chip

In 2015, the z13 delivers about a 10 percent performance bump per core thanks to the latest tweaks in the core design, such as better branch prediction and better pipelining. But even a half GHz slower, the z13 was the first system to process 2.5 billion transactions a day. Even more importantly for enterprise data centers, z13 transactions are persistent, protected, and auditable from end to end, adding assurance as mobile transactions grow to an estimated 40 trillion per day by 2025.

IBM clearly isn’t bemoaning the decline of Moore’s Law. In fact, it has been looking beyond silicon for the processing of the future. This week it announced a major engineering breakthrough that could accelerate carbon nanotubes for the replacement of silicon transistors to power future computing. The breakthrough allows a new way to shrink transistor contacts without reducing the performance of carbon nanotube devices, essentially opening a path to dramatically faster, smaller, and more powerful computer chips beyond the capabilities of traditional semiconductors. Guess we can stop worrying about Moore’s Law.

Without Moore’s Law, IBM optimized just about everything on the z13 that could be optimized. It provides 320 separate channels dedicated to drive I/O throughput as well as such performance goodies as simultaneous multithreading (SMT), symmetric multiprocessing (SMP), and single instruction, multiple data (SIMD). Overall about 600 processors (in addition to your configurable cores) speed and streamline processes throughout the machine. Moore’s Law, in effect, has been bypassed. As much as the industry enjoyed the annual doubling of capacity and corresponding lower price/performance it doesn’t need Moore’s Law to meet today’s insatiable demand for processing power.

The company will be doing similar things with the POWER processor. Today we have the POWER8. Coming is the POWER9 followed by the POWER10. The POWER9 reportedly will arrive in 2017 at 14nm, feature a new micro-architecture, and be optimized with CAPI and NVLINK. POWER10, reportedly, arrives around 2020 optimized for extreme analytics.

As IBM explains its latest breakthrough, carbon nanotubes represent a new class of semiconductor materials that consist of single atomic sheets of carbon rolled up into a tube. The carbon nanotubes form the core of a transistor device whose superior electrical properties promise several generations of technology scaling beyond the physical limits of silicon.

The new processor technology, IBM reports, overcomes a major hurdle that silicon and any other semiconductor transistor technologies face when scaling down. In the transistor, two things scale: the channel and its two contacts. As devices become smaller, the increased contact resistance of carbon nanotubes hindered performance gains. The latest development could overcome contact resistance all the way to the 1.8 nanometer node – four technology generations away.

Carbon nanotube chips could greatly improve the capabilities of high performance computers, enabling, for example, big data to be analyzed faster, increasing the power and battery life of mobile devices, and allowing cloud data centers to deliver services more efficiently and economically. Even cognitive computing and Internet of Things can benefit.

Until now, vendors have been able to shrink the silicon transistors, but they are approaching a point of physical limitation, which is why Moore’s Law is running out of steam. Previously, IBM demonstrated that carbon nanotube transistors can operate as effective switches at channel dimensions of less than ten nanometers. IBM’s new contact approach overcomes the contact resistance by incorporating carbon nanotubes into semiconductor devices, which could result in smaller chips with greater performance and lower power consumption.

As transistors shrink in size, electrical resistance within the contacts increases, which limits performance. To overcome this resistance, IBM researchers gave up traditional contact schemes and created a metallurgical process akin to microscopic welding that chemically binds the metal atoms to the carbon atoms at the ends of nanotubes. This end-bonded contact scheme allows the contacts to be shrunken below 10 nanometers without impacting performance. This brings the industry a step closer to the goal of a carbon nanotube technology within the decade, says IBM.

Let’s hope this works as expected. If not, IBM has other possibilities already in its research labs. DancingDinosaur is Alan Radding, a veteran IT analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

Meet the new IBM z System, called LinuxONE Emperor (named after the Emperor Penguin). It is a z13 running only Linux. Check out the full announcement here.

Courtesy of IBM, LinuxONE Emperor, the newest z System

DancingDinosaur is excited by several aspects of this announcement: IBM is establishing, in conjunction with the Linux Foundation, an Open Mainframe Project; the company is breaking with its traditional mainframe pricing model; it also is putting KVM and Ubuntu on the machine; and it is offering a smorgasbord of app-dev options, including some of the sexiest in the industry today. DancingDinosaur never believed it would refer to a mainframe as sexy (must be time to retire).

Along with LinuxONE Emperor, IBM announced an entry dedicated Linux machine, the LinuxONE Rockhopper. (BTW, notice the new playfulness in IBM’s product naming.) Rockhopper appears to be very similar to what IBM used to call a Business Class z, although IBM has stepped away from that designation; the closest you may get to a z13 business class machine may be LinuxONE Rockhopper. Rockhopper, according to IBM, is designed for clients and emerging markets seeking the speed, security, and availability of the mainframe but in a smaller package.

The biggest long term potential impact from the announcement may come out of the Open Mainframe Project. Like many of IBM’s community project initiatives, IBM is starting by seeding the open community with z code, in effect creating the beginning of an open z System machine. IBM describes this as the largest single contribution of mainframe code from IBM to the open source community. A key part of the mainframe code contributions will be the z’s IT predictive analytics that constantly monitor for unusual system behavior and help prevent issues from turning into failures. In effect, IBM is handing over zAware to the open source community. It had already announced intentions to port zAware to Linux on z early this year so it might as well make it fully open. The code, notes IBM, can be used by developers to build similar sense-and-respond resiliency capabilities for other systems.

The Open Mainframe Project, being formed with the Linux Foundation, will involve a collaboration of nearly a dozen organizations across academia, government, and corporate sectors to advance development and adoption of Linux on the mainframe. It appears that most of the big mainframe ISVs have already signed on. DancingDinosaur, however, expressed concern that this approach brings the possibility of branching the underlying functionality between z and Linux versions. IBM insists that won’t happen since the innovations would be implemented at the software level, safely insulated from the hardware. And furthermore, should there emerge an innovation that makes sense for the z System, maybe some innovation around the zAware capabilities, the company is prepared to bring it back to the core z.

The newly announced pricing should also present an interesting opportunity for shops running Linux on z. As IBM notes, new financing models for the LinuxONE portfolio provide flexibility in pricing and resources that allow enterprises to pay for what they use and scale up quickly when their business grows.

Specifically, for IBM hardware and software the company is offering a pay-per-use option in the form of a fixed monthly payment with costs scaling up or down based on usage. It also offers per-core pricing with software licenses for designated cores; in that case you can order what you need and decrease licenses or cancel on 30 days’ notice. Or, you can rent a LinuxONE machine monthly with no upfront payment. At the end of the 36-month rental (you can return the hardware after 1 year) you choose to return, buy, or replace.

Having spent hours attending mainframe pricing sessions at numerous IBM conferences, this seems refreshingly straightforward. IBM has not yet provided any prices to analysts, so whether this actually is a bargain remains to be seen. But at least you have pricing option flexibility you never had before.
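Since IBM had released no actual prices, any comparison is purely illustrative, but a sketch with invented placeholder numbers shows how the pay-per-use and rental options might be weighed against each other:

```python
# All prices here are invented placeholders -- IBM had not published LinuxONE
# pricing -- the point is only the shape of the comparison.
def pay_per_use_total(monthly_base, usage_factors):
    """Fixed monthly payment, scaled up or down by a per-month usage factor."""
    return sum(monthly_base * f for f in usage_factors)

def rental_total(monthly_rate, months):
    """Straight monthly rental, no upfront payment."""
    return monthly_rate * months

usage = [0.8, 1.0, 1.3, 0.9]                 # hypothetical usage swings
print(pay_per_use_total(10_000, usage))      # 40000.0
print(rental_total(12_000, 4))               # 48000
```

The interesting variable is usage volatility: the more your demand swings, the more the pay-per-use option’s scaling works in your favor relative to a flat rental.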

The introduction of support for both KVM and Ubuntu on the z platform opens intriguing possibilities. Full disclosure: DancingDinosaur was an early Fedora adopter because he could get it to run on a memory-challenged antiquated laptop. With the LinuxONE announcement, Ubuntu has been elevated to a fully z-supported Linux distribution. Together IBM and Canonical are bringing a distribution of Linux incorporating Ubuntu’s scale-out and cloud expertise to the IBM z Systems platform, further expanding the reach of both. Ubuntu combined with KVM should make either LinuxONE machine very attractive for OpenStack-based hybrid cloud computing that may involve thousands of VMs. Depending on how IBM ultimately prices things, this could turn into an unexpected bargain for Linux on z data centers that want to save money by consolidating x86 Linux servers, thereby reducing the data center footprint and cutting energy costs. LinuxONE Emperor can handle 8,000 virtual servers in a single system, and tens of thousands of containers.
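A rough consolidation estimate is easy to sketch. Assuming the up-to-8,000-virtual-servers figure as a hard ceiling (real sizing would depend heavily on workload), a hypothetical Python calculation:

```python
import math

def consolidation_ratio(x86_virtual_servers, vms_per_linuxone=8000):
    """LinuxONE systems needed to absorb a given x86 virtual-server estate,
    taking the up-to-8,000-virtual-servers figure above as the ceiling.
    Real capacity planning would size from workload profiles, not a flat cap."""
    return math.ceil(x86_virtual_servers / vms_per_linuxone)

print(consolidation_ratio(20_000))  # 3
```

Even at a fraction of the quoted ceiling, the footprint and energy arithmetic is what makes the consolidation case against racks of x86 servers.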

Finally, LinuxONE can run the sexiest app-dev tools using any of the hottest open technologies, specifically:

And run the results however you want: single platform, multi-platform, on-prem and off-prem, or multiple mixed cloud environments with a common toolset. Could a combination of LinuxONE alongside a conventional z13 be the mainframe data center you really want going forward?

DancingDinosaur is Alan Radding, a veteran IT analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

Although the financial markets may be beating up IBM, the technology world continues to acclaim IBM’s technology and products. Most recently, IBM ranked at the top of the CRN Annual Report Card (ARC) Survey, which recognizes best-in-class vendors in the categories of partnership, support, and product innovation. But the accolades don’t stop there.

Courtesy of IBM (click to enlarge)

IBM was named a leader in four key cloud services categories—hosting, overall cloud professional services, cloud consulting services, and systems integration—by the independent technology market research firm Technology Business Research, Inc. (TBR). This summer Gartner also named IBM as a leader in Security Information and Event Management (SIEM) in the latest Gartner Magic Quadrant for SIEM, this for the seventh consecutive year. Gartner also named IBM as a Leader in the 2015 Magic Quadrant for Mobile Application Development Platforms, specifically calling out the IBM MobileFirst Platform.

The CRN award addresses the technology channel. According to IBM, the company and its business partners are engaging with clients in new ways to work, building the infrastructure, and deploying innovative solutions for the digital era. This should come as no surprise to anyone reading this blog; the z13 was designed expressly to be a digital platform for the cloud, mobile, and big data era. IBM’s z and Power Systems servers and storage solutions specifically were designed to address the challenges these areas present.

Along the same lines, IBM’s commitment to open alliances has continued unabated this year, starting with its focus on innovation platforms designed for big data and superior cloud economics, which continue to be the cornerstone of IBM Power Systems. The company also plays a leading role in the OpenPOWER Foundation and the Linux Foundation, as well as ramping up communities around the Internet of Things (developerWorks Recipes) and the open cloud (developerWorks Open). The last two were topics DancingDinosaur tackled recently, here and here.

The TBR report, entitled Hosted Private & Professional Services Cloud Benchmark, provides a market synopsis and growth estimates for 29 cloud providers in the first quarter of 2015. In that report, TBR cited IBM as:

The undisputed growth leader in overall professional cloud services

The leader in hosted private cloud and managed cloud services

A leader in OpenStack vendor acquisitions and OpenStack cloud initiatives

A growth leader in cloud consulting services, bridging the gap between technology and strategy consulting

A growth leader in cloud systems integration services

According to the report: IBM’s leading position across all categories remains unchallenged as the company’s established SoftLayer and Bluemix portfolios, coupled with in-house cloud and solutions integration expertise, provide enterprises with end-to-end solutions.

Wall Street analysts and pundits clearly look at IBM differently than IT analysts. The folks who look at IBM’s technology, strategy, and services, like those at Gartner, TBR, and the CRN report card, tell a different story. Who do you think has it right?

DancingDinosaur is Alan Radding, a veteran IT analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

For years organizations have been putting their analytics on distributed platforms, thinking that was the only way to get fast, real-time, predictive analytics. Maybe that was true once, but not anymore. It turns out the IBM z System, especially the z13, is not only well suited to real-time, predictive analytics but preferable for it.

IBM today is so bullish on analytics, especially predictive analytics, that last month it introduced 20 pre-built industry-specific predictive analytics solutions. To build these solutions IBM tapped its own experience working on 50,000 engagements but also an array of outside organizations with success in predictive analytics, including Urban Outfitters, National Grid, Deloitte, Bolsa de Santiago, Interactive Data Managed Solutions, and Bendigo and Adelaide Bank, among others.

Courtesy of IBM (click to enlarge)

The truth of the matter is that without efficient real-time, predictive analytics, managers get it wrong most of the time when it comes to making operational decisions, said Paul DiMarzio, IBM z Systems Big Data and Analytics Worldwide Portfolio Marketing Manager. He spoke at IBM Edge2015 in a session titled When Milliseconds Matter: Architecting Real-Time Analytics into Operational Systems. His key point: you can do this completely within the IBM z System.

The old notion of sending data to distributed systems someplace else for analytics now appears ridiculous, especially with the introduction of systems like the z13 that can handle operations and perform real time analytics concurrently. It performs analytics fast enough that you can make decisions when the action is still going on. Now the only question is whether we have the right business rules and scoring models. The data already are there and the tools are ready and waiting on the z13.

You start with the IBM SPSS Modeler with Scoring Adapter for zEnterprise. The real time predictive analytics capability delivers better, more profitable decisions at the point of customer impact. For business rules just turn to the IBM Operational Decision Manager for z/OS, which codifies business policies, practices, and regulations.
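To make concrete how a scoring model and a business rule combine at the point of decision, here is a hypothetical Java sketch. The logistic `churnScore` stands in for a model deployed through SPSS scoring, and the threshold check stands in for an Operational Decision Manager rule; neither reflects the actual product APIs, which the sketch does not attempt to reproduce.

```java
public class DecisionSketch {
    // Hypothetical logistic model, standing in for a deployed SPSS scoring model.
    static double churnScore(double monthsSinceLastPurchase, double monthlySpend) {
        double z = -2.0 + 0.8 * monthsSinceLastPurchase - 0.01 * monthlySpend;
        return 1.0 / (1.0 + Math.exp(-z)); // probability between 0 and 1
    }

    // Hypothetical business rule, standing in for an ODM decision:
    // offer a retention discount when churn risk exceeds 50 percent.
    public static boolean offerRetentionDiscount(double months, double spend) {
        return churnScore(months, spend) > 0.5;
    }

    public static void main(String[] args) {
        System.out.println(offerRetentionDiscount(5, 100)); // lapsed customer: true
        System.out.println(offerRetentionDiscount(0, 100)); // recent customer: false
    }
}
```

The point of keeping the score and the rule side by side is that both can run in the same transaction path, which is exactly what doing analytics where the data lives enables.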

In addition to SPSS and the Operational Decision Manager, the z13 brings many capabilities, some of them new with the z13. For starters, the z13 excels as a custodian of the data model, providing an accurate, secure, single copy of information that, according to IBM, ensures veracity of the data necessary for reliable analytics and provides centralized control over decision information.

Specifically, the machine brings SIMD (single instruction multiple data) and the MASS (mathematical acceleration subsystem) and ATLAS (automatically tuned linear algebra software) libraries for z/OS and Linux on z. SIMD enables the same operation to be performed on several data elements at the same time rather than sequentially. MASS and ATLAS help programmers create better and more complex analytic models.
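To illustrate the SIMD idea, here is a conceptual Java sketch that computes a dot product two ways: one element at a time, and four elements per loop iteration. Real SIMD performs those four multiply-adds simultaneously in hardware vector registers rather than as four sequential statements, so this is only an illustration of the data-parallel shape of the work, not actual z13 SIMD exploitation.

```java
public class SimdSketch {
    // Scalar version: one multiply-add per loop iteration.
    static double dotScalar(double[] a, double[] b) {
        double sum = 0;
        for (int i = 0; i < a.length; i++) sum += a[i] * b[i];
        return sum;
    }

    // SIMD-style version: the loop body handles four independent
    // elements, the same shape a vector unit executes in one step.
    static double dotUnrolled(double[] a, double[] b) {
        double s0 = 0, s1 = 0, s2 = 0, s3 = 0;
        int i = 0;
        for (; i + 4 <= a.length; i += 4) {
            s0 += a[i]     * b[i];
            s1 += a[i + 1] * b[i + 1];
            s2 += a[i + 2] * b[i + 2];
            s3 += a[i + 3] * b[i + 3];
        }
        double sum = s0 + s1 + s2 + s3;
        for (; i < a.length; i++) sum += a[i] * b[i]; // leftover elements
        return sum;
    }

    public static void main(String[] args) {
        double[] a = {1, 2, 3, 4, 5}, b = {5, 4, 3, 2, 1};
        System.out.println(dotScalar(a, b));   // 35.0
        System.out.println(dotUnrolled(a, b)); // 35.0
    }
}
```

Analytics kernels are full of exactly this kind of loop, which is why libraries like MASS and ATLAS, which package such operations in tuned form, matter for model performance.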

In addition, increases in memory to as much as 10 TB, faster I/O, and simultaneous multi-threading (SMT) generally boost overall throughput of the z13, which will surely benefit any analytics being run on the machine, especially real time, predictive analytics. In addition, analytics on the z13 gains from deep integration with core systems, the integrated architecture, and its single pane management view.

The latest IBM Red Book on analytics on the z13 sums it up as such: z Systems analytics enables organizations to improve performance and lower cost by bringing the analytic processing to where the data resides. Organizations can therefore maximize their current IT investments while adding functionality and improved price and performance with the z13. And with the new z13 features, applications can gain increased throughput for operational business intelligence (operational BI) and DB2 query workloads, which saves money (hardware, software, labor).

The Red Book suggests the following example: a user with a mobile application signs on and initiates a transaction flow through an IBM MobileFirst Platform Server running on Linux on z. The event goes to an LDAP server on z/OS to validate the user’s sign-on credentials. After successful validation, the transaction then proceeds through the z/OS transaction environment where all of the data resides in DB2 z/OS. IBM CICS transactions also are processed in the same z environment and all of the analysis is performed without moving any data, resulting in extremely fast performance. Sweet.

DancingDinosaur is Alan Radding, a veteran IT analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing on Technologywriter.com and here.

In October, IBM introduced a new range of POWER systems capable of handling massive amounts of computational data faster at nearly 20 percent better price/performance than comparable Intel Xeon v3 processor-based systems, delivering to clients a superior alternative to closed, commodity-based data center servers. DancingDinosaur covered it last October here. Expect this theme to play out big at IBM Edge2015 in Las Vegas, May 10-15. Just a sampling of a few of the many POWER sessions makes that clear:

Courtesy of Studio Stence, Power S824L (click to enlarge)

(lCV1655) Linux on Power and Linux on Intel: Side By Side, IT Economics Positioning; presenter Susan Proietti Conti

Based on real cases studied by the IBM Eagle team for many customers in different industries and geographies, this session explains where and when Linux on Power provides a competitive alternative to Linux on Intel. The session also highlights the IT economic value of architecture choices provided by the Linux/KVM/Power stack, based on open technologies brought by POWER8 and managed through OpenStack. DancingDinosaur periodically covers studies like these here and here.

Since the announcement of POWER8 and building momentum of the OpenPOWER consortium, there are new reasons for cloud service providers to look at Power technology to support their offerings. As an alternative open-based technology to traditional proprietary technologies, Power offers many competitive advantages that can be leveraged for cloud service providers to deliver IaaS services and other types of service delivery. This session illustrates what Power offers by highlighting client examples and the results of IT economics studies performed for different cloud service providers.

(lSY2653) Why POWER8 Is the Platform of Choice for Linux; presenter Gary Andrews

Linux is the platform of choice for running next generation workloads. With POWER8, IBM is investing heavily in Linux and is adding major enhancements to the Power platform to make it the server of choice for running Linux workloads. This session discusses the new features and how they can help run business faster and at lower cost on the Power platform. Andrews also points out many advanced features of Linux on Power that you can't get with Linux on x86. He shows how competitive comparisons and performance tests demonstrate that POWER8 increases its lead over the latest x86 processor family. In short, attend this session to understand the competitive advantages that POWER8 on Linux can deliver compared to Linux on x86.

(pBA1244) POWER8: Built for Big Data; presenter William Starke

Starke explains how IBM technologies from semiconductors through micro-architecture, system design, system software, and database and analytic software culminate in the POWER8 family of products optimized around big data analytics workloads. He shows how the optimization across these technologies delivers order-of-magnitude improvements via several example scenarios.

This session presents a set of best practices that have been tried and tested in various application domains to get the maximum performance of an application on a POWER8 processor. Performance improvement can be gained at various levels: the system level, where system parameters can be tuned; the application level, where some parameters can be tuned as there is no one-size-fits-all scenario; and the compiler level, where options for every kind of application have shown to improve performance. Some options are unique to IBM and give an edge over competition in gaming applications. In cases where applications are still under development, Ravindar presents guidelines to ensure the code runs fastest on Power.

Here you get to examine decision points for how and when to use an existing Power infrastructure in a cloud environment. This session covers on-premises and off-premises, single vs. multi-tenant hosting, and security concerns. You also review IaaS, PaaS, and hybrid cloud solutions incorporating existing assets into a cloud infrastructure. Discover provisioning techniques to go from months to days and then to hours for new instances.

One session DancingDinosaur hasn’t found yet is whether it is less costly for an enterprise to virtualize a couple of thousand Linux virtual machines on one of the new IBM Power servers pictured above or on the z13 as an Enterprise Linux server purchased under the System z Solution Edition Program. Hmm, will have to ask around about that. But either way you’d end up with very low cost VMs compared to x86.

Of course, save time for the free evening entertainment. In addition to Penn & Teller, a pair of magicians, and rocker Grace Potter, here, there will be a weird but terrific group, 2Cellos, as well.

DancingDinosaur is Alan Radding, a veteran IT analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing on Technologywriter.com and here. Please join DancingDinosaur at IBM Edge2015. You will find me hanging out wherever people gather around available power outlets to recharge mobile devices.

There are so many interesting z Systems sessions at IBM Edge2015 that DancingDinosaur can’t come close to attending them all or even writing about them. Edge2015 will be in Las Vegas, May 10-15, at the Venetian, a huge hotel that just happens to have a faux Venice canal running within it (and Vegas is in the desert, remember).

The following offers a brief summation of a few z Systems sessions that jumped out at me. In the coming weeks DancingDinosaur will look at sessions on storage, Power Systems, cross-platform topics, and middleware. IBM bills Edge2015 as the Infrastructure Innovation Conference, so this blog will try at least to touch on bits of all of it. Am including the session numbers and presenters, but please note that sessions and presenters may change.

DancingDinosaur started following mobile on z in 2012 and was reporting IBM mobile successes as recently as last month, click here. In this session Simmonds observes organizations being driven to deliver more insight and smarter outcomes in pursuit of increasing revenue and profit while lowering business costs and risks. The ubiquity of mobile devices adds two important dimensions to business analytics: the time and location of customers. Now you have an opportunity to leverage both via the mobile channel, but only if your analytics strategy can respond to the demands of the mobile moment. At this session you'll see how customers are using IBM solutions and the z to deliver business critical insight across the mobile community and hear how organizations are setting themselves apart by delivering near real-time analytics.

Session zBA1822; Hadoop and z Systems; presenter Alan Fellwock

DancingDinosaur looked at Hadoop on z as early as 2011. At that point it was mainly an evolving promise. By this past fall it had gotten real, click here. In this session, Fellwock notes that various use cases are emerging that require Hadoop processing in conjunction with z Systems. In one category, the data originates on the z Systems platform itself—this could be semi-structured or unstructured data held in DB2 z/OS, VSAM, or log files in z/OS. In another category, the data originates outside z Systems—this could be social media data, email, machine data, etc.—but needs to be integrated with core data on z Systems. Security and z Systems governance becomes critical for use cases where data originates on z Systems. There are several z Hadoop approaches available, ranging from Hadoop on Linux to an outboard Hadoop cluster under z governance to a cloud model that integrates with SoftLayer.

Session zAD1876; Bluemix to Mainframe – Making Development Accessible in the Cloud; presenter Rosalind Radcliffe

Cloud capability and technology is changing the way enterprises go to market. DancingDinosaur interviewed Radcliffe for a posting on DevOps for the mainframe in March. DevOps is about bringing the entire organization together, including development and operations, to more efficiently deliver business value be it on premise, off premise, or in a hybrid cloud environment. This session promises to explore how IBM DevOps solutions can transform the enterprise into a high quality application factory by leveraging technology across platforms and exploiting both systems of record and systems of engagement applications. It will show how to easily expose your important data and customer applications to drive innovation in a nimble, responsive way, maintaining the logic and integrity of your time-tested systems.

The emergence of APIs has changed how organizations build innovative mobile and web applications, enter new markets, and integrate with cloud and third party applications. DancingDinosaur generally refers to this as the API economy and it will become only more important going forward. IBM z Systems data centers have valuable assets that support core business functions. Now they can leverage these assets by exposing them as APIs for both internal and external consumption. With the help of IBM API Management, these organizations can govern the way APIs are consumed and get detailed analytics on the success of the APIs and applications that are consuming them. This session shows how companies can expose z Systems based functions as APIs creating new business opportunities.
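The pattern behind exposing core functions as APIs can be sketched in a few lines of Java. This is purely hypothetical: `lookupBalance` is a stub standing in for a real CICS/DB2-backed core function, and the JSON wrapper stands in for whatever an API gateway such as IBM API Management would actually expose; none of this reflects the product's real interface.

```java
import java.util.Locale;

public class ApiSketch {
    // Stub standing in for a core z Systems function
    // (e.g., a CICS transaction or DB2 z/OS lookup).
    static double lookupBalance(String accountId) {
        return 1234.56; // hypothetical fixed value for illustration
    }

    // Wrap the core function in an API-style JSON response that an
    // internal or external consumer could call over HTTP.
    public static String balanceApi(String accountId) {
        return String.format(Locale.US,
                "{\"account\":\"%s\",\"balance\":%.2f}",
                accountId, lookupBalance(accountId));
    }

    public static void main(String[] args) {
        System.out.println(balanceApi("A-100"));
    }
}
```

The design point is that the core function itself does not change; the API layer around it is what gets governed, versioned, and measured with analytics.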

What happens when you combine the most powerful commercially available machine on the planet with the latest iteration of the most popular programming language on the planet? An up to 50% throughput improvement for your generic applications and up to 2x throughput improvement for your security-enabled applications – that’s what! This session covers innovation and performance of Java 8 and IBM z13. With features such as SMT, SIMD and cryptographic extensions (CPACF) exploitation, IBM z Systems is once again pushing the envelope on Java performance. Java 8 is packed with features such as lambdas and streams along with improved performance, RAS and monitoring that continues a long roadmap of innovation and integration with z Systems. Expect to hear a lot about z13 at Edge2015.
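For a flavor of the lambdas and streams the session highlights, here is a minimal, generic Java 8 sketch (nothing z-specific about it): a lambda predicate filters a list of transaction amounts and a stream pipeline sums the survivors.

```java
import java.util.Arrays;
import java.util.List;

public class StreamDemo {
    // Sum only the transaction amounts above a threshold, using a
    // Java 8 stream pipeline with a lambda filter.
    public static double totalAbove(List<Double> amounts, double threshold) {
        return amounts.stream()
                      .filter(a -> a > threshold)     // lambda predicate
                      .mapToDouble(Double::doubleValue)
                      .sum();
    }

    public static void main(String[] args) {
        List<Double> txns = Arrays.asList(100.0, 5.0, 250.0, 40.0);
        System.out.println(totalAbove(txns, 50.0)); // 350.0
    }
}
```

Pipelines like this are also where JIT optimizations and hardware features such as SIMD can pay off without any change to the application code.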

Of course, there is more at Edge2015 than just z Systems sessions. There also is free evening entertainment. This year the headliner act is Penn & Teller, a pair of magicians. DancingDinosaur’s favorite, however, is Grace Potter, who delivers terrific hard rock and roll. Check her out here.

DancingDinosaur is Alan Radding, a veteran IT analyst and writer. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing on Technologywriter.com and here. And join DancingDinosaur at IBM Edge2015. You will find me hanging out wherever people gather around available power outlets to recharge mobile devices.

About DancingDinosaur author

Alan Radding, the author of DancingDinosaur, is a 20-year IT industry analyst and journalist covering mainframe, midrange, PC, web, and cloud computing. Feel welcome to check out his website -- http://www.technologywriter.com.