Richard Lewis is an IBM employee who is part of the IBM Washington System Center, Advanced Technical Support Organization supporting z/VM and Linux on System z.

In August of this year IBM announced the acquisition of CSL International, a leading provider of virtualization management technology. As part of the sales enablement strategy for this new IBM technology, a team at the International Technical Support Organization (ITSO) in Poughkeepsie, NY, is beginning work to produce a Redbooks publication designed to help customers implement this new technology, formerly called CSL-WAVE and now named IBM Wave for z/VM. IBM Wave is a provisioning and productivity management solution for simplifying the control and usage of virtual Linux servers running on the IBM z/VM operating system.

This is the first week of the project, so the team has been spending time building a table of contents for the book and becoming familiar with the technology. On Wednesday we installed IBM Wave in the first member of a four-member z/VM 6.3 Single System Image cluster.

We began by executing the z/VM and Directory Maintenance Facility (DIRMAINT) commands necessary to prepare for the installation of CSL-WAVE. Our environment had much of the preliminary setup already done, such as configuration of the z/VM System Management API (SMAPI) and of the DIRMAINT EXTENT CONTROL file. In addition, the Linux virtual machine to host the IBM Wave knowledge base, web server, and background task scheduler was already created for us. The Linux distribution was SUSE Linux Enterprise Server (SLES) 11 SP2.

Once all of the preliminary steps were completed we proceeded to install the IBM Wave rpm file. This went very smoothly and lasted only a couple of minutes. At the end of the rpm install we had the web server, knowledge base and background task scheduler up and running.

The next step was to point a browser at the IP address of the WAVESRV virtual machine (running the knowledge base, web server, and background task scheduler) to download the Java Web Start GUI client. The web page that is displayed provides a link for the GUI client; it also provides a really convenient tool to test whether or not the z/VM SMAPI is properly configured. This is a capability that many z/VM customers have requested for quite some time, because there has never been a convenient way to check that SMAPI is running properly. The tool from IBM Wave provides just that capability.

As can be seen in Figure 2, the tool will connect to the z/VM System Management API and test the network connectivity as well as the capability to execute API requests that use the DIRMAINT facility.
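The basic reachability part of such a check can be sketched in a few lines of Python. This is a hypothetical stand-in, not the Wave tool itself; the host name and port below are invented, so substitute the values from your own SMAPI configuration:

```python
import socket

def smapi_reachable(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to the SMAPI server can be opened.

    This only proves network reachability; it does not verify that the
    SMAPI request servers or DIRMAINT are configured correctly.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example (hypothetical host and port from your SMAPI configuration):
# smapi_reachable("zvm.example.com", 44444)
```

A full test like the one in the Wave GUI goes further and issues actual API requests through DIRMAINT, which this sketch does not attempt.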

After running the test tool, we invoked the CSL-WAVE GUI. The GUI prompts for a userid and password that will become a super user for further configuration. From this point we needed to configure the license and define the processors and systems for CSL-WAVE to manage. Details of that process are best left for another installment. Even with plenty of time to discuss what was happening, we were completely installed in less than 3 hours. Quite remarkable.

A year and a half ago, IBM Wave for z/VM came onto the scene to provide a simplified and cost-effective way for companies to harness the consolidation capabilities of the IBM z Systems platform and its ability to host the workloads of tens of thousands of commodity servers. In December 2015, an IBM Redbooks residency started work on important updates to the IBM Redbooks publication, IBM Wave for z/VM Installation, Implementation, and Exploitation, SG24-8192. IBM Wave Release 2 further expands the capabilities by delivering increased support for Linux distributions and devices, as well as additional enterprise-grade security and performance enhancements.

Some of the updates in this book include instructions on how to do a bare-metal installation of Red Hat Enterprise Linux servers using IBM Wave for z/VM.

Additionally, this IBM Redbooks publication includes a new chapter that describes IBM Wave / BTS parameters that might influence performance and resource usage. This chapter discusses:

The IBM Wave Parameters window

General parameters

The BTS Manager window

How to restart the Background Task Scheduler (the BTS)

How to produce a dump of the BTS

We’ve also included an appendix in this version of the IBM Redbooks publication that includes, among other things, IBM Wave for z/VM flow charts that can assist you in planning, preparation, installation, and setup of your IBM Wave for z/VM system.

There have been many changes in the past 25 years in our IT world that have led to the need for autonomics in our database environment, especially in DB2 on z/OS. But while we always talk about the solutions, the question arises: how do you actually implement them?

Each company may have different priorities which dictate the order of the implementation steps. Company A may need to apply intelligence to their reorg utilities as their top priority while Company B may need to address utility standards because of the impending retirement of the support person for their homegrown DB2 utility generator. Regardless of the order in which you start, IBM provides the software for a comprehensive autonomic environment that addresses the business problems that most companies face: limited expertise, greater application availability, or the need to control costs by moving work to off peak hours.

Here we will show you how to move from the traditional steps into a modernized autonomic environment by implementing an active strategy for your DB2 maintenance tasks, following these five steps:

Step 1: Collect the metrics and related statistics for utility maintenance
First of all, you have to collect all relevant statistical data on your DB2 objects. This data can be used to filter out objects that are physically disorganized. Your goal is to run maintenance by exception and stop wasting resources on utilities that do not need to run. IBM provides two DB2 stored procedures that collect statistics about the objects defined in a profile, generate an alert in a table when the statistics exceed your criteria, and perform the RUNSTATS your optimizer needs.
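The maintenance-by-exception idea can be illustrated with a small sketch. The object names, statistics, and thresholds below are invented for illustration; in practice the stored procedures and the alert table do this filtering for you:

```python
# Sketch of maintenance by exception: schedule a REORG only for objects
# whose collected statistics exceed the thresholds you define.
THRESHOLDS = {"pct_disorganized": 10.0, "overflow_rows": 1000}

def needs_reorg(stats: dict) -> bool:
    """Return True if any statistic crosses its threshold."""
    return any(stats.get(name, 0) > limit for name, limit in THRESHOLDS.items())

objects = [  # invented sample statistics per table space
    {"name": "DB1.TS1", "pct_disorganized": 3.2, "overflow_rows": 12},
    {"name": "DB1.TS2", "pct_disorganized": 27.5, "overflow_rows": 40},
    {"name": "DB2.TS9", "pct_disorganized": 1.0, "overflow_rows": 5000},
]

to_reorg = [o["name"] for o in objects if needs_reorg(o)]
print(to_reorg)  # only the disorganized objects are selected
```

Only DB1.TS2 and DB2.TS9 cross a threshold, so only they are selected; the well-organized object is skipped, which is exactly the resource saving the exception approach delivers.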

Step 2: Group your objects
Grouping your DB2 objects can be achieved in several ways. The DB2 Automation Tool provides a function called Object Profiles, which offers more flexibility and functions for object grouping. Using these Object Profiles, you can include objects on which you want to run utilities, as well as exclude objects that you want the utilities to ignore. Object Profiles are similar to DB2 TEMPLATEs: they allow table spaces and index spaces to be chosen for processing in much the same way.

Step 3: Create exceptions and thresholds for utilities
The next step in implementing an active autonomic strategy is to run all your maintenance by exception filtering. The DB2 Automation Tool provides a function called the Exception Profile, which contains the conditions under which you want to run utilities. When combined with Object Profiles and Utility Profiles, the Exception Profiles act as a filter against the objects specified in the Object Profile.

Step 4: Build optimized utility JCL and jobs
Before execution, you first have to build the optimized utility JCL and jobs. For this, Job Profiles are used to connect the different profiles created in the DB2 Automation Tool. A Job Profile is the master profile that associates all the other profiles - Utility Profiles, Object Profiles, and Exception Profiles - together. The combined profiles, headed by the Job Profile, form the basis of a DB2 Automation Tool task. We can submit this task manually or schedule it by using the DB2 administrative task scheduler.

Step 5: Execute the jobs in a predefined maintenance window
Today, a typical maintenance strategy has predefined jobs in a job scheduler. These jobs are run in maintenance windows weekly, monthly, and quarterly. With the Autonomics Framework, you can leverage your own batch scheduler to spawn evaluation jobs as well as to start the Autonomics Director procedure at any time during your maintenance window.

After following these steps and transforming your passive environment into an active autonomic one, corrective actions are taken automatically by the system: it monitors and analyzes the related metrics to proactively make recommendations and even execute them. These are tasks typically done by a DBA. By automating these basic administration tasks, you give DBAs the freedom to work on tasks of higher business value. More important, they no longer rely on old, homegrown processes that are difficult to maintain and to keep current with new DB2 versions.

And how about you - have you already moved from a passive to an active strategy in your environment? What benefits have you seen? Tell us about the experiences you gained during the change process.

What makes mainframe I/O stand out? The combination of versatile connectivity options, the use of open standards, and the separation of data processing from I/O operations.

Data is produced on an unprecedented scale and access to that data must be quick, safe and guarantee integrity. Data is an organization's most valuable asset in the digital age. As the growth continues it is essential that the technology improves to keep the organization competitive.

The most efficient IT infrastructures that handle today's workloads usually have well-balanced systems with superior data processing and I/O capabilities that are responsive and reliable. Such I/O capabilities have been a standard component of the IBM mainframe architecture since they were first introduced with the IBM S/360 in 1964. Over the past 50+ years, I/O technologies have advanced significantly, and so have the mainframe I/O capabilities. The mainframe has evolved from the original parallel channels, where I/O devices were connected directly using two copper cables (a bus cable, which carried one byte of information each way, and a tag cable, which indicated the meaning of the data on the bus cable), to today's Fibre Connection (FICON®), where optical transmitters, Fibre Channel switches/directors, and fiber-optic cables transport data at link rates of 2 Gbps, 4 Gbps, 8 Gbps, or 16 Gbps.

In addition, z Systems platforms have a unique channel subsystem that delivers high I/O bandwidth. The channel subsystem (CSS) was added to the IBM mainframe architecture to provide a pipeline through which data can be exchanged between systems or between a system and external devices via storage area networks and local area networks. The CSS is the channel path management layer that enables communication to and from system memory and peripheral devices at very fast rates.

The z Systems platforms have dedicated system assist processors (SAPs) in addition to the general purpose and specialty processors. I/O requests are handled by the SAPs, freeing up the general purpose and specialty processors to do other work. The IBM z13 offers up to 24 SAPs, supporting millions of I/O operations per second. This is also possible because all I/O features offered on the z Systems platforms offload some of the I/O operations to the hardware, using licensed internal code (LIC). The result is a significant improvement in both latency and bandwidth for transporting data.
In the z13, I/O features are plugged into an industry-standard PCIe I/O drawer with PCIe Gen3 interconnects, delivering throughput speeds of 16 GBps to and from the I/O features. Different types of I/O features are available for each channel or link type, and they can be installed or replaced concurrently, with no disruption to the production environment. Each PCIe I/O drawer can support up to 320 FICON channels, which provide unmatched bandwidth to back-end storage systems, while up to 96 OSA-Express ports allow for direct high-speed Ethernet connectivity.

Other I/O features used for system-to-system communications are Integrated Coupling Adapter (ICA SR) and 10GbE RoCE Express (for IBM z/OS®-to-z/OS communications), while HiperSockets technology can be used for communications between logical partitions within the z Systems platform.

If you would like to learn more about z Systems connectivity options, typical uses, coexistence, and relative merits of the available I/O features, go to:

Eduardo Simoes Franco is a Consulting IT Specialist. His focus is on supporting and promoting IT solutions on the IBM System z platform. He has fifteen years of experience with IT solutions and Linux, starting at IBM as a Linux on System z Specialist. He has CLA and LPI certifications.

Many people today ask whether the Mainframe -- now called IBM System z -- is ready for new technologies such as Cloud and Big Data. Only a little analysis is needed to see that Linux on System z is a perfect match for both Cloud and Big Data. Now 50 years young, this incredible machine -- or it's better to say "technology" -- gives us surprises day after day. Even the most skeptical IT managers can see the value of Linux on System z.

But what makes the System z so fantastic, what makes such a big difference? Here's a short list:

RACF security

Built-in hardware redundancy providing 99.999% availability

Ultra fast specialized processors

The fastest I/O in the industry - CPUs and I/O channels able to process many terabytes of data per second

HiperSockets networking, which uses memory for high-speed communication between virtual servers

A proactive management environment

Flexibility brought by the zBX hybrid solution

The list goes on and on, but those are some of my favorites. In addition to these characteristics, System z has something truly innovative: the ability to reinvent itself.

In the early 2000s, the mainframe was joined with the most promising operating system on the market - Linux. Together, they have evolved and learned to work as a team, and today they make an awesome pair! System z processors have improved to a clock speed of 4.2 GHz for the zBC12 and 5.5 GHz for the zEC12, making Linux even more comfortable. The Penguin, meanwhile, has brought hundreds of open source applications to System z, making the mainframe even more versatile. With the stability, performance, safety, and scalability of System z, and with the efficiency, effectiveness, and resilience of Linux, we have an ideal platform to meet the demands of the ever-changing IT world. Whether you're a young IT professional or have years of experience, whether you're interested in Cloud or Big Data or both, Linux on System z will meet your needs.

IBM invests extensively in z/VM, the rare jewel created in the 1970s as a virtualization environment. Linux feels right at home on z/VM, as if it was born to work on System z hardware. z/VM is highly optimized and stable because its hypervisor runs on microcode, without overlapping layers of software. But if z/VM is so old, it probably has an old-fashioned interface, right? Wrong! IBM Wave for z/VM, the newest member of the System z family of products, complements z/VM with a graphical, intuitive interface that facilitates the daily management of the IT environment. With IBM Wave, it's easy to create new servers and maintain the health of existing ones.

Here's the big question: Can I save money running Linux on System z from a Total Cost of Ownership (TCO) perspective?
If we examine the System z platform, with z/VM and Linux, we see that the number of software licenses required is reduced because System z has an incredible specialized processor to run Linux, known as the Integrated Facility for Linux (IFL), that is equivalent to multiple x86 cores in a single core.

Because many software products today are licensed according to the number of cores that are used, you will notice the difference when, for example, the number of software licenses can be reduced from 10 core licenses for your x86 systems to a single license on System z (1 IFL = 1 core = 1 license).
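The arithmetic is simple enough to write down. This is a deliberately simplified sketch of the core-based licensing model described above; real license terms vary by product and vendor:

```python
def licenses_saved(x86_cores: int, ifl_count: int) -> int:
    """Core-based licensing: each x86 core needs one license, and each
    IFL counts as a single core (1 IFL = 1 core = 1 license)."""
    return x86_cores - ifl_count

# Consolidating a workload licensed on 10 x86 cores onto a single IFL:
print(licenses_saved(10, 1))  # 9 fewer licenses to pay for
```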

I typically start many logical screens when using ISPF. I reckon most system programmers do the same, and tend to use the same set of logical screens for their sessions. Until now, we had to start each screen manually. ISPF allows a user to have up to 32 logical screens, but there has been no automation for creating these logical screens at ISPF startup.

Now with z/OS V2R1 ISPF allows you to define a set of logical screens that are automatically started when ISPF is invoked.

To enable this support you have to be at z/OS V2R1. When you start ISPF, you specify the name of a variable on the ISPF start command.

You can:

Define your own variable

Use the default variable ZSTART

The variable must contain the identifier ISPF, followed by the command delimiter, followed by the command stack used to start the logical screens.

For example if I choose a variable name MYSTART then I could define the variable and assign the following values:
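Following the format described above (the ISPF identifier, then the command delimiter, then the command stack), the variable might hold something like this hypothetical value, which opens DSLIST (option 3.4), SDSF, and the Command Shell (option 6), then swaps back - the exact command stack depends on your site's panels and commands:

```
ISPF;3.4;START SDSF;START 6;SWAP 1
```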

The name of the variable is specified as an option with the ISPF or ISPSTART command, for example:
ispf mystart

If a variable name is not specified with ISPF or ISPSTART, the default profile variable ZSTART is used for the initial command stack.
If ZSTART is not found or does not contain the ISPF identifier, ISPF starts normally.

You can add a variable or modify ZSTART from Dialog Test -> Variables (7.3)
For example, here I'll update the ZSTART variable to start the following screens: DSLIST, SDSF, and Command Shell, and then switch back to DSLIST.

Now when I start ISPF from the TSO READY screen using just ISPF (specifying no variables on the start command), all the logical screens defined in the ZSTART variable are started.

You can bypass the startup of any logical screens defined in the ZSTART variable by using the new BASIC keyword when starting ISPF.

New XALL command

I've almost forgotten about the new XALL command (thanks Yves Colliard for the suggestion to include this new feature). Take a look at the comments section to see how Yves has started ISPF sessions using REXX.

At the end of the day you're ready to log off ISPF and end all of your logical screens. This could take many keystrokes and, for a lazy sysprog like me, a bit too much patience. However, with z/OS V2R1 there's a handy new command, XALL, which will attempt to terminate all of the ISPF logical screens for you.

The new =XALL command is provided to help terminate all logical screens with one command.

The =X command is propagated to every logical session to terminate each application that supports =X.

If =X is not supported, the termination process halts on that logical screen.

Once that logical screen is terminated manually, =XALL processing continues for each remaining logical screen.

So if I have several logical screens open and I want a fast exit, I type =XALL on the command line, like so:

With a bit of luck, all the screens support the =X command and I am dumped back out to TSO.

For more information on System z and the z/OS operating system, see the following IBM Redbooks publications:

There are times when you have no MCS consoles available and your only option is to use the HMC systems console. Unfortunately, the systems console is not very intuitive to use and doesn't present an interface similar to a traditional MCS console. z/OS Version 2 Release 1 comes to the rescue with support for the HMC integrated 3270 console, offering us a new type of MCS-like console called the HMCS. Let's take a look at this new HMCS console type.

HMCS

The HMCS console is available during IPL and before and after SMCS availability

Today’s security requires consistent protection against threats and malware. Enterprises must be flexible while having a secure infrastructure to effectively protect a company's most valued asset - its data - and access to that data through the cloud. Running many distributed servers involves much effort to install, manage, maintain, and secure them. To contain this effort, many enterprises are consolidating these servers on z Systems or LinuxONE by using z/VM as the hypervisor, taking advantage of virtualization technologies to use the hardware effectively and to simplify administration tasks.

It is generally held that “security through obscurity” is not a valid method. Using open, well-established security methods implemented correctly provides the best defense. For example, instead of developing your own cryptographic libraries, you should instead use open, established ones that have been vetted for many years. Hiding information creates more system administration work and any mistakes may fail to protect against attacks.

Implementing the enterprise security policy and following the least privilege principle increases the strength of security in your enterprise cloud.

In a LinuxONE environment, the building blocks of the Cloud environment could include:

The z/VM Directory Manager (DirMaint)

SMAPI

Extreme Cloud Administration Toolkit (xCAT),

z/VM Cloud Manager Appliance

CMA allows the use of OpenStack to deploy Linux guests on z/VM and the integration of z/VM into larger environments. The CMA version is upgraded to OpenStack Liberty and is fully supported as a z/VM component without additional license requirements. CMA manages only z/VM platforms; it does not deploy guests onto non-z/VM platforms. The CMA changes provide several options for using CMA, either as a stand-alone cloud or integrated with another OpenStack environment.

Lydia Parziale is a Project Leader for the ITSO team in Poughkeepsie, New York, with domestic and international experience in technology management including software
development, project leadership, and strategic planning. Her areas of expertise include business development and database management technologies. Lydia is a certified PMP and an IBM Certified IT Specialist with an MBA in Technology Management and has been employed by IBM for 25+ years in various technology areas.

Before you begin creating and provisioning virtual machines for your business applications, ensure your KVM hypervisor is capable of sustaining these applications. All the virtual machines will rely on the hypervisor's integrity and availability. The three key areas to initially address are:

Managing and monitoring resources, and offering interfaces for configuring or modifying virtual machines as needed

Backing up data and executing data recovery aligned to the application's needs.

Protecting data and resources

KVM hypervisor security is critical because it typically has access to all of the virtual machines' resources under its control. If the hypervisor is compromised, an unauthorized user could potentially gain access to confidential data. Good security practices are essential for establishing business trust. How do you do this? Several open source and commercial tools can help effect good security practices and policies. To establish consistency, the same tools available to secure a KVM environment can also secure Linux virtual machines.

Some of the key software components to secure the KVM hypervisor are:

FirewallD for network security

LDAP for centralized authentication

SELinux for access control policies that confine access to data

Linux Audit to provide detailed audit trail information you might not find in the system log.

In addition, there is support for cryptographic hardware in the IBM z Systems platform that can perform DES, TDES, AES, RSA, SHA-1, and SHA-2 cryptographic operations. CP assist for cryptographic functions (CPACF) instructions are available to KVM for IBM z and its Linux virtual machines when the kernel modules are loaded.

Managing and monitoring resources

The Linux ecosystem offers open source and commercial monitoring tools by which the KVM for IBM z resources can be managed and monitored. There are three primary methods that can be used:

The Linux shell in KVM for IBM z is available to handle almost any resource configuration. CPUs can be configured on or off, memory can be enabled or disabled, and storage devices and network interfaces can be added or removed.

Using the IBM z Systems HMC, either in DPM mode or in standard PR/SM mode, additional processors or memory can be dynamically added to a logical partition. With DPM mode, additional storage devices and network interfaces can also be added or configured dynamically.

KVM for IBM z provides a number of built-in open source monitoring packages, such as Nagios monitoring plug-ins, SNMP agents, standard libvirt APIs, sar, SystemTap, and many more. And if you find that what is provided does not exactly fit your needs, KVM for IBM z Systems provides an SDK. The SDK has the compilers and development libraries needed to build additional software projects.
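The CPU on/off configuration mentioned in the first method above is typically done through the standard Linux sysfs interface. The snippet below is a sketch that only builds and uses the relevant paths; actually writing to them requires root on a real KVM host, and cpu0 is often not removable:

```python
from pathlib import Path

def cpu_online_path(cpu: int) -> Path:
    """sysfs control file for hot(un)plugging a CPU on Linux."""
    return Path(f"/sys/devices/system/cpu/cpu{cpu}/online")

def set_cpu_online(cpu: int, online: bool) -> None:
    """Enable or disable a CPU by writing '1' or '0' (requires root)."""
    cpu_online_path(cpu).write_text("1" if online else "0")

print(cpu_online_path(2))  # /sys/devices/system/cpu/cpu2/online
```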

Libvirt, installed with KVM for IBM z, is an open source library of APIs that includes a daemon and management tools. You can create, delete, run, stop, and manage your virtual servers using the virsh command. Besides virsh, there is a graphical tool called Virtual Machine Manager, more commonly known as "virt-manager". Virt-manager can handle most of the common lifecycle functions of a virtual server, including installation. It also provides basic monitoring, console access, and resource management for the virtual server and some KVM host resources.
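Those lifecycle operations can also be driven from a script. The sketch below only assembles virsh command lines; the domain name is hypothetical, and on a real KVM host you would hand the resulting list to subprocess.run:

```python
import subprocess

def virsh_cmd(action: str, domain: str) -> list:
    """Build a virsh lifecycle command: start, shutdown, destroy, etc."""
    allowed = {"start", "shutdown", "destroy", "reboot", "undefine"}
    if action not in allowed:
        raise ValueError(f"unsupported action: {action}")
    return ["virsh", action, domain]

def run(action: str, domain: str) -> None:
    # Executes the command on a real KVM host (requires libvirt installed).
    subprocess.run(virsh_cmd(action, domain), check=True)

print(virsh_cmd("start", "linuxvm01"))  # ['virsh', 'start', 'linuxvm01']
```

For anything beyond simple scripting, the libvirt API bindings are the more robust choice, since they report structured errors rather than exit codes.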

Many open source tools are typically included in Linux distributions, and if they are not included you can build them from source. To maintain a consistent approach, choose tools that manage both the KVM hypervisor and its virtual machines.

Backing up data and executing data recovery

A KVM for IBM z environment can be backed up in a number of ways; when designing your backup and recovery strategy, consider the following questions:

Can the virtual machines stay up and running, or must they be shut down during backup and recovery?

How is the disk storage provisioned to the virtual machine?

What is the recovery point objective (RPO)?

What is the recovery time objective (RTO)?

The KVM hypervisor and virtual machine backups can be categorized as:

The core operating system disk needed for boot

The additional storage used to host image files and system logs

Key configuration files such as for networking and virtual machine definitions

There are multiple ways to back up each of these categories. The core operating system disk could, in its most basic form, be backed up via Linux dd commands from another system. You might want to do this right after installation. You could also utilize FlashCopy or disk mirroring technologies to create a consistent point-in-time copy without taking down the KVM hypervisor or virtual machine. To exploit FlashCopy or similar technology, there typically is a requirement to install a command-line interface program to direct the FlashCopy operation and to have network connectivity to the console of the storage subsystem.
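A minimal sketch of the dd-based approach follows. The device and target paths are invented; run the copy from another system while the disk is not in use, and verify the image before relying on it:

```python
import shlex

def dd_backup_cmd(source_dev: str, image_file: str, block_size: str = "4M") -> str:
    """Build a dd command that copies a whole disk device to an image file.

    conv=sync,noerror keeps the copy going past read errors (padding bad
    blocks), and status=progress shows throughput while the copy runs.
    """
    return (f"dd if={shlex.quote(source_dev)} of={shlex.quote(image_file)} "
            f"bs={block_size} conv=sync,noerror status=progress")

print(dd_backup_cmd("/dev/dasdb", "/backup/kvmhost-root.img"))
```

A raw dd image is simple but large; for frequent backups the FlashCopy, QCOW2 snapshot, or file-level options discussed here are usually more practical.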

The additional storage used to host image files could also use FlashCopy or disk mirroring, but other options exist as well. A QCOW2 snapshot or a LVM snapshot are examples of other options that may help you minimize downtime.

Key configuration files such as the KVM hypervisor network definitions, Open vSwitch definitions, zipl.conf, zfcp.conf and others could be backed up via file based tools such as rsync. The amount of storage these files take is relatively small.

It may also be useful to have partition, volume group, LVM, and file system information captured and recorded in the event you need to perform a recovery. This information could be easily gathered on a regular basis and transmitted to a remote archive.

Another option would be to utilize file level backups either with open source tools like rsync or commercial tools like IBM Tivoli® Storage Manager (TSM). If a virtual machine were destroyed one approach might be to provision a new base Linux and restore all the files from the most recent backup, rather than using disk image level backups and restores.

Part of the planning for backup and recovery also needs to consider the middleware. For example a database would typically utilize its own utilities in order to provide backups without any or minimal down time. A comprehensive backup and recovery strategy typically involves multiple backup methods and the recovery from those backups should be regularly tested.

To help you plan and deploy a successful and effective environment, read Getting Started with KVM for IBM z Systems, SG24-8332, at:

Bill White is an IBM Redbooks Project Leader for IBM z Systems. He works with technical experts from around the globe to produce technical enablement content.

This week's guest blogger is Ravi Kumar. Ravi is a Senior Managing Consultant at IBM (Analytics Platform, North American Lab Services) and a Distinguished IT Specialist (Open Group certified) with more than 23 years of IT experience. He has a Master of Business Administration (MBA) degree from the University of Nebraska, Lincoln. He has contributed to seven other Redbooks publications in the areas of databases, Analytics Accelerator, and Information Management tools. His social profile can be viewed at: http://www.linkedin.com/in/ravikalyanasundaram

IBM SPSS Modeler is a powerful analytic tool that supports all phases of the data analytics process, including data preparation, model building, deployment, and model maintenance. You can leverage SPSS Modeler to build analytical models for statistical analysis, data mining, and machine learning. Data scientists can work with the user-friendly SPSS Modeler client interface to access mainframe data with the same ease as data from any other platform they are accustomed to. SPSS Modeler can also take advantage of in-database transformation and in-database modeling using IBM DB2 Analytics Accelerator for z/OS (IDAA) as the data analytics hub on z/OS.

Until recently, z Systems did not offer an efficient solution for complex mathematical processing. So, in the past, you may have resorted to offloading operational data (a snapshot from a prior point in time) from z Systems to a distributed platform in order to implement machine learning; those solutions often produced obsolete and unreliable results, in addition to unwanted security exposures.

Now, with IBM DB2 Analytics Accelerator you can enable Machine Learning on your OLTP applications that produce and consume z Systems data, simultaneously accelerating the execution of data transformation and analytical modeling processes with the power and performance of MPP (Massively Parallel Processing) architecture in IBM Netezza appliance. All without offloading data from z Systems to distributed environments (which by the way, also eliminates a potential data breach situation).

In-transaction scoring using the predictive models created with this approach can scale with your DB2 for z/OS transactional environment. This is accomplished through in-database scoring using the SPSS Scoring Adapter for DB2 for z/OS, which performs real-time scoring on your predictive models to quickly reveal what's interesting in your data. When the predictive model is published in SPSS, the Scoring Adapter for DB2 for z/OS uses PACK/UNPACK functions for efficient parameter movement and can create an SQL statement with the HUMSPSS.SCORE_COMPONENT UDF. This generated SQL statement can be embedded in your OLTP application. The other popular alternative is to generate the scoring model in the open-standard PMML (Predictive Model Markup Language) format. The score can then be combined with your business rules to make real-time decisions on your DB2 for z/OS data from within your mainframe applications. You may also use a vendor tool called Zementis, which uses the generated PMML to implement in-application scoring in CICS and Java applications accessing DB2 for z/OS.

Unsupervised learning algorithms like K-Means and TwoStep use descriptive statistics to analyze the natural patterns and relationships that occur within your operational data on DB2 for z/OS. Unsupervised learning models can identify clusters of similar records, or relationships between different fields, within an accelerated DB2 for z/OS table. For example, the K-Means and TwoStep clustering algorithms (available through stored procedures such as INZA.KMEANS and INZA.TWOSTEP) can enable machine learning in areas like market segmentation, geostatistics, and market basket analysis (through association learning).
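To make the clustering idea concrete, here is a minimal K-Means sketch in plain Python. It is purely illustrative: in an IDAA environment the clustering runs in-database through the stored procedures named above, not in client code, and the sample data below is made up.

```python
# Minimal K-Means sketch in plain Python (illustrative only; in an IDAA
# environment the clustering runs in-database, e.g. via INZA.KMEANS).
import random

def kmeans(points, k, iterations=20, seed=42):
    """Cluster 2-D points into k groups; returns (centroids, assignments)."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    assignments = [0] * len(points)
    for _ in range(iterations):
        # Assignment step: attach each point to its nearest centroid.
        for i, (x, y) in enumerate(points):
            assignments[i] = min(
                range(k),
                key=lambda c: (x - centroids[c][0]) ** 2 + (y - centroids[c][1]) ** 2,
            )
        # Update step: move each centroid to the mean of its members.
        for c in range(k):
            members = [points[i] for i in range(len(points)) if assignments[i] == c]
            if members:
                centroids[c] = (
                    sum(p[0] for p in members) / len(members),
                    sum(p[1] for p in members) / len(members),
                )
    return centroids, assignments

# Two obvious segments: one near (0, 0), one near (10, 10).
data = [(0, 0), (1, 0), (0, 1), (10, 10), (11, 10), (10, 11)]
centroids, labels = kmeans(data, k=2)
```

With clean, well-separated data like this, the two groups end up in different clusters regardless of the random initialization.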

Supervised learning uses historic (training) data to construct decision trees, and the constructed tree is then used to predict future values. Classification can be used to identify which group or type a new record belongs to, based on the values of its key fields, as it is inserted into your DB2 for z/OS table. Regression can be used to predict future values of a given field based on its past values. Algorithms like Naive Bayes, decision trees, and regression trees can be used to solve classification and regression problems. Thus, predictive models built with supervised learning algorithms (available through stored procedures such as INZA.DECTREE, INZA.REGTREE, and INZA.NAIVEBAYES) can be used to predict whether a customer will buy or leave, detect credit card fraud, identify up-selling opportunities, gauge voters' responsiveness to different types of election campaigns, and so on.
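As an illustration of the classification side, here is a tiny categorical Naive Bayes sketch in plain Python. In practice the model would be built in-database (for example through INZA.NAIVEBAYES); the toy "churn" data, field values, and labels below are hypothetical.

```python
# Tiny categorical Naive Bayes sketch (illustrative only; the toy "churn"
# data and field values below are hypothetical).
import math
from collections import Counter, defaultdict

def train(rows, labels):
    """rows: list of feature tuples; labels: list of class labels."""
    class_counts = Counter(labels)
    # feature_counts[label][feature_index][value] = number of occurrences
    feature_counts = defaultdict(lambda: defaultdict(Counter))
    for row, label in zip(rows, labels):
        for i, value in enumerate(row):
            feature_counts[label][i][value] += 1
    return class_counts, feature_counts

def predict(model, row):
    """Pick the class maximizing log P(class) + sum of log P(value | class)."""
    class_counts, feature_counts = model
    total = sum(class_counts.values())
    best, best_score = None, float("-inf")
    for label, count in class_counts.items():
        score = math.log(count / total)
        for i, value in enumerate(row):
            # Laplace smoothing so unseen values do not zero out the product.
            score += math.log((feature_counts[label][i][value] + 1) / (count + 2))
        if score > best_score:
            best, best_score = label, score
    return best

# Toy data: (contract type, payment history) -> did the customer leave?
rows = [("month", "late"), ("month", "late"), ("annual", "ontime"),
        ("annual", "ontime"), ("month", "ontime"), ("annual", "late")]
labels = ["left", "left", "stayed", "stayed", "stayed", "stayed"]
model = train(rows, labels)
prediction = predict(model, ("month", "late"))   # a risky-looking profile
```

Given this training set, a month-to-month customer with late payments is classified as likely to leave, while an annual, on-time customer is classified as likely to stay.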

Summary: Neuroscientists say that pattern recognition and emotional tagging help humans make quick decisions. Algorithms are a big part of machine learning, and these algorithms can help executives make increasingly evidence-based decisions using hot operational data on z/OS. Executives can now combine modern machines' processing power with their own ingenuity to avoid the flawed decisions that are sometimes caused by emotional tagging.

Today, we’re delighted to share the latest member of the IBM z Systems family: the IBM z13s. We think you will like it. A lot.

The z13s delivers many exciting enhancements over its predecessor, the IBM zBC12.

The short list includes:

Accelerated data and transaction serving

Integrated analytics for insight

Access to the API economy

An agile application development and operations environment

Efficient, scalable, and secure cloud services

End-to-end security for data and transactions

The high levels of virtualization provide options for cloud deployment, to assist with such areas as application development and testing. The hypervisor is key to virtualization, and the z13s supports both hardware and software hypervisors (PR/SM, KVM, and z/VM).

The underlying architecture has expanded to enable new solutions, such as integrated analytics, that bring valuable opportunities for your business while supporting existing applications.

For enterprises aiming to bring their IT infrastructures into closer alignment with their business plans, the z13s offers unparalleled levels of flexibility through virtualization, analytical insight, and security. Enterprise-wide agility will help you embrace the challenges of the exploding on-demand digital age.

In the course of an IT career, many of us may have sat at our desks looking at a sluggish application and wondered, "If I increase the amount of memory here or there, will this improve performance?" And, hopefully, your next thoughts would have been about the impact on I/O operations and cost, CPU usage, and transaction response times.

Although the magnitude of these changes can vary widely based on a number of factors, including potential I/Os to be eliminated, resource contention, workload, configuration, and tuning, you should carefully consider whether your environment could benefit from the addition of more memory to your software functions.

Significant performance benefits can be experienced by increasing the amount of memory assigned to various functions in the IBM® z/OS® software stack, operating system, and middleware products. IBM DB2® and IBM MQ buffer pools, dump services, and large page exploitation are just a few of the functions whose ease of use and performance can be improved when more memory is made available to them.

Recently, an IBM Redbooks Redpaper was published that can help you to examine the performance implications of increasing memory in the following areas:

DB2 buffer pools

DB2 tuning

IBM Cognos® Dynamic Cubes

MDM with larger DB2 buffer pools

Java heaps and Garbage Collection tuning and Java large page use

MQ v8 64-bit buffer pool tuning

Enabling more in-memory use by IBM CICS® without paging

TCP/IP FTP

DFSort I/O reduction

Fixed pages and fixed large pages

Different environments, of course, may experience a wide range of performance benefits, but there does seem to be enough evidence to suggest that configuring more memory could be a positive enhancement for many installations, thanks to reduced I/O rates, improved transaction response times, and, in some cases, reduced CPU time.
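As a back-of-the-envelope illustration (with hypothetical numbers, not taken from the Redpaper), the link between a larger buffer pool and fewer I/Os can be sketched like this: a higher hit ratio means fewer getpage requests turn into synchronous disk reads.

```python
# Back-of-the-envelope sketch with hypothetical numbers (not from the
# Redpaper). A buffer pool "hit" is a getpage request satisfied from memory;
# every miss becomes a synchronous disk read.

def sync_reads_per_second(getpages_per_second, hit_ratio):
    """Disk reads needed because the requested pages were not in the pool."""
    return getpages_per_second * (1.0 - hit_ratio)

# Assume 50,000 getpage requests per second.
before = sync_reads_per_second(50_000, 0.90)   # roughly 5,000 reads/second
after = sync_reads_per_second(50_000, 0.98)    # roughly 1,000 reads/second
io_reduction = 1 - after / before              # about an 80% I/O reduction
```

The point of the sketch is that even a modest improvement in hit ratio, from 90% to 98% here, removes a large share of the synchronous reads, which is where the response time and CPU savings come from.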

To read more about this and see some examples, read the IBM Redbooks Redpaper:

Bill White is an IBM Redbooks Project Leader for z Systems Hardware, Networking, and Connectivity. He works with technical experts from around the globe to produce books, papers, guides, and blogs.

The IBM z Systems platform offers a framework for standards and open source, which are key to making virtualization effective, from creating and managing virtual machines through building and automating a cloud environment.

Kernel-based virtual machine (KVM) is an open source virtualization technology that turns the Linux kernel into an enterprise-class software hypervisor. KVM for IBM z Systems uses hardware virtualization support that is built into the z Systems platform, known as IBM Processor Resource/Systems Manager™ (PR/SM™). This means that KVM for IBM z can do things such as scheduling tasks, dispatching CPUs, managing memory, and interacting with I/O resources (storage and network) within the z Systems platform.

1. What is the importance of KVM for IBM z?

KVM for IBM z uses the common Linux-based tools and interfaces, while taking advantage of the robust scalability, reliability, availability, and high throughput that are inherent to the z Systems platform. And those strengths have been developed and refined on the z Systems platform over several decades.

The z Systems platform also has a long history of providing security for applications and sensitive data in virtual environments. It is the most securable platform in the industry, with security integrated throughout the stack (in hardware, firmware, and software).

In addition, KVM for IBM z is capable of managing and administering multiple virtual machines, which allows thousands of Linux-based workloads to run simultaneously on a single z Systems platform.

2. What is the advantage of using KVM for IBM z?

KVM for IBM z is an easy-to-deploy and simple-to-use hypervisor that integrates virtualization capabilities into the IT infrastructure, including:

Enabling the sharing of CPU and I/O (storage and networking) resources by virtual machines

Allowing for the over-commitment of CPU and memory, and the swapping of inactive memory

3. What interfaces does KVM for IBM z provide?

KVM for IBM z Systems provides standard Linux and KVM interfaces for management and operational control of the environment, such as:

The command-line interface (CLI) is the common, familiar Linux environment used to issue commands and interact with the KVM hypervisor. The user issues successive commands to change or control the environment.

Libvirt is open source software that provides low-level virtualization management capabilities for KVM and many other hypervisors; its command-line front end, virsh, can be used to interact with KVM.

An open source tool called Nagios can be used to monitor the KVM for IBM z environment.

4. What is the high-level architecture of KVM for IBM z?

KVM for IBM z runs in a z Systems logical partition (LPAR) and creates virtual machines as Linux processes. These processes use a modified version of another open source component, known as the Quick Emulator (QEMU). QEMU provides I/O device emulation and device virtualization inside the virtual machine.

The KVM for IBM z Systems kernel provides the core virtualization infrastructure. It schedules virtual machines on real CPUs and manages their access to real memory. QEMU runs in user space and implements virtual machines using KVM module functionality.

QEMU virtualizes real storage and network resources for a virtual machine, which in turn uses drivers (virtio_blk and virtio_net) to access these virtualized storage and network resources as shown in Figure 1.

Figure 1. KVM for IBM z Systems reference architecture
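To give a flavor of how such a virtual machine is described, here is a much-simplified, hypothetical libvirt domain definition using virtio devices; all names, paths, and sizes are placeholders, and a real definition would carry additional elements.

```xml
<domain type='kvm'>
  <name>linux-guest1</name>
  <memory unit='GiB'>4</memory>
  <vcpu>2</vcpu>
  <os>
    <type arch='s390x'>hvm</type>
  </os>
  <devices>
    <!-- virtio_blk: seen by the guest as paravirtualized block device vda -->
    <disk type='block' device='disk'>
      <driver name='qemu' type='raw'/>
      <source dev='/dev/mapper/guest1-root'/>
      <target dev='vda' bus='virtio'/>
    </disk>
    <!-- virtio_net: paravirtualized network interface on bridge br0 -->
    <interface type='bridge'>
      <source bridge='br0'/>
      <model type='virtio'/>
    </interface>
  </devices>
</domain>
```

The `bus='virtio'` and `model type='virtio'` settings are what cause the guest to use the virtio_blk and virtio_net drivers described above.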

5. What are some key design points when designing a KVM for IBM z infrastructure?

With KVM for IBM z Systems, you will need to plan and design the virtualized environments in which you build and run the virtual machines. Things to consider include:

KVM supports CPU and memory over-commitment, so using Nagios to monitor virtual CPU and memory usage becomes important as the virtual machines increase in number.

A common preferred networking practice is to isolate management traffic from user traffic to ensure sensitive data is kept separate and secure.

Different storage infrastructures and protocols are supported with KVM for IBM z; you will need to design a storage architecture that complements your environment.

KVM for IBM z provides standard Linux and KVM interfaces for management. The way in which your management tools will interact with the virtualized pool of resources needs to be planned out.

The biggest reason to split the books is that it allows us to update each book as new versions come along, instead of waiting for a complete revision. It also allows our resident teams to work more in depth on each volume and provide a deeper dive into its content. Additionally, if you only want to learn about one of the topics, you can download just that volume. It's a more streamlined way of getting and finding the content you need, when you need it.

What are your thoughts on going forward with this publication? Should we merge them back together in the next iteration or keep them separate?

And by the way, if you are looking for the previous version of the IBM Redbooks publication, The Virtualization Cookbook for z/VM 6.3, RHEL 6.4, and SLES 11 SP3, you can now find it here:

When running in a virtualized environment, any reasonable administrator tries to reduce the time needed for standard tasks. In the early days of Linux on z/VM, this resulted in a procedure using golden images and cloning. This procedure simplified the deployment of Linux to new z/VM guest systems and has served many administrators well for a long time. However, over time, the Linux systems changed. With the introduction of newer technologies such as systemd on Linux, a number of problems came about that made the once so nifty feature of cloning golden images more and more difficult.

Problem: Make the image golden

During the first bootup, Linux creates unique data in many locations. The number and location of these items depend on the installed software. It requires detailed knowledge about the installed software to make sure that all of these strings are recreated during the first bootup of the cloned machine.

Unfortunately, the system provides no means of detecting the needed changes. Leaving some of those places not updated can result in security issues and data corruption in the involved clones later on. A clone that appears to work is not necessarily done right.
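As a sketch of what a cleanup step has to deal with, the following Python fragment removes a few well-known per-instance files from a mounted image tree. The list is illustrative only; the real set of locations depends on the installed software, which is exactly the problem described above.

```python
# Sketch of a golden-image cleanup step (illustrative; the real list of
# per-instance files depends on the installed software).
import os

# Well-known locations of unique, per-instance data:
UNIQUE_FILES = [
    "etc/machine-id",                # systemd machine id
    "var/lib/dbus/machine-id",       # D-Bus copy of the machine id
    "etc/ssh/ssh_host_rsa_key",      # SSH host keys; sharing them across
    "etc/ssh/ssh_host_rsa_key.pub",  # clones is a security issue
]

def scrub(image_root):
    """Remove known unique data below a mounted image root; report removals."""
    removed = []
    for relpath in UNIQUE_FILES:
        path = os.path.join(image_root, relpath)
        if os.path.exists(path):
            os.remove(path)
            removed.append(relpath)
    return removed
```

Anything removed this way must be regenerated on the clone's first boot, and anything missing from the list silently survives in every clone.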

This issue is not new; it already existed with SLES 11 and RHEL 6. However, it became worse with the introduction of systemd and its machine id. It is therefore recommended to move away from deploying clones and to use either automated installation or the imaging software KIWI.

Solution: Do not create the unique data in the first place

The actual problem exists only because cloning relies on the configuration of a fully booted system. This system is then cleaned up and prepared for the actual cloning process; after cleanup, it is called a "golden image". All of the files needed within the production system are already created during the first startup of this system. The cleanup process must take care to remove all data from the system that should be unique, and this data then has to be recreated during the first bootup of the clone.

The only reliable solution is to avoid creating the unique data in the first place. This means the golden image should never be booted before new virtual machines are cloned from it. To avoid issues, you may want to use automated installations as described in "The Virtualization Cookbook for z/VM 6.3, RHEL 7.1 and SLES 12". However, if you have to rely on ready-built images, creating virtual appliances is the way to go.

This is where the imaging software KIWI steps in.

Instead of creating a golden image to clone, a virtual appliance is created. This virtual appliance is never booted during the image creation process. The deployment of the virtual appliance is very similar to that of a golden image: it is copied to a new disk and given several parameters to finalize its configuration during the first startup.
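To give a rough idea of what a KIWI image description looks like, here is a much-simplified, hypothetical sketch. The element names follow the KIWI XML schema, but exact attributes vary by KIWI version, and all names, paths, and URLs below are placeholders.

```xml
<image schemaversion="6.1" name="virtual-appliance">
  <description type="system">
    <author>Build team</author>
    <contact>build@example.com</contact>
    <specification>Virtual appliance built without a first boot</specification>
  </description>
  <preferences>
    <version>1.0.0</version>
    <packagemanager>zypper</packagemanager>
    <type image="oem" filesystem="ext4"/>
  </preferences>
  <!-- pointing at the update repository keeps rebuilt images current -->
  <repository type="rpm-md">
    <source path="http://updates.example.com/sles"/>
  </repository>
  <packages type="image">
    <package name="kernel-default"/>
    <package name="openssh"/>
  </packages>
</image>
```

Because the image is built from this description rather than from a booted system, every rebuild produces a fresh, never-booted appliance, which is the whole point of the approach.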

If your business processes require you to test a ready-built image, this is also possible with the virtual appliance. However, needed changes to the image must be made in the KIWI configuration and will only be available with the next iteration of a newly created image of the virtual appliance. You don't apply the changes to the live system, but to the configuration of the virtual appliance.

This procedure can also simplify automation. For example, to provide an image with all updates installed, you just need to provide the update repositories during image creation. Whenever new updates that you need in your golden image become available, just repeat the build process, and the resulting image will contain all the updates. This also results in systems that are more secure at redeployment time than systems where the updates are deployed only after starting the original image.

Our IBM Redbooks blogger, Berthold Gunreben, is a Build Service Engineer at SUSE in Germany. He has 14 years of professional experience in Linux and is responsible for the administration of the mainframe system at SUSE. Besides his expertise with Linux on z Systems, he is also a Mainframe System Specialist certified by the European Mainframe Academy: http://www.mainframe-academy.de. His areas of expertise include High Availability on Linux, Realtime Linux, Automatic Deployments, Storage Administration on the IBM DS8000®, Virtualization Systems with Xen, KVM, and z/VM, as well as documentation. Berthold has written extensively in many of the SUSE manuals.