Kubernetes Container Control Comes To Power Systems

The moment that Google created a clone of parts of its internal Borg cluster and container management system and open sourced it as the Kubernetes project, the jig was pretty much up.

Google had done a lot of the fundamental work to bring containers to the Linux platform starting way back in 2005, and had shared its techniques with the open source community, leading directly to the Docker container format and the engine that runs it atop the Linux kernel. While Docker, the company, got a jump start with its Docker Swarm container orchestrator and then its fuller Docker Enterprise container management system, the world quickly shifted from the Docker stack to Kubernetes in its raw and myriad commercialized forms. It was much like the way Linux users shifted their loyalties away from the Eucalyptus cloud controller to OpenStack in a heartbeat, thanks to the completely open nature of OpenStack and the backing of Rackspace Hosting and NASA as founders, followed by a slew of open source developers and commercial entities piling on. Kubernetes has emerged as the de facto standard for container orchestration, and it is supported on Linux and Windows Server, the two dominant operating systems in the datacenter these days.

The Docker Enterprise stack can be loaded on bare metal Linux running on the Linux-only versions of Power Systems based on either Power8 or Power9 processors; it can also run on standard Power Systems machines that support Linux, AIX, and IBM i atop the PowerVM hypervisor – but only on Linux partitions. So IBM i and AIX shops can run containerized applications on those Linux partitions.

Had this been another, earlier era, and had the IBM i platform been generating the very high revenues that it enjoyed in its heyday two and three decades ago, IBM might have been talking about moving the IBM i operating system to a Linux kernel and containerizing the whole operating system and its related systems software into Docker containers. That has not happened yet, and we do not think it ever will, given the cost and the relatively low (compared to historical highs) return on that investment. But as we discussed a year and a half ago, IBM could create quasi-native Docker containers using a PASE runtime environment, though it would have to be based on a Linux kernel instead of the AIX kernel. Even if IBM did all of that, it is not clear how to containerize RPG or COBOL applications. Java, PHP, Node.js, and any other open source programming language could have its applications run in these quasi-native Docker containers. But RPG and COBOL present an interesting obstacle. IBM could create a clone runtime for RPG and COBOL that looks and smells like the Docker Engine but that runs on a baby IBM i kernel or passes directly through the microcode to the actual IBM i kernel.

Even if applications running on IBM i written in RPG and COBOL can’t be containerized, that doesn’t mean IBM i shops should not benefit from Linux and containerized applications. That Db2 for i database is the real asset, and there are definitely ways to use Node.js and Java to extract data from that database and pass it off to containerized applications running on Linux partitions that in turn support Docker containers that are orchestrated by Kubernetes. IBM i would be extended, much as integrating the OS/2 High Performance File System inside of the platform gave us the Integrated File System, for instance. That didn’t negate the value of native applications written in RPG and COBOL that had native access to the integrated database in OS/400 and IBM i. This is no different, in concept, even if it is quite different in implementation.
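To make the data flow described above concrete, here is a minimal sketch in Python of the kind of glue code that could sit on the extraction side. The connection to Db2 for i itself is only hinted at in comments (it would go through an ODBC or similar driver, which is an assumption here, not something the article specifies), and the sample rows and column names are invented for illustration; the point is simply that query results get flattened into JSON that a containerized Linux service can consume.

```python
import json

# Hypothetical stand-in for a real Db2 for i query result. In practice the
# rows would come from an ODBC connection (for example via the IBM i Access
# ODBC driver) running in a Java or Node.js or Python job; the tuples below
# are sample data used purely for illustration.
sample_rows = [
    (1001, "ACME", 2500.00),
    (1002, "GLOBEX", 1750.50),
]
columns = ("order_id", "customer", "amount")

def rows_to_json(rows, cols):
    """Convert query result tuples into a JSON array of objects that a
    containerized service on a Linux partition can consume over HTTP or
    a message queue."""
    return json.dumps([dict(zip(cols, row)) for row in rows])

payload = rows_to_json(sample_rows, columns)
print(payload)
```

The serialization step is deliberately trivial; the real work in such an integration is the connection management and scheduling around it, which would be specific to each shop's environment.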

Even if the combination of Docker containers and Kubernetes orchestration for those containers is not native in IBM i, there are a number of ways to get containers running on Power Systems on whole Linux machines or Linux partitions on hybrid IBM i-Linux machines.

The first one, as we have discussed, is Docker Enterprise Edition for Power. IBM grabbed the open source GCC Go compiler and created a native Power-Linux Docker daemon and runtime and offers its own support contracts for Docker. Big Blue does Level 1 and Level 2 support, with backing from Docker itself for Level 3 support. The stack includes the core Docker Engine and the Docker Trusted Registry, a private version of the public Docker Hub container registry. Prices range from $750 to $2,000 per node per year for support.

The second method also comes from IBM. Last year, Big Blue launched IBM Cloud Private, an on-premises variant of its IBM Cloud public cloud platform and container orchestration frameworks, based on Cloud Foundry and Kubernetes. These container orchestration and platform cloud layer services are available on subscription-based virtual machines as well as dedicated hosts on the IBM Cloud, and as of nearly a year ago, were made available on Power, X86, and System z machines in on-premises datacenters.

IBM says that it has 400 customers so far for IBM Cloud Private, most of them very large enterprises of the kind that typically buy its System z and Power Systems machines. The cloud setups also include a hybrid on-premises/cloud and multicloud development environment called Microclimate, which brings together integrated development environments as well as hooks into the Jenkins continuous integration/continuous delivery tool and the Kubernetes container orchestrator.

The IBM Cloud Private stack also includes a tool called Transformation Advisor, which pulls information out of existing WebSphere environments, suggests ways to break those applications into microservices, and snaps into the toolchain to let developers begin that process. (It is not clear what Transformation Advisor would suggest if it saw Java applications hitting WebSphere on the IBM i platform.) There is also another tool in the private cloud called Vulnerability Advisor, which examines access control and other aspects of security for potential vulnerabilities and suggests ways to fix them. In the past two weeks, IBM announced a new cross-platform management tool called Multicloud Manager that weaves together the Helm application manager for Kubernetes with the Terraform public cloud provisioning tool, the Prometheus system management and monitoring tool, and the Grafana visualization tool to create a whole new management layer for this open software container stack.

Again, all of this can run on the Linux partitions on any IBM i machine, but it remains to be seen how these tools can interface with the IBM i partitions, if at all.

The last way to bring commercially supported Kubernetes container control to Power Systems machines was just revealed last week, in announcement letter 218-391, as Red Hat has ported its OpenShift Container Platform 3.10 to Power Systems machines – once again, of course, running atop Linux. In this case, it runs on Power8 and Power9 systems running the new Red Hat Enterprise Linux 7.5 distribution. The offering is available from either IBM or Red Hat, and it is being restricted to the L, LC, and AC class Power Systems machines – those are the Linux-only boxes – but that is nonsense. There should be a way to run it on the PowerVM hypervisor as well as the custom KVM hypervisor if customers really want to do it.

The OpenShift Container Platform is Red Hat's Kubernetes container management system, akin to Docker Swarm and Docker Enterprise or IBM's Multicloud Manager, and the stack also packages up a whole bunch of things into Kubernetes containers, including:

The Red Hat Enterprise Linux operating system itself

The Apache and Nginx Web servers

The MySQL, PostgreSQL, and MariaDB relational databases

The MongoDB document datastore

The Node.js, Ruby, PHP, and Perl application development languages

OpenShift Container Platform does not include the continuous integration/continuous delivery tools or the virtualized networking and virtualized storage needed for a full container environment, but there are hooks to plug many different variations of these into the stack.
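To give a flavor of what running a workload on such a Power-based Kubernetes cluster looks like, here is a minimal, hypothetical Kubernetes Deployment manifest. The name, labels, and replica count are invented for illustration; the one Power-specific detail is the nodeSelector, which pins the pods to ppc64le (Power little endian) nodes, so the container image must be one that is built for that architecture. (On clusters of the OpenShift 3.10 vintage, the architecture label may appear as beta.kubernetes.io/arch instead.)

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-demo                      # hypothetical application name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web-demo
  template:
    metadata:
      labels:
        app: web-demo
    spec:
      nodeSelector:
        kubernetes.io/arch: ppc64le   # schedule only onto Power (LE) nodes
      containers:
      - name: web
        image: nginx:stable           # must be an image with a ppc64le build
        ports:
        - containerPort: 80
```

The manifest itself is identical to what would be deployed on an X86 cluster apart from the architecture constraint, which is exactly the point of the Kubernetes porting work described above.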

Having brought these different Kubernetes stacks to Power Systems, now IBM needs to go the extra step and tell IBM i shops how they can integrate them. Integration is, of course, the hallmark of the IBM i platform.

How to Determine the BEST Disaster Recovery Plan for YOUR Organization

There are many options for disaster recovery, and there is a great deal of variation in the requirements of organizations. These options range from a second recovery site for tapes to remote recovery systems for cloud backups to high availability replication for immediate failover.

With all of these options available, it can be confusing to sort out what each one means and which is best for the organization. Specifically, we are seeing high availability being incorrectly evaluated as the only disaster recovery option. High availability allows organizations to role-swap their production system to a second system with an RTO and RPO of less than one hour. For certain industries, this disaster recovery plan is absolutely necessary. Trucking, banking, and healthcare are good examples of these industries.

For most companies, an immediate RTO/RPO is not required and can be overkill for a disaster recovery plan. Options with 24-hour or 12-hour RTOs on remote hardware can provide an efficient disaster plan and are more appropriate for most organizations. In this scenario, companies would have cloud backups to a remote location as well as a dedicated system or LPAR to recover those cloud backups in the event their production system goes down. These options are typically one third to one half of the price of high availability and can be fully managed by a partner.

It is always best to start by defining the RTO and RPO required for the business and then develop a disaster recovery plan to achieve that objective. UCG Technologies can help work through this process and ultimately, design a cost-effective disaster recovery plan to ensure proper business continuity.
