Virtualization@IBM

Blog Authors:
IBM Software Defined Virtualization, Nitin Gaur, Jean Staten Healy, John Foley, Sam VanAlstyne, Alicia Wood
Virtualization combined with Integrated Service Management helps you
use your resources effectively, manage your infrastructure
efficiently, and gain the flexibility to meet ever-changing business
demands.
This blog is for the open exchange of ideas relating to
virtualization across the entire infrastructure. Articles written
by IBM's virtualization experts serve as conversation starters.
Topics can range from latest technologies for server consolidation
and tools for simplified systems management and monitoring to
automating IT systems to respond to changing business conditions and
cloud-based solutions for the "virtual" enterprise.

IBM and Red Hat have been teaming up for years. Today, Red Hat and IBM are announcing a new collaboration to bring Red Hat Enterprise Virtualization to IBM’s next-generation Power Systems through Red Hat Enterprise Virtualization for Power.

A little more than a year ago, IBM announced a commitment to invest $1 billion in new Linux and open source technologies for Power Systems. IBM has delivered on that commitment with the next-generation Power Systems servers incorporating the POWER8 processor, which is available for license and open for development through the OpenPOWER Foundation. Designed for Big Data, the new Power Systems can move data around very efficiently and cost-effectively. POWER8's symmetric multi-threading provides up to 8 threads per core, enabling workloads to exploit the hardware for the highest level of performance.

Red Hat Enterprise Virtualization combines hypervisor technology with a centralized management platform for enterprise virtualization. Red Hat Enterprise Virtualization Hypervisor, built on the KVM hypervisor, inherits the performance, scalability, and ecosystem of the Red Hat Enterprise Linux kernel for virtualization. As a result, your virtual machines are powered by the same high-performance kernel that supports your most challenging Linux workloads.

Enterprise organizations seeking to optimize their virtualization environments must be able to centrally manage the full range of virtual machine dependencies, including storage and networking. Red Hat Enterprise Virtualization includes Red Hat Enterprise Virtualization Manager, a centralized management console that can manage hundreds of hosts and tens of thousands of virtual machines. Through the management interface, Red Hat Enterprise Virtualization provides the flexibility of managing a mixture of x86 and Power Systems. While the Red Hat Enterprise Virtualization management server runs on an x86 architecture platform, it can now manage clusters of Power architecture hosts, as well as separate clusters of x86 architecture hosts – all from a single pane of glass.

In addition to the benefits of centralized management of the virtualized infrastructure, the availability of Red Hat Enterprise Virtualization for Power provides simplified access to some of the more advanced functionality of KVM:

High Availability – If one host were to go down or lose the ability to run its virtual machines, Red Hat Enterprise Virtualization quickly migrates those virtual machines to other hosts within the environment to minimize downtime.

Live Migration and Storage Live Migration – Red Hat Enterprise Virtualization can move a virtual machine from one host to another for preventive maintenance without downtime. This means end users continue to enjoy access to the virtual machines when it is necessary to deploy patches or install updates on a host.

Intelligent Load Balancing – Because of the shared nature of virtualization, users want to avoid one VM affecting the performance of another. If one VM begins to impact the performance of another, Red Hat Enterprise Virtualization rebalances the workloads to mitigate the impact so that operations continue smoothly.

Centralized Template Management – Red Hat Enterprise Virtualization provides the ability to build and manage templates for virtual machines and provision them to another host with a few mouse clicks, enhancing the ability to provision new virtual machines rapidly.

Self-Service Portal for Quick Provisioning – Users who consume the virtual infrastructure services – particularly in a lab or test and development environment – need to be able to spin up virtual machines quickly. Red Hat Enterprise Virtualization’s full self-service portal allows these users to log in, provision their virtual machines, shut them down, and have control over the part of the environment that they have been allocated without having to go through IT staff requests for provisioning.
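The high availability and load balancing behaviors described above can be sketched as a toy scheduler. This is purely illustrative – the function names, capacity model, and placement logic below are invented for the sketch and are not Red Hat Enterprise Virtualization's actual algorithm:

```python
# Toy VM scheduler illustrating placement, load balancing, and HA
# evacuation. Hypothetical sketch only; not the RHEV implementation.

def place_vms(vms, hosts, capacity):
    """Assign each VM (name -> demand) to the least-loaded host with room."""
    load = {h: 0 for h in hosts}
    placement = {}
    # Place the largest VMs first so they are easiest to fit.
    for vm, demand in sorted(vms.items(), key=lambda kv: -kv[1]):
        host = min(hosts, key=lambda h: load[h])
        if load[host] + demand > capacity:
            raise RuntimeError(f"no capacity for {vm}")
        load[host] += demand
        placement[vm] = host
    return placement

def evacuate(placement, failed_host, vms, hosts):
    """On host failure, re-place its VMs on the surviving hosts (HA)."""
    survivors = [h for h in hosts if h != failed_host]
    stranded = {vm: vms[vm] for vm, h in placement.items() if h == failed_host}
    kept = {vm: h for vm, h in placement.items() if h != failed_host}
    # Current load on the survivors, then greedy re-placement.
    load = {h: 0 for h in survivors}
    for vm, h in kept.items():
        load[h] += vms[vm]
    for vm, demand in sorted(stranded.items(), key=lambda kv: -kv[1]):
        host = min(survivors, key=lambda h: load[h])
        load[host] += demand
        kept[vm] = host
    return kept
```

For example, placing three VMs across two hosts and then failing one host leaves every VM running on the survivor, which is the essence of the high-availability feature.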

Together, the integration of IBM POWER8 – with its capabilities for high performance – and Red Hat Enterprise Virtualization’s enterprise virtualization and management features provide a strong combination – particularly for larger enterprise deployments and mission-critical applications.

The Value of Red Hat Enterprise Virtualization and Power Systems

IBM Linux on Power Systems customers that have not yet fully virtualized their infrastructure will now be able to deploy Red Hat Enterprise Virtualization and easily take advantage of the opportunities that virtualization provides. And for users moving to open applications and frameworks with Red Hat Enterprise Linux, this provides a great opportunity to gain access to Power and the flexibility of the next-generation POWER8 architecture. All software for Red Hat Enterprise Virtualization for Power will be provided through Red Hat, with tier 1 through tier 3 support available.

In addition, Red Hat has just released the beta of Red Hat Enterprise Linux 7.1, which includes a version for Power processors running in little endian mode. This enables users and ISVs to easily move Linux on x86 applications to Linux on Power Systems with minimal or no porting, and is just another example of Red Hat and IBM working closely to provide better features and functionality to our joint customers. To learn more or sign up for the beta, visit https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/.

And stay tuned for more in 2015. We expect the adoption of Red Hat Enterprise Virtualization for Power as a supported platform to evolve and expand over time. We see this as only the beginning of a larger Red Hat collaboration with IBM around POWER8.

When IBM announced the new POWER8 processor and next-generation scale-out Power Systems earlier in 2014, PowerKVM was also introduced. The introduction was notable because it marked the first time IBM was providing open hypervisor technology – beyond the proprietary IBM PowerVM – on Power. By supporting KVM, IBM is removing a potential hurdle in adoption for users familiar with Linux and KVM on x86 to adopt the enhanced hardware platform.

KVM Advantages

The open source hypervisor KVM (Kernel-based Virtual Machine) offers many advantages in terms of cost, security and simplicity, and as a result is gaining ground in the enterprise, particularly among organizations that already have Linux servers deployed in their data centers, or are interested in consolidating workloads or building a flexible infrastructure. This trend is also part of a larger movement toward deploying more than one type of hypervisor in a data center, which has been termed "hyperversity" by the Gabriel Consulting Group.

Benefits of PowerKVM

With the rollout of KVM on Power, IBM is committed to making it as easy as possible both for Power users who have not used Linux or KVM before, and for existing Linux and KVM users – who are familiar with KVM on x86 or System z but not Power – to gain all the benefits of PowerKVM. In addition to the favorable economics of KVM virtualization, PowerKVM virtualization fully leverages POWER8's symmetric multi-threading to achieve the highest performance possible with the hardware.

Kimchi and Ginger Help New Users Move to PowerKVM:

Providing an intuitive web panel with common tools for configuring and operating Linux systems, Kimchi and Ginger are add-on tools that are not required to manage a host or guests, but make the PowerKVM experience more user-friendly. In fact, Kimchi can be used on x86 systems as well. Kimchi and Ginger for IBM PowerKVM 2.1.0 were released in June 2014.

Kimchi – For PowerKVM users, Kimchi is one of the most important tools to know about. An open source project hosted on GitHub, Kimchi provides an easy transition for people who would like to start using KVM but are concerned that it may be too difficult. Kimchi helps users who are more familiar with VMware, since the user interface is similar and simple to use, and users coming from Windows, since only a supported web browser is needed to interact with Kimchi – supporting a smooth transition.

Kimchi also makes it easier for users familiar with Linux to move to Power as well.

Kimchi runs in all major Linux distributions and requires only the common virtualization packages, like QEMU and Libvirt, provided by the distributions. In essence, Kimchi allows the user to create virtual machines and manage their system as simply as possible. From this perspective, Kimchi plays a major role in the PowerKVM software stack.

Ginger – Another open source project, also hosted on GitHub, is Ginger. It provides an open source host management plug-in for Kimchi. In the old world of Power Systems, you had to have a separate management console, the HMC, to do firmware updates on Power and to manage the physical system itself. Ginger now replaces some of that functionality, and by removing the dependency on the HMC, users need to buy one hardware system instead of two, resulting in further cost savings and simplicity.

And, even for experienced Power users who are familiar with leveraging HMC to manage Power Systems servers, Ginger provides a far easier approach with fewer steps. As a result, existing Power users will find that firmware updates can be deployed much more quickly.

Both Kimchi and Ginger come bundled with PowerKVM so when a client buys a PowerKVM box, these two tools are included. What IBM provides is a version of Kimchi and Ginger that is very close to the open source code.

Easier and More Cost-Effective to Adopt PowerKVM

The bottom line is that Kimchi and Ginger make it easier to adopt PowerKVM whether users have had prior Linux experience or not. That is the whole idea – to make it easier for users who do not have experience with Linux and virtualization to get on board and manage and use virtual machines using PowerKVM.

One of the main goals for both projects is to have the same user experience whether on x86 or on Power, so users can migrate to PowerKVM without frustration. Using Kimchi and Ginger makes the transition from x86 to PowerKVM not only easier, but better.

IBM supports many open source projects. One of the newest is Docker, a project for container-based virtualization that allows developers to encapsulate applications and dependencies and deploy them on Linux-based virtual machines.

People are using Docker for its packaging technology and deployment model. With KVM (Kernel-based Virtual Machine), you get a virtual machine (VM); with Linux containers, you get a virtual Linux kernel and share everything else with the rest of the machine – but you don't see the other users of the kernel unless you want to.

Currently, Docker, which provides an alternative to virtual machines, is being used as a system administration tool; eventually, it is expected to be used by many cloud providers who want to offer Linux containers as a service. Containers give you bare-metal performance and lower overhead than virtual machines, although compared with KVM the difference is not large.

The key details about Docker are that:

It provides an automatic deployment model for applications. You can package applications in a single format and then deploy them automatically all at once. In addition, the deployment technology is distributed so you can deploy applications to remote machines.

The packaging technology is smart so that you can take a single package and run it on multiple Linux distributions - whether it is Red Hat Enterprise Linux (RHEL) or SUSE Linux Enterprise Server (SLES). It is not necessary to package the application separately for RHEL or SLES, which means that Docker solves a big problem for application providers and also makes it easy for administrators to deploy and maintain their software applications on Linux hosts.

There is an ecosystem built around Docker packaged applications and packaged technology.

Because the packaging technology is smart, has an execution language, and is based on source code control principles, you can incrementally add or take away features from a packaged application. This means you can modify packages that someone else has created. As a result, there are many packages of software that are ready to be deployed by Docker and they are all public. Docker tools can grab a package from a public repository and deploy it very easily.

It is also possible for users to maintain a private repository of packages.
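The layered, incremental packaging described above can be illustrated with a toy model. This is a hypothetical sketch of the idea, not Docker's actual implementation: each layer is a map of file paths to contents, and an image is the ordered union of its layers, so a small patch layer can modify a package someone else created.

```python
# Toy model of layered images: each layer maps paths to file contents,
# and an image is the ordered union of its layers (later layers win).
# Illustrates incremental, shareable layers only; not how Docker works
# internally.

def flatten(layers):
    """Compute the filesystem view produced by an image's layer stack."""
    view = {}
    for layer in layers:      # apply layers bottom-up
        view.update(layer)    # later layers override earlier files
    return view

base = {"/etc/os-release": "distro", "/bin/sh": "shell"}
app  = {"/app/server": "binary", "/etc/app.conf": "defaults"}
fix  = {"/etc/app.conf": "patched"}   # a small incremental patch layer

image = flatten([base, app, fix])
```

The base and app layers stay unchanged and shareable; only the tiny fix layer is new, which is what makes incremental modification and distribution cheap.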

Because every application uses the same kernel, you don't get the ability to run older kernels for older software, and you don't get the ability to run other operating systems. With KVM, you can run a Windows operating system, or run an old version of RHEL alongside a new version, which is useful if you have older software that is not being maintained for new operating systems.

There are security implications for Linux containers. A hypervisor like KVM provides an additional level of security over containers. With containers, only the Linux kernel provides isolation, and if a privilege escalation attack succeeds or a kernel vulnerability is exploited, there are no further defenses. There are many things you can do to configure the host and the kernel to be safe and isolated, but the one thing you cannot do is have a second line of defense. With a hypervisor, you have that additional layer of protection: first there is the operating system's kernel, and if an attacker manages to break into the kernel and obtain root access, they are still contained within the virtual machine. The virtual machine contains the attacker through a combination of hardware instructions and software, so the attacker has to start all over and break out of the virtual machine, which may or may not be more difficult. As a result, some people believe that web services providers, for example, should not use containers for public cloud services and should use virtual machines instead – a view that is widely shared.

The idea with Docker is that developers don't have to worry about application dependencies and can deploy containers to any Linux machine that has Docker.

Docker Open Source Community
IBM is participating in the Docker Foundation Board, and it has approval to submit code into the upstream Docker repositories. Right now, Docker runs only on x86 platforms and can also work as a component of IBM Bluemix. It is our intention to get Docker supported on Power and System z in upstream Linux editions. We think that Docker will be better on Power and System z from a security standpoint. In addition, support for Docker on Power and System z will make it as easy for developers to port their applications to these platforms as it is to the x86 platform.

The ecosystem around Docker is vibrant and growing. In the past, the significant features came from employees at the company, which was originally named dotCloud and is now Docker, Inc. Additional contributors worked on bug fixes and trivial patches. That has changed over the last year. The community is becoming broader. This is something that is required when a project such as Docker becomes an essential component of the Linux ecosystem, and is an indication of the project’s increasing maturity.

PowerKVM provides hypervisor technology that is familiar to proprietary x86 virtualization users as well as committed Linux and KVM users.

You may have heard the news that IBM recently introduced the new POWER8 processor and next-generation scale-out Power Systems servers. Rolled into that release was also the launch of PowerKVM. This means that for the first time IBM is offering open hypervisor technology in addition to its proprietary PowerVM on Power.

The new Power Systems run Linux along with other operating system choices, or run Linux only. And, with this next generation, the Kernel-based Virtual Machine (PowerKVM) is available on all POWER8-based systems that run Linux exclusively.

First discussed at Red Hat Summit 2013 by Arvind Krishna, GM Development and Manufacturing, IBM STG, PowerKVM is an open alternative to the proprietary PowerVM technology that has been offered on Power Systems. The addition serves the dual purpose of furthering IBM's support of Linux and open source technologies, and providing more choice to Power customers. With PowerKVM, Linux-centric administrators can very quickly get up to speed. If they know KVM or any kind of x86-focused virtualization, they can rapidly configure the system and administer it as if it were another Linux or KVM instance.

Making the Switch to Power Easier

This is a big change. Our goal with PowerKVM is to make it as simple as possible for someone who is not Power-oriented to switch to Power, easily pick up our systems, manage them, configure virtualization, and get their Linux scale-out workloads running. The whole user experience has been closely aligned with what x86 provides from an administrator perspective. And, importantly, support for KVM allows users to select a single cross-platform virtualization technology, simplifying management.

If you are familiar with KVM running on x86 or System z or any other environment for that matter, this is just another KVM instance. This gives users the ability to potentially standardize on a hypervisor and manage it all either through IBM tools or any OpenStack, libvirt and open Linux tools from the community.

Support for the OpenPOWER Foundation

We have taken the total PowerKVM offering and made it completely open source – all the way down to the actual firmware that is required to run PowerKVM.

In keeping with this open approach, we are contributing the specifications back into the OpenPOWER Foundation, an industry foundation based on the POWER architecture, enabling an open community for development and opportunity for member differentiation and growth. We are opening up Power and we want people to have the facilities to understand what we are doing and make it an extensible infrastructure.

Exploiting the POWER8 Hardware

In addition to the well-known cost advantage of KVM virtualization – which is considerable – as well as being completely open, the new PowerKVM virtualization exploits the unique features of the new POWER8 servers. For example, it exploits POWER8’s symmetric multi-threading with up to 8 threads per core. By leveraging the unique capabilities of the POWER8 servers, we allow workloads to get the highest performance possible from the hardware.

Workloads that have the most to gain from the multi-threaded architecture include heavy scientific workloads, traditional OLTP and database processing. While we can provide these 8 threads to a particular workload, we can also split them up to support lots of small, varied workloads. The bottom line is that the virtualization is flexible enough to support massive workloads very effectively, but can also be optimized for many tiny workloads. This agility is built into the architecture, and the hardware and the virtualization work together for effective resource allocation across a range of different scenarios.
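As a rough sketch of the arithmetic behind this flexibility, with up to 8 hardware threads per core a host can dedicate its threads to one large workload or split them across many small guests. The core count and vCPU sizes below are arbitrary examples, not product figures:

```python
# Back-of-the-envelope view of POWER8 SMT8 thread capacity.
# Illustrative numbers only.

THREADS_PER_CORE = 8   # POWER8 symmetric multi-threading (SMT8)

def hardware_threads(cores):
    """Total schedulable hardware threads a host exposes."""
    return cores * THREADS_PER_CORE

def small_vm_capacity(cores, vcpus_per_vm):
    """How many small VMs of a given vCPU size fit on the host's threads."""
    return hardware_threads(cores) // vcpus_per_vm

total = hardware_threads(12)          # e.g. a 12-core socket -> 96 threads
small_vms = small_vm_capacity(12, 2)  # 2-vCPU guests -> 48 of them
```

The same pool of threads can thus back one thread-hungry scientific workload or dozens of tiny guests, which is the agility the paragraph above describes.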

Opening Power to More Users

Over time, we will add support for more devices and add new features, continuing IBM’s tradition of a commitment to open technologies that dates back to the late 1990s. With this initial introduction of PowerKVM, our goal is to provide the simplicity and ease of use that is familiar to x86 virtualization users as well as committed Linux users. This is just the beginning.

Kimchi is a spicy Korean side dish. It is also the code name for a new open source virtualization management project that offers sweet familiarity.

Kimchi is a new open source project aimed at providing an easy on-ramp for people who would like to start using KVM (Kernel-based Virtual Machine) but believe it will be too difficult. Kimchi is targeted at users who may have avoided the open source hypervisor because they don’t have experience with Linux or don’t have the ability to install a management server, or simply don’t have time to invest in Linux administration.

But unlike the spicy side dish Kimchi, the open source project Kimchi offers a taste of something sweet - a familiar user interface for virtualization management. Put simply, that is what Kimchi is all about - removing barriers to using KVM for a set of potential users.

Open Source Tool Designed to Appeal to VMware and Windows Administrators

There are certainly people in the enterprise who are Linux administrators and are perfectly comfortable with the way KVM is today. They regularly work with Linux admin tools and KVM fits right in to their day-to-day practice.

But there are also VMware administrators and Windows administrators who are not familiar with Linux admin practices and are not comfortable with the KVM tools. These people in particular will benefit from Kimchi, since the user interface is similar to that of VMware and Windows tools, thus helping to ease the transition to KVM.

Kimchi’s Role in the KVM Ecosystem

If you have one Linux server, installing Kimchi on that server is quick and easy. Kimchi puts a thin layer over what is already there with KVM and Linux. You don't need to install a separate management server. All you have to do is point your browser at the KVM host, and with just a couple of clicks you can install your first guest and start running it.

While it does not come as part of KVM yet, it is hoped that Kimchi will be mature enough to be packaged up with some of the community Linux distributions in 2014, and then be included in some enterprise Linux distributions after that. The beauty of the Kimchi interface is that it boils management features down to their essence, simplifying everything, without a requirement that users have any Linux skills. And, it is rendered using HTML5 so there is total independence of both device and operating system, meaning that you can use Kimchi from a Windows or Linux work station, or a tablet or a phone.

Kimchi Reaches a Functional Milestone

Because it is a simple point-to-point management tool, it is not able to provide clustering or resource pooling. Users are limited to managing a few hundred virtual machines at a time, one host at a time.

Kimchi reached a functional milestone in October 2013 with the release of Version 1. Although it is still early in the development process for the project, it is now at the point where we think it has enough functionality for people to try it. The clear advantage is that users don’t need to maintain any management infrastructure - and they can get started using KVM right away.

IBM’s Commitment to Kimchi

IBM supports Kimchi because it represents another way to promote KVM adoption and remove barriers to open source virtualization, which IBM believes is a smart choice. Kimchi is a sound, multi-platform management tool. We, at IBM, are also using it to manage KVM on Power. It will come bundled with KVM on Power, available later in 2014.

Future Development Plans for Kimchi

At this point, the focus for Kimchi going forward is on community building and additional feature development. The input from the community will determine the future direction for Kimchi, which is an Apache-licensed project hosted on GitHub, and incubated by oVirt.org.

If you would like to learn more about Kimchi and get involved, go here.

Mike Day

IBM Distinguished Engineer and Chief Virtualization Architect Open Systems Development

KVM (Kernel-based Virtual Machine) is technically excellent as a hypervisor across the board. The performance, scalability and efficiency, device support, ability to run different types of guests, and hardware support are all first-rate – and it is also integrated with Linux. At this point, the upstream development focus for KVM is on two fronts: first, to exploit new hardware (and that is not just an x86 proposition any longer) and second to make KVM easier to use and smart enough to take care of itself so that it requires less attention from the user to get the best performance. Here are five important upstream KVM features coming in future enterprise distributions that will help make that happen:

VFIO – Virtual Function Input/Output: This is a Linux technology that makes it easier for users and vendors to provide native device support in KVM guests, which is important for performance reasons.

virtio – dataplane: This is a new block I/O (input/output) infrastructure that allows the KVM guest to do block I/O directly with block device support in the host Linux kernel. This is the technology that was behind the IOPS benchmarks that we published with Red Hat in the spring of 2013. The performance was about 30% better than any other hypervisor has been able to achieve in a guest for block I/O performance.

ivshmem – Nahanni shared memory transport: This is a shared memory device and host kernel driver. It is split between the host Linux kernel and the QEMU (Quick EMUlator) virtual machine environment. It provides a number of different ways for guests to use fast host memory as a communications medium for messaging. You can take different servers, consolidate them on a single KVM host, and use the shared memory as a transport for HPC applications instead of a high-performance network – with resulting application performance that is at least as good, if not better, because the applications exchange messages over in-host memory instead of an inter-host network.

RDMA – Remote Direct Memory Access: It is easier to access RDMA functionality from within a KVM guest now than ever before, and that is partially due to the VFIO infrastructure. In combination with that, there is a memory transport being worked on upstream in QEMU to do live guest migration over RDMA. This is a big feature for high performance database managers. It will pave the way for more high performance database applications that utilize a lot of block I/O or page-related I/O over RDMA devices today.

Gluster FS – Integration, new translators: Gluster FS provides a general framework for clustered file system and block I/O infrastructure but the actual work is done by “translators.” If you want a specific type of clustered file system, you may write a Gluster translator for it. There are two developments here. Gluster FS is now integrated into QEMU 1.4 and later versions, which means that you have access to it automatically. And, new translators give you specific features with specific types of storage devices. This means we have an integrated file system for KVM that works well with block devices, which has been a missing feature from KVM for quite a while. (We have had all the separate shared file system features but they haven’t been integrated.)

You can expect these new capabilities to be introduced into Enterprise Linux distributions sometime around the end of 2013 since it generally takes about 6 months for an upstream feature to get into an enterprise distribution. These are high-end features that go well beyond what can be done now with commercial hypervisors. And, importantly, they will be easy to use.

With the help of a robust ecosystem, open source technologies such as KVM become a force to be reckoned with.

What is it that causes some new technologies to gain wide acceptance while others simply fall by the wayside? It's a given that in order to be meaningful, new technologies must be enterprise-grade, they must be cost-effective, and they must address a real need. And, at least in the open source world, the endorsement of a robust community is the other critical factor. KVM (Kernel-based Virtual Machine) is a case in point.

KVM has made great progress since its inclusion in the Linux kernel in 2007, observes analyst Gary Chen in a recent IDC white paper. In addition, he notes, the strength of KVM as well as its ecosystem makes KVM an increasingly attractive virtualization choice for customers that rely on Linux and beyond.

The point is: You may have a product, but if you don’t also have an ecosystem, you will hit the “so what” factor. In essence, there is not a complete solution – at least, not until there is a community around it. And the more individuals and companies that contribute code to an open source initiative, include the technology in their products, and provide services related to it, the more polished the solution stack becomes.

Take a look at the ecosystem around KVM and you will find a range of robust communities that aim to address a specific area or requirement. IBM, which has backed open standards and open source technologies for a long time, is a founding member of each. And of course KVM itself is developed by an open source community.

The OpenStack Foundation, for example, is a recent entrant into the open source ecosystem around KVM. Launched as an independent foundation in 2012, the goal of the OpenStack Foundation is to foster cloud interoperability. The OpenStack Foundation serves developers, users, and the entire ecosystem by providing a set of shared resources to grow the footprint of public and private OpenStack clouds. To date, the foundation has more than 9,800 individual members from 87 countries – and has also secured more than $10 million in funding.

The Open Virtualization Alliance, launched in May 2011, is a consortium committed to fostering the adoption of open virtualization with KVM. To date, the OVA counts more than 250 vendors from all over the world among its membership. The consortium advances awareness and understanding of KVM, drives adoption of KVM-based solutions, and helps promote interoperability and best practices to accelerate the expansion of the ecosystem of third-party solutions around KVM – giving enterprises improved choice, performance and price through open virtualization with KVM.

Modeled after the Apache Foundation, Eclipse, LVM, and many other open source communities, the oVirt Project, was launched in December 2011. oVirt develops and distributes an open source virtualization management platform that combines the KVM hypervisor with capabilities for hosts and guests. In this way it supports organizations looking for open alternatives to traditional virtualization technology, both for the hypervisor and virtualization management.

Some individuals and organizations – like IBM – are involved with all three of these groups. Others select the one that meets their own unique interests or needs. But while there is an open invitation to participate, make no mistake – open source communities are merit-based systems. This is a good thing – the communities provide a stimulating combination of competition and cooperation – creating what we call “a friction of ideas.” And this is what ultimately results in high-quality, well-vetted products.

Before we jump directly to the headline, let me explain one of the most important metrics of virtualization: VM density – the number of virtual machines running on a host. Virtualization is often used to reduce the hardware required to operate a data center, and the more servers one can consolidate onto a virtualization host, the fewer host systems one needs. This of course results in much lower operating costs due to fewer required software licenses, less energy consumed, considerably less space needed, and fewer administrators. So, in order to decide which solution is best for you, it's vitally important to be able to compare VM density among different virtualization solutions. One of the ways you can do that is with the industry standard virtualization benchmark, SPECvirt_sc2013. You may have heard of its predecessor, SPECvirt_sc2010, which was released (as the name suggests) in 2010. And you may be aware that KVM quickly dominated the results produced with that benchmark, earning the top score in every server configuration used to produce results: 2-, 4-, and 8-socket systems.

SPECvirt_sc2010 has been a good representative workload, but like many benchmarks, its relevancy can erode over time as technology and users' actual workloads change. The IT industry is continually evolving, and newer benchmarks are needed to approximate the workloads that are relevant today. The evolution of SPECvirt from sc2010 to sc2013 is an example of staying relevant. So, how is sc2013 different? Three changes set it apart from sc2010:

Each virtual machine requires more resources. The workloads are used to simulate either more users or users with higher request rates. The resources a VM may need can be nearly six times those required under sc2010. This was done to match what's happening in many data centers: users are increasingly virtualizing their more demanding workloads.

Many of the VMs in sc2013 have much higher variability in resource usage. The load levels vary quite a bit more than in sc2010. This requires the hypervisor to react quickly and correctly, providing the right hardware resources to the right VMs at the right time.

The idle VM is now a "batch" VM. In sc2010, some of the VMs were simply idle, representing the portion of VMs that are available but not in use. Sc2013 enhances that scenario: the VM still has idle periods, but it also has periods in which it must handle a batch processing job. Think of servers that sit idle all day but must complete "end-of-day" jobs before the next business day.

So, how well does KVM do with sc2013? Let's take a look at two results: one using an IBM Flex System x240 server with Red Hat Enterprise Linux 6.4 and its Kernel-based Virtual Machine (KVM) hypervisor [1], and another using an HP ProLiant DL380p Gen8 server with VMware ESXi v5.1 [2]. Both systems under test are equipped with Intel E5-2690 processors, 256 GB of memory, and SSDs for storage.

Care to guess how much higher the VM density was for the IBM KVM solution?

5%?

10%?

20%!?!

30%!?!?!

No, even higher... 37% more VM density!

That is a "stop what we are doing and choose a different strategy" result. The IBM solution, Red Hat Enterprise Linux 6.4 with KVM, consolidated 37 virtual machines, while the HP - VMware result consolidated only 27 VMs. Imagine the reduction in both hardware and virtualization licensing costs you could achieve by moving from VMware to KVM. Learn more about IBM open virtualization and KVM here.
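To make the density gap concrete, here is a quick back-of-the-envelope sketch using the published VM counts above (37 and 27). The fleet size of 1,000 VMs is an assumption chosen purely for illustration:

```python
import math

# Published SPECvirt_sc2013 consolidation results (VMs per host)
kvm_vms_per_host = 37   # IBM Flex System x240, RHEL 6.4 + KVM
esxi_vms_per_host = 27  # HP ProLiant DL380p Gen8, VMware ESXi v5.1

# Relative density advantage: (37 - 27) / 27, roughly 37%
density_uplift = (kvm_vms_per_host - esxi_vms_per_host) / esxi_vms_per_host
print(f"KVM density advantage: {density_uplift:.0%}")

# Hypothetical fleet of 1,000 VMs: hosts each hypervisor would require
fleet = 1000
kvm_hosts = math.ceil(fleet / kvm_vms_per_host)
esxi_hosts = math.ceil(fleet / esxi_vms_per_host)
print(f"Hosts needed: {kvm_hosts} with KVM vs {esxi_hosts} with ESXi")
```

At this scale, the 37% density advantage translates into ten fewer hosts to buy, power, and license.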

5 reasons your IT Infrastructure may leave you asking yourself, well, how did I get here? Or, a few other annoying questions.

In a past life, long before becoming a marketing professional, I was a DJ, spinning and mixing records to pay my way through college (yeah, records!). During this period I became a huge Talking Heads fan. The lyrics of their critically acclaimed song “Once in a Lifetime,” often interpreted as dealing with mid-life crisis, sacrifice and questionable choices, could honestly be questions posed by many IT professionals about the state of their current IT infrastructures. Let’s queue this up.

“You may ask yourself, well, how did I get here?”

Let’s face it: traditional infrastructures have grown increasingly complex and inflexible, making it difficult, in most cases, to be responsive to the fast-changing business needs of many enterprises. Data center sprawl and multitudes of heterogeneous hardware platforms, hypervisors, operating systems and applications, each with its own management system, make it difficult to address changing business requirements, get accurate insight from data, or deliver new offerings or services. It simply takes too long to manually build, set up, deliver and tear down servers, storage and network devices the old-fashioned way. Factor in unpredictable occurrences, like a sudden spike in traffic or transactions, and “You may ask yourself, well, how did I get here?”

“Same as it ever was, same as it ever was”

“You may ask yourself, how do I work this?”

Highly developed and specialized skills are often required to install, provision, monitor and operate the wide variety of systems, storage, network devices and operating systems found in traditional IT infrastructures. Such complexity can hinder responsiveness, business agility and flexibility. In the event IT organizations attempt to share cross-platform responsibilities, the inevitable question arises: “How do I work this?” Or worse, as captured in another verse, “You may ask yourself, my God, what have I done?”

“Same as it ever was, same as it ever was…..”

“Into the blue again after the money's gone”

We know the demands placed on IT to deliver services faster, cheaper and better are becoming more extreme. On the other hand, IT budgets are shrinking at an even faster pace. Many enterprises are pouring bucketfuls of money into their IT infrastructures, attempting to keep up with the need to crunch numbers faster, store more and more data, connect with more networks and float more clouds! Nice in theory; however, an inefficient, poorly optimized infrastructure typically requires additional people to manage, monitor and run it. That costs money! Money’s gone, leaving some saying, “Into the blue again, after the money’s gone, water flowing underground.” Which leads to my next point.

“Same as it ever was, same as it ever was…..”

“Remove the water, carry the water. Remove the water from the bottom of the ocean”

We all know the wisdom of doing something just because it’s always been done a certain way, right? Yet somehow we refuse to recognize or change the approach, and all along we expect the results to somehow magically change. Maybe a refreshing new approach to deploying, provisioning, managing, monitoring and orchestrating IT resources could yield better business results while improving IT productivity. It’s funny: two of our C-suite studies revealed the top three concerns of CEOs and CIOs were exactly the same: develop greater insight and intelligence, improve client intimacy, and improve the skills of employees. It might be a wee bit difficult to achieve better results with the same old methodologies, processes, procedures and infrastructures. Potentially, a dynamic, optimized infrastructure could be the answer. Hmmmm, “Remove the water, carry the water. Remove the water from the bottom of the ocean.” Really? Is this productive?

“Same as it ever was, same as it ever was…..”

“You may ask yourself, where does that highway lead to?”

Ha! Maybe, that’s the question we should be asking ourselves. How do we bridge from where our IT infrastructures are today, to where they will become simplified, responsive and adaptive? It really doesn’t have to be “Same as it ever was, same as it ever was.”

What’s your “Once in a Lifetime” IT issue? I would love to hear what “You may ask yourself?”

Hope you enjoyed this departure from the typical technology blog post. Thanks for reading. Come back soon!

At the Linux Technology Center, our focus has shifted over the years. Initially, the LTC’s emphasis was largely centered on Linux itself. When we started, we spent a lot of time making sure that all of IBM’s products worked with Linux and that Linux ran well on our different families of servers - x86, Power, and mainframes - helping the IBM Software Group take advantage of Linux for their hundreds of software products, and sometimes stepping in with services to make sure they could deploy Linux in their engagements.

From that, we became involved in helping Linux move into new areas. We worked with customers that were interested in deploying Linux for scale-out file systems and utilizing real-time Linux, and helped make enterprise requirements like Linux high performance and scheduling a reality. Over the years, the LTC has worked on open source development well beyond the kernel in areas as diverse as RAS (reliability, availability, serviceability), device support, networking, systems management, security, Samba networking protocol, the toolchain, standards, test and quality. Now that Linux features are mature, we are turning our attention to the new frontiers of open source innovation – big data, cloud, and mobility.

Big Data - Hadoop is an open source software project that enables the distributed processing of large data sets. Its focus is big data and big data analytics. It is something we have strong platforms for, both in the x86 and storage world and on IBM Power servers. Think of IBM InfoSphere BigInsights: it is a software-led initiative, but it uses Linux and Hadoop under the covers, and the LTC is doing the Hadoop development.

Cloud - We are also heavily involved in open cloud computing, working with the OpenStack Foundation, which provides a set of shared resources to grow the footprint of public and private OpenStack clouds.

Mobility - More recently, we have also become involved in mobile computing and we are now learning about the back-end server needs for mobile computing type workloads. It is a completely different programming model – and one that is still emerging.

Over the course of our involvement in open source, we have helped launch consortiums as a way to bring companies together and get projects moving quickly – probably more quickly than they would have if they had developed organically. For example, we were involved in the formation of Linaro, which was focused on Linux for ARM processors that are used in cell phones, cars, and embedded in other devices. And, most recently, we helped kick-start OpenDaylight, a project under The Linux Foundation focused on a common software-defined networking platform. The result of all this work with different open source paradigms is that inside IBM, as well as externally, we are recognized for our expertise both technically and organizationally.

Because of the LTC, IBM is known as being good at working with open source initiatives – we know how to leverage it, the proper way to partner, and, when there is new open source technology that is emerging, people often come to us for help in pulling the project together in a cohesive way. The LTC has become a locus for people to gain assistance in solving their own problems or “scratching their own itch.” Ultimately, that is good for IBM – and something we all can benefit from. That’s what “community” means.

At the IBM Linux Technology Center (LTC), we sometimes forget – because we have been around so long – that for some, the LTC is “new” news. Thanks to the success of Linux and other open source projects, there are people continually joining the open source technology ecosystem. Often, they don’t know our history, so we want to explain how we act as a resource for not only IBM but also for our partners and customers.

In the late 1990s, IBM had begun using open source software in a number of areas - especially the Apache Web Server which IBM was using internally and considering using in its products. IBM’s research teams were doing more and more with open source software and Linux, and our high performance computing customers were beginning to become interested in open source software and Linux, as well.

In 1998, Dan Frye, Vice President, IBM Open Systems Development, took the lead in ascertaining what the company’s participation in open source software should be. Through that effort, the plan to make a substantial commitment to Linux for IBM products and for Linux itself came to fruition. In 2000, IBM decided to invest $1 billion in Linux, and to help improve the operating system by working within the community. The Linux Technology Center was born out of that investment, and I am happy to say, many other companies subsequently became involved and there was an explosion of development around Linux.

The LTC provides a Linux operating system development team for IBM, supporting all IBM server platforms, all IBM server software, and acting as the technical liaison to our Linux distribution partners. IBM is part of the Linux open source community, and works directly with Linux distributors.

The team of developers working with the LTC grew fairly quickly from just a dozen, to 50, to a hundred, to several hundred developers today. Initially, we were looking at basically understanding open source and trying to make meaningful contributions. We were working to make Linux a better operating system for the kinds of things that we knew our IBM customers would want. In those days, that was reliability, scalability, better testing, performance, I/O support – even documentation – and as we did that, we began to understand Linux better and started to use it more widely internally at IBM.

The announcement of IBM’s $1 billion investment and the early work we did enabled Linux to gain acceptance by many large enterprise customers that might have been slower to come to Linux had IBM not aggressively supported it. Today, the Linux focus for the LTC is evolving. For example, we initially worked on the printing subsystem because that was an inhibitor to open source adoption, but that is a done deal now. The things we have to spend time on have completely changed and our efforts tend to be much more strategic these days.

While we continue to channel our efforts to some of the same areas such as making sure Linux supports IBM Power Systems and IBM System z, we are also becoming involved in new open source efforts. It is part of a natural evolution. Linux has grown up.

The open source hypervisor KVM (Kernel-based Virtual Machine) is gaining ground in the enterprise. KVM adoption echoes the early days of Linux since organizations, by now familiar with server virtualization, are evaluating not only hypervisors from the current market leaders but also open source approaches. According to data from IDC, KVM is growing at 150% year over year in terms of unit shipments, with over 100,000 servers already using it worldwide for virtualization.(1)

Expanded use of KVM is also occurring as part of a broader trend in which organizations are opting to deploy more than one hypervisor in their data centers. In this trend, termed “hyperversity” by Gabriel Consulting Group, organizations avoid standardizing on a single hypervisor and are increasingly willing to select the right tool at the right price.

A strong area for KVM is among organizations that already have Linux servers deployed in their data centers, and who are looking to consolidate workloads or build a flexible infrastructure. The reasons for KVM’s early adoption among current Linux users are varied, but can be distilled down to three main considerations – cost, security, and simplicity.

Cost

Since 2007, when KVM was first distributed as a core part of the Linux kernel, it has been considered a mainstream feature of Linux by enterprise users. Today, KVM ships with the major enterprise Linux distributions, including those from Red Hat, SUSE, and Canonical. This enables Linux shops to reduce the cost of ownership of virtualization, since they do not have to purchase a separate hypervisor. KVM can also support high server utilization, resulting in greater asset utilization, which in turn yields cost efficiency.

Security

Security is a concern for all organizations, and KVM has distinct strengths in this area. SELinux (Security-Enhanced Linux) enables Mandatory Access Control, which delivers advanced, need-to-know security: explicit permission is required for access to specific data and functions, rather than permissions being role-based. This isolation is critical if, for example, a malicious program tries to break out of its own virtual machine to access the host or another virtual machine. In addition, EAL4+ certification means that KVM is ready for adoption by governments and other organizations where security certification is required. With the combination of SELinux and its EAL4+ certification, KVM provides strong enterprise-level security.

Simplicity

Out of the box integration: Since KVM comes with Linux distributions, it is pre-integrated and pre-tested so Linux customers do not have to implement a separate hypervisor on their own.

Linux Skills: For organizations that are already deploying Linux in their data center, KVM will be familiar. The KVM tool chain is integrated with the Linux tool chain, and many of the commands used to manage the lifecycle of a virtual machine are the same commands used to manage processes on a Linux server. This means organizations can rely on one set of skills.
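Because each KVM guest runs as an ordinary Linux process (a qemu-kvm process), familiar process tools like ps, top, kill, and taskset apply directly to virtual machines. The sketch below illustrates the idea with a tiny Python helper; the sample ps output and guest names are fabricated for illustration only:

```python
# Sketch: KVM guests show up in ordinary `ps` output as qemu-kvm processes,
# so standard Linux process tooling and scripting applies to VMs as well.
def kvm_guest_lines(ps_output):
    """Return the lines of ps output that belong to qemu-kvm guest processes."""
    return [line for line in ps_output.splitlines() if "qemu-kvm" in line]

# Fabricated sample of `ps aux`-style output (real use would capture it
# via subprocess); the guest names here are hypothetical.
sample_ps = """\
root       951  0.0  0.1 libvirtd
qemu      1204  5.2 12.4 qemu-kvm -name rhel6-guest01 ...
qemu      1377  4.8 12.1 qemu-kvm -name win2008-guest02 ...
root      1420  0.0  0.0 sshd"""

for line in kvm_guest_lines(sample_ps):
    print(line)
```

An administrator who already knows how to script against Linux process listings can reuse exactly that skill to inventory or monitor KVM guests.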

KVM Support for Windows Guests: Despite the misconception that it can only run Linux guests, KVM is in fact a first-class hypervisor for Windows guests as well. In fact, KVM was created originally to support virtualized Windows desktops. The ability to run both Linux and Windows workloads supports enterprise flexibility.

Proven KVM Success

In the six years since it became a core part of Linux, KVM has had time to earn users’ trust. When a technology is new it tends to be mistrusted, but greater acceptance is building now as more enterprise use cases for KVM are documented and shared.

As virtualization has grown to become a reliable mainstream approach to reducing costs, maintaining or even expanding performance and delivering flexibility to support business needs, it has become strategic to IT organizations around the world. At the same time, Red Hat and IBM have become leaders in Kernel-based Virtual Machine (KVM) development and promotion, and Red Hat has distinguished itself by delivering the KVM hypervisor and corresponding management tools in Red Hat Enterprise Virtualization.

According to a recent whitepaper by analyst firm IDC entitled “KVM: Open Virtualization Becomes Enterprise Grade”, sponsored by IBM and Red Hat, KVM has made impressive progress since its inclusion in the Linux kernel in 2007, and adoption has grown especially in key use cases such as Linux server consolidation and cloud computing. The IDC whitepaper states that virtual servers outshipped physical servers by a ratio of more than 2:1 in 2012. The firm’s numbers also report that 55% of all installed workloads as of the end of 2011 were virtualized, and that new workloads are being virtualized at a rate of 67%. IDC also finds that hypervisors competitive with VMware, such as KVM, are offering enterprise customers more and more choice.

Red Hat and IBM’s long collaboration, originally formed around Red Hat Enterprise Linux, has expanded to focus on virtualization as well. The two industry leaders began collaborating around open virtualization many years ago, and this has continued to evolve with the fast pace of innovation delivered through KVM. Both organizations play a leadership role in the Open Virtualization Alliance (OVA), which they helped to form in 2011 as founding members. The OVA promotes the growth of KVM’s ecosystem in the marketplace; as membership in the OVA has grown and become more diverse, it has opened opportunities for KVM deployment in areas such as server, storage, networking, management, operating systems, security, and business applications. KVM has also established itself as one of the most popular foundations for Infrastructure-as-a-Service (IaaS) clouds, and IBM has utilized KVM and Red Hat Enterprise Virtualization as the underpinnings of its own public cloud offering, IBM SmartCloud Enterprise.

In mid 2012, Red Hat Enterprise Linux 5 and Red Hat Enterprise Linux 6, in conjunction with the KVM hypervisor on IBM Systems, were each also separately awarded Common Criteria Certification at Evaluation Assurance Level 4+. These certifications paved the way for the KVM hypervisor and open virtualization to be used in homeland security projects, command-and-control operations and throughout government agencies.

Graphs in the IDC whitepaper also show that many users would like to combine multiple hypervisors, with as many users choosing to deploy an open source secondary hypervisor as those who would deploy a proprietary one. Over half of those surveyed said they would choose to build a cloud on a new hypervisor, as opposed to their existing system.

These proof points signal a bright future for KVM, as open virtualization takes its place in the enterprise.

Several key areas of strong adoption have emerged for the open source hypervisor.

Over the past year, we have seen a marked shift in the conversation around KVM (Kernel-based Virtual Machine). Questions early on focused on whether the open source hypervisor could be trusted as an enterprise-grade virtualization solution. We think that question has been answered with a resounding “Yes, KVM is ready for business!” Most recently, we demonstrated with a first-ever virtualized x86 TPC-C benchmark result that even the most demanding and complex workloads can be virtualized with KVM. Nothing, though, speaks better to adoption in the enterprise than clients actually using it. Today, many IBM clients have deployed KVM with IBM hardware and/or software, and you can read their success stories here.

Now, the questions around KVM have changed. Today, clients want to understand where KVM is being used, who is using it, and why. In answering their questions, we have identified several areas in which KVM is frequently adopted. Here is a brief look at a few use case scenarios leading the way in KVM adoption.

Companies with Linux servers in their data centers – KVM is the natural choice for companies that already have Linux servers since:

KVM is an integrated feature of any current enterprise distribution of Linux, including Red Hat Enterprise Linux, SUSE Linux Enterprise Server, and Canonical Ubuntu LTS;

It is less expensive to deploy KVM virtual machines on existing Linux servers than to purchase new proprietary virtualization technology;

The KVM administration interface is very Linux-like; and KVM can support both Linux and Windows workloads.

Cloud service providers and organizations building their own private clouds – At its core, a move to the cloud is about cutting costs and enabling flexibility. Both cloud service providers and organizations building private clouds need:

The ability to achieve a high level of virtual machine density, which KVM also enables, as demonstrated in a recent SPECvirt benchmark.

Multi-hypervisor environments – After organizations have become familiar with server virtualization, they are often open to the idea of a second hypervisor, particularly if it can provide them with expanded benefits and lower costs. According to the Gabriel Consulting report, “‘Hyperversity’ Rages On,” based on Gabriel’s annual and independent x86 Data Center Survey, two-thirds of the respondents are using two or more hypervisors. Many organizations have a second-source policy for their major IT components, including hypervisors.

Virtual Desktop Infrastructure (VDI) – KVM’s strengths shine in the virtual desktop arena because of the weight placed on sharing resources, high reliability, security, and performance. Vissensa, a managed service provider, successfully provisions flexible desktops using a virtual desktop solution with a KVM implementation of Virtual Bridges VERDE.

Business-Critical Applications – Organizations that want to create a responsive business infrastructure for the future are increasingly seeing benefits in expanding virtualization to their mission-critical systems. For example, using Red Hat Enterprise Virtualization, Casio Computer Company has not only decreased its costs but also addressed business management challenges, and laid the groundwork for a future cloud environment.

KVM’s Place in the Enterprise

While there is still room for further growth in terms of KVM’s market penetration, these five use cases represent a base in the enterprise on which KVM is building a strong presence. We will explore each of these use cases in more detail in future blogs.

"What if...you could virtualize your mission-critical applications while ensuring or improving service levels?"

While virtualization has made significant penetration into the data center, there are still workloads which have yet to fully exploit its benefits. The previous blog entry identifies some of these workloads: database, OLTP, analytics, and ERP. In many cases these workloads have yet to be migrated to virtualized environments due to logistical issues or performance concerns. To demonstrate that virtualized environments can host these types of enterprise workloads without significant performance sacrifices, sound, realistic proof points need to be established that highlight these capabilities.

In an effort to define such a proof point, the IBM Linux Technology Center (LTC) recently completed the first-ever formal publication of the TPC-C benchmark, which showcases an OLTP workload, in an x86 virtualized environment. In this proof point, a two-socket Intel Xeon system (IBM x3650 M4) was able to achieve 1,320,082 transactions per minute (tpm-C) while performing in excess of 300,000 I/O operations per second. This level of performance exceeds 94.8%* of the posted two-socket TPC-C publications at this time and is only 12.2% lower than a separate IBM result published just last year, which obtained a score of 1,503,544 tpm-C on a similarly configured non-virtualized two-socket Intel Xeon system. The virtualized TPC-C publication also achieved a price/performance ratio of $0.51/tpm-C, which is lower than the comparable non-virtualized system’s ratio of $0.53/tpm-C and is the lowest price/performance ratio ever achieved by IBM.
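A quick sanity check of those figures (all numbers taken from the publications cited above) shows both the modest throughput gap and the price/performance advantage:

```python
# Published TPC-C results cited above
virt_tpmc = 1_320_082      # virtualized: RHEL 6.4 + KVM on IBM x3650 M4
nonvirt_tpmc = 1_503_544   # comparable non-virtualized IBM result

# Virtualization throughput gap: (1,503,544 - 1,320,082) / 1,503,544
gap = (nonvirt_tpmc - virt_tpmc) / nonvirt_tpmc
print(f"Throughput gap vs bare metal: {gap:.1%}")

# Price/performance: the virtualized configuration is cheaper per tpm-C
virt_price_perf = 0.51     # $/tpm-C, virtualized
nonvirt_price_perf = 0.53  # $/tpm-C, non-virtualized
savings = nonvirt_price_perf - virt_price_perf
print(f"Price/performance advantage: ${savings:.2f}/tpm-C")
```

In other words, virtualizing the workload cost about an eighth of the raw throughput while actually lowering the cost per transaction.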

To achieve this exceptional level of performance and price/performance, this publication exploited the integrated KVM virtualization technology found in Red Hat Enterprise Linux 6.4 and the virtualization-friendly x3650 M4. IBM’s System x3650 M4 is designed to support virtualization of customers’ most important business workloads, delivering outstanding uptime, performance, scalability, I/O flexibility and rock-solid reliability.

With the enterprise advancements now available in KVM, as demonstrated by this proof point, customers can and should begin the transition of their mission-critical applications to virtual environments. As the virtualization platform used to produce the first-ever virtualized x86 TPC-C publication, the KVM technology in Red Hat Enterprise Linux is ideally suited for this role.