Virtualization@IBM

Blog Authors:
IBM Software Defined, Virtualization+IBM, Nitin Gaur, Jean Staten Healy, John Foley, Sam Van Alstyne, Alicia Wood
Virtualization combined with Integrated Service Management helps you use your resources effectively, manage your infrastructures efficiently and gain the flexibility to meet ever-changing business demands.
This blog is for the open exchange of ideas relating to virtualization across the entire infrastructure. Articles written by IBM's virtualization experts serve as conversation starters. Topics can range from the latest technologies for server consolidation and tools for simplified systems management and monitoring, to automating IT systems to respond to changing business conditions and cloud-based solutions for the "virtual" enterprise.

KVM (Kernel-based Virtual Machine) is gaining traction in the enterprise as a virtualization solution that provides high performance, scalability, and cost efficiency. But misconceptions still abound about this open source hypervisor. Some falsehoods continue to be perpetuated by organizations offering competing products, and others because KVM is maturing quickly and the up-to-date, correct information is not yet widely known. Here, we tackle some of the most persistent myths about KVM - because it’s time to set the record straight.

Myth #1: KVM is a type 2 hypervisor that is hosted by the operating system, and isn't a bare metal hypervisor.

This is a persistent myth, but the truth is that KVM actually runs directly on x86 hardware. People assume it is a type 2 hypervisor because one of the ways it is packaged is as a component of Linux - so you can be running a Linux distribution and then, from the command-line shell prompt or from a graphical user interface on that Linux box, start KVM. The interface makes it look like a hosted hypervisor running on the operating system, but the virtual machine is running on the bare metal. The host operating system provides a launch mechanism for the hypervisor and then engages in a co-processing relationship with it. In a sense, the hypervisor takes over part of the machine and shares it with the Linux kernel.

On x86 hardware, KVM relies on the hardware virtualization instructions that have been in these processors for seven years. Using these instructions, the hypervisor and each of its guest virtual machines run directly on the bare metal, and most of the resource translations are performed by the hardware. This fits the traditional definition of a "Type 1," or bare metal, hypervisor.
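
If you want to see this hardware dependence for yourself, the short sketch below (a minimal, Linux-only illustration, not part of any product mentioned here) checks /proc/cpuinfo for the Intel VT-x (vmx) or AMD-V (svm) flags that KVM requires, and for the /dev/kvm device node that the kvm kernel module exposes once it is loaded:

```python
import os
import re

def hw_virt_flags(cpuinfo_path="/proc/cpuinfo"):
    """Return the hardware virtualization flags advertised by the CPU:
    'vmx' for Intel VT-x, 'svm' for AMD-V."""
    with open(cpuinfo_path) as f:
        return set(re.findall(r"\b(vmx|svm)\b", f.read()))

if __name__ == "__main__":
    flags = hw_virt_flags()
    if flags:
        print("Hardware virtualization flags:", ", ".join(sorted(flags)))
    else:
        print("No vmx/svm flags found; KVM cannot use hardware virtualization here.")
    # The kvm kernel module exposes /dev/kvm when loaded; user space
    # (for example qemu-kvm) talks to this device to launch guests
    # directly on the hardware virtualization extensions.
    print("/dev/kvm present:", os.path.exists("/dev/kvm"))
```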

You can also get KVM packaged as a standalone hypervisor - just like VMware ESX is packaged - although initially KVM was not available in that form. One way of getting it this way is with Red Hat Enterprise Virtualization (RHEV).

Myth #2: KVM only runs Linux workloads.

This myth is also pretty persistent, probably because KVM is so closely associated with Linux, was developed as part of the Linux kernel, and is offered by Linux distributors. A lot of people assume it runs Linux virtual machines and nothing else, but in fact KVM runs all types of workloads. It runs any workload that will run on an x86 box, including three different versions of Windows, several versions of Linux, and other operating systems such as BSD, Mac OS, and even NetWare. On x86, a KVM virtual machine looks like an x86 computer. It runs Windows and Linux equally well, and it continues to get even better - in some respects better than its competitors.

Myth #3: KVM is only available for x86 platforms.

This is a reasonable assumption, because when KVM was first merged upstream into Linux it was directly associated with the x86 processors from Intel and AMD. It was a couple of years before anybody started thinking about porting KVM to another platform, although it was not very long before somebody developed a para-virtual implementation of KVM for Linux that would run on older hardware. Although commercially supported only on x86 today, there has been upstream work for other platforms, and the beginnings of additional platform support are in place. KVM, like Linux itself, is perfectly capable of running on many different platforms, and someday soon we would like to see it ported to the ARM platform.

Myth #4: KVM is only available from Red Hat.

Not true. KVM was available first from Debian, and the first supported release was from Ubuntu. It is also available now from SUSE in SLES, from the Fedora Project, and from a number of other distributions. Red Hat is the leading distributor of KVM right now, but it will continue to be available from many sources.

Myth #5: KVM is only available as part of enterprise Linux distributions.

Again, not true. It is also available as a purpose-built standalone hypervisor, with just enough packages and user space to run virtual machines, plus a restricted shell and a very restricted user interface that allow remote management of the host running the virtual machines. The entire host image, including the kernel, is stateless and is downloaded to the host every time it boots. That is available as RHEV-H (Red Hat Enterprise Virtualization Hypervisor). It is a very specific hypervisor-only distribution, and it doesn't even look like Linux. That appeals to people who are not familiar with Linux, as well as to people who want a locked-down hypervisor and don't want all the extra things that come with enterprise Linux.

Myth #6: KVM is not secure.

Of course, this is a myth. KVM has all the security features that VMware has plus some more - such as mandatory access control turned on by default. In fact, Red Hat Enterprise Linux 5 with the KVM hypervisor on IBM Systems has just been awarded Common Criteria Certification at Evaluation Assurance Level 4+.

But the myth about security persists because KVM is based on Linux - and Linux carries a whole bunch of baggage with it. There are several reasons for this.

One is that some people think open source code is not secure because anyone can audit the code, find security entry points and potential bugs, and escalate them into security exploits. However, auditing source code has an overwhelming net benefit to security: when more people audit code, that code becomes more secure. When you use a proprietary hypervisor with closed source code, you never get to review that code, so you have no idea what has been audited for security and what hasn't. And furthermore, anybody with a disassembler can disassemble the binary image and start looking through the assembly code for security holes.

The second reason people say it is not secure is that when KVM is packaged as part of an enterprise Linux distribution, the distribution can include additional components such as an HTTP server, more than one shell, programming languages such as Perl and Python, and almost too many tools to mention. In this case, you have to take the Linux distribution - even if it is an enterprise Linux distribution - and spend some time locking it down yourself, or get something like RHEV-H, which is a much smaller component and is locked down by default.

The bottom line is that KVM is not insecure simply because it is based on enterprise Linux, but you might want to remove some packages that could have issues in a Linux distribution - or simply get the RHEV-H version.

Myth #7: There are no virtualization management tools available for KVM.

This was actually largely true until a year ago, but it has changed dramatically since then. From Red Hat, there is RHEV-M (Red Hat Enterprise Virtualization Manager), which runs on Linux and Windows. There is also IBM Systems Director VMControl, which became available in December, IBM SmartCloud Provisioning, and a number of other tools such as xCAT (Extreme Cloud Administration Toolkit), an open source management tool developed by IBM.
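
Many of these tools build on libvirt, the open source virtualization API most commonly used to drive KVM hosts. As a hedged illustration (assuming the libvirt-python bindings are installed and a local KVM host is running), this sketch lists every guest defined on a host and whether it is running:

```python
import libvirt  # libvirt-python bindings; talks to the libvirt daemon

# Connect read-only to the local KVM/QEMU hypervisor.
conn = libvirt.openReadOnly("qemu:///system")
if conn is None:
    raise SystemExit("Failed to connect to qemu:///system")

# Enumerate every defined guest and report its state.
for dom in conn.listAllDomains():
    state = "running" if dom.isActive() else "shut off"
    print(f"{dom.name():24s} {state}")

conn.close()
```

Graphical and web-based tools layer scheduling, migration, and multi-host views on top of similar plumbing.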

Whether you're just testing the waters or have already dived headfirst into server virtualization, cost reduction continues to be a primary driver of data center optimization. Although virtualization is helping to reduce capital expenses, IT managers are getting stung with increasingly expensive virtualization software licensing costs.

One of the key value propositions for KVM is the potential to reduce the overall cost of ownership for virtualization solutions. To assess the potential savings, you can do a pricing analysis of a KVM stack compared with its competitors (VMware and Microsoft). That sounds easy, but unfortunately it is more complex than it seems: virtualization software vendors have different pricing terms and conditions, software is rarely sold at list price, and each data center is configured differently, with various platforms, workloads, and other factors to account for.

There isn't a standard formula for calculating or comparing competitive hypervisor stacks, but one way to view the cost differences is through an analysis of publicly available pricing information. Although this comparison won't give you the exact cost differences for your particular environment, it will show you the overall magnitude of the cost differences between the hypervisor solutions and how each solution is priced.

Pricing Analysis Methodology

The first step is to identify a standard server configuration on which to price each of the software solutions. This is important because each vendor has significantly different terms and conditions based on the server configuration. For this analysis we chose a standard IBM System x server with 2 sockets and 48GB of RAM (model: IBM x3550 M3, 2x6C, 2x500GB HDD, 48GB RAM, 2x 6Gb HBA, ServeRAID).

The next step is to identify the hypervisor stacks to compare, and determine the price and licensing structure for each solution. The figure below shows how we've isolated the hypervisor and hypervisor management solutions along with their associated license, subscription and support prices. To get as close as possible to an accurate comparison, we included the server, operating system, hypervisor, hypervisor management tool and systems management tool for each stack. You'll see that the KVM stack uses the version of KVM that comes with Red Hat Enterprise Linux combined with IBM management tools - IBM Systems Director and IBM Systems Director VMControl.
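
To make the arithmetic concrete, here is a small sketch of the per-server comparison described above. The component prices are placeholders for illustration only, not the figures from our analysis - substitute the published list prices you collect for each component:

```python
# Hypothetical per-server prices (license plus multi-year subscription and
# support) for each component of a stack. Replace with real list prices.
stacks = {
    "KVM stack (RHEL + IBM Systems Director VMControl)": {
        "os_and_hypervisor": 3000, "hypervisor_mgmt": 600, "systems_mgmt": 1000,
    },
    "VMware stack (vSphere + vCenter)": {
        "os_and_hypervisor": 6000, "hypervisor_mgmt": 5000, "systems_mgmt": 1000,
    },
}

# Total each stack for one 2-socket server, then compare against the most
# expensive option to express the difference as a percentage.
totals = {name: sum(parts.values()) for name, parts in stacks.items()}
ceiling = max(totals.values())
for name, total in sorted(totals.items(), key=lambda kv: kv[1]):
    savings = 100 * (ceiling - total) / ceiling
    print(f"{name}: ${total:,} per server "
          f"({savings:.0f}% below the most expensive stack)")
```

A real analysis would multiply these per-server totals across the number of hosts in each scenario and fold in per-socket or per-VM licensing terms where a vendor uses them.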

Pricing Analysis Results

Now that we have all of the pricing data, let's analyze different operating scenarios. For the purposes of simplicity, the following basic parameters were selected:

Lastly, for Scenario #3 (100% Windows-based workloads), the results were a bit different:

Conclusion and takeaways

Before we summarize the results, please keep in mind that this analysis showcases the overall magnitude of the cost differences between the different stacks in a general setting. It is always important to consider the configuration of your data center, your existing enterprise software agreements and your workload needs when conducting a detailed cost analysis.

Now in terms of the pricing analysis, the results above show:

For 100% Linux-based environments there is an especially strong cost case for KVM, with savings upwards of 42% compared with VMware.

For 100% Windows-based environments, Microsoft is less expensive, but it is important to note that if you're running a 100% Windows shop, it is unlikely you would be using KVM to manage that environment.

Lastly, you will notice that VMware is the most expensive solution across every scenario.

As you consider your virtualization needs, the bottom line is that KVM can help protect you from the sting of increasingly expensive virtualization licensing costs, especially for Linux and mixed workload environments.

Red Hat is excited to announce today that the Kernel-based Virtual Machine (KVM) hypervisor, which is incorporated in both Red Hat Enterprise Linux and Red Hat Enterprise Virtualization, has again achieved top performance results. This latest performance mark was achieved on the IBM® System x3850 X5 host server with QLogic® QLE 256x Host Bus Adapters, the Red Hat® Enterprise Linux® 6.3 hypervisor and Red Hat Enterprise Linux 6.3 guests. During testing by IBM, KVM demonstrated its ability to handle I/O rates at the storage performance levels required by enterprise workloads, with four guests handling more than 1.4 million I/Os per second (IOPS). The results are further proof that virtualized workloads can maintain consistently high performance compared with bare metal deployments.

The close relationship between the hypervisor and the Linux kernel gives KVM a dual design, unifying the host and hypervisor modes. Red Hat Enterprise Linux supports multiple virtualization use cases, allowing customers to choose when and where to use virtualization. By leveraging the Linux operating system, KVM keeps virtualization overhead to a minimum without sacrificing performance. The Red Hat Enterprise Linux 6.3 release also supports an industry-leading 160 virtual CPUs per virtual machine, allowing even large workloads to be virtualized.

These tests, run on the Red Hat and IBM technology combination described above, demonstrated that enterprise workloads can be efficiently migrated into a virtualized environment while still delivering high performance. The KVM host server, an IBM System x3850 X5 with four Intel Xeon® E7-4870 processors (sockets) and 256 GB of memory, ran on a storage back-end capable of delivering 1.4 million IOPS.

Single and multiple virtual machines were tested, using Red Hat Enterprise Linux 6.3 on all guests and on the host. Both reads and writes were included in the test workload in order to more accurately simulate the demands of an enterprise workload. Using only four guests, KVM was able to achieve up to 1.4 million IOPS for random I/O requests of 8KB in size, and more than 1.6 million IOPS for random requests of 4KB in size. The KVM performance matched the physical operating system performance of this setup, and KVM was bounded by the performance of the test storage back-end. Using a single guest, KVM was able to achieve about 800,000 IOPS for random I/O requests of 8KB in size, and more than 900,000 IOPS for random requests of 4KB or less. It should be noted that VMware recently indicated that it could achieve one million IOPS for a single host running six virtual machines on a vSphere™ 5.0 host.1
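
To put those figures in perspective, a little back-of-the-envelope arithmetic (not part of the benchmark itself) converts an IOPS rate at a fixed block size into the aggregate bandwidth the storage back-end had to sustain:

```python
def iops_to_gb_per_sec(iops, block_kb):
    """Aggregate bandwidth implied by an IOPS rate at a fixed block size,
    expressed in decimal gigabytes per second."""
    return iops * block_kb * 1024 / 1e9

# Headline multi-guest results from the tests described above.
for iops, block_kb in [(1_400_000, 8), (1_600_000, 4)]:
    gbs = iops_to_gb_per_sec(iops, block_kb)
    print(f"{iops:,} IOPS at {block_kb}KB ~ {gbs:.1f} GB/s of storage traffic")
```

At 8KB requests, 1.4 million IOPS works out to roughly 11.5 GB/s of sustained storage traffic moving through just four guests.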

Average latency rates for both tests remained low and constant across different I/O request sizes, demonstrating that block I/O performance on KVM can remain predictable, even with a changing number of guests. As the number of guests and I/O requests increases, block I/O performance on the KVM hypervisor is able to scale to match demand load.

Red Hat Enterprise Virtualization for Desktops and Servers is the first enterprise-ready, fully open source virtualization platform. Red Hat Enterprise Virtualization offers industry-leading performance and scalability for real-world enterprise applications including Oracle, SAP and Microsoft Exchange, and includes enterprise virtualization management features such as live migration, high availability, load balancing and power saving. Because Red Hat Enterprise Virtualization is available through Red Hat's software subscription model, users benefit from lower acquisition and ownership costs for the same or better feature set when compared with other solutions. The platform recently entered beta for its upcoming 3.1 release.

Because Red Hat Enterprise Virtualization and Red Hat Enterprise Linux incorporate the same KVM hypervisor, those systems using Red Hat Enterprise Virtualization are gaining the same virtualization technology that achieved the top performance posted by the Red Hat Enterprise Linux KVM and IBM systems used for this performance trial.

Enterprise adoption of KVM is growing, and KVM features are continually being updated and expanded. Development in KVM is focused not only on high performance - the must-have for enterprise adoption - but also on support for application developers and systems administrators, storage, usability, high availability, disaster recovery, and security. As an active participant in the KVM development community, IBM continues to dedicate its considerable expertise to open virtualization with KVM. (Learn more about the IBM KVM commitment.) Here is a look at some of the KVM features we expect to see in upcoming enterprise Linux releases - and why they will matter to enterprise users.

Support for Application Developers and Systems Administrators

Among the new features that have emerged upstream in KVM is a new tracing framework for the QEMU-KVM subsystem that is important for developers and administrators who want to trace their workloads. It allows tracing all the way from the guest into the host kernel and back. For developers, it is a way to debug issues; for administrators, it is useful for debugging issues in the field. For example, if they are experiencing poor performance or something unexpected in the field, they can turn tracing on and then send the trace logs to the developers for remediation. Or, if administrators want to understand how an application is working in their data center, they can turn tracing on and evaluate it themselves. It provides a lot of information on how execution proceeds through the different components. If they run a trading application, they can see how much time is spent processing a packet and how much time is spent in the database.
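
The host-kernel end of that tracing story is already visible today through KVM's ftrace tracepoints. As a rough sketch (assuming root access, a mounted debugfs, and a kernel built with KVM tracepoints - this shows only the standard tracefs interface, not the new QEMU framework itself), you can enable and stream those events like this:

```python
# Enable all KVM tracepoints (VM entries/exits, interrupt injection, MMU
# events, and so on) and stream them as text. Requires root.
TRACEFS = "/sys/kernel/debug/tracing"

def enable_kvm_tracing():
    with open(f"{TRACEFS}/events/kvm/enable", "w") as f:
        f.write("1")

def stream_trace(n_lines=20):
    # trace_pipe blocks until events arrive, then yields them line by line.
    with open(f"{TRACEFS}/trace_pipe") as f:
        for _ in range(n_lines):
            print(f.readline(), end="")

if __name__ == "__main__":
    enable_kvm_tracing()
    stream_trace()
```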

Why it matters

This is really good for troubleshooting anything in the data center, where you can have a complicated situation with interaction between networking, storage, hypervisors, and applications.

Storage

IBM is working on something called KVM FS, an integrated clustered file system based on the Gluster File System that Red Hat acquired recently. KVM FS exports a block device directly to the guest as a file, and the guest treats it as a virtual block image. Then, when the guest sends a block command (usually a SCSI command) to the file, the host benefits from storage offload capability, meaning that KVM FS passes the block command directly to the hardware. VMware calls that capability VAAI, storage offload integration, or the storage offload API. The second thing KVM FS enables is a shared file system, so when you migrate the guest, its block device is still available on the new host. This is scheduled to be done by the end of this year.

Why it matters

This will be important to everybody except the most high-end customers. On the high end, we already have enterprise storage and clustered storage like GPFS and SONAS, and we have NFS filers that do most of those things, as well as other things not directly associated with virtualization. But those are big, expensive products that are hard to install and are designed for large data centers.

There are four things about KVM FS that are important. The first is that it is integrated, so you don't need to install it - and clustered file systems are typically really difficult to install. Second, it is designed to serve up virtual disk images: it knows that it is working with virtual disk images that represent a virtual machine, and it treats them that way specifically. When you back up a file, it knows that you are backing up a virtual machine, and it knows that you need to do something like take a snapshot of the virtual machine and then back up the base image. Third, it allows you to migrate a virtual machine and still have access to the virtual machine image file. That is why it's so notable and such a good feature. And fourth, VMware has a feature called VMFS, an integrated clustered file system that does the same thing, so KVM FS is significant because it will give KVM something directly analogous to VMware's VMFS.

Usability and Device Support

VFIO (Virtual Function I/O) is a new subsystem for passing I/O devices directly through to guests. Right now we do PCI pass-through, where you can take an I/O device and give control of that device to a guest - and we do that with some of our benchmarks, because you can get very good performance on an I/O device when the guest is interacting with it directly. However, it is difficult to do, and not all devices support it. VFIO makes it really simple to pass a device to a guest, and it broadens the support significantly because of the way it is architected - if a device is supported in the kernel, it will almost certainly be supported in the guest.
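
For a sense of what today's manual flow looks like, the sketch below detaches a device from its host driver and registers its IDs with the vfio-pci driver through sysfs. The PCI address and vendor/device IDs are hypothetical placeholders (take yours from the output of lspci -nn), and the vfio-pci module must already be loaded:

```python
import os

# Hypothetical example device -- replace both values with your own.
PCI_ADDR = "0000:03:00.0"     # PCI address of the device to pass through
VENDOR_DEVICE = "8086 10fb"   # vendor and device ID, space separated, in hex

def sysfs_write(path, value):
    with open(path, "w") as f:
        f.write(value)

# Detach the device from whatever host driver currently owns it.
unbind_path = f"/sys/bus/pci/devices/{PCI_ADDR}/driver/unbind"
if os.path.exists(unbind_path):
    sysfs_write(unbind_path, PCI_ADDR)

# Tell vfio-pci to claim any device with this vendor/device ID pair; the
# device can then be handed to a guest by the virtualization stack.
sysfs_write("/sys/bus/pci/drivers/vfio-pci/new_id", VENDOR_DEVICE)
```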

Why it matters

If you are a developer doing PCI pass-through today to improve I/O device performance, this is going to simplify your life quite a bit.

High Availability and Disaster Recovery

There is a feature upstream now that does continuous replication for mirroring and disaster recovery in virtual environments. It has been submitted by a company called Zerto, which has a product for VMware. Zerto needs to refactor its patches to get them accepted, but I think they will do that. It is really good to see them submitting this for KVM.

Why it matters

Zerto has a good product. This patch set continuously mirrors the storage. It will be very helpful because the Zerto product, combined with live migration, provides a failover scenario that is really effective.

Security

Another improvement on the way is support for the Trusted Platform Module, which provides encryption key storage and support for encryption. This enables storing encryption keys in a tamper-resistant piece of hardware and then using those keys to validate software images.

The specific feature on the way is called a "static root of trust," and it is the first step in Trusted Computing. It means that the first thing you do is validate the boot block to make sure it has not been tampered with, and then you validate the boot loader - and if the boot loader is good, it validates the kernel that it boots. At that point you can validate other software that you load, extending the trust chain. The reason it is static is that it has to start at boot-up; you can't re-establish that chain of trust until you boot the machine up again.
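
As a toy illustration of that chain-of-trust idea (deliberately simplified - a real TPM extends measurements into Platform Configuration Registers rather than comparing plain hashes, and the stage names here are invented), each stage below is measured before it runs, and tampering with any stage breaks the chain:

```python
import hashlib

def measure(component: bytes) -> str:
    """'Measure' a boot component by hashing its contents."""
    return hashlib.sha256(component).hexdigest()

# Known-good measurements, recorded when the system was provisioned.
KNOWN_GOOD = {
    "boot_block": measure(b"boot block code v1"),
    "boot_loader": measure(b"boot loader code v1"),
    "kernel": measure(b"kernel image v1"),
}

def verify_chain(images: dict) -> bool:
    # Validate each stage in boot order and stop at the first mismatch,
    # because a compromised stage cannot be trusted to measure the next.
    for stage in ("boot_block", "boot_loader", "kernel"):
        if measure(images[stage]) != KNOWN_GOOD[stage]:
            print(f"chain of trust broken at: {stage}")
            return False
        print(f"{stage}: measurement OK")
    return True

# A tampered kernel image fails validation at the last link.
verify_chain({
    "boot_block": b"boot block code v1",
    "boot_loader": b"boot loader code v1",
    "kernel": b"kernel image v1 + rootkit",
})
```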

Why it matters

IBM has been shipping this Trusted Computing module on its x86 hardware for several years, and the U.S. government is moving to require support for Trusted Computing in the computer systems it purchases. We are using it to prevent rootkits, which are foreign code routines that maliciously modify the kernel or some piece of trusted software. However, some developers in the kernel community are skeptical, because a vendor can use this to prevent users from modifying the software on their computer or device - so a vendor could lock down a device or protect digital media. Both of those things are true; a vendor could do both. This argument has been going on for a number of years. But now, with Red Hat's support, Trusted Computing is going into upstream Linux and it is being added to KVM.

Last month I blogged about the surprising level of hypervisor diversity that we’re seeing in use by customers – as shown by a report published by Gabriel Consulting and based on a survey of hundreds of IT professionals.

Now I want to discuss what’s behind this – why are so many customers mixing x86 hypervisors, and what are their reasons?

In essence, it comes down to three factors - lower cost, technical differences, and customers' ability to manage multiple hypervisors.

The first and most obvious factor is cost. We’re seeing the familiar cycle of high-priced proprietary technologies being challenged by lower-cost open source innovation – the same situation that played out with Linux, Eclipse and Apache.

Although the Gabriel Consulting report shows that customers value proprietary hypervisor technology, it also shows that the costs of implementing this everywhere can often be too high and half of the respondents agreed with the statement “Cost issues make standardizing on one suite too expensive…”

We’re also hearing this from our customers, from large banks to cloud providers. Cost is one of the main reasons they’re adopting KVM.

But according to the report, while cost is a driver for hypervisor diversity, it isn't the major driver. Intriguingly, that's technical factors.

Technical differences matter even more

71% of respondents agreed with the statement “Technical differences between various solutions drive hypervisor diversity”.

The first and most obvious factor behind this is the affinity between the hypervisor and the operating system. This is clearly a major factor for KVM and Linux, as well as for Hyper-V and Windows. Hypervisors and operating systems need to perform many of the same tasks - starting processes, managing memory, accessing devices. If Linux comes with the hypervisor already included, integrated, and tested, then that's a strong reason for adopting KVM.

The second factor is that hypervisors such as KVM, which are based on an existing operating system, don't need to reinvent the wheel and can exploit the scalability, security and device support that's already there. This is one of the reasons why KVM holds the top seven SPECvirt performance benchmark results - it leverages Linux, which already has the scalability.

The final factor is how well suited the hypervisor is to supporting cloud computing. The Gabriel Consulting report saw a correlation in the data between KVM and private cloud projects, and speculated on whether there is something about KVM that makes it more amenable to driving private clouds.

We think that the scalability and high VM density provided by KVM, along with its open approach and low cost, makes it a great choice for cloud computing. This is why IBM uses KVM as the hypervisor for both its public cloud, IBM SmartCloud Enterprise, and also its largest internal private cloud, the IBM Research Compute Cloud.

Managing multiple hypervisors

Of course, the adoption of multiple hypervisors, like the prevalence of multiple operating systems, means that customers have to be able to manage the hypervisor diversity successfully.

In the early days of adoption, IT shops are likely to use the virtualization management tools most closely connected with their hypervisors - VMware's vCenter, Microsoft's System Center, and Red Hat Enterprise Virtualization Manager.

As the hypervisor diversity trend continues, this means having multiple management tools and multiple skill sets.

The idea of managing a mixed hypervisor environment from a single pane of glass then becomes increasingly attractive – whether from ISVs in the Open Virtualization Alliance such as Zenoss and ManageIQ, or enterprise systems management vendors such as IBM with Tivoli and IBM Systems Director VMControl.
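
One concrete way to approach that single pane of glass is through libvirt, which ships connection drivers for several hypervisors, so a single script or tool can inventory a mixed environment. A hedged sketch (the host names are hypothetical placeholders, and the ESX and Hyper-V drivers must be present in your libvirt build):

```python
import libvirt

# One URI per hypervisor type; the remote hosts below are hypothetical.
URIS = [
    "qemu:///system",                        # local KVM
    "esx://esx1.example.com/?no_verify=1",   # VMware ESX host
    "hyperv://hyperv.example.com/",          # Microsoft Hyper-V host
]

for uri in URIS:
    try:
        conn = libvirt.openReadOnly(uri)
    except libvirt.libvirtError as err:
        print(f"{uri}: connection failed ({err})")
        continue
    # List every guest visible through this connection.
    names = sorted(dom.name() for dom in conn.listAllDomains())
    print(f"{uri}: {len(names)} guests -> {', '.join(names) or 'none'}")
    conn.close()
```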

Whatever happens, it looks like hypervisor diversity is here to stay for at least the next few years – and that promises to make for interesting times.