Safe Harbor Statement: The information on IBM products is intended to outline IBM's general product direction and it should not be relied on in making a purchasing decision. The information on the new products is for informational purposes only and may not be incorporated into any contract. The information on IBM products is not a commitment, promise, or legal obligation to deliver any material, code, or functionality. The development, release, and timing of any features or functionality described for IBM products remains at IBM's sole discretion.

Tony Pearson is an active participant in local, regional, and industry-specific interests, and does not receive any special payments to mention them on this blog.

Tony Pearson receives part of the revenue proceeds from sales of books he has authored listed in the side panel.

Inside System Storage -- by Tony Pearson

Tony Pearson is a Master Inventor and Senior IT Specialist for the IBM System Storage product line at the IBM Executive Briefing Center in Tucson, Arizona, and a featured contributor to IBM's developerWorks. In 2011, Tony celebrated his 25-year anniversary with IBM Storage on the same day as IBM's Centennial. He is author of the Inside System Storage series of books. This blog is for the open exchange of ideas relating to storage and storage networking hardware, software and services. You can also follow him on Twitter @az990tony.
(Short URL for this blog: ibm.co/Pearson)

Since the [IBM System Storage Technical University 2011] runs concurrently with the System x Technical University, attendees are allowed to mix-and-match. I attended several presentations regarding server virtualization and hypervisors.

Hypervisor Architecture

Matt Archibald is an IT Management Consultant in IBM's Systems Agenda Delivery team. He started with a history of hypervisors, from IBM's early CP/CMS in 1967, through the latest VMware vSphere 5, just announced.

He explained that there are three types of Hypervisor architectures today:

Type 1 - often referred to as "Bare Metal" runs directly on the server host hardware, and allows different operating system virtual machines to run as guests. IBM's System z [PR/SM] and [PowerVM] as well as the popular VMware ESXi are examples of this type.

Type 2 - often referred to as "Hosted" runs above an existing operating system, and allows different operating system virtual machines to run as guests. The popular [Oracle/Sun VirtualBox] is an example of this type.

OS Containers - runs above an existing operating system base, and allows multiple "guests" that all run the same operating system as the base. This affords some isolation between applications. [Parallels Virtuozzo Containers] is an example of this type.

The dominant architecture is Type 1. For x86, IBM is the number one reseller of VMware. VMware recently announced [vSphere 5], which changes its licensing model from CPU-based to memory-based. For example, a virtual machine with 32 virtual CPUs and 1TB of virtual RAM (vRAM) would cost over $73,000 per year to license the VMware "Enterprise Plus" software. The only plus-side to this new licensing is that the "memory" entitlement transfers during Disaster Recovery to the remote location.
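To illustrate how the memory-based model reaches a figure like that, here is a back-of-envelope sketch in Python. The $3,495 per-license price and 48GB vRAM entitlement per "Enterprise Plus" license are my assumptions based on launch-day coverage, not official VMware pricing:

```python
import math

# Assumed launch-day figures for vSphere 5 "Enterprise Plus" (illustrative only):
ENTERPRISE_PLUS_PRICE = 3495   # USD per CPU license (assumed list price)
VRAM_ENTITLEMENT_GB = 48       # GB of vRAM pooled per license (assumed)

def vsphere5_license_cost(vram_gb):
    """Cost to license enough vRAM entitlement to cover a given vRAM pool."""
    licenses_needed = math.ceil(vram_gb / VRAM_ENTITLEMENT_GB)
    return licenses_needed * ENTERPRISE_PLUS_PRICE

# A single virtual machine configured with 1 TB (1000 GB) of vRAM:
print(vsphere5_license_cost(1000))  # 21 licenses at $3,495 comes to $73,395
```

Under these assumptions, one maxed-out VM drags in 21 licenses, which is how the bill climbs past $73,000.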

"Xen is dead" was the way Matt introduced the section discussing Hybrid Type-1 hypervisors like Xen and Hyper-V. These run bare-metal, but require networking and storage I/O to be processed by a single bottleneck partition referred to as "Dom 0". As such, this hybrid approach does not scale well on larger multi-socket host servers. So, his Xen-is-dead message was referring to all Hybrid-based Hypervisors including Hyper-V, not just those based on Xen itself.

The new up-and-comer is "Linux KVM". Last year, in my blog post about [System x KVM solutions], I mentioned the confusion over the KVM acronym being used with two different meanings. Many people use KVM to refer to Keyboard-Video-Mouse switches that allow access to multiple machines. IBM has renamed these switches to Local Console Managers (LCM) and Global Console Managers (GCM). This year, the System x team has adopted the use of "Linux KVM" to refer to the second meaning, the [Kernel-based Virtual Machine] hypervisor.

Linux KVM is not a product, but an open-source project. As such, it is built into every Linux kernel. Red Hat has created two specific deliverables under the name Red Hat Enterprise Virtualization (RHEV):

RHEV-H, a tiny ESXi-like bare-metal hypervisor that fits in 78MB, making it small enough to run from a USB stick, CD-ROM or memory chip.

RHEV-M, the management server used to administer the RHEV-H hosts and their guest images.

Personally, I run RHEL 6.1 with KVM on my IBM laptop as my primary operating system, with a Windows XP guest image to run a few Windows-specific applications.

A complaint about the current RHEV 2.2 release from Linux fans is that RHEV-M requires a Windows server, and uses Windows PowerShell for scripting. The next release of RHEV is likely to provide a Linux-based option for the management server.

Of the various hypervisors evaluated, KVM appears to be poised to offer the best scalability for multi-socket host machines. The next release is expected to support up to 4096 threads, 64TB of RAM, and over 2000 virtual machines. Compare that to VMware vSphere 5, which supports only 160 threads, 2TB of RAM and up to 512 virtual machines.

Linux KVM Overview

Matt also presented a session focused on Linux KVM. While IBM is the leading reseller of VMware for the x86 server platform, it has chosen Linux KVM to run all of its internal x86 Cloud Computing facilities, as it can offer 40 to 80 percent savings, based on Total Cost of Ownership (TCO).

Linux KVM can run unmodified Windows and Linux guest operating systems as guest images with less than 5 percent overhead. Since KVM is built into the Linux kernel, any certification testing automatically benefits KVM as well. KVM takes advantage of modern CPU extensions like Intel's VT and AMD's AMD-V.

For high availability, in the event that a host fails, KVM can restart the guest images on other KVM hosts. RHEV offers "prioritized restart order", which allows mission-critical images to be started before less important ones.

RHEV also provides "Virtual Desktop Infrastructure", known as VDI. This allows a lightweight client with a browser to access an OS image running on a KVM host. Matt was able to demonstrate this with Firefox browser running on his Android-based Nexus One smartphone.

RHEV also adds features that make it ideal for cloud deployments, including hot-pluggable CPU, network and storage; Service Level Agreement monitoring for CPU, memory and I/O resources; storage live migrations to move the raw image files while guests are running; and a self-service user portal.

IBM has been doing server virtualization for decades. When I first started at IBM in 1986, I was doing z/OS development and testing on z/VM guest images. Later, around 1999, I started working with the "Linux on z" team, running multiple Linux images under PR/SM and z/VM. While the server virtualization solutions most people are familiar with (VMware, Hyper-V, Xen) have only been around for the last five years or so, IBM has a much deeper and more robust understanding and a long heritage. This helps set IBM apart from the competition when helping clients.

Clod Barrera is an IBM Distinguished Engineer and Chief Technical Strategist for IBM System Storage. He predicts that by 2015, 10 percent of the servers and storage purchases, as well as 25 percent of the network gear purchases, will be related to Cloud deployments. Cloud Storage is expected to grow at a compound annual growth rate (CAGR) of 32 percent through 2015, compared to only 3.8 percent growth for non-Cloud storage.
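Those compound rates diverge quickly. A quick sketch of what 32 percent CAGR implies over the five years from 2010 to 2015, versus 3.8 percent for non-Cloud storage:

```python
def growth_multiplier(cagr, years):
    """Total growth factor after compounding at `cagr` for `years` years."""
    return (1 + cagr) ** years

cloud = growth_multiplier(0.32, 5)       # roughly 4.0x over five years
non_cloud = growth_multiplier(0.038, 5)  # roughly 1.2x over five years
print(f"Cloud: {cloud:.2f}x, non-Cloud: {non_cloud:.2f}x")
```

In other words, Cloud storage roughly quadruples over the period while non-Cloud storage grows by about a fifth.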

Cloud Computing is allowing companies to rethink their IT infrastructure, and reinvent their business. Clod presented an interesting chart on the "Taxonomy" of storage in Cloud environments. On the left he had examples of Storage that was part of a Cloud Compute application. On the right he had storage that was accessed directly through protocols or APIs. Under each he had several examples for transactional data, stream data, backups and archives.

Clod feels the only difference between Private and Public clouds is a matter of ownership. Private clouds are owned by the company that uses them and accessed via its private Intranet. Public clouds are owned by Cloud Service providers and are accessed over the public Internet. Clod presented IBM's strategy to deliver Cloud at five levels:

Managed Private Cloud: on-site equipment, behind company firewall, managed by IBM or other Cloud Service provider

Hosted Private Cloud: dedicated, off-premises equipment, located and managed by IBM or other Cloud Service Provider, and access through VPN

Shared Cloud Services: shared, off-premises equipment, located at IBM or other Cloud Service Provider, managed by IBM or Cloud Service provider, and access through VPN. The facility is intended for enterprises only, on a contractual basis, and will be auditable for compliance to government regulations, etc.

Public Cloud: shared, off-premises equipment, located and managed by IBM or other Cloud Service provider, targeted to offer cloud compute and storage resources, with standardized platforms of operating systems and middleware, for individuals, small and medium sized businesses.

As with storage in traditional data center deployments, storage in clouds will be tiered, with Tier 0 being the fastest tier, to Tier 4 for "deep and cheap" archive storage. IBM SONAS is an example of Cloud-ready storage that can help make these tiers accessible through standard Ethernet protocols. Cloud Service providers will use metering and Service Level Agreements (SLAs) to offer different rates for different tiers of storage in the cloud.
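As a sketch of how tier-based metering might translate into a monthly bill, here is a toy calculation; the per-GB rates below are entirely hypothetical, and in practice would be set by each Cloud Service Provider's metering and SLAs:

```python
# Hypothetical $/GB/month rates for five storage tiers (illustrative only):
RATES_PER_GB_MONTH = {
    0: 1.00,   # Tier 0: fastest, e.g. solid-state
    1: 0.50,
    2: 0.25,
    3: 0.10,
    4: 0.02,   # Tier 4: "deep and cheap" archive
}

def monthly_bill(usage_gb_by_tier):
    """Sum metered charges across tiers for one month of usage."""
    return sum(RATES_PER_GB_MONTH[tier] * gb
               for tier, gb in usage_gb_by_tier.items())

# 100 GB hot, 1 TB warm, 10 TB archived:
print(monthly_bill({0: 100, 2: 1000, 4: 10000}))  # 100 + 250 + 200 = 550.0
```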

Clod wrapped up his session explaining IBM's Cloud Computing Reference Architecture (CCRA). This is an all-encompassing diagram that shows how all of IBM's hardware, software and services fit into Cloud deployments.

Not too many people have heard of IBM's Smart Archive strategy and the storage products IBM offers to meet compliance regulations. This session covered the following:

The differences between backup and archive, including a few of my own personal horror stories helping companies who had foolishly thought that keeping backup copies for years would adequately serve as their archive strategy

Why putting a [space heater] on your data center floor is a bad idea, driving up power and cooling costs for little business value to the enterprise once the unit is full of rarely accessed read-only data.

An overview of the [IBM Information Archive], an integrated stack of servers, storage and software that replaces previous offerings such as the IBM System Storage DR550 and the IBM Grid Medical Archive Solution (GMAS).

The marketing bundle known as the [Information Archive for Email, Files and eDiscovery] that combines the Information Archive storage appliance with Content Collectors for email and file systems, as well as eDiscovery tools, and implementation services for a solution that can support a small or medium size business, up to 1400 employees.

IBM Tivoli Storage Productivity Center v4.2 Overview and Update

Many of the concerns raised when I [presented v4.1 at this conference last year] were addressed this year in v4.2, including full performance statistics for the IBM XIV storage system, storage resource agent support for HP-UX and Solaris, and a variety of other issues.

I presented this overview in stages:

"Productivity Center Basic Edition", which comes pre-installed on the IBM System Storage Productivity Center hardware console and provides discovery of devices, basic configuration, and a clever topology viewer of what is connected to what.

"Productivity Center for Data" which supports reporting on files, file systems and databases on DAS, SAN and NAS attached storage from an Operating System viewpoint.

"Productivity Center Standard Edition" which includes all of the above except "Replication", and adds performance monitoring of SAN Fabric gear, and some very clever analytics to improve performance and problem determination.

One of the questions that came up was "How big does my company have to be to consider using Productivity Center?" which I answered as follows:

"If you are a small company, and the "IT Person" has responsibilities outside of IT, and managing the few pieces of kit is just part of his job, then consider just using the web-based GUI through a Firefox or similar browser. If you are a medium sized company with dedicated IT personnel, but mostly run by system admins or database admins that manage storage and networks on the side, you might want to consider the "Storage Control" plug-in for IBM Systems Director. But if you are a larger shop, and there are employees with the title "Storage Administrator" and/or "SAN Administrator", then Tivoli Storage Productivity Center is for you."

Tivoli Storage Productivity Center has matured into a fine piece of software that truly can help medium and large sized data centers manage their storage and storage networking infrastructure.

I like speaking the first day of these events. Often people come in just to hear the keynote speakers, and stay the afternoon to hear a few break-out sessions before they leave Tuesday or Wednesday for other meetings.

Jim is an IBM Fellow for IBM Systems and Technology Group. There are only 73 IBM Fellows currently working for IBM, and this is the highest honor IBM can bestow on an employee. He has been working with IBM since 1968.

He is tasked with predicting the future of IT and helping drive strategic direction for IBM. Cost pressures, requirements for growth, accelerating innovation and changing business needs help influence this direction.

IBM's approach is to integrate four different "IT building blocks":

Scale-up Systems, like the IBM System Storage DS8000 and TS3500 Tape Library

Resource Pools, such as IBM Storage Pools formed from managed disks by IBM SAN Volume Controller (SVC)

Integrated stacks and appliances, combining software and hardware, from the Storwize V7000 to full rack systems like the IBM Smart Analytics Server or CloudBurst.

Mobility of workloads and resources requires unified end-to-end service management. Fortunately, IBM is the #1 leader in IT Service Management solutions.

Jim addressed three myths:

Myth 1: IT Infrastructures will be homogenous.

Jim feels that innovations are happening too rapidly for this to ever happen, and that it is not a desirable end-goal anyway. Instead, a focus on finding the right balance of the IT building blocks might be a better approach.

Myth 2: All of your problems can be solved by replacing everything with product X.

Jim feels that the days of "rip-and-replace" are fading away. As IBM Executive Steve Mills said, "It isn't about the next new thing, but how well new things integrate with established applications and processes."

Myth 3: All IT will move to the Cloud model.

Jim feels a substantial portion of IT will move to the Cloud, but not all of it. There will always be exceptions where the old traditional ways of doing things might be appropriate. Clouds are just one of the many building blocks to choose from.

Jim's focus lately has been finding new ways to take advantage of virtualization concepts. Server, storage and network virtualization are helping address these challenges through four key methods:

Sharing - virtualization that allows a single resource to be used by multiple users. For example, hypervisors allow several guest VM operating systems to share common hardware on a single physical server.

Aggregation - virtualization that allows multiple resources to be managed as a single pool. For example, SAN Volume Controller can virtualize the storage of multiple disk arrays and create a single storage pool.

Emulation - virtualization that allows one set of resources to look and feel like a different set of resources. Some hypervisors can emulate different kinds of CPU processors, for example.

Insulation - virtualization that hides the complexity from the end-user application or other higher levels of infrastructure, making it easier to make changes of the underlying managed resources. For example, both SONAS and SAN Volume Controller allow disk capacity to be removed and replaced without disruption to the application.

In today's economy, IT transformation costs must be low enough to yield near-term benefits. The long-term benefits are real, but near-term benefits are needed for projects to get started.

What sets IBM ahead of the pack? Here was Jim's list:

100 Years of Innovation, including being the U.S. Patent leader for the last 18 years in a row

IBM's huge investment in IBM Research, with labs all over the globe

Leadership products in a broad portfolio

Workload-optimized designs with integration from middleware all the way down to underlying hardware

Comprehensive management software for IBM and non-IBM equipment

Clod Barrera

Clod is an IBM Distinguished Engineer and Chief Technical Strategist for IBM System Storage. His presentation focused on trends and directions in the IT storage industry. Clod started with five workload categories:

Transaction/Database

Data Analytics

Business Logic/Apps

Web/Collaboration

Archive/Retention

To address these unique workload categories, IBM will offer workload-optimized systems. The four drivers on the design for these are performance, efficiency, scalability, and integration. For example, to address performance, companies can adopt Solid-State Drives (SSD). Unfortunately, these are 20 times more expensive dollar-per-GB than spinning disk, and the complexity involved in deciding what data to place on SSD was daunting. IBM solved this with an elegant solution called IBM System Storage Easy Tier, which provides automated data tiering for IBM DS8000, SAN Volume Controller (SVC) and Storwize V7000.

For scalability, IBM has adopted Scale-Out architectures, as seen in the XIV, SVC, and SONAS. SONAS is based on the highly scalable IBM General Parallel File System (GPFS). File systems are like wine: they get better with age. GPFS was introduced 15 years ago, and is more mature than many of the other "scalable file systems" from our competition.

Areal Density advancements on Hard Disk Drives (HDD) are slowing down. During the 1990s, the IT industry enjoyed 60 to 100 percent annual improvement in areal density (bits per square inch). In the 2000s, this dropped to 25 to 40 percent, as engineers are starting to hit various physical limitations.
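Compounding makes that slowdown dramatic. Over a full decade, those annual improvement rates work out to:

```python
def decade_multiplier(annual_rate):
    """How much areal density multiplies over 10 years at a given annual rate."""
    return (1 + annual_rate) ** 10

# 1990s: 60 to 100 percent annual improvement in bits per square inch
print(decade_multiplier(0.60))  # about 110x per decade
print(decade_multiplier(1.00))  # 1024x per decade
# 2000s: 25 to 40 percent annual improvement
print(decade_multiplier(0.25))  # about 9x per decade
print(decade_multiplier(0.40))  # about 29x per decade
```

So the same decade that once delivered a hundred-fold (or thousand-fold) density improvement now delivers well under thirty-fold.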

Storage Efficiency features like compression have been around for a while, but are being deployed in new ways. For example, IBM invented the WAN compression needed for Mainframe HASP, and WAN compression became an industry standard. Then IBM introduced compression on tape, and now tape compression is an industry standard. ProtecTIER and Information Archive are able to combine compression with data deduplication to store backups and archive copies. Lastly, IBM now offers compression on primary data, through the IBM Real-Time Compression appliance.
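A rough sketch of how compression and deduplication multiply together in products that combine them; the 10:1 dedup and 2:1 compression ratios here are illustrative only, since actual results vary widely by workload:

```python
def effective_reduction(dedup_ratio, compression_ratio):
    """Combined reduction when compression is applied to deduplicated data."""
    return dedup_ratio * compression_ratio

# Hypothetical: 10:1 dedup on backup data, then 2:1 compression of the
# remaining unique blocks, stores 1 TB of backups in about 50 GB:
stored_gb = 1000 / effective_reduction(10, 2)
print(stored_gb)  # 50.0
```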

For the rest of this decade, IBM predicts that tape will continue to enjoy (at least) 10 times lower dollar-per-GB than the least expensive spinning disk. Disk and Tape share common technologies, so all of the R&D investment for these products apply to both types of storage media.

For integration, IBM is leading the effort to help companies converge their SAN and LAN networks. By 2015, Clod predicts that there will be more FCoE purchased than FCP. IBM is also driving integration between hypervisors and storage virtualization. For example, IBM already supports the VMware vStorage APIs for Array Integration (VAAI) in various storage products, including XIV, SVC and Storwize V7000.

Lastly, Clod could not finish a presentation without mentioning Cloud Computing. Cloud storage is expected to grow 32 percent CAGR from year 2010 to 2015. Roughly 10 percent of all servers and storage will be in some type of cloud by 2015.

As is often the case, I am torn between getting short posts out in a timely manner versus spending some more time to improve the length and quality of information, but posted much later. I will spread out the blog posts in consumable amounts throughout the next week or two, to achieve this balance.

Maria Boonie is the IBM Director for IBM Worldwide Training and Technical Conferences. She indicated that there were 1500 attendees this week crossing both the System Storage and System x conferences at this hotel. There are 35 vendors that have sponsored this event, and they will be at the "Solutions Center" being held Monday through Wednesday this week.

She took this opportunity to plug IBM's latest education offerings, including Guaranteed-to-Run implementation classes, and Instructor-Led Online (ILO) technical classes.

Brian Truskowski

Brian Truskowski is the IBM General Manager for System Storage and Networking. I used to report directly to him in a previous role, and a few years ago he was the IBM CIO who helped with IBM's internal IT transformation.

Brian indicates that the previous approach to growth was to "Just Buy More", but this has some unintended consequences. He argued that companies need to adopt one or more of the following approaches to growth:

Store more with what is already on the floor - improve storage utilization with technologies like storage virtualization and thin provisioning

Move data to the right place - implement automated tiering, such as "Flash & Stash" between Solid-state drives and spinning disk, and/or Information Lifecycle Management (ILM) between disk and tape. Studies at some clients have found over 70 percent of data has not been touched in the last 90 days
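A minimal sketch of what such an age-based placement policy might look like; the thresholds below are hypothetical, not taken from any IBM product:

```python
from datetime import date, timedelta

def place_tier(last_accessed, today):
    """Pick a storage tier based on days since the data was last touched."""
    age_days = (today - last_accessed).days
    if age_days <= 7:
        return "SSD"            # hot data stays on solid-state ("Flash")
    if age_days <= 90:
        return "spinning disk"  # warm data ("Stash")
    return "tape"               # cold data migrates out via ILM

today = date(2011, 7, 20)
print(place_tier(today - timedelta(days=3), today))    # SSD
print(place_tier(today - timedelta(days=120), today))  # tape
```

Under this policy, the 70 percent of data untouched for 90 days would drop off spinning disk entirely.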

This time of dramatic change is the result of a "perfect storm" of influences, including the rising costs and risks associated with losing data, the increased need to index and search data, the desire for "Business Analytics", and the expectation for 100 percent up-time. This is driving IBM to offer hyper-efficient backup, Continuous Data Availability, and Smart Archive solutions.

The case study of SPRINT is a good example. SPRINT is a Telecommunications provider for cell phone users. They were challenged with 35 percent utilization, 165 storage arrays from six different vendors, and an expected 100 percent increase in their IT maintenance costs. After implementing IBM SAN Volume Controller (SVC) and Tivoli Storage Productivity Center (TPC) to manage 2.9 PB of data, SPRINT increased their utilization to 82 percent, reduced down to 70 storage arrays from only three vendors, and reduced their maintenance costs by 57 percent. Today, SPRINT manages over 5 PB of data with SVC and TPC, and has reduced their power and cooling by 3.5 million kWh, representing $320,000 USD in savings.
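A quick sanity check on those energy figures; the implied electricity rate is my own derivation, not a number from the case study:

```python
# 3.5 million kWh of power and cooling savings worth $320,000 USD:
kwh_saved = 3_500_000
dollars_saved = 320_000
implied_rate = dollars_saved / kwh_saved
print(f"${implied_rate:.3f} per kWh")  # about 9 cents per kWh
```

That works out to roughly nine cents per kilowatt-hour, which is a plausible commercial electricity rate, so the two savings figures hang together.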

Roland Hagan is the IBM Vice President for the System x server platform. He talked about the "IT Conundrum" that represents a vicious cycle of "IT Sprawl", "Untrusted Data" and "Inflexible IT" that seem to feed each other. IBM is trying to change behavior, from thinking and dealing with physical boxes representing servers, storage and network gear, to a more holistic view focused on workloads, shared resource pools, independent scaling, and automated management.

IBM is leading the server marketplace, in part because of clever things IBM is doing, especially in developing the eX5 chipset that surrounds x86 commodity processors, and in part because of actions or decisions the competition has taken:

It doesn't break IBM's heart that Oracle decided to drop software support of its database on Itanium, a move aimed squarely at HP. Oracle runs better on IBM servers than on Oracle/Sun or HP servers today, so it does not impact us, other than that IBM has seen a lot of people leave HP and switch over to IBM.

HP has taken on a new CEO and reduced its R&D budget, causing it to be late to market with some of its offerings.

Dell continues to focus on small and medium sized customers, and has not really broken into the "Enterprise".

Newcomer Cisco has some great technology that only seems to be adoptable in "Green Field" situations, as it does not integrate well with existing data center infrastructures.

The combination of the eX5 chipset architecture, MAX5 memory expansion capabilities and virtual Network Interface Cards (NICs) provides for a very VM-aware platform. For those who are not ready to fully adopt an integrated stack like IBM CloudBurst, IBM offers the Tivoli Service Automation software on its own, and a new [IBM BladeCenter Foundation for Cloud] as stepping stones to get there.

There are certainly more attendees here than last year, which reflects both the change in location (Orlando, Florida rather than Washington DC) and the economic recovery. I'm looking forward to an excellent week!