Orchestration is one of those ambiguous concepts in cloud computing, with varying definitions of when cloud capabilities truly advance into the orchestration realm. Frequently it's reduced to a simple equation: automation = orchestration.

But automation is just the starting point for cloud. As organizations move beyond managing a virtualized environment, they need to aggregate capabilities for a private cloud to work effectively. The automation of storage, network, performance and provisioning is, in most cases, handled by various solutions added on over time as needs increase. Even for organizations that take a transformational approach -- jumping straight to an advanced cloud to optimize their data centers -- managing heterogeneous environments with disparate systems is a challenge that automation alone cannot address. As the saying goes, "If you automate a mess, you get an automated mess."

The need to orchestrate becomes clear when the various aspects of cloud management are brought together. The value to the organization at this stage is simplifying the management of automation -- otherwise a balancing act across multiple hypervisors, resource usage, availability, scalability, performance and more -- based on what the business needs from the cloud, with the ultimate goal of delivering services faster.

With orchestration, the pieces are woven together and can be managed more effectively to ensure smooth and rapid service delivery -- and delivered in a user-friendly catalog of services easily accessible through a single pane of glass. In essence, cloud orchestration = automation + integration + best practices.

Without cloud orchestration, it's difficult to realize the full benefits of cloud computing. Stitching together best practices and automated tasks and processes becomes essential to optimize a wide spectrum of workload types.

In addition to rapid service delivery, orchestration can yield significant cost savings in labor and resources by eliminating manual intervention in the management of varied IT resources and services.

Some key traits of cloud orchestration include:
• Integration of cloud capabilities across heterogeneous environments and infrastructures to simplify, automate and optimize service deployment
• Self-service portal for selection of cloud services, including storage and networking, from a predefined menu of offerings
• Reduced need for intervention to allow lower ratio of administrators to physical and virtual servers
• Automated high-scale provisioning and de-provisioning of resources, with policy-based tools that manage virtual machine sprawl by reclaiming resources automatically (see the sketch after this list)
• Ability to integrate workflows and approval chains across technology silos to improve collaboration and reduce delays
• Real-time monitoring of physical and virtual cloud resources, as well as usage and accounting chargeback capabilities to track and optimize system usage
• Prepackaged automation templates and workflows for most common resource types to ease adoption of best practices and minimize transition time
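To make the policy-based reclamation trait above concrete, here is a minimal, hypothetical sketch; the idle limit, field names and fleet data are invented for illustration and are not tied to any specific IBM product API:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class VirtualMachine:
    name: str
    owner: str
    last_used: datetime  # last recorded activity for this VM

# Hypothetical policy: reclaim VMs idle for more than 30 days.
IDLE_LIMIT = timedelta(days=30)

def vms_to_reclaim(vms, now=None):
    """Return the VMs whose idle time exceeds the policy limit."""
    now = now or datetime.utcnow()
    return [vm for vm in vms if now - vm.last_used > IDLE_LIMIT]

if __name__ == "__main__":
    fleet = [
        VirtualMachine("web-01", "alice", datetime.utcnow() - timedelta(days=2)),
        VirtualMachine("test-99", "bob", datetime.utcnow() - timedelta(days=45)),
    ]
    for vm in vms_to_reclaim(fleet):
        print(f"Reclaim candidate: {vm.name} (owner: {vm.owner})")
```

In a real orchestrator the reclaim step would notify the owner and trigger an automated de-provisioning workflow rather than just printing.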

In short, many of the capabilities that we associate with cloud computing are really elements of orchestration. In an orchestrated environment, organizations gain tools to manage their cloud workloads through a single interface, providing greater efficiency, control and scalability. As cloud environments become more complex and organizations seek greater benefit from their computing resources, the need for sophisticated management solutions that can orchestrate across the entire environment will become ever clearer.

As part of the transparent development initiative, IBM SmartCloud Provisioning (formerly known as IBM Service Agility Accelerator for Cloud) is launching a series of daily demos, starting November 7th. Each session will take about one hour.

In this way you can watch, almost in real time, what is happening in IBM SmartCloud Provisioning development and learn about new and enhanced capabilities. If you are interested in joining the sessions, here is the schedule in Central European Time (CET):

The Tivoli Usage and Accounting Manager (TUAM) development team is pleased to announce the release of the IBM® Tivoli® Service Automation Manager (TSAM) - Extension for Usage and Accounting v1.0. This TSAM extension delivers cloud cost management capability by enhancing the integration, reporting and services between TUAM and TSAM. The extension allows cloud users to view historical invoice reports that show the charges associated with each project.

The Usage and Accounting v1.0 extension provides the following features:

Easier Cloud Usage Report Access - Enables Cloud users to access and view historical Usage and Accounting Manager Cognos reports directly from TSAM. Single sign-on is configured between the two systems to allow for easier report access.

Role-based Report Security - Security access can now be configured to ensure that users who belong to the TSAM Cloud security groups can only access the TUAM Cognos reports they are assigned to. For example, users in the Cloud Customer and Cloud Team Administrator user groups in TSAM can now be assigned access to specific TUAM Cognos reports.

Account Code Report Security - Account code security is used for customer and team reporting data segregation based on cloud roles in TSAM. This is achieved by data synchronization between TSAM and TUAM which involves aligning TSAM entities such as customers, teams, security groups and users with TUAM entities such as clients, users and user groups. After the synchronization process has completed, account code security is applied to the reports that TSAM users access.
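To illustrate the kind of entity alignment the synchronization performs, here is a minimal, hypothetical sketch; the mapping is inferred from the entity names listed above and is not the extension's actual code:

```python
# Hypothetical illustration only: the real extension performs this
# synchronization internally between TSAM and TUAM.
TSAM_TO_TUAM = {
    "customer": "client",
    "team": "user group",
    "security group": "user group",
    "user": "user",
}

def sync_entities(tsam_entities):
    """Translate TSAM entity records into the TUAM entity types
    they are aligned with during synchronization."""
    return [
        {"tuam_type": TSAM_TO_TUAM[e["type"]], "name": e["name"]}
        for e in tsam_entities
        if e["type"] in TSAM_TO_TUAM
    ]

print(sync_entities([{"type": "customer", "name": "ACME"},
                     {"type": "team", "name": "ACME-DBAs"}]))
```

Once the entities are aligned, account code security can be applied consistently to the reports each TSAM user accesses.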

The following table shows the evolution of the TSAM/TUAM integration.


The diagram below shows how the Usage and Accounting v1.0 extension facilitates the integration between TSAM and TUAM.

For more information about the Usage and Accounting v1.0 extension, log on to the Information Center.

The extension is available free of charge and is part of the TUAM 7.3.0.1 FixPack, which is available on Fix Central. Note: A Rates Preview and a Charges Preview of costs are available now on the ISM Library as fully supported features.

IBM made a significant commitment to OpenStack by joining the OpenStack Foundation as a Platinum Member. The IBM SmartCloud Orchestrator v2.2 product has adopted OpenStack to provide enterprises the functionality needed to effectively create and manage their cloud implementations.

The IBM Cloud Labs team is innovating in the area of cloud analytics. A new feature named Information Hub for SmartCloud Orchestrator has been created, adding exciting new reporting dashboards. The feature will be available as an add-on in the ISM Cloud MarketPlace.

The Information Hub dashboard has been designed to put information about the cloud infrastructure at the fingertips of cloud users, administrators, planners and decision makers. It provides usage trend graphs, predicts when a critical resource will run out, and aggregates information across multi-OpenStack environments. The information is also available on mobile devices.

These capabilities improve productivity for cloud users and administrators, help cloud capacity planners see the pace of cloud adoption in the enterprise and plan ahead, and let decision makers take the information with them to make informed business decisions about the cloud infrastructure.

IBM SmartCloud Orchestrator, the first new private cloud offering based on OpenStack and other cloud standards, is now available. Users are looking for cloud solutions that increase agility, deliver cost savings and offer a competitive advantage. IBM SmartCloud Orchestrator meets those needs:

Patterns of expertise learned from decades of successful client and partner engagements - SmartCloud Orchestrator captures best practices for complex tasks in patterns that are abstracted, not hardcoded. KPIs, measurements and policies built into the patterns allow for semi-automated or automated vertical scaling up and down. Deploy applications rapidly with repeatable patterns across private and public clouds: SmartCloud Orchestrator enables third-party software deployments and custom pattern creation, so you can "build once" and deploy across private and public clouds.

Robust, automated, high-scale cloud provisioning - requested VMs will be up and running in under a minute using standard hardware

SmartCloud Orchestrator includes OpenStack!

End-to-end orchestration that bridges domains: cloud, infrastructure, back-end integration, processes, service processes and more. It is dynamic at runtime, ensuring you always have the latest mix of human and automated interaction.

Two new white papers are available on the IBM Integrated Service Management Library (ISML) that explain how to use Tivoli Storage Manager to back up different areas within IBM SmartCloud Provisioning.

The first white paper explains how to use the Tivoli Storage Manager Backup-Archive client to back up and restore the boot volume of an IBM SmartCloud Provisioning persistent virtual machine, make periodic backups of a normal volume, and select and restore a particular backup.

The second white paper explains how to use the Tivoli Storage Manager Backup-Archive client to back up and restore the following components of the IBM SmartCloud Provisioning infrastructure: the Preboot Execution Environment (PXE) server, the web console configuration, and the HBase data store.

I wanted to let everyone know that a Trial Virtual Machine is available for the SmartCloud Monitoring version 7.2 FP1 product. The Trial provides 90 days to evaluate the software, monitor your virtualized environment, and use the Capacity Planning tools for VMware and PowerVM. These tools can help you optimize your virtualized environment and save money.

Within a few hours you can have the Virtual Machine up and running and monitoring your Virtualized environment.

This is a great tool if you are working with a customer on a proof of concept. Or, if you are a customer, it is a really quick and easy way to evaluate the software.

The Trial includes the SmartCloud Monitoring product plus a little bit of extra content. It includes monitoring for:

VMware

PowerVM (including OS, VIOS, CEC, and HMC)

Hyper-V

Citrix XenApp, XenDesktop, XenServer

KVM

Cisco UCS

Log File Monitoring

DB2

Agent Based and Agent-less Operating System monitoring

Network Devices

NetApp Storage

Integration with Tivoli Storage Productivity Center

Integration with IBM Systems Director

The trial also includes Predictive Analytics, Capacity Planning and Optimization for VMware and PowerVM.

We know that cloud computing offers a myriad of benefits like rapid service delivery and lower operating costs. But it can also lead to challenges in data governance, access control, activity monitoring and visibility of dynamic resources—in essence, all aspects of IT security.

The IT organization must be able both to deliver services more quickly to meet the demands of the business and to provide high levels of security and compliance. In the past, service delivery was typically the bottleneck; now, with automated cloud and self-service delivery models, the teams responsible for change management and security can quickly become the bottleneck due to manual processes and siloed tools.

For example, organizations need the ability to patch all of their systems, both physical and virtual, whether distributed or part of a cloud. Operations teams need better insight into and control of deployed virtual systems, including OS patch levels, installed middleware applications and related security configurations. And offline and suspended VMs that haven't been patched in weeks or months can introduce serious security exposures.

A holistic approach is needed that addresses rapid provisioning of services and automation of key security and compliance requirements. Together these capabilities can keep you in control of rapidly changing cloud environments. First let’s look at the capabilities needed in a cloud provisioning solution.

Second, a unified endpoint management approach is required to provide visibility and control of your systems, regardless of context, location or connectivity, and needs to deliver:
• Heterogeneous platform support with seamless patch management for multiple operating systems, including Microsoft Windows, Unix, Linux and Mac OS, as well as hypervisor platforms
• Automatic assessment and "single click" remediation, which shortens time to compliance by automatically identifying necessary patches and enabling users to target and remediate endpoints quickly
• Enterprise-class scalability and security, including fine-grained authorization and access control capabilities

With December's release of IBM SmartCloud Monitoring, Tivoli's venerable IBM Tivoli Monitoring product family, proven in data centers at the world's largest corporations, begins to adopt a "Cloud" posture. Sure, "Cloud" is a term bereft of a clear operational definition that we can apply at any given moment, and customers, analysts and vendors tend to bandy it about pretty freely these days. However, if we don't get too hung up on what Cloud is or isn't, we can probably agree that it represents a migration from our traditional server-delivered infrastructure to one composed of pooled computing resources shared by virtual workloads. Whether or not our customers are calling their virtualized environments "private clouds" today, and whether or not they have a "cloud budget" for such initiatives, the fact that they're moving along the cloud maturity continuum at some pace seems inescapable, given IDC's assertion that we crossed the magical "50%" boundary last year, when half of all corporate workloads were running on virtual machines instead of physical ones.

If we're beginning to think in terms of clouds of pooled computing resources, it makes sense that we begin to deliver management solutions in the same way, right? If the server administrators, storage administrators and network administrators now report to a cloud administrator, we should begin to package solutions for those cloud administrators, combining multiple pieces of management technology into a single part number that customers can purchase and deploy. That's exactly what we've done with SmartCloud Monitoring. The discrete monitoring agents at the heart of IBM Tivoli Monitoring (OS monitors, application monitors, storage monitors and so on) are as important as they ever were. Even though we're pooling those resources across virtual machines, we still have to monitor things like processes, CPU activity and I/O throughput. We just need to add a layer on top of all that granular detail, so the cloud administrator can see, at a glance, what's healthy or unhealthy about the cloud environment before drilling down into the nuts and bolts.

SmartCloud Monitoring combines the VMware virtualization management features in ITM for Virtual Environments with virtual machine instance monitoring from ITM's operating system agents to monitor a cloud infrastructure and the workloads running on it.

Our roadmap looks like an analyst's cloud maturity ladder, adding features such as automated provisioning, usage and accounting integration, and more detailed network monitoring, so our solution will "mature" along with the market, and customers' needs. See if the challenges along this ladder look like things that you or your customer have faced on their cloud journey, or are grappling with now. It's important to note that Tivoli has solutions that can be applied to each step, and for each problem. What SmartCloud promises is a way to bring those solutions together into more consumable bundles, tightly integrated together, to make cloud management simple to purchase and simple to deploy.

Service Health for IBM SmartCloud Provisioning has officially GA'ed and is now available on IBM Integrated Service Management Library ( ISML ).

Service Health provides pre-built integrations between IBM SmartCloud Provisioning and IBM SmartCloud Monitoring using a custom agent, OS agents, and the ITM for Virtual Environments (ITMfVE) agents. A product-provided navigator offers a concise overview of the health of the IBM SmartCloud Provisioning infrastructure, enabling you to identify and react quickly to issues in your environment -- such as an unresponsive compute node, high disk usage on storage nodes, or key kernel services not responding -- and minimize their impact. It also provides visibility into the KVM and ESXi hypervisors.

IBM® Tivoli® Service Automation Manager (TSAM) has delivered yet another cloud extension, this one providing service offerings that automate the provisioning of network-attached storage (NAS) with an NFS export name. The file systems can then be mounted into virtual machines provisioned within TSAM Virtual Servers Projects. The extension introduces the concept of a Storage-only Project, which allows the entire life cycle of the file systems (create, expand, set access, and destroy) to be managed in a secure multi-tenant environment. It integrates with IBM N series and NetApp FAS series storage systems, as sketched in the picture below.

Once you download the installation package from the Integrated Service Management Library (http://www.ibm.com/software/ismlibrary?NavCode=1TW10TS0F) and install it on top of the TSAM 7.2.2 platform, your cloud administrator can easily configure the Extension for Network Attached Storage to provision NFS-mountable file systems. The extension provides a plug-in to the Cloud Storage Pool Administration TSAM application, where she can enter the hostname of the workstation running the NetApp OnCommand management software and the credentials to access it. The extension then automatically discovers all the storage resources (NetApp Datasets) from the underlying storage systems and makes them visible as TSAM Storage Pools. At that point the cloud administrator can regulate access to the storage resources in the usual TSAM way, by associating storage pools and quotas with customers[1], and the extension is configured. Now you can delegate to your customers the management of storage up to the assigned quota: the customer administrators can start requesting storage for their virtual servers by creating storage projects and adding, expanding, and deleting file systems. The entry point for this is the Tivoli Self Service Station – Storage Management folder (shown in the picture below).

The customer administrator just has to enter a prefix for the NFS export name, a TSAM Storage Pool from which to carve the storage, and the size of the file system. She can create many file systems with the same characteristics by increasing the value of the "Number" spin control, and she can make the file systems available to all the teams of the customer by checking the "Access to All Teams" box: by default the storage is visible only to the team of users that owns the storage project.

Note that once the storage project has been created, the file systems cannot yet be mounted into virtual servers, because no ACL has been set for them on the IBM N series boxes. To do so, the customer administrator creates TSAM Projects with Virtual Servers and associates file systems with the virtual machines belonging to the project: the extension automatically updates the access control list (ACL) of the NFS export name, adding the IP addresses of the virtual machines. When the user logs in, she can mount the file systems and use them (she receives the NFS export name in a notification e-mail).

In summary, the predefined functions that you get with the TSAM Extension for NAS storage are:

There are no predefined features to create and manage NetApp Datasets, nor vFilers to create customer silos. For example, what if you want to automate the creation of a vFiler and of a couple of storage pools -- gold and silver -- upon on-boarding of a new customer?

There are no predefined features to authorize the shared file systems to anything but a virtual server within a virtual servers project. What if you want to automatically attach a file system to a VMware cluster as a back-end data store for VM images upon creation in a storage project?

Well, the TSAM Extension for NAS storage provides low-level Tivoli Provisioning Manager (TPM) workflows and Tivoli Platform Automation engine (TPAe) runbooks that can be used to implement such automations in custom extensions that you can write based on the best practices described in the TSAM platform extensibility guide.

- michele

[1] This article focuses on a public cloud solution, where the service provider sells services to his customers. So, the cloud administrator is the administrator of the entire cloud platform, and the customer administrator is the administrator of the customer segment of the cloud.

With the proliferation of cloud computing, many businesses are starting to adopt a service provider model—either as a deliberate strategy to establish new revenue streams or, in some cases, inadvertently to support the growing needs of their organizations. This is especially true for companies with diverse needs, whether they’re tech companies with dev teams churning out new apps and services, or business owners driving requirements for SaaS services and cloud capabilities to enhance their data center operations.

In any event, the distinction between managed service providers (MSPs) or cloud service providers (CSPs) and companies growing in-house capabilities may not be as important as the common need to respond quickly and scale to support customer needs. The challenges facing all of these companies include facilitating the creation of new applications and services while maintaining quality of service, and the need for automation to reduce the human effort and errors associated with manual tasks—all with an eye to driving revenue and acquiring new customers.

And so, the challenge for service providers of any kind is to increase scalability, automation and uptime while constraining costs. Companies are increasingly solving the critical piece of this puzzle by embracing rapid, high-scale provisioning and key cloud management capabilities to allow them to grow as quickly as their customers’ needs. In particular, the benefits accrue in four key areas.

First, applications can be deployed rapidly across private and public cloud resources.

Third, operational costs can be lowered by leveraging existing hardware to support an array of virtual servers and diverse hypervisors.

And fourth, high-scale provisioning enables rapid response to changing business needs with near-instant deployment of hundreds of virtual machines.

While the full spectrum of virtualization-to-orchestration functionality helps service providers manage their environments, high-scale provisioning in particular offers a cost-effective way to treat capacity as a business commodity—a way for service providers to offer seemingly limitless capacity to their customers while lowering the relative cost of providing it.

In the case of Dutch Cloud, a CSP based in the Netherlands, a growing client base allowed the company to expand, but it was very conscious of the costs and issues related to scalability, performance and security. By adopting a lightweight, high-scale provisioning solution for core service delivery, Dutch Cloud added capacity easily and scaled up rapidly without interruption to customer service. The CSP also reduced its administrative workload by 70 percent by adopting automation best practices. Monthly revenue has tripled twice in the last six months without an increase in operational costs.

Other service providers such as SLTN, a systems integrator serving large and mid-sized businesses, have experienced similar cost savings by extending platform managed services to a cloud delivery model. By implementing a low-touch, highly scalable cloud as its core delivery platform across multiple compute and storage nodes, SLTN was able to deploy new services in seconds rather than hours. It was also able to utilize existing commodity skills without significant training, integrate the existing mixed environment and minimize operational administration and maintenance. The underlying IaaS cloud capabilities allowed SLTN to be more efficient and to provide the full spectrum of cloud services to their own customers in a pay-as-you-go model—with better service and at a lower price point.

The benefits that these companies experienced are evidence that high-scale provisioning and cloud management capabilities can dramatically increase service capacity. For service providers of all stripes—whether deliberate or not—these benefits are a critical part of the evolution of cloud services and offer a meaningful way to deliver more value to themselves and their users.

As businesses adopt cloud environments to control IT complexity, pool resources, and improve cost efficiencies, the TUAM development team has been engaged in evolving the usage and accounting capability in IBM Tivoli Usage and Accounting Manager (TUAM) beyond traditional Enterprise charge-back.

In such a shared cloud environment the ability to accurately assess which IT resources and services are being utilized, how much they are being utilized, and by whom is fundamental if service providers are to justify the cost of the IT resource and expense.

The latest release of IBM Tivoli Usage and Accounting Manager, Version 7.3, provides Cloud Cost Management for those businesses needing to understand the new and dynamic usage of shared IT resources in Cloud and Virtualized environments, and seeking to bill or charge business units for their share of resource use including compute, storage, networks, energy, and personnel.

Read more about the new TUAM Cloud Cost Management Extension v1.0 for Tivoli Service Automation Manager (TSAM) in our blog update.

IBM Tivoli Usage and Accounting Manager allows businesses to:

Link their Cloud IT expenditures to business value delivered

Accurately allocate cost across functions and departments/projects

Understand true IT costs resulting in better IT investment decisions and get more out of their current investments

Interactively report and, if desired, bill or charge departments and functions accurately for their use of IT resources

Additionally, the development team is working to supplement these core capabilities with new price tiering and invoice preview features for Cloud administrators and consumers. These features will be provided to TUAM users via the IBM Integrated Service Management Library from October 2011.

Please contact our usage and accounting architect John Buckley (john.buckley@ie.ibm.com) if you wish to understand or share your thoughts on the new Cloud use cases.

IBM Cloud Orchestrator 2.5 comes with a set of interesting new features.

First of all, support for OpenStack Kilo: this opens the door to a set of very interesting scenarios related to software-defined environments (think about the Neutron capabilities in Kilo). Moreover, you can now leverage either the OpenStack distribution provided by IBM (IBM Cloud Manager with OpenStack 4.3) or another OpenStack distribution based on Kilo. Orchestrating workloads on a non-IBM OpenStack distribution no longer relies on the Public Cloud Gateway.

The list of supported public clouds has been enriched with the addition of Microsoft Azure: from the IBM Cloud Orchestrator self-service user interface you can now register Microsoft Azure regions and manage deployment artifacts and resources.

The pattern engine is now based on OpenStack Heat, with no proprietary technology involved; the user experience has been enhanced, allowing you to store and select Heat templates from the self-service UI.

The installation procedure has been simplified: the number of required servers has been reduced to shrink hardware requirements, and a prerequisite checker has been added to enable fast detection of possible failure points.

I've been impressed by the speed of provisioning a set of virtual machines in just a few tens of seconds using IBM Smart Cloud Provisioning. In most cases you can get a running virtual machine in less than one minute.

The Smart Cloud Provisioning technology has been devised and particularly optimized for managing the following cloud infrastructure scenarios:

Infrastructure composed of homogeneous resources

A high level of standardization, with a relatively small set of master images used to provision many instances from the same image

A typical life cycle of provisioned resources, with a short average lifetime for provisioned virtual instances

Many other workloads can be deployed and easily automated on top of Smart Cloud Provisioning. For example, traditional stateful applications can be easily deployed for simple HA solutions. However, you get the maximum performance from Smart Cloud Provisioning when operating in the context of the above scenarios.

To achieve such high performance, Smart Cloud Provisioning has been designed around an optimized virtualization infrastructure based on OS streaming: there is no need to copy large image files over the network when provisioning.

Image copying is the single biggest bottleneck in VM provisioning today, in terms of CPU, memory, I/O and bandwidth usage. In traditional cloud provisioning approaches, all of this is pure overhead: nobody builds a cloud to provision systems. Provisioning is an overhead required to have systems on which business workload is deployed, and any overhead is in conflict with the business workload.

The key element of this infrastructure is the so-called ephemeral instance: a virtual machine with no persistent state. Once it is terminated, all the data associated with it is deleted as well. Ephemeral instances are clones of a master image, and each clone has a primary virtual disk which is ephemeral: when the instance goes, so does its ephemeral storage (mechanisms exist in Smart Cloud Provisioning to provide persistence, if needed by some scenarios).

When creating a new instance, since master images are read-only resources replicated across the storage cluster, Smart Cloud Provisioning uses Copy-on-Write (CoW) technology and the iSCSI protocol to stream them, avoiding expensive copying. Each iSCSI session results in a valid block device being created in the host OS. Of course, each guest OS (corresponding to a given instance) requires a writable block device representing the main disk of the system. All supported hypervisors have a storage virtualization layer which includes Copy-on-Write technology. For example, KVM's qcow2 files can be configured to implement CoW by referencing a backing storage device. VMware has redo files, which effectively do the same thing. In each case, the hypervisor can natively use the CoW file referencing the iSCSI block device to expose a virtual block device to the virtual machine. Depending on the hypervisor and guest OS, this device will show up as something like /dev/sda or C:\. The CoW files are stored locally on the hypervisor's file system. When the instance is terminated, the Smart Cloud Provisioning agent simply discards the CoW file and checks whether any other instances are using the same iSCSI device. If the device is no longer in use, the agent also tears down the iSCSI device.
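As a rough illustration of the copy-on-write mechanism just described, the following sketch uses qemu-img (the standard KVM image tool) to create a thin qcow2 overlay backed by a read-only master device. The device and file paths are hypothetical, and this is a sketch of the general technique, not the product's internal code:

```python
import subprocess

# Hypothetical paths: the read-only master image exposed by an iSCSI
# session, and the local copy-on-write overlay file for one instance.
BACKING_DEVICE = "/dev/disk/by-path/ip-198.51.100.10-iscsi-master-lun-0"
OVERLAY = "/var/lib/instances/instance-0001.qcow2"

# Create a qcow2 overlay that references the backing device. Only the
# blocks the guest writes are stored locally; reads of unmodified
# blocks are served from the shared, streamed master.
subprocess.run(
    ["qemu-img", "create", "-f", "qcow2",
     "-o", f"backing_file={BACKING_DEVICE},backing_fmt=raw",
     OVERLAY],
    check=True,
)

# Tearing the instance down is just deleting the overlay file; the
# shared master image is never modified.
# os.remove(OVERLAY)
```

The overlay is created in milliseconds regardless of the master image's size, which is why provisioning avoids the image-copy bottleneck entirely.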

Thanks to this infrastructure, provisioning a new virtual machine is a very fast and reliable process that can create individual systems in tens of seconds and handle peak request rates of thousands of systems per hour.

If you're interested in trying the Smart Cloud Provisioning product, you can download a trial version from the following link:

DevOps has become something of a buzzword lately but the idea behind it can be truly powerful. Using a combination of technology and best practices to increase collaboration between development and operations teams can accelerate the application development lifecycle while improving software quality and reducing costs.

For many, the development process has become more complex and segregated from operations. Factors such as inefficient communications, manual processes and poor visibility into the deployment process result in production bottlenecks as well as subpar quality throughout the development and delivery cycle.

To address these challenges, organizations have often turned to ad hoc and siloed efforts, so gaps still exist due to a lack of integration across people, processes and tools. The reality is that an effective DevOps solution requires an integrated approach of continuous delivery that optimizes and accelerates the application lifecycle in every phase: development, testing, staging and production.

What this means is that changes made in development are continuously built, integrated and tested for function, performance, systems verification and user acceptance, and then staged, ready for production. It can all be brought together through an integration framework that automates the individual tasks across the various stages of the pipeline and continuously delivers changes, providing end-to-end lifecycle management. Continuous automation is necessary in the following key areas:
• Continuous integration provides faster validation and delivery of code changes via automated, repeatable execution of build processes with continuous feedback
• Continuous deployment provides on-demand environment configuration and the ability to continuously deploy code and middleware configurations
• Continuous testing automates testing in production-like environments
• Continuous monitoring increases visibility into application performance and provides data to trace and isolate product defects
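To make the pipeline idea concrete, here is a minimal, hypothetical sketch of a stage runner; the stage names and shell commands are invented for illustration, and a real continuous delivery tool would add richer feedback, gating and rollback:

```python
import subprocess
import sys

# Hypothetical pipeline: each stage is a shell command that must
# succeed before the change is promoted to the next stage.
PIPELINE = [
    ("build", "make all"),
    ("unit-test", "make test"),
    ("deploy-to-staging", "./deploy.sh staging"),
    ("acceptance-test", "./run_acceptance.sh staging"),
]

def run_pipeline():
    for stage, command in PIPELINE:
        print(f"=== {stage} ===")
        result = subprocess.run(command, shell=True)
        if result.returncode != 0:
            # A failing stage stops promotion and gives fast feedback.
            print(f"Stage '{stage}' failed; change not promoted.")
            sys.exit(result.returncode)
    print("All stages passed; change is ready for production.")

if __name__ == "__main__":
    run_pipeline()
```

The point of the sketch is the promotion discipline: a change only moves to a richer environment after every earlier stage has passed.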

With an automated process for moving application changes through progressively richer test environments that mirror the production environment, chances for error and roll back are greatly reduced.

The result is increased visibility into the delivery pipeline, standardized communication between Dev and Ops and more efficient and accurate delivery of software projects. And the delivery process can scale dynamically as business needs grow.

Here’s how IBM is addressing DevOps, with the launch of SmartCloud Continuous Delivery--an agile, scalable and flexible solution for end-to-end lifecycle management that allows organizations to reduce software delivery cycle times and improve quality. SmartCloud Continuous Delivery is also available on Jazz.net.

Most generally accepted definitions of Cloud computing imply the notion of pay per use. For a Service Provider this means defining how they intend to bill for Cloud services, while for a Cloud-enabled data centre in the enterprise it implies some form of showback/chargeback model. As for the consumers actually using the Cloud, they want to understand the financial implications (what will it cost?) before committing their workloads to it.


As a Cloud User

Do you want to see what your project will cost before you provision it?

See a price list for all the services you can provision - comparing prices for different options?

Use a calculator to help you predict what a project will cost per month (or day or year)?

See what the effect of changing the resources used by a project will do to the cost?

As a Cloud Provider

Do you want to define different prices for a Service depending on the options that the user chooses?

Set different prices for each service for different customer groups?


The following screenshots illustrate how the new cloud cost management capability delivers solutions to these problems. The new TSAM Extension for Usage and Accounting is available to download now via the ISM Library.

See the Prices for the different Cloud offerings and compare different options

The first dropdown in the view shown below lists the Offerings that are available to the customer. Offerings can be anything the Cloud provider chooses to make available, for example: Virtual Servers, Storage, or even PaaS or SaaS offerings. The consumer can see up front what the different rates are for each component, and compare these across different offering types.


See what it would cost per month to run a new project in the Cloud

In this example, we want one machine to run an Application Server and one machine to run a Database, and we need additional Tier 1 storage to store the database data. The calculator shows how much this will cost per month, both overall and in terms of the two Service Offerings that this particular Cloud provides.


Different customers can be assigned to different subscriptions

A subscription is a means to segment your customers into different groups, such as by geography or customer type (direct, business partners, etc.).

In this example, the RATIONAL and TIVOLI customers are assigned to the US (United States) subscription. Customers with this subscription share the same set of available offerings and pay the same price for those offerings.


Offerings are defined once and then added to Subscriptions

Once they are part of a subscription, the actual rate values (price per unit) can be defined for each element of the offering template.
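The arithmetic behind the price preview is straightforward: sum rate times quantity over the requested resources. Here is a hedged sketch; the rates and resource names below are invented for illustration and are not taken from the product:

```python
# Hypothetical per-unit monthly rates for one subscription.
RATES = {
    "vcpu": 20.00,             # per virtual CPU per month
    "memory_gb": 5.00,         # per GB of memory per month
    "tier1_storage_gb": 0.50,  # per GB of Tier 1 storage per month
}

def monthly_cost(resources):
    """Sum rate * quantity over each resource in a requested project."""
    return sum(RATES[name] * qty for name, qty in resources.items())

# Example: app server VM + database VM + extra Tier 1 storage.
project = {"vcpu": 4, "memory_gb": 16, "tier1_storage_gb": 200}
print(f"Estimated monthly cost: ${monthly_cost(project):.2f}")
```

Because rates are defined per subscription, the same request priced under a different subscription would simply use a different RATES table.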


If you wish to join the TUAM group to get more involved in reviewing new features and testing beta capability, then let me know and I can send you an invite.

How can I "easily" monitor the performance and availability of the OS and applications of launched instances?

The solution is to integrate IBM SmartCloud Provisioning with IBM Tivoli Monitoring (ITM) so that all running instances are connected to the ITM server and managed according to performance expectations.

This can be achieved by exploiting the current integration between IBM SmartCloud Provisioning and the Image Construction and Composition Tool (ICCT), available in IBM SmartCloud Provisioning version 1.2, and performing the following steps:

San Diego was my second OpenStack summit. Many of the same faces were in the design summit sessions I attended, but there were many new faces as well. One of the most exciting observations from the Folsom design summit was the incredible talent pool assembled, and the Grizzly summit was no different -- it's great to interact with so many incredibly smart, deep and experienced people. I'm convinced that a single company could never amass such a collection of quality talent for one project. I guess it's no wonder they're saying OpenStack is the fastest growing open source project ever.

I must apologize in advance, because I am sure to miss someone, but I want to tell you about some of the people I interacted with in the nova, glance, and cinder design sessions. Over the past few months I've really been impressed with the PTLs (project technical leads). They're very smart, highly motivated, and excellent facilitators. The design sessions invariably get into open -- but productive -- debate, and I was impressed with the PTLs' natural ability to channel the discussion, bring out the key issues and land on some concrete next steps.

I got to meet Microsoft's Peter Pouliot, whose heroic and tenacious efforts successfully delivered Hyper-V support after a rather dodgy mess earlier in the year. Peter is not your stereotypical Microsoft developer. He's an open source guy through and through. It's clear that his personal spirit had a lot to do with corralling the community to deliver quality code in a very short time frame. It was great to meet Peter and some of his non-Microsoft collaborators. Great job guys!

I also had the pleasure of meeting some of VMware's developers, and not just those acquired via the billion-dollar Nicira acquisition. The Nicira guys are great -- no question -- but I was also very pleased to meet the VMware developer who completely rewrote the less-than-adequate VMware compute driver. I hope to work closely with them to ensure the hypervisor is well supported and as interoperable as possible with other proprietary and open source technologies.

Of course, I can't speak of OpenStackers without mentioning RackSpace. Over the past two summits, I got to interact with a number of RackSpace developers, aka Rackers. I've got to hand it to them: they really do have a great bunch of people and definitely bring a massive-scale service provider perspective to the discussion. Of course, being an IBMer myself, I can't help but bring the enterprise customer perspective into the mix. I think OpenStack benefits greatly from these two perspectives brought together in open source.

The Innovation:

OpenStack has done a great job defining an extensible framework for IaaS. This flexibility not only helps accommodate varied needs from enterprise to service provider, but also enables a massive sea of innovation. Since the Nicira acquisition there's been a lot of attention on the innovation around software-defined networking and Quantum, the OpenStack project that provides the abstraction for a variety of implementations, ranging from proprietary, to pure open source like Open vSwitch, to traditional standard networking equipment. I think storage is even hotter than networking these days, with a slew of vendors combining commodity 10GbE switches and commodity Intel servers with a mix of SSDs and spinning disks to provide new approaches to storage for virtualized environments. Of course, software plays a critical role in many of these virtualized storage solutions; DreamHost's open source distributed file system Ceph has been getting a lot of interest. Enterprise storage vendors like NetApp, IBM, and HP have also contributed cinder drivers to support their products within OpenStack clouds. There were also a number of summit discussions about exposing the different backend implementations of the abstractions with different qualities of service. Some people, including one of my developers, have begun to use "Volume Types" as a way to let users choose the kinds of volumes they need. I believe this is critical for compute clouds to cover the broadest spectrum of workloads, and of course this principle applies to other resources, not just cinder volumes.

Finally, perhaps the most important thing about an OpenStack cloud is interoperability. Starting with the hypervisor, IBM has a solution that enables interoperability of images, volumes, and networks across Xen, KVM, VMware and Hyper-V. We had a few sessions where we discussed how we can bring the same interoperability to OpenStack. To start with, we need to be able to register read-only cinder volumes as glance images. Next, to ensure we can scale out, we need to be able to register multiple copies of the same image. Finally, to take advantage of performance, we need to abstract the clone operation to enable Copy on Write (CoW) and Copy on Read (CoR), as well as the current local cache plus CoW mechanism, for backwards compatibility and to support 1GbE networks. Combining these will enable images to work across multiple different hypervisors.

We also need interoperability with existing images, which means VMware and Amazon images as the two most common forms. Today, it's quite easy to automate simple image formatting differences, but the challenge is in the assumptions made by the images. The current direction for OpenStack is to use config drive v2 to pass instance metadata to the guest, which is responsible for pulling key system configuration such as hostnames, credentials, and IP addresses. Typical VMware images, on the other hand, generally expect either a push model, where the hypervisor manipulates the filesystem prior to booting the image, or configuration via their guest agent, VMware Tools.
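For reference, the pull model works roughly like this: the guest mounts the config drive and reads the metadata the hypervisor placed there. A minimal sketch follows; the mount point is an assumption, while openstack/latest/meta_data.json is the conventional config drive layout:

```python
import json

# Assumes the config drive has already been mounted read-only here,
# e.g. via: mount -o ro /dev/disk/by-label/config-2 /mnt/config
CONFIG_DRIVE = "/mnt/config"

# Conventional OpenStack config drive layout.
with open(f"{CONFIG_DRIVE}/openstack/latest/meta_data.json") as f:
    meta = json.load(f)

# The guest pulls its own configuration (hostname, instance uuid, ...)
# from the metadata instead of having the hypervisor push changes
# into the image before boot.
print("hostname:", meta.get("hostname"))
print("uuid:", meta.get("uuid"))
```

This is exactly the job that a first-boot agent like cloud-init automates, as discussed below.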

To make matters worse, OpenStack currently assumes different image formats for each supported hypervisor. One of the sad punchlines from Troy Toman's keynote was that RackSpace's private cloud distro, Alamo, does not interoperate with their public cloud, even though they're both OpenStack. The good news is that, as Troy went on to say, the time has come to focus on interoperability.

I got into a great conversation with Jesse Andrews, one of the original OpenStack guys, now at Nebula. He described an approach to image interoperability that enables cloud operators to provide custom image workers at image ingestion time. This lets cloud providers register custom image processing code that gets called whenever an image is uploaded to Glance. The simplest use case is converting image formats, enabling Alamo KVM images to run on RackSpace's Xen-based public cloud.

Fortunately, IBM's SmartCloud Provisioning (SCP) includes some image management technologies which can help with the more challenging problems mentioned above. Today's SCP 2.1 will interrogate images in the library and check for cross-hypervisor compatibility. Users gain visibility into this information and can optionally automate fixes wherever possible. We also use this technique to detect the presence of a critical guest agent.

This brings me to one of my favorite little open source projects, cloud-init, created by Scott Moser at Canonical. If only it wasn't GPL ;-). Many OpenStackers are using cloud-init to automate the system configuration pull from config drive v2 mentioned above. This little bootstrap can do much, much more, but this is certainly a great job for this trusty little tool. Unfortunately, it's only for Linux. It's even been made to work with Fedora and will likely be included in RHEL. Since we cannot use GPL code in IBM products, we have a similar bootstrap for both Windows and Linux guests. We're working with our lawyers to get approval to contribute this code to cloud-init. Of course, if Canonical wants to use a more commercially friendly license like OpenStack has done, then I could spend less time with lawyers and more time hacking code ;-).

The beauty of this little bootstrap is its simplicity, which enables us to automatically inject it into Windows and Linux images. This will let us automatically fix up any old VMware or Hyper-V image so that it works on OpenStack. This is a critical first step towards interoperability.

OpenStack is truly becoming an industry-changing and historic project. With so many incredibly talented people from countless companies across the globe, it's no wonder there is so much innovation in the community. I'm really happy to be a part of this growing community. Together I believe we can change the industry for the better. If you would like to be part of this growing and innovative project, check out the "community" link at www.openstack.org. Also, we would like to invite you to check back here for future blogs on OpenStack and IBM's involvement. OpenStack is a big part of IBM's open cloud strategy and we want to be sure to keep you up to date on our progress.

The new capacity planning tool, now available in Beta, unlocks the value of Tivoli Monitoring and the Tivoli Data Warehouse and enables a rich set of analytics on your existing data. This new capability will enable you to:

Optimize how you use capacity in the environment with intelligent workload sizing and placement

Apply business and technical policies to keep your environment efficient and risk-free

Make changes in a what-if analysis framework and view the impact of change.

The tool leverages Tivoli Integrated Portal (TIP) and Tivoli Common Reporting (TCR) with an embedded Cognos reporting engine. It integrates with the ITM and TDW infrastructure to get configuration and usage data from your virtual infrastructure.

Here's a quick overview of the advanced planning scenarios you can now implement in your virtual environment using this tool.

Key Scenarios for a Capacity Analyst

Planning for capacity growth: Let's suppose your business provides a forecast that will increase the load on the IT infrastructure in the coming months. The capacity analyst can model the increase in resource requirements from the existing VMs in the what-if planning tool, scope the part of the infrastructure to analyze, and automatically generate a plan to fit the increasing demand. If required, new servers can be added to handle the growth.

Ensure compliance with defined capacity planning policies: The LOB and application owners often provide the capacity analyst with a list of requirements for how their workloads should be placed on the IT infrastructure. These are typically business guidelines to improve efficiency, reduce cost, respect organizational boundaries, or cut risk on a virtual infrastructure. For example, the Finance and Payroll apps may not share common hosts, or apps with different downtime requirements may not share hosts. There may also be technical policies that guide planning: for example, reduce license cost by putting OS images on fewer hosts, or keep some headroom for the database VMs at the DBA's request. The tool can help to centralize the creation of such policies and select a subset to guide a what-if planning scenario.
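As an illustration of an anti-colocation policy like the Finance/Payroll example, here is a minimal, hypothetical sketch of how a placement plan could be validated against such a rule; the data structures and tags are invented for illustration and this is not the tool's actual policy engine:

```python
from collections import defaultdict

# Hypothetical rule: VMs tagged with these applications must never
# share a physical host.
ANTI_COLOCATE = {"finance", "payroll"}

def violations(placement):
    """placement maps vm_name -> (host, app_tag); return the hosts
    where anti-colocated apps landed together."""
    apps_on_host = defaultdict(set)
    for vm, (host, app) in placement.items():
        apps_on_host[host].add(app)
    return [host for host, apps in apps_on_host.items()
            if len(apps & ANTI_COLOCATE) > 1]

plan = {
    "vm-fin-1": ("host-a", "finance"),
    "vm-pay-1": ("host-a", "payroll"),   # violates the policy
    "vm-web-1": ("host-b", "web"),
}
print("Policy violations on hosts:", violations(plan))
```

A planning engine would run checks like this against every candidate placement before recommending it.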

Avoid bottlenecks in your environment: IT administrators can predict a bottleneck in a VMware cluster that may not be fixed by dynamic allocation within the cluster. These are often long-term issues, as the cluster may be running VMs that are not the right combination to share resources dynamically. The planning capability may be used to recommend how VMs can be moved across clusters, or how clusters can be restructured, to remove bottlenecks and optimize resources in a broader scope.

Plan for new users in a Cloud environment: Cloud administrators are often challenged with planning for new users on the shared infrastructure and doing what-if analysis. With this tool, they can simulate new VMs on the discovered Cloud, add information regarding users, create policies specific to those users, and create a recommended new environment plan. The policies may simulate users that want dedicated hosts for their VMs, or images that need specific types of hardware, and so on. The recommended plan can help them understand how and where to add new hardware, or how to consolidate VMs to free up fragmented Cloud resources.

Plan for retiring or re-purposing hardware: The planning capability enables the user to add new information to the discovered environment. For example, a user can add warranty date information about the discovered hardware, often contained in spreadsheets or other tools, and then select hosts that are more than 5 years old in the planning tool. They can add new hardware from the catalog for a what-if scenario. The tool can then automatically generate an optimized plan showing how the workloads from the old hardware will fit on the new hardware and how many new machines of which type are required.
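The hardware-age selection in that scenario is just a date filter over the enriched inventory. A hypothetical sketch, with invented field names and dates:

```python
from datetime import date

# Hypothetical inventory enriched with purchase dates imported
# from a spreadsheet.
HOSTS = [
    {"name": "host-a", "purchased": date(2006, 3, 1)},
    {"name": "host-b", "purchased": date(2011, 8, 15)},
]

def older_than(hosts, years, today=None):
    """Return names of hosts purchased more than `years` years ago."""
    today = today or date.today()
    return [h["name"] for h in hosts
            if (today - h["purchased"]).days > years * 365]

# Select hosts more than 5 years old as retirement candidates.
print(older_than(HOSTS, 5))
```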

There may be several other scenarios that one can come up with using this tool framework.

The planning tool also provides a workflow-driven UI with both fast-path and expert-mode options. The main workflow page is shown below with a 5-step approach to create optimized virtual environment plans with default options for several steps. One can iterate through these steps to reach the desired results.

Load the latest configuration data of the virtual environment for analysis

Set the time period to analyze historical data

Define the scope of hosts to analyze in the virtual environment

Size Virtual Machines in scope

Generate a placement plan for the virtual machines on the physical infrastructure in scope

An example recommendation output of the tool is shown below, with interactive topology navigation, summary views, and risk scores assigned to the infrastructure elements. The recommendation is actionable: the structured output is available as XML, so one can write an adapter that triggers automation workflows to implement the recommendations. The example screen shows how we analyzed a cluster with 4 hosts and recommended consolidation onto 3 hosts.
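Because the recommendation is emitted as structured XML, writing such an adapter is mostly parsing. Here is a hedged sketch; the element and attribute names below are invented for illustration, since the tool's actual schema is not shown here:

```python
import xml.etree.ElementTree as ET

# Hypothetical recommendation document; the real schema may differ.
RECOMMENDATION = """
<plan>
  <move vm="web-01" from="host-4" to="host-2"/>
  <move vm="db-02" from="host-4" to="host-3"/>
  <retire host="host-4"/>
</plan>
"""

root = ET.fromstring(RECOMMENDATION)
for move in root.findall("move"):
    # An adapter could translate each move into a live-migration
    # step in an automation workflow.
    print(f"migrate {move.get('vm')}: {move.get('from')} -> {move.get('to')}")
for retire in root.findall("retire"):
    print(f"power down and decommission {retire.get('host')}")
```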

The topology view is interactive as it allows the user to click on various nodes and visualize the summary of the infrastructure levels below the node. Risk levels of the nodes are shown as node colors.

We hope this will be an exciting set of functions to start with, and we look forward to suggestions on feature improvements and scenarios. Please contact Gary Forghetti (gforghet@us.ibm.com) to schedule a demo or sign up for the Beta version of the tool. We'll keep updating this forum with more details, such as demo videos, white papers and so on.

A common adoption pattern for cloud computing is desktops. It's really straightforward because in general each company has standardized desktops: only specific versions of the operating system are supported, only specific flavours, only some applications are allowed, and typically everything is managed by the IT team.

If we think about the benefits of adopting desktop cloud, some of them really jump out: the IT team can truly enforce standardization (e.g. you can select as desktop only one of the proposed flavours); maintenance of the hardware becomes far easier given its consolidation; and old, outdated PCs can be used just as connectors to the desktop, gaining new life. From the desktop user's point of view, he no longer needs to carry company assets around to work: healthier (no more heavy hardware to take home or travel with) and safer (the data is in the cloud).

But this is nothing new; desktop cloud solutions are already on the market. So let's see if IBM SmartCloud Provisioning can bring additional benefits to the desktop world.

What if we start dealing with non-persistent desktop images? Non-persistent images are the ones that disappear once you shut them down. You might be asking yourself, "Well, that's not so clever; what about my data? Is it lost?" This is actually a very good point, and it is the keystone of the benefits that come with adopting non-persistent images.

The idea is that all user data gets stored on external (persistent) volumes that can be attached to and detached from the non-persistent image on demand.

If we now apply this technology to the desktop world, it sheds an interesting new light on some typical and painful scenarios:

Operating system aging

Operating system or software patching

Maintaining the compliance of the desktops

Optimizing resource consumption

Supporting changes in the number of desktop users

In a traditional infrastructure, when the operating system goes out of maintenance, or is getting close to it, a massive migration campaign starts: all desktops need to be migrated. Statistically, the migration does not go smoothly for all users, and some of them will be stuck even for days. With non-persistent images you can easily overcome this: either create a new master image with the new operating system or upgrade a single instance of the image, run your test campaign to make sure everything keeps working, then deploy it in as many instances as there are desktops to upgrade, attach the volumes with the user data to the new images, and get rid of the old images. If you leverage the incredible deployment speed of IBM SmartCloud Provisioning, you'll have a brand new set of desktops in minutes.

We can think analogously about patching the operating system or software running on the desktop: the key idea is that you're always patching either the operating system or a specific piece of software, never the user data, which keeps living on separate volumes.

On the compliance side, remember that the user cannot save any change he makes to the boot disk of the image, since nothing ever gets stored on that disk. He is only empowered to write his own data to the additional volumes. This should discourage him from even trying to install new software or edit the operating system configuration, since everything will be lost at the first shutdown.

I know that in your company you may have different configuration flavours of the same operating system, depending on the department for which the desktop is tailored. For example, you may need different firewall configurations according to the security level the end user is entitled to. Well, with IBM SmartCloud Provisioning you can leverage the User Data field at deployment time to specify these special configurations. This does not even have to be shown to the end user: you can hide it by extending the list of offerings with the specific configurations. Under the covers the instance is launched with the proper parameters: no master image duplication, no manual configuration; everything is automated and standardized.
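
As an illustration, the catalog entries can all point to the same master image and differ only in the User Data payload. Everything below (offering names, payload keys, client) is invented for the sketch:

    # Hypothetical sketch: one master image, per-department User Data.
    from hypothetical_cloud import CloudClient  # illustrative stand-in

    # The end user sees distinct catalog offerings; under the covers they
    # differ only in the User Data passed at deployment time.
    OFFERINGS = {
        "Desktop - Finance":     {"firewall_profile": "strict"},
        "Desktop - Engineering": {"firewall_profile": "developer"},
        "Desktop - Sales":       {"firewall_profile": "standard"},
    }

    def deploy_desktop(cloud: CloudClient, offering: str):
        # A first-boot script inside the image reads the payload and
        # applies the matching firewall configuration.
        return cloud.deploy_instance(image="corp-desktop-master",
                                     flavor="desktop-small",
                                     user_data=OFFERINGS[offering])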

What about optimizing resources? Desktops, by their nature, all have the same operating system and configuration (at least per department), and usually they come with the same applications installed on top. If you deal with non-persistent images, you stop storing lots of duplicated, useless copies of the same operating system and software on disk. Moreover, once a desktop is shut down, its resources (cores and memory) are released, so you can better optimize your hardware by using those resources for other applications or users (they may even be server applications, or desktops for users in a different timezone).

New employees coming on board? A project outsourced to an external workforce? You want these people to be productive more or less immediately. With IBM SmartCloud Provisioning, their desktops will be up and running in seconds.

With our 7.2.2 release we enhanced our extensibility model. What does this mean for you?

Private and public cloud service providers can extend their solution by adding extensions to their environment; for example, they may want support for specific network or storage use cases. You will see extensions appear on the ISM Library over the next weeks and months.

ISVs, SIs, customers and IBMers can contribute to the extension community by building and uploading their accelerators. How to build a Tivoli Service Automation Manager extension is described in the extensions guide.

Extensions can vary in value and complexity depending on the business and technical objectives.

You can just change the UI branding, or implement sophisticated custom workflows.

[Figure: overview of the extension points]

The case study shows how one of the world's leading infrastructure outsourcing providers saw the business opportunity in offering its clients a cloud-based solution that combines the benefits of a high-value infrastructure service provider with the cost advantages of cloud computing. Capgemini focused the new cloud-based services on delivering Infrastructure as a Service capabilities to its clients with much higher flexibility and substantial cost-efficiency. In partnership with IBM, Capgemini built a fully integrated cloud delivery platform for clients in the UK and USA, leveraging the Tivoli Service Delivery Manager solution, which includes the IBM Tivoli Service Automation Manager, Tivoli Monitoring and Tivoli Usage Accounting Manager products, running on top of IBM BladeCenter HS22V hardware and XIV Storage System technologies.

The key aspects of the solution built by Capgemini have been:

• Implementation of a resilient and scalable global infrastructure, capable of managing resource pools in different regions, with a modular design for quick scale-out

• A single solution that can manage a wide range of platforms and architectures without being tied to any specific hardware technology or vendor, with the ability to choose the right hypervisor and guest OS platform for the right workload

IBM® Tivoli® Service Automation Manager (TSAM) has delivered a new extension to configure extra disks, in addition to the boot disk, when requesting virtual machines within a Project on VMWare servers. Downloading the installation package from the Integrated Service Management Library and installing it on top of the TSAM 7.2.2 platform enables the cloud administrator to prepare and manage a multi-tenant, customer-segregated environment for hosting the additional disks. In particular, the cloud administrator can select the VMWare data stores to use for additional disks, grouping them into TSAM storage pools that can then be associated with one or more customers (*), meaning that only those customers can carve storage from those data stores. She can also limit the amount of storage that each customer can use on a TSAM storage pool. Finally, the cloud administrator can flag this type of TSAM storage pool to be thin provisioned.
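
A rough way to picture the administrator-side model is a pool object carrying its data stores, its authorized customers, and per-customer quotas. The class and field names below are invented for illustration, not the actual TSAM data model:

    # Illustrative model of customer-segregated storage pools with quotas.
    from dataclasses import dataclass, field

    @dataclass
    class StoragePool:
        name: str
        datastores: list          # VMWare data stores backing the pool
        customers: set            # only these customers may carve storage
        quota_gb: dict            # per-customer cap, in GB
        thin_provisioned: bool = False
        used_gb: dict = field(default_factory=dict)

        def allocate(self, customer: str, size_gb: int) -> bool:
            """Grant an extra-disk request if the customer is authorized
            on this pool and still within its quota."""
            if customer not in self.customers:
                return False
            used = self.used_gb.get(customer, 0)
            if used + size_gb > self.quota_gb.get(customer, 0):
                return False
            self.used_gb[customer] = used + size_gb
            return True

    pool = StoragePool(name="gold-pool",
                       datastores=["ds-01", "ds-02"],
                       customers={"acme"},
                       quota_gb={"acme": 500},
                       thin_provisioned=True)
    print(pool.allocate("acme", 200))   # True: authorized, within quota
    print(pool.allocate("globex", 10))  # False: not associated with pool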

Once the cloud administrator has prepared the environment, the users of the cloud can request virtual machines equipped with extra disks, in addition to the boot disk, taken from one of the TSAM storage pools they are authorized to use. The extension automatically formats and attaches the disks to the virtual machines, so when the users log in they can start working right away.

The life-cycle of the extra disks is
tied to the life-cycle of the virtual machine to avoid any inconsistency of
data, which means that they are saved, restored, and deleted together with the
boot disk.

The Extension for Additional Disk still has some gaps that should be filled in one of the next releases: users cannot expand extra disks, and they cannot modify the configuration of a virtual machine to attach or detach extra disks.

(*) This article focuses on a public cloud solution, where the service provider sells services to its customers. The cloud administrator is the administrator of the entire cloud platform.

The IBM® Image Construction and Composition Tool is a web application that simplifies and automates virtual image creation for public and private cloud environments, shielding the differences in cloud implementations from its users.

This white paper provides Software Specialists and other product experts with helpful tips and techniques to plan, design, and create software bundles in the Image Construction and Composition Tool.

My team and I have been heads-down working to get SmartCloud Orchestrator, our newest cloud offering, to market. Last week we had our annual Pulse conference in Vegas. I'm just recovering from its aftermath now and wanted to write a short blog about the experience. It should be no surprise that folks like James Governor of RedMonk offered some interesting perspectives, along with InfoWorld and Wired. While I am very pleased with the overwhelmingly positive press coverage, I am truly stoked about the direct customer feedback I got during the event.

Between sessions, Vegas dinners, and the occasional bit of shut-eye, I had a lot of customer meetings. Since we first announced our involvement with OpenStack, Chris Ferris, Todd Moore and I have been meeting with customers all over the world. Most of these discussions were with customers already working with OpenStack on their own. Last week we got the band back together, meeting with customers both jointly and independently. What was interesting for me was that it's no longer just the bleeding-edge early adopters! Many customers are realizing that OpenStack is the future of the datacenter and they don't want to get left behind. Similarly, more and more of our enterprise customers have seen the benefits of DevOps and its relationship to cloud technologies. Things really have changed a lot during this past year!

While standardizing on the IaaS is a critical first step, I was thrilled to hear how many customers are using Chef and/or Puppet. These arguably represent the second step towards the fruits of DevOps. It really feels like we're finally ready for the next step in this journey. Fittingly, less than two weeks before Pulse, OpenStack Heat was voted in as a core OpenStack project after a year of incubation. Heat was started by Red Hat as an open source implementation of Amazon's CloudFormation, which enables users to easily combine multiple cloud resources together to form more meaningful solutions, applications, or services. Just as OpenStack compute moved past its original Amazon-compatible APIs onto its own truly open APIs, I expect we'll see the same evolution in Heat. In fact, there is already an OASIS standards technical committee working on this very problem, called TOSCA. I really think these two efforts need to converge so that TOSCA is the open standard specification and Heat is the open source reference implementation. The Heat team has been talking about this since its inception.
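
To give a flavour of what combining cloud resources looks like, here is a toy CloudFormation-style template assembled in Python. The resource properties are simplified for illustration; early Heat consumed CloudFormation-compatible JSON of roughly this shape:

    # Toy CloudFormation-style template: an instance plus a volume
    # attachment, declared together so the orchestrator can create them
    # as one unit. Property values are invented for illustration.
    import json

    template = {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Description": "One web server with an attached data volume",
        "Resources": {
            "WebServer": {
                "Type": "AWS::EC2::Instance",
                "Properties": {"ImageId": "web-master",
                               "InstanceType": "m1.small"},
            },
            "DataVolume": {
                "Type": "AWS::EC2::Volume",
                "Properties": {"Size": "10"},
            },
            "Mount": {
                "Type": "AWS::EC2::VolumeAttachment",
                "Properties": {
                    "InstanceId": {"Ref": "WebServer"},
                    "VolumeId": {"Ref": "DataVolume"},
                    "Device": "/dev/vdb",
                },
            },
        },
    }

    print(json.dumps(template, indent=2))  # hand this to the orchestrator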

I really liked the way Jesse Andrews, one of the OpenStack founders, put it. Jesse has long used the analogy of the Linux kernel to describe OpenStack and does not want it to stray from that path, for its own good. When we talked about Heat last week, he again reached for a Linux analogy: this time he chose the Debian package manager, APT, describing Heat as the package manager for the cloud operating system. I think this is a brilliant analogy, because the success of any operating system hinges on the applications that run on it. Similarly, the value of cloud is in the applications and services that run on it.

I'm excited about Heat and I'm looking forward to the next OpenStack summit to discuss its evolution. Our SmartCloud Orchestrator is all about open, reusable automation content. Be it native packages, Chef recipes and cookbooks, virtual images, TOSCA templates, or BPMN standards-based runbooks, we want our customers, partners, and open source communities to be able to share and reuse cloud automation. I hope Heat and TOSCA become the enablers for distributing and operating cloud applications and services. Anyone interested in helping with this, please contact me and join me next month at the Havana summit!

IBM
SmartCloud Provisioning is an infrastructure-as-a-service cloud able
to work with different types of hypervisors. You can easily install
and configure new compute nodes to run your virtual images on KVM,
VMWare and Xen.

This is a very interesting statement, and it sounds very useful. The first time I read it, I thought: "Do I need three different images? Can I have the same image running on any hypervisor?" The answer is yes to both questions. Depending on how you intend to run your image, you may need different images for different hypervisors, or you can use a single image regardless of the underlying hypervisor.

Before going deeper into how IBM SmartCloud Provisioning deploys virtual images, let me discuss the different hypervisors. Each has its own peculiarities, letting you leverage different features implemented in different ways. This forces us to deal with different hypervisor limitations. The most common limitations are the following:

• VMWare and Xen can manage SCSI devices, but KVM cannot

• KVM and Xen can use virtio drivers, but VMWare cannot

• VMWare uses a proprietary agent inside the guest OS (the VMWare tools) that does not work with Xen or KVM

• VMWare uses the vmdk file format, which is proprietary

Any of these differences can prevent an image from working on every hypervisor. Clearly, if you do not pay attention to how you create your base images, you might end up needing different images for the different hypervisors. So the next step is understanding how to create a "magic image" able to run everywhere.

The first step is to figure out what the different hypervisors have in common:

• Format: every hypervisor type supports the raw format.

• Device type: every hypervisor type supports the ide device.

• OS configuration: the hypervisors do not require specific configurations, but the manager might.

Working with IBM SmartCloud Provisioning, none of the previous points will give you any trouble. In fact, before creating a base image you just need to follow a few rules to ensure portability.

It is important to use the raw format for the initial image. Here we hit an interesting problem: how do you create a VMWare image in raw format? The answer is very simple: since we are creating a fully portable image, you can use KVM to build the master image and then run it everywhere.
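
For instance, the standard qemu-img utility that ships with KVM can create a raw disk, or convert an existing vmdk to raw. The file names below are just examples:

    # Create a raw master disk, or convert an existing vmdk to raw, using
    # the standard qemu-img tool that ships with KVM.
    import subprocess

    # A fresh 10 GB raw disk to install the master OS onto under KVM:
    subprocess.run(["qemu-img", "create", "-f", "raw", "master.raw", "10G"],
                   check=True)

    # Or convert a disk you already have in VMWare's vmdk format to raw:
    subprocess.run(["qemu-img", "convert", "-f", "vmdk", "-O", "raw",
                    "existing.vmdk", "master.raw"],
                   check=True)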

At this point we have our raw image, fulfilling all the requirements of the hypervisor manager. What is the next step? You need to register it into IBM SmartCloud Provisioning. To do that, you can use either the administrative UI or the CLI. Whichever interface you use, just remember the following settings during registration:

• Use the raw type

• Use the ide device

• Do not enable virtio
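
As a sketch of what the registration amounts to (the client and parameter names below are invented; the actual fields live in the administrative UI or CLI):

    # Hypothetical sketch of registering the portable image; the three
    # settings are the ones that matter for portability.
    from hypothetical_cloud import CloudClient  # illustrative stand-in

    cloud = CloudClient(endpoint="https://cloud.example.com", user="admin")
    cloud.register_image(path="master.raw",
                         disk_format="raw",   # every hypervisor reads raw
                         device_type="ide",   # every hypervisor offers ide
                         virtio=False)        # virtio would rule out VMWare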

You finally have a fully portable image. IBM SmartCloud Provisioning will decide by itself which compute node is the most appropriate to run your "magic image".

Even though the described process is very easy, there may be cases where you cannot follow it, namely when you already have images in a proprietary format and you need to use them. In this case the Virtual Image Library comes to the rescue. It is a very useful IBM SmartCloud Provisioning component able to manage images across federated hypervisors. It can check images into its own repository so that you can then check them out to a different federated virtualization environment, and during this process it converts the image format for you.
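
Sketched with an invented client (the Virtual Image Library has its own UI and interfaces; the method names below are purely illustrative), the flow looks like this:

    # Hypothetical sketch of the check-in/check-out flow through the
    # Virtual Image Library; method and argument names are invented.
    from hypothetical_cloud import VirtualImageLibrary  # illustrative

    vil = VirtualImageLibrary(endpoint="https://vil.example.com")

    # Check a proprietary-format image into the library's repository...
    vil.checkin(source_env="vmware-prod", image="win7-gold.vmdk")

    # ...then check it out to SmartCloud Provisioning; the library
    # converts the disk to raw format on the way out.
    vil.checkout(image="win7-gold", target_env="scp", output_format="raw")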

Using it, you can for example check in a VMWare image and then check the same image out to IBM SmartCloud Provisioning, obtaining a raw-format image. The next interesting question is whether it will run or not. The answer strongly depends on the compute node type and the image configuration. Given what we discussed earlier, you should keep the following considerations in mind:

• OS configuration: as I said, IBM SmartCloud Provisioning requires images to have certain OS configuration. To end up with a working image, you must ensure that the initial VMWare image has all the required configuration before you start importing it into the Virtual Image Library. Otherwise it will not be able to start (for example, if the image does not have DHCP configured, it will never get a valid IP).

• Device type: if your IBM SmartCloud Provisioning has only KVM compute nodes, an image using a SCSI device will not be able to run at all. To have it running, you must have at least one VMWare compute node. If the initial image uses an ide device, you will not have any trouble.

In addition to image format conversion, the Virtual Image Library is also able to modify the Windows device drivers. In the process of moving an image from VMWare to the Virtual Image Library and then to IBM SmartCloud Provisioning, the application changes the Windows configuration, allowing it to run on any hypervisor.

Additional information about the previous topics can be found on the IBM Info Center pages: