IBM is excited to announce that Beta versions of the IBM Tivoli Monitoring (ITM) vNext release are now available to all interested customers. IBM invites you to download our Beta code and assist us by evaluating the new functionality, product improvements, and code quality of IBM Tivoli Monitoring vNext.

A new ITM Community site has been created to provide you with all the information you need to participate with us in this exciting Beta program. In this community you can download Beta drivers, see important announcements, interact directly with product developers and planners, and give the ITM development team your valuable opinions about our planned product enhancements. Please click here and ask to join the ITM vNext Open Beta Community.

IBM SmartCloud Provisioning introduces PaaS capabilities with the ability to create blueprints that standardize the deployment of complex tiered applications, such as a J2EE three-tier application made up of an HTTP server, an application server, and a database server, each running on a different VM and possibly configured on different network segments. These blueprints are called patterns in IBM Workload Deployer terminology, which is the foundation technology of SmartCloud Provisioning. Virtual system patterns define a topology and middleware software configuration that meets application requirements; you can set up that configuration using familiar concepts and leverage existing scripts that SmartCloud Provisioning executes when the virtual machines hosting the middleware components are deployed to the cloud.
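To make the blueprint idea concrete, here is a minimal, purely illustrative sketch in Python. This is a hypothetical data model, not the actual IBM Workload Deployer pattern format; the part roles and script names are invented for the example.

```python
# Illustrative sketch only: a hypothetical data model for a three-tier
# virtual system pattern (NOT the real IBM Workload Deployer format).
pattern = {
    "name": "three-tier-web-app",
    "parts": [
        {"role": "http-server", "image": "rhel-base", "scripts": ["install_ihs.sh"]},
        {"role": "app-server",  "image": "rhel-base", "scripts": ["install_was.sh", "deploy_app.sh"]},
        {"role": "db-server",   "image": "rhel-base", "scripts": ["install_db2.sh"]},
    ],
}

def deployment_plan(pattern):
    """Return the ordered (role, script) pairs that would be executed on
    each VM after it boots, in the spirit of the activation engine."""
    steps = []
    for part in pattern["parts"]:
        for script in part["scripts"]:
            steps.append((part["role"], script))
    return steps

for role, script in deployment_plan(pattern):
    print(f"{role}: run {script}")
```

The point of the sketch is only the separation of concerns: the pattern captures topology and configuration as data, while the cloud runs each part's scripts on its VM at deployment time.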

You can use any virtual image to build a virtual system pattern. However, in order to perform the configuration steps described above, you need to inject a so-called activation engine, which executes the configuration scripts defined when creating the virtual system pattern (add-on scripts and script packages). The good news is that you do not have to do that manually: SmartCloud Provisioning provides the Image Construction and Composition Tool (ICCT), which you can use to clone and extend your basic certified image to make it “cloud ready”. Images extended in this way are called intermediate images. When building a pattern in the pattern editor, you can drop any add-on script or script package on an intermediate image, but not on a basic image. You can still add basic images to your virtual system pattern topology, but SmartCloud Provisioning cannot perform sophisticated configuration steps on them: these images are better suited to IaaS deployment scenarios. For those scenarios you can still define additional network interfaces (vNICs) and attach additional disks to the virtual image instance. What you cannot achieve without extending the image is their configuration: you have to log in to the provisioned virtual machines and configure the vNICs, as well as format and mount the raw disks.

The Virtual Image Library has been enhanced to discover the capabilities of a virtual image and tag it, so you can see at a glance whether a virtual image is suitable for inclusion in a virtual system pattern and whether it can be extended.

People always ask about failures. It's great that the cloud software can survive failures, but what about my user workloads? The simplest, yet all too unsatisfying, answer is that your application should be designed to tolerate failures, and since the cloud is resilient you can always get more cloud resources. Unfortunately, most people aren't satisfied with this answer. Many enterprise IT folks are used to running expensive servers with very expensive Fibre Channel-attached SAN storage. But what happens with commodity storage exposed over commodity networks and servers?

SCP 1.2 has three kinds of storage: 1) gold master images, 2) block storage (volumes), and 3) ephemeral storage. Master images are replicated across a cluster of Linux servers. When an instance is created from a master image, the guest OS sees a single disk; however, all writes go to ephemeral storage attached to the hypervisor. Although some people do recover the ephemeral storage after failures, it is designed to be discarded whenever instances are terminated, intentionally or otherwise. The master images are replicated for resiliency and scale-out performance. For resiliency, we generally establish two redundant iSCSI sessions to two separate storage nodes. This setup can survive network, disk, and storage node failures without affecting the guest workload.

Block storage, on the other hand, is a bit trickier. We purposely chose not to force redundancy, which turned out to be the cause of the "Amazonocalypse" last spring. Some early customers told us that they were sufficiently happy using RAID storage on their storage nodes, so they could recover from a failure even though there would be some downtime. Of course, other users want their storage to be always available. For those users, we've always recommended allocating multiple volumes for each instance. If you create multiple volumes in one call, the cloud will attempt to place each volume on a separate physical storage node. Then, using guest-level software RAID such as mdadm for Linux or the Windows Disk Management tool, you can set up disk mirroring to tolerate the failure of one of the nodes. Of course, you'll need to monitor for failures so you can re-establish redundancy. You can use SmartCloud Monitoring to detect faults and even trigger automated recovery scripts.
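As a sketch of that guest-level mirroring setup, the following Python helper assembles the Linux commands you would run (as root, inside the guest) to mirror two attached volumes with mdadm. The device names /dev/vdb and /dev/vdc, the array name, and the mount point are assumptions for the example; check `lsblk` on your own instance first.

```python
# Sketch: build the guest-level RAID-1 setup commands for two attached
# cloud volumes. Device names below are assumptions, not guaranteed.
def raid1_setup(dev_a="/dev/vdb", dev_b="/dev/vdc",
                md="/dev/md0", mount_point="/data"):
    """Return the shell commands that mirror two block-storage volumes
    with mdadm, format the array, mount it, and persist the config."""
    return [
        f"mdadm --create {md} --level=1 --raid-devices=2 {dev_a} {dev_b}",
        f"mkfs.ext4 {md}",
        f"mkdir -p {mount_point}",
        f"mount {md} {mount_point}",
        # Persist the array and mount across reboots:
        "mdadm --detail --scan >> /etc/mdadm.conf",
        f"echo '{md} {mount_point} ext4 defaults 0 0' >> /etc/fstab",
    ]

for cmd in raid1_setup():
    print(cmd)
```

If one storage node fails, the array degrades but the guest keeps running; after attaching a replacement volume, `mdadm --add` re-establishes the mirror, which is exactly the recovery step you would automate from your monitoring.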

While this is an entirely workable solution that is both scalable and low cost, it is still not enough for some use cases. In particular, it will not work for "persistent instances". You should generally avoid persistent instances, but sometimes it's just a heck of a lot easier: you don't have to be smart about configuring your Windows or Linux guest OS. For this scenario, some customers are combining SCP 1.2 with GPFS, an extremely powerful cluster file system that has been used in some of the world's largest supercomputer HPC clusters. Using GPFS as the backing store for the SCP storage nodes, it is quite simple to fail volumes over automatically onto another storage node. In fact, IBM Research has internal prototypes that go even further, avoiding any downtime whatsoever as a result of a failed storage node. But I can't tell you about that ;-).

I hope you've found this helpful. I hope you'll agree that there are some pretty good solutions available even if we cannot offer perfection, yet ;-)

This goes out to all the Operations guys and gals. Have you been tasked with making your IT organization more efficient, more effective... "more with less"? At the same time, your development teams are expected to deliver new applications at warp speed, while you have specific service level agreements to meet governing the stability of your production environments. Speed... stability... seem diametrically opposed? If you haven't heard of DevOps yet--the methodology of bringing development and operations teams together to collaborate, integrate, and deliver more robust applications to the marketplace more efficiently and effectively--it's a cool new way of thinking and doing for all teams involved.

IBM has jumped into the deep end of DevOps with the recent announcement of the SmartCloud Continuous Delivery beta. This solution allows the integration of new and existing tools to automate and enhance the application delivery pipeline end-to-end. This post will hopefully give you some ideas on how you might use DevOps to bring tangible changes to your IT organization.

First off, is your organization using cloud computing effectively today? Ops teams may already be using some form of virtualization to increase efficiency and effectiveness. Aligned with a DevOps methodology, the cloud can automate and reduce routine daily tasks and free up resources to focus on innovation. Take a closer look at how SmartCloud Continuous Delivery, in conjunction with IBM SmartCloud Provisioning, can help mobilize teams to move to DevOps.

Fact or Fiction?

I won't have to provision environments for development teams any more! Fact - Ops can define the system patterns that developers use to self-provision, so they are no longer dependent on the Ops team. There will likely still be times when Ops teams want to provision environments themselves, but it doesn't have to be as often.

I will never be able to monitor all the virtual systems to validate that they meet the security requirements of my company. Fiction - Patterns can be built from the compliant virtual images that Ops maintains and tracks. Development can then self-provision these pre-defined patterns. Ops can update existing patterns and upgrade deployed VMs as required.

I can define network isolation and resource constraints to ensure the integrity of my cloud for my customers. Fact - The automated deployment scripts define the access level of authorized users and groups; these stored artifacts preserve the authorization given to specific users and groups, allowing controlled multi-tenancy in a cloud.

The ability of developers to stand up their own environments is helpful, but the consequence will be tons of stagnant VMs hanging around. Fiction - Build artifacts can be stored in the asset manager, which tracks the state and age of each provisioned VM. Policies ensure that VMs are maintained only as long as appropriate for a particular deployment (for example, a personal deployment vs. a long test-run deployment).

I hope this taste of Fact or Fiction gives you a sense of how DevOps can transform collaboration and effectiveness for both Development and Operations teams. The Enterprise DevOps Blog here will keep you up to date and provide additional information about DevOps. You can also test drive a highly scalable, low-touch cloud with a SmartCloud Provisioning no-charge trial.

There will be a live session held on Tuesday, June 12, 2012, providing an overview of the data protection capabilities that IBM Tivoli Storage Manager for Virtual Environments brings to IBM SmartCloud Provisioning.

Come to learn what we have to offer, and tell us about your data protection strategies in the cloud and the use cases you have and see value in. This will be an opportunity to share valuable feedback with the product teams that will shape future capabilities.

An interesting insight by Dr. Angel Diaz into the Practical Guide to Service Level Agreements (SLAs), published by the Cloud Standards Customer Council (CSCC).

Who is responsible for the management of the services that will operate in a cloud environment? Who is responsible for identifying the elements of the agreement? What type of agreement should be in place? These are all questions that should be asked and understood before moving a service to the cloud.

To read the complete article, go to http://thoughtsoncloud.com/index.php/2012/05/cloud-service-level-agreements-slas-what-you-dont-know-can-hurt-you/ .

Two new white papers are available on the IBM Integrated Service Management Library (ISML) that explain how to use Tivoli Storage Manager to back up different areas within IBM SmartCloud Provisioning.

The first white paper describes how to use the Tivoli Storage Manager Backup-Archive client to back up and restore the boot volume of an IBM SmartCloud Provisioning persistent virtual machine, and how to make periodic backups of a normal volume and select and restore a particular backup.

The second white paper describes how to use the Tivoli Storage Manager Backup-Archive client to back up and restore the following components of the IBM SmartCloud Provisioning infrastructure: the Preboot Execution Environment (PXE) server, the web console configuration, and the HBase data store.

Service Health for IBM SmartCloud Provisioning has officially GA'ed and is now available on the IBM Integrated Service Management Library (ISML).

Service Health provides pre-built integrations between IBM SmartCloud Provisioning and IBM SmartCloud Monitoring, using a custom agent, OS agents, and the ITMfVE agents. A product-provided navigator offers a concise overview of the health of the IBM SmartCloud Provisioning infrastructure, enabling you to quickly identify and react to issues in your environment, such as an unresponsive compute node, high disk usage on storage nodes, or key kernel services not responding, and so minimize their impact. It also provides visibility into the KVM and ESXi hypervisors.

How can I "easily" monitor the performance and availability of the OS and applications of launched instances?

The solution is to integrate IBM SmartCloud Provisioning with IBM Tivoli Monitoring (ITM), so that all the running instances are connected to the ITM server and managed according to performance expectations.

This can be achieved by exploiting the current integration between IBM SmartCloud Provisioning and the Image Construction and Composition Tool (ICCT), available in IBM SmartCloud Provisioning version 1.2, and performing the following steps: