Please join the Tivoli User Community for a live Webcast and opportunity for questions, Thursday, July 19th 2012, 11:00 AM ET. Reserve Your Webcast Seat Now.

Overview: Cloud computing has been driving increased innovation and flexibility, but this shift has also introduced new complexities in the world of IT and process automation. Multiple topics are now emerging on the radar of a cloud manager, all pointing in the direction of easier management of the entire life cycle. The all-new IBM SmartCloud Workload Automation provides you with a perfect entry point to the theme of unattended workloads as a critical topic for making clouds more cost-effective. With the new per-job pricing, the solution is even more attractive and affordable. After establishing best practices in your organization, it’s now time to explore and learn the “next practices” in the historic world of batch and beyond with the new IBM SmartCloud Workload Automation. Learn More.

About the Speaker: Xavier Giannakopoulos, IBM Tivoli Workload Automation – Product Manager. Xavier Giannakopoulos is the product manager for Tivoli Workload Automation, where he has worldwide responsibility, primarily on the distributed side. He has working knowledge of the development process, technical support, HR management and client handling. Click Here to visit his TUC Profile. REGISTER NOW.

The Official Tivoli User Community is the largest online and offline organization of Tivoli professionals in the world – home to over 160 local User Communities and dozens of virtual/global groups from 29 countries – with more than 26,000 members. The TUC community offers Users blogs and forums for discussion and collaboration; access to the latest whitepapers, webinars, presentations and research for Users, by Users; and the latest information on Tivoli products. The Tivoli User Community offers the opportunity to learn and collaborate on the latest topics and issues that matter most. Membership is complimentary. Join NOW!

The next release of SmartCloud Monitoring, which includes new releases of IBM Tivoli Monitoring (ITM) and IBM Tivoli Monitoring for Virtual Environments (ITM for VE), is currently in development, and we would like to invite customers old and new to participate in our Early Adopter Program, our fancy name for a beta program (because we HAVE to have an acronym here at IBM, and how do you make an acronym out of "beta?")

This open program will allow you to download our Beta code and provide feedback and guidance on the new functionality, product improvements, and code quality of IBM Tivoli Monitoring "vNext." As the SmartCloud brand continues to expand, this beta will help long-time customers see that the ITM foundation is strong, and being continually enhanced to help us all adapt to the disruptive influence of "Cloud" on our IT management responsibilities. Both ITM and ITM for VE are still separately available (and are the products where the code enhancements you'll see reside), while the SmartCloud Monitoring bundle makes it convenient for customers to purchase the two products together.

This ITM Community site will enable you to download Beta drivers, see important announcements, interact directly with product developers and planners, and provide the ITM development team your valuable opinions about our planned product enhancements. Even as we develop this release, we're already doing long-range planning for the "N+1" release that will follow it, so long-range enhancement requests are a good topic of discussion as well.

Interim Fix 2 for the ITM VMware VI agent version 7.1.0 is available. This interim fix is cumulative, so customers do not need to install Interim Fix 1 first. For a list of APARs fixed in IF 1, see this list. Interim Fix 2 includes fixes for the problems described by the following APARs:

IV19978 Abstract: EFFECTIVE SERVERS AND TOTAL SERVERS HAVE WRONG VALUES.

IV22056 Abstract: VM AGENT SHOWING NAA.ID AS LARGE NUMBER.

In addition to the APAR fixes, this interim fix includes new attributes that were requested by customers. These attributes provide further insight into the memory demands of running virtual machines and the CPU utilization on the host server. For virtual machines, usage, active, shared and granted memory attributes have been added. For the host, CPU core utilization has been added (vSphere 5.0 or higher is required).

Interim Fix 2 may be downloaded from IBM Support Fix Central. More information may be found here.

There is a wealth of information available that will help make your interactions with IBM Support more efficient.

The Troubleshooting section includes documentation for known problems, how to use IBM Support Assistant, and support tools for IBM Systems. Work with Support covers everything you need to know to log a problem, as well as how to work interactively with a support engineer.

From the Overview, check out the IBM Electronic Support Community blog to read about the latest ways IBM is improving your support experience. Better yet, follow the blog and receive the latest entries automatically.

One of the many business benefits of honing your skills at this conference is the enhanced return on investment in Tivoli & Security products. Whether you learn best by listening, watching or doing, we have it covered with our expert presentations, demos and hands-on labs.

Take this opportunity to attend the only IBM Tivoli & Security Technical conference in Europe this year, but be quick, as places are limited and early booking is highly recommended! Book before July 31st and receive a 10% discount and 2 free certification exams worth $400! Tivoli solutions are at the heart of IBM’s Smarter Planet initiative. In addition to our deep technical sessions we will focus on some actual projects and related technologies. We are excited to demonstrate our best practices based on comprehensive Tivoli implementation projects. Whether your role in managing a dynamic infrastructure is executive leadership, security, operations, storage, production, delivery, facilities or communications service, the most valuable opportunity to gain the necessary service management skills is at the EMEA Tivoli & Security Technical Conference. This year, the event offers:

There is a new white paper available on the IBM Integrated Service Management Library (ISML) that explains how to use Tivoli Storage Manager to back up a VMware virtual machine that was deployed by the Workload Deployer in IBM SmartCloud Provisioning version 2.1.

The white paper explains how to locate and back up the virtual machine in VMware using IBM Tivoli Storage Manager, and how to restore the virtual machine to the Workload Deployer environment.

In this new post I would like to describe how you can script the building of virtual images using the Image Construction and Composition Tool provided by IBM SmartCloud Provisioning.

The upcoming release of IBM SmartCloud Provisioning 2.1 embeds, among other things, a new version of the Image Construction and Composition Tool. The Image Construction and Composition Tool allows you to build virtual images that are self-descriptive, customizable and manageable; in the end it produces Open Virtualization Appliance (OVA) images that can be deployed into a cloud environment.

One of the new features of this tool is the capability of performing image management operations directly through a command-line interface. This capability enables a set of new use cases through a scripting environment.

The command-line interface of the Image Construction and Composition Tool provides a scripting environment based on Jython (the Java-based implementation of Python); in addition to issuing commands specific to the Image Construction and Composition Tool, you can also issue Python commands at the command prompt.

Using this interface, you can manage the Image Construction and Composition Tool remotely, since you can download it to any machine and then point it to the system where the tool is running: it communicates with the server over the HTTPS protocol, so all communications are encrypted. The command-line interface can be installed on both Linux and Windows operating systems and can run in both interactive and batch modes.

Anything that can be managed in the Image Construction and Composition Tool is modelled by a resource object on the command-line interface that exposes a set of methods for performing the related management actions. The following objects are available: software bundle references (for defining software configurations to be deployed on a virtual machine), cloud provider references (for defining the hypervisors used by the Image Construction and Composition Tool to build and capture images), image references (for handling virtual machine images to be used for import, extend, capture and export operations) and user references (for administering the users of the Image Construction and Composition Tool).

Once you have downloaded and configured the command-line interface, you can start a new session in interactive mode by issuing the following command from a shell prompt:

To get a list of all the images, you can use a command like the following:

>>> allImages = icct.images

And so on.
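To illustrate the resource-object pattern a bit further, here is a minimal sketch. The real `icct` object only exists inside the tool's Jython shell, so a tiny stand-in is defined here; its attribute names (`name`, `state`) are assumptions for illustration, not the real API.

```python
# Illustrative sketch only: the real "icct" session object exists only
# inside the tool's Jython shell, so a stand-in is defined here to show
# how resource objects can be queried and filtered with plain Python.

class FakeImage:
    """Stand-in for an image reference (attribute names are assumed)."""
    def __init__(self, name, state):
        self.name = name
        self.state = state

class FakeIcct:
    """Stand-in for the icct session object."""
    def __init__(self):
        self.images = [FakeImage("rhel-base", "COMPLETED"),
                       FakeImage("was-extended", "IN_PROGRESS")]

icct = FakeIcct()

# List all images, as in "allImages = icct.images"
allImages = icct.images

# Because the CLI is real Python/Jython, ordinary list comprehensions work:
# e.g. keep only the images that have finished building.
ready = [img.name for img in allImages if img.state == "COMPLETED"]
print(ready)
```

The point is that once resources are exposed as Python objects, all of Python's own constructs (loops, comprehensions, conditionals) become your batch-automation vocabulary.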

You can also use the Image Construction and Composition Tool command-line interface in batch mode, by creating your own script and then launching it. For example, to run a script called myScript.py you can issue the following command:

A few samples come directly with the Image Construction and Composition Tool. They are located under the following directory:

<icct_cli-install-dir>/samples

They cover some of the basic Image Construction and Composition Tool flows, such as creating a new cloud provider configuration, importing an image, extending an image, and so on.

You can use them as a starting point for creating your own workflows.

That's all for now.

We have just provided a quick introduction to the capabilities of the Image Construction and Composition Tool command-line interface. If you are interested in discovering more about the Image Construction and Composition Tool, its command-line interface and SCP 2.1, you can have a look at what is included in the IBM SmartCloud Provisioning beta code:

If you have ever observed babies playing, you'll notice that at a certain point in their development the idea of property comes into the game: "this is my toy, I'll not let you play with it". Usually parents need to invest some time to make the baby understand the value of sharing: "the toy remains yours, but you can enjoy sharing it with other babies... and if you are kind and polite, the other babies may share their toys with you in turn". Usually this trick works. The next step is that they start adding "special conditions": "you can use my blocks, but only the blue ones" or "you can play with this doll, but I'll not lend you the pink dress". A different story comes when sharing can save you a lot of money: you do not need to buy the same toy your baby saw another baby using if they can share it...

Did you ever try to apply this model to cloud computing? I know it may sound strange at first glance, but there are some similarities... Let's start from the last example, kids sharing the same toys: doesn't it look similar to the idea of sharing the same master image? In a lot of cases I do not need my own master image; I can use the same one another user is using. But the "conditions" apply: "you can use my master image, but I do not want you on my network!" or "you can use my master image, but you cannot use my package scripts!"... Not a lot of difference from "you can play with my doll but I'll not give you the pink dress" or "you can play with my blocks but only the blue ones".

There will be situations in which you do not want to share the master image at all: "this is mine, it's my treasure, I have my own information there and I do not want you to see it"... I'm pretty sure you've seen babies doing that with their favorite teddy bear ;-)

I hope these few examples made you look at object authorizations in a cloud with different eyes... Anyway, the problem is there: a cloud is typically a shared environment, and we do not want everybody to have access to everything. Privacy is important.

Let's see one of the ways to resolve this issue. We could give every individual user the right to determine who can access his own objects. "Who", of course, can be a single user or a group of users. Depending on his role, a user can have access to different objects. The cloud administrator, for example, can decide who can access a specific network and who can see a specific cloud group; the cloud catalog editor can decide who can access which master image, or which package scripts (package scripts are the building blocks for patterns); the image deployer can decide whether somebody else can see the details of his images. In some cases he may also be interested in letting other users access his own volumes. With the same ease, a user can decide to give full access, read-only access or no access at all to each of his own resources/objects.
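The owner-plus-grants model described above can be sketched in a few lines. This is a conceptual illustration of per-object access levels (full, read-only, none), with user and group grants; all names here are hypothetical, not the SmartCloud Provisioning API.

```python
# Conceptual sketch of a per-object access policy: each resource has an
# owner plus explicit grants to users or groups. Names are illustrative.
FULL, READ_ONLY, NONE = "full", "read-only", "none"

class Resource:
    def __init__(self, name, owner):
        self.name = name
        self.owner = owner
        self.grants = {}          # user or group name -> access level

    def grant(self, who, level):
        self.grants[who] = level

    def access_for(self, user, groups=()):
        if user == self.owner:
            return FULL           # owners always keep full control
        if user in self.grants:
            return self.grants[user]
        # otherwise fall back to the most permissive group grant
        levels = [self.grants[g] for g in groups if g in self.grants]
        if FULL in levels:
            return FULL
        if READ_ONLY in levels:
            return READ_ONLY
        return NONE

image = Resource("master-image-rhel6", owner="alice")
image.grant("bob", READ_ONLY)     # "you can look, but not change"
image.grant("deployers", FULL)    # a whole group gets full access

print(image.access_for("bob"))                          # read-only
print(image.access_for("carol", groups=["deployers"]))  # full, via group
print(image.access_for("dave"))                         # none
```

The same three-level scheme applies uniformly to networks, cloud groups, master images, package scripts and volumes, which is what makes the policy "fine-grained".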

Such a fine-grained access policy makes the cloud software flexible enough to fit various adoption models, from a classical private cloud to a more complex environment like the ones a cloud service provider may have.

For enterprises and cloud service providers, authorization and network segregation are critical prerequisites for building and managing a secure cloud environment. For this, SmartCloud Provisioning is the right choice.

You can also rely on a robust auditing mechanism that allows you to track what is happening in the cloud: who logged in and out, user creation/deletion/update, data access attempts (whether successful or unsuccessful), virtual machine instance creation/deletion/update, and far more...
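The essence of such an audit trail is recording who did what, to what, when, and whether it succeeded. Here is a minimal sketch of that idea; it models the concept only, not the product's actual log format or API.

```python
# Minimal sketch of an audit trail: every security-relevant event is
# recorded with actor, action, target, timestamp and outcome.
# This models the concept, not SmartCloud Provisioning's real log format.
import datetime

audit_log = []

def audit(user, action, target=None, success=True):
    audit_log.append({
        "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "user": user,
        "action": action,
        "target": target,
        "success": success,
    })

audit("alice", "login")
audit("bob", "read", target="master-image-rhel6", success=False)  # denied attempt
audit("alice", "vm-create", target="vm-042")

# Failed access attempts are captured just like successful ones,
# so they can be reviewed later.
denied = [e for e in audit_log if not e["success"]]
print(len(audit_log), len(denied))
```

Keeping denied attempts alongside successful ones is what turns a plain activity log into a security audit trail.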

If you are interested in walking through this model, you can have a look at what is included in IBM SmartCloud Provisioning beta code:

We know that cloud computing offers a myriad of benefits like rapid service delivery and lower operating costs. But it can also lead to challenges in data governance, access control, activity monitoring and visibility of dynamic resources—in essence, all aspects of IT security.

The IT organization must have the capabilities both to deliver services more quickly to meet the demands of the business and to provide high levels of security and compliance. In the past, service delivery itself was typically the bottleneck; now, with automated cloud and self-service delivery models, the teams responsible for change management and security can quickly become the bottleneck due to manual processes and siloed tools.

For example, organizations need the ability to patch all of their systems, both physical and virtual, whether distributed or part of a cloud. Operations teams need better insight into and control of deployed virtual systems, including OS patch levels, installed middleware applications and related security configurations. And there can be too many security exposures with offline and suspended VMs that haven’t been patched in weeks or months.

A holistic approach is needed that addresses rapid provisioning of services and automation of key security and compliance requirements. Together these capabilities can keep you in control of rapidly changing cloud environments. First let’s look at the capabilities needed in a cloud provisioning solution.

Second, a unified endpoint management approach is required to provide visibility and control of your systems, regardless of context, location or connectivity, and needs to deliver:

· Heterogeneous platform support with seamless patch management for multiple operating systems, including Microsoft Windows, Unix, Linux and Mac OS, as well as hypervisor platforms

· Automatic assessment and “single click” remediation, which shortens time to compliance by automatically identifying necessary patches and enabling users to target and remediate endpoints quickly

· Enterprise-class scalability and security to provide proven scalability, including fine-grained authorization and access control capabilities