I went to an itSMF UK regional
meeting last week. I haven’t managed to get to our local meeting for a while
and I found I was being introduced to new members as someone who has been
around ‘since the beginning of ITIL’.

Now that kind of thing, apart from making
me feel old (which is, admittedly, a fair enough feeling at my age) also made
me look back and think on where we (the ITIL community) have come from and
where we are now.

The first thing that occurs to me in
thinking back to the early days of ITIL is that we now find ourselves in a
place that none of us imagined we would. Don’t get me wrong, the original
inventors and drivers[1] of the
ITIL idea were not short on confidence or vision, nor in seeing the benefits
that documenting this aspect of best practice would bring. But I suspect that
world domination of this industry sector by the word ‘ITIL’ was beyond even
their best possible visions.

The key to the expansion of ITIL was that
it quickly became about more than just the books. The ITIL advertising leaflets
produced in the mid 90s coined the term ‘ITIL philosophy’ to represent this
expanded scope of ITIL. I suppose I should confess that I invented that phrase
and also the diagram that went with it – a version from about 1997 is shown
here. The accompanying words suggested that, even back then, less than 1% of
‘ITIL-related sales’ were about the actual ITIL books, and the rest were
evolved services.

The fact that I couldn’t even hazard a
guess at what that percentage might be today indicates a few pretty
self-evident truths:

When I was writing those things in 1996-1998, I felt I could
pretty much ‘take in’ what was going on related to ITIL, and even know
most of the people developing and delivering new ideas. Nowadays no-one
can honestly claim to be able to do that.

What is ‘ITIL-related’ has become a much more debatable
concept. Whatever its faults might have been (and there were many) ITIL
was just about alone in its market space. The initiatives kicked-off by
ITIL have spawned fellow travellers, such as COBIT, ISO20000 and others.
The fact that I could easily start a long running – and probably vitriolic
– debate[2] on
the social media pages by asserting which are and which are not ITIL
derived, ITIL alternatives etc indicates that this is now a loosely
bounded region. That makes any assessment of its scale, scope and success
very hard.

Some other things have changed too.

Nowadays the maturity of the ITIL ideas
means most players are focused on market share rather than growing the sector
itself. That means more competition than there used to be. Nonetheless there
are still lots of examples of that collaboration easily found. Probably
the best example is the ‘Back2ITSM’ facebook group – a place where free advice,
constructive debate and openly shared thoughts are still the norm[3].

The itSMF was born in 1991, and played –
probably – the major coordinating role in promoting the idea, importance and
approaches of service management. Like ITIL, itSMF predates the term ‘service management’,
having started as the ITIMF. Even here we have seen a lot more competition
during the last third of its lifetime: both competition from other community
organisations and also considerable internal competition. I hope itSMF will
evolve from this to carry on delivering benefit to its members. I am a bit too
frightened to work out what percentage of my time has been given to itSMF over
the last 17 years – or at least frightened what my employers over that period
might think. But that commitment does make me wish hard for its future health.

So, looking back should make us appreciate where we are now – nostalgia can be deceptive, for usually the past wasn’t better; progress is exactly that – going forward and getting more. And wherever ITIL is now, IT service management has come a wondrous way in the last
20 years. Global technology changes have made a difference to that journey;
we’ve seen personal computing and the internet make all but unbelievable levels
of change. We may well see Cloud do the same; personally I think cloud might do
that by freeing us from some of the technical baggage and letting us see and
address real service management issues, without the obfuscation of technology
issues or the opportunity to hide behind them any more.

We’ve seen a move from books being the
go-to source of wisdom when ITIL started to an amazing range of information
sources. Nowadays your typical service management practitioner will expect their influences
to come via social media, electronically delivered white papers and the like.
Interestingly, in many cases, they would also expect them to come for free, and
that throws a real challenge on the thought leadership business. If ITIL 4 ever
happens I think it will be a radically different entity from versions 1-3.

Where I want to see ITSM going is towards
SM. IT is now so pervasive that it is everywhere, which to me means that ITSM
cannot be a subsection of overall SM anymore because it logically applies to
everything, since all services now depend on IT. Nevertheless, IT has treated
SM well, and – after some effort – has taken it seriously. I hope those lessons
will work their way into broader adoption and we will see an improved – and
critically an integrated – approach to service management across enterprises
because of that. I am driven to optimism in this (not my natural state you
understand so it is noteworthy) by the fact that, alongside this blog, I am
involved, in this same month, in a webinar and an article for IBM’s SMIA
series on the idea that IT is now spreading its ideas – and delivering its
technology and specifically its evolved software solutions – to the broader
enterprise needs.

I wonder what we will be saying in another
20 years looking back – maybe ITIL will survive another 20 years, maybe not,
but I am certain service management will progress and improve.

[1] And the top two names I would put here are Pete Skinner and John
Stewart – perhaps our least sung heroes, especially the late Mr Skinner – but
pivotal all the same.

[2] I don’t plan to, and hope no-one else is tempted – there are far
more constructive things for intelligent service management practitioners to
progress knowledge about.

[3] And if you are interested (sad?) enough to be reading this then you
should be part of that group if you aren’t already.

For those new to the blog, IBM SmartCloud Control Desk was one of the new announcements made at Pulse. It is a service catalog/service desk based on IT Infrastructure Library™ (ITIL™) V3 and ideal for streamlining incident, problem, change, configuration, release, and IT asset management.

This service desk offering will provide customers with a process control center for managing change and configuration, assets, incidents/problems, service requests, software licenses and more.

The announcement letter (212-051) was published on March 13 and we now have a very cool demo that showcases the solution.

There have been a lot of good discussions
on Back2ITSM recently. I find the site a wonderful reminder of the two
universal constant truths: ‘everything changes’ and ‘there is nothing new under
the sun’. They might seem contradictions at first, yet the older I get the more
both seem true.

Firstly, if you aren’t looking at the
Back2ITSM group on facebook then you are missing out - go sign up, now! Let me
explain what it is and how it is brand new and full of ITSM tradition at the
same time.

Secondly, it is about people talking with
each other. That’s the bit that is the same as it’s always been. The
willingness to share ideas, help others – even those in competing organisations
– is just exactly like many itSMF regional meetings I have been to, in UK,
Canada and New Zealand; except that now we are in all three at the same time.

Of course, social media isn’t new, and
facebook is not the newest kid in town. But what is 21st century
about this kind of group is the immediacy of comment and dialogue and the wide
spectrum of simultaneous participants it allows. Since it has active members
from all across the world, there is constant input and comment.

OK, so we all know that the technology
for this has been around a while. After all it is ‘just’ about real time input
to a forum – and we now have about 20 or 30 people across the world presenting
their opinions to an audience of 500+ (lurking is positively encouraged). For
me what is important is precisely that I am not aware of the clever technology
or feel all the time that I am using a novel means of communicating or even
just how damned clever the whole thing is. With this group I have reached stage
three in my own ‘using technology’ scale: comfort and taking for granted.

Stage 1 is when
you are using some new way of doing things just because you can. This isn’t
just about IT of course, many of us may recall how such things have affected
our choice of travel (my example is choosing an airline because they had A380s on the route – even if a bit dearer, I had never been on one of them before …).

Stage 2 is when
the means are no longer overwhelming the ends – you’re using it now because it is
logical to do so, and it is delivering value. But, you are still very aware of
how cool it is. And you probably keep telling other people how cool it is too.

Stage 3 is when
your focus is totally on what you are doing. I can now just read what is written and comment
if I have something to say. You know it’s a normal conversation because it goes
off at tangents, people get flippant, say daft things, agree, argue, make
subtle (and sometimes not so subtle) digs at each other and launch jokes that no-one
else notices. In short, it’s normal human conversation, without thinking about how you are achieving it, where all the people are, or what time it is there.

And to me this is a good motif for
successful technology. The implementation part isn’t properly over just because the thing is there and running. Real success is when people don’t notice it any more,
but just get on with using it, unconsciously – as part of their everyday lives.

It’s one more example of how success is
about being invisible. First time I flew in an A380 I was excited about it –
last time I was watching a movie before we reached the runway. That’s success.
(Ok, so there was a little re-attention on the technology after the Qantas 380
had an engine explode but I am back to ignoring it again now.)

So the important lesson and message that I
see is how we need a customer perspective on the introduction of new
technology. And maybe what you actually want is people to stop telling you how
impressed they are, because then they are getting on with using it, which was,
after all, the real point of the exercise, wasn’t it?

Over 51 million tourists travel to Orlando, Florida every year, but only the cool ones go to attend IBM Edge and IBM Innovate.

As I type this, so many of our customers, partners and my colleagues are in the "brutal" 88°F* weather learning more about storage and software & system innovation.

IBM Storage

Since much of my focus is around product announcements, I wanted to point folks to the IBM Tivoli Storage Productivity Center V5.1 announcement that happened yesterday (Announcement Letter 212-189).

For content coming from the conference, a number of the marketing team are on the ground at Edge and tweeting. Be sure to follow Maria, Martha and Branavan (and of course, @ibmstorage) as well as the hashtag #ibmedge.

IBM Innovate

The Rational team have a number of exciting new announcements around Jazz and they will be talking quite a bit about mobile, cloud, industry solutions and a few other things including DevOps.

For us service management folks, DevOps translates into tangible benefits we can bring back to the business, like fewer errors and faster time to resolving errors if they do occur.

Back at Pulse 2012, we announced, among other things, the Beta for IBM SmartCloud Continuous Delivery (see the blog post and press release).

Along with IBM SmartCloud Control Desk and IBM SmartCloud Provisioning Manager (among others), it's about developers and testers having access to the same tools, data and information that operations uses and leveraging them to fix problems before they occur. And if problems do occur, the linkages with tools like Rational Application Developer and Rational Performance Tester allow the developers and testers to quickly resolve these issues as everyone and everything is connected.

As stated before, fewer errors and faster time to resolving errors if they do occur. This translates into using time to be productive and being innovative. Innovation is what provides value back to the business.

When IBM first kicked off the Dynamic Infrastructure announcement at Pulse 2009 conference, we heard some rumblings on whether Dynamic Infrastructure was just another executive buzzword or if there was real meat behind "the concept."

Doug McClure summarized the feeling well in his blog: “While this is great for executive level folks, I think we needed to drive this message into consumable and actionable things that lower level technical attendees could take back to their companies. They may be the ones who need to execute and show how previous or planned investments could help their company become smarter and more dynamic.”

After IBM’s announcement yesterday on new Dynamic Infrastructure offerings, critics will be hard-pressed to argue that Dynamic Infrastructure isn't actionable. Not only did IBM announce new products and services in the areas of Information Infrastructure, Virtualization, Service Management, and Energy Efficiency, but they also demonstrated how these solutions are helping three of our clients--the Taiwan High Speed Rail Corporation, Tricon Geophysics and the United States Bowling Congress--build new, more dynamic infrastructures to help reduce costs, improve service and manage risk.

A key piece of the announcement is the IBM Service Management Center for Cloud Computing, which now includes new IBM Tivoli Identity and Access Assurance, IBM Tivoli Data and Application Security, and IBM Tivoli Security Management for z/OS, for Cloud environments. I don’t know about you, but all that’s more meat than this vegetarian can handle. :)

To continue driving home the Dynamic Infrastructure success, IBM is sponsoring a variety of events for the public to learn more. Register for a free, local Pulse Comes to You event to see how Service Management is a key component for enabling a Dynamic Infrastructure for a Smarter Planet.

IBMers are hyper-aware of our clients and the issues that they address when they're on the job. So much so that I've said in past blogs that the majority of conversations I have with my colleagues start with, "How does [blank] benefit our customers?"

To that end, everything we do revolves around questions like - how can we give our customers what they need to get their job done and stay innovative in their industry?

Questions like that get answered at conferences like Pulse 2012. It's where we continue to deliver value to our customers.

And, as mentioned in yesterday's blog about the general session keynotes from Danny Sabbah, it's not technology just for technology's sake, but providing real business value.

This particular blog is going to focus on the specific announcements we made around cloud, starting with SmartCloud Foundation.

IBM SmartCloud Virtual Storage Center

Storage is "the next big line item" for IT, which is why the idea of improving storage efficiency has always been a hot topic.

Storage virtualization brings the promise of not only improving efficiency, but also providing levels of data mobility that are crucial to delivering modern services to customers.

The ideal solution for storage virtualization should be able to do both the virtualization/provisioning as well as the actual management.

And IBM SmartCloud Virtual Storage Center does both, and it's one of the most impressive things being shown on the Expo Center floor here at Pulse 2012. Not to worry though – the team has information on the website and talks about this, as well as all things storage, on our @ibmstorage Twitter account and the Storage blog.

IBM SmartCloud Monitoring and IBM SmartCloud Provisioning

If you were following our SmartCloud announcements last year, you saw these two solutions make a big splash in the market and we're continuing to add value to both of these solutions.

Today. As in right this second, you can go to the ISM Library and download the "Service Health for IBM SmartCloud Provisioning" that will integrate provisioning and monitoring so that you can easily monitor what you've provisioned and identify and react to issues in your environment.

To help further simplify how you provision, we've released a statement of direction for SmartCloud Provisioning that may provide enhancements to image lifecycle management.

New features may provide the ability to control image sprawl, an Image Construction and Composition Tool, as well as highly automated self-service deployment of virtual machines.

All of which translates into spending less time wrestling your virtualization and cloud environments to the ground and more time working on innovation.

IBM Endpoint Manager for Mobile Devices (New)

Yesterday's general session keynote emphasized mobile.

Between "Bring Your Own Device" (BYOD) and organizations embracing employees using their own mobile devices, mobile is the new platform of choice. (which means it's probably time to ditch my IBM 5100)

As you know, our IBM Endpoint Manager solution is built on BigFix technology and it's been invaluable to our overall service management strategy for Visibility. Control. Automation.(TM) (VCA)

On January 31, we announced an update to one of the key pieces of this portfolio; IBM Security Identity and Access Assurance 1.2.

Security was one of the three areas of focus with regard to increasing complexity. The new features deliver improved identity and access governance with open authentication standards, role modeling and lifecycle management, and a virtual appliance delivery method – all of which simplify deployment and provide faster time to value for security while reducing risk.

IBM SmartCloud Continuous Delivery

Continuous Delivery is a topic that we have discussed quite a bit on this blog (it has also been known as "collaborative development and operations" or "DevOps").

The challenge of getting services to users is balanced by ensuring that speed does not come at the expense of governance and increased risk.

The strategy to bring development and operations teams together is often stalled when the tools each team is using don't work well together.

Per the announcement letter, "IBM plans to provide an extensible architecture for delivering and managing the entire application lifecycle, creating an environment that brings development and operation teams together with collaboration, automation, and analysis."

IBM SmartCloud Control Desk

With IBM SmartCloud Control Desk, IBM plans to deliver a solution for service catalog, service desk, and IT Infrastructure Library™ (ITIL™) V3 based processes for incident, problem, change, configuration, release, and IT asset management.

This service desk offering will provide customers with a process control center for managing change and configuration, assets, incidents/problems, service requests, software licenses and more.

Software As A Service (SaaS) - IBM SmartCloud Solutions

The innovations happening with Smarter Planet are quite simply staggering. One of the most interesting, and most visible, areas is in the Intelligent City solutions.

You've seen these solutions in market and in any number of places in the past, but now Intelligent Operations, Intelligent Transportation and Intelligent Water also have SaaS offerings that allow customers to quickly get started, since there is no hardware to procure or installation services to contract.

Cloud computing and VCA mean less time (and resources and money) working on your infrastructure issues and more time being innovative.

To find out more about any of these solutions, contact your IBM sales rep or one of our Business Partners using the Business Partner Locator website.

* some of the new announcements are statements of direction and they are noted as such here and in the announcement letter. (and see the announcement letter and the bottom of this blog as the standard disclaimers apply).

Statement of direction disclaimer

IBM's statements regarding its plans, directions, and intent are subject to change or withdrawal without notice at IBM's sole discretion. Information regarding potential future products is intended to outline our general product direction and it should not be relied on in making a purchasing decision. The information mentioned regarding potential future products is not a commitment, promise, or legal obligation to deliver any material, code or functionality. Information about potential future products may not be incorporated into any contract. The development, release, and timing of any future features or functionality described for our products remains at our sole discretion.

Today we trust computers – literally and
unconsciously with our very lives. I was reflecting on this level of trust when
I got £50 of cash out from my local ATM and declined the offer of a receipt.
Seems I now have total faith that the computer systems will ‘get it right’. I’ve
come a long way from keeping all my own cheque books to cross check against
later bank statements.

Now, combining that faith with a little
healthy British cynicism, and triggered by watching the Olympics tennis finals on
TV, a mischievous but irresistible thought came to my mind.

It used to be that when a ball hit the
ground near the line we relied on the human eye to say whether it was ‘in’ or
‘out’. That caused disagreements and discussion – and, in tennis, often sulking, swearing and the full range of petulant behaviour.

Nowadays that is all replaced by
referencing the technology. When there is doubt – or one of the players
questions a call - then we simply ask the computers. What we get then is a neat
little picture representing the appropriate lines on the court and a blob
showing where the ball had hit. So, problem solved: disappointment still for
one player but, so it seems, total acceptance that the computer is right. After
all it is an expensive system working away inside a very expensive box – must
be right, mustn’t it? Or to put it another way, ‘computer says in’ – who would
argue?

But what occurred to me is this. All we can
actually see is some boxes around the court, and a stylised display with a blob
on it. That could be delivered by one person with a tablet showing the court
lines and them touching the screen where they think it landed. Very cheap and
still solves all the arguments because – naturally – everyone trusts technology, don’t they?

Now – of course, and before anyone calls
their lawyers – I am not suggesting for the merest moment that there is the
slightest possibility of such a thing happening. But it’s fun to think it might
be possible. There is little public awareness of what accuracy the system – and
here I presume it does really exist – works to. If you dig around on the web
you can find out (the answer by the way for tennis is 3.6mm). You also find out
there is some very minor grumbling and questioning going on. But that seems to stay at geek level – in everyday use the audience stands instantly convinced.

So, thinking it through there are a couple
of interesting consequences to real IT life:

Once you realise that trust depends on quality of presentation
at least as much as on accuracy, should you focus more on that? Certainly
you have to take presentation seriously, because the corollary is that if you
deliver perfection but don’t make it look good, then no-one will believe
it even though you are right.

Whose responsibility is it to check – and is it even possible? I
suspect this discussion will take us into the territory of ‘governance’. But
even before we get there it implies that User Acceptance Testing needs to
do more than look at things. Of course yours does, doesn’t it?

I guess my big issue is to wonder how
comfortable we are – as the deliverers of the technological solutions for our
customers – and especially our users – to have such blind faith. Of course,
people being the irrational things they undoubtedly are, that blind faith in
the detail is often accompanied by a cynical disregard for overall competence –
compare faith in ATMs and on-line bank account figures with the apparent level of
trust in the banking industry as a whole.

As a little codicil to the story, I registered
with a new doctor yesterday – the nurse asked me questions, took blood pressure
etc and loaded all the data she collected into a computer. The system was
clearly ancient, with a display synthesising what you typically got on a DOS 3.0 system. First thought: ‘OMG why are they using such old software, that can’t be good?’ Second thought: ‘They’ve obviously been using it for years, so they
really understand it, have ironed out all the bugs and it does what they need. It
ain’t broke so they aren’t fixing it’. But my instinctive reaction of suspicion
of it for not being pretty was there and I had to consciously correct myself.

Would you as a service provider prefer more
questioning of what you package up and present to your customers and users, or
are you happy to have that faith? My own view is that the more blind faith they
have in you, the more the retribution will hurt if things do go wrong. Or
perhaps that’s just me being cynical again?

Earlier today, IBM shared its point of view on the future of the data center with Smarter Computing V3 (press release). A central focus is IBM Enterprise Systems (zEnterprise EC12 and Power) and their ability to deliver exceptional value through a private Cloud. We've seen how organizations have been able to leverage IBM Enterprise Systems to achieve significant benefits. Take the City of Honolulu, for example, which was able to lower its licensing costs by 68% while increasing tax revenue by $1.4M USD in just three months.

By adding Tivoli software to their current IT environment, organizations can advance their enterprise-class Cloud environment while protecting their existing IT investment. How? IBM SmartCloud Foundation software is deeply rooted in openness - an open standards approach and common management tools that are platform agnostic. Essentially, you pick the platform(s) that best meet your business goals and we deliver a set of interoperable Cloud management tools across your heterogeneous environment. Of course, there are intrinsic benefits to building a Cloud management stack on top of IBM Enterprise Systems given the tight integration between hardware and software. OMEGAMON, for example, leverages a deep integration with zEnterprise systems to deliver advanced monitoring that reduces typical time to resolution from 90 minutes to 2 minutes.

Whether you're starting to consider virtualizing your IT environment or are deep into your Cloud journey, we have open Cloud management tools that help you expand your Cloud footprint without fear of vendor "lock-in". Learn more about the latest announcement and our Cloud solutions by visiting this site and attending the System z webcast on October 17.

Firstly it’s reassuring because anything that works towards the realisation that development and operations are not really separated by any kind of wall has to be a good thing. Of course there are different areas of focus at different times in the life of a service but they all should have the same aim – delivering what is needed in the best possible way. We already all knew that; it is so obviously sensible that who would vote against it? The equally obvious fact that we then don’t do it is one for the psychologists and later blogs, but does lead me into my other reaction:-

The horror that we should be 50+ years into IT services before this seems important enough for people to give it a trendy name. How on earth have we survived this long without a “collaborative and productive relationship” between the people who build something and the people who operate it? And bear in mind both those groups are doing it for the same customer (in theory anyway).

To be fair to IT people though, perhaps this is an obligatory engineering practice we have picked up. Who remembers the days when getting your car repaired was unrelated to buying it? You bought it in the clean and shiny showroom at the front of the dealer, took it to the oily shed around the back if it broke. One of the things that has seen a step-change in the car industry – and is also changing ours and most others – is the realisation that we are now all delivering services and not products. So we are finally realising that long term usability and value is what defines success, not a shiny new – but fragile – toy. In fact, thinking of toys we all recall the gap between expectation and delivery of our childhood toys – the fancy and expensively engineered product that broke by Christmas evening compared to the cheap and solid – be it doll or push along car – that lasted until we outgrew it.

The car industry saw that happen – and we now have companies leading their adverts with a promise of lifetime car driving with their latest vehicles – with the mould really having been broken by Asian manufacturers offering 5 year unlimited mileage warranties. That was about selling a self-controlled transport service instead of a car – and really that is what most of us want. Amazing strides on that front are, of course, being taken by companies like Zipcar, who have thought simply enough to see there is no absolute link between that service (self-controlled transport) and car ownership. (Some of us want other things from a car of course – but that just leads us into the key first step of any successful service: know what your customer(s) want.)

Why I get so interested in all this is that it’s basically what I’ve been saying for the last 20 years – my big advantage is that I came into IT from a services environment (I worked in a part of our organisation called ‘services group’) – and I never really understood why IT needed such a large and artificial wall between build and do. ITIL was (in large part) set up to try and break down the walls – initially an attempt to set up serious best practices and methodologies within operations to match what was already alive and well in development (hence the original name of the project – GITIMM, to mirror SSADM).

So … what am I saying? Please take devops seriously if that is what is needed to get better services. The complexity we need to address now means we have to stop maintaining any practices that prevent good ongoing service design and delivery. If giving it a name and a structure helps then let’s go there.

One of the things I am most proud of in the books I have contributed to is that we made up a fancy name for something good people already did (in our case Early Life Support) – the intention was to give it profile and then people would add it to job roles and actually start to plan for it and then, finally, do it better.

Of course that brings with it the chance of looking like the emperor in his new clothes once you examine the detail and originality too carefully. But that’s good too – clever and original usually = doesn’t work too well at first. Solid old common sense (eventually) seems to me to offer a much firmer foundation to build on.

We need good foundations because the situation is actually a lot more complicated than we pretend – multiple customers, other stakeholders, users, operations as users – enough for a dozen more blogs, a handful of articles and a book. So … I’d better get on writing – and maybe so should you?

[1] Seems so to me anyway – the Delphic oracle was widely believed, responsibility free and most of those who used it didn’t understand where the knowledge came from.

The following article was written by Cameron Allen, Pierre Coyne and Beth Sarnie and is the second in our OSLC series.

In non-acronym speak, what I'm saying is that the future of service management has arrived in the form of Open Services for Lifecycle Collaboration.

Or, "OSLC."

But, what is OSLC and what does it have to do with you?

If you are a user of service management tools of any kind, or rely on information from tools to do your job, then you probably know that finding the right information is half the battle, and getting realtime access to that information when it is not under your direct control can feel next to impossible.

OSLC means you can now leverage the simplicity and ease of web links to both find and share information across your management tools (be they IBM, or any vendor tools).

Just as web pages can be linked on the Internet, data can be linked together from one application to another – creating an application ecosystem where applications don't care what vendor they're from. They look up who has the data in a directory, and jump right to it.

OSLC is not something new, and Tivoli is not the first to adopt it for integration. If you're an IBM Rational user, you may already be a believer. IBM Rational, its users, and an extensive ecosystem of partners have been using OSLC to successfully interconnect the application lifecycle for years.

In fact, Rational Jazz is the realization of OSLC community specifications and shared services in an open platform that anyone can use to interconnect the application lifecycle. Rational just delivered their 4th incarnation of the integrated product offering called Collaborative Lifecycle Management based on Jazz.

Tivoli is now leveraging these same principles to help break down silos of information across the end-to-end service lifecycle. That means expanding the notions behind Jazz from service design and development to now include service delivery and management. We call this Jazz for Service Management.

Take, for example, problem management. In order to diagnose and resolve a given trouble ticket, the problem information must be gathered and aggregated from multiple sources. We may need information pertaining to the application topology, the health of a system within that topology, outages or events that may be affecting the application, the CPU utilization, or the versions and configurations of the hardware and software that this application depends upon. I could go on...

The problem is that all of this information lives in different places. You can either call around to the various owners of that information, or you pay a business partner to learn the API of the tool in order to get to the data, or you can have a highly skilled, in-house resource write the integration. These options require extensive expertise in vendor-specific APIs and lots of maintenance to keep them current.

OSLC utilizes community defined specifications for sharing and linking data applied to specific service management scenarios so that in a critical outage scenario, all relevant information relating to that outage can be accessed in real time from any number of sources, displayed in the context of that problem, in a single integrated view, with related actions that can be taken.
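The linked-data idea behind this can be sketched in a few lines. The snippet below is purely illustrative – the tool names, resource shapes and URIs are invented, and in-memory dictionaries stand in for the HTTP resources that real OSLC-enabled tools would serve – but it shows the shape of the technique: starting from one outage record, follow links across tool boundaries to assemble a single integrated view.

```python
# Illustrative sketch only: invented URIs and resource shapes stand in for
# the linked resources that real OSLC-enabled tools would serve over HTTP.

# In-memory stand-ins for resources held by three different tools.
RESOURCES = {
    "https://events.example/outage/42": {
        "type": "Event", "summary": "Web tier outage",
        "affects": "https://topology.example/app/webstore",  # link to another tool
    },
    "https://topology.example/app/webstore": {
        "type": "Application", "name": "webstore",
        "runsOn": "https://cmdb.example/server/srv-7",  # link to the CMDB
    },
    "https://cmdb.example/server/srv-7": {
        "type": "Server", "cpuUtilization": 97, "os": "RHEL 6.2",
    },
}

def dereference(uri):
    """Stand-in for an HTTP GET of a linked resource."""
    return RESOURCES[uri]

def problem_context(event_uri):
    """Walk the links from an outage event to build one integrated view."""
    event = dereference(event_uri)
    app = dereference(event["affects"])
    server = dereference(app["runsOn"])
    return {
        "outage": event["summary"],
        "application": app["name"],
        "server_cpu": server["cpuUtilization"],
        "server_os": server["os"],
    }

view = problem_context("https://events.example/outage/42")
print(view)
```

The point of the sketch is that the aggregating code never calls a vendor-specific API – it only follows links, so any tool that serves linked resources can participate.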

The difference is simplicity. You might be able to do this now with a lot of experts and time, but OSLC delivers simplicity.

And, most importantly, because OSLC uses community specifications for service management scenarios, integrations can be built once and applied across multiple 'related' OSLC-enabled tools. "Write-once, Apply-many."

For more information, listen to this podcast on the Tivoli User Community. This podcast provides a deeper insight into the next generation of service management built using linked data.

As names go, neither IBM Tivoli Unified Process nor its acronym ITUP is among my top 10. They just don't fire the imagination or clearly describe its purpose like, for instance, Conan the Barbarian or Floyd the Barber. Now those are names that say something.

Yet more than 17,000 people are registered users of ITUP, a Web-based service management tool, which is now in its eighth release. And to be fair, it would be hard to come up with a snappy name that captures what ITUP can do, which can't be described in a few words.

For example, ITUP is often touted for its ability to make ITIL actionable. Say what?

Okay, let's break it down. ITIL – the Information Technology Infrastructure Library – is a few volumes' worth of best practices on how to manage IT infrastructures, based on hard-won, real-world experience. Many of us in Tivoli have gone through a few days' training to become ITIL-certified, which gives you a general understanding, a nice certificate and a healthy respect for it.

But the challenge with ITIL has always been: how do you take that general wisdom and apply it to real-world situations at a particular organization? That's where ITUP comes in. It takes ITIL from concept to reality by mapping those best practices to the people (by their specific roles), processes, information and technology (right down to the names of specific IBM solutions) that customers use, or can use, in their organizations. It serves as a roadmap to help customers understand how to actually implement ITIL best practices and a service management approach, and see the real value they can gain. It even shows how ITIL best practices can fit in with other models, such as COBIT and eTOM.

ITUP is based on the collective experience of thousands of IBM engagements, is continually updated to keep current with the latest version of ITIL, and it's free. There's also a product version of ITUP, called ITUP Composer, which takes customers beyond the understanding phase into the actual implementation phase.

I think an ITUP demo is worth 1,000 words, especially mine, and fortunately, we've got a good one. Check it out, and then maybe we can start a naming contest for the tool. Maybe ITUP the Implementer? ITUP the Eighth and Counting? ITUP the Actionable-izer? This name stuff is harder than I thought.

That’s a paraphrase of many quotes – but
whichever famous quote peddler you choose, it is surely a mantra of sorts for
successful service management. To me it
neatly addresses two key points:

It is no good meeting all the metrics that you set for yourself
if that only makes your performance look good to you – it’s the customers’
opinion that matters because they are the ones providing the money to make
it happen – and they may well stop doing that if they aren’t impressed

What people perceive is based upon their situation and
knowledge as well as your facts.

I had some first-hand instruction on this
recently that helped my understanding. Both were a little funny at the time but
maybe with some serious messages.

Firstly two different perceptions of what
must have looked very similar situations to a detached observer – driving last
year down a fast dual-carriageway[1] road.
Both times I was on my way to my father.

First time an ordinary sunny day. I am driving at ‘about’ the
speed limit of 70 miles per hour – and a car comes hurtling up behind me
and sits a few metres behind me with the driver clearly impatient that I
am holding him up. I ventured an opinion as to his personality –
considering him less than sensible, some pushy-salesman type, and
certainly not deserving of my moving quickly out of his way

Two months later I am driving down the same road – only this
time I have been summoned to my father’s hospital bedside by medical staff
with the line ‘I think you should get here as soon as you can’. Now I am
doing a lot more than 70mph, and find myself slowing down to 75 and
hanging on other cars’ back bumpers amazed at why people can’t simply get
out of the way – surely they can see I have to go quicker than that.

So – good guy or bad guy? Depends on what
you know, and that depends on what you are and what has happened somewhere
else.

The other one I feel the need to share all hinges around those daily gifts we get from our dogs. Each day I take our dog
for a walk in the field behind the house. The field is just the other side of
the fence and hedge around the back garden, but to get there you have to go out
the front, down the road through the alley and back – about 300 metres or so.
Now dogs, being dogs, use the daily walk for relieving themselves and people,
being only people, are left to pick it up in plastic bags and carry it. But
since our walk takes us back down the other side of that garden fence, rather
than carry the little bags round the field, I toss them over the fence and into
our garden, to pick up and dispose of when I get back. So, I am doing this when I realise I am being watched by another man out walking his dog. Thinking about it afterwards, all he saw was someone flinging doggy doo over a fence into someone's garden. He did not speak, but did manage a look that clearly had me well below pond-scum in any kind of social acceptability league table.

OK, so those are some examples of skewed judgement based on incomplete knowledge – we all have lots of them – and please feel free to send in any good ones that have happened to you.

Very few of these matter in everyday life –
we shrug and move on and usually never see the misunderstanding or
misunderstood person again. But when it matters we need to establish
communication to get some idea of the events that drive perceptions of those
who we will interact with long term. This is why we know things about those we
live with and care about – their favourite colours, the foods they like and
dislike, which football teams they support and lots more. That is worth doing
because these people matter to us, and because this makes both their life and
ours more pleasant.

So apply this to work: how much more pleasant – and easier – will your life be if your customers are happy with you, if they understand what you are doing and you understand what they care about?
That simple idea is at the core of a lot of my work these days – in the
simulation games and the presentation at events. It certainly underpins the
talks I am slated to do at IBM’s Pulse and itSMF Norway in March.

If I go back to the first set of two
bullets I wrote at the start of this piece, they are trying to say that you
need to know how your customers – and maybe other stakeholders – are feeling today. This will drive how you address
things. So customer perceptions influence prioritisation – standard best
practice stuff. What I was trying to point out in my driving example was that
those perceptions and attitudes are anything but fixed. Just because you know
what mattered yesterday, doesn’t mean you know what will matter today or
tomorrow. There are clues and signs you can look for – find out what things
affect your customers' attitude and monitor those yourself. Again, that is something we can do fine at home – we are aware of some of the influences that change the attitudes and perceptions of our loved ones – be that exams the next
day, football on the TV tonight, or a fight with a friend.

Maybe what we need is more formalised
gossip at work – because it is often the conversations that don't seem to be
about work that tell us most about how our customers will react – and more
importantly how they want us to react. One thing the 21st century
has brought us – big time – is new ways to gossip, or should that be freely and
rapidly exchange more information than we ever dreamed was possible. So, maybe
this is just one more business benefit of social media, one that delivers its
success by not being so obvious?

Actually, I don't care how you gather more understanding of your customers' concerns and perception influencers – use every means you can. You could do worse than simply going to visit them, talking and listening. Set yourself a target, perhaps: name one thing that would change your customer's priorities, and then ask them if you are right.


In fact, if you were at Pulse 2012...you heard how IBM Watson will be used to help doctors diagnose medical conditions and improve patient care at WellPoint.

For those of you, like myself, that don’t have a Watson-like recollection, here’s a quick flashback detailing a millisecond in Watson's brain on a sample patient:

Watson is given specific information on a patient’s symptoms, and makes a preliminary diagnosis of the flu as the most likely illness.

Based on the patient's name, Watson looks up records of the patient's history for the past few years, providing new insights that point to a more likely cause: for example, a urinary tract infection.

Based on the patient's family connections, Watson is able to use the family history to derive that the most likely cause is now diabetes.

And finally, Watson is able to access a patient’s latest tests to derive a final diagnosis.

If you're in the business of IT, this may sound a lot like incident management. And as any level 1 support person can attest, diagnosing the root cause of an incident is much like diagnosing a patient's condition. You need information from multiple sources (e.g. service desk, license, CMDB, monitoring, and asset management systems), but more importantly, it has to be in context, up to date, and delivered in a timely manner to make an accurate diagnosis of the root cause.

The problem has always been that an incident manager, like a doctor, has to jump between tools, entering requests in each system for the right information...and that is time consuming. In some cases, information isn't readily available and must be requested from other sources, not under their direct control.

One of the ways Watson is able to be such a great diagnostician (and incident manager) is through "linked data," which allows it to seek out and find related information on the patient from multiple sources in a fraction of a second to facilitate faster, more accurate patient diagnosis.

Until now, an incident manager did not have this same luxury.

That's where Jazz for Service Management comes in. Jazz is IBM's realtime platform for integrating management across multivendor tools, and across service lifecycle processes and functions. Like Watson, Jazz for service management uses principles of linked data, along with community standards (including OSLC) to support Watson-like service management decisions, regardless of what vendor tools you have in place.

First to the stage was Erich Clementi (Sr. Vice President, IBM Global Technology Services) to talk about service aggregation.

Smarter Computing is offering new opportunities that will impact the infrastructure, due to the unprecedented scale of everything and the way consumability (everything, everywhere, every time) is changing how IT needs to respond and react.

The boundaries of IT are changing, the infrastructure is changing. Anywhere, anytime, and any device is the new reality.

Erich remarked that the industrialization of IT-supported services (think Ford assembly line) will open up new options in sourcing services. This will reinvent all sorts of services born on the cloud to be more complex and come with richer options.

The hybrid cloud will be critical because customers are going to run workloads where it meets the best fit. So these hybrid clouds need to be interconnected, integrated, seamless, secure, auditable and dependable.

This is changing the role of the CIO.

There was an interesting comment Erich made that James Governor (@monkchips) and I were talking about on Twitter. "We are confronted by the infrastructures our clients have, not the ones we wish they have." James responded (and I tend to agree), "make them change. the status quo is not acceptable."

Erich showed how CAPEX utilization is actually a minor benefit of going to the cloud, whereas things like the standardization that comes from being on the cloud provide the greater value to customers – and it's in OPEX where the bigger savings come in.

There is an existing world that will need to be re-factored and re-thought out to get to the cloud.

Erich left the audience with three interesting thoughts:

Cloud is easy for consumption, but it requires a different delivery model

Changes in the role of IT will allow them to get closer to the business

You need a partner that gives you the choice and will get you there (like IBM)

Helene Armitage (GM of IBM System Software and Systems Growth) was next to present on innovations and Smarter Computing.

(I worked with Helene when she was in charge of AIX development – it was her leadership with AIX, in my opinion, that helped get us back in the game in the early 00's with pSeries.)

Helene did a very nice transition from Erich's keynote to talk about how these are the systems that are powering the things Erich discussed previously.

Consumer behavior is what is driving what happens in the IT data center and influencing hardware design. Consumers are creating data that is being captured and processed by the back-end systems in these data centers.

We need to evolve what is there today, but the rate and pace of change will continue to grow and the requirements for hardware will be driven by consumers. Where the consumers go, the IT department has to follow.

Smarter Computing systems are designed for data, delivered in the cloud and tuned to task. Helene used a good healthcare example. The data explosion in general, let alone healthcare (which Manoj will discuss), is phenomenal.

Everything is instrumented and capturing data. Data growth will be at 50x by 2020. An estimated 80% of the world's population will have a mobile device in the coming years.

The social implications of this data explosion will affect how hardware requirements are written. Enterprise systems with performance, scalability, reliability and availability will be critical.

Flexible systems to manage the data and remain secure will be important (and Helene gave a mention of RAS in this instance).

Helene also left the audience with three things (it's a day for lists):

(I call IBM Watson "he," though I was corrected on Twitter and IBM Watson could very well be "she")

Jeopardy was not the end, it was just the beginning of putting IBM Watson to work.

IBM Watson is currently focused on Healthcare (and now) Financial Services Sector jobs and is a key enabler for Smarter Planet and the new problem of data explosion.

Consider that 90% of data was generated over the past 2 years. 80% is unstructured and only 20% of it is used by traditional systems.

Those companies that can effectively use this "Big Data" are more successful.

Manoj is breaking down how IBM Watson does its magic. It not only reads Big Data, it understands it. IBM Watson is a filter – that's what makes it so good.

Healthcare is a great place to start with IBM Watson because of the data explosion. Doctors cannot keep up with this explosion and, as a result, 1 in 5 diagnoses in the US are incorrect.

Between 44,000 and 98,000 people die every year because of being misdiagnosed, so it is crucial to get this right. (Another sobering thought about how what we do impacts lives.)

1 in 4 people will die of cancer, and 20-44% of errors occur in the first diagnosis. So better diagnosis and treatment is far more complex than Jeopardy answers, but IBM Watson is learning what it needs to do.

IBM Watson is going after cancer as a medical assistant. It's being packaged with "adviser cartridges" for different areas of different industries and will be in the cloud (public, private or hybrid - whatever works for the customer).

The following article was written with significant contributions from Cameron Allen, Pierre Coyne and Beth Sarnie

Question of the day: why is IT agility so darn elusive?

Give up?

Follow up question: after spending multiple millions in technology to improve service delivery, quality, and productivity, why do so many line of business executives perceive that IT is still not moving "fast enough?"

Silo'd information presents a big speed bump to agility. According to the 2012 IBM study of CEOs, high-performing organizations are able to access data 108% more, draw insights from that data 110% more, and act on that data 86% more than their underperforming peers.

Which brings us back to the specific problem: Information exists, but it is not shared. Information remains trapped in silo'd tools and departmental applications. It's not only not moving "fast enough," it's not moving at all.

If you agree with ITIL and related methodologies, agility is directly linked to your IT processes. So while we can improve process methodology, connections across roles and functions, and specific technology silos with tools, if the data and resources cannot be freely shared across process-enabling tools, then it's all for naught.

Going one level deeper, what is the cause of this 'information black hole', where data enters tools and is never seen again? Your reality is that you probably rely on a mix of multi-vendor tools. Those vendor tools rely on proprietary APIs for integration, and trying to make tools with different APIs communicate requires the IT equivalent of a team of United Nations translators, where each is an expert in their application's main language (API). Even when successful, the herculean effort creates a constant maintenance cost, and might not work well in the end – things will be lost in translation. That said, even single-vendor tool suites are notoriously difficult to integrate.

So what can be done?

Stop for a moment and consider the best example that demonstrates simplicity of integration on a massive scale. It's the Internet. With the Internet, you can get information from millions of different web sites and all you need is a browser.

So for argument's sake, if tools are the equivalent of web sites, then all we need are links to connect two tools. We can take that one step further, borrowing principles from social networks like LinkedIn or IBM Connections, where we can search for one person, and see relationships to other people (making searching for data across tools much easier).

That in essence is OSLC (Open Services for Lifecycle Collaboration): A set of open, community agreed upon specifications for linking tools using web technology. (And before you ask, no. It's not a standard, because apparently standards alone have not done the job)

Data from any vendor tool is registered in a directory – like a search engine – where other tools can find it and its relationships to other data, and access it via simple web link technology. Not similar to the Internet, but exactly like the Internet.
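That directory idea can be sketched very simply. The snippet below is a toy illustration under invented names – the tool names, data types and URLs are not part of any OSLC specification – but it shows why the approach avoids vendor lock-in: consumers look up a provider by the kind of data they need, not by vendor, so swapping one vendor's tool for another is just a re-registration.

```python
# Toy sketch of a service directory: tools register what data they hold,
# and consumers discover a provider by data type rather than by vendor.
# All names, types and URLs here are invented for illustration.

directory = {}  # maps a data type to the tool that currently serves it

def register(data_type, tool_name, base_url):
    """A tool announces that it serves resources of a given type."""
    directory[data_type] = {"tool": tool_name, "url": base_url}

def find_provider(data_type):
    """A consumer looks up who has the data - the vendor doesn't matter."""
    return directory.get(data_type)

# Two different vendors register their tools.
register("ChangeRequest", "VendorA ChangeTool", "https://a.example/changes")
register("Incident", "VendorB ServiceDesk", "https://b.example/incidents")

# Replacing a tool with a competitor's is just re-registering the same type;
# consumers that ask for "Incident" are unaffected.
register("Incident", "VendorC ServiceDesk", "https://c.example/incidents")

print(find_provider("Incident")["tool"])  # prints "VendorC ServiceDesk"
```

Because consumers only ever ask the directory "who has incidents?", the integration they wrote against VendorB keeps working unchanged after the switch to VendorC – which is the "write once, reuse many" point made below.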

What that means is you can easily interconnect tools and processes. You can even replace tools with competitive tools - eliminating vendor lock in. It also means you can re-purpose one integration across a series of 'like' tools. "Write once, reuse-many" inherently applies here. All of this translates into simpler and faster access to information by people and tools, better analytics leading to better decisions, and better automation of workflow.