Friday, 21 February 2014

The other day I attended a meeting of a local business continuity forum. It was a very well run, very interesting meeting – the latter despite the fact that one of the topics was business interruption insurance, living proof that any subject can be made interesting by an engaging speaker. There was, however, one small glitch in proceedings that I thought was worthy of note. Or that at least gave me an excuse to write a blog.

The second item on the agenda involved a live link-up, via Skype, to a presenter in some far flung, desolate location – Reading, I think. At the appropriate time, the chairman initiated the call. And then… nothing happened, apart from a deafening silence. The technology didn’t work. Now, before you say anything, yes, of course it had been tested beforehand. This was, after all, a group of consummate business continuity professionals. It had, however, been tested on the previous Friday afternoon, whereas the live event was on a Monday morning, when the volume of traffic on the network is, apparently, much greater. To the extent that there wasn’t enough room left in the pipe for a teeny weeny little Skype call.

After much umm-ing and ah-ing and “talk amongst yourselves”-ing, the organisers finally got it working – for a while at least, but then it failed again and they eventually had to resort to a somewhat Heath Robinson solution involving the loudspeaker on a mobile ‘phone next to a microphone connected to the room’s sound system. Which, I have to say, was a better sound quality than the original Skype solution. And so the meeting continued with no further hiccups.

The episode brought to mind a technology glitch at another seminar that I was at a while ago. This time I was presenting to an audience of 200-odd people (by which I mean approximately 200, as opposed to 200 odd people – although there were one or two there who fitted the description admirably). The venue was a concert hall with a huge stage about ten feet above the audience, who were seated around tables in an auditorium the size of a small country. Not at all daunting.

It came to my turn. I took a deep breath, walked up the steps to the stage, introduced myself, pressed the button on the remote control to fire up my slides and…nothing happened. There was a completely blank screen behind me and a couple of hundred people looking at me expectantly. Not even a whiteboard to fall back on. Oops! Time for Plan B. Which was to busk it for ten minutes while the techies scurried around poking things and unplugging and re-plugging things, having tried the universal solution of powering it off and on, which had no effect whatsoever. Eventually the screen came back on, I re-synched my blathering with the pretty pictures and all was well. It was a bit uncomfortable for a while but I got away with it. I even got a bit of a buzz from it in a masochistic sort of a way, although I was only too happy to take the applause and return to my seat at the end of it. And, before you ask, yes, of course it had been tested beforehand. I am, after all, a consummate business continuity professional!

Both incidents made me think about the huge reliance that we place on technology and the difficulties it can cause when it’s not there. But they also made me think that, as often as not, there are alternatives, whether they involve the use of other technologies or switching to manual processes – maybe even reverting back to the way we used to do it in the old days, before all the clever and sophisticated technology arrived to “help” us.

They reminded me of the importance of testing, and the fact that, to be really confident that things will work, testing should be as comparable to the real thing as we can possibly make it. Even then there are no guarantees, but if our testing isn’t realistic it can give us a completely false sense of security.

And they reinforced the point that, whether the solution is highly technical and whizzy or simple and old fashioned, we should always, always have a Plan B up our metaphorical sleeve. Because, as a certain Mr Murphy decreed long ago, whatever can go wrong almost certainly will.

Andy Osborne is the Consultancy Director at Acumen, and author of Practical Business Continuity Management. You can follow him on Twitter and his blog, or link up with him on LinkedIn.

Thursday, 13 February 2014

I’m relatively new to business continuity management, with only a little over ten years’ experience in this industry that is said to be made up of the 'Men in Grey': bearded and grey-suited men. Someone said this to me at last year’s BCI World Conference; I then looked in the mirror and, sure enough, that was me already.

So in my short time what changes have I seen, what incenses me and what gives me hope that as an Institute we are making progress?

Like many when they start out in this industry, I was volunteered as opposed to being a volunteer. It was in the days of PAS56 (Publicly Available Specification 56), the forerunner to BS25999 and, ultimately, ISO22301.

My experience was that the business in Eastern Europe that I worked for needed to comply with various standards and regulations and business continuity management was beginning to be the latest fashionable topic.

Returning to the parent company in England, I was suddenly considered an expert because I had actually read the existing standard - "Dave can write us a plan" I was told. Oh dear! No ten pillars of business continuity (PAS56); no BCM Lifecycle (BS25999); just "write us a plan." This was post-2000, after the millennium bug scare, which had achieved a lot in some respects but had also suggested that BCM was exaggerated to create a cottage industry.

So have we truly progressed? Looking back, I can now see clearly that the point at which business continuity management moved forward for me came when the right Top Management influencers were driving it. Even then, however, the dark side of 'minimum compliance' versus 'budget availability' was always present.

I’m proud to say I now tutor the topic for the BCI via one of its top training providers, and in doing so I meet people from many business sectors, from Directors to BC Coordinators – and yes, some of those who have been volunteered.

I still see, in some of the biggest and most multi-faceted global organizations, a culture centred on compliance; equally, I see huge amounts of dedication, expertise and frustration from people hugely committed to business continuity management.

So what incenses me?

The fact that we still use dramatic events to explain the concept of business continuity. As impactful as they are, and perhaps becoming more frequent, I'm incensed that we still think this is how to promote the topic.

The fact that we are often still at loggerheads with the risk industry and that we struggle to embrace each other’s discipline to a common objective.

The fact that we as an Institute analyze supply chain continuity each year and come up with very similar data, yet we still do not have the means to change those findings through a common understanding of the issues.

Finally, the fact that whenever you attend forums, presentations are largely centred around statistics that depict the frequency of events and a series of pictures showing how bad things can get, invariably with no evidence of what we can do to make things practically better.

So, "what is the solution, and what are you doing about it?" I hear you say. My view is simple, but the solution may be a little more complex.

Organizations in this day and age, be they charities, public sector or private sector, small, medium or global, have to be commercially driven and commercially efficient. Top Management are driven by success, often evidenced by financial targets.

The most common phrase I hear when discussing business continuity management and disruptive events is “what’s the chances of that happening?” – the classic response, born of risk appetite and risk attitude. Why spend budget on an unlikely event?

Top Management speak of 'risk' - they can comprehend this because it’s built into us all from birth. Planning is counterintuitive; reacting is natural.

Something we all must do, and I try to, is promote the concept of business continuity as a value-adding, commercially driven, essential part of a successful organization. This includes understanding your Top Management’s appetite for and attitude to risk, and their maximum tolerable disruption (over time).

When it comes to procurement and managing supply chain continuity, Top Management need to understand the 'Risk/resilience Assessed Total Cost of Ownership'.
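The post doesn’t define 'Risk/resilience Assessed Total Cost of Ownership' in detail, but as a purely illustrative sketch (the function name, formula and all figures below are my own assumptions, not the author’s), the idea can be expressed as adding the expected cost of supply disruption to the conventional purchase-plus-operating total:

```python
# Hypothetical illustration of a risk-adjusted total cost of ownership.
# All figures are invented for the example; real assessments would be
# far more nuanced than a single probability and impact estimate.

def risk_adjusted_tco(purchase, annual_operating, years,
                      annual_disruption_probability, disruption_impact):
    """Conventional TCO plus the expected cost of supplier disruption."""
    conventional = purchase + annual_operating * years
    expected_disruption = annual_disruption_probability * disruption_impact * years
    return conventional + expected_disruption

# Cheaper supplier, but with weaker continuity arrangements:
supplier_a = risk_adjusted_tco(100_000, 20_000, 5, 0.10, 500_000)

# Dearer supplier with demonstrably stronger continuity arrangements:
supplier_b = risk_adjusted_tco(120_000, 22_000, 5, 0.02, 500_000)

# On sticker price alone supplier A wins; on a risk-adjusted basis,
# supplier B is substantially cheaper over the five years.
```

The point of a sketch like this is the commercial framing the author argues for: resilience becomes a number Top Management can weigh in a procurement decision, rather than a compliance box to tick.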

As an Institute, as BC professionals, we need to place business continuity at the top table by giving Top Management reasons to adopt it based on commercial efficiency, not compliance.

This cultural shift that the BCI Good Practice Guidelines tell us is so hard to measure will happen if we present commercial evidence as to why Top Management need business continuity management.

My part in this transition is to constantly discuss business continuity management in terms of a commercial imperative, and to offer solutions and concepts, not statistics and photographs.

David Window is the Managing Consultant of Continuity 22301 Ltd in Cheshire, UK.