I would be interested to hear whether you have a plan to follow when your company faces some kind of greater danger. By greater danger I mean things like forces of nature, destructive hacker attacks whose only goal is to destroy your data, the loss of your biggest customers, etc. Ideally such a plan would exist in written form, but smaller companies often have only unwritten ideas in the manager's head. How well prepared are you for such incidents? Do you know of any sample plans that can be recommended and adapted?

Google will turn up some example disaster recovery plans, but you'll need to perform a risk assessment to get a good idea of what suits you. Risk is the product of probability (how likely an event is to occur) and hazard (the potential impact of the event), so it makes sense to plan for the highest-risk events. The highest risk may come from hardware failure on a critical system, for example, because the hazard is high and the probability isn't that small once you factor in the potential for human error.
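That risk calculation is simple enough to sketch in a few lines. Every event, probability, and impact score below is a made-up placeholder for illustration, not a figure from any real assessment:

```python
# Toy risk assessment: risk = probability x hazard (impact).
# All events and scores are illustrative placeholders.
events = {
    # event: (annual probability, impact score 1-10)
    "hardware failure on critical server": (0.30, 8),
    "flood at primary site":               (0.02, 10),
    "destructive hacker attack":           (0.05, 9),
    "loss of biggest customer":            (0.10, 7),
}

# Rank events by risk so the plan covers the highest-risk ones first.
ranked = sorted(events.items(),
                key=lambda kv: kv[1][0] * kv[1][1],
                reverse=True)

for name, (prob, impact) in ranked:
    print(f"{name}: risk = {prob * impact:.2f}")
```

With these sample numbers, the mundane hardware failure (0.30 × 8 = 2.4) outranks the dramatic flood (0.02 × 10 = 0.2), which is exactly the point of doing the multiplication instead of planning by gut feeling.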

Test your plan. Backup is all about restoring your data, not about the warm fuzzy feeling that you're safe, so test your backups and verify your media. Do a dry run of replacing that critical server. It might help to think of it like fire safety: every company needs a fire safety policy, and fire drills both test the policy and train the staff.
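"Verify your media" can be partly automated. A minimal sketch, assuming you restore a file from the backup set to a scratch location and compare it against the live original (the function names and paths here are hypothetical, not from any particular backup product):

```python
# Minimal backup verification sketch: restore a copy from the backup
# and confirm it is byte-identical to the original via checksums.
import hashlib
from pathlib import Path

def sha256(path: Path) -> str:
    """Stream a file through SHA-256 in 64 KiB chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_restore(original: Path, restored: Path) -> bool:
    """True only if the restored copy matches the original exactly."""
    return sha256(original) == sha256(restored)
```

A matching checksum proves the media is readable and the restore path works end to end, which is far more than a green light in the backup software's log tells you.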

We have multiple BCP/DR/COOP plans. They are at different tiers, so for what I'm in charge of, some of my plan may be inherited from something larger/greater.

I know when I was creating our COOP I had found several decent resources to use. I'll see if I saved them somewhere. I don't think I'm allowed to share/post our plan but I can probably name some specifics.

I am aware that one will probably not be allowed to share the company's materials, but some specifics would be nice and interesting.

Lately I got in contact with a few companies, and while talking I discovered that most of them have no such plans available. Some had thought about what would have to be done when something happens, but never wrote it down. I tried to explain to some of them how essential this could be (just imagine something happens while the responsible guy who thought up the solutions is absent for some reason), but I guess it won't help much, as people often don't see this as important. It reminds me of backups: as long as nothing happens, it's fine, but as soon as something does, people are glad to have such things.

I also second what jimbob says: such plans have to be tested once they are finished, as something often sounds great in theory but not in practice.

I've been on the receiving end more often than I'd care to be, with someone calling to complain that they never tested their backup and it's *MY* fault. Yes, it is my fault that you don't know how to properly rotate your tapes and overwrote your last full backup, or stored your tapes in a METAL safe that's magnetized, or replaced your tape drive with a new one that won't read your old tapes... but I digress.


While we're on business continuity plans, I'll share something I ran into some years ago. The organization I was working for thought they had done everything correctly: warm-site servers, offsite backups, the whole gamut. Lo and behold, disaster strikes and we lose two critical servers that run mission-critical applications and are worthless without the backend database. The Recovery Time Objective (RTO) was 12 hours. The servers were powered on and network-accessible within 2 hours. The restore from the backup tapes took another 37 hours.

This isn't about testing backup. The backup worked flawlessly, as designed, at the speed of the tape. It turns out that if your RTO is 12 hours and you budget 9 of those hours for the restore from backup, you'd better make darn sure that your backup/restore solution is actually capable of meeting that timeline. Failure to do so is most certainly a career-limiting move.
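The arithmetic that bit us takes five lines to check in advance. The data size and tape throughput below are assumed numbers chosen to mirror the story, not figures from any real environment:

```python
# Sanity-check whether a tape restore can fit inside the RTO window.
# All figures are illustrative assumptions.
rto_hours = 12.0            # recovery time objective
rebuild_hours = 2.0         # budget for powering on / reconfiguring servers
data_gb = 2000.0            # size of the database to restore
restore_gb_per_hour = 54.0  # measured tape restore throughput

restore_hours = data_gb / restore_gb_per_hour
total_hours = rebuild_hours + restore_hours

print(f"estimated restore: {restore_hours:.1f} h, total: {total_hours:.1f} h")
if total_hours > rto_hours:
    print("RTO cannot be met -- fix the restore path, not the paperwork")
```

With these numbers the restore alone takes about 37 hours against a 12-hour RTO. The throughput figure has to come from an actual timed test restore, because that is precisely the number nobody had measured.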