Looks like you dealt with fairly small databases. The same scenario could have been much worse, in terms of restore times, if the databases had been of some size, say 10 GB. Log shipping would have saved a lot of time in that case!

So far I have been lucky in that very few of our databases have a restore commitment of less than 24 hours, and those that do are less than 100 MB each and replicated to another location. I have, however, had the lovely experience of both the primary and backup sites losing a drive in a RAID 5 array within a month of each other. Fortunately it was only one drive each time, and we got it replaced before losing any more (we did have to wait a week on one drive, and boy, was everyone sweating it).

"Don't roll your eyes at me. I will tape them in place." (Teacher on Boston Public)

Log shipping is definitely a worthwhile thing. I created some ultra-basic scripts (since I don't have the Enterprise Edition of SQL Server) that do the equivalent (look on comp.databases.ms-sqlserver). My goal is to be up within 5-10 minutes.
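The basic equivalent can be sketched in a few T-SQL statements (the database name and paths here are made up for illustration; the real scripts are in the newsgroup post):

```sql
-- On the primary, a scheduled job dumps the transaction log:
BACKUP LOG SalesDB
    TO DISK = 'D:\LogShip\SalesDB_log.trn'
    WITH INIT

-- The file is then copied to the standby (xcopy, ftp, whatever),
-- where another job applies it. STANDBY leaves the copy read-only
-- but still able to accept the next log:
RESTORE LOG SalesDB
    FROM DISK = 'E:\LogShip\SalesDB_log.trn'
    WITH STANDBY = 'E:\LogShip\SalesDB_undo.dat'

-- At failover time, recover the standby copy and point the apps at it:
RESTORE DATABASE SalesDB WITH RECOVERY
```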

Some ideas for log shipping:

1) For all the databases that only require nightly backups (and can easily survive a day's loss of data), set up a script to restore them nightly on the backup server in operational mode. That way they're ready to go and don't need to be touched, shaving off precious minutes.

2) Scripts, scripts, scripts. I have one to restore the transaction logs, one to run through and fix the users, etc., etc. I try to make them as generic as possible, using input parameters to tell them which database/files to work on.

3) Jobs - I have 3 jobs set up that will bring everything back up. They run the aforementioned scripts with the necessary parameters. The only things they don't do are change the IP address and server name, and run the setup program so that everything synchronizes.

4) Documentation. Do a complete run-through, documenting everything. Make it so easy your kids can run it!

5) Assume you won't be there. Assume you'll be hit by a bus. Although, granted, at that point you won't care if the databases aren't brought up quickly. ;)
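For what it's worth, the nightly-restore and fix-the-users scripts from ideas 1 and 2 boil down to something like this (database, file, and user names here are invented for illustration):

```sql
-- Restore the nightly full on the backup server in operational mode,
-- relocating the files to wherever that server keeps its data:
RESTORE DATABASE SalesDB
    FROM DISK = 'E:\Backups\SalesDB_full.bak'
    WITH REPLACE, RECOVERY,
         MOVE 'SalesDB_Data' TO 'E:\Data\SalesDB.mdf',
         MOVE 'SalesDB_Log'  TO 'E:\Data\SalesDB.ldf'

-- Fixing the users: after a restore onto another server, the database
-- users are orphaned from that server's logins, so relink them:
USE SalesDB
EXEC sp_change_users_login 'Report'              -- list the orphans
EXEC sp_change_users_login 'Auto_Fix', 'WebUser' -- relink one by name
```

Wrapping those in a stored procedure or job step that takes the database name and file paths as parameters is what makes them reusable across databases.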

I'm not sold on log shipping, mainly because I've seen too many errors with MS tools like this. I prefer to do it myself.

Total DB size (3 dbs) was about 1 GB, though this was initially implemented because we had the servers in another location and FTP'd the data back every 15 minutes. A larger db wouldn't have changed this, though the full backups would probably have been weekly and the differentials daily.

Good ideas below, and I'd like to implement them, but I don't have a spare server. In this case, we pressed the QA server into production. However, I do practice the restore every Monday to reset the QA environment, so I've got good documentation on that. The only thing we missed was documenting the repointing of the web servers to the new database. Since this was a temporary fix, we did not want to rename the server.

Performing a cold backup of master, model, and msdb once in a while on a local disk can also help. I once got a call from a system engineer saying that there had been a controller failure, that he had rebuilt the box and restored the files, but the SQL Server services wouldn't start. The reason, obviously, was that the backup software in use was not backing up the *.mdf and *.ldf files, so those files were never restored. Fortunately I had taken a cold backup of all the system database files, renamed with different extensions, and those copies were restored. All I had to do was rename the files back to their original extensions, place them in the data folder, start the services, and restore the rest of the user databases.
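In case it helps anyone, the routine looks roughly like this (the paths and the .cold extension are just examples):

```sql
-- First, find where the system database files actually live:
SELECT name, filename
FROM master.dbo.sysaltfiles
WHERE dbid IN (DB_ID('master'), DB_ID('model'), DB_ID('msdb'))

-- Then, with the service stopped so the files are closed:
--   net stop MSSQLSERVER
--   copy C:\MSSQL\Data\master.mdf  D:\ColdBackup\master.mdf.cold
--   copy C:\MSSQL\Data\mastlog.ldf D:\ColdBackup\mastlog.ldf.cold
--   (likewise for the model and msdb files)
--   net start MSSQLSERVER
-- Renaming the extension (.cold here) keeps the backup software from
-- skipping the copies the way it skips live *.mdf / *.ldf files.
```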