Ad hoc Backup Policy - prior to applying small changes

I am interested in people's backup policies prior to applying small database changes.

At our company, our policy is to back up the entire database before any script is applied, no matter how small the change. Whilst the policy is conservative and safe, it also uses a lot of resources in terms of time and server space. I understand the logic - e.g. there might be triggers creating more changes than anticipated, so just backing up the data or objects that one is changing would not suffice.

We use Redgate to compare our dev and live environments and to create the change scripts. I guess we could create "reverse" scripts to restore if necessary. Transactions and rollback won't work, as this is needed not necessarily in a failure scenario - rather just as a restore point.

I'd be interested to hear others' suggestions / opinions. Thanks!

Jshapiro 32900 (11/3/2015)At our company, our policy is to backup the entire database before any script is applied, no matter how small the change. Whilst the policy is conservative and safe, it also uses a lot of resources in terms of time and server space.

I take a view based on recovery time, and the likelihood of having to make a Restore, or even multiple restores.

For a quick fix we take a Transaction Log Backup (to "empty" the transaction log at that point in time) and then a Differential Backup. At various points in the rollout we would take a Log Backup and explicitly add a comment to it indicating what stage of the rollout we had got to. We have an SProc that makes the Log Backup and takes an optional "comment" parameter, so it's trivial for us to make a log backup during the rollout:

EXEC MyAdminDB.dbo.usp_MyBackup @MyDBName='xxx', @MyComment='XXX'
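A minimal sketch of what such a wrapper procedure might look like. The procedure body, backup path, and the use of `WITH DESCRIPTION` to carry the comment are my assumptions, not the poster's actual code:

```sql
-- Hypothetical sketch only: names, path, and implementation are assumptions.
CREATE PROCEDURE dbo.usp_MyBackup
    @MyDBName  sysname,
    @MyComment nvarchar(255) = NULL
AS
BEGIN
    -- Build a dated file name for the log backup.
    DECLARE @Path nvarchar(512) =
        N'D:\Backups\' + @MyDBName + N'_'
        + CONVERT(nvarchar(8), GETDATE(), 112) + N'.trn';

    -- BACKUP ... WITH DESCRIPTION stores the comment in the backup header,
    -- so it can be read back later with RESTORE HEADERONLY.
    DECLARE @Sql nvarchar(max) =
        N'BACKUP LOG ' + QUOTENAME(@MyDBName)
        + N' TO DISK = @Path WITH DESCRIPTION = @Comment;';

    EXEC sp_executesql @Sql,
        N'@Path nvarchar(512), @Comment nvarchar(255)',
        @Path = @Path, @Comment = @MyComment;
END
```

The comment in the header is then enough to identify "which rollout stage is this backup from" when picking a restore point.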

For a large rollout (within scheduled downtime) I will always start with a full backup. That gives me the fastest recovery time if one of the rollout steps goes wrong and I have to restore-and-repeat (compared to also having to restore a DIFF backup). We also have, built into the rollout scripts, various points at which we make a Log Backup. That allows me to Restore Full, "roll forward" the various LOG backups up to the last-known-good point in the rollout process, and then resume from there. Of course I am not expecting to have ANY issues during rollout, as it has all been tested, but in the real world we sometimes get some data added since the test which conflicts with the rollout, so it's mostly about insurance.
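The restore-to-last-known-good sequence described above might look like this (file names and paths are illustrative assumptions):

```sql
-- Restore the pre-rollout full backup, leaving the DB able to accept log restores.
RESTORE DATABASE MyDB FROM DISK = N'D:\Backups\MyDB_Full.bak'
    WITH NORECOVERY, REPLACE;

-- Roll forward each log backup taken during the rollout, in sequence,
-- stopping after the last-known-good stage.
RESTORE LOG MyDB FROM DISK = N'D:\Backups\MyDB_Stage1.trn' WITH NORECOVERY;
RESTORE LOG MyDB FROM DISK = N'D:\Backups\MyDB_Stage2.trn' WITH NORECOVERY;

-- Bring the database online at that point and resume the rollout from there.
RESTORE DATABASE MyDB WITH RECOVERY;
```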

We use redgate to compare our dev and live environments and to create the change scripts.

Personally I don't like that approach. We script everything (rather than "just doing it" live in the DEV database and then scripting it later with a COMPARE tool). Each SProc / Trigger / View has its own file (added benefit that they are stored in a Revision Control system). Any DDL changes we make are scripted - i.e. new/changed Tables, Columns, Indexes, FKeys etc. We use the SSMS GUI Table Design to prepare changes (lazy mode!) but then press the SCRIPT button, rather than the SAVE button, and then we run the script on DEV - which proves that it works. Thus all such scripts are available, in chronological order.

I think this approach also has the advantage that if a DDL change ALSO needs an UPDATE script to "massage" the data, then that too is in the DDL script files, in chronological order. (I presume that a Database DDL Compare tool would not know about such data changes, so you would have to handle those separately.)
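As a concrete (hypothetical) example of a DDL change and its data "massage" living in the same chronological script file - a compare tool would generate the ALTER but not the UPDATE:

```sql
-- Invented names, for illustration only.
-- DDL change: add the new column.
ALTER TABLE dbo.Customer ADD FullName nvarchar(200) NULL;
GO

-- Data massage: populate the new column from existing data.
UPDATE dbo.Customer
SET    FullName = FirstName + N' ' + LastName
WHERE  FullName IS NULL;
GO
```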

At rollout we run all DDL scripts, in order, and then all SProc / Trigger / VIEW scripts in modify-date order. We concatenate them into a single "Rollout Script" so we can easily run that one script on TEST and then identically on PRODUCTION (for us that is usually numerous separate-client Production databases; the approach may have less value for a single application/database rollout).

If we find that we have a Chicken-and-Egg problem and get an "X does not exist" type error, then we move the missing object earlier in the script (or the offending object later :-) ), restore the DB and run the script again - i.e. the script should be "clean" by the time we need to run it on Production. This allows us to take care of any FKey relationships that cause updates to need to happen in a particular order, and so on.

I guess we could create “reverse” scripts to restore if necessary

I've never been in the situation where we had to do that, and it has always seemed like a huge amount of work to me - in the sense that it is very difficult to test that it will work in all possible rollback circumstances! But for any environment where you might want to roll back after some time has elapsed AND keep all new data/changes, it would be essential. I just think that a rollout that splits ColumnA into ColumnA1 & ColumnA2, where all the data entry screens change to the New Way, makes it very difficult to build a bullet-proof "reverse" script.

That notwithstanding, and depending on the change, I may back up separate tables using SELECT INTO to copy the data to a "safe" database. We also have a copy of the "money maker" database that is restored from the backup immediately after the database is backed up, to not only provide an online secondary in case all hell breaks loose but also to test the backup. Of course, that database is only 350GB, so none of that takes much time.

--Jeff Moden
RBAR is pronounced ree-bar and is a Modenism for Row-By-Agonizing-Row.
First step towards the paradigm shift of writing Set Based code: Stop thinking about what you want to do to a row... think, instead, of what you want to do to a column.
If you think it's expensive to hire a professional to do the job, wait until you hire an amateur. -- Red Adair

Such copies get a TEMP_ prefix and a yyyymmdd suffix, so that they are all neatly grouped, alphabetically, under TEMP_ and don't pollute the normal namespace; and when they have been forgotten for long enough we can use the "yyyymmdd" to decide "that can't be needed any more" and DROP it.
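The SELECT INTO safety-net copy with that dated naming convention could be as simple as the following (database and table names are invented for illustration):

```sql
-- Copy the rows (and column structure, but not indexes or constraints)
-- into the "safe" database before applying the change.
SELECT *
INTO   SafeDB.dbo.TEMP_Customer_20151103
FROM   LiveDB.dbo.Customer;
```

SELECT INTO creates the target table from the source's column definitions, so no prior DDL is needed; anything beyond the raw data (indexes, FKeys, triggers) is not copied.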

We also have a copy of the "money maker" database that is made from the backup immediately after database is backed up to not only provide an online secondary in case all hell breaks loose but also to test the backup. Of course, that database is only 350GB so none of that takes much time.

Great idea - I hadn't considered that before. Where disk space is sufficient (which will be 100% of the time for us, as we have many DBs per server and definitely enough space for a copy of one of them) I will do this. If all hell breaks loose I like the idea of just being able to rename a couple of databases to get to an "active" state, without the stress of a restore-from-scratch. Also, being able to use that copy-DB during the initial does-it-hold-water tests (after rollout, but whilst still in limited-access mode for key users) would answer any "I'm sure it didn't use to be like that / I can't remember if ..." questions.

No surprise, I assume?!!, that we have a naming convention of RESTORE_OriginalDatabaseName_yyyymmdd, and that these get purged after a while based on their yyyymmdd staleness. Our automatic add-new-DB-names-to-the-backup-job routine ignores any DB whose name starts "RESTORE_".
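A sketch of how such a routine might enumerate databases for the backup job while skipping the restored copies (the exact routine is the poster's; this filter is my assumption):

```sql
-- List candidate databases for the backup job, excluding RESTORE_ copies.
SELECT name
FROM   sys.databases
WHERE  name NOT LIKE N'RESTORE[_]%'  -- [_] matches a literal underscore
  AND  database_id > 4;              -- skip the system databases
```

The `[_]` matters: a bare `_` in a LIKE pattern is a single-character wildcard, so `NOT LIKE 'RESTORE_%'` would also exclude names such as RESTOREXdata.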