How often you make backups and how long you keep them is completely dependent on the data. The backups your bank or doctor keeps will almost certainly be a lot different from the backups kept by some simple WordPress blog. What would be the advantage of storing your backups in a VCS? Are you thinking this would act as some kind of simple de-dup?
– Zoredache Jan 11 '12 at 22:47

I think it's depressing to realize that there are many small doctors' practices that probably keep the same backups Matthew is asking about... if any...
– Bart Silverstrim Jan 11 '12 at 23:15

@BartSilverstrim It's been my experience that small offices in general keep lousy backups (if any). From direct experience back in my ISP/MSP consulting days, medical offices are no exception. Bright side: you look like a hero when you implement backups, a server dies a few days later, and you restore all their data!
– voretaq7♦ Jan 12 '12 at 16:19

2 Answers

First, don't version control your database backups.
A backup is a backup - a point in time. Using version control sounds like a nice idea, but realize that it means you will need to restore the whole SVN repository (ZOMG Freaking HUGE) if you have a catastrophic failure and need to get your database back. That may be additional downtime you can't afford.
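If the appeal of version control was keeping multiple restore points, plain date-stamped dump files give you the same thing without dragging a whole repository along. A minimal sketch, assuming a MySQL database; the database name, path, and 30-day retention are placeholders to adapt:

    #!/bin/sh
    # Write a compressed, date-stamped dump -- each file is an independent
    # restore point, so a restore never has to touch more than one file.
    BACKUP_DIR=/var/backups/mysql    # placeholder path
    DB_NAME=mydb                     # placeholder database name
    mysqldump --single-transaction "$DB_NAME" \
        | gzip > "$BACKUP_DIR/$DB_NAME-$(date +%Y%m%d).sql.gz"
    # Prune restore points older than 30 days (pick your own retention).
    find "$BACKUP_DIR" -name "$DB_NAME-*.sql.gz" -mtime +30 -delete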

Second, make sure your backups are getting off site somehow.
A backup on the local machine is great if you need to restore data because you messed up and dropped a table. It does you absolutely no good if your server's disks die.
Options include an external hard drive or shipping the backups to a remote machine using rsync. There are even storage service providers like rsync.net that specialize in that.
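To make that concrete, shipping the dump directory to a remote box can be a single rsync command run from cron each night; the host and paths below are placeholders:

    # Push the local dump directory to an off-site machine over SSH.
    # backup.example.com and both paths are placeholders for your own setup.
    # Without --delete, the remote side keeps dumps even after they are
    # pruned locally, giving you a longer off-site retention window.
    rsync -az /var/backups/mysql/ backup@backup.example.com:/srv/backups/mysql/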

Third, regarding frequency of backups: Only you know how often you need to do this.
My current company has a slave database with near-real-time replication of our production data. That slave is backed up every night to a local machine, which then syncs to an off-site storage facility.
In the event of a production hardware failure we activate the slave. Data loss should be minimal, as should downtime. In the event of an accidental table deletion we can restore from the local backup (losing up to 1 day of data). In the event of a catastrophic incident we can restore from the off-site backup (which takes a while, but again will only lose up to 1 day of data).
Whether that kind of backup scheme works for you depends on your data: if it changes frequently you may need to investigate a backup strategy that gets you point-in-time recovery (log-shipping solutions can often do this); if it's mostly static you may only need to back up once a month. The key is making sure you capture changes to your data within a reasonable time of when they're made, so you don't lose those changes in the event of a major incident.
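As a rough sketch of what log-shipping-style point-in-time recovery looks like, MySQL's binary logs can replay the changes made after the last full dump; the file names and timestamp here are hypothetical:

    # Restore the most recent full dump, then roll forward through the
    # binary log to recover changes made after the dump was taken.
    gunzip -c /var/backups/mysql/mydb-20120111.sql.gz | mysql mydb
    # Stop just before the incident (say, an accidental DROP TABLE).
    mysqlbinlog --stop-datetime="2012-01-12 09:30:00" \
        /var/log/mysql/mysql-bin.000042 | mysql mydb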

Also, don't take dumps from the production database; at the very least, use a replicated copy as the source for backups. That way your backup procedures won't impact production database performance.
– theist Jan 11 '12 at 23:00

@theist Excellent point. While you can do a dump against production, it will have a performance impact, and one day your users will notice and declare it to be "unacceptably slow."
– voretaq7♦ Jan 11 '12 at 23:46