Backup

Backup is a RubyGem, written for Linux and Mac OS X, that allows you to easily perform backup operations on both your remote and local environments. It provides you with an elegant DSL in Ruby for modeling your backups. Backup has built-in support for various databases, storage protocols/services, syncers, compressors, encryptors and notifiers, which you can mix and match. It was built with modularity, extensibility and simplicity in mind.

Getting Started

What Backup 3 currently supports

Below you'll find a list of components that Backup currently supports. If you'd like support for components other than the ones listed here, feel free to request them or to fork Backup and add them yourself. Backup is modular and easy to extend.
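
The sections below walk through an example configuration along the following lines. This is a sketch reconstructed from that walkthrough, not the verbatim original: all credentials, hostnames and paths are placeholders, and notifier credentials (e.g. Twitter OAuth keys, mail server settings) are omitted for brevity.

```ruby
# A sketch of the sample configuration described below.
Backup::Model.new(:sample_backup, 'A sample backup configuration') do

  # Split the final package into chunks of at most 4000 MB
  split_into_chunks_of 4000

  # Databases to dump
  database MySQL do |db|
    db.name     = 'my_sample_mysql_db'
    db.username = 'my_username'
    db.password = 'my_password'
  end

  database MongoDB do |db|
    db.name = 'my_sample_mongo_db'
  end

  # Tar archives of plain directories
  archive :user_avatars do |archive|
    archive.add '/var/apps/my_sample_app/public/avatars'
  end

  archive :logs do |archive|
    archive.add '/var/apps/my_sample_app/log'
  end

  compress_with Gzip

  encrypt_with OpenSSL do |encryption|
    encryption.password = 'my_secret_password'
  end

  # Primary storage: Amazon S3
  store_with S3 do |s3|
    s3.access_key_id     = 'my_access_key_id'
    s3.secret_access_key = 'my_secret_access_key'
    s3.region            = 'us-east-1'
    s3.bucket            = 'my_bucket'
    s3.path              = '/backups'
  end

  # Two additional SFTP storages, distinguished by unique identifiers
  store_with SFTP, 'Server A' do |server|
    server.username = 'my_username'
    server.password = 'my_password'
    server.ip       = 'a.my-backup-server.com'
    server.path     = '~/backups'
  end

  store_with SFTP, 'Server B' do |server|
    server.username = 'my_username'
    server.password = 'my_password'
    server.ip       = 'b.my-backup-server.com'
    server.path     = '~/backups'
  end

  # Syncer: mirrors directories straight to S3, outside the package pipeline
  sync_with Cloud::S3 do |s3|
    s3.access_key_id     = 'my_access_key_id'
    s3.secret_access_key = 'my_secret_access_key'
    s3.bucket            = 'my_bucket'
    s3.path              = '/backups'
    s3.mirror            = true

    s3.directories do |directory|
      directory.add '/path/to/videos'
      directory.add '/path/to/music'
    end
  end

  # Tweet on every outcome; only email on warnings/failures
  notify_by Twitter do |tweet|
    tweet.on_success = true
    tweet.on_warning = true
    tweet.on_failure = true
  end

  notify_by Mail do |mail|
    mail.on_success = false
    mail.on_warning = true
    mail.on_failure = true
  end

end
```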

Brief explanation for the above example configuration

First, it will dump the two Databases (MySQL and MongoDB). The MySQL dump will be piped through the Gzip Compressor into
sample_backup/databases/MySQL/my_sample_mysql_db.sql.gz. The MongoDB dump will be written to
sample_backup/databases/MongoDB/, which will then be packaged into sample_backup/databases/MongoDB-#####.tar.gz
(##### is a simple unique identifier, in case multiple dumps are performed).
Next, it will create two tar Archives (user_avatars and logs). Each will be piped through the Gzip Compressor into
sample_backup/archives/ as user_avatars.tar.gz and logs.tar.gz.
Finally, the sample_backup directory will be packaged into an uncompressed tar archive, which will be piped through
the OpenSSL Encryptor to encrypt this final package into YYYY-MM-DD-hh-mm-ss.sample_backup.tar.enc. This final
encrypted archive will then be transferred to your Amazon S3 account. If all goes well, and no exceptions are raised,
you'll be notified via the Twitter notifier that the backup succeeded. If any warnings were issued or an
exception was raised during the backup process, you'd receive an email in your inbox containing detailed exception
information, as well as a simple Twitter message that something went wrong.

Aside from S3, we have also defined two SFTP storage methods and given them unique identifiers, Server A and
Server B, to distinguish between the two. With these in place, a copy of the backup will also be stored on two
separate servers: a.my-backup-server.com and b.my-backup-server.com.

As you can see, you can freely mix and match archives, databases, compressors, encryptors, storages
and notifiers for your backups. You could even specify four storage locations if you wanted: Amazon S3, Rackspace Cloud
Files, Ninefold and Dropbox. Backup would then store your packaged backup in four separate locations for high redundancy.

Also, notice the split_into_chunks_of 4000 at the top of the configuration. This tells Backup to split any backup
that exceeds 4000 megabytes in size into multiple smaller chunks. Assuming your backup file is 12000 megabytes (12GB)
in size, Backup will take the output that was piped from tar into the OpenSSL Encryptor and additionally pipe
that output through the split utility, resulting in 3 chunks of 4000 megabytes with the additional file extensions
-aa, -ab and -ac. These files will then be transferred individually. This is useful when you are using
Amazon S3, Rackspace Cloud Files, or other 3rd-party storage services which limit you to "5GB per file" uploads. With
this, the backup file size is no longer a constraint.
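
To restore a chunked backup, the chunks just need to be concatenated back together before decryption. A minimal sketch, assuming the filenames from the example above and the aes-256-cbc cipher used by the OpenSSL Encryptor (the exact openssl flags must mirror your encryptor settings; add -base64 if you enabled base64 encoding):

```sh
# Reassemble the chunks into the original encrypted archive
cat YYYY-MM-DD-hh-mm-ss.sample_backup.tar.enc-* > sample_backup.tar.enc

# Decrypt it back into a plain tar archive, then unpack it
openssl aes-256-cbc -d -in sample_backup.tar.enc -out sample_backup.tar -k my_secret_password
tar -xf sample_backup.tar
```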

Additionally, we have defined an S3 Syncer ( sync_with Cloud::S3 ), which does not follow the above process of
archiving/compression/encryption, but instead directly syncs the whole videos and music folder structures from
your machine to your Amazon S3 account. (This is very efficient and cost-effective, since it only transfers files that
were added or changed. Additionally, since we flagged it to mirror, it'll also remove files from S3 that no longer exist
locally.) If you simply wanted to sync to a separate backup server that you own, you could also use the RSync Syncer
(see the sketch below) for even more efficient backups that only transfer the bytes of each file that changed.
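
A minimal sketch of such an RSync Syncer, assuming a server you control at backup-server.example.com. The option names are placeholders based on the Backup 3 syncer DSL and may vary between versions (recent Backup 3 releases namespace this syncer, e.g. as RSync::Push):

```ruby
sync_with RSync do |rsync|
  rsync.username = 'my_username'
  rsync.ip       = 'backup-server.example.com'  # a backup server you own
  rsync.path     = '~/backups/'
  rsync.mirror   = true  # remove remote files that no longer exist locally

  rsync.directories do |directory|
    directory.add '/path/to/videos'
    directory.add '/path/to/music'
  end
end
```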

There are more archives, databases, compressors, encryptors, storages and notifiers than
are shown in the example. All available components are listed at the top of this README, as well as in the
Wiki, with more detailed information.

Running the example

Notice the Backup::Model.new(:sample_backup, 'A sample backup configuration') do at the top of the above example. The
:sample_backup symbol is called the trigger. This is used to identify the backup procedure/file and initialize it.

```sh
$ backup perform --trigger sample_backup
```

You can also use the short form: backup perform -t sample_backup. Now it'll run the backup. It's as simple as that.

Automatic backups

Since Backup is an easy-to-use command line utility, you can set up a cron task to invoke it periodically. I recommend
using Whenever to manage your crontab. It allows you to write the crontab
using pure Ruby, and it provides an elegant DSL to do so. Here's an example:

```ruby
every 6.hours do
  command "backup perform --trigger sample_backup"
end
```

With this in place, run whenever --update-crontab backup to write the equivalent of the above Ruby syntax to the
crontab in cron syntax. Cron will then invoke backup perform --trigger sample_backup every 6 hours. Check out the
Whenever project page for more information.
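
For reference, the generated crontab entry will look something like the following (Whenever wraps commands in a login shell by default; the exact schedule expression it emits may differ):

```
0 0,6,12,18 * * * /bin/bash -l -c 'backup perform --trigger sample_backup'
```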

Want to contribute?

- Fork/clone the develop branch
- Write RSpec tests, and test against:
  - Ruby 1.9.3
  - Ruby 1.9.2
  - Ruby 1.8.7
- Try to keep the overall structure/design of the gem the same

I can't guarantee I'll merge every pull request. Also, I may accept your pull request and then drastically change parts of it to improve readability/maintainability. Feel free to discuss improvements or new functionality/features in the issue log before contributing if you need or want more information.