A heads up - I have tried everything mentioned above to do a restore, with no joy at all. My situation is dire because my PRODUCTION server crashed. Anyone have ideas on how I can do a tklbam restore without generating the error below?

I chose these 2 dirs because tklbam-restore creates a large /mnt/var/cache/tklbam/restore and unpacks the backup rootfs in /mnt/tmp/tklbam-xyz123

On top of that, the actual files being restored are written to their final location, say in /srv.

This means that the rootfs of the system must account for 3x the size of the data in the backup! (It might be between 2x and 3x if the data is compressible by duplicity, i.e. not really with images, media and such.) I don't know about you, but that sounds like a pretty hefty price to pay!

I know there's a "--no-rollback" option; perhaps that saves space? I will also try "--restore-cache-size=10%".
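For the record, this is roughly the invocation I mean - a sketch only, with the backup ID `1` as a placeholder (check `tklbam-restore --help` on your version for the exact flag spellings):

```shell
# Sketch: restore without saving rollback data, and cap the restore
# cache at 10% of free space. The backup ID "1" is a placeholder.
tklbam-restore 1 --no-rollback --restore-cache-size=10%
```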

Another obvious solution I can think of is to exclude THE large directory I have and manually copy it into place, but I am afraid /tmp/tklbam-xyz gets deleted when tklbam-restore completes.

So under those circumstances (or any case where the backup data won't compress well, e.g. pictures, videos, etc.) that's to be expected. Because the data doesn't compress well, when you restore it will download ~4GB (the archived backup). It will then cache the existing 4GB (as rollback data), then unpack the archived backup (another 4GB) and move it across to where it needs to go.
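Given those three copies, it's worth a quick pre-flight check that the volume backing /mnt has roughly 3x the archive size free before starting. A minimal sketch (the paths are the cache locations mentioned above; the 4GB figure is from this example):

```shell
# Rough pre-flight check: a ~4GB incompressible backup can need
# ~12GB of scratch space (download + rollback cache + unpacked copy).
df -h /mnt                                  # free space on the volume backing the caches
du -sh /mnt/var/cache/tklbam 2>/dev/null    # space already used by tklbam's cache, if any
```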

I'm not really sure how you could avoid the caching of the data without risking losing something along the way. Obviously, if you don't care about saving the rollback data, you can avoid that via the --no-rollback switch. Although it's worth mentioning that the rollback data from a restore onto a new server will generally be a lot less (probably only a few MB).

In fairness, a backup archive would rarely be as big as your test file. In most cases, the majority of files would be non-random text, which generally compresses really well (often to as little as 10% of the original size - sometimes even less).

If you want to do some more tests, I'd recommend doing a staged restore. I.e. something like this:
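Something along these lines - a sketch assuming tklbam-restore's --raw-download option and a backup ID of `1` (both placeholders; check `tklbam-restore --help` on your version):

```shell
# Stage 1: download and extract the backup archive only, without
# applying it - so you can inspect it and check sizes first.
tklbam-restore 1 --raw-download=/mnt/backup-raw

# Stage 2: restore from the already-downloaded extract.
tklbam-restore /mnt/backup-raw
```

That way the expensive download/unpack step is separated from the step that actually touches your filesystem.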

Alternatively, if you don't want to run a server with that much root volume overhead, one way to work around it would be an additional (temporary) volume for the purpose of caching your backup.

IIRC, on AWS it uses /mnt as the base for the cached and temp data. So if you add an additional (EBS) volume to your server and mount it at /mnt (before you do the restore), then all the caching should land on the additional volume. When you're done, you can unmount it and destroy it. TBH, I've been planning to explicitly document how to do that for a while but just haven't had the spare cycles.
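A rough sketch of that extra-volume approach, assuming a fresh EBS volume attached as /dev/xvdf (the device name varies by instance type and attachment point, and the backup ID `1` is a placeholder):

```shell
# Assumes a fresh EBS volume is already attached as /dev/xvdf.
mkfs.ext4 /dev/xvdf     # format the temporary volume
mount /dev/xvdf /mnt    # mount it where tklbam bases its cache/temp data
tklbam-restore 1        # run the restore; the caching lands on the new volume
umount /mnt             # when done, unmount...
# ...then detach and delete the volume via the AWS console or CLI.
```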