Linux backup stuff

31/07/2011

Booted my PC up with a Linux CD yesterday afternoon to do a ‘dd’ image of the hard disk so that I have a checkpoint in time to image my machine back to. Only thing is, it was still only halfway through making the 300GB image by 9am this morning after running all night. 😦

The command I issued was “dd if=/dev/sda of=/media/Iomega\ HDD/laptop-image-30th-july-2011.img” – which by default will read and write 512-byte blocks at a time, and that tiny block size is what makes it so slow.

Instead, I stuck the options “bs=100M conv=notrunc” on the end of it and the process has speeded up massively. bs=100M tells dd to copy 100MB blocks instead, and conv=notrunc stops it truncating the existing output file when it opens it. After about 20 minutes, I’m already 10% of the way through making the image.
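For reference, the full command now looks like this (same disk and destination as before – adjust the device and paths for your own setup):

dd if=/dev/sda of=/media/Iomega\ HDD/laptop-image-30th-july-2011.img bs=100M conv=notrunc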

Another helpful thing I discovered this morning is the -h switch on the ls command. -h makes the output ‘human-readable’ – i.e. it uses K, M and G to indicate kilobytes, megabytes and gigabytes. See below:
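For example, with and without -h (illustrative output rather than the actual listing from my machine):

$ ls -l
-rw-r--r-- 1 root root 322122547200 Jul 31 09:00 laptop-image-30th-july-2011.img
$ ls -lh
-rw-r--r-- 1 root root 300G Jul 31 09:00 laptop-image-30th-july-2011.img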

2 responses

Hello from a fellow UK Networking / Linux guy (one that also doesn’t smell and has a respectable Christmas card list… promise!)

Just a quick thought on this: I tackled a data copy (2TB disk with only 1TB used) in the following way with dd:

– Fill all the free space with a file made of zeros
– Delete the file
– Pipe dd if= etc through a compressor such as gzip, which will then squash the free space down to nothing thanks to your disk’s recent encounter with a massive file full of zeros (see the sketch below)

– It does mean that if you ever need the image you have to go back through the compressor, and it uses more CPU, but the images are much smaller and you don’t back up free space block for block.
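A rough sketch of the whole thing (device names and paths are just examples, and I’ve picked gzip for the compression step):

# fill the free space on the mounted filesystem with zeros – this will
# eventually stop with “no space left on device”, which is the point
dd if=/dev/zero of=/mnt/disk/zerofill bs=100M
rm /mnt/disk/zerofill

# image the whole disk through gzip – the zeroed free space compresses
# down to almost nothing
dd if=/dev/sda bs=100M | gzip > /media/Iomega\ HDD/laptop-image.img.gz

# restoring means going back through the compressor
gunzip -c /media/Iomega\ HDD/laptop-image.img.gz | dd of=/dev/sda bs=100M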

There are alternatives like rsync, but sometimes just grabbing a dd image feels simplest!