I was wondering whether there's a faster way of transferring files than rsync. Is there a utility that takes a directory as a whole piece, without having to walk each and every object inside it?

I was thinking of something like dd of a directory, or perhaps an option in rsync that I'm not aware of. I don't need individual file information down to the metadata while files are transferring. Something dumb that takes the directory node and grabs everything inside it; something that reads a directory as a block.

dd is a low level disk utility, it has no concept of file systems.
You can dd files but then the file system is traversed to know what blocks to copy.
You don't get any metadata at all, so if you pass dd a list of files, you can't sort them out later.

Regards,

NeddySeagoon

Computer users fall into two groups:-
those that do backups
those that have never had a hard drive fail.

You could tar the directory and untar it on the other end. Pipe it through ssh if you must.

Whether that's slower or faster than rsync depends on many circumstances.

I'd stick with rsync no matter what, unless it's something like a source-code project where git might be more appropriate.

I'd say use rsync if you are doing something repeatedly (especially in a cron job), but for speed and ease of setup, it's hard to beat a tar stream through a network connection. It's one of my favorite applications of tar. You can do this through an ssh tunnel if you can't trust the connection; assuming you are entering commands at the source machine, you could do this:
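Something along these lines should do it; targethost is a placeholder name, and the lines below the comments rehearse the same tar-to-tar stream locally through a plain pipe, so you can see the mechanism without a remote machine:

```shell
# Push /etc/portage/postsync.d to your home directory on targethost
# (hypothetical host name; the remote tar unpacks relative to $HOME):
#   tar cz -C /etc/portage postsync.d | ssh targethost tar xz
#
# The same tar-to-tar stream, rehearsed locally through a plain pipe:
mkdir -p /tmp/tarsrc/postsync.d /tmp/tardst
echo 'echo hook' > /tmp/tarsrc/postsync.d/example.sh
tar cz -C /tmp/tarsrc postsync.d | tar xz -C /tmp/tardst
ls /tmp/tardst/postsync.d        # → example.sh
```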

(The usual observations about SSH and home directories apply, as do a couple of tar details: when you log into a machine with SSH, the current directory is the user's $HOME, and tar's -C option requires that the target directory already exists. I'm assuming you know your way around tar.)

You can also do the transfer while seated at the target machine. The command you need has the same level of difficulty (viz. not awfully difficult). Here I pick up the postsync.d directory under /etc/portage and transfer it to postsync.d in my home directory on the target machine:

Code:

ssh sourcehost 'cd /etc/portage; tar cz postsync.d' | tar xz

The trouble with an ssh tunnel is that you incur the overhead of encryption and decryption. If you can reach the target over a trusted network (NOT the internet!), you can use netcat for the tunnel. The setup is a bit harder, since netcat has no way to issue commands on the remote machine the way SSH does. First pick a port number you want to open on one machine or the other; it doesn't matter whether that machine is the source or the target. Let's say 9000 (the model number of HAL from 2001). One machine listens on that port, and the other sends or receives by talking to that port on the listening machine. The command to listen is

Code:

nc -l -p 9000 -q 1

(The -q 1 parameter makes netcat close the connection quickly when the transfer finishes.)

The other machine talks to the listener at that port:

Code:

nc -q 1 otherhost 9000

These commands are the basis of netcat streaming; all you need is to set up a source and a sink. There are several permutations for setting this up. This example transfers a whole MySQL directory from one machine to another (there are much better ways to synchronize databases of course, but let's assume you're just setting up a new machine with the same MySQL version). On the source machine type

Code:

cd /; tar cz /var/lib/mysql | nc -l -p 9000 -q 1

and on the other type

Code:

nc -q 1 otherhost 9000 | tar xz -C /

Remember to set up the listener before you set up the talker.

Notice there's no mention of dd here. As NeddySeagoon points out, it does not automatically gather up files and metadata the way tar does. You wanted to pick up a directory as a whole, but the files in a directory on a mounted file system have to be accessed separately; there is nothing that can gloop them up in a single operation. The tar command comes awfully close: it picks up all the files as if it were a single operation, and the tar format has very little overhead.

You may be thinking of dd this way: you can move whole partitions with it, and stream those over network connections if that's what you need to do. This has lots of additional trickiness to pull off. You have to unmount the source partition and make sure the target partition is the same size; if you can't do that, the target must be larger, and you'll need parted or something similar to grow the file system once you've transferred the image. It's a big pain. Be happy you've got tar.
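For completeness, the dd version of a partition move would look roughly like this. The device and host names are placeholders, and the runnable lines below the comments just demonstrate the one property that makes it work: dd reproduces a raw image byte-for-byte.

```shell
# Hypothetical partition stream (source unmounted, names are placeholders):
#   target:  nc -l -p 9000 -q 1 | dd of=/dev/sdb1 bs=1M
#   source:  dd if=/dev/sda1 bs=1M | nc -q 1 targethost 9000
#
# Locally, dd copies a raw image exactly:
dd if=/dev/urandom of=/tmp/part.img bs=1024 count=64 2>/dev/null
dd if=/tmp/part.img of=/tmp/part.copy bs=4096 2>/dev/null
cmp /tmp/part.img /tmp/part.copy && echo images match
```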