Syncing Zarafa attachment files, all gzipped by default. I could run multiple instances, but that's less efficient than 10 threads. The network is 1 Gbit to 1 Gbit, between different datacenters, but that shouldn't be an issue. There are 24 SAS disks on the source side and intelligent storage with SSDs on the destination.
– Tom van Ommen Jun 10 '11 at 14:20


@Tom van Ommen - why do you think you're CPU-limited? How are multiple processes less efficient than threads if you really are CPU-limited?
– JimB Jun 10 '11 at 14:31


@Tom van Ommen, 10 processes do have more overhead than 10 threads; however, locking data structures between threads is a coding nightmare. It's often much more efficient (for the coder's time) to just spawn multiple processes and be done with it.
– Mike Pennington Jun 10 '11 at 14:36


@Guacamole - multiple threads could help in some situations, but if his link is saturated, he's not going to push any more through no matter how many threads he has. Rsync does use threads for concurrency, and isn't internally blocking on IO.
– JimB Jun 10 '11 at 14:40


@Guacamole - All I'm pointing out is that if he's using ssh as a transport, his throughput is limited by ssh itself (specifically the static receive window, unless he's using the HPN-SSH patches).
– JimB Jun 10 '11 at 15:28
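
Since the attachment files are already gzipped, one way to trim per-byte overhead on the ssh channel is to skip compression and pick a cheaper cipher. This is only a sketch of that idea, not the HPN patches; the host name and both paths are made-up placeholders:

    # Sketch: lighter ssh cipher, no ssh compression, since the files
    # are already gzipped. "backupserver" and both paths are placeholders.
    rsync -a --progress \
        -e "ssh -c aes128-ctr -o Compression=no" \
        /var/zarafa/attachments/ backupserver:/srv/zarafa/attachments/

This does nothing about the receive-window limit JimB mentions; it only reduces the CPU spent per byte on the ssh transport.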

If the disk subsystem of the receiving server is an array with multiple disks, running multiple rsync processes can improve performance. I am running 3 rsync processes to copy files to an NFS server (RAID 6 with 6 disks per RAID group) to saturate Gigabit Ethernet.
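
A minimal sketch of that multi-process approach, assuming the attachments are spread across top-level subdirectories; the directory layout and destination below are placeholders, not details from the answer:

    # One rsync per top-level subdirectory, run in parallel, then wait
    # for all of them. SRC and DST are assumed placeholder paths.
    SRC=/var/zarafa/attachments
    DST=backupserver:/srv/zarafa/attachments

    for dir in "$SRC"/*/ ; do
        rsync -a "$dir" "$DST/$(basename "$dir")/" &
    done
    wait    # block until every background rsync has exited

If there are many subdirectories, cap the number of concurrent jobs (for example with xargs -P) rather than launching hundreds of rsync processes at once.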

I've read many questions similar to this. I think the only real answer is to break up the copy/move manually. IOPS will be the issue here. If it makes you feel any better, I'm in the process of moving ~200 million files consuming well over 100 TB of disk space.
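
One hedged way to do that manual break-up is to build a file list, split it into chunks, and hand each chunk to its own rsync via --files-from, with xargs limiting how many run at once. The paths and chunk counts below are assumptions, not figures from the answer:

    # Build the list, split it into 8 roughly equal chunks (GNU split),
    # then run at most 4 rsync processes at a time, one per chunk.
    cd /var/zarafa/attachments
    find . -type f > /tmp/filelist.txt
    split -n l/8 /tmp/filelist.txt /tmp/chunk.

    ls /tmp/chunk.* | xargs -P 4 -I{} \
        rsync -a --files-from={} . backupserver:/srv/zarafa/attachments/

Splitting by file list rather than by directory keeps the chunks roughly even when a few directories hold most of the files, which matters more for IOPS than for raw bandwidth.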