However, I will echo what some of the commenters on the post have said, which is that when I have used such utilities as SuperCopier and TeraCopy, it's not because I wanted to speed up the absolute transfer time for copying a large number of files; it's because they are much more robust and flexible -- and fail much more gracefully -- whereas the default Windows copy will completely abort and die midway if a single file copy out of a thousand fails.

Also, a lot of stuff happened from XP to Vista. Pre-Vista, the maximum transfer size you'd get (can't remember if this was all the way down at the DMA level, or "just" some kernel<>usermode thing) was 64 KB. For Vista/Win2k8 this was increased to (IIRC) 4 MB for desktop versions and 64 MB for server versions.

+1 with Mouser and some of the comments on the original article. I was only in the habit of using TeraCopy for its error reporting and its ability to continue after it encountered a bad file. However, TeraCopy did seem to speed up massive transfers to and from networked drives. But I never did any serious testing on that, so it may have only seemed faster to me.

TeraCopy easily shaves an hour or two off when copying Oracle dump files to and from network drives. And yes, I quite regularly need to transfer 350 GB of dump files.

The "funny" thing is that extracting directly to the network drive is slower than extracting the dump files locally and then transferring them with TeraCopy.

Also, I concur with the others about TeraCopy being a reliable way of copying data without supervision -- something that is nigh impossible with Windows Explorer or any file manager that uses the default Explorer facilities.

The "funny" thing is that extracting directly to the network drive is slower than extracting the dump files locally and then transferring them with TeraCopy.

Have you tried mounting via NFS instead of SMB/CIFS? The SMB protocol, especially before the Vista/Win2k8 updated version, is notoriously slow. I haven't played with NFS myself, but it might be worth a try.

I use Teracopy myself (on XP, Vista and 7), and for the reasons given.

What I find interesting, though, is that we all tend to treat what are in theory copy acceleration utilities as real-life copy management utilities. And if Samer is right (and I believe he probably is) copy management has a speed cost in Windows.

My original question remains: is a *real* copy acceleration utility for Windows theoretically possible? How come none of the available utilities seem to achieve the acceleration they promise?

My original question remains: is a *real* copy acceleration utility for Windows theoretically possible? How come none of the available utilities seem to achieve the acceleration they promise?

I think that within a given PC, file copying speeds would be more dependent on hardware (and possibly drivers) than anything else. Windows is a mature operating system, so I'd suspect Microsoft has by now identified and taken care of most of the safely fixable bottlenecks. That would account for why copy accelerators had a more pronounced effect under XP -- and may well be getting in the way with newer versions of Windows.

Just guessing though.

Network file copying is a different matter, however, and there are definitely areas for improvement there, as well as lots of ways to optimize and tweak network performance. However, hardware can once again play a major role, since a faster network infrastructure yields faster transfers even if all other factors remain the same. So sometimes it's just more practical to put in faster NICs and data switches to get 'wire' transfer speed increases rather than bother with too much protocol or OS tinkering.

File copying in Explorer in Windows 7 is sometimes slowed down by having to deal with dialogs like this. It is sometimes useful, but I dislike that it pops up even for files that are exact copies. A smarter file manager would do a file hash comparison and skip copying that file if the hashes match, all in the background.

That's very strange... I did a comparison a couple of years ago between Windows and several of the other accelerators (Fastcopy, Supercopier 2, Copy Handler, FF Copy, Ultracopier and a couple more -- all of which would be no cost for commercial use, which ruled out TeraCopy) to see what they did locally and over a LAN. It wasn't a double-blind, duplicate-hardware kind of test, but it used the same sets of files and targets. In those tests Fastcopy was the clear winner speed-wise (although the interface is lousy, which is not an issue if you're scripting something). A few others were clumped behind that, generally not too far behind. Windows was never the fastest option, although some of the utilities were slower in some of the tests.

I don't have the details any more, but that said, the real end result is that I have Supercopier installed on my personal PCs. It was almost as fast as Fastcopy and does a good job with the management part that has been mentioned, and it has always been reliable. That was the surprising part to me -- with half of the other utilities I tested, I had issues with installation, lockups or copy failures. To my mind that immediately ruled them out.

That was back when I could do stuff like that for my previous employer. I can't recall if I was running XP or 7 at the time, but it would have been 32-bit in either case. It would probably be worth revisiting.

A smarter file manager would do a file hash comparison and skip copying that file if the hashes match, all in the background.

That would be pretty disastrous speed-wise -- definitely not something you want for a general file-copying routine.

I think it could be designed to avoid speed problems in many use cases. First, hashing would only be done for identical file names in the source and target folders. Second, the user could configure a maximum file size for the automatic hash check, depending on system speed and user preference. For example, if the operation only ran for same-named files under 100 megabytes in size, would there really be any problematic slowdown on a computer with a newish CPU? If you are copying thousands of files with name conflicts then yes, the delays will add up. But a smart file manager could also calculate the total number of such conflicts prior to the operation and, if the number reaches some upper limit, skip auto-hashing and display the regular interaction popups (including the checkbox labelled something like "do this action for all similar files"). So, smartly designed, it could avoid the possible slow-down cases and still save the user time and attention in all other cases.
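The scheme described above -- hash only same-named files, only below a size cap, and bail out if there are too many conflicts -- could be sketched roughly like this. A minimal sketch in Python; the function names and both thresholds are hypothetical, not taken from any real file manager:

```python
# Hypothetical sketch of the "smart" conflict handling described above.
import hashlib
import os
import shutil

MAX_HASH_SIZE = 100 * 1024 * 1024   # assumed cap: only auto-hash files under 100 MB
MAX_CONFLICTS = 50                  # assumed cap: above this, fall back to asking the user

def file_hash(path, chunk_size=1 << 20):
    # One sequential pass over the file, hashing in 1 MB chunks.
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.digest()

def smart_copy(sources, target_dir):
    # Count name conflicts up front; too many means hashing would add
    # noticeable delay, so revert to normal "ask the user" behaviour.
    conflicts = [s for s in sources
                 if os.path.exists(os.path.join(target_dir, os.path.basename(s)))]
    if len(conflicts) > MAX_CONFLICTS:
        raise RuntimeError("too many conflicts: show the regular popups instead")

    for src in sources:
        dst = os.path.join(target_dir, os.path.basename(src))
        if (os.path.exists(dst)
                and os.path.getsize(src) == os.path.getsize(dst)
                and os.path.getsize(src) < MAX_HASH_SIZE
                and file_hash(src) == file_hash(dst)):
            continue  # identical file already in the target: skip silently
        shutil.copy2(src, dst)
```

Note that the size check runs before any hashing, so files that differ in length are copied without ever being read twice.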

For example, if the operation only ran for same-named files under 100 megabytes in size, would there really be any problematic slowdown on a computer with a newish CPU?

CPU speed isn't really your main concern - something like Adler32 is pretty damn fast and "probably good enough". MD5 is also pretty fast, and since you're just comparing local files and not trying to be cryptographically safe, it will probably be sufficient.

The problem is that you're doing disk I/O. Instead of "1 read, 1 write", you'll be doing "2 reads + CPU, THEN perhaps 1 read, 1 write". OK, since you propose to use hashing, at least the two reads will run at full disk speed, whereas a compare-the-bytes approach would be slower (seeking back and forth on a mechanical drive kills performance). But there's still a lot of overhead in this!
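The "2 reads + CPU, then perhaps 1 read, 1 write" pattern might look something like the sketch below. A hedged illustration, assuming plain MD5 over sequential chunked reads; the function names are made up:

```python
# Sketch of the I/O pattern discussed above: hash both files sequentially,
# then copy only if they differ. Names are hypothetical.
import hashlib
import shutil

def hash_file(path, chunk=1 << 20):
    # One sequential pass: on a mechanical drive this runs at full disk
    # speed, unlike byte-by-byte comparison that seeks between two files.
    h = hashlib.md5()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk), b""):
            h.update(block)
    return h.digest()

def copy_if_different(src, dst):
    # "2 reads + CPU": hash both files first...
    if hash_file(src) == hash_file(dst):
        return False            # identical: skip the copy entirely
    shutil.copy2(src, dst)      # ...then perhaps "1 read, 1 write"
    return True
```

The overhead is visible in the structure: when the files do differ, the source gets read twice (once to hash, once to copy), which is exactly the cost being debated here.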

OK, but doing a SHA1 check on a 150 MB file takes about 0.3 seconds on my system, so a bit over 0.6 seconds for a hash comparison. That is not so bad, especially compared to how Explorer works now, requesting user attention and delaying the file transfer with a popup window full of text.
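Anyone who wants to check that 0.3-second figure on their own hardware could time it with something like the snippet below. A rough sketch; the function name is made up, and results will vary with disk speed and whether the file is already in the OS cache:

```python
# Rough timing of a chunked SHA1 pass over a single file.
import hashlib
import time

def time_sha1(path, chunk=1 << 20):
    h = hashlib.sha1()
    start = time.perf_counter()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk), b""):
            h.update(block)
    return h.hexdigest(), time.perf_counter() - start
```

Running it twice in a row is informative: the second run usually reads from the cache, which is the best case for the "hash before copy" idea and the worst case for generalizing the measurement.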