4 Answers

I'm quite sure using the OS-specific copy command would be faster than, or at least as fast as, a simple self-written solution. The OS command probably uses a sensible buffer size and other optimizations that you would otherwise have to figure out yourself.
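To make the comparison concrete, here is a minimal sketch of the "simple self-written solution" being discussed: a plain buffered stream copy. The 64 KB buffer size is an assumption picked for illustration; tuning it is exactly the work an OS copy command has already done for you.

```java
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class SimpleCopy {
    // Copies src to dst through a fixed-size buffer. The 64 KB size
    // is a guess for illustration, not a tuned value.
    static void copy(String src, String dst) throws IOException {
        try (InputStream in = new FileInputStream(src);
             OutputStream out = new FileOutputStream(dst)) {
            byte[] buf = new byte[64 * 1024];
            int n;
            while ((n = in.read(buf)) > 0) {
                out.write(buf, 0, n);
            }
        }
    }

    public static void main(String[] args) throws IOException {
        // Demo: copy a small temp file and print the copy's size.
        java.io.File src = java.io.File.createTempFile("copy-demo", ".src");
        java.io.File dst = java.io.File.createTempFile("copy-demo", ".dst");
        try (OutputStream out = new FileOutputStream(src)) {
            out.write("hello".getBytes("UTF-8"));
        }
        copy(src.getPath(), dst.getPath());
        System.out.println(dst.length());  // prints 5
        src.delete();
        dst.delete();
    }
}
```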

Edit:
x-x is right: you should not call the copy command directly. I thought Java already had a copy method, something like File.copy(), but I couldn't find one, not even in JDIC. So Apache Commons IO is probably the way to go.
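For what it's worth, Java 7 later added exactly the method this answer was looking for, java.nio.file.Files.copy; at the time, Apache Commons IO's FileUtils.copyFile was the usual stand-in. A minimal sketch on a modern JDK:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

public class NioCopy {
    public static void main(String[] args) throws IOException {
        Path src = Files.createTempFile("nio-demo", ".src");
        Path dst = Paths.get(src.toString() + ".copy");
        Files.write(src, "hello".getBytes("UTF-8"));
        // One call, no external library, no child process.
        Files.copy(src, dst, StandardCopyOption.REPLACE_EXISTING);
        System.out.println(Files.size(dst));  // prints 5
        Files.delete(src);
        Files.delete(dst);
    }
}
```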

One problem with running an OS command is that you have to create a full process at the OS level, and that is a heavyweight operation. The fixed overhead is large, and it will be particularly serious for smaller files.

The other problem is that it adds a system dependency without a good reason.

Assuming portability is not the primary concern here, the primary use case of such a method will involve copying many files as part of request processing, so performance is a very important criterion.
– jjoshi Sep 1 '09 at 6:01

Even if performance is an important criterion, the bottleneck will be the disk and not the CPU, so it does not matter what language you do it in.
– flybywire Sep 1 '09 at 6:05

A library to do a file copy!!! That's going a bit overboard for about 5 lines of code, isn't it? Or does it use some arcane Java concurrent read/write calls for maximum throughput?
– Adrian Pronk Sep 1 '09 at 11:42

@Adrian, the bottleneck is disk speed. Files cannot be copied faster than the disk allows, and if you do it reasonably, they can't be copied much slower either.
– flybywire Sep 1 '09 at 13:40

@x-x: I always prefer very large copy buffers (~16 MB) if I feel it's safe to use that much RAM, to avoid disk-seek thrashing in case the OS decides to interleave the reads and writes too much. Also, I've seen file copies that use 1-byte buffers, and they are often noticeably slow.
– Adrian Pronk Sep 6 '09 at 11:11
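One way to sidestep the buffer-size debate above entirely is FileChannel.transferTo, which asks the kernel to move the bytes itself (on many platforms via a sendfile-like path), so the JVM never has to pick a user-space buffer size at all. A sketch:

```java
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.nio.channels.FileChannel;

public class ChannelCopy {
    // transferTo delegates the copy to the OS; the loop guards against
    // partial transfers, which the API explicitly permits.
    static void copy(String src, String dst) throws IOException {
        try (FileChannel in = new FileInputStream(src).getChannel();
             FileChannel out = new FileOutputStream(dst).getChannel()) {
            long pos = 0, size = in.size();
            while (pos < size) {
                pos += in.transferTo(pos, size - pos, out);
            }
        }
    }

    public static void main(String[] args) throws IOException {
        java.io.File src = java.io.File.createTempFile("chan-demo", ".src");
        java.io.File dst = java.io.File.createTempFile("chan-demo", ".dst");
        try (FileOutputStream out = new FileOutputStream(src)) {
            out.write(new byte[1024]);  // 1 KB of zeros
        }
        copy(src.getPath(), dst.getPath());
        System.out.println(dst.length());  // prints 1024
        src.delete();
        dst.delete();
    }
}
```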