> One of the things we've noticed is that file-copies between temporary files
> and text-base files are done by doing read-writes of 512 byte chunks. On
> any vaguely modern PC, a buffer this small would seem to be better suited to
> a stress test of the operating system's disk-caching than to an efficient
> copy of a large file.
>
> This copy is being done by apr_file_transfer_contents, which uses the
> CRT-provided constant BUFSIZ to set the size of its chunk buffer. On current
> MS compilers, this is defined as 512. I'm not convinced (from reading the
> spec) that this constant is provided by ANSI 'C' as a general purpose 'use
> this buffer size in your file operations' value, and I suspect that 512
> bytes is *way* smaller than anything any of us would choose for a buffer
> size in this role. (I note that elsewhere, SVN uses SVN_STREAM_CHUNK_SIZE
> for similar functions, which is a much more respectable size.)
>
I don't think BUFSIZ is the modern way of choosing a buffer size. Usually
one uses st_blksize from a stat call, but that seems not to be available
in APR.
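
To illustrate the st_blksize approach: a minimal sketch (not APR code; the
function name and fallback value are my own) that picks a copy-buffer size
from a stat call, falling back to a generous default when stat fails or the
reported block size is implausibly small:

```c
#include <stdio.h>
#include <stddef.h>
#include <sys/stat.h>

/* Hypothetical sketch: choose a copy-buffer size from st_blksize.
   The 64 KB fallback is an arbitrary but far more respectable
   default than the 512-byte BUFSIZ on MS compilers. */
static size_t choose_buffer_size(const char *path)
{
    struct stat st;
    size_t size = 64 * 1024;  /* fallback when stat is unavailable */

    if (stat(path, &st) == 0 && st.st_blksize > 512)
        size = (size_t)st.st_blksize;

    return size;
}
```

Note that st_blksize is POSIX-only, which is presumably why APR does not
expose it portably.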

We have a special version of file copying already. Using our own loop
wouldn't be much code duplication. So, if this is a problem, we could do
this as a short-term solution. Of course, getting it into APR is even
better. Have you asked the APR people?
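
For what such a loop might look like: a short sketch of the short-term fix
(my own illustration, not SVN or APR code; the function name and 64 KB chunk
size are assumptions) doing plain chunked read/write with a buffer far larger
than BUFSIZ:

```c
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical copy loop with a 64 KB chunk buffer instead of
   512-byte BUFSIZ chunks.  Error handling kept minimal. */
#define COPY_CHUNK_SIZE (64 * 1024)

static int copy_contents(FILE *from, FILE *to)
{
    char *buf = malloc(COPY_CHUNK_SIZE);
    size_t n;
    int err = 0;

    if (!buf)
        return -1;

    /* fread returns short counts only at EOF or on error, so one
       read per iteration moves a full chunk in the common case. */
    while ((n = fread(buf, 1, COPY_CHUNK_SIZE, from)) > 0) {
        if (fwrite(buf, 1, n, to) != n) {
            err = -1;
            break;
        }
    }
    if (ferror(from))
        err = -1;

    free(buf);
    return err;
}
```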