or there's some pretty effective read-ahead action going on with the disk driver

Hm. I used XP personally and professionally for circa 10 years, and I never encountered a situation where the first run of a program reading a file wasn't substantially slower than the second run, due to cache priming. (Excepting when the file in question was much bigger than the available cache memory, in which case the second run had to re-read the entire file from disk anyway.)

There have been options (FILE_FLAG_RANDOM_ACCESS/FILE_FLAG_SEQUENTIAL_SCAN) in NTFS since its inception, designed to give the OS clues as to the best caching strategy to use. But: a) in some fairly extensive testing I performed back in the day on XP, the use of these flags made little or no detectable difference; and b) Perl doesn't use them.
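For the record, those flags are passed in the dwFlagsAndAttributes argument of CreateFileW. A minimal ctypes sketch (the flag values are the documented Win32 constants; "big.dat" is a hypothetical filename; Windows only, obviously):

```python
import ctypes
import sys

GENERIC_READ              = 0x80000000
FILE_SHARE_READ           = 0x00000001
OPEN_EXISTING             = 3
FILE_FLAG_SEQUENTIAL_SCAN = 0x08000000  # hint: file will be read once, front to back
FILE_FLAG_RANDOM_ACCESS   = 0x10000000  # hint: seeks everywhere, read-ahead is wasted

def open_with_hint(path, hint=FILE_FLAG_SEQUENTIAL_SCAN):
    """Open a read-only handle with a caching hint (Windows only)."""
    if sys.platform != "win32":
        raise OSError("CreateFileW is only available on Windows")
    return ctypes.windll.kernel32.CreateFileW(
        path, GENERIC_READ, FILE_SHARE_READ, None,
        OPEN_EXISTING, hint, None)
```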

And the idea is easily disproved. Download CacheSet; start the program, hit the "Clear" button and confirm.

Then run one of the tests twice in succession and record the run times. Pounds to pennies, the first is substantially slower than the second.
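If you want numbers rather than a stopwatch, something along these lines (a hypothetical helper, not what Perl does internally) will show the difference:

```python
import time

def time_read(path, bufsize=1 << 20):
    """Return (seconds, bytes) for one sequential read of path."""
    total = 0
    t0 = time.perf_counter()
    with open(path, "rb") as fh:
        while True:
            chunk = fh.read(bufsize)
            if not chunk:
                break
            total += len(chunk)
    return time.perf_counter() - t0, total

# After clearing the cache, the first call hits the disk; the second,
# immediately after, is served from the file cache:
#   t1, _ = time_read("testfile")
#   t2, _ = time_read("testfile")
```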

With the rise and rise of 'Social' network sites: 'Computers are making people easier to use everyday'

Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.

"Science is about questioning the status quo. Questioning authority".

In the absence of evidence, opinion is indistinguishable from prejudice.

I'm on CentOS, a Linux distro, not Windows. I did issue the 'sync' command, but it apparently only commits the buffer cache to disk rather than emptying it. I also tried touching the file. In all cases, I'm able to cat my testfile to /dev/null in under one second. So I don't know what's up.

I did issue the 'sync' command, but it apparently only commits buffer cache to disk rather than emptying it.

A crude but usually effective way of flushing one file from the cache is to cat a file that is bigger than the cache. Say, copy/append your 80GB datafile to another file 5 times (=400GB), and then cat that to /dev/null before running your tests. Might work for you.
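On Linux there is also a more surgical alternative to the 400GB cat: posix_fadvise with POSIX_FADV_DONTNEED asks the kernel to drop just that one file's cached pages. A sketch (assumes Python >= 3.3 on Linux; untested on CentOS specifically):

```python
import os

def evict_from_cache(path):
    """Ask the kernel to drop any cached pages for this one file."""
    fd = os.open(path, os.O_RDONLY)
    try:
        os.fsync(fd)  # flush any dirty pages first, like 'sync' for this file
        # offset=0, length=0 means "the whole file"
        os.posix_fadvise(fd, 0, 0, os.POSIX_FADV_DONTNEED)
    finally:
        os.close(fd)
```

The next read of the file should then come from disk rather than cache, without disturbing everything else the machine has cached.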

I'm able to cat my testfile to /dev/null in under one second.

Assuming this is your 10e6-record testfile, that it is representative of your 80GB file, and that it averages 86 characters/line, that gives a filesize of ~820MB.

The very best sequential-read throughput figure I can find for a non-raided 15k local drive is a little over 100MB/s.

That pretty much confirms that your tests are reading from cache rather than from disk. Even the most optimistic read-ahead algorithm cannot drive the interface at 8 times the drive's maximum throughput.
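The back-of-an-envelope arithmetic behind those numbers:

```python
lines          = 10_000_000   # the 10e6-record testfile
bytes_per_line = 86           # assumed average characters/line

filesize_mb = lines * bytes_per_line / 2**20   # ~820 MiB

read_time_s   = 1.0           # "under one second", so this is an upper bound
implied_mb_s  = filesize_mb / read_time_s      # >= ~820 MB/s apparent throughput
disk_max_mb_s = 100           # best figure found for a non-raided 15k drive

ratio = implied_mb_s / disk_max_mb_s           # ~8x the physical maximum
```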
