In disk I/O there is a thing called the elevator. The disk subsystem tries to avoid thrashing the disk head all over the platters, so it re-orders I/O requests (when not prohibited, e.g. by a barrier) so that the head sweeps from the inside of the disk to the outside and back, performing the requested I/Os along the way.
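You can see which elevator is handling each block device through sysfs; the active one is shown in brackets. A quick sketch (device names vary per system):

```shell
# List the I/O scheduler for every block device; the name in
# brackets (e.g. [mq-deadline]) is the one currently in use.
for f in /sys/block/*/queue/scheduler; do
    dev=${f#/sys/block/}              # strip the leading path
    printf '%s: %s\n' "${dev%%/*}" "$(cat "$f")"
done
```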

The second thing is I/O request merging. If many requests arrive within a short time window and access adjacent portions of a file, the I/O subsystem will try to fetch all the data in one go instead of issuing several disjoint requests.
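The kernel keeps per-device counters of how often merging actually happens, so you can check whether your workload benefits from it. A small sketch reading `/proc/diskstats` (columns 5 and 9 are reads merged and writes merged; column 3 is the device name):

```shell
# Print how many read and write requests the kernel has merged
# for each block device since boot.
awk '{ printf "%-12s reads_merged=%s writes_merged=%s\n", $3, $5, $9 }' /proc/diskstats
```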

As far as tuning goes: if you are the application writer, there's a lot you can do. Issue large, sequential I/Os whenever you can, and use fsync() et al. when you need to be sure that the data is on the platters.
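For example, the same idea from the shell: `dd` with a large block size issues big sequential writes, and `conv=fsync` makes it call fsync() on the output file before exiting (the `/tmp` path is just for illustration):

```shell
# Write 64 MiB in large, sequential 1 MiB requests; conv=fsync makes
# dd fsync() the output file before exiting, so the data reaches
# stable storage rather than just the page cache.
dd if=/dev/zero of=/tmp/seqwrite.bin bs=1M count=64 conv=fsync
rm /tmp/seqwrite.bin
```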

If you are a sysadmin and you know for certain that the data requests of two apps leapfrog while each reads a file sequentially (e.g. you have two DVDs being transcoded in parallel), then yes, increasing readahead should help. Otherwise you need to look at your I/O patterns and sizes, and consider your RAID level (if any) and other factors, before doing any tuning. Find out what your real bottlenecks are before you start tuning; it can be difficult to guess what is really limiting your system.
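Readahead is a per-device setting exposed in sysfs as `read_ahead_kb` (the kernel default is usually 128 KiB). A sketch of inspecting and raising it ("sda" below is a placeholder for your disk):

```shell
# Show the readahead window, in KiB, for every block device.
for f in /sys/block/*/queue/read_ahead_kb; do
    printf '%s: %s KiB\n' "$f" "$(cat "$f")"
done

# As root, e.g. raise it to 1 MiB for one disk:
# echo 1024 > /sys/block/sda/queue/read_ahead_kb
```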

On Linux you can choose among several I/O scheduling algorithms. I had to do a piece on this at school, and this article from Red Hat helped me a lot. Although it is written specifically for Red Hat, you can find these schedulers in virtually any Linux distro.
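Switching schedulers is a one-line sysfs write; this is a sketch only, since the device name and the available scheduler names ("bfq" here) vary by system and kernel version:

```shell
# As root, pick a different elevator at runtime by writing its name
# into the scheduler file ("sda" is a placeholder for your disk):
#   echo bfq > /sys/block/sda/queue/scheduler
# On older (pre-blk-mq) kernels, a boot-time default could also be set
# with the elevator= kernel parameter, e.g. elevator=deadline.
```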