I'm trying to understand the built-in IO cache on Windows. If an application writes to a file and then reads/writes it multiple subsequent times, shouldn't the IO cache let those accesses run at the speed of system memory, with Windows gradually writing the changes back to the physical disk in a non-blocking way?

However, I've seen other ramdisk-related questions on this site where users see significant gains by mounting a portion of system memory as a disk drive. If the IO cache works as I described above, why is this even necessary? Does Windows have settings to tweak this?

These lead up to my real question: Is there a point to using a ramdisk and manually syncing changes back to a physical disk, even minutes or hours later?

1 Answer

I think you're confusing the functionality of direct memory access (DMA, a hardware feature) with the features provided by the Cache Manager in Windows.

DMA is a method that lets IO devices access system memory directly, without CPU intervention. Applications don't use DMA; device drivers do. Applications are far removed from the DMA process.

Applications' read/write requests pass through a variety of layers on the way to the IO device. The Cache Manager handles the bulk of the caching of those requests in system memory. The IO devices themselves, and their drivers, may also implement caching of their own.
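You can see the Cache Manager's write-back behavior from an ordinary program. The sketch below (Python, with a made-up file name and payload size, purely for illustration) times a buffered write, which typically completes at memory speed because it lands in the OS file cache, against a flush-plus-fsync, which blocks until the data has actually reached the device:

```python
import os
import tempfile
import time

# Illustrative only: path and payload size are arbitrary choices.
path = os.path.join(tempfile.gettempdir(), "cache_demo.bin")
data = os.urandom(4 * 1024 * 1024)  # 4 MiB payload

with open(path, "wb") as f:
    t0 = time.perf_counter()
    f.write(data)             # usually satisfied by the OS file cache (fast)
    t1 = time.perf_counter()
    f.flush()                 # push Python's userspace buffer to the OS
    os.fsync(f.fileno())      # block until the device acknowledges the write
    t2 = time.perf_counter()

print(f"buffered write: {t1 - t0:.6f}s")
print(f"flush + fsync:  {t2 - t1:.6f}s")
os.remove(path)
```

On most systems the fsync step dominates, which is exactly the latency the cache normally hides from applications.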

If your application doesn't interact well with the Cache Manager but does with a RAM disk (which you'd determine by benchmarking) then, by all means, use a RAM disk. To me there's little point in academically dancing around the subjective "good" of various technologies. In terms of a production deployment, whether there's a "point" to using a given technology should be decided by benchmarks run under conditions as close to real-world as you can simulate. When you make changes to the OS, driver stack, application code, etc., you should redo your benchmarks, since your old assumptions may no longer hold true.
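A benchmark along those lines doesn't need to be elaborate. Here is a minimal sketch: time a few write-fsync-read cycles under each candidate directory and compare the medians. The `R:\` drive letter for the RAM disk is an assumption; substitute whatever mount points you actually have.

```python
import os
import statistics
import tempfile
import time

def bench_rw(directory, size=1024 * 1024, rounds=5):
    """Median seconds per write+fsync+read cycle of `size` bytes
    in `directory`. Lower is better."""
    path = os.path.join(directory, "bench.tmp")
    payload = os.urandom(size)
    samples = []
    for _ in range(rounds):
        t0 = time.perf_counter()
        with open(path, "wb") as f:
            f.write(payload)
            f.flush()
            os.fsync(f.fileno())   # include the cost of reaching the device
        with open(path, "rb") as f:
            f.read()
        samples.append(time.perf_counter() - t0)
    os.remove(path)
    return statistics.median(samples)

# Hypothetical comparison -- replace with your real physical and RAM disk paths:
# print(bench_rw("C:\\temp"), bench_rw("R:\\"))
print(bench_rw(tempfile.gettempdir()))
```

A real benchmark should mimic your application's actual access pattern (request sizes, read/write mix, sync frequency), since that's precisely what determines how well the Cache Manager serves it.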

From what you described, I don't understand how a RAM disk would ever be faster. If a RAM disk isn't used, shouldn't the OS devote that same memory to the Cache Manager when it sees a lot of IO activity?
– JoeCoder Apr 14 '11 at 1:45


@JoeCoder: The interaction between the Cache Manager's algorithms and your application's algorithms could lead to sub-optimal performance with the data backing the cache on disk versus in a RAM disk. Using a RAM disk "pins" data in RAM, whereas relying on the Cache Manager means that the Memory Manager could discard pages of cached data depending on what other IO and memory activity is going on on the machine.
– Evan Anderson Apr 14 '11 at 2:59

Part of his question was "does Windows have settings to tweak this?" So, is there a way to configure the Cache Manager? Can you fine-tune the amount of RAM it uses per disk, how aggressive it is, which caching strategy it uses, etc.?
– BrainSlugs83 Jan 12 '13 at 3:46