and you will see the "cached" value drop by the 100MB you copied to the RAM-based filesystem (assuming there was enough free RAM; if the machine is already over-committed on memory, some of it may have ended up in swap instead). Running "sync; echo 3 > /proc/sys/vm/drop_caches" before each call to free flushes anything pending in the write buffers (the sync) and clears all cached/buffered disk blocks from memory, so free's "cached" value then reflects only the other allocations.
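You can reproduce that effect directly from the shell. A minimal sketch, assuming a Linux system with a tmpfs mounted at /dev/shm (the default on most distributions); the 10MB size and the demo filename are arbitrary choices for illustration:

```shell
#!/bin/sh
# Pages written to a tmpfs live in the page cache, so they show up in the
# "Cached:" line of /proc/meminfo (the source of free's "cached" figure).
before=$(awk '/^Cached:/ {print $2}' /proc/meminfo)

# Write 10MB into the RAM-backed filesystem (path is an assumption).
dd if=/dev/zero of=/dev/shm/cache-demo bs=1M count=10 2>/dev/null

after=$(awk '/^Cached:/ {print $2}' /proc/meminfo)
echo "cached grew by roughly $((after - before)) kB"

# Deleting the file releases the pages again.
rm -f /dev/shm/cache-demo
```

On a busy machine other activity will blur the numbers, so expect a difference of roughly, not exactly, 10240 kB.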

The RAM used by virtual machines (such as those running under VMWare) will also be counted in free's "cached" value, as will RAM used by currently open memory-mapped files.

So it isn't as simple as "buffers counts pending file/network writes and cached counts recently read/written blocks held in RAM to save future physical reads", though for most purposes this simpler description will do.
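For most purposes, then, watching the two raw counters is enough. A small sketch that reads them straight from /proc/meminfo, which is where free gets its figures; the drop_caches step is shown commented out because it needs root:

```shell
#!/bin/sh
# Read the "Buffers:" and "Cached:" counters (in kB) that free reports.
buffers_kb() { awk '/^Buffers:/ {print $2}' /proc/meminfo; }
cached_kb()  { awk '/^Cached:/  {print $2}' /proc/meminfo; }

echo "buffers: $(buffers_kb) kB, cached: $(cached_kb) kB"

# As root, flush pending writes and drop clean cached/buffered blocks,
# then re-read to see how much of that RAM was reclaimable:
# sync
# echo 3 > /proc/sys/vm/drop_caches
# echo "after drop: buffers $(buffers_kb) kB, cached $(cached_kb) kB"
```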

+1 for interesting nuances. This is the kind of information I'm looking for. In fact, I suspect that the figures are so convoluted, so involved in so many different activities, that they are at best general indicators.
– Avery Payne Jun 10 '09 at 17:49

I was looking for a clearer description of buffers, and I found one in "Professional Linux® Kernel Architecture" (2008):

Chapter 16: Page and Buffer Cache, "Interaction":

Setting up a link between pages and buffers serves little purpose if there are no benefits for other parts of the kernel. As already noted, some transfer operations to and from block devices may need to be performed in units whose size depends on the block size of the underlying devices, whereas many parts of the kernel prefer to carry out I/O operations with page granularity as this makes things much easier — especially in terms of memory management. In this scenario, buffers act as intermediaries between the two worlds.