Why (on Linux) am I seeing so much RAM usage?

There are better options for seeing your memory usage; however, it seems `free` is particularly attuned to creating the confusion I’m attempting to quell here. That said, see the Red Hat documentation on /proc/meminfo.

Other commands to use to see memory usage

$ vmstat -aS M  # see the "inactive" column for a rough idea of "free"
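The same numbers can be pulled straight from /proc/meminfo. Here’s a minimal sketch of the “really free” arithmetic (MemFree + Buffers + Cached) using made-up values in place of a live system’s file — substitute `cat /proc/meminfo` for the sample string on a real box:

```shell
# Illustrative /proc/meminfo-style lines (values are made up, in kB)
meminfo='MemTotal:  2072576 kB
MemFree:     54272 kB
Buffers:    102400 kB
Cached:     921600 kB'

# Rough "really free" memory = MemFree + Buffers + Cached
really_free=$(printf '%s\n' "$meminfo" |
  awk '/^(MemFree|Buffers|Cached):/ { sum += $2 } END { print sum }')
echo "${really_free} kB"
```

On this sample it prints 1078272 kB — roughly 1 GB available on a box that `free` would call nearly full.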

The real answer

There’s no reason to clear what’s in RAM until you need more space to write to it.

The short answer analogy

Clearing the buffers and cache in RAM for its own sake is silly. Imagine a professor who, rather than writing all the way across the chalkboard, finishes a sentence and immediately erases it and starts writing in the upper-left corner AGAIN and AGAIN and AGAIN.

OR imagine you like a song. You record it to the beginning of a cassette tape. When you want a new song, do you re-record over the first song or record after it?

AKA: The horrible House/Barn analogy

Many people new to Linux, or to computers in general, have a poor understanding of how RAM works. On Linux systems, most users will look at `top` or run `free` to see the amount of memory installed and/or free.

At first glance, they may look at their machine with 2 GB of RAM and wonder how they have only 53 MB free! While this is true, the surprise, fear, or angst about it comes from a misunderstanding.

We could take a trip to a million places for this horrible analogy, but let’s pretend we’re on a country farm.
Rather than working with 2024MB of RAM and 1953MB of SWAP, we’ll say we’ve got 20 beds in the house, and 20 beds in the barn.
Rather than programs we’ll have people occupying the space.
For our purposes, ignore costs of cleaning the bedding, water, etc.
The house can hold active workers or non-active workers.

Due to its distance and the time to get to/from it, the barn can only hold non-active workers. When a worker is called from the barn they will have to pass through the house and stay in the house while they work.

10 laborers show up to a job. Since the house is closer to the food, showers, and work they’ll be doing, we let them stay in the house.

The first job is over, we no longer need to keep around the first 10 laborers; however, letting them stay doesn’t cost us anything, as if they weren’t there the beds would just be empty (i.e., go to waste).

Let’s take a timeout and review. Right now we have 20 rooms; 18 are being used, so only 2 are free. However, since 10 workers aren’t being used, they are “in cache” – kept around because we have no reason to kick them out – so we actually have operating space for 12 workers we could hire: 2 can take the unused rooms, and 10 can replace those who are merely waiting around.
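In `free` terms, the arithmetic the analogy is making is simply free-plus-cache. A quick sketch with the bed counts above:

```shell
# Bed counts from the analogy (house only)
total=20        # total beds in the house
used=18         # beds currently occupied
inactive=10     # occupants kept around "in cache"

free=$((total - used))                 # truly empty beds: 2
operating_space=$((free + inactive))   # beds we could hand out on demand
echo "$operating_space"
```

This prints 12 — the number the “only 2 free!” reading of the output completely hides.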

We have a new job on the farm, so we have 4 new people show up. We do not have enough beds for them. 2 of the 10 who are not active leave. We move in those 4 new people.

Right now we have 20 rooms filled. 8 are filled by people who aren’t working though, so technically we have 8 beds we can use if we need to. Now let’s get crazy.

It’s production season and we have a lot to do around the farm. We set up another project and need to hire 14 new workers for it. We’ll have to kick out the 8 non-active workers and move 8 of the new workers in. However, because we’ve run out of rooms in the house, our least important workers will have to stay in the barn. The barn is still a fine place to keep them, but it will take them longer to get to and from the job each time they’re needed.

20 of 20 beds are used by active workers. 6 rooms in the barn are used.

Now, things calm down again and only 4 workers are going to remain active. We’re not going to toss out the rest, though, as they’re not harming anything – just taking up space (at least not until we need the space again).

That’s right: our “free” stays 0, as we still have no empty beds. The important thing to look at here is how much we would have available if we cleaned out the buffers and cache – which are not necessary to keep, but which we generally keep until they need to be discarded.
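Older versions of `free` print exactly this as the “-/+ buffers/cache” line, and the arithmetic behind it is plain subtraction and addition. A sketch with illustrative numbers (on a real machine the inputs come from your own `free -m` output):

```shell
# Illustrative free -m numbers, in MB
total=2024; used=2024; free=0
buffers=120; cached=1400

# What free's "-/+ buffers/cache" line reports:
used_real=$((used - buffers - cached))   # memory applications actually hold
free_real=$((free + buffers + cached))   # memory available on demand
echo "-/+ buffers/cache: $used_real $free_real"
```

So a machine showing 0 free can still have 1520 MB ready to hand out the moment a program asks.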

That’s right. You just read an entire horrible article about RAM just to know to look at your buffers/cache line before wondering why so much RAM is used.

What about the green argument? Holding information in RAM requires power, or else the computer forgets it. Doesn’t RAM with nothing in it cost the system no power, so that you’d be more power-wise by keeping system memory clear?

More information in RAM doesn’t mean more power consumption (or vice versa). Most memory is built from capacitors – a “destructive-read” technology – because they’re so cheap. Whenever a bit is read, it’s destroyed. Picture a bottle of water painted completely black on the outside: you have to empty it to know what’s inside! This happens billions of times a second.

Now, pretend there’s a hole in that black-painted bottle. You’d have to empty and refill it just to make sure it retained some semblance of its contents. This is true of memory too: capacitors are like leaky water tanks that continually drain. The computer has to read (which takes power), calculate (which takes power), and write back (which also takes power) thousands of times a second just to keep information in memory. More goes into it, of course, but in the grand scheme of things, whether the memory is full or empty makes little difference to power consumption.

Not to mention that unallocated memory is undefined and can be all 1s, all 0s, or any combination of the two ;-).

# 27 May 2009 at 2:08 am

syamsul said:

Thanks Chris for the insightful article. Just one question – do OpenVZ and Xen based VPSes differ in the way they manage/allocate memory?

The reason I’m asking is that on my OpenVZ VPS, I hardly see anything under Buffers or Cached after doing a free -m.

OTOH, the Xen VPS seems to be allocating quite a bit to buffers and cache like you said.

“OR imagine you like a song. You record it to the beginning of a cassette tape. When you want a new song, do you re-record over the first song or record after it?”

Damn, it’s 2009 🙂 Who on earth uses cassette tapes for recording anymore?
Anyhow, thanks for the explanation.

# 18 September 2009 at 1:11 am

Nate Johnston said:

Chris,

There are some applications that will have issues when confronting a large cache size. In particular, I had a Tomcat instance complaining of memory exhaustion on an 8 GB Linux host; I was able to drop the cache size from 4015 MB to 50 MB.

First, here is the status quo ante. The “cached” field reflects the combination of the pagecache, the inode cache, and the dentry cache. The pagecache is a copy of files on disk, kept for speedier access. Since this application does not need speedier access to files on disk, its size can be tuned down.

Finally, to cause the cache to be freed I used drop_caches to drop just the clean pages in the pagecache. Always, always sync (thrice!) before doing this, because in very rare instances it can cause a kernel panic if the number of dirty pages causes the swapout mechanism to choke.
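The exact commands aren’t shown in the comment, but the drop mechanism lives in /proc/sys/vm/drop_caches on 2.6.16 and later kernels, and the usual sequence looks like this (run as root; the value controls what is dropped):

```shell
# Flush dirty pages to disk first, as the comment advises
sync; sync; sync

# 1 = drop clean pagecache only; 2 = dentries and inodes; 3 = both
echo 1 > /proc/sys/vm/drop_caches
```

This only discards clean, reclaimable data — the kernel would have dropped it anyway under memory pressure, which is why it’s a diagnostic tool rather than a tuning knob.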

The pagecache can still vary upwards, but hopefully it should not grow larger than 20% of RAM under this configuration – the second of the three numbers vm.pagecache is set to. The setting can be made permanent across reboots by adding it to /etc/sysctl.conf with the following line:

vm.pagecache = 10 20 40

Conclusion:
Tuning the dentry and inode caches via vfs_cache_pressure is not necessary. Those caches were not dropped when I used drop_caches, yet the bulk of the memory was freed. I think that on a system with the webserver-like high-network-I/O, low-disk-I/O profile of the iloga hosts, adjusting the pagecache variable is enough to get the system to a state where the memory is visibly free.
