Depending on how you look at it, ps is not reporting the real memory usage of processes. What it is really doing is showing how much real memory each process would take up if it were the only process running. Of course, a typical Linux machine has several dozen processes running at any given time, which means that the VSZ and RSS numbers reported by ps are almost definitely "wrong".

27 Answers

With ps or similar tools you will only get the amount of memory pages allocated by that process. This number is correct, but:

(a) does not reflect the actual amount of memory used by the application, only the amount of memory reserved for it

(b) can be misleading if pages are shared, for example by several threads or by using dynamically linked libraries

If you really want to know what amount of memory your application actually uses, you need to run it within a profiler. For example, valgrind can give you insights about the amount of memory used, and, more importantly, about possible memory leaks in your program.
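As a minimal sketch of such a run (where `./myapp` is a placeholder for your own binary), valgrind's default memcheck tool is invoked like this:

```shell
# Run the target under valgrind's default memcheck tool; a leak summary
# is printed when the program exits. "./myapp" is a placeholder binary.
valgrind --leak-check=full ./myapp
```

Expect a substantial slowdown while the program runs under valgrind.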

To interpret the results generated by valgrind, I can recommend alleyoop. It isn't too fancy, and tells you simply what you need to know to locate sources of leaks. A nice pair of utilities.
– Dan, Sep 26 '08 at 15:55


Item (a) isn't true. ps shows virtual size as well as resident usage, which is the actual number of page frames used by the process.
– siride, Sep 16 '10 at 14:16


Item (a) is correct. There is a difference between pages used and memory actually allocated by the application via calls to malloc(), new, etc. The resident usage just shows how much of the paged memory is resident in RAM.
– jcoffland, Nov 5 '10 at 9:16


This doesn't really tell how to get memory usage using valgrind?
– Matt Joiner, Mar 11 '11 at 3:17


the default valgrind tool, memcheck, is useful for detecting memory leaks, but it's not really a memory profiler. For that, you want valgrind --tool=massif.
– Todd Freed, Jan 13 at 13:09
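A sketch of a massif session (with `./myapp` again standing in for your program):

```shell
# Profile heap usage over time with massif; it writes massif.out.<pid>.
valgrind --tool=massif ./myapp

# Render the recorded snapshots as a text graph of heap growth.
ms_print massif.out.*
```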

Am I missing something? The question asked how to better measure memory usage by a process, given that VSZ and RSS reported in ps is misleading. Your answer details how to look up the VSZ - the same value that was mentioned as being misleading.
– thomasrutter, Dec 6 '12 at 12:03


@thomasrutter Yes, you are missing the original question (rev 1), it has been edited several times and is quite old (2008). The original question just asked how to measure the memory usage of a process. Feel free to edit questions and answers, though, if things are outdated. :)
– DustinB, Jul 31 '14 at 13:04

In recent versions of Linux, use the smaps subsystem. For example, for a process with a PID of 1234:

cat /proc/1234/smaps

It will tell you exactly how much memory it is using at that time. More importantly, it will divide the memory into private and shared, so you can tell how much memory your instance of the program is using, without including memory shared between multiple instances of the program.
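Since smaps prints one block per mapping, you have to total the fields yourself. A rough sketch that sums the Pss lines (PID 1234 is a placeholder):

```shell
# Sum the Pss entries (shared pages divided among their users), in kB.
pid=1234
awk '/^Pss:/ { total += $2 } END { printf "%d kB\n", total }' "/proc/$pid/smaps"
```

On newer kernels, /proc/&lt;pid&gt;/smaps_rollup exposes these totals precomputed.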

Use smem, which is an alternative to ps which calculates the USS and PSS per process. What you want is probably the PSS.

USS - Unique Set Size. This is the amount of unshared memory unique to that process (think of it as U for unique memory). It does not include shared memory. Thus this will under-report the amount of memory a process uses, but is helpful when you want to ignore shared memory.

PSS - Proportional Set Size. This is what you want. It adds together the unique memory (USS), along with a proportion of its shared memory divided by the number of other processes sharing that memory. Thus it will give you an accurate representation of how much actual physical memory is being used per process - with shared memory truly represented as shared. Think of the P being for physical memory.

How this compares to RSS as reported by ps and other utilities:

RSS - Resident Set Size. This is the amount of shared memory plus unshared memory used by each process. If any processes share memory, this will over-report the amount of memory actually used, because the same shared memory will be counted more than once - appearing again in each other process that shares the same memory. Thus it is fairly unreliable, especially when high-memory processes have a lot of forks - which is common in a server, with things like Apache or PHP(fastcgi/FPM) processes.

Note: smem can also (optionally) output graphs such as pie charts and the like. IMO you don't need any of that. If you just want to use it from the command line as you might use ps -A v, then you don't need to install the recommended python-matplotlib dependency.
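Assuming the smem package is installed, a typical command-line invocation looks like:

```shell
# Per-process USS/PSS/RSS with unit suffixes, sorted by PSS (largest last).
smem -k -s pss

# Restrict the listing to processes whose name matches a regex.
smem -k -P apache2
```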

One critical point about RSS is that most applications these days share a lot of code pages. Every shared library (e.g. libc and libstdc++) will be counted for every process using it. And if there are multiple instances of a process running, all of that code will be double-counted.
– David C., Nov 7 '14 at 20:19

Precisely, which is why RSS over-reports in terms of actual physical memory per process.
– thomasrutter, Nov 8 '14 at 11:42

If you want to analyse memory usage of the whole system or to thoroughly analyse memory usage of one application (not just its heap usage), use exmap. For whole-system analysis, find the processes with the highest effective usage; they take the most memory in practice. Find the processes with the highest writable usage; they create the most data (and therefore possibly leak, or are very inefficient in their data usage). Select such an application and analyse its mappings in the second list view. See the exmap section for more details. Also use xrestop to check for high usage of X resources, especially if the process of the X server takes a lot of memory. See the xrestop section for details.

If you want to detect leaks, use valgrind or possibly kmtrace.

If you want to analyse heap (malloc etc.) usage of an application, either run it in memprof or with kmtrace, profile the application and search the function call tree for biggest allocations. See their sections for more details.

There isn't a single answer for this because you can't precisely pinpoint the amount of memory a process uses. Most processes under Linux use shared libraries. For instance, let's say you want to calculate memory usage for the 'ls' process. Do you count only the memory used by the executable 'ls' (if you could isolate it)? How about libc? Or all the other libs that are required to run 'ls'?

You could argue that they are shared by other processes, but 'ls' can't be run on the system without them being loaded.

Also, if you need to know how much memory a process needs in order to do capacity planning, you have to calculate how much each additional copy of the process uses. I think /proc/PID/status might give you enough info about memory usage at a single point in time. On the other hand, valgrind will give you a better profile of memory usage throughout the lifetime of the program.

Note that this is not supported on all platforms.
– CashCow, Sep 16 '10 at 14:01

According to the Linux man page (linux.die.net/man/2/getrusage), getrusage is part of the SVr4, 4.3BSD and POSIX.1-2001 specs (noting that POSIX only specifies the utime and stime fields.) I wouldn't expect it to work on non-UNIX platforms (except, perhaps, via an environment like Cygwin that provides UNIX capabilities for other platforms.)
– David C., Nov 9 '14 at 19:34
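From the shell, the getrusage counters are most easily reached through GNU time's verbose mode (note: /usr/bin/time, not the shell builtin; `ls -lR /usr` is just an example workload):

```shell
# Print getrusage-based statistics, including "Maximum resident set size",
# on stderr after the command exits.
/usr/bin/time -v ls -lR /usr > /dev/null
```

BSD variants of time use -l rather than -v for similar output.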

Valgrind can show detailed information, but it slows down the target application significantly, and most of the time it changes the behavior of the app. Exmap was something I didn't know about yet, but it seems that you need a kernel module to get the information, which can be an obstacle.

I assume what everyone wants to know WRT "memory usage" is the following...
In Linux, the amount of physical memory a single process might use can be roughly divided into the following categories.

ksh is a standard shell. It might not be installed by default on Linux distros for desktop users or for minimalistic purposes, but it's only one command away on almost any Unix/Linux OS. (i.e. on all BSDs, on all real UNIX, on RHEL, on SLES, on Debian, on Ubuntu, on OSX)
– Florian Heigl, Dec 2 '13 at 14:01

This file is accessible, by default, only to the root user.
– Dmitry Ginzburg, Dec 23 '14 at 10:34

A good test of the more "real world" usage is to open the application, then run vmstat -s and check the "active memory" statistic. Close the application, wait a few seconds and run vmstat -s again. However much active memory was freed was evidently in use by the app.
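A sketch of that before/after comparison (keep in mind it is rough: other processes changing memory in between will skew the numbers):

```shell
# Snapshot "active memory" before starting the application...
vmstat -s | grep -i 'active memory'

# ...then close the application, wait a few seconds, and compare.
vmstat -s | grep -i 'active memory'
```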

If the process is not using up too much memory (either because you expect this to be the case, or some other command has given this initial indication), and the process can withstand being stopped for a short period of time, you can try to use the gcore command.

gcore <pid>

Check the size of the generated core file to get a good idea how much memory a particular process is using.

This won't work too well if the process is using hundreds of megs, or gigs, as generating the core can take several seconds or minutes depending on I/O performance. While the core is being created, the process is stopped (or "frozen") to prevent memory changes. So be careful.

Also make sure the mount point where the core is generated has plenty of disk space and that the system will not react negatively to the core file being created in that particular directory.
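A minimal session (PID 1234 and the /tmp path are placeholders; gcore ships with gdb):

```shell
# Write the core to /tmp/memdump.1234; the process is paused while dumping.
gcore -o /tmp/memdump 1234

# The core file size approximates the process's memory footprint.
ls -lh /tmp/memdump.1234
```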

If you want something quicker than profiling with Valgrind, and your kernel is older and you can't use smaps, a ps with options to show the resident set of the process (ps -o rss,command) can give you a quick and reasonable approximation of the real amount of non-swapped memory being used.
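For instance, to list the biggest resident sets system-wide with procps ps (remembering that RSS counts shared pages once per process):

```shell
# RSS in kB, largest first.
ps -eo rss,pid,comm --sort=-rss | head -n 10
```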

Besides the solutions listed in other answers, you can use the Linux command "top". It provides a dynamic real-time view of the running system, giving the CPU and memory usage for the whole system as well as for every program, in percentages:

top

to filter by a program pid:

top -p <PID>

to filter by a program name (use batch mode, since top is interactive by default):

top -b | grep <PROCESS NAME>

"top" provides also some fields such as:

VIRT -- Virtual Image (kb) :The total amount of virtual memory used by the task

I would rather suggest that you use atop. You can find everything about it on this page. It is capable of providing all the necessary KPIs for your processes, and it can capture them to a file as well.

While this question seems to be about examining currently running process, I wanted to see the peak memory used by an application from start to finish. Besides valgrind, you can use tstime, which is much simpler. It measures the "highwater" memory usage (RSS and virtual). From this answer.

Most applications - that is, those that use malloc() and malloc-like memory libraries - don't return pages to the OS until process termination. So the amount you see with ps (or any other tool that doesn't dig into the process's heap) will be the high-water mark.
– David C., Nov 7 '14 at 20:24