I have a fresh server running Ubuntu 11.04 (Natty) (64-bit). I started off by installing OpenJDK and tomcat6. When the Tomcat server starts up, it immediately uses 480+ MB of memory. This seems way out of proportion, and I'm wondering if anyone has a solution to get Tomcat to use 200-300 MB (or less) of memory.

I confirmed this with memtop. (I removed all but the large entries from its output; the 499 MB entry is Tomcat.)

Does anyone have any ideas on what I could do to knock the memory consumption down to something more reasonable? I do not understand why the JVM and Tomcat together would need to consume this much memory.

EDITS

For the heck of it, I downloaded Tomcat 6 directly and ran the startup script. Keep in mind that no -Xmx value was set this time. When running this, memtop shows Tomcat using 737 MB of memory!

This leads me to believe there is an issue with OpenJDK itself using a serious amount of memory for the JVM.

I tried the same thing with a fresh .zip download of Tomcat 7 -- same issue. It was using 740 MB of memory.

I installed the Sun JRE/JDK. Memory consumption dropped to around 400 MB (down from almost 500 MB). This is still more memory than I would prefer!

Yes, vanilla Tomcat. From what I can tell it should be the most basic, lightweight install possible -- at least from a package-management viewpoint.
– Tanner, Aug 1 '11 at 18:32

Did you modify any scripts prior to starting the server?
– home, Aug 1 '11 at 18:43

No. The scripts were default. Vanilla as vanilla can be!
– Tanner, Aug 1 '11 at 18:46


Perhaps the memtop tool just sums all the memory mappings and the like. That is going to be a lot, but just because a memory mapping has been made doesn't mean it is used. What do the RES and VIRT columns say if you use top?
– nos, Aug 1 '11 at 19:11

1 Answer

Imagine a process that allocates 1 GB of memory. That process then starts eight threads (which all have access to the allocated 1 GB of memory).

Someone runs a tool to determine how much memory is being used. The tool works as follows:

Find every schedulable item (each thread).

See how much memory it can access.

Add that memory together.

Report the sum.

The tool will report that the process is using 9 GB of memory, when it is (hopefully) obvious that there are really just eight spawned threads (plus the thread for the original process) all using the same 1 GB of memory.

It's a defect in how some tools report memory; however, it is not an easily fixable defect, as fixing it would require changing the output of some very old (but important) tools. I don't want to be the guy who rewrites top or ps; it would make the OS non-POSIX-compliant.
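To make the arithmetic concrete, here is a minimal Python sketch of that kind of per-thread summing (an illustration of the defect, not memtop's actual code; it assumes a Linux /proc layout and a valid pid):

    import os

    def naive_total_bytes(pid):
        # Sum the "total program size" reported for every task (thread) of a
        # process. Each /proc/<pid>/task/<tid>/statm shows figures for the
        # WHOLE process (threads share one address space), so this multiplies
        # the real number by the thread count.
        page_size = os.sysconf("SC_PAGE_SIZE")
        total_pages = 0
        for tid in os.listdir("/proc/%d/task" % pid):
            with open("/proc/%d/task/%s/statm" % (pid, tid)) as f:
                total_pages += int(f.read().split()[0])  # field 0: size in pages
        return total_pages * page_size

Pointed at a JVM with a roughly 1 GB address space and eight extra threads, a tool built this way would report roughly nine times the real figure.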

---- Original post follows ----
Some versions of memory-reporting tools (like top) confuse threads (which all have access to the same memory) with processes. As a result, a Tomcat instance that spawns five threads will misreport its memory consumption by a factor of five.

The only way to be sure is to list the processes with their memory consumption individually, and then read the memory for one of the threads (which is being listed as if it were a process). That way you know the true memory consumption of the application. If you rely on tools that do the addition for you, you will overestimate the memory actually used by a factor of the number of threads referencing the same shared memory.
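As a rough cross-check (a sketch assuming a Linux box; pid is whatever your Tomcat process reports, e.g. via ps), you can read the process-wide figures once from /proc/<pid>/status. VmRSS is the memory actually resident in RAM (top's RES); VmSize is the mapped address space (top's VIRT):

    import re

    def process_memory_kb(pid):
        # Read the whole-process figures once instead of adding the same
        # numbers up per thread. Values in /proc/<pid>/status are in kB.
        fields = {}
        with open("/proc/%d/status" % pid) as f:
            for line in f:
                m = re.match(r"(VmSize|VmRSS):\s+(\d+) kB", line)
                if m:
                    fields[m.group(1)] = int(m.group(2))
        return fields

If VmRSS comes back well under what memtop reports, the tool is most likely counting shared or merely mapped memory rather than memory Tomcat is actually using.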

I've had boxes with 2 GB of memory (and 1 GB of swap) report that ~7 GB of memory was in use on a particularly thread-heavy application.

For a better understanding of how memory is reported by many tools, look here. If the Python code is parsing the text output of one of those tools, or obtaining the data from the same system calls, then it is subject to the same over-reporting errors.

This might be a possibility, but how can I know for sure? The problem I am running into is that the VM is apparently reporting my memory usage the same as what memtop shows. So according to the hosting company, I am exceeding, or about to exceed, my memory limit.
– Tanner, Aug 1 '11 at 19:47

You told the JVM to not allocate more heap than X, so it won't. Apart from a massive bug in the JVM that only you have noticed, how else would you explain the JVM not honoring its configuration parameters? Call me jaded, but I think others (including myself) would be in front of Oracle's offices with pitchforks and torches if such obvious (and operationally critical) functionality broke.
– Edwin Buck, Aug 1 '11 at 19:51

That's true; however, the other items add so little overhead compared to a sizeable heap that you're not in danger of watching your JVM grow to six times the size of the heap (as yours apparently is).
– Edwin Buck, Aug 1 '11 at 19:56

So you're saying it sounds like the -Xmx128M is not being honored?
– Tanner, Aug 1 '11 at 20:00