OutOfMemoryExceptions

This section covers some of the most common reasons why you may see an OutOfMemoryError from your JBoss application server. Interestingly, there are several cases where the JVM may report an OutOfMemoryError even when it has not exhausted all of its available memory. For example, most modern Java virtual machines segment the heap into generations, and your virtual machine may complain about a lack of memory when it has only exhausted one segment (a specific generation) of its total maximum heap. Also, under some conditions on Linux/Unix systems, running out of an operating system resource can yield an OutOfMemoryError (for example, the inability of the OS to create any more native threads for the JVM).

Of course, it is also possible to get an old-fashioned OutOfMemoryError when your Java virtual machine really does run out of its maximum heap. There can be a few reasons for this: for example, you may have a cache configuration that allows more instances to be kept in memory than the JVM can actually fit into its heap, or your JVM may simply have been configured with a maximum heap size too small to run all of your application server's services.

Seemingly Bogus OOMEs

Running out of memory generates an Error, which is unlikely to be masked in a catch block precisely because it is an Error rather than an Exception. This is important, since one often sees theories that an OutOfMemoryError is being reported erroneously. That is very unlikely, although OOMEs do occur while the heap still has plenty of memory, or plenty of recoverable memory.
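As a quick illustration of the point above, the class hierarchy itself shows why a catch (Exception e) block cannot mask an OOME (this example is not from the original article, just a sketch):

```java
public class ErrorVsException {
    public static void main(String[] args) {
        // OutOfMemoryError extends VirtualMachineError -> Error -> Throwable,
        // so it is NOT an Exception and slips past catch (Exception e) blocks.
        System.out.println(Exception.class.isAssignableFrom(OutOfMemoryError.class)); // false
        System.out.println(Error.class.isAssignableFrom(OutOfMemoryError.class));     // true

        try {
            throw new OutOfMemoryError("simulated");
        } catch (Exception e) {
            System.out.println("caught as Exception"); // never reached
        } catch (Error e) {
            System.out.println("caught as Error: " + e.getMessage());
        }
    }
}
```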

An OOME is also thrown when the permanent generation is exhausted, and that is not part of the heap per se. It is a JVM-specific area of memory where information on loaded classes is maintained. If you have a mountain of classes (e.g., a lot of EJBs and JSP pages) you can easily exhaust this area; oftentimes an application will then fail to deploy or fail to redeploy. Increase your permanent generation space as follows to avoid such OOMEs. The default with the -server switch is 64 megabytes:

-XX:MaxPermSize=128m (Note this is in addition to the heap. With a 512M heap and 128M of permanent space, the total is 640 megabytes. Don't forget that the JVM itself takes up a chunk of system memory, and there is also roughly two megabytes of stack space per thread, which can add up with a lot of HTTP/S processors.)

-XX:MaxPermSize=128m -Xmx512m (a total of 640 megabytes allocated from the system; this is not the total size of the VM, as it does not include the space the VM allocates for the "C heap" or stack space)
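To make the arithmetic above concrete, here is a back-of-the-envelope sizing sketch; the thread count is a made-up illustrative number, and the 2 MB per-thread stack figure is the one assumed in the text:

```java
public class HeapSizing {
    public static void main(String[] args) {
        int heapMb = 512;         // -Xmx512m
        int permMb = 128;         // -XX:MaxPermSize=128m
        int threads = 50;         // hypothetical number of HTTP/S processor threads
        int stackMbPerThread = 2; // per-thread stack size assumed in the article

        int totalMb = heapMb + permMb + threads * stackMbPerThread;
        // Still excludes the JVM's own "C heap" and native overhead, as noted above.
        System.out.println("Approximate process footprint: " + totalMb + " MB");
    }
}
```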

On Windows, you can set this in the \bin\run.bat file:

set JAVA_OPTS=%JAVA_OPTS% -Xms128m -Xmx512m -XX:MaxPermSize=128m
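To confirm that the heap settings actually took effect in the running JVM, you can query the standard java.lang.Runtime API (the printed values depend on your -Xmx setting):

```java
public class ShowMemory {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        // maxMemory() reflects -Xmx; totalMemory() is the heap currently reserved.
        System.out.println("max heap:   " + rt.maxMemory() / (1024 * 1024) + " MB");
        System.out.println("total heap: " + rt.totalMemory() / (1024 * 1024) + " MB");
        System.out.println("free heap:  " + rt.freeMemory() / (1024 * 1024) + " MB");
    }
}
```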

WebLogic appears to have finally determined that this was the source of their common OOME reports: WebLogic loads the EJBC-compiled classes for its heavy EJB footprint and has an overall larger class footprint than JBoss. They set the default for the Sun JVM to 128 megabytes. The JRockit JVM, on the other hand, uses a different strategy that does not produce permanent space OOMEs.

Final note: the following super-excellent toolkit will give you a precise picture of your permanent generation space and the other segments of the heap. I used it to diagnose our permanent space exhaustion problem and to compare the WebLogic and JBoss footprints. The toolkit is super-excellent; the documentation, however, is sub-optimal, as this is an experimental tool, so check its FAQ on setting your JVM switches. Highly recommend this thing:
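If you cannot run an external toolkit, the standard java.lang.management API (Java 5+) gives a similar per-pool breakdown from inside the JVM; on Sun JVMs of this era the permanent generation shows up as a pool with "Perm Gen" in its name (this is a sketch, not part of the toolkit being recommended):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;

public class MemoryPools {
    public static void main(String[] args) {
        // Prints usage for every memory pool the JVM exposes: eden, survivor,
        // old/tenured, perm gen (or metaspace on modern JVMs), code cache, ...
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            System.out.println(pool.getName() + ": " + pool.getUsage());
        }
    }
}
```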

If you add -XX:+PrintClassHistogram to your VM options (on the Sun JVM), you can get a class histogram, which helps in finding where the memory is being consumed. The histogram is printed along with the stack traces when you request a thread dump.

OutOfMemoryError: unable to create native thread

This error can occur even when you have plenty of heap, because the OS cannot allocate more memory for thread stacks. You can reduce the size of the thread stack with -Xss128k (the default is 1mb on windows, ?? on linux). The total memory usage is roughly: heap (-Xmx) + permanent space (-XX:MaxPermSize) + (number of threads × -Xss stack size) + the JVM's own native overhead.
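Besides the global -Xss switch, an individual thread can request a smaller stack via the four-argument Thread constructor; as the javadoc notes, the JVM is free to treat the value as a hint and ignore it on some platforms. A minimal sketch:

```java
public class SmallStackThread {
    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> System.out.println("running with a reduced stack");
        // Request a 128 KB stack for this one thread; platform-dependent hint.
        Thread t = new Thread(null, task, "small-stack", 128 * 1024);
        t.start();
        t.join();
    }
}
```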