I recently fixed three distinct issues and noticed a lot of posts that were related but never really got to the bottom of the problem. In particular, an Out of Memory error that has been a long-standing problem in all application servers even when the heap looks fine: people exhaust hours tracking down phantom, non-existent leaks.

Rod Macpherson (a.k.a. treespace)

#1 Out of Processors

JBoss sets the processors to 5 for HTTPS in Tomcat's jboss-service.xml, and that is sure to cause problems for those switching from HTTP over to HTTPS for testing: HTTP gets 80 processors! It can catch you off guard, but it is pilot error. You will not get an out of memory error; happily, it is an error, not an exception.
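For reference, the connector definitions in jboss-service.xml look roughly like this; the class name, ports, and processor counts below are illustrative, not the shipped defaults for every release, so check your own file:

```xml
<!-- Illustrative sketch of the Tomcat connector config inside
     jboss-service.xml; values are examples, not shipped defaults. -->
<Connector className="org.apache.coyote.tomcat4.CoyoteConnector"
           port="8080" minProcessors="5" maxProcessors="80"
           scheme="http"/>
<Connector className="org.apache.coyote.tomcat4.CoyoteConnector"
           port="8443" minProcessors="5" maxProcessors="5"
           scheme="https" secure="true"/>
<!-- Raise maxProcessors on the HTTPS connector before testing over SSL. -->
```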

#2 Out of Connections

If you run out of connections you will get "no managed connections". You will not get an out of memory error. Increase your connections, and in 3.2.3 check for connection availability with a valid-connection checker; I'll use the Oracle class JBoss ships with as an example. I believe this is strictly Oracle 9.2 and later, since if you look at the class it calls a driver ping method that was recently introduced: fast and efficient.
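A datasource sketch along these lines wires the shipped checker in; the JNDI name, connection URL, and pool sizes here are placeholder examples for your own *-ds.xml:

```xml
<!-- Sketch of an oracle-ds.xml with connection checking enabled.
     JNDI name, URL, and pool sizes are examples; the checker class
     is the one JBoss ships. -->
<datasources>
  <local-tx-datasource>
    <jndi-name>OracleDS</jndi-name>
    <connection-url>jdbc:oracle:thin:@localhost:1521:mydb</connection-url>
    <driver-class>oracle.jdbc.driver.OracleDriver</driver-class>
    <min-pool-size>5</min-pool-size>
    <max-pool-size>50</max-pool-size>
    <valid-connection-checker-class-name>
      org.jboss.resource.adapter.jdbc.vendor.OracleValidConnectionChecker
    </valid-connection-checker-class-name>
  </local-tx-datasource>
</datasources>
```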

Also, checking for available connections and giving your users a busy signal is a good idea.
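The busy-signal idea can be sketched generically; this is not a JBoss API, just a minimal stand-alone illustration where a hypothetical PoolGate fronts the pool and turns "no managed connections" into a polite refusal:

```java
import java.util.concurrent.Semaphore;

// Illustrative only: a busy-signal gate in front of a connection pool.
// PoolGate is a hypothetical name, not a JBoss class.
public class PoolGate {
    private final Semaphore slots;

    public PoolGate(int maxConnections) {
        // One permit per pooled connection we are willing to hand out.
        this.slots = new Semaphore(maxConnections);
    }

    /** Try to reserve a slot; false means "send the user a busy page". */
    public boolean tryReserve() {
        return slots.tryAcquire();
    }

    /** Return the slot once the request has released its connection. */
    public void release() {
        slots.release();
    }
}
```

A request handler would call tryReserve() up front and render a "please try again" page on false, instead of letting the pool throw.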

#3 Out of Memory .... THE BIG ONE!

JBoss is the least likely to exhibit the out of memory problem because it does not have the EJB compile step generating all those extra classes (a wild guess) and has a small footprint. WebLogic has this problem and only just figured it out: they haven't told anybody; they quietly changed the default configuration in 8.1:

SOLUTION: -XX:MaxPermSize=128m

I can watch the perm size overflow before my very eyes and throw an Out of Memory error. The default with the -server switch is 64M, and as soon as you cross that, poof: Out of Memory, or sometimes a HotSpot compiler error asking you to contact Sun. Note that JRockit does not have this problem; the HP and Sun JVMs both do.

I have a very high confidence level that this is the mystery Out of Memory often reported by applications with tons of classes. I have been able to reproduce it at will and watched our application break the JVM by overflowing the perm space; adding just a couple more megs of perm to the same test makes the problem go away. Perm space is where the JVM places information about loaded classes. Sometimes it is cleaned up; mostly it just grows until you break it.
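Applying the fix to JBoss might look like the following; the heap and perm sizes are examples to tune per application, not recommendations:

```shell
# Illustrative: add the perm-size options to JAVA_OPTS before starting
# JBoss. Sizes are examples; tune them for your own application.
JAVA_OPTS="$JAVA_OPTS -server -XX:PermSize=64m -XX:MaxPermSize=128m"
export JAVA_OPTS
./run.sh
```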

I updated the wiki and found a bug: I used double quotes on a "word" rather than ''two'' single quotes to italicize, and the save stopped saving when it hit that. Very weird. Anyway, I started again and changed the double quotes to double-single-quotes, as it were.
