Thursday, January 05, 2006

I'm going through a lot of information lately about garbage collection, starting from the 1.3.1 JVM. Interesting process overall. The current project I am working on uses the 1.3.1 JVM and is going to upgrade soon to a newer one.

Collection itself does not really impact a server application until load increases and the heap space settings start to become inefficient. In the 1.3.1 VM especially, with a very large maximum heap, a full GC can take up to 25 seconds to complete.
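To see how long those full collections actually take, GC logging can be switched on at startup. A minimal sketch; the heap sizes and the main class name here are made-up examples, and `-verbose:gc` is the basic logging flag available on the 1.3.1 HotSpot VM:

```shell
# -verbose:gc prints a line per collection with the pause time and
# heap occupancy before/after. Heap sizes and class name are examples.
java -verbose:gc -Xms512m -Xmx1024m com.example.Server
```

Watching the `Full GC` lines in that output is the quickest way to confirm whether the long pauses are really coming from full collections.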

This is mostly due to inefficient use of Java objects and holding on to references. It happens when server applications decide to "cache" a lot of information to avoid database queries. If sizing is not properly assessed at the design stage, the application will perform fine with some 200 records, but if that grows to 20,000 then certainly someone needs to take another look.
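If a home-grown cache is unavoidable, at least give it a bound so it cannot grow with the data set. A minimal sketch, assuming a 1.4+ VM (LinkedHashMap appeared in 1.4); the class name and the 200-entry limit are made-up examples:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical example: a cache that never holds more than MAX_ENTRIES
// records, evicting the least-recently-accessed one first.
public class BoundedCache extends LinkedHashMap {
    private static final int MAX_ENTRIES = 200;

    public BoundedCache() {
        // accessOrder = true: iteration order runs from least- to
        // most-recently-used, so the eldest entry is the LRU entry.
        super(16, 0.75f, true);
    }

    // Called by LinkedHashMap after every put; returning true evicts
    // the eldest entry, keeping the map at a fixed maximum size.
    protected boolean removeEldestEntry(Map.Entry eldest) {
        return size() > MAX_ENTRIES;
    }
}
```

However many records come in, the references held by such a cache stay constant, so the old generation stops growing and full GCs stay short.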

My advice is to keep caching to a minimum, or to use a standard caching framework like 'ehcache' to do it for you instead. That is certainly better than rolling your own implementation in the application, and ehcache's caches work together with Hibernate and sit closer to the persistence layer.

Java 1.4 is already improving this by parallelizing garbage collection. That has an enormous positive impact on the server process, but only when the server is really CPU-bound, of course. I'm going to read up on 1.5 collection soon and hope to post some useful documents on the Patty site in the next 6 months, along with the performance tool that I am working on.
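As a sketch of what that looks like in practice: on HotSpot 1.4.1 and later the parallel (throughput) collector can be selected explicitly with a `-XX` flag. The heap size and class name below are made-up examples:

```shell
# -XX:+UseParallelGC enables the parallel young-generation collector on
# HotSpot 1.4.1+. It only pays off on machines with spare CPUs.
java -XX:+UseParallelGC -verbose:gc -Xmx1024m com.example.Server
```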

Regarding the performance analysis tool, it is coming along nicely. There is already a framework for web access that is going to be expanded, and the agent library seems stable. I'll need to extend the testing application to add some thread contention and to load more classes and libraries, maybe even including Derby, Hibernate, and CGLIB, so that I can see how it performs with proxies and dynamic bytecode instrumentation.