The IBM Extensions for Memory Analyzer (IEMA) v1.1 are now available on developerWorks through ISA. This marks the graduation of the project from an alphaWorks evaluation project to an ISA Tech Preview. The 1.1 release contains the following highlights:

Garbage Analysis
The inclusion of two new queries, Find Garbage Fragments and Find Allocation Sites, makes it possible to identify fragments (chains) of objects that are eligible for collection, along with similar live data in the Java heap where the garbage fragments may previously have been allocated.

Improved WebSphere Application Class Loader Leak Analysis
Leaking application class loaders are now marked with "LEAKING LOADER" in all views, and a new, more accurate "Path to Leaking Application Class Loaders" query has been provided to identify why they are being kept alive.

Mapping of WebSphere thread IDs to Java thread IDs
The Java Basics -> Thread Overview query has been updated to include a WAS Thread ID column, allowing you to map the thread IDs used in the WebSphere logs to the thread IDs used in other diagnostic files.

Improved WebSphere Hung Thread detection
Threads that have been marked as hung by the WebSphere thread pool manager are now marked as [Hung] in all views, and a new section has been added to the "Thread Pool Analysis" query.

The move to ISA means that, for ISA users, there is now no need to set a new update site in the Update Preferences as the IBM extensions sit next to the other tools in the ISA tools catalog. For Eclipse Memory Analyzer Tool (MAT) users, you can install the IBM extensions by pointing your Eclipse at the ISA update site. Detailed instructions are on the new IBM Extensions for Memory Analyzer homepage.

Start Eclipse and go to "Help -> Install New Software...", then "Add..." a new update repository of: http://download.boulder.ibm.com/ibmdl/pub/software/isa/isa410/production/

Select to "Work with": "--All Available Sites--"

Select the following feature to be installed: label.component.tools.jvm -> IBM Monitoring and Diagnostic Tools for Java - Garbage Collection and Memory Visualizer

This should update your Eclipse install with Garbage Collection and Memory Visualizer, which you'll be able to find by opening "Window -> Open Perspective -> Other"
and selecting the "GC and Memory Visualizer Advanced Perspective".

It's now also possible to install IBM Memory Analyzer into an existing Eclipse (3.4.2 or later) workbench install, rather than being embedded into the IBM Support Assistant (ISA) or using the Eclipse Memory Analyzer Tool (MAT) RCP client. This can be done with a 64-bit Eclipse workbench to give you a 64-bit version of Memory Analyzer, which is the preferred mechanism over having to install the IBM DTFJ adapter separately into the Eclipse Memory Analyzer Tool RCP client.

When running the "IBM Monitoring and Diagnostic Tools for Java - Memory Analyzer" (Memory Analyzer) in the IBM Support Assistant (ISA) against dumps from large Java heaps (2.5GB+), it's not uncommon for OutOfMemoryErrors to occur. This happens because ISA is only available as a 32-bit application, which limits the memory available to the tools inside it to around 1.25GB.

For Memory Analyzer it's possible to build a 64-bit version outside of ISA using the following steps:

The general guidance is that an HTTP session should only be used to store the data necessary to maintain state between browser invocations, and that the amount of data stored should be as small as possible. However, we often see that, over several iterations of additional development work and new features being added to a web application, the session sizes grow as more and more data is stored.

This means the corresponding session cache also grows over time, in some cases to the point that it is one of the major contributors to the Java heap memory usage.
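The guidance above can be sketched as follows. This is a minimal illustration only: a plain Map stands in for the servlet HttpSession so the example is self-contained, and "cartId" and the cart object are hypothetical names.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: keep only a small key in the session and look the bulky data up
// elsewhere (cache or database) on each request. A plain Map models the
// HttpSession here; the names are hypothetical.
public class SessionSizeSketch {
    public static void main(String[] args) {
        Map<String, Object> session = new HashMap<>();

        // Preferred: a small identifier adds only a few bytes per session.
        session.put("cartId", 12345L);

        // Avoid: storing the whole object graph keeps it (and everything it
        // references) alive in the session cache for the life of the session.
        // session.put("cart", shoppingCart);

        System.out.println(session.get("cartId")); // prints 12345
    }
}
```

Everything reachable from a session attribute stays in the session cache until the session is invalidated or times out, which is why storing whole object graphs makes the cache one of the biggest consumers of Java heap.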

So, are there any ways to find out how big the sessions are?

Find the session sizes using the Runtime Performance Advisor or the Tivoli Performance Viewer (TPV)

You can use either the Runtime Performance Advisor in the administrative console or the Tivoli Performance Viewer (TPV) to look at the session cache size and the average session size. This will give you an idea of whether you have a problem with the size of HTTP sessions and the corresponding size of session cache required.
However, neither the Performance Advisor nor TPV gives you a good idea of which applications or which specific user sessions are large, or of what specific data is being held in the HTTP sessions. That information is useful if you want to understand whether it is particular applications or actions that are causing large amounts of data to be stored, and in fact what that data is.

Having information on the sizes of individual sessions, what the user session ID is, and what application the session is for will allow you to understand whether the presence of large HTTP sessions is related to a specific application, or a specific set of user actions. One way of getting this information is using Memory Analyzer.

Memory Analyzer can run using either a PHD format heapdump, or a full operating system dump (e.g. a core file) that has been post-processed using the "jextract" utility that is present in the jre/bin directory of the IBM Java SDKs. In order to have the information relating to application name and session ID for the HTTP sessions, the jextracted system dump is required.
You can generate a system dump from a running WebSphere instance by adding the following command line option to the JVM runtime:

-Xdump:system:events=user

and then sending a "kill -3" to the process to cause the dump to be written (on Windows you can do this using a utility such as sendsignal.exe).
Alternatively you can generate the system dump on an OutOfMemoryError using the following:

-Xdump:system:events=systhrow,filter=java/lang/OutOfMemoryError
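On recent IBM JVMs it may also be worth adding the request=exclusive+prepwalk suboption (check the -Xdump section of your SDK's diagnostics documentation to confirm it is supported at your level), which asks the JVM to pause other threads and prepare the heap for walking before writing the dump, giving Memory Analyzer a more consistent view:

```
-Xdump:system:events=systhrow,filter=java/lang/OutOfMemoryError,request=exclusive+prepwalk
```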

Once you have generated the system dump, processed it using jextract, and loaded it into Memory Analyzer, you can begin profiling the HTTP sessions.
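For reference, the jextract step is a single command run on the machine (and with the SDK level) that produced the dump; the paths and file name below are placeholders for your own install and core file. It produces a .zip alongside the core containing the dump plus the metadata that Memory Analyzer's DTFJ reader needs.

```
cd /path/to/dump/directory
/opt/IBM/WebSphere/AppServer/java/jre/bin/jextract core.dmp
```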

Profiling the HTTP session sizes using OQL in Memory Analyzer

You can use the Object Query Language (OQL) in Memory Analyzer to quickly produce a table of all of the sessions that were in the WebSphere instance when the system dump was generated, along with information on session size, application name, and session ID.
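A query along the following lines should produce such a table. This is a sketch: the appName and sessionId field names are assumptions and can vary between WebSphere levels, so confirm them by expanding a MemorySessionData instance in the inspector first; @usedHeapSize and @retainedHeapSize are built-in Memory Analyzer attributes.

```
SELECT toString(s.appName) AS "Application",
       toString(s.sessionId) AS "Session ID",
       s.@usedHeapSize AS "Shallow Size",
       s.@retainedHeapSize AS "Session Size"
FROM com.ibm.ws.webcontainer.httpsession.MemorySessionData s
```

Sorting the result table on the retained size column brings the largest sessions to the top.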

This will show you the largest session data objects, along with the application name and session ID. The session ID for the user will be available either from the URL or in a cookie.

This means that you now have a sortable table of sessions and how they relate to specific applications and session IDs.

Finding out what values are stored in specific HTTP sessions

If you identify specific sessions that are larger than expected, and you want to understand what data is being stored in that session, you have the option of "drilling down" into the session and looking at the key/value pairs for the data in the sessions.

To do this you need to:

Right click on a row of interest and select "List objects -> with outgoing references"

This will bring up the individual MemorySessionData object

Expand the object down the following reference path: "mSwappableData -> table"

This gives you the Hashtable of data associated with the session.

You can now browse through the Hashtable, looking at the keys and values to see what data is being stored inside the session, and what is causing the session to be so large!