Ultimately, it will lead to OutOfMemory exceptions; leading up to that, you'll see the Old Gen pool hovering at 98% or 99%, with the JVM spending the majority of its time in garbage collection. But even before the JVM is noticeably impacted, the problem can easily be seen by running the following command (where <pid> is the Java process ID):

jmap -histo:live <pid>

When the problem is manifest, com.wily.introscope classes will be the top consumers of memory in the histogram. For example:
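
(Numbers here are illustrative, not from a real capture; the giveaway is the com.wily.introscope classes sitting at the top of the list.)

 num     #instances         #bytes  class name
----------------------------------------------
   1:       1843021     118273344  com.wily.introscope.agent.trace.hc2.WilyTransactionElement
   2:        912455      43797840  com.wily.introscope.agent.trace.hc2.WilyTransactionStructure
   3:        112614      37578976  java.lang.String
 ...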

Here's an "enhanced" version of the above that calculates the total amount of memory in the live histogram consumed by Introscope classes and includes a percentage (Unix / Linux only - sorry Windows folks!):

Additionally, generating and analyzing a heap dump will show results like the following:

One instance of "com.wily.introscope.agent.trace.hc2.WilyTransactionStructure" loaded by "<system class loader>" occupies 119,137,952 (33.99%) bytes. The memory is accumulated in one instance of "java.util.concurrent.ConcurrentHashMap$Segment[]" loaded by "<system class loader>".

One instance of "com.wily.introscope.agent.trace.hc2.WilyTransactionElement" loaded by "<system class loader>" occupies 42,362,536 (12.09%) bytes. The memory is accumulated in one instance of "java.util.concurrent.ConcurrentHashMap$Segment[]" loaded by "<system class loader>".

One instance of "com.wily.introscope.agent.trace.hc2.WilyTransactionElement" loaded by "<system class loader>" occupies 36,733,368 (10.48%) bytes. The memory is accumulated in one instance of "java.util.concurrent.ConcurrentHashMap$Segment[]" loaded by "<system class loader>".

112,614 instances of "java.lang.String", loaded by "<system class loader>" occupy 37,578,976 (10.72%) bytes.

I recommend the Eclipse Memory Analyzer Tool for analyzing heap dumps, unless you have a license for a commercial tool like YourKit. The command below generates the heap dump, where <file_name> is the name of the resulting heap dump file and <pid> is the Java process ID:
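
jmap -dump:live,format=b,file=<file_name> <pid>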

- Any idea which agents it affects (Tomcat, JBoss, or all of them)?
- And where exactly is the leak? (The last one we detected was in the platform agent and the SQL agent.)
- Is there a workaround, and steps to minimize the impact?

As far as I'm aware, it's affecting all agents. We are affected using the Tomcat agent. It appears to be related to the new 9.1 tracers and the aging out of traced transaction components after 4 minutes; some references are kept in place that are preventing GC. There is no known / tested workaround at this time.

Our engineering team is aware of a 9.1 agent memory leak issue and is working diligently on a fix. We will post solution timing and possible workarounds shortly, both on this community and on support.ca.com, as we get more detail. In the meantime, have you logged a support ticket for what you are experiencing?

So, the fix for this will likely not be officially available until the 9.1.2 release, which is scheduled for early to mid October. There is a chance it might make it into 9.1.1.1 at the end of this month; I will post when I hear for sure. There is a workaround, however. Here is what I received from CA Support and Sustaining Engineering:

The 9.1.x agent offers a legacy mode option that reverts to the old tracer definitions, which use the traditional pre-9.1 transaction blame stacks instead of the new transaction structure. Note that this is a supported but deprecated option in the APM 9.1 agent and should only be used temporarily, until the recommended patch is available and the upgrade can be done. To configure the agent to use this legacy option, follow the steps below:

1. Stop the monitored application.
2. Archive and delete the existing log files in the <Agent_Home>/logs directory to prepare for new logs.
3. Back up the existing .pbl and .pbd files in the <Agent_Home>/core/config directory.
4. Back up the existing <Agent_Home>/core/config/IntroscopeAgent.profile.
5. Copy the legacy .pbl and .pbd files from the <Agent_Home>/examples/legacy directory to the <Agent_Home>/core/config directory.
6. Open <Agent_Home>/core/config/IntroscopeAgent.profile and make the following changes (see the example after this list):
   a. Add a new property: introscope.agent.configuration.old=true
   b. Update the agent property introscope.autoprobe.directivesFile to point to the legacy .pbl and/or .pbd files copied over in step 5. For example, replace spm.pbl with spm-legacy.pbl.
7. Restart the monitored application.
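
After step 6, the relevant lines in IntroscopeAgent.profile should end up looking something like this (spm-legacy.pbl is just the example from step 6b; use the legacy files appropriate for your agent type):

# Revert to the pre-9.1 tracer definitions (deprecated; temporary until the patch)
introscope.agent.configuration.old=true

# Point AutoProbe at the legacy directives copied from <Agent_Home>/examples/legacy
introscope.autoprobe.directivesFile=spm-legacy.pbl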

@Srikant.Noorani, you marked this as resolved. Did you actually get a fix, or did you mark it resolved to acknowledge the issue (below)?

jakbutler wrote:

Ultimately, it will lead to OutOfMemory exceptions; leading up to that, you'll see the Old Gen pool hovering at 98% or 99%, with the JVM spending the majority of its time in garbage collection. [...]

Heard that the 9.1.1 agent has a memory leak. Wondering if anyone has any information on that; we are checking with support as well.

Thanks.

You might have a different issue, but here is the one I know about:

There is a known NATIVE memory issue when using IBM Java SDK 1.5 SR9 with a profiling tool. It is caused by a JIT compilation issue that prevents finalization methods from being called to clean up resources.

http://www-01.ibm.com/support/docview.wss?uid=swg1IZ99243

The fix is to go to SR10 (WAS FP23), which I haven't done, so I cannot confirm.

OOM error requires update to IBM JDK SR10 - 71618
Due to a third-party issue, if you are using the CA APM agent to monitor applications that run IBM's Java 1.6 and are using the preferred -javaagent switch, you will experience severe memory overhead. This is due to a native memory leak in the JDK; the applications eventually run out of memory. The memory leak occurs irrespective of the max heap size (-Xmx) setting. This issue also occurs when AutoProbe instrumentation is disabled (introscope.autoprobe.enable=false) while still using -javaagent. The issue is specific to object finalization in the IBM JVM. Details of this issue are available from IBM's website at http://www-01.ibm.com/support/docview.wss?uid=swg1IZ99243.
As a temporary workaround, you can use the AutoProbe Connector for IBM and deploy it in the bootstrap classpath using the -Xbootclasspath switch. Note that this is a temporary workaround only; it is recommended that you obtain IBM's official patch that resolves this issue.
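
For reference, the connector-based startup replaces -javaagent with bootstrap-classpath entries along these lines (jar names and locations are illustrative; check the AutoProbe Connector documentation shipped with your agent version):

java -Xbootclasspath/p:<Agent_Home>/connectors/AutoProbeConnector.jar:<Agent_Home>/Agent.jar \
     -Dcom.wily.introscope.agentProfile=<Agent_Home>/core/config/IntroscopeAgent.profile \
     <your_main_class>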

Correct; I received notification from CA Support and Sustaining Engineering yesterday that this will be resolved in the 9.1.1.1 patch, to be released on August 31st, 2012. Additionally, they will be issuing a Product Advisory notice for this bug.