I just caught that a node was down based on running nodetool status on a different node. I tried to ssh into the downed node at that time and it was very slow logging in. Looking at the gc.log file, there was a ParNew that only took 0.09 secs, yet the overall application-threads stopped time was 315 seconds (just over 5 minutes). Our cluster is handling a lot of read requests.
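To figure out where that stopped time is actually going, I'm planning to turn on safepoint tracing on that node, roughly like this (these are standard HotSpot flags, but I still need to double-check they're available on our exact JVM build):

    # Sketch: extra JVM options (e.g. appended to JVM_OPTS in cassandra-env.sh)
    # to break a long "application threads stopped" time into its parts
    JVM_OPTS="$JVM_OPTS -XX:+PrintGCApplicationStoppedTime"    # total stop time per safepoint
    JVM_OPTS="$JVM_OPTS -XX:+PrintGCApplicationConcurrentTime" # time the app ran between stops
    JVM_OPTS="$JVM_OPTS -XX:+PrintSafepointStatistics"         # spin/block/sync/vmop breakdown
    JVM_OPTS="$JVM_OPTS -XX:PrintSafepointStatisticsCount=1"   # print stats for every safepoint

If the sync column dominates, threads are slow to reach the safepoint; if the VM operation itself dominates, it really is the collection.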

If there were network hiccups, would that cause a delay in the Cassandra process when it tries to get to a safepoint? I assume Cassandra has threads doing lots of network activity, and maybe those take a long time to reach a safepoint.

It's possible you were hit by the problems in this thread before, but it looks like you may have other issues as well. Of course it may be that on G1 you have one issue and on CMS another, but 27s is extreme even for G1, so that seems unlikely. If you're hitting these pause times in CMS and you get some more output from the safepoint tracing, please do contribute it, as I would love to get to the bottom of that. However, is it possible you're experiencing paging activity? Have you made certain the VM memory is locked (and preferably that paging is entirely disabled, as the bloom filters and other memory won't be locked, although that shouldn't cause pauses during GC)?
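For reference, something along these lines on each node should rule paging out (a sketch only; exact paths, log locations and limits syntax vary by distro and Cassandra packaging):

    # Disable swap now, and remove swap entries from /etc/fstab so it stays off
    sudo swapoff -a
    echo "vm.swappiness = 1" | sudo tee -a /etc/sysctl.conf
    # Allow the cassandra user to mlock the heap (in limits.conf), then restart:
    #   cassandra - memlock unlimited
    # Confirm Cassandra actually locked memory, e.g. look for a "JNA mlockall" line:
    grep -i mlockall /var/log/cassandra/system.log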

Note that mmapped file accesses and other native work shouldn't in any way inhibit GC activity or other safepoint pause times, unless there's a bug in the VM. These threads simply enter a safepoint as they return to the VM execution context, and are considered to be at a safepoint for the whole time they are outside it.

We tried to switch to G1 because we observed this behaviour on CMS too (a 27-second pause in G1 is quite a strong argument against using it). Pauses with CMS were not easily traceable - the JVM stopped even without a stop-the-world pause scheduled (defragmentation, remarking). We thought the time spent waiting for threads to reach the safepoint might have been involved (we saw "waiting for safepoint resolution") - especially because access to mmapped files is not preemptible, AFAIK - but that doesn't explain waiting times of tens of seconds; even slow IO should read our sstables into memory in much less time. We switched to G1 out of desperation - and to try different code paths - not because we thought it was a great idea. So I think we were hit by the problem discussed in this thread, just the G1 report wasn't very clear, sorry.

It seems like your issue is much less difficult to diagnose: your collection times are simply long. At least, the pause you printed the time for is entirely attributable to the G1 collection itself.

Note that G1 has not generally performed well with Cassandra in our testing. There are a number of changes going in soon that may change that, but for the time being it is advisable to stick with CMS. With tuning you can no doubt bring your pauses down considerably.
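For what it's worth, a sensible CMS starting point is roughly what the stock cassandra-env.sh ships with, something like the below (the heap and new-gen sizes are purely illustrative, not recommendations, and need tuning per workload):

    # Sketch: CMS baseline, close to the stock cassandra-env.sh settings
    JVM_OPTS="$JVM_OPTS -Xms8G -Xmx8G -Xmn800M"
    JVM_OPTS="$JVM_OPTS -XX:+UseParNewGC -XX:+UseConcMarkSweepGC"
    JVM_OPTS="$JVM_OPTS -XX:+CMSParallelRemarkEnabled"
    JVM_OPTS="$JVM_OPTS -XX:SurvivorRatio=8 -XX:MaxTenuringThreshold=1"
    JVM_OPTS="$JVM_OPTS -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly"

Keeping the new generation modest and the tenuring threshold low tends to keep ParNew pauses short on read-heavy workloads.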

We are seeing the same kind of long pauses in Cassandra. We tried switching from CMS to G1 without any positive result. The stress test is read heavy: 2 datacenters, 6 nodes, 400 reqs/sec on one datacenter. We see spikes in latency at the 99.99th percentile and above, caused by application threads being stopped in the JVM.

And the total time for which application threads were stopped is 27.58 seconds.

CMS behaves in a similar manner. We thought it would be the GC waiting for mmapped files to be read from disk (a thread cannot reach a safepoint during that operation), but that doesn't explain such a huge time.

We'll try jHiccup to see if it provides any additional information. The test was done in a mixed AWS/OpenStack environment, with OpenJDK 1.7.0_45 and Cassandra 1.2.11. Upgrading to 2.0.x is not an option for us.
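In case it helps anyone trying the same thing, we intend to attach jHiccup as a Java agent, roughly like this (the agent option syntax may differ between jHiccup versions, so check its README; the jar path below is just an example):

    # Sketch: attach jHiccup to the Cassandra JVM via cassandra-env.sh
    JVM_OPTS="$JVM_OPTS -javaagent:/opt/jHiccup/jHiccup.jar"
    # jHiccup then records the pauses the application actually experiences,
    # independently of what gc.log claims, which should confirm the 27s stalls.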

Sorry, I have not had a chance to file a JIRA ticket. We have not been able to resolve the issue, but since Joel mentioned that upgrading to Cassandra 2.0.x solved it for them, we may need to upgrade. We are currently on Java 1.7 and Cassandra 1.2.8.

It's possible that this is a JVM issue, but if so there may be some remedial action we can take anyway. There are some more flags we should add, but we can discuss that once you open a ticket. If you could include the strange JMX error as well, that might be helpful.
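One pair of flags I'd suggest including when you do is something like the following (both are standard HotSpot options, but do verify them on your JVM build; the 4000ms delay is just an example value):

    # Sketch: have the JVM name the threads that are slow to reach a safepoint
    JVM_OPTS="$JVM_OPTS -XX:+SafepointTimeout"
    JVM_OPTS="$JVM_OPTS -XX:SafepointTimeoutDelay=4000"   # ms to wait before printing offenders

If a safepoint takes longer than the delay, the JVM prints the threads that have not yet reached it, which is exactly the information missing from the long-pause reports above.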

It would be appreciated if you could inform this thread of the JIRA ticket number, for the benefit of the community and Google searchers. :)