Re: Bootstrap OOM issues with Cassandra 3.11.1

Hi,

Thanks for the fast response!

We are not using any materialized views, but there are several indexes. I don't have a recent heap dump, and it will be about 24 hours before I can generate an interesting one, but most of the memory was allocated to byte buffers, which isn't entirely helpful on its own.
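For what it's worth, this is roughly how I plan to capture the next histogram and dump once the node is back in the bad state (a sketch assuming jmap from the bundled JDK; <cassandra-pid> stands in for the real process id):

    # class histogram of live objects, printed to a file
    jmap -histo:live <cassandra-pid> > histo.txt

    # full heap dump for MAT / VisualVM / YourKit
    jmap -dump:live,format=b,file=/var/tmp/cassandra-heap.hprof <cassandra-pid>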

The nodetool cfstats output is also below.

I also see a lot of flushing happening, but it seems like the allocations are too small and numerous for the flushes to be effective. Here are the messages I see:

Upgrading to 3.11.3 may fix it (there were some memory recycling bugs fixed recently), but analyzing the heap will be the best option.

If you can print out the heap histogram and stack trace, or open a heap dump in YourKit, VisualVM, or MAT and show us what's at the top of the retained objects, we may be able to figure out what's going on.

I'm hitting JVM instability / OOM errors when attempting to auto bootstrap a 9th node into an existing 8 node cluster (256 tokens). Each machine has 24 cores, 148GB RAM, and 10TB of disk (2TB used). Under normal operation the 8 nodes have JVM memory configured with Xms35G and Xmx35G, and handle 2-4 billion inserts per day. There are never updates, deletes, or sparsely populated rows.
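For reference, the heap lines in jvm.options on the existing nodes are simply:

    -Xms35G
    -Xmx35G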

For the bootstrap node, I've tried memory values from 35GB to 135GB in 10GB increments. I've tried both memtable_allocation_type settings (heap_buffers and offheap_buffers). I've not tried modifying memtable_cleanup_threshold, but instead have tried memtable_flush_writers from 2 to 8. I've tried memtable_(off)heap_space_in_mb from 20000 to 60000. I've tried both CMS and G1 garbage collection with various settings.
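To make that concrete, here is the shape of the cassandra.yaml block I've been varying; the values shown are just one of the combinations tried, not a recommendation:

    memtable_allocation_type: offheap_buffers    # also tried heap_buffers
    memtable_flush_writers: 8                    # tried 2 through 8
    memtable_heap_space_in_mb: 20000             # tried 20000 through 60000
    memtable_offheap_space_in_mb: 20000          # tried 20000 through 60000
    # memtable_cleanup_threshold left at its default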

Typically, after streaming roughly 2TB of data, CPU load will hit a maximum, and the "nodetool info" heap memory will, over the course of an hour, approach the maximum. At that point, CPU load will drop to a single thread with minimal activity until the system becomes unstable and eventually the OOM error occurs.
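I'm watching that heap figure with something like this on the bootstrapping node (the 60s interval is arbitrary):

    watch -n 60 'nodetool info | grep "Heap Memory"'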

An excerpt of the system log is below. What I consistently see is that the MemtableFlushWriter and MemtableReclaimMemory pending queues grow as memory becomes depleted, while their completed counts stop changing a few minutes after the CPU load spikes.
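A quick way to watch the same pools live from the command line, rather than waiting for the log lines, is something like this (MutationStage included because of the write backlog mentioned below):

    watch -n 30 'nodetool tpstats | egrep "Pool Name|MutationStage|MemtableFlushWriter|MemtableReclaimMemory"'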

One other data point: there seems to be a huge number of mutations that arrive after most of the stream has occurred. concurrent_writes is set to 256, with the queue getting as high as 200K before dropping back down.

Any suggestions for yaml or JVM changes? jvm.options is currently the default with only the memory set to the maximum described above, and the current YAML file is below.