incubator-cassandra-user mailing list archives

Maybe I should ask the question a different way.
Currently, if all index samples do not fit in the Java heap, the JVM will
eventually OOM and the process will crash. The proposed change sounds like
it will move the index samples to off-heap storage, but if that storage
can't hold all of the samples, the process will still be killed.
Can the index sample storage be treated more like the key cache or row cache,
where the total space used can be limited to something less than all
available system RAM, and space is recycled using an LRU (or configurable)
algorithm?
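The bounded, LRU-evicting behaviour asked about above can be sketched in a few lines of Java. This is purely illustrative, not Cassandra's actual index-summary code; the class and cap are hypothetical, and a real implementation would cap by memory footprint rather than entry count:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal sketch of a capped store that evicts the least-recently-used
// entry instead of growing until the JVM OOMs (or the kernel kills it).
public class BoundedSampleCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxEntries;

    public BoundedSampleCache(int maxEntries) {
        // accessOrder=true: iteration order is least-recently-used first
        super(16, 0.75f, true);
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // Evict the coldest entry once the cap is exceeded.
        return size() > maxEntries;
    }

    public static void main(String[] args) {
        BoundedSampleCache<String, Long> cache = new BoundedSampleCache<>(2);
        cache.put("a", 1L);
        cache.put("b", 2L);
        cache.get("a");     // touch "a", so "b" is now the coldest entry
        cache.put("c", 3L); // exceeds the cap of 2: "b" is evicted
        System.out.println(cache.keySet()); // prints [a, c]
    }
}
```

The key cache and row cache already behave roughly this way (their sizes are bounded in cassandra.yaml); the question is whether index samples can get the same treatment.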
-Bryan
On Mon, May 13, 2013 at 9:10 PM, Bryan Talbot <btalbot@aeriagames.com> wrote:
> So will Cassandra provide a way to limit its off-heap usage to avoid
> unexpected OOM kills? I'd much rather have performance degrade when the
> index samples no longer fit entirely in memory than have the process
> killed with no way to stabilize it without adding hardware or removing data.
>
> -Bryan
>
>
> On Fri, May 10, 2013 at 7:44 PM, Edward Capriolo <edlinuxguru@gmail.com> wrote:
>
>> If you use up your off-heap memory, Linux has an OOM killer that will
>> kill a random task.
>>
>>
>> On Fri, May 10, 2013 at 11:34 AM, Bryan Talbot <btalbot@aeriagames.com> wrote:
>>
>>> If off-heap memory (for index samples, bloom filters, row caches, key
>>> caches, etc.) is exhausted, will Cassandra experience a memory allocation
>>> error and quit? If so, are there plans to make the off-heap usage more
>>> dynamic, allowing less-used pages to be replaced with "hot" data and the
>>> paged-out / "cold" data read back in again on demand?
>>>
>>>
>>>
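The kernel OOM-killer behaviour Edward mentions can be observed directly through /proc on Linux. A minimal check (Linux-only; the current shell's PID stands in for the Cassandra JVM's):

```shell
pid=$$
# Badness score the OOM killer uses when choosing a victim (higher = more likely).
cat /proc/$pid/oom_score
# Tunable bias in [-1000, 1000]; writing -1000 exempts a process entirely.
cat /proc/$pid/oom_score_adj
```

Note the victim is not literally random: the kernel picks the process with the highest badness score, which a large off-heap-heavy JVM is likely to be.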