Farshid Ghods (Inactive)
added a comment - 05/Feb/12 5:58 PM This happens when the node has to eject items after it reaches the low and high water marks.
The mem_used stat reports 6 GB, but the OS reports that the memcached.exe process is using 10 GB.
We don't have enough evidence that this happens during rebalancing, but ejection and disk fetches increase the swap usage and memcached.exe memory usage.
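The water-mark ejection described above can be sketched as follows. This is a minimal illustration, not the actual ep-engine code; the struct and field names (Pager, memUsed, residentValueSizes) are hypothetical:

```cpp
#include <cassert>
#include <cstddef>
#include <deque>

// Hypothetical sketch of water-mark driven ejection. When mem_used crosses
// the high water mark, the pager evicts resident values until mem_used
// drops back to the low water mark.
struct Pager {
    std::size_t memUsed = 0;
    std::size_t lowWaterMark = 0;
    std::size_t highWaterMark = 0;
    std::deque<std::size_t> residentValueSizes;  // sizes of cached values

    bool shouldEject() const { return memUsed >= highWaterMark; }

    // Eject values until we are back under the low water mark. Note that
    // decrementing memUsed does not guarantee the OS-visible RSS of the
    // process shrinks: the freed blocks may sit in the allocator's free
    // lists, which is the mem_used-vs-RSS gap described in this ticket.
    void ejectUntilLowWaterMark() {
        while (memUsed > lowWaterMark && !residentValueSizes.empty()) {
            memUsed -= residentValueSizes.front();
            residentValueSizes.pop_front();
        }
    }
};
```

With low/high water marks of 60/80 and 90 units resident, shouldEject() returns true, and ejection frees values until memUsed is at or below 60.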


Chiyoung Seo
added a comment - 05/Feb/12 6:28 PM Our memory accounting stat "mem_used" is incremented or decremented within the constructor / destructor of the Blob value class, which are automatically invoked when memory is allocated or deallocated. Therefore, if the "mem_used" stat shows 6 GB while memcached resident memory is 10 GB, this means that even though we explicitly release memory to the Windows memory allocator, the allocator doesn't reuse those freed memory areas. This is mostly caused by memory fragmentation issues in the Windows default allocator.
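The constructor/destructor accounting described above can be sketched like this. This is an assumed, simplified model of the real Blob class, shown only to illustrate why mem_used can lag behind the RSS the OS reports:

```cpp
#include <atomic>
#include <cstddef>
#include <string>

// Engine-side accounting counter: tracks what the engine *thinks* it holds.
// The process RSS reported by the OS can stay higher if the allocator does
// not return freed pages, e.g. under heap fragmentation.
static std::atomic<std::size_t> mem_used{0};

// Simplified stand-in for the Blob value class: the counter is bumped in
// the constructor and dropped in the destructor, so accounting happens
// automatically whenever a value is created or destroyed.
class Blob {
public:
    explicit Blob(std::string data) : data_(std::move(data)) {
        mem_used.fetch_add(data_.size(), std::memory_order_relaxed);
    }
    ~Blob() {
        mem_used.fetch_sub(data_.size(), std::memory_order_relaxed);
    }
    Blob(const Blob&) = delete;
    Blob& operator=(const Blob&) = delete;

private:
    std::string data_;
};
```

When a Blob goes out of scope, mem_used drops immediately, but the underlying heap pages may remain charged to the process, which is exactly the 6 GB vs 10 GB discrepancy observed here.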

Steve Yen
added a comment - 20/Feb/12 12:13 PM Adding this here, because I know I'll never be able to find this again in my webmail....
> Trond,
>
> I didn't hear about this possibility before.
> Regardless, is that something I can do at runtime, or do we
> have to recompile ep-engine with that option?
> If you have the binary compiled with that option, I can run some tests
> on it today to confirm that standard malloc itself is sufficient.
>
I believe all you need to do is to export two variables at runtime:
./ns_server/couchbase-server.sh.in:GLIBCPP_FORCE_NEW=1
./ns_server/couchbase-server.sh.in:export GLIBCPP_FORCE_NEW
./ns_server/couchbase-server.sh.in:GLIBCXX_FORCE_NEW=1
./ns_server/couchbase-server.sh.in:export GLIBCXX_FORCE_NEW
Trond
> Farshid
>
> On Feb 17, 2012, at 11:32 PM, Trond Norbye <trond.norbye@couchbase.com> wrote:
>
>> Did we ever try running the test with the standard malloc and using the FORCE_NEW flag? I've asked this question a number of times now, but I haven't gotten an answer yet. Replacing the standard memory allocator with tcmalloc didn't solve the problem itself, so it seemed that the root of the problem was the memory allocator inside the C++ layer.
>>
>> From the experiments we've had during the integration, it didn't feel that the Windows support was that mature and well tested. It feels a bit like "risky business" to me to give a version to a customer where we change such a central component on a platform with so little testing. I would personally sleep way better at night using Microsoft's memory allocator if it solves the problem.
>>
>> just my 0.5 nok
>>
>> trond