Description

Two out of sixteen nodes are ejecting active items because their mem_used is above the high water mark; the other nodes are well below it. The customer says that keys vary in size, but the larger ones should be spread randomly across the different nodes, and the number of keys on all nodes is roughly equal.

The two problem nodes show an ep_value_size much larger than that of a healthy node. However, looking at the sqlite data files, there is no significant difference in file size on disk (as seen, for example, in */membase.log).

FYI, the rise in data size on these two nodes seems to have started after a different node, 10.254.7.150, stopped responding to REST and membase was restarted (with 'service membase-server restart').

The mbcollect_info data for these servers is in S3. The logs are named:

membase 16: a good node, for comparison
membase 07 and membase 14: the trouble nodes that are ejecting items due to large memory usage
membase 11: the node that was restarted on Saturday

Can someone please take a look at this, and help me understand why the ep_value_size might be bloating up for these two nodes?

Farshid Ghods (Inactive)
added a comment - 22/Nov/11 8:04 PM Tim,
I will have a look at the diags tomorrow morning, but it would be helpful to get the following info from the customer:
1- number of items per node
2- mem_used on the nodes that are not ejecting active items
3- current state of the cluster after it flushed out the active items

Farshid Ghods (Inactive)
added a comment - 23/Nov/11 3:42 PM from Chiyoung:
Keeping 2 checkpoints only applies to the master node. A slave node cannot create a new checkpoint for itself; instead it receives checkpoint_start (_end) messages from the master.
In this case, the cluster has two replicas (master A -> slave B -> slave C), and the replication cursor for C on B got stuck and didn't move forward.
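The mechanism described above can be illustrated with a minimal sketch (all names here are illustrative toy code, not actual ep-engine internals): closed checkpoints can only be freed once every replication cursor has moved past them, so a cursor that never advances pins items in memory, which would inflate ep_value_size even though the item count and on-disk size stay flat.

```python
from collections import deque

class CheckpointManager:
    """Toy model: items live in checkpoints; a checkpoint is only
    freed after every registered cursor has advanced past it."""

    def __init__(self, cursors):
        self.checkpoints = deque()                  # each: list of (key, value)
        self.cursor_pos = {c: 0 for c in cursors}   # checkpoints consumed per cursor

    def add_checkpoint(self, items):
        self.checkpoints.append(list(items))

    def advance(self, cursor, n=1):
        """Replication to `cursor` made progress through n checkpoints."""
        self.cursor_pos[cursor] += n

    def free_closed_checkpoints(self):
        """Drop only the checkpoints that ALL cursors have passed."""
        freed = min(self.cursor_pos.values())
        for _ in range(freed):
            self.checkpoints.popleft()
        for c in self.cursor_pos:
            self.cursor_pos[c] -= freed
        return freed

    def mem_used(self):
        return sum(len(v) for cp in self.checkpoints for _, v in cp)

# Node B replicates to C; the cursor for C is stuck at position 0.
cm = CheckpointManager(cursors=["C"])
for i in range(5):
    cm.add_checkpoint([(f"key{i}", "x" * 1024)])
    cm.free_closed_checkpoints()   # frees nothing while C never advances

print(cm.mem_used())               # 5120: five 1 KB values pinned in memory

cm.advance("C", 5)                 # cursor finally moves (issue "resolves itself")
cm.free_closed_checkpoints()
print(cm.mem_used())               # 0: memory released once the cursor catches up
```

This also matches Perry's later observation that things eventually resolve themselves: as soon as the stuck cursor advances, the backlog of checkpoints becomes freeable and memory drops back to normal.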

Perry Krug
added a comment - 20/Dec/11 1:39 PM One thing that I had a question on regarding this. Is the cursor expected to be stuck completely, or will it eventually clear itself out? At the customer, we are seeing everything eventually resolve itself...I just want to make sure we're looking at the same issue.

Alexander Petrossian (PAF)
added a comment - 27/Jan/12 3:11 AM - edited Thanks for the great news, Chiyoung!
One small Q:
We understand there is no way to learn the state of replication; it should "just work", and that was fixed, right?