See above: diag0 is from the master node of a 3->4 rebalance, and diag3 is from the node being rebalanced in.

I just ran the following experiment:

3 million small items (about 100 bytes each) were loaded into a bucket with 256 vbuckets on a 3-node cluster. Everything was persisted to disk, and everything was memory-resident.

added a 4th node

started a very small load: ~130 sets per second (all of them updates)

and clicked the rebalance button

No indexes were defined.

I observed a much slower rebalance (4 minutes, versus 30 seconds) compared to both:

index-aware rebalance disabled

and rebalance without any load

Note that when index-aware rebalance is enabled, we wait for persistence of items on the vbucket move's destination node even when no indexes are defined. In fact this happens twice, in order to enable consistent views. We do that by creating a checkpoint and waiting for it to be persisted.
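A minimal sketch of that double wait, with hypothetical create_checkpoint()/wait_for_checkpoint_persisted() helpers standing in for the real ns_server/ep-engine machinery (names and flow here are illustrative only):

```c
/* Illustrative sketch only: the real implementation lives in
 * ns_server / ep-engine. Both helpers below are hypothetical
 * stand-ins for the actual checkpoint machinery. */
#include <stdint.h>
#include <stdio.h>

typedef uint16_t vbucket_id_t;

/* Hypothetical: open a new checkpoint on the destination node
 * for this vbucket and return its id. */
static uint64_t create_checkpoint(vbucket_id_t vb) { (void)vb; return 1; }

/* Hypothetical: block until everything up to this checkpoint
 * has been persisted to disk on the destination node. */
static void wait_for_checkpoint_persisted(vbucket_id_t vb, uint64_t id) {
    (void)vb; (void)id;
}

/* During an index-aware vbucket move we wait for persistence twice
 * on the destination node, even when no indexes are defined. */
static void move_vbucket_consistent_views(vbucket_id_t vb) {
    /* First wait: covers the backfill, expected to be slow. */
    uint64_t ckpt = create_checkpoint(vb);
    wait_for_checkpoint_persisted(vb, ckpt);

    /* Second wait: under a ~130 sets/sec load the new checkpoint
     * should hold fewer than 10 items, so this should be fast. */
    ckpt = create_checkpoint(vb);
    wait_for_checkpoint_persisted(vb, ckpt);
}

int main(void) {
    move_vbucket_consistent_views(0);
    puts("vbucket move complete");
    return 0;
}
```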

In the logs attached above I'm seeing that the first wait for persistence is slow (we expect that, since it includes the backfill).

But after the first checkpoint is persisted, we create another one and wait for it to be persisted too. Because our load is very small, we know the new checkpoint can have fewer than 10 items, so we don't expect replicating and persisting it to take any significant time. But the logs indicate that this is not the case.

Chiyoung Seo
added a comment - 07/Oct/12 4:51 PM

I've debugged the following cases:

(1) a rebalance from one node to two took 20 minutes with consistent views enabled, but without any load from the client

(2) a rebalance from one node to two took 2 hours with consistent views enabled and with a small load (2K SET ops/sec) from the client

For further debugging, I removed the couchstore API calls (couchstore_save_docs, couchstore_commit) from the ep-engine flusher execution path, to see whether this is caused by slow write throughput in couchstore. The rebalance with the same load (2K SET ops/sec) then took just 21 minutes.

I also looked at the timings of couchstore_save_docs and couchstore_commit in case (2) above, and saw that calling those two APIs to persist 10-15 dirty items per vbucket takes 5-10 ms on average, which implies a drain rate of 1.5-2K items/sec. The disk write queue usually stayed at 5-10K items, which indicates that a single vbucket takeover frequently takes 4-5 seconds.

I need to investigate this issue further, but it seems to me that it is not caused by the ep-engine layer at this time.
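Sanity-checking those figures with the midpoints of the quoted ranges (editorial arithmetic, not from the original comment):

```c
/* Back-of-the-envelope check of the figures quoted above (midpoints). */
#include <stdio.h>

int main(void) {
    double items_per_commit = 12.5;   /* 10-15 dirty items per vbucket   */
    double ms_per_commit    = 7.5;    /* 5-10 ms per save_docs + commit  */
    double queue_items      = 7500.0; /* disk write queue of 5-10K items */

    double drain_rate = items_per_commit / (ms_per_commit / 1000.0);
    printf("drain rate: ~%.0f items/sec\n", drain_rate);   /* ~1667    */

    printf("time to drain queue: ~%.1f sec\n",
           queue_items / drain_rate);                      /* ~4.5 sec */
    return 0;
}
```

The result (~1,667 items/sec, ~4.5 seconds to drain the queue) is consistent with the quoted 1.5-2K drain rate and the 4-5 second takeover.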


Aleksey Kondratenko (Inactive)
added a comment - 08/Oct/12 10:38 AM

Interesting data, Chiyoung. I'd like to point out, however, that while your finding of 5-10 ms per commit looks like a disk's rotational delay, in practice I know we're running with barriers disabled, and "short" fsyncs (which is what we have here) do not really wait for the disk. So there could be something else slowing us down somewhere, in mccouch or couchstore for example.
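One way to check that directly is to time fdatasync on small appends in isolation. A minimal probe, assuming Linux (illustrative, not part of the original investigation):

```c
/* Minimal fdatasync latency probe: append a small block and time the
 * sync, to see whether "short" syncs actually wait for the platter. */
#define _POSIX_C_SOURCE 200809L
#include <fcntl.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

int main(void) {
    int fd = open("probe.dat", O_WRONLY | O_CREAT | O_APPEND, 0644);
    if (fd < 0) { perror("open"); return 1; }

    char buf[4096] = {0};
    for (int i = 0; i < 100; i++) {
        struct timespec t0, t1;
        if (write(fd, buf, sizeof(buf)) != (ssize_t)sizeof(buf)) {
            perror("write"); return 1;
        }
        clock_gettime(CLOCK_MONOTONIC, &t0);
        if (fdatasync(fd) != 0) { perror("fdatasync"); return 1; }
        clock_gettime(CLOCK_MONOTONIC, &t1);
        double ms = (t1.tv_sec - t0.tv_sec) * 1e3 +
                    (t1.tv_nsec - t0.tv_nsec) / 1e6;
        printf("fdatasync #%d: %.2f ms\n", i, ms);
    }
    close(fd);
    return 0;
}
```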


1) I commented out the fsync calls in couchstore_commit, and the rebalance with the constant load from the client took 25 minutes.

2) I made some changes in ep-engine so that we batch writes for all vbuckets and issue the couchstore_commit requests at the end of the transaction, but didn't see much difference; the rebalance still took 2 hours.
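Schematically, that change looks like the following. VbDb, save_batch() and commit() are hypothetical stand-ins for the real couchstore_save_docs()/couchstore_commit() calls; the actual patch was inside the ep-engine flusher:

```c
/* Schematic sketch of batching writes per vbucket and committing once
 * at the end of the flush cycle, instead of after every small write.
 * All types and helpers here are hypothetical stand-ins. */
#include <stddef.h>

#define NUM_VBUCKETS 256
#define BATCH_MAX    500

typedef struct { int fd; } VbDb;                 /* hypothetical handle */
typedef struct { const char *key, *val; } Item;  /* hypothetical item   */

/* Hypothetical wrappers around couchstore_save_docs / couchstore_commit. */
static void save_batch(VbDb *db, Item *items, size_t n) {
    (void)db; (void)items; (void)n;
}
static void commit(VbDb *db) { (void)db; }

static void flush_cycle(VbDb dbs[NUM_VBUCKETS],
                        Item batches[NUM_VBUCKETS][BATCH_MAX],
                        size_t counts[NUM_VBUCKETS]) {
    /* Phase 1: write every vbucket's dirty items without syncing. */
    for (int vb = 0; vb < NUM_VBUCKETS; vb++)
        if (counts[vb] > 0)
            save_batch(&dbs[vb], batches[vb], counts[vb]);

    /* Phase 2: one commit (fsync) per vbucket file at the end of
     * the cycle. */
    for (int vb = 0; vb < NUM_VBUCKETS; vb++)
        if (counts[vb] > 0)
            commit(&dbs[vb]);
}

int main(void) {
    static VbDb dbs[NUM_VBUCKETS];
    static Item batches[NUM_VBUCKETS][BATCH_MAX];
    static size_t counts[NUM_VBUCKETS];
    counts[0] = 1;
    batches[0][0] = (Item){ "key-0", "value-0" };
    flush_cycle(dbs, batches, counts);
    return 0;
}
```

Since each vbucket lives in its own .couch file, batching can at best reduce commits to one fsync per vbucket per cycle, which may explain why it did not help much here.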

So, for the cases where the rebalance takes 2 hours, the wall-clock times for the couchstore low-level system calls (fsync, pwrite, pread) are as follows:

Actually, given that each of our fdatasync calls extends the file size, we are modifying file metadata (the list of free blocks, etc.). So an fdatasync call actually does at least one physical write to the filesystem journal (to journal the metadata update) and then a physical write to the actual file, with a potentially very long disk seek in between. Thus 100 milliseconds per fsync is perhaps not that unexpected.

Aleksey Kondratenko (Inactive)
added a comment - 09/Oct/12 2:16 PM

I was able to speed up commit times 2x by reserving space with Linux's fallocate (with FALLOC_FL_KEEP_SIZE; otherwise our constant re-opening would cause huge slowness in finding the last usable header). We could remove more of the metadata-syncing overhead by pre-extending the file with actual zero bytes. That way our fdatasync would not have to update file metadata at all, but as I pointed out above, today this would hit problems with re-opening .couch files and with finding the last valid header.
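A minimal sketch of that pre-allocation trick, assuming Linux (FALLOC_FL_KEEP_SIZE is the real flag; the file name and chunk size are illustrative):

```c
/* Sketch: reserve blocks ahead of the append-only writes so that
 * fdatasync() no longer has to journal block allocation on every
 * commit. FALLOC_FL_KEEP_SIZE leaves the reported file size
 * unchanged, which matters because couchstore scans back from the
 * end of the file for the last valid header on re-open; appends
 * still update the size, so some metadata traffic remains — hence
 * only ~2x, not the full win of pre-extending with real zeros. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

#define PREALLOC_CHUNK (16 * 1024 * 1024) /* reserve 16 MB at a time */

int main(void) {
    int fd = open("0.couch", O_WRONLY | O_CREAT | O_APPEND, 0644);
    if (fd < 0) { perror("open"); return 1; }

    off_t end = lseek(fd, 0, SEEK_END);

    /* Reserve blocks past EOF without changing the visible size. */
    if (fallocate(fd, FALLOC_FL_KEEP_SIZE, end, PREALLOC_CHUNK) != 0)
        perror("fallocate");

    /* Appends now land in already-allocated blocks. */
    char block[4096] = {0};
    if (write(fd, block, sizeof(block)) != (ssize_t)sizeof(block))
        perror("write");
    fdatasync(fd);

    close(fd);
    return 0;
}
```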

Anyway, we clearly need an order-of-magnitude improvement rather than 2x, so we need something else.

Dirty patch can be found here: http://paste.debian.net/198227/


Aleksey Kondratenko (Inactive)
added a comment - 09/Oct/12 2:18 PM

I think we have enough data to prove that fsync is indeed the problem here. I think we should get Damien and Peter involved to find a solution.

Aleksey Kondratenko (Inactive)
added a comment - 09/Oct/12 3:08 PM

There's another thing I don't understand. I'm doing around 130 sets (all of them updates) per second, yet stats show >200 disk updates per second (which is OK, considering replica writes too).

Even weirder, I'm seeing 4.5 MB of writes per second in iotop. Even at 300 disk updates per second, that's ~15 KB of disk writes per item. My items are small, by the way: less than 100 bytes. That seems quite big, i.e. about 4 disk blocks per item.
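Spelling that arithmetic out (editorial check of the numbers above):

```c
/* Back-of-the-envelope write amplification from the iotop numbers. */
#include <stdio.h>

int main(void) {
    double bytes_per_sec   = 4.5 * 1024 * 1024; /* iotop: ~4.5 MB/s     */
    double updates_per_sec = 300.0;             /* disk updates per sec */
    double item_size       = 100.0;             /* payload bytes/item   */

    double per_item = bytes_per_sec / updates_per_sec;  /* ~15.4 KB */
    printf("disk writes per item: ~%.1f KB\n", per_item / 1024.0);
    printf("amplification vs payload: ~%.0fx\n", per_item / item_size);
    printf("4 KB blocks per item: ~%.1f\n", per_item / 4096.0);
    return 0;
}
```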

We know, however, that this overhead will be more or less stable at higher write rates.


kzeller
added a comment - 26/Oct/12 3:00 PM

Added to RN as: Prioritize flushing pending vbuckets over regular vbuckets. This is a performance improvement used for rebalancing buckets that have no views or design docs when consistent view mode is enabled.
