Determine Proper Defaults for Compaction ThreadPools

Description

With the introduction of HBASE-1476, we now have multithreaded compactions and two separate thread pools for large and small compactions. However, this is disabled by default until we can determine a proper default throttle point. Opening this JIRA to log discussion on how to select a good default for this case.

Activity


Nicolas Spiegelberg
added a comment - 11/May/11 21:26
As a data point: in our cluster we do not automatically split regions and keep our region count low, so we have StoreFiles that reach the 10GB range. Obviously, if all the compaction threads were processing a 10GB compaction, the queue would get stopped up. We put the throttle point at 500MB. Compactions are network-bound: we have 1Gbps network links and see roughly 40MB/s (3x replication == 1Gbps), so about 12 seconds per compaction max on the small thread pool. Our use case therefore doesn't directly correspond to the common auto-split use case.
My original thought is to default the throttle to:
min( "hbase.hregion.memstore.flush.size" * 2, "hbase.hregion.max.filesize" / 2)
Note that the default split/flush ratio is 4, so this number lands in the middle of that range. Since most users enable compression, the actual on-disk flush size is around 20% of the MemStore size (so flushSize*2 on disk is really more like flushSize*10 of MemStore data). I will submit a patch with this default. Please feel free to chime in with your experience using it and we'll see if we can improve this default.
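Expressed as code, the proposed default works out as follows. This is a minimal sketch: the class and method names, and the 64MB/256MB values (a common flush size and max file size of that era, matching the split/flush ratio of 4), are illustrative assumptions, not the actual HBase source.

```java
// Illustrative sketch of the proposed HBASE-3877 default throttle point.
// Names and the concrete size values are hypothetical.
public class CompactionThrottle {

    /**
     * Compactions at or below this size stay in the small pool;
     * larger ones are promoted to the large pool.
     */
    public static long defaultThrottlePoint(long memstoreFlushSize, long maxFileSize) {
        // min(flushSize * 2, maxFileSize / 2): roughly twice a flushed
        // file, but never more than half the region split size.
        return Math.min(memstoreFlushSize * 2, maxFileSize / 2);
    }

    public static void main(String[] args) {
        long flushSize = 64L << 20;  // 64MB flush size (illustrative)
        long maxFile = 256L << 20;   // 256MB max file size (split/flush ratio of 4)
        // Both branches of the min() meet at 128MB here.
        System.out.println(defaultThrottlePoint(flushSize, maxFile)); // prints 134217728 (128MB)
    }
}
```

With these defaults, both terms of the min() agree at 128MB, which is the "middle" the comment above describes.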


Hudson
added a comment - 15/May/12 23:54
Integrated in HBase-TRUNK-on-Hadoop-2.0.0 #5 (See https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/5/ )
HBASE-5867 Improve Compaction Throttle Default
Summary:
We recently had a production issue where our compactions fell
behind because our compaction throttle was improperly tuned and
accidentally upgraded all compactions to the large pool. The default
from HBASE-3877 makes 1 bad assumption: the default number of flushed
files in a compaction. MinFilesToCompact should be taken into
consideration. As a default, it is less damaging for the large thread
to be slightly higher than it needs to be and only get timed-majors
versus having everything accidentally promoted.
Test Plan: - mvn test
Reviewers: JIRA, Kannan, Liyin
Reviewed By: Kannan
CC: stack
Differential Revision: https://reviews.facebook.net/D2943 (Revision 1338809)
Result = FAILURE
nspiegelberg :
Files :
/hbase/trunk/src/main/java/org/apache/hadoop/hbase/regionserver/CompactSplitThread.java
/hbase/trunk/src/main/java/org/apache/hadoop/hbase/regionserver/Store.java
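The fix described in the commit summary can be sketched as follows. This is a hedged illustration of the direction (factoring the minimum number of files per compaction into the throttle point so routine compactions of freshly flushed files are not promoted to the large pool), not the actual HBASE-5867 patch; the names and the exact expression are assumptions.

```java
// Hypothetical sketch: account for MinFilesToCompact in the throttle
// default, so only genuinely large compactions (and timed majors)
// land in the large pool. Not the actual HBASE-5867 change.
public class ImprovedThrottle {

    public static long throttlePoint(long memstoreFlushSize, int minFilesToCompact) {
        // A compaction of minFilesToCompact flush-sized files should
        // stay in the small pool; the factor of 2 leaves headroom,
        // erring on the side of a slightly high threshold as the
        // summary recommends.
        return 2L * minFilesToCompact * memstoreFlushSize;
    }

    public static void main(String[] args) {
        long flushSize = 128L << 20;  // 128MB flush size (illustrative)
        int minFiles = 3;             // hbase.hstore.compaction.min defaults to 3
        System.out.println(throttlePoint(flushSize, minFiles)); // prints 805306368 (768MB)
    }
}
```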

Lars Francke
added a comment - 20/Nov/15 12:43
This issue was closed as part of a bulk closing operation on 2015-11-20. All issues that have been resolved and where all fixVersions have been released have been closed (following discussions on the mailing list).