2-pass compression support

Details

Description

Quoting from BigTable paper: "Many clients use a two-pass custom compression scheme. The first pass uses Bentley and McIlroy's scheme, which compresses long common strings across a large window. The second pass uses a fast compression algorithm that looks for repetitions in a small 16 KB window of the data. Both compression passes are very fast—they encode at 100-200 MB/s, and decode at 400-1000 MB/s on modern machines."

The goal of this patch is to integrate a similar compression scheme in HBase.
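The two-pass structure can be sketched as a simple pipeline. This is only an illustration of the pass chaining, not the actual BMDiff/Zippy code: pass 1 is an identity placeholder standing in for BMDiff's long-range matching, and `java.util.zip.Deflater` at `BEST_SPEED` stands in for the fast small-window second pass. Class and method names are invented for the sketch.

```java
import java.io.ByteArrayOutputStream;
import java.util.zip.Deflater;
import java.util.zip.Inflater;

// Sketch of a two-pass compression pipeline in the spirit of the BigTable
// scheme. Pass 1 (BMDiff: long common strings across a large window) is an
// identity placeholder here; pass 2 uses Deflater at BEST_SPEED as a stand-in
// for the fast small-window compressor. Illustrative only.
public class TwoPassSketch {

    // Pass 1 placeholder: a real BMDiff pass would replace long repeated
    // strings found across a large window with back-references.
    static byte[] pass1(byte[] in) { return in; }
    static byte[] unpass1(byte[] in) { return in; }

    // Pass 2: fast LZ-style compression over a small window.
    static byte[] pass2(byte[] in) {
        Deflater d = new Deflater(Deflater.BEST_SPEED);
        d.setInput(in);
        d.finish();
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buf = new byte[4096];
        while (!d.finished()) out.write(buf, 0, d.deflate(buf));
        d.end();
        return out.toByteArray();
    }

    static byte[] unpass2(byte[] in) throws Exception {
        Inflater inf = new Inflater();
        inf.setInput(in);
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buf = new byte[4096];
        while (!inf.finished()) out.write(buf, 0, inf.inflate(buf));
        inf.end();
        return out.toByteArray();
    }

    public static byte[] compress(byte[] in) { return pass2(pass1(in)); }

    public static byte[] decompress(byte[] in) throws Exception {
        return unpass1(unpass2(in));
    }
}
```

Decompression simply runs the inverse passes in reverse order, which is why both passes must be cheap to decode.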

stack
added a comment - 02/Jun/10 15:08 @Pirroh Ignore my questions in previous issue. I see 'BMZ' here.
So, if bmz is not available, we'll just keep throwing runtime exceptions?
I wonder if instead we should fall back to default no-compression w/ a warn that bmz is missing.
Good stuff.
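The fallback being suggested could look something like the following. This is a hypothetical sketch, not HBase's actual Compression.java: the `chooseCodec` helper and the idea of probing for the codec's implementation class are invented for illustration.

```java
import java.util.logging.Logger;

// Hypothetical sketch: fall back to no compression with a warning when the
// requested codec's classes are missing, instead of throwing a runtime
// exception later. Names here are illustrative, not HBase's API.
public class CodecFallback {
    private static final Logger LOG = Logger.getLogger("CodecFallback");

    /**
     * Returns the requested codec name if its implementation class is on the
     * classpath; otherwise logs a warning and falls back to "NONE".
     */
    public static String chooseCodec(String requested, String implClassName) {
        try {
            Class.forName(implClassName);
            return requested;
        } catch (ClassNotFoundException e) {
            LOG.warning("Codec " + requested + " unavailable (" + implClassName
                    + " not found); falling back to NONE");
            return "NONE";
        }
    }
}
```

The trade-off discussed below still applies: with this behavior the table gets created either way, so a misconfigured codec only shows up as a warning rather than a failed create.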

Michele Catasta
added a comment - 02/Jun/10 15:38 Right,
{NAME=>'cfamily', COMPRESSION=>'BMZ'}
will do the job.
w.r.t. the runtime exception: at the moment, the same thing happens for LZO. I just reproduced the behavior I found in that class :)
Default fallback to NONE might be an option as well, but it would let you create the table anyway - so people who use hbase shell scripts to create tables might see regressions. A matter of taste, I'd say!
Anyway, if you agree on that, I'll create another JIRA to deal with Compression.java and update this patch as well.

stack
added a comment - 02/Jun/10 15:48 @Michele Understood. Fellas have complained about the way broken LZO manifests itself. HBase will actually take on writes. It's only when it goes to flush that it drops the edits, and in a way that is essentially hidden from the client – exceptions are thrown in the regionserver log. So, I'd say, make another issue if you don't mind, but it's not for you to fix, not unless you are inclined. It'd be about a better user experience when choosing a compression that is not supported or not properly installed.
Do you know of this wiki page: http://wiki.apache.org/hadoop/UsingLzoCompression? You might want to add a note at the end pointing to your new fancy stuff. You might even change the pointer over on the wiki home page to include BMdiff.
Good stuff.

Michele Catasta
added a comment - 07/Jun/10 17:21 @stack: addressed the user experience problem you were talking about in HBASE-2681. I updated the patch to depend on that code change (and linked the JIRA issue as well).