Posted by Soulskill on Tuesday February 21, 2012 @06:10PM
from the ever-faster-ever-smaller dept.

An anonymous reader writes "Paul Tyma, creator of Mailinator, writes about a greedy algorithm to analyze the huge amount of email Mailinator receives and finds ways to reduce its memory footprint by 90%. Quoting: 'I grabbed a few hundred megs of the Mailinator stream and ran it through several compressors. Mostly just stuff I had on hand 7z, bzip, gzip, etc. Venerable zip reduced the file by 63%. Not bad. Then I tried the LZMA/2 algorithm (7z) which got it down by 85%! Well. OK! Article is over! Everyone out! 85% is good enough. Actually — there were two problems with that result. One was that, LZMA, like many compression algorithms build their dictionary based on a fixed dataset. As it compresses it builds a dictionary of common sequences and improves and uses that dictionary to compress everything thereafter. That works great on static files — but Mailinator is not a static file. Its a big, honking, several gigabyte cache of ever changing email. If I compressed a million emails, and then some user wanted to read email #502,922 — I'd have to "seek" through the preceding half-million or so to build the dictionary in order to decompress it. That's probably not feasible.'"

Alright, I apologize. I was in the wrong. It also came off significantly more sarcastic than I meant it to. The point that I tried (and failed) to make was that there really is nothing that should make anyone feel dumb; it's really just a lack of learning that can be fixed. Thank you for calling me out.

Just code "Prince of Nigeria" for 1 bit and you've got (17*8==136):1 compression. Continue with that line of thinking... "expand your manhood", "Pass this along to a friend", "Dear beloved in Christ", etc.

Related anecdote: Way back when, as relatively innocent SW listeners, some friends and I thought it would be awesome to listen in on phone calls. They were all over; radiotelephone, on C-band satellite, etc. You just had to figure out where they were. Well, after about an hour of actual listening, we deter

I used to think that the difference between the bright ones and the not so bright was really just education. That intelligence was really a measure of curiosity and drive to learn. Attempting to teach people material has cured me of this fantasy.

For every person who just needs education, or needs the material presented to them in another way that clicks for them, there are thousands who simply don't and won't get it no matter how it is presented.

Close, actually. Very close. He splits the headers, it's true, but he compresses the emails together so that emails that are exactly (or almost exactly) the same ("get viagra now!" or a newsletter) don't have to be stored in different places in memory. Only large emails get LZMA (much better than bzip, FYI).

Actually, the core of his compression scheme seems to be constructing an LZW dictionary but using line patterns instead of bits or letters. The reason it works is that he jumps ahead of all those arrangements that would otherwise have been built letter by letter ("a", "an", "and", "and ", "and h", etc.), which is what makes LZW and its variants slow.

It was clever, but any information theory course should tell you that choosing your "default" symbols correctly is very important. He did just that (:

For text, bzip2 is actually quite good. It is also substantially faster than LZMA, which means he may have been able to hit his 10MB/s mark and compress everything. Further, bzip2 actually operates in blocks (max 900 KB) using up to 6 dictionaries. I'd actually assume pretty much all compression algorithms at least support a mode amenable to streaming, if it's not baked in from the get-go. In general, more dictionaries are actually better, if you can get away with the overhead. A super giant dictionary, for g

Close enough. He essentially used deduplication instead of real compression (well, he also used some compression in the end). He essentially split messages into headers/body, then used line deduplication, then compressed any messages above a certain threshold.
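A minimal sketch of that idea in Python (hypothetical names, not the author's actual code): split each message at the blank line between headers and body, intern each line once in a shared pool, and store a message as references into that pool.

# Hypothetical sketch of line-level deduplication; not Mailinator's actual code.
class LinePool:
    def __init__(self):
        self.lines = []    # one stored copy of each distinct line
        self.index = {}    # line text -> position in self.lines

    def intern(self, line):
        if line not in self.index:
            self.index[line] = len(self.lines)
            self.lines.append(line)
        return self.index[line]

pool = LinePool()

def store_message(raw):
    header, _, body = raw.partition("\n\n")   # headers end at the first blank line
    return {
        "header": [pool.intern(line) for line in header.splitlines()],
        "body": [pool.intern(line) for line in body.splitlines()],
    }

def load_message(ref):
    header = "\n".join(pool.lines[i] for i in ref["header"])
    body = "\n".join(pool.lines[i] for i in ref["body"])
    return header + "\n\n" + body

Identical spam bodies and repeated newsletter lines then cost one stored copy no matter how many mailboxes receive them.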

Meh. Compression is replacing byte sequences with a more optimal representation (it's a vocabulary transformation), while deduplication is replacing identical byte sequences with a single token (it's a storage transformation). While the approaches are very similar, the difference is in the size of the atom and the scope of the data set.

Compression in computing already has a very specific meaning (it's a stream operation), and in general technical people do not like overloading (cue the "copyri

I agree with your sentiment wholeheartedly: precision in language and conversation is important.

Compression in computing already has a very specific meaning (it's a stream operation)

Compression is an aspect of information theory and entropy. From this perspective, it is the reduction of redundant bits of information in a given corpus (which is usually a stream because that's natural, but I don't know that there is an inherent requirement). All software

The end result is that he made his own compression-for-emails: it scans strings in every email and stores each shared string once in memory, with the emails storing only pointers to the strings. For large emails (he says >20k as an estimate), he applies LZMA on top of that, with a sliding dictionary based on the emails from the last few hours or so.
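A rough sketch of that last step (the 20k cutoff and the function names are assumptions based on the article's description; the sliding dictionary over recent emails is not shown here):

import lzma

LARGE_THRESHOLD = 20 * 1024    # assumed cutoff, per the article's ">20k" estimate

def maybe_compress(body: bytes):
    # Small messages stay as interned lines/pointers; only big ones get LZMA.
    if len(body) < LARGE_THRESHOLD:
        return ("raw", body)
    return ("lzma", lzma.compress(body, preset=6))

def load(tagged):
    kind, data = tagged
    return lzma.decompress(data) if kind == "lzma" else data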

All in all a very good read for someone (like me) who has an interest in data compression but knows little about it yet. I like to read other people's thought processes.

If you enjoyed reading that, you might also enjoy reading this [ejohn.org] and the follow-up [ejohn.org] about efficiently storing a dictionary of words and dealing with memory vs. processing trade-offs.

I did very much enjoy reading that, thank you. The genius of many people trying to solve their own specific problems never ceases to amaze me. (I was also regrettably unaware of what a "trie" was until now; learn something new every day.)

Tries and Bloom filters are wonderful data structures because they are simple. If you want something a tad bit more complicated, use Locality-Sensitive Hashing [stanford.edu] to find similar documents in a big set of documents.
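For anyone else meeting tries for the first time, a bare-bones sketch (illustrative only): each node maps one character to a child node, so shared prefixes are stored once and lookup cost is proportional to the length of the word, not the size of the dictionary.

# Minimal trie: shared prefixes stored once, lookups walk one node per character.
class Trie:
    def __init__(self):
        self.root = {}

    def add(self, word):
        node = self.root
        for ch in word:
            node = node.setdefault(ch, {})
        node["$"] = True    # end-of-word marker

    def contains(self, word):
        node = self.root
        for ch in word:
            if ch not in node:
                return False
            node = node[ch]
        return "$" in node

t = Trie()
t.add("mail")
t.add("mailinator")
print(t.contains("mail"), t.contains("mailin"))   # True False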

Patrick appears to be in the wrong in that article, incidentally. He acknowledges the fact that filesystems use more space for multiple files than for a single file, but doesn't seem to understand that the only way his "compressor" could have worked is by using a delimiter or marker of some kind to indicate where data was stripped, and that said delimiter must take up some amount of space.

His rebuttal is basically "it's not my fault that modern filesystems have to use non-zero space to store information".

He would have done a much better job checking whether original.dat happened to have a square, cube, etc. number in its first N characters. I mean, if the first 520 hex characters comprise a hex number that you can take the cube (or higher) root of, you would be able to use that root as a magic number, and the operation to "exponentize" it again would contain the "hidden information". With a large enough number and a large enough root, the difference between the two might be large enough to net you some savings.

I'm tempted to try to write a compressor based on this now to try to win that challenge:

for N = 200 to bytesInFile do (
    if IsInt(cubeRoot(readBytes(N))) then (
        Output("Magic number is " & cubeRoot(readBytes(N)))
        TruncateBytesFromFile(N)
    )
)

The decompressor would be astonishingly small: just append cube(MagicNumber) to the original file.

In fact, as I think about this, given enough CPU time and a large enough file (let's say 50 MB), there is almost no file that you could not compress -- set the minimum-length hex number to check, and look for roots that yield integers, starting with X^0.33 and counting down to X^0.01. Eventually you would find a root that would work, and with the size of the numbers you would be working with, the space savings would be incredible. You could even write a general decompressor, and make the first 20 bytes of the file record what the magic number and the exponent were.

A quick check (cubing 0x9999 9999 9999) reveals that you could drop from 47 bytes to 12 bytes if your first "hit" was at byte 47. Imagine if your first hit was at 200 bytes :)
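If anyone wants to play with the scheme described above, here is a hedged sketch of the prefix check (the helper names are mine; an exact integer cube root is needed because floating point falls apart at these sizes):

def icbrt(n: int) -> int:
    # Exact integer cube root via binary search; floats lose precision on big ints.
    lo, hi = 0, 1 << ((n.bit_length() + 2) // 3 + 1)
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if mid ** 3 <= n:
            lo = mid
        else:
            hi = mid - 1
    return lo

def find_cube_prefix(data: bytes, min_len: int = 200):
    # Scan prefixes of the file for one that is a perfect cube as a big-endian integer.
    for n in range(min_len, len(data) + 1):
        x = int.from_bytes(data[:n], "big")
        r = icbrt(x)
        if r ** 3 == x:
            return n, r    # the first n bytes could be replaced by r
    return None

This only shows how to search for a magic number; it says nothing about where the length marker and the root themselves get stored.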

Can anyone comment on how this is working, and where the information is being hidden in this scheme?

Part of the problem is that Mike Goldman makes as if to outline precise technical constraints on the problem (data file of such size, you tell me this, I send you that, you send me those, they output so-and-so) but leaves the spirit of the bet implicit. The challenge is about compression, yes, but if you start to give precise constraints on how the bet can be won, you start to imply that any activity within the constraints is fair game.

One was that, LZMA, like many compression algorithms build their dictionary based on a fixed dataset. As it compresses it builds a dictionary of common sequences and improves and uses that dictionary to compress everything thereafter.

What?! LZMA keeps a dictionary of recent data, not a "fixed dataset".

Its a big, honking, several gigabyte cache of ever changing email. If I compressed a million emails, and then some user wanted to read email #502,922 — I'd have to "seek" through the preceding half-million or so to build the dictionary in order to decompress it. That's probably not feasible.

This is called a solid archive; what the author wants is a non-solid archive.
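Roughly, the difference (a toy illustration using Python's lzma module, not 7-Zip's actual container format):

import lzma

emails = [b"Dear user, please confirm your registration..."] * 3   # stand-in messages

# Solid: one stream over everything; reading message N means decompressing
# everything stored before it, but shared redundancy compresses very well.
solid = lzma.compress(b"".join(emails))

# Non-solid: each message is its own stream; any one can be decompressed alone,
# at the cost of losing the redundancy shared between messages.
non_solid = [lzma.compress(m) for m in emails]
third = lzma.decompress(non_solid[2])   # no need to touch the first two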

LZMA2 automagically does its dictionary thing, and the non-solid nature does it per file; or, if you limit solid block size, it does it per group of n files, or per group of files that fit in size x, or both. If you have a lot of duplication across files so far apart that they won't share a dictionary under LZMA2, you can get some improvement by first creating a master dictionary (across all files, ignore non-solid mode or solid block limits) for those duplicated chunks and then writing down all the pointer locations for them, then sending the rest of the data to LZMA2 to be compressed.

He said 7zip was too slow/CPU-intensive, and got worse compression with a solid archive (85%) than his custom solution (90%). AFAICT, going non-solid and backing off the compression setting would make it even worse, right?

And W/R/T this:

If you have a lot of duplication across files so far apart that they won't share a dictionary under LZMA2, you can get some improvement by first creating a master dictionary (across all files, ignore non-solid mode or solid block limits) for those duplicated chunks and then writing down all the pointer locations for them, then sending the rest of the data to LZMA2 to be compressed.

Which would more-or-less do what he's accomplishing, with two very big differences:

Your way, every file gets compressed with LZMA2 -- forcing you to back off compression all the time to keep peak CPU usage acceptable. His custom solution uses an LZMA2 pass over selected (largest) emails, and just skips over some when it's backlogged -- the overall compression level is thus adaptive to load.

Even more significantly, your approach completely ignores that Mailinator is a giant ring buffer -- as new mails come in, old ones are deleted. Since you generate a dictionary for all the mails in the directory, what happens if a burst of "v1agr4" spam was in the system when the dictionary was built, but then a new burst of "c1ali5" spam comes in and the "v1agr4" mails get dropped? Your dictionary is now mismatched to the new data, and a new one must be generated, forcing you to reprocess all the email that's still retained. His LRU cache handles this automatically.
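Something in the spirit of that LRU behaviour (a toy sketch, not the author's code, and it ignores the detail that a shared line must outlive every email that still points at it):

from collections import OrderedDict

class LRULineCache:
    # Toy LRU pool of shared lines: entries age out as new mail displaces old spam runs.
    def __init__(self, capacity=1_000_000):
        self.capacity = capacity
        self.lines = OrderedDict()

    def intern(self, line):
        if line in self.lines:
            self.lines.move_to_end(line)        # seen again recently: keep it hot
        else:
            self.lines[line] = line
            if len(self.lines) > self.capacity:
                self.lines.popitem(last=False)  # evict the stalest line
        return self.lines[line]

When the "v1agr4" run stops arriving, its lines simply fall out of the cache as "c1ali5" lines push them out; nothing has to be rebuilt.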

You can tune the performance however you want, and use whatever filters in whatever order you want.

If you would RTFA for 7Zip, you would realize that filters can have multiple output streams. You can have an "already compressed" stream that skips the LZMA2 compressor, you can have a "requires compression" stream that gets hit by LZMA2 afterward, you can have debug/control streams, whatever the fuck you want.

I simply gave a basic example of how to use 7zip with 2 encoding methods. PPMd is specifically for text.

I use mailinator all the time; it is fantastically useful. Sometimes I encounter a website that won't accept mailinator addresses; some even go to the effort of tracking the alternate domains he uses and blocking them too. I find mailinator so useful that when a website refuses mailinator addresses, I just won't use that website.

The Mailinator Man's blog is also pretty good; the guy is articulate and has a knack for talking about interesting architectural stuff. This latest entry is just another in a great series. If you like this sort of stuff and haven't read his previous entries, you should take the time to read through them.

If I compressed a million emails, and then some user wanted to read email #502,922 — I'd have to "seek" through the preceding half-million or so to build the dictionary in order to decompress it. That's probably not feasible.

What the summary does not say is that email number 502,922 is special-cased and stored in plain text at the head of the compression dictionary, so fetching email number 502,922 is trivial.

Mailinator can achieve high compression rates because most people use it for registration emails. Those mails differ from each other in only a few words, making the data set highly redundant, and easily compressible.

Mailinator can achieve high compression rates because most people use it for registration emails. Those mails differ from each other in only a few words, making the data set highly redundant, and easily compressible.

The accomplishment here is that he determined a very tactical set of strategies for solving a real world problem of large scale. No, it didn't take a math PhD with some deep understanding of Fourier analysis to invent this algorithm, but it most certainly took a software developer who was knowledgeable, creative, and passionate for his task. So yeah... it's not the 90% compression that's impressive, it's the real-time performance that's cool.

Mailinator can achieve high compression rates because most people use it for registration emails. Those mails differ from each other in only a few words, making the data set highly redundant, and easily compressible.

Paul Tyma, creator of Mailinator, writes about a greedy algorithm to analyze the huge amount of email Mailinator receives and finds ways to reduce its memory footprint by 90%. Quoting: 'I grabbed a few hundred megs of the Mailinator stream and ran it through several compressors. Mostly just stuff I had on hand 7z, bzip, gzip, etc. Venerable zip reduced the file by 63%. Not bad. Then I tried the LZMA/2 algorithm (7z) which got it down by 85%! Well. OK! Article is over!

I run a similar (though waaaay less popular) site - http://dudmail.com/ [dudmail.com]. My mail is stored on disk in a MySQL db, so I don't have quite the same memory constraints as this.

I had originally created this site naively, stashing the uncompressed source straight into the db. For the ~100,000 mails I'd typically retain, this would take up anywhere from 800 MB to slightly over a gig.

At a recent Rails camp, I was in need of a mini project, so I decided that some sort of compression was in order. Not being quite so clever I

But using BLOBs for storing email isn't usually a good idea in a db. It's easier to save each message as a separate file referenced by the record in the db.

Not really - each file uses up an inode and at least one filesystem block, so there's 4 KB gone per file. A better solution would be to store an arbitrary number of emails in each file, compressed and then concatenated, and just store the filename:offset:length of each one in the db. Each individual email is quickly recovered, and way fewer inodes are used.
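Something like this, for example (the pack-file layout and names are invented for illustration): append each compressed email to a large pack file and keep only the filename, offset, and length in the database row.

import zlib

def append_email(pack_path, raw: bytes):
    blob = zlib.compress(raw)
    with open(pack_path, "ab") as pack:
        pack.seek(0, 2)                # position explicitly at end of file
        offset = pack.tell()
        pack.write(blob)
    return pack_path, offset, len(blob)   # store filename:offset:length in the db

def read_email(pack_path, offset, length):
    with open(pack_path, "rb") as pack:
        pack.seek(offset)
        return zlib.decompress(pack.read(length))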

Yeah, this was going to be my original approach. (I had a previous project where I had stored images in a db, which showed the limitations of this approach.)

However, I ended up chucking them in the database for simplicity. I'm able to just move database dumps from production to dev and that's a complete snapshot of the application - no need to worry about also having to sync an emails directory. It also means I don't have to worry about error handling for when an email body is not found (if the db record is there but the file isn't).