We currently support gzip/deflate through libz or libslz (memory-less compression), but the most important issue for haproxy as a reverse proxy is performance. We don't want to spend too many CPU cycles and too much memory on compression (which is the whole point of libslz).

That doesn't mean brotli is not interesting for haproxy - it is. But the 10-15% file size improvement on small static files is not something that makes brotli very compelling at this point. HTTP/2 is certainly more important right now.

Anyone interested in contributing should coordinate on the mailing list first (and check the file CONTRIBUTING).

If you are worried about CPU speed, brotli is one of the fastest compression algorithms out there. Brotli quality 0 is about three times faster than gzip at its lowest quality setting, and compresses more, too.

Brotli typically gives 25% size savings on large static files, and 30% on Asian-language or other multi-byte-UTF-8-heavy HTML documents. At the lowest quality setting you can be both faster and compress quite a lot more - or choose a compromise where you are just a lot faster at the same density. You can even choose to be worse in density than gzip and just a lot faster. Brotli has much more dynamic range in this regard.
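Brotli itself is not in the Python standard library, but the speed-vs-density trade-off described above can be sketched with stdlib zlib's compression levels; the payload below is a hypothetical compressible document, and brotli's quality range (0-11) spans an even wider spread:

```python
import time
import zlib

# Hypothetical compressible payload; real-world HTML will behave differently.
data = b"<p>The quick brown fox jumps over the lazy dog.</p>\n" * 20000

for level in (1, 6, 9):  # fastest, default, densest
    start = time.perf_counter()
    compressed = zlib.compress(data, level)
    elapsed = time.perf_counter() - start
    # Higher levels spend more CPU time chasing a smaller output.
    print(f"level {level}: {len(compressed):>8} bytes in {elapsed * 1000:.1f} ms")
```

The same "pick your point on the curve" choice applies to brotli, just with a steeper curve at both ends.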

Typically a single core can compress 450+ MB/s with brotli (compared to roughly 150 MB/s with zlib), fully saturating a 1 Gbps NIC with compressed bytes. A well-designed proxy/webserver tends to be I/O-bound, so you can actually increase throughput by compressing more.

Since we are not interested in the slowest option for on-the-fly compression anyway (whether it's gzip or brotli), the 25-30% size reduction is not really relevant to this use case, and the article you are referring to talks about a 10-20% size reduction while maintaining compression and decompression times:

Brotli looks very effective for static compression (on the backend webserver), but for on-the-fly compression the leap doesn't appear that big to me. Most CDNs do not support it (they presumably would if it were such a game-changer), it's only usable over HTTPS (because of middleboxes), and it appears to perform worse than gzip for smaller files (< 32 KB).

Brotli support in haproxy would be great, IF we get all the necessary knobs to configure it appropriately (like requiring a minimum content length for brotli to kick in) and proper coexistence with gzip.
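For comparison, haproxy's existing gzip knobs look like the snippet below; the commented-out brotli lines are purely hypothetical, since no such keywords exist in haproxy today (a minimum-size knob would also be new):

```
frontend www
    bind :80
    # existing, real directives:
    compression algo gzip
    compression type text/html text/css application/javascript
    # hypothetical directives if brotli support were added:
    # compression algo br gzip      # prefer brotli, fall back to gzip
    # compression min-size 32k      # skip small responses
    default_backend servers
```

Any real implementation would need to negotiate via Accept-Encoding so gzip-only clients keep working.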

However, I disagree that it is a game-changer, especially for on-the-fly compression (as opposed to static compression on the webserver).

Be that as it may, someone has to actually write the code for it, and I'm not aware of anyone planning to do so. If someone wants to take this on, I suggest reading CONTRIBUTING and coordinating the implementation on the mailing list.

Something is off again. Brotli should be increasingly beneficial for the smallest files. The chart at https://blogs.dropbox.com/tech/2017/04/deploying-brotli-for-static-content/ shows the behavior I see in my own experiments: roughly the same relative performance below 32 kB as above it. Both the smallest and the largest files compress more densely than with gzip.

If you have a corpus of files where brotli compresses worse than gzip, I'd love to get my hands on it. It might be easy to fix the encoder if you give me such files. Even if you have just one file where this happens, I'd be interested in getting the link.

No compression algorithm is a "game-changer"; it is just simple bytes-in, bytes-out. There are only differences in the cost to the user, in the amount of data transferred and in user-experienced latency. At a global level, something like brotli could translate into savings of ten billion annually - if everyone used it instead of gzip.

Compression is one of the most important tools CloudFlare has to accelerate website performance. Compressed content takes less time to transfer, and consequently reduces load times. On expensive mobile data plans, compression even saves money for...

Like I said, brotli support in haproxy would be great. But someone has to write the code for it.

You can safely ignore that blog post. They developed a new technique to compare compression algorithms - a method that no one else has used, and for good reasons. I couldn't reproduce anything like it with proper methods. As one example of bad methodology, their main result-interpretation equation, ((size A) - (size B)) / ((time B) - (time A)), is both unstable and commutative. It is mathematically impossible to use a commutative formula to express differences between A and B, since the result is the same for op(A, B) and op(B, A).
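The symmetry is easy to verify: swapping A and B negates both numerator and denominator, so the signs cancel and the value is unchanged. A minimal sketch with made-up measurements (the function name and numbers are hypothetical, the formula is the one quoted above):

```python
def blog_metric(size_a, time_a, size_b, time_b):
    """The criticized interpretation formula: (size A - size B) / (time B - time A)."""
    return (size_a - size_b) / (time_b - time_a)

# Hypothetical measurements: (compressed size in bytes, time in seconds).
a = (1000, 0.10)  # e.g. a faster, less dense codec
b = (800, 0.25)   # e.g. a denser, slower codec

# op(A, B) == op(B, A): the metric cannot tell which codec is which.
print(blog_metric(*a, *b), blog_metric(*b, *a))
```

Since both orderings produce the same number, the metric carries no information about which of the two algorithms is better.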