
Results of experimenting with Brotli for dynamic web content

10/23/2015

17 min read

Compression is one of the most important tools CloudFlare has to accelerate website performance. Compressed content takes less time to transfer and consequently reduces load times. On expensive mobile data plans, compression even saves money for consumers. However, compression is not free: it is one of the most compute-expensive operations our servers perform, and the better the compression ratio we want, the more effort we have to spend.

The most popular compression format on the web is gzip. We put a great deal of effort into improving the performance of gzip compression, so we can compress on the fly with fewer CPU cycles. Recently a potential replacement for gzip, called Brotli, was announced by Google. As early adopters of many technologies, we at CloudFlare wanted to see for ourselves if it is as good as claimed.

This post takes a look at a bit of history behind gzip and Brotli, followed by a performance comparison.

Compression 101

Many popular lossless compression algorithms rely on LZ77 and Huffman coding, so it’s important to have a basic understanding of these two techniques before getting into gzip or Brotli.

LZ77

LZ77 is a simple technique developed by Abraham Lempel and Jacob Ziv in 1977 (hence the name). Let's call the input to the algorithm a string (a sequence of bytes, not necessarily letters) and each consecutive sequence of bytes in the input a substring. LZ77 compresses the input string by replacing some of its substrings with pointers (or backreferences) to an identical substring previously encountered in the input.

The pointer usually has the form <length, distance>, where length indicates the number of identical bytes found and distance indicates how many bytes separate the current occurrence of the substring from the previous one. For example, the string abcdeabcdf can be compressed with LZ77 to abcde<4,5>f, and aaaaaaaaaa can be compressed to simply a<9,1>. When the decompressor encounters a backreference, it simply copies the required number of bytes from the already decompressed output, which makes decompression very fast. Here is a nice illustration of LZ77 from my previous blog post, Improving compression with a preset DEFLATE dictionary:

Little bunny Foo Foo
Went hopping through the forest
Scooping up the field mice
And bopping them on the head
Down came the Good Fairy, and she said
"Little bunny Foo Foo
I don't want to see you
Scooping up the field mice
And bopping them on the head."

Output (backreferences are shown as <length, distance> pairs; in the original illustration the length tokens are blue and the distance tokens are red):

Little bunny Foo<5,4>Went hopping through<3,8>e forest
Scoo<5,28>up<6,23>ield mice And b<9,58>em on<5,35>head
Down came the Good Fairy, a<3,55>s<3,20>said "L<20,149>
I don't wa<3,157>to see you<56,141>."

The deflate algorithm managed to reduce the original text from 251 characters to just 152 tokens! Those tokens are later compressed further by Huffman coding.
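
To make the copy semantics concrete, here is a minimal Python sketch of an LZ77 decompressor (the token representation is ours, not an actual deflate bitstream):

```python
def lz77_decompress(tokens):
    """tokens: a list of literal byte values and (length, distance) pairs."""
    out = bytearray()
    for token in tokens:
        if isinstance(token, tuple):
            length, distance = token
            # Copy byte by byte so that overlapping matches work,
            # e.g. a<9,1> expands to "aaaaaaaaaa".
            for _ in range(length):
                out.append(out[-distance])
        else:
            out.append(token)
    return bytes(out)

assert lz77_decompress(list(b"abcde") + [(4, 5), ord("f")]) == b"abcdeabcdf"
assert lz77_decompress([ord("a"), (9, 1)]) == b"aaaaaaaaaa"
```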

Huffman Coding

Huffman coding is another lossless compression technique. Developed by David Huffman back in the 1950s, it is used in many compression formats, including JPEG. A Huffman code is a type of prefix code where, given an alphabet and an input, frequently occurring characters are replaced by shorter bit sequences and rarely occurring characters are replaced with longer ones.

The code can be expressed as a binary tree, where the leaf nodes are the literals of the alphabet and the two edges leaving each internal node are marked with 0 and 1. To decode the next character, the decompressor walks the tree from the root, following the edge that matches each input bit, until it reaches a leaf.

Each compression format uses Huffman coding differently, and for our little example we will create a Huffman code for an alphabet that includes only the literals and the length codes we actually used. To start, we must count the frequency of each letter in the LZ77 compressed text:

(space) - 19, o -14, e - 11, n - 8, t - 7, a - 6, d - 6, i - 6, 3 - 4, 5 - 4, h - 4, s - 4, u - 4, c - 3, m - 3, p - 3, r - 3, y - 3, (") - 2, F - 2, L - 2, b - 2, g - 2, l - 2, w - 2, (') - 1, (,) - 1, (.) - 1, A - 1, D - 1, G - 1, I - 1, 9 - 1, 20 - 1, S - 1, 56 - 1, W - 1, 6 - 1, f - 1.

We can then use the algorithm from Wikipedia to build this Huffman code (binary):

(space) - 101, o - 000, e - 1101, n - 0101, t - 0011, a - 11101, d - 11110, i - 11100, 3 - 01100, 5 - 01111, h - 10001, s - 11000, u - 10010, c - 00100, m - 111111, p - 110011, r - 110010, y - 111110, (") - 011010, F - 010000, L - 100000, b - 011101, g - 010011, l - 011100, w - 011011, (') - 0100101, (,) - 1001110, (.) - 1001100, A - 0100011, D - 0100010, G - 1000011, I - 0010110, 9 - 1000010, 20 - 0010111, S - 0010100, 56 - 0100100, W - 0010101, 6 - 1001101, f - 1001111.

Here the most frequent letters, space and 'o', got the shortest codes, only 3 bits long, whereas letters that occur only once got 7-bit codes. If we were to represent the full alphabet of 256 bytes plus some length tokens with fixed-width codes, we would need 9 bits for every letter.
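
For the curious, here is a minimal Python sketch of the standard heap-based Huffman construction (tie-breaking differs between implementations, so the exact codes it produces may not match the table above):

```python
import heapq
from collections import Counter

def huffman_code(symbols):
    """Build a Huffman code as {symbol: bitstring} from a list of symbols."""
    # The heap holds (weight, tiebreaker, tree); a tree is either a symbol
    # or a (left, right) pair of subtrees.
    heap = [(w, i, sym) for i, (sym, w) in enumerate(Counter(symbols).items())]
    heapq.heapify(heap)
    next_id = len(heap)
    while len(heap) > 1:
        w1, _, left = heapq.heappop(heap)    # pop the two lightest subtrees
        w2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (w1 + w2, next_id, (left, right)))  # merge them
        next_id += 1
    code = {}
    def walk(tree, prefix):
        if isinstance(tree, tuple):
            walk(tree[0], prefix + "0")
            walk(tree[1], prefix + "1")
        else:
            code[tree] = prefix or "0"  # degenerate single-symbol alphabet
    walk(heap[0][2], "")
    return code

print(huffman_code("this is an example of a huffman tree"))
```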

Now we apply the code to the LZ77 output (leaving the distance tokens untouched) and get:

100000 11100 0011 0011 011100 1101 101 011101 10010 0101 0101 111110 101 010000 000 000 01111 4 0010101 1101 0101 0011 101 10001 000 110011 110011 11100 0101 010011 101 0011 10001 110010 000 10010 010011 01100 8 10001 1101 101 1001111 000 110010 1101 11000 0011 101 0010100 00100 000 000 01111 28 10010 110011 1001101 23 11100 1101 011100 11110 101 111111 11100 00100 1101 101 0100011 0101 11110 101 011101 1000010 58 1101 111111 101 000 0101 01111 35 10001 1101 11101 11110 101 0100010 000 011011 0101 101 00100 11101 111111 1101 01111 19 1000011 000 000 11110 101 010000 11101 11100 110010 111110 1001110 101 11101 01100 55 11000 01100 20 11000 11101 11100 11110 101 011010 100000 0010111 149 0010110 101 11110 000 0101 0100101 0011 101 011011 11101 01100 157 0011 000 101 11000 1101 1101 101 111110 000 10010 0100100 141 1001100 011010

Assuming all the distance tokens require one byte to store, the total output length is 898 bits, compared to the 1356 bits we would need to store the LZ77 output, and 2008 bits for the original input. That brings the text down to roughly 45% of its original size (898/2008 bits), which is very good. In reality the result would be somewhat worse, since we must also encode the Huffman tree we used; otherwise it would be impossible to decompress the text.

gzip

The compression algorithm used in gzip is called "deflate," and it’s a combination of the LZ77 and Huffman algorithms discussed above.

On the LZ77 side, gzip's length tokens have a minimum of 3 bytes and a maximum of 258 bytes, and its distance tokens have a maximum of 32768 bytes. The maximum distance also defines the sliding window size the implementation uses for compression and decompression.

For Huffman coding, deflate has two alphabets. The first alphabet includes the input literals ("letters" 0-255), the end-of-block symbol (the "letter" 256) and the length tokens ("letters" 257-285). There are only 29 letters to encode all the possible lengths, so the letters 265-284 are always followed by additional bits in the stream to cover the entire range from 3 to 258. For example, the letter 257 indicates the minimal length of 3, whereas the letter 265 indicates the length 11 if followed by the bit 0, and the length 12 if followed by the bit 1.
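
To make this scheme concrete, here is an illustrative Python helper that maps a match length to its deflate letter and extra bits (the tables follow RFC 1951; the function name is ours):

```python
# Base length and extra-bit count for letters 257-284, per RFC 1951.
# Letter 285 encodes the maximum length 258 with no extra bits.
LENGTH_BASE = [3, 4, 5, 6, 7, 8, 9, 10, 11, 13, 15, 17, 19, 23, 27, 31,
               35, 43, 51, 59, 67, 83, 99, 115, 131, 163, 195, 227]
LENGTH_EXTRA = [0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 2, 2, 2, 2,
                3, 3, 3, 3, 4, 4, 4, 4, 5, 5, 5, 5]

def length_to_code(length):
    """Map a match length (3-258) to (letter, extra_bit_count, extra_value)."""
    if length == 258:
        return 285, 0, 0
    for i in reversed(range(len(LENGTH_BASE))):
        if length >= LENGTH_BASE[i]:
            return 257 + i, LENGTH_EXTRA[i], length - LENGTH_BASE[i]
    raise ValueError("deflate match lengths start at 3")

assert length_to_code(3) == (257, 0, 0)
assert length_to_code(11) == (265, 1, 0)   # letter 265 + extra bit 0
assert length_to_code(12) == (265, 1, 1)   # letter 265 + extra bit 1
```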

The second alphabet is for the distance tokens only. Its letters are the codewords 0 through 29. Similar to the length tokens, the codes 4-29 are followed by 1 to 13 additional bits to cover the whole range from 1 to 32768.

Using two distinct alphabets makes a lot of sense, because it allows us to represent the distance code with fewer bits while avoiding any ambiguity, since the distance codes always follow the length codes.

The most common implementation of gzip compression is the zlib library. At CloudFlare, we use a custom version of zlib optimized for our servers.

zlib has 9 preset quality settings for the deflate algorithm, labeled from 1 to 9. It can also be run with quality set to 0, in which case it does not perform any compression. These quality settings can be divided into two categories:

  • Fast compression (levels 1-3): When a sufficiently long backreference is found, it is emitted immediately, and the search moves to the position at the end of the matched string. If the match is longer than a few bytes, the entire matched string will not be hashed, meaning its substrings will never be referenced in the future. Clearly, this reduces the compression ratio.

  • Slow compression (levels 4-9): Here, every substring is hashed, so any substring can be referenced in the future. In addition, slow compression enables "lazy" matching: if a sufficiently long match is found at a given position, it is not emitted immediately. Instead, the algorithm attempts to find a match at the next position; if that match is longer, the algorithm emits a single literal at the current position instead of the shorter match, and continues to the next position. As a rule, this results in a better compression ratio than levels 1-3.

Other differences between the quality levels include how far to search for a backreference and how long a match must be before the search stops.

A very decent compression ratio can be observed at levels 3-4, which are also quite fast. Increasing the quality from level 4 and up gives incrementally smaller compression gains while requiring substantially more time. Often, levels 8 and 9 will produce similarly compressed output, but level 9 will take more time to do it.
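
You can observe this tradeoff yourself with a few lines of Python (the input file name is hypothetical; any sizable text asset will do):

```python
import time
import zlib

data = open("page.html", "rb").read()
for level in range(1, 10):
    start = time.perf_counter()
    size = len(zlib.compress(data, level))
    elapsed = time.perf_counter() - start
    print(f"level {level}: {size} bytes in {elapsed * 1000:.2f} ms")
```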

It is important to understand that the zlib implementation does not guarantee the best compression possible with the format, even when the quality setting is set to 9. Instead, it uses a set of heuristics and optimizations that allow for very good compression at reasonable speed.

Better gzip compression can be achieved by the zopfli library, but it is significantly slower than zlib.

Brotli

The Brotli format was developed by Google and has been refined for a while now. Here at CloudFlare, we built an nginx module that performs dynamic Brotli compression, and we deployed it on our test server that supports HTTP/2 and other new features.

To check Brotli compression on the test server, you can use a nightly build of the Firefox browser, which also supports this format.

Brotli, deflate, and gzip

Brotli and deflate are very closely related: Brotli also uses the LZ77 and Huffman algorithms for compression, and both use a sliding window for backreferences. Gzip uses a fixed 32KB window, while Brotli can use any window size from 1KB to 16MB, in powers of 2 (minus 16 bytes). This means the Brotli window can be up to 512 times larger than the deflate window. The difference is almost irrelevant in a web-server context, though, as text files larger than 32KB are a minority.
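
A quick sanity check of those numbers (the (1 << bits) - 16 formula comes from the Brotli format specification):

```python
# Brotli sliding-window sizes for 10 to 24 window bits,
# versus deflate's fixed 32KB window:
for bits in range(10, 25):
    print(f"{bits} bits -> {(1 << bits) - 16:>8} bytes")
```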

Other differences include a smaller minimal match length (2 bytes in Brotli, compared to 3 bytes in deflate) and a larger maximal match length (16779333 bytes in Brotli, compared to 258 bytes in deflate).

Static dictionary

Brotli also features a static dictionary. The preset "dictionary" supported by deflate can greatly improve compression, but it has to be supplied to both sides independently and can only be addressed as part of the sliding window. The Brotli dictionary is part of the implementation and can be referenced from anywhere in the stream, somewhat increasing its efficiency for larger files. Moreover, different transformations can be applied to the dictionary's words, effectively increasing its size.
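
For contrast, here is what deflate's preset dictionary looks like in practice: both sides must agree on it out of band, whereas Brotli ships its dictionary inside the implementation (a minimal Python sketch; the dictionary bytes are made up):

```python
import zlib

zdict = b"field mice bopping them on the head"  # shared out of band

comp = zlib.compressobj(level=9, zdict=zdict)
payload = comp.compress(b"And bopping them on the head.") + comp.flush()

# The decompressor must be given the exact same dictionary.
decomp = zlib.decompressobj(zdict=zdict)
assert decomp.decompress(payload) == b"And bopping them on the head."
```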

Context modeling

Brotli also supports something called context modeling. Context modeling is a feature that allows multiple Huffman trees for the same alphabet in the same block. For example, in deflate each block consists of a series of literals (bytes that could not be compressed by backreferencing) and <length, distance> pairs that define a backreference for copying. Literals and lengths form a single alphabet, while the distances are a different alphabet.

In Brotli, each block is composed of "commands". A command consists of three parts. The first part is a word <insert, copy>: "insert" defines the number of literals that follow the word (it may be 0), and "copy" defines the number of bytes to copy from a backreference. The word <insert, copy> is followed by a sequence of "insert" literals: <lit> … <lit>. Finally, the command ends with a <distance>, which defines the backreference from which to copy the previously defined number of bytes. Unlike the distance in deflate, a Brotli distance can have additional meanings, such as a reference into the static dictionary or a reference to a recently used distance.
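
Schematically (the field names are ours; this models the structure of a command, not Brotli's actual bit layout):

```python
from dataclasses import dataclass

@dataclass
class Command:
    insert: int      # how many literal bytes follow (may be 0)
    copy: int        # how many bytes to copy from the backreference
    literals: bytes  # exactly `insert` literal bytes
    distance: int    # backreference; can also select a static-dictionary
                     # word or reuse a recently used distance
```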

There are three alphabets here. One is for the <lit>s, and it simply covers all possible byte values from 0 to 255. Another is for the <distance>s; its size depends on the size of the sliding window and other parameters. The third is for the <insert, copy> length pairs, and it has 704 letters. Note that <insert, copy> is a single letter in that alphabet, as opposed to the <length, distance> pairs in deflate, where length and distance are letters in two distinct alphabets.

So why do we care about context modeling? It means that for each of these alphabets, up to 256 different Huffman trees can be used in the same block. The switch between trees is determined by "context". This can be useful when the compressed file consists of different types of characters, for example binary data interleaved with UTF-8 strings, or a multilingual dictionary.
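
The flavor of the idea in a few lines of Python (the three-way context function is illustrative, not Brotli's actual one):

```python
from collections import Counter

def per_context_histograms(data: bytes):
    """Collect one symbol histogram per context; a context-modeling coder
    would then build a separate Huffman tree from each histogram."""
    def context_id(prev):
        if 0x30 <= prev <= 0x39:
            return 0  # after an ASCII digit
        if prev < 0x20:
            return 1  # after a control byte (binary-ish data)
        return 2      # after anything else

    hists = [Counter(), Counter(), Counter()]
    prev = 0
    for byte in data:
        hists[context_id(prev)][byte] += 1
        prev = byte
    return hists
```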

While the basic idea behind Brotli remains identical to that of deflate, the way the data is encoded is very different. These improvements allow for significantly better compression, but they also require a significant amount of processing. To alleviate the performance cost somewhat, Brotli drops the CRC error-detection check present in gzip.

Benchmarking

The heaviest operation in both deflate and Brotli is the search for backreferences. A higher level in zlib generally means the search will spend longer trying to find a better match, not necessarily succeeding, which leads to a significant increase in processing time. In contrast, Brotli trades longer searches for lighter operations that give a better return, such as context modeling, dictionary references, and more efficient data encoding in general. For that reason Brotli is, in theory, capable of outperforming zlib at some point for similar compression ratios, as well as giving better compression at its maximal levels.

We decided to put those theories to the test. At CloudFlare, one of the primary use cases for Brotli/gzip is on-the-fly compression of textual web assets like HTML, CSS, and JavaScript, so that’s what we’ll be testing.

There is a tradeoff between compression speed and transfer speed: spending more time on compression is only beneficial if the time saved by transferring fewer bytes exceeds the extra time spent compressing them. Otherwise, slower compression will actually slow the connection down. CloudFlare currently uses our own zlib implementation with the quality set to 8, and that is our benchmarking baseline.

The benchmark set consists of 10,655 HTML, CSS, and JavaScript files. The benchmarks were performed on an Intel E3-1241 v3 CPU running at 3.5GHz.

The files were grouped into several size groups, since different websites have different size characteristics, and we must optimize for them all.

Although the quality setting in the Brotli implementation states that the possible values are 0 to 11, we didn't see any difference between 0 and 1, or between 10 and 11, so only quality settings 1 to 10 are reported.
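
If you want to reproduce a single data point, the Python bindings make it easy (this assumes the third-party brotli package; the file name is hypothetical):

```python
import zlib
import brotli  # third-party bindings: pip install brotli

data = open("page.html", "rb").read()
print("zlib 8:  ", len(zlib.compress(data, 8)), "bytes")
print("brotli 4:", len(brotli.compress(data, quality=4)), "bytes")
```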

Compression quality

Compression quality is measured as (total size of all files after compression)/(total size of all files before compression)*100%. The columns represent the size distribution of the HTML, CSS, and JavaScript files used in the test in bytes (the file size ranges are shown using the [x,y) notation).

| Quality | [20, 1024) | [1024, 2048) | [2048, 3072) | [3072, 4096) | [4096, 8192) | [8192, 16384) | [16384, 32768) | [32768, 65536) | [65536, +∞) | All files |
|---|---|---|---|---|---|---|---|---|---|---|
| zlib 1 | 65.0% | 46.8% | 42.8% | 38.7% | 34.4% | 32.0% | 29.9% | 31.1% | 31.4% | 31.5% |
| zlib 2 | 64.9% | 46.5% | 42.5% | 38.3% | 33.8% | 31.4% | 29.2% | 30.3% | 30.5% | 30.6% |
| zlib 3 | 64.9% | 46.4% | 42.3% | 38.0% | 33.6% | 31.1% | 28.8% | 29.8% | 29.9% | 30.1% |
| zlib 4 | 64.5% | 45.8% | 41.6% | 37.1% | 32.6% | 30.0% | 27.8% | 28.5% | 28.7% | 28.9% |
| zlib 5 | 64.5% | 45.5% | 41.3% | 36.7% | 32.1% | 29.4% | 27.1% | 27.8% | 27.8% | 28.0% |
| zlib 6 | 64.4% | 45.5% | 41.3% | 36.7% | 32.0% | 29.3% | 27.0% | 27.6% | 27.6% | 27.8% |
| zlib 7 | 64.4% | 45.5% | 41.3% | 36.6% | 32.0% | 29.2% | 26.9% | 27.5% | 27.5% | 27.7% |
| zlib 8 | 64.4% | 45.5% | 41.3% | 36.6% | 32.0% | 29.2% | 26.9% | 27.5% | 27.4% | 27.7% |
| zlib 9 | 64.4% | 45.5% | 41.3% | 36.6% | 32.0% | 29.2% | 26.9% | 27.5% | 27.4% | 27.7% |
| brotli 1 | 61.3% | 46.6% | 42.7% | 38.4% | 33.8% | 30.8% | 28.6% | 29.5% | 28.6% | 29.0% |
| brotli 2 | 61.9% | 46.9% | 42.8% | 38.3% | 33.6% | 30.6% | 28.3% | 29.1% | 28.3% | 28.6% |
| brotli 3 | 61.8% | 46.8% | 42.7% | 38.2% | 33.3% | 30.4% | 28.1% | 28.9% | 28.0% | 28.4% |
| brotli 4 | 53.8% | 40.8% | 38.6% | 34.8% | 30.9% | 28.7% | 27.0% | 28.0% | 27.5% | 27.7% |
| brotli 5 | 49.9% | 37.7% | 35.7% | 32.3% | 28.7% | 26.6% | 25.2% | 26.2% | 26.0% | 26.1% |
| brotli 6 | 50.0% | 37.7% | 35.7% | 32.3% | 28.6% | 26.5% | 25.1% | 26.0% | 25.7% | 25.9% |
| brotli 7 | 50.0% | 37.6% | 35.6% | 32.3% | 28.5% | 26.4% | 25.0% | 25.9% | 25.5% | 25.7% |
| brotli 8 | 50.0% | 37.6% | 35.6% | 32.3% | 28.5% | 26.4% | 25.0% | 25.9% | 25.4% | 25.6% |
| brotli 9 | 50.0% | 37.6% | 35.5% | 32.2% | 28.5% | 26.4% | 25.0% | 25.8% | 25.3% | 25.5% |
| brotli 10 | 46.8% | 33.4% | 32.5% | 29.4% | 26.0% | 23.9% | 22.9% | 23.8% | 23.0% | 23.3% |

Clearly, the compression possible with Brotli is significant. On average, Brotli at the maximal quality setting produces results 1.19X smaller than zlib at its maximal quality. For files smaller than 1KB the result is 1.38X smaller on average, a very impressive improvement that can probably be attributed to the static dictionary.

Compression speed

Compression speed is measured as (total size of files before compression)/(total time to compress all files) and is reported in MB/s.

| Quality | [20, 1024) | [1024, 2048) | [2048, 3072) | [3072, 4096) | [4096, 8192) | [8192, 16384) | [16384, 32768) | [32768, 65536) | [65536, +∞) | All files |
|---|---|---|---|---|---|---|---|---|---|---|
| zlib 1 | 5.9 | 21.8 | 34.4 | 43.4 | 62.1 | 89.8 | 117.9 | 127.9 | 139.6 | 125.5 |
| zlib 2 | 5.9 | 21.7 | 34.3 | 43.0 | 61.2 | 87.6 | 114.3 | 123.1 | 130.7 | 118.9 |
| zlib 3 | 5.9 | 21.7 | 34.0 | 42.4 | 60.5 | 84.8 | 108.0 | 114.5 | 114.9 | 106.9 |
| zlib 4 | 5.8 | 20.9 | 32.2 | 39.8 | 54.8 | 74.9 | 93.3 | 97.5 | 96.1 | 90.7 |
| zlib 5 | 5.8 | 20.6 | 31.4 | 38.3 | 51.6 | 68.4 | 82.0 | 81.3 | 73.2 | 71.6 |
| zlib 6 | 5.8 | 20.6 | 31.2 | 37.9 | 50.6 | 64.0 | 73.7 | 70.2 | 57.5 | 58.0 |
| zlib 7 | 5.8 | 20.5 | 31.0 | 37.4 | 49.6 | 60.8 | 67.4 | 64.6 | 51.0 | 52.0 |
| zlib 8 | 5.8 | 20.5 | 31.0 | 37.2 | 48.8 | 53.2 | 56.6 | 56.5 | 41.6 | 43.1 |
| zlib 9 | 5.8 | 20.6 | 30.8 | 37.3 | 48.6 | 51.7 | 56.6 | 54.2 | 40.4 | 41.9 |
| brotli 1 | 3.4 | 12.8 | 20.4 | 25.9 | 37.8 | 57.3 | 80.0 | 94.1 | 105.8 | 91.3 |
| brotli 2 | 3.4 | 12.4 | 19.5 | 24.4 | 35.2 | 52.3 | 71.2 | 82.0 | 89.0 | 78.8 |
| brotli 3 | 3.4 | 12.3 | 19.0 | 23.7 | 34.0 | 49.8 | 67.4 | 76.3 | 81.5 | 73.0 |
| brotli 4 | 2.0 | 7.6 | 11.9 | 15.2 | 22.2 | 33.1 | 44.7 | 51.9 | 58.5 | 51.0 |
| brotli 5 | 2.0 | 5.2 | 8.0 | 10.3 | 15.0 | 22.0 | 29.7 | 33.3 | 32.8 | 30.3 |
| brotli 6 | 1.8 | 3.8 | 5.5 | 7.0 | 10.5 | 16.3 | 23.5 | 28.6 | 28.4 | 25.6 |
| brotli 7 | 1.5 | 2.3 | 3.1 | 3.7 | 4.9 | 7.2 | 10.7 | 15.5 | 19.6 | 16.2 |
| brotli 8 | 1.4 | 2.3 | 2.7 | 3.1 | 4.0 | 5.3 | 7.1 | 10.6 | 15.1 | 12.2 |
| brotli 9 | 1.3 | 2.1 | 2.4 | 2.8 | 3.4 | 4.3 | 5.5 | 7.0 | 10.6 | 8.8 |
| brotli 10 | 0.2 | 0.4 | 0.4 | 0.5 | 0.5 | 0.6 | 0.6 | 0.6 | 0.5 | 0.5 |

On average over all files, Brotli at quality level 4 is slightly faster than zlib at quality levels 8 and 9 while achieving a comparable compression ratio. However, that average is misleading: most files are smaller than 64KB, and if we look only at those files, Brotli 4 is actually 1.48X slower than zlib level 8!

Connection speedup

For on-the-fly compression, the most important question is how much time to invest in compression to make the data transfer faster. Because increasing the compression quality gives only an incremental improvement over the previous level, the extra bytes saved have to be worth the extra time spent saving them.

Again, CloudFlare uses compression quality 8 with zlib, so that's our baseline. For each quality setting of Brotli starting at 4 (which is roughly comparable to zlib 8 in both time and compression ratio), we compute the added compression speed as: ((total size for zlib 8) - (total size for Brotli)) / ((total time for Brotli) - (total time for zlib 8)).
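
In code, the metric looks like this (a hypothetical helper; it answers "how many extra megabytes of transfer do we save per extra second of CPU time?"):

```python
def added_compression_speed(zlib8_bytes, brotli_bytes, zlib8_secs, brotli_secs):
    """MB of transfer saved per extra second spent compressing with Brotli."""
    bytes_saved = zlib8_bytes - brotli_bytes  # negative if Brotli did worse
    extra_time = brotli_secs - zlib8_secs     # Brotli's additional CPU cost
    return (bytes_saved / 1e6) / extra_time   # MB/s
```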

The results are reported in MB/s. Negative numbers indicate that Brotli's compression ratio was worse than zlib 8's for that size group.

| Quality | [20, 1024) | [1024, 2048) | [2048, 3072) | [3072, 4096) | [4096, 8192) | [8192, 16384) | [16384, 32768) | [32768, 65536) | [65536, +∞) | All files |
|---|---|---|---|---|---|---|---|---|---|---|
| brotli 4 | 0.33 | 0.56 | 0.52 | 0.47 | 0.42 | 0.44 | -0.24 | -2.86 | 0.02 | 0.00 |
| brotli 5 | 0.44 | 0.55 | 0.60 | 0.61 | 0.71 | 0.97 | 1.01 | 1.08 | 2.24 | 1.58 |
| brotli 6 | 0.36 | 0.36 | 0.37 | 0.37 | 0.45 | 0.63 | 0.69 | 0.86 | 1.52 | 1.12 |
| brotli 7 | 0.28 | 0.20 | 0.19 | 0.18 | 0.19 | 0.23 | 0.24 | 0.34 | 0.72 | 0.52 |
| brotli 8 | 0.26 | 0.20 | 0.17 | 0.15 | 0.15 | 0.17 | 0.15 | 0.21 | 0.48 | 0.35 |
| brotli 9 | 0.25 | 0.19 | 0.15 | 0.13 | 0.13 | 0.13 | 0.11 | 0.13 | 0.30 | 0.24 |
| brotli 10 | 0.03 | 0.04 | 0.04 | 0.04 | 0.03 | 0.03 | 0.02 | 0.02 | 0.02 | 0.02 |

Those numbers are quite low due to Brotli's slow speed. To get any speedup, we really want these numbers to be greater than the connection speed. It seems that for files larger than 64KB, Brotli at quality setting 5 can speed up slow connections.

Keep in mind that on a real server, compression is only one of many tasks sharing the CPU, so compression speeds would be slower there.

Conclusions

The current state of Brotli leaves us with mixed impressions. There is no yes/no answer to the question "Is Brotli better than gzip?". It definitely looks like a big win for static content compression, but on the web, where content is dynamic, we also need to consider on-the-fly compression.

The way I see it, Brotli already has an advantage over zlib for large files (larger than 64KB) on slow connections. However, those constitute only 20% of our sampled dataset (and 80% of the total size).

Our Brotli module has a minimum size setting that allows us to use gzip for smaller files and Brotli only for larger ones.

It is important to remember that zlib has the advantage of having been an optimization target for the entire web community for years, while Brotli is the development effort of a small but capable and talented team. There is no doubt that the current implementation will only improve with time.
