If you have experienced HTTP/2 for yourself, you are probably aware of the visible performance gains possible with HTTP/2 due to features like stream multiplexing, explicit stream dependencies, and Server Push.
There is, however, one important feature that is not obvious to the eye: HPACK header compression. The current implementation of nginx, as well as the edge networks and CDNs using it, does not support the full HPACK specification. We have implemented full HPACK in nginx, and upstreamed the part that performs Huffman encoding.
CC BY 2.0 image by Conor Lawless
This blog post gives an overview of the reasons for the development of HPACK, and the hidden bandwidth and latency benefits it brings.
Some Background
As you probably know, a regular HTTPS connection is in fact an overlay of several connections in the multi-layer model. The most basic connection you usually care about is the TCP connection (the transport layer), on top of that you have the TLS connection (mix of transport/application layers), and finally the HTTP connection (application layer).
In the days of yore, HTTP compression was performed in the TLS layer, using gzip. Both headers and body were compressed indiscriminately, because the lower TLS layer was unaware of the type of data being transferred. In practice this meant both were compressed with the DEFLATE algorithm.
Then came SPDY with a new, dedicated, header compression algorithm. Although specifically designed for headers, including the use of a preset dictionary, it was still using DEFLATE, including dynamic Huffman codes and string matching.
Unfortunately, both were found to be vulnerable to the CRIME attack, which can extract secret authentication cookies from compressed headers. Because DEFLATE uses backward string matches and dynamic Huffman codes, an attacker who controls part of the request headers can gradually recover a full cookie by modifying parts of the request and observing how its compressed size changes.
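To make the length oracle concrete, here is a toy Python demonstration, with a hypothetical cookie value and zlib's DEFLATE standing in for the TLS/SPDY compression; this is an illustration of the principle, not an actual exploit:

```python
import zlib

# Toy illustration of the CRIME length oracle (hypothetical cookie).
SECRET_HEADERS = b"cookie: session=s3cr3t\r\n"

def request_size(attacker_text: bytes) -> int:
    # The secret header and attacker-controlled text end up in the same
    # compressed stream, so DEFLATE back-references can span both.
    return len(zlib.compress(SECRET_HEADERS + attacker_text))

# The guess with the correct next character extends the back-reference,
# so the compressed request comes out smaller, leaking one character.
print(request_size(b"cookie: session=s"))  # correct guess: smaller
print(request_size(b"cookie: session=x"))  # wrong guess: larger
```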
Most edge networks, including Cloudflare, disabled header compression because of CRIME. That was the case until HTTP/2 came along.
HPACK
HTTP/2 supports a new dedicated header compression algorithm, called HPACK. HPACK was developed with attacks like CRIME in mind, and is therefore considered safe to use.
HPACK is resilient to CRIME because it does not use DEFLATE-style partial backward string matches and dynamic Huffman codes. Instead, it uses the following three methods of compression, illustrated in the sketch after this list:
**Static Dictionary:** A predefined dictionary of 61 commonly used header fields, some with predefined values.
**Dynamic Dictionary:** A list of actual headers that were encountered during the connection. This dictionary has a limited size, and when new entries are added, old entries might be evicted.
**Huffman Encoding:** A static Huffman code can be used to encode any string: name or value. This code was computed specifically for HTTP request/response headers: ASCII digits and lowercase letters are given shorter encodings. The shortest possible encoding is 5 bits long, therefore the highest achievable compression ratio is 8:5 (or 37.5% smaller).
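Here is a small illustrative Python sketch of the first and third methods. The table indices come from RFC 7541, Appendix A; the word "static" is chosen because every one of its characters happens to have a 5-bit Huffman code:

```python
# A few of the 61 entries in HPACK's static table (RFC 7541, Appendix A).
STATIC_TABLE = {
    2:  (":method", "GET"),
    4:  (":path", "/"),
    7:  (":scheme", "https"),
    24: ("cache-control", None),   # name-only entries carry no value
    32: ("cookie", None),
    58: ("user-agent", None),
}

# Best case for the static Huffman code: every character in "static"
# (s, t, a, i, c) has a 5-bit code, versus 8 bits per raw octet.
word = "static"
raw_bits, huffman_bits = 8 * len(word), 5 * len(word)
print(huffman_bits / raw_bits)   # 0.625 -> 37.5% smaller, the 8:5 bound
```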
HPACK flow
When HPACK needs to encode a header in the format name:value, it will first look in the static and dynamic dictionaries. If the full name:value is present, it will simply reference the entry in the dictionary. That will usually take one byte, and in most cases two bytes will suffice. A whole header encoded in a single byte! How crazy is that?
Since many headers are repetitive, this strategy has a very high success rate. For example, headers like :authority:www.cloudflare.com or the sometimes huge cookie headers are the usual suspects in this case.
When HPACK can't match a whole header in a dictionary, it will attempt to find a header with the same name. Most of the popular header names are present in the static table, for example: content-encoding, cookie, etag. The rest are likely to be repetitive and therefore present in the dynamic table. For example, Cloudflare assigns a unique cf-ray header to each response, and while the value of this field is always different, the name can be reused!
If the name was found, it can again be expressed in one or two bytes in most cases, otherwise the name will be encoded using either raw encoding or the Huffman encoding: the shorter of the two. The same goes for the value of the header.
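As an illustration of this flow, here is a minimal, hedged Python sketch. It is not nginx's actual implementation, and literal strings are sent raw for brevity, where a real encoder would also try the static Huffman code and keep the shorter form:

```python
# A minimal sketch of HPACK's encoding decision flow.

def encode_int(value: int, prefix_bits: int, flags: int) -> bytes:
    # RFC 7541, section 5.1: integer encoding with an n-bit prefix.
    limit = (1 << prefix_bits) - 1
    if value < limit:
        return bytes([flags | value])
    out = bytearray([flags | limit])
    value -= limit
    while value >= 128:
        out.append((value & 0x7F) | 0x80)
        value >>= 7
    out.append(value)
    return bytes(out)

def encode_header(name: str, value: str, table: dict) -> bytes:
    # `table` maps (name, value) and (name, None) pairs to indices,
    # merging the static and dynamic tables for simplicity.
    if (name, value) in table:
        # Whole name:value matched: an indexed field, one or two bytes.
        return encode_int(table[(name, value)], 7, 0x80)
    raw_value = bytes([len(value)]) + value.encode()  # assumes len < 127
    if (name, None) in table:
        # Name matched: index for the name plus a literal value.
        return encode_int(table[(name, None)], 6, 0x40) + raw_value
    # No match at all: literal name and literal value.
    raw_name = bytes([len(name)]) + name.encode()
    return b"\x40" + raw_name + raw_value

table = {(":method", "GET"): 2, ("cookie", None): 32}
print(encode_header(":method", "GET", table).hex())  # 82: one byte total
print(encode_header("cookie", "a=1", table).hex())   # 60 + literal value
```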
We found that the Huffman encoding alone saves almost 30% of header size.
Although HPACK does perform string matching, an attacker trying to recover the value of a header must guess the entire value at once, rather than gradually, character by character, as the DEFLATE-based schemes vulnerable to CRIME allowed.
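For contrast with the CRIME toy above, here is a hedged back-of-the-envelope sketch (simplified sizes and hypothetical table contents, not real HPACK byte layouts) of why HPACK leaks no such signal: a near-miss guess produces output of exactly the same size as a completely wrong one.

```python
# Simplified sizing model: a full match costs a 1-2 byte index, and
# anything else is a literal whose size depends only on its length.
def hpack_size(name: str, value: str, table: set) -> int:
    if (name, value) in table:
        return 1                           # indexed field: full match only
    return 2 + len(name) + len(value)      # literal: flags + lengths + octets

table = {("cookie", "session=s3cr3t")}
print(hpack_size("cookie", "session=s3cr3a", table))  # near miss -> 22
print(hpack_size("cookie", "session=xxxxxx", table))  # far off   -> 22
```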
Request Headers
The gains HPACK provides for HTTP request headers are more significant than for response headers. Request headers compress better because they contain much more duplication between requests. For example, here are two requests for our own blog, made with Chrome:
Request #1:
**:authority:** blog.cloudflare.com
**:method:** GET
**:path:** /
**:scheme:** https
**accept:** text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8
**accept-encoding:** gzip, deflate, sdch, br
**accept-language:** en-US,en;q=0.8
**cookie:** [297 byte cookie]
**upgrade-insecure-requests:** 1
**user-agent:** Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/55.0.2853.0 Safari/537.36
The three fields :method: GET, :path: / and :scheme: https are always present in the static dictionary, and will each be encoded in a single byte. Several other fields will have only their names compressed into a single byte: :authority, accept, accept-encoding, accept-language, cookie and user-agent are all present in the static dictionary. Everything else will be Huffman encoded.
Headers that were not matched will be inserted into the dynamic dictionary, for the following requests to use.
Let's take a look at a later request:
Request #2:
**:authority:** blog.cloudflare.com
**:method:** GET
**:path:** /assets/images/cloudflare-sprite-small.png
**:scheme:** https
**accept:** image/webp,image/*,*/*;q=0.8
**accept-encoding:** gzip, deflate, sdch, br
**accept-language:** en-US,en;q=0.8
**cookie:** [same 297 byte cookie]
**referer:** https://blog.cloudflare.com/assets/css/screen.css?v=2237be22c2
**user-agent:** Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/55.0.2853.0 Safari/537.36
It is clear that most fields repeat between requests. Two fields are again matched in full against the static dictionary, and five more are repeats and therefore matched in full against the dynamic dictionary, which means each can be encoded in one or two bytes. Among them are the ~300 byte cookie header and the ~130 byte user-agent: about 430 bytes encoded into a mere 4 bytes, 99% compression!
All in all, only three short strings in the repeat request will be Huffman encoded.
This is how ingress header traffic appears on the Cloudflare edge network during a six hour period:
On average we are seeing a 76% compression for ingress headers. As the headers represent the majority of ingress traffic, it also provides substantial savings in the total ingress traffic:
We can see that the total ingress traffic is reduced by 53% as the result of HPACK compression!
In fact today, we process about the same number of requests for HTTP/1 and HTTP/2 over HTTPS, yet the ingress traffic for HTTP/2 is only half that of HTTP/1.
Response Headers
For the response headers (egress traffic) the gains are more modest, but still substantial:
Response #1:
**cache-control:** public, max-age=30
**cf-cache-status:** HIT
**cf-h2-pushed:** </assets/css/screen.css?v=2237be22c2>,</assets/js/jquery.fitvids.js?v=2237be22c2>
**cf-ray:** 2ded53145e0c1ffa-DFW
**content-encoding:** gzip
**content-type:** text/html; charset=utf-8
**date:** Wed, 07 Sep 2016 21:41:23 GMT
**expires:** Wed, 07 Sep 2016 21:41:53 GMT
**link:** <//cdn.bizible.com/scripts/bizible.js>; rel=preload; as=script,<https://code.jquery.com/jquery-1.11.3.min.js>; rel=preload; as=script
**server:** cloudflare-nginx
**status:** 200
**vary:** Accept-Encoding
**x-ghost-cache-status:** From Cache
**x-powered-by:** Express
The majority of the first response will be Huffman encoded, with some of the field names being matched from the static dictionary.
Response #2:
**cache-control:** public, max-age=31536000
**cf-bgj:** imgq:100
**cf-cache-status:** HIT
**cf-ray:** 2ded53163e241ffa-DFW
**content-type:** image/png
**date:** Wed, 07 Sep 2016 21:41:23 GMT
**expires:** Thu, 07 Sep 2017 21:41:23 GMT
**server:** cloudflare-nginx
**status:** 200
**vary:** Accept-Encoding
**x-ghost-cache-status:** From Cache
**x-powered-by:** Express
On the second response it is possible to fully match seven of the twelve headers, most of them against the dynamic table populated by the first response. For four of the remaining five, the name can be fully matched, and six strings will be efficiently encoded using the static Huffman code.
Although the two expires headers are almost identical, they can only be Huffman compressed, because they can't be matched in full.
As more requests are processed, the dynamic table grows, more headers can be matched, and the compression ratio improves.
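The bookkeeping behind that growth is straightforward. Here is a hedged Python sketch of RFC 7541's size accounting and eviction rule, using one of the cf-ray values from above; each entry is charged 32 bytes of overhead on top of its name and value lengths:

```python
from collections import deque

# A sketch of RFC 7541's dynamic table bookkeeping: each entry costs
# len(name) + len(value) + 32 bytes, and the oldest entries are evicted
# once the configured maximum size would be exceeded.
class DynamicTable:
    def __init__(self, max_size: int = 4096):
        self.max_size = max_size
        self.entries = deque()   # newest on the left, oldest on the right
        self.size = 0

    def insert(self, name: str, value: str) -> None:
        needed = len(name) + len(value) + 32
        while self.entries and self.size + needed > self.max_size:
            old_name, old_value = self.entries.pop()     # evict oldest
            self.size -= len(old_name) + len(old_value) + 32
        if needed <= self.max_size:  # oversized entries are simply dropped
            self.entries.appendleft((name, value))
            self.size += needed

table = DynamicTable()
table.insert("cf-ray", "2ded53145e0c1ffa-DFW")
print(table.size)   # 6 + 20 + 32 = 58 bytes of table space
```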
This is how egress header traffic appears on the Cloudflare edge:
On average egress headers are compressed by 69%. The savings for the total egress traffic are not that significant however:
It is difficult to see, but we get 1.4% savings in the total egress HTTP/2 traffic. While it does not look like much, it is still more than increasing the compression level for data would give in many cases. This number is also significantly skewed by websites that serve very large files: we measured savings of well over 15% for some websites.
Test your HPACK
If you have nghttp2 installed, you can test the efficiency of HPACK compression on your website with a bundled tool called h2load.
For example:
h2load https://blog.cloudflare.com | tail -6 | head -1
traffic: 18.27KB (18708) total, 538B (538) headers (space savings 27.98%), 17.65KB (18076) data
We see 27.98% space savings in the headers. That is for a single request, and the gains are mostly due to the Huffman encoding. To test if the website utilizes the full power of HPACK, we need to issue two requests, for example:
h2load https://blog.cloudflare.com -n 2 | tail -6 | head -1
traffic: 36.01KB (36873) total, 582B (582) headers (space savings 61.15%), 35.30KB (36152) data
If for two similar requests the savings are 50% or more, it is very likely that full HPACK compression is being utilized.
Note that compression ratio improves with additional requests:
h2load https://blog.cloudflare.com -n 4 | tail -6 | head -1
traffic: 71.46KB (73170) total, 637B (637) headers (space savings 78.68%), 70.61KB (72304) data
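To watch the dynamic table warm up, you can script this measurement. Below is a hedged helper, assuming h2load is on your PATH; the exact output format may vary between nghttp2 versions:

```python
import re
import subprocess

# Run h2load with increasing request counts and extract the reported
# header space savings from its summary line.
for n in (1, 2, 4, 8):
    out = subprocess.run(
        ["h2load", "-n", str(n), "https://blog.cloudflare.com"],
        capture_output=True, text=True,
    ).stdout
    match = re.search(r"space savings ([0-9.]+)%", out)
    savings = match.group(1) if match else "n/a"
    print(f"{n} requests: {savings}% header savings")
```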
Conclusion
By implementing HPACK compression for HTTP response headers we've seen a significant drop in egress bandwidth. HPACK has been enabled for all Cloudflare customers using HTTP/2, all of whom benefit from faster, smaller HTTP responses.