
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/">
    <channel>
        <title><![CDATA[ The Cloudflare Blog ]]></title>
        <description><![CDATA[ Get the latest news on how products at Cloudflare are built, technologies used, and join the teams helping to build a better Internet. ]]></description>
        <link>https://blog.cloudflare.com</link>
        <atom:link href="https://blog.cloudflare.com/" rel="self" type="application/rss+xml"/>
        <language>en-us</language>
        <image>
            <url>https://blog.cloudflare.com/favicon.png</url>
            <title>The Cloudflare Blog</title>
            <link>https://blog.cloudflare.com</link>
        </image>
        <lastBuildDate>Sat, 04 Apr 2026 14:31:33 GMT</lastBuildDate>
        <item>
            <title><![CDATA[Resolving a Mutual TLS session resumption vulnerability]]></title>
            <link>https://blog.cloudflare.com/resolving-a-mutual-tls-session-resumption-vulnerability/</link>
            <pubDate>Fri, 07 Feb 2025 20:13:14 GMT</pubDate>
            <description><![CDATA[ Cloudflare patched a Mutual TLS (mTLS) vulnerability (CVE-2025-23419) reported via its Bug Bounty Program. The flaw in session resumption allowed client certificates to authenticate across different zones. ]]></description>
            <content:encoded><![CDATA[ <p>On January 23, 2025, Cloudflare was notified via its <a href="https://www.cloudflare.com/en-gb/disclosure/"><u>Bug Bounty Program</u></a> of a vulnerability in Cloudflare’s <a href="https://www.cloudflare.com/en-gb/learning/access-management/what-is-mutual-tls/"><u>Mutual TLS</u></a> (mTLS) implementation. </p><p>The vulnerability affected customers who were using mTLS and involved a flaw in our session resumption handling. Cloudflare’s investigation revealed <b>no</b> evidence that the vulnerability was being actively exploited. Tracked as <a href="https://nvd.nist.gov/vuln/detail/CVE-2025-23419"><u>CVE-2025-23419</u></a>, the vulnerability was mitigated within 32 hours of Cloudflare being notified. Customers who were using Cloudflare’s API Shield in conjunction with <a href="https://developers.cloudflare.com/waf/custom-rules/"><u>WAF custom rules</u></a> that validated the issuer's Subject Key Identifier (<a href="https://developers.cloudflare.com/ruleset-engine/rules-language/fields/reference/cf.tls_client_auth.cert_issuer_ski/"><u>SKI</u></a>) were not vulnerable. Access policies such as identity verification, IP address restrictions, and device posture assessments were also not vulnerable.</p>
    <div>
      <h2>Background</h2>
      <a href="#background">
        
      </a>
    </div>
    <p>The bug bounty report detailed that a client with a valid mTLS certificate for one Cloudflare zone could use the same certificate to resume a TLS session with another Cloudflare zone using mTLS, without having to authenticate the certificate with the second zone.</p><p>Cloudflare customers can implement mTLS through Cloudflare <a href="https://developers.cloudflare.com/api-shield/security/mtls/"><u>API Shield</u></a> with Custom Firewall Rules and the <a href="https://developers.cloudflare.com/cloudflare-one/identity/devices/access-integrations/mutual-tls-authentication/"><u>Cloudflare Zero Trust</u></a> product suite. Cloudflare establishes the TLS session with the client and forwards the client certificate to Cloudflare’s Firewall or Zero Trust products, where customer policies are enforced.</p><p>mTLS operates by extending the standard TLS handshake to require authentication from both sides of a connection - the client and the server. In a typical TLS session, a client connects to a server, which presents its <a href="https://www.cloudflare.com/application-services/products/ssl/">TLS certificate</a>. The client verifies the certificate, and upon successful validation, an encrypted session is established. However, with mTLS, the client also presents its own TLS certificate, which the server verifies before the connection is fully established. Only if both certificates are validated does the session proceed, ensuring bidirectional trust.</p>
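    <p>As a rough illustration (not Cloudflare's implementation), here is what requiring a client certificate looks like with Python's standard <code>ssl</code> module; the certificate file paths in the comments are hypothetical placeholders:</p>

```python
import ssl

# Ordinary TLS server: only the server presents a certificate.
plain_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)

# Mutual TLS server: additionally demand and verify a client certificate.
mtls_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
mtls_ctx.verify_mode = ssl.CERT_REQUIRED  # handshake fails without a valid client cert

# Hypothetical file paths -- supply your own server certificate and the CA
# that signs your client certificates:
# mtls_ctx.load_cert_chain("server.pem", "server.key")
# mtls_ctx.load_verify_locations(cafile="client-ca.pem")
```

    <p>With <code>CERT_REQUIRED</code> set, the handshake itself fails unless the client presents a certificate that chains to the configured CA, so trust is enforced in both directions.</p>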
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2FXDaK0R6cpH4IZwSlCyXk/e8f6764656d2672f9eadf4e60851614f/BLOG-2667_2.png" />
          </figure><p>mTLS is useful for <a href="https://developers.cloudflare.com/api-shield/security/mtls/"><u>securing API communications</u></a>, as it ensures that only legitimate and authenticated clients can interact with backend services. Unlike traditional authentication mechanisms that rely on credentials or <a href="https://www.cloudflare.com/en-gb/learning/access-management/token-based-authentication/"><u>tokens</u></a>, mTLS requires possession of a valid certificate and its corresponding private key.</p><p>To improve TLS connection performance, Cloudflare employs <a href="https://blog.cloudflare.com/tls-session-resumption-full-speed-and-secure/"><u>session resumption</u></a>. Session resumption speeds up the handshake process, reducing both latency and resource consumption. The core idea is that once a client and server have successfully completed a TLS handshake, future handshakes should be streamlined — assuming that fundamental parameters such as the cipher suite or TLS version remain unchanged.</p><p>There are two primary mechanisms for session resumption: session IDs and session tickets. With session IDs, the server stores the session context and associates it with a unique session ID. When a client reconnects and presents this session ID in its ClientHello message, the server checks its cache. If the session is still valid, the handshake is resumed using the cached state.</p><p>Session tickets function in a stateless manner. Instead of storing session data, the server encrypts the session context and sends it to the client as a session ticket. In future connections, the client includes this ticket in its ClientHello, which the server can then decrypt to restore the session, eliminating the need for the server to maintain session state.</p><p>A resumed mTLS session leverages previously established trust, allowing clients to reconnect to a protected application without needing to re-initiate an mTLS handshake.</p>
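    <p>The stateless ticket design can be sketched in a few lines of Python. This is a toy model that only authenticates the serialized state with an HMAC; real session tickets (including those defined for TLS 1.3) are encrypted as well as authenticated, and ticket keys are rotated regularly:</p>

```python
import hashlib
import hmac
import json
import time

# Illustrative key only; real ticket keys are random and rotated frequently.
TICKET_KEY = b"server-side ticket key"

def issue_ticket(session_state):
    """Serialize session state into a self-contained, authenticated ticket.
    The server stores nothing; the client keeps the ticket."""
    payload = json.dumps(session_state, sort_keys=True).encode()
    tag = hmac.new(TICKET_KEY, payload, hashlib.sha256).hexdigest().encode()
    return payload + b"." + tag

def resume_from_ticket(ticket):
    """Recover session state from a ticket, rejecting forgeries."""
    payload, _, tag = ticket.rpartition(b".")
    expected = hmac.new(TICKET_KEY, payload, hashlib.sha256).hexdigest().encode()
    if not hmac.compare_digest(expected, tag):
        return None  # invalid ticket: fall back to a full handshake
    return json.loads(payload)

state = {"cipher": "TLS_AES_128_GCM_SHA256", "issued": int(time.time())}
ticket = issue_ticket(state)
```

    <p>Because the ticket carries everything needed to restore the session, any server holding the ticket key can resume it without a cache lookup, which is exactly what makes the design attractive at scale.</p>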
    <div>
      <h3>The mTLS resumption vulnerability</h3>
      <a href="#the-mtls-resumption-vulnerability">
        
      </a>
    </div>
    <p>In Cloudflare’s mTLS implementation, however, session resumption introduced an unintended behavior. <a href="https://boringssl.googlesource.com/boringssl"><u>BoringSSL</u></a>, the TLS library that Cloudflare uses, will store the client certificate from the originating, full TLS handshake in the session. Upon resuming that session, the client certificate is not revalidated against the full chain of trust, and the original handshake's verification status is respected. To avoid this situation, BoringSSL provides an API to partition session caches/tickets between different “contexts” defined by the application. Unfortunately, Cloudflare’s use of this API was not correct, which allowed TLS sessions to be resumed when they shouldn’t have been.</p><p>To exploit this vulnerability, the security researcher first set up two zones on Cloudflare and configured them behind Cloudflare’s proxy with mTLS enabled. Once their domains were configured, the researcher authenticated to the first zone using a valid client certificate, allowing Cloudflare to issue a TLS session ticket against that zone.</p><p>The researcher then changed the TLS Server Name Indication (SNI) and HTTP Host header from the first zone (which they had authenticated with) to target the second zone (which they had <i>not</i> authenticated with). They then presented the session ticket when handshaking with the second Cloudflare-protected mTLS zone. This resulted in Cloudflare resuming the session with the second zone and reporting the verification status of the cached client certificate as successful, bypassing the mTLS authentication that would normally be required to initiate a session.</p><p>If you were using additional validation methods in your API Shield or Access policies – for example, checking the issuer's SKI, identity verification, IP address restrictions, or device posture assessments – these controls continued to function as intended. However, due to the issue with TLS session resumption, the mTLS checks mistakenly returned a passing result without re-evaluating the full certificate chain.</p>
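    <p>The partitioning that BoringSSL's session-context API is meant to provide can be modelled with a toy in-memory cache (illustrative names, not BoringSSL's actual code). Binding each session to the zone it was established with makes cross-zone resumption fail closed:</p>

```python
import secrets

class PartitionedSessionCache:
    """Toy session cache keyed by (resumption context, session ID).

    Binding each session to the zone it was established with means a
    session minted for zone A cannot be resumed against zone B.
    """

    def __init__(self):
        self._sessions = {}

    def store(self, context, session_state):
        session_id = secrets.token_bytes(16)
        self._sessions[(context, session_id)] = session_state
        return session_id

    def resume(self, context, session_id):
        # The lookup fails unless the zone presented now matches the zone
        # the session was originally established with.
        return self._sessions.get((context, session_id))

cache = PartitionedSessionCache()
sid = cache.store("zone-a.example", {"client_cert_verified": True})
```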
    <div>
      <h2>Remediation and next steps</h2>
      <a href="#remediation-and-next-steps">
        
      </a>
    </div>
    <p>We have disabled TLS session resumption for all customers that have mTLS enabled. As a result, Cloudflare will no longer allow resuming sessions that cache client certificates and their verification status.</p><p>We are exploring ways to bring back the performance improvements from TLS session resumption for mTLS customers.</p>
    <div>
      <h2>Further hardening</h2>
      <a href="#further-hardening">
        
      </a>
    </div>
    <p>Customers can further harden their mTLS configuration and add enhanced logging to detect future issues by using Cloudflare's <a href="https://developers.cloudflare.com/rules/transform/"><u>Transform Rules</u></a>, logging, and firewall features.</p><p>While Cloudflare has mitigated the issue by disabling session resumption for mTLS connections, customers may want to implement additional monitoring at their origin to enforce stricter authentication policies. All customers using mTLS can also enable additional request headers using our <a href="https://developers.cloudflare.com/rules/transform/managed-transforms/reference/#add-tls-client-auth-headers"><u>Managed Transforms</u></a> product. Enabling this feature allows us to pass additional metadata to your origin with the details of the client certificate that was used for the connection.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7eYFaZUrBYTESAZEQHsnHS/8bdb9135ab58648529cb8339c48ebb2b/BLOG-2667_3.png" />
          </figure><p>With this feature enabled, requests that use mTLS will include headers like the following:</p>
            <pre><code>{
  "headers": {
    "Cf-Cert-Issuer-Dn": "CN=Taskstar Root CA,OU=Taskstar\\, Inc.,L=London,ST=London,C=UK",
    "Cf-Cert-Issuer-Dn-Legacy": "/C=UK/ST=London/L=London/OU=Taskstar, Inc./CN=Taskstar Root CA",
    "Cf-Cert-Issuer-Dn-Rfc2253": "CN=Taskstar Root CA,OU=Taskstar\\, Inc.,L=London,ST=London,C=UK",
    "Cf-Cert-Issuer-Serial": "7AB07CC0D10C38A1B554C728F230C7AF0FF12345",
    "Cf-Cert-Issuer-Ski": "A5AC554235DBA6D963B9CDE0185CFAD6E3F55E8F",
    "Cf-Cert-Not-After": "Jul 29 10:26:00 2025 GMT",
    "Cf-Cert-Not-Before": "Jul 29 10:26:00 2024 GMT",
    "Cf-Cert-Presented": "true",
    "Cf-Cert-Revoked": "false",
    "Cf-Cert-Serial": "0A62670673BFBB5C9CA8EB686FA578FA111111B1B",
    "Cf-Cert-Sha1": "64baa4691c061cd7a43b24bccb25545bf28f1111",
    "Cf-Cert-Sha256": "528a65ce428287e91077e4a79ed788015b598deedd53f17099c313e6dfbc87ea",
    "Cf-Cert-Ski": "8249CDB4EE69BEF35B80DA3448CB074B993A12A3",
    "Cf-Cert-Subject-Dn": "CN=MB,OU=Taskstar Admins,O=Taskstar,L=London,ST=Essex,C=UK",
    "Cf-Cert-Subject-Dn-Legacy": "/C=UK/ST=Essex/L=London/O=Taskstar/OU=Taskstar Admins/CN=MB",
    "Cf-Cert-Subject-Dn-Rfc2253": "CN=MB,OU=Taskstar Admins,O=Taskstar,L=London,ST=Essex,C=UK",
    "Cf-Cert-Verified": "true",
    "Cf-Client-Cert-Sha256": "083129c545d7311cd5c7a26aabe3b0fc76818495595cea92efe111150fd2da2"
  }
}
</code></pre>
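            <p>As an illustration, an origin server could re-check these headers as defense in depth. A minimal sketch, using the header names from the example above (the expected issuer SKI is a placeholder for your own CA's):</p>

```python
# Placeholder: replace with the SKI of the CA you expect to issue client certs.
EXPECTED_ISSUER_SKI = "A5AC554235DBA6D963B9CDE0185CFAD6E3F55E8F"

def mtls_request_ok(headers):
    """Accept a request only if Cloudflare reports a verified, unrevoked
    client certificate issued by the expected CA."""
    return (
        headers.get("Cf-Cert-Presented") == "true"
        and headers.get("Cf-Cert-Verified") == "true"
        and headers.get("Cf-Cert-Revoked") == "false"
        and headers.get("Cf-Cert-Issuer-Ski") == EXPECTED_ISSUER_SKI
    )
```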
            <p>Enterprise customers can also use our <a href="https://developers.cloudflare.com/logs/"><u>Cloudflare Log</u></a> products to add these headers via the Logs <a href="https://developers.cloudflare.com/logs/reference/custom-fields/"><u>Custom Fields</u></a> feature. For example:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3D864CsepB5U2wM1AWhYVu/ca7d3d1ca144bc4fb7ac7edddfdf5987/BLOG-2667_4.png" />
          </figure><p>This will add the following information to Cloudflare Logs.</p>
            <pre><code>"RequestHeaders": {
    "cf-cert-issuer-ski": "A5AC554235DBA6D963B9CDE0185CFAD6E3F55E8F",
    "cf-cert-sha256": "528a65ce428287e91077e4a79ed788015b598deedd53f17099c313e6dfbc87ea"
  },
</code></pre>
            <p>Customers already logging this information — either at their origin or via Cloudflare Logs — can retroactively check for unexpected certificate hashes or issuers that did not trigger any security policy.</p><p>Users are also able to use this information within their <a href="https://developers.cloudflare.com/learning-paths/application-security/firewall/custom-rules/"><u>WAF custom rules</u></a> to conduct additional checks. For example, checking the <a href="https://developers.cloudflare.com/ruleset-engine/rules-language/fields/reference/cf.tls_client_auth.cert_issuer_ski/"><u>Issuer's SKI</u></a> can provide an extra layer of security.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1YWZe9P1hhYEPJrWH4gpqi/b0a6f3c70a203032404c1ca0e2fc517c/BLOG-2667_5.png" />
          </figure><p>Customers who enabled this <a href="https://developers.cloudflare.com/api-shield/security/mtls/configure/#expression-builder"><u>additional check</u></a> were not vulnerable.</p>
    <div>
      <h2>Conclusion</h2>
      <a href="#conclusion">
        
      </a>
    </div>
    <p>We would like to thank Sven Hebrok, Felix Cramer, Tim Storm, Maximilian Radoy, and Juraj Somorovsky of Paderborn University who responsibly disclosed this issue via our <a href="https://hackerone.com/cloudflare?type=team"><u>HackerOne Bug Bounty Program</u></a>, allowing us to identify and mitigate the vulnerability. We welcome further submissions from our community of researchers to continually improve our products' security.</p><p>Finally, we want to apologize to our mTLS customers. Security is at the core of everything we do at Cloudflare, and we deeply regret any concerns this issue may have caused. We have taken immediate steps to resolve the vulnerability and have implemented additional safeguards to prevent similar issues in the future. </p>
    <div>
      <h2>Timeline</h2>
      <a href="#timeline">
        
      </a>
    </div>
    <p><i>All timestamps are in UTC</i></p><ul><li><p><b>2025-01-23 15:40</b> – Cloudflare is notified of a vulnerability in Mutual TLS and the use of session resumption.</p></li><li><p><b>2025-01-23 16:02 to 21:06</b> – Cloudflare validates Mutual TLS vulnerability and prepares a release to disable session resumption for Mutual TLS.</p></li><li><p><b>2025-01-23 21:26</b> – Cloudflare begins rollout of remediation.</p></li><li><p><b>2025-01-24 20:15</b> – Rollout completed. Vulnerability is remediated.</p></li></ul><p></p> ]]></content:encoded>
            <category><![CDATA[Vulnerabilities]]></category>
            <category><![CDATA[WAF]]></category>
            <category><![CDATA[Zero Trust]]></category>
            <category><![CDATA[SASE]]></category>
            <category><![CDATA[TLS]]></category>
            <category><![CDATA[Network Services]]></category>
            <guid isPermaLink="false">4gJhafUsmUjkevKu55304a</guid>
            <dc:creator>Matt Bullock</dc:creator>
            <dc:creator>Rushil Mehra</dc:creator>
            <dc:creator>Alessandro Ghedini</dc:creator>
        </item>
        <item>
            <title><![CDATA[Encrypted Client Hello - the last puzzle piece to privacy]]></title>
            <link>https://blog.cloudflare.com/announcing-encrypted-client-hello/</link>
            <pubDate>Fri, 29 Sep 2023 13:00:52 GMT</pubDate>
            <description><![CDATA[ We're excited to announce a contribution to improving privacy for everyone on the Internet. Encrypted Client Hello, a new standard that prevents networks from snooping on which websites a user is visiting, is now available on all Cloudflare plans.  ]]></description>
            <content:encoded><![CDATA[ <p></p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4tab7Qbtn2MGjXuTbcAsux/c40ac1cfda644402d4022573129a588b/image2-29.png" />
            
            </figure><p>Today we are excited to announce a contribution to improving privacy for everyone on the Internet. Encrypted Client Hello, a <a href="https://datatracker.ietf.org/doc/draft-ietf-tls-esni/">new proposed standard</a> that prevents networks from snooping on which websites a user is visiting, is now available on all Cloudflare plans.  </p><p>Encrypted Client Hello (ECH) is a successor to <a href="https://www.cloudflare.com/learning/ssl/what-is-encrypted-sni/">ESNI</a> and masks the Server Name Indication (SNI) that is used to negotiate a TLS handshake. This means that whenever a user visits a website on Cloudflare that has ECH enabled, no one except for the user, Cloudflare, and the website owner will be able to determine which website was visited. Cloudflare is a big proponent of privacy for everyone and is excited about the prospects of bringing this technology to life.</p>
    <div>
      <h3>Browsing the Internet and your privacy</h3>
      <a href="#browsing-the-internet-and-your-privacy">
        
      </a>
    </div>
    <p>Whenever you visit a website, your browser sends a request to a web server. The web server responds with content and the website starts loading in your browser. Way back in the early days of the Internet this happened in 'plain text', meaning that your browser would just send bits across the network that everyone could read: the corporate network you may be browsing from, the Internet Service Provider that offers you Internet connectivity and any network that the request traverses before it reaches the web server that hosts the website. Privacy advocates have long been concerned about how much information could be seen in "plain text": If any network between you and the web server can see your traffic, that means they can also see exactly what you are doing. If you are initiating a bank transfer, any intermediary can see the destination and the amount of the transfer.</p><p>So how do we start making this data more private? To prevent eavesdropping, encryption was introduced in the form of <a href="https://www.cloudflare.com/learning/ssl/what-is-ssl/">SSL</a> and later <a href="https://www.cloudflare.com/learning/ssl/transport-layer-security-tls/">TLS</a>. These are amazing protocols that safeguard not only your privacy but also ensure that no intermediary can tamper with any of the content you view or upload. But encryption only goes so far.</p><p>While the actual content (which particular page on a website you're visiting and any information you upload) is encrypted and shielded from intermediaries, there are still ways to determine what a user is doing. For example, the DNS request to determine the address (IP) of the website you're visiting and the <a href="https://www.cloudflare.com/learning/ssl/what-is-sni/">SNI</a> are both common ways for intermediaries to track usage.</p><p>Let's start with <a href="https://www.cloudflare.com/learning/dns/what-is-dns/">DNS</a>. 
Whenever you visit a website, your operating system needs to know which IP address to connect to. This is done through a DNS request. DNS by default is unencrypted, meaning anyone can see which website you're asking about. To help users shield these requests from intermediaries, Cloudflare introduced <a href="/dns-encryption-explained/">DNS over HTTPS</a> (DoH) in 2019. In 2020, we went one step further and introduced <a href="/oblivious-dns/">Oblivious DNS over HTTPS</a> which prevents even Cloudflare from seeing which websites a user is asking about.</p><p>That leaves SNI as the last unencrypted bit that intermediaries can use to determine which website you're visiting. After performing a DNS query, one of the first things a browser will do is perform a <a href="https://www.cloudflare.com/learning/ssl/what-happens-in-a-tls-handshake/">TLS handshake</a>. The handshake constitutes several steps, including which cipher to use, which TLS version and which certificate will be used to verify the web server's identity. As part of this handshake, the browser will indicate the name of the server (website) that it intends to visit: the Server Name Indication.</p><p>Due to the fact that the session is not encrypted yet, and the server doesn't know which certificate to use, the browser must transmit this information in plain text. Sending the SNI in plaintext means that any intermediary can view which website you’re visiting simply by checking the first packet for a connection:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/74acW9qWQyuJFltzJagc2S/5693f381e386d44283121cdd9dd65106/pasted-image-0--8--2.png" />
            
            </figure><p>This means that despite the amazing efforts of TLS and DoH, which websites you’re visiting on the Internet still isn't truly private. Today, we are adding the final missing piece of the puzzle with ECH. With ECH, the browser performs a TLS handshake with Cloudflare, but not a customer-specific hostname. This means that although intermediaries will be able to see that you are visiting <i>a</i> website on Cloudflare, they will never be able to determine which one.</p>
    <div>
      <h3>How does ECH work?</h3>
      <a href="#how-does-ech-work">
        
      </a>
    </div>
    <p>In order to explain how ECH works, it helps to first understand how TLS handshakes are performed. A TLS handshake starts with a <a href="https://www.cloudflare.com/learning/ssl/what-happens-in-a-tls-handshake/">ClientHello</a> message, in which the client says which ciphers it supports, which TLS version to use and, most importantly, which server it's trying to visit (the SNI).</p><p>With ECH, the ClientHello message is split into two parts: an inner part and an outer part. The outer part contains the non-sensitive information such as which ciphers to use and the TLS version. It also includes an "outer SNI". The inner part is encrypted and contains an "inner SNI".</p><p>The outer SNI is a common name that, in our case, represents that a user is trying to visit an encrypted website on Cloudflare. We chose cloudflare-ech.com as the SNI that all websites will share on Cloudflare. Because Cloudflare controls that domain, we have the appropriate certificates to be able to negotiate a TLS handshake for that server name.</p><p>The inner SNI contains the actual server name that the user is trying to visit. This is encrypted using a public key and can only be read by Cloudflare. Once the handshake completes, the web page is loaded as normal, just like any other website loaded over TLS.</p>
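    <p>A toy model makes the inner/outer split concrete. Real ECH encrypts the inner ClientHello with HPKE (RFC 9180) under a public key that the server publishes in DNS; the hash-based keystream below is a stdlib-only stand-in for that, purely for illustration:</p>

```python
import hashlib
import json

PUBLIC_OUTER_SNI = "cloudflare-ech.com"  # the name every intermediary sees

def _keystream(key, n):
    # Toy stand-in for HPKE: real ECH encrypts the inner ClientHello under
    # a public key that the server publishes in DNS.
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def seal_client_hello(real_sni, key):
    """Build a toy ECH ClientHello: public outer SNI, encrypted inner SNI."""
    inner = json.dumps({"sni": real_sni}).encode()
    ct = bytes(a ^ b for a, b in zip(inner, _keystream(key, len(inner))))
    return {"outer_sni": PUBLIC_OUTER_SNI, "encrypted_inner": ct}

def open_client_hello(hello, key):
    """Server side: decrypt the inner part to learn the real destination."""
    ct = hello["encrypted_inner"]
    pt = bytes(a ^ b for a, b in zip(ct, _keystream(key, len(ct))))
    return json.loads(pt)["sni"]

hello = seal_client_hello("secret-site.example", b"demo-key")
```

    <p>On the wire, an observer sees only the shared outer name; only the holder of the key can recover which site the client actually wants.</p>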
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/MOw4dKZFQbisxTiD3PIrD/6b173801a10ece203e82d8bf3ed28c0b/pasted-image-0-8.png" />
            
            </figure><p>In practice, this means that any intermediary that is trying to establish which website you’re visiting will simply see normal TLS handshakes with one caveat: any time you visit an ECH-enabled website on Cloudflare, the server name will look the same. Every TLS handshake will appear identical in that it looks like it's trying to load a website for cloudflare-ech.com, as opposed to the actual website. We've put in place the last piece of the puzzle in preserving privacy for users who don't want intermediaries seeing which websites they are visiting.</p><p>Visit our introductory blog for full details on the nitty-gritty of <a href="/encrypted-client-hello/">ECH technology</a>.</p>
    <div>
      <h3>The future of privacy</h3>
      <a href="#the-future-of-privacy">
        
      </a>
    </div>
    <p>We're excited about what this means for privacy on the Internet. Browsers like <a href="https://chromestatus.com/feature/6196703843581952">Google Chrome</a> and <a href="https://groups.google.com/a/mozilla.org/g/dev-platform/c/uv7PNrHUagA/m/BNA4G8fOAAAJ">Firefox</a> are starting to ramp up support for ECH already. If you run a website and you care about letting users visit it without any intermediary seeing what they are doing, enable ECH today on Cloudflare. We've enabled ECH for all free zones already. If you're an existing paying customer, just head on over to the Cloudflare dashboard and <a href="https://dash.cloudflare.com/?to=/:account/:zone/ssl-tls/edge-certificates">apply for the feature</a>. We’ll be enabling this for everyone that signs up over the coming few weeks.</p><p>Over time, we hope others will follow in our footsteps, leading to a more private Internet for everyone. The more providers that offer ECH, the harder it becomes for anyone to listen in on what users are doing on the Internet. Heck, we might even solve privacy for good.</p><p>If you're looking for more information on ECH, how it works and how to enable it, head on over to our <a href="https://developers.cloudflare.com/ssl/edge-certificates/ech/">developer documentation on ECH</a>.</p>
            <category><![CDATA[Birthday Week]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[Encrypted SNI]]></category>
            <category><![CDATA[Research]]></category>
            <guid isPermaLink="false">3qwZPNqFZ0Cj1noi8cSLG9</guid>
            <dc:creator>Achiel van der Mandele</dc:creator>
            <dc:creator>Alessandro Ghedini</dc:creator>
            <dc:creator>Christopher Wood</dc:creator>
            <dc:creator>Rushil Mehra</dc:creator>
        </item>
        <item>
            <title><![CDATA[Speeding up HTTPS and HTTP/3 negotiation with... DNS]]></title>
            <link>https://blog.cloudflare.com/speeding-up-https-and-http-3-negotiation-with-dns/</link>
            <pubDate>Wed, 30 Sep 2020 13:00:00 GMT</pubDate>
            <description><![CDATA[ A look at a new DNS resource record intended to speed-up negotiation of HTTP security and performance features and how it will help make the web faster. ]]></description>
            <content:encoded><![CDATA[ <p>In late June 2019, Cloudflare's resolver team noticed a spike in DNS requests for the 65479 Resource Record thanks to data exposed through <a href="/introducing-cloudflare-radar/">our new Radar service</a>. We began investigating and found these to be part of <a href="https://developer.apple.com/videos/play/wwdc2020/10111/">Apple’s iOS14 beta release</a> where they were testing out a new SVCB/HTTPS record type.</p><p>Once we saw that Apple was requesting this record type, and while the iOS 14 beta was still ongoing, we rolled out support across the Cloudflare customer base.</p><p>This blog post explains what this new record type does and its significance, but there’s also a deeper story: Cloudflare customers get automatic support for new protocols like this.</p><p>That means that today if you’ve enabled HTTP/3 on an Apple device running iOS 14, when it needs to talk to a Cloudflare customer (say you browse to a Cloudflare-protected website, or use an app whose API is on Cloudflare) it can find the best way of making that connection automatically.</p><p>And if you’re a Cloudflare customer you have to do… absolutely nothing… to give Apple users the best connection to your Internet property.</p>
    <div>
      <h3>Negotiating HTTP security and performance</h3>
      <a href="#negotiating-http-security-and-performance">
        
      </a>
    </div>
    <p>Whenever a user types a URL in the browser box without specifying a scheme (like “https://” or “http://”), the browser cannot assume, without prior knowledge such as a <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Strict-Transport-Security">Strict-Transport-Security (HSTS)</a> cache or preload list entry, whether the requested website supports HTTPS or not. The browser will first try to fetch the resources using plaintext HTTP, and only if the website redirects to an HTTPS URL, or if it specifies an HSTS policy in the initial HTTP response, the browser will then fetch the resource again over a secure connection.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7iz2JFuI19whcuN931y8pk/8d46dd114b946940a6cdcd49502d7b07/image4.gif" />
            
            </figure><p>This means that the latency incurred in fetching the initial resource (say, the index page of a website) is doubled, due to the fact that the browser needs to re-establish the connection over TLS and request the resource all over again. But worse still, the initial request is leaked to the network in plaintext, which could potentially be modified by malicious on-path attackers (think of all those unsecured public WiFi networks) to redirect the user to a completely different website. In practical terms, this weakness is sometimes used by said unsecured public WiFi network operators to sneak advertisements into people’s browsers.</p><p>Unfortunately, that’s not the full extent of it. This problem also impacts <a href="/http3-the-past-present-and-future/">HTTP/3</a>, the newest revision of the HTTP protocol that provides increased performance and security. <a href="https://www.cloudflare.com/learning/performance/what-is-http3/">HTTP/3</a> is advertised using the <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Alt-Svc">Alt-Svc</a> HTTP header, which is only returned after the browser has already contacted the origin using a different and potentially less performant HTTP version. The browser ends up missing out on using faster HTTP/3 on its first visit to the website (although it does store the knowledge for later visits).</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1ulWkZdFUIrgAg7WSKuOwc/acf044f1633f1116567dfd1a0c3d15e0/image2-18.png" />
            
            </figure><p>The fundamental problem comes from the fact that negotiation of HTTP-related parameters (such as whether HTTPS or HTTP/3 can be used) is done through HTTP itself (either via a redirect, HSTS and/or Alt-Svc headers). This leads to a chicken and egg problem where the client needs to use the most basic HTTP configuration that has the best chance of succeeding for the initial request. In most cases this means using plaintext HTTP/1.1. Only after it learns of these parameters can it change its configuration for subsequent requests.</p><p>But before the browser can even attempt to connect to the website, it first needs to resolve the website’s domain to an IP address via DNS. This presents an opportunity: what if additional information required to establish a connection could be provided, in addition to IP addresses, with DNS?</p><p>That’s what we’re excited to be announcing today: Cloudflare has rolled out initial support for HTTPS records to our edge network. Cloudflare’s DNS servers will now automatically generate HTTPS records on the fly to advertise whether a particular zone supports HTTP/3 and/or HTTP/2, based on whether those features are enabled on the zone.</p>
    <div>
      <h3>Service Bindings via DNS</h3>
      <a href="#service-bindings-via-dns">
        
      </a>
    </div>
    <p><a href="https://tools.ietf.org/html/draft-ietf-dnsop-svcb-https-01">The new proposal</a>, currently under discussion at the Internet Engineering Task Force (IETF), defines a family of DNS resource record types (“SVCB”) that can be used to negotiate parameters for a variety of application protocols.</p><p>The generic DNS record “SVCB” can be instantiated into records specific to different protocols. The draft specification defines one such instance called “HTTPS”, specific to the HTTP protocol, which can be used not only to signal to the client that it can connect over a secure connection (skipping the initial unsecured request), but also to advertise the different HTTP versions supported by the website. In the future, potentially even more features could be advertised.</p>
            <pre><code>example.com 3600 IN HTTPS 1 . alpn="h3,h2"</code></pre>
            <p>The DNS record above advertises support for the HTTP/3 and HTTP/2 protocols for the example.com origin.</p><p>This is best used alongside DNS over HTTPS or DNS over TLS, and DNSSEC, to again prevent malicious actors from manipulating the record.</p><p>The client will need to fetch not only the typical A and AAAA records to get the origin’s IP addresses, but also the HTTPS record. It can of course do these lookups in parallel to avoid additional latency at the start of the connection, but this could potentially lead to A/AAAA and HTTPS responses diverging from each other. For example, in cases where the origin makes use of <a href="https://www.cloudflare.com/learning/performance/what-is-dns-load-balancing/">DNS load-balancing</a>: if an origin can be served by multiple <a href="https://www.cloudflare.com/learning/cdn/what-is-a-cdn/">CDNs</a> it might happen that the responses for A and/or AAAA records come from one CDN, while the HTTPS record comes from another. In some cases this can lead to failures when connecting to the origin (say, if the HTTPS record from one of the CDNs advertises support for HTTP/3, but the CDN the client ends up connecting to doesn’t support it).</p><p>This is solved by the SVCB and HTTPS records by providing the IP addresses directly, without the need for the client to look at A and AAAA records. This is done via the “ipv4hint” and “ipv6hint” parameters that can optionally be added to these records, which provide lists of IPv4 and IPv6 addresses that can be used by the client in lieu of the addresses specified in A and AAAA records. Of course clients will still need to query the A and AAAA records, to support cases where no SVCB or HTTPS record is available, but these IP hints provide an additional layer of robustness.</p>
            <pre><code>example.com 3600 IN HTTPS 1 . alpn="h3,h2" ipv4hint="192.0.2.1" ipv6hint="2001:db8::1"</code></pre>
            <p>In addition to all this, SVCB and HTTPS can also be used to define alternative endpoints that are authoritative for a service, in a similar vein to SRV records:</p>
            <pre><code>example.com 3600 IN HTTPS 1 example.net alpn="h3,h2"
example.com 3600 IN HTTPS 2 example.org alpn="h2"</code></pre>
            <p>In this case the “example.com” HTTPS service can be provided by both “example.net” (which supports HTTP/3 and HTTP/2, in addition to HTTP/1.x) and “example.org” (which only supports HTTP/2 and HTTP/1.x). The client will first need to fetch A and AAAA records for “example.net” or “example.org” before being able to connect, which might increase the connection latency, but the service operator can make use of the IP hint parameters discussed above in this case as well, to reduce the number of DNS lookups the client needs to perform.</p><p>This means that SVCB and HTTPS records might finally provide a way for SRV-like functionality to be supported by popular browsers and other clients that have historically not supported SRV records.</p>
    <div>
      <h3>There is always room at the top (apex)</h3>
      <a href="#there-is-always-room-at-the-top-apex">
        
      </a>
    </div>
    <p>When setting up a website on the Internet, it’s common practice to use a “www” subdomain (like in “<a href="http://www.cloudflare.com">www.cloudflare.com</a>”) to identify the site, as well as the “apex” (or “root”) of the domain (in this case, “cloudflare.com”). In order to avoid duplicating the DNS configuration for both domains, the “www” subdomain can typically be configured as a <a href="/introducing-cname-flattening-rfc-compliant-cnames-at-a-domains-root/#cnamesforthewin">CNAME (Canonical Name) record</a>, that is, a record that points to another DNS name.</p>
            <pre><code>cloudflare.com.   3600 IN A 192.0.2.1
cloudflare.com.   3600 IN AAAA 2001:db8::1
www               3600 IN CNAME cloudflare.com.</code></pre>
            <p>This way the list of IP addresses of the website won’t need to be duplicated, but clients requesting A and/or AAAA records for “<a href="http://www.cloudflare.com">www.cloudflare.com</a>” will still get the same results as for “cloudflare.com”.</p><p>However, there are some cases where using a CNAME might seem like the best option, but ends up subtly breaking the DNS configuration for a website. For example, when setting up services such as <a href="https://docs.gitlab.com/ee/user/project/pages/">GitLab Pages</a>, <a href="https://docs.github.com/en/github/working-with-github-pages">GitHub Pages</a> or <a href="https://www.netlify.com/">Netlify</a> with a custom domain, the user is generally asked to add an A (and sometimes AAAA) record to the DNS configuration for their domain. Those IP addresses are hard-coded in users’ configurations, which means that if the provider of the service ever decides to change the addresses (or add new ones), even if just to provide some form of load-balancing, all of their users will need to manually change their configuration.</p><p>Using a CNAME to a more stable domain, which can then have variable A and AAAA records, might seem like a better option, and some of these providers do support that, but it’s important to note that this generally only works for subdomains (like “www” in the previous example) and not for apex records. This is because the DNS specification that defines CNAME records states that when a CNAME is defined for a particular name, there can’t be any other records associated with that name. This is fine for subdomains, but an apex record needs additional records defined, such as SOA and NS, for the DNS configuration to work properly, and may also need records such as MX to make sure emails get properly delivered. In practical terms, this means that defining a CNAME record at the apex of a domain might appear to work fine in some cases, but be subtly broken in ways that are not immediately apparent.</p><p>But what does this all have to do with SVCB and HTTPS records? Well, it turns out that those records can also solve this problem, by defining an alternative format called “alias form” that behaves in the same manner as a CNAME in all the useful ways, but without the annoying historical baggage. A domain operator will be able to define a record such as:</p>
            <pre><code>example.com. 3600 IN HTTPS 0 example.org.</code></pre>
            <p>and expect it to work as if a CNAME was defined, but without the subtle side-effects.</p>
    <div>
      <h3>One more thing</h3>
      <a href="#one-more-thing">
        
      </a>
    </div>
    <p><a href="/encrypted-sni/">Encrypted SNI</a> is an extension to TLS intended to improve privacy of users on the Internet. You might remember how it makes use of a custom DNS record to advertise the server’s public key share used by clients to then derive the secret key necessary to actually encrypt the SNI. In newer revisions of the specification (which is now called “Encrypted ClientHello” or “ECH”) the custom TXT record used previously is simply replaced by a new parameter, called “echconfig”, for the SVCB and HTTPS records.</p><p>This means that SVCB/HTTPS are a requirement to support newer revisions of Encrypted SNI/Encrypted ClientHello. More on this later this year.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4EBQrn9Y1mOFSs0PtAWtri/f77c27a9e29ef4492e11e0b055e4eacf/image1-28.png" />
            
            </figure>
    <div>
      <h3>What now?</h3>
      <a href="#what-now">
        
      </a>
    </div>
    <p>This all sounds great, but what does it actually mean for Cloudflare customers? As mentioned earlier, we have enabled initial support for HTTPS records across our edge network. Cloudflare’s DNS servers will automatically generate HTTPS records on the fly to advertise whether a particular zone supports HTTP/3 and/or HTTP/2, based on whether those features are enabled on the zone, and we will later also add Encrypted ClientHello support.</p><p>Thanks to Cloudflare’s large network that spans millions of web properties (<a href="https://w3techs.com/technologies/history_overview/dns_server">we happen to be one of the most popular DNS providers</a>), serving these records on our customers' behalf will help build a more secure and performant Internet for anyone that is using a supporting client.</p><p>Adopting new protocols requires cooperation between multiple parties. We have been working with various browsers and clients to increase the support and adoption of HTTPS records. Over the last few weeks, Apple’s iOS 14 release has included <a href="https://mailarchive.ietf.org/arch/msg/dnsop/eeP4H9fli712JPWnEMvDg1sLEfg/">client support for HTTPS records</a>, allowing connections to be upgraded to QUIC when the HTTP/3 parameter is returned in the DNS record. Apple has reported that so far, of the population that has manually enabled HTTP/3 on iOS 14, 8% of the QUIC connections had the HTTPS record response.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7tcR9Q2DOynyF2kGZDNwKz/47cb2bc54fdd25f8778e3cd6f94d773e/image3-15.png" />
            
            </figure><p>Other browser vendors, such as <a href="https://groups.google.com/a/chromium.org/forum/#!msg/blink-dev/brZTXr6-2PU/g0g8wWwCAwAJ">Google</a> and <a href="https://bugzilla.mozilla.org/show_bug.cgi?id=1634793">Mozilla</a>, are also working on shipping support for HTTPS records to their users, and we hope to be hearing more on this front soon.</p> ]]></content:encoded>
            <category><![CDATA[Birthday Week]]></category>
            <category><![CDATA[HTTPS]]></category>
            <category><![CDATA[HTTP3]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[Performance]]></category>
            <guid isPermaLink="false">7KRAY9gQONDIoKm8SZ3a8y</guid>
            <dc:creator>Alessandro Ghedini</dc:creator>
        </item>
        <item>
            <title><![CDATA[Accelerating UDP packet transmission for QUIC]]></title>
            <link>https://blog.cloudflare.com/accelerating-udp-packet-transmission-for-quic/</link>
            <pubDate>Wed, 08 Jan 2020 17:08:00 GMT</pubDate>
            <description><![CDATA[ Significant work has gone into optimizing TCP, UDP hasn't received as much attention, putting QUIC at a disadvantage. Let's explore a few tricks that help mitigate this. ]]></description>
            <content:encoded><![CDATA[ <p><i>This was originally published on </i><a href="https://calendar.perfplanet.com/2019/accelerating-udp-packet-transmission-for-quic/"><i>Perf Planet's 2019 Web Performance Calendar</i></a><i>.</i></p><p><a href="/the-road-to-quic/">QUIC</a>, the new Internet transport protocol designed to accelerate HTTP traffic, is delivered on top of <a href="https://www.cloudflare.com/learning/ddos/glossary/user-datagram-protocol-udp/">UDP datagrams</a>, to ease deployment and avoid interference from network appliances that drop packets from unknown protocols. This also allows QUIC implementations to live in user-space, so that, for example, browsers will be able to implement new protocol features and ship them to their users without having to wait for operating system updates.</p><p>But while a lot of work has gone into optimizing TCP implementations as much as possible over the years, including building offloading capabilities in both software (like in operating systems) and hardware (like in network interfaces), UDP hasn't received quite as much attention as TCP, which puts QUIC at a disadvantage. In this post we'll look at a few tricks that help mitigate this disadvantage for UDP, and by association QUIC.</p><p>For the purpose of this blog post we will only be concentrating on measuring throughput of QUIC connections, which, while necessary, is not enough to paint an accurate overall picture of the performance of the QUIC protocol (or its implementations) as a whole.</p>
    <div>
      <h3>Test Environment</h3>
      <a href="#test-environment">
        
      </a>
    </div>
    <p>The client used in the measurements is h2load, <a href="https://github.com/nghttp2/nghttp2/tree/quic">built with QUIC and HTTP/3 support</a>, while the server is NGINX, built with <a href="/experiment-with-http-3-using-nginx-and-quiche/">the open-source QUIC and HTTP/3 module provided by Cloudflare</a> which is based on quiche (<a href="https://github.com/cloudflare/quiche">github.com/cloudflare/quiche</a>), Cloudflare's own <a href="/enjoy-a-slice-of-quic-and-rust/">open-source implementation of QUIC and HTTP/3</a>.</p><p>The client and server are run on the same host (my laptop) running Linux 5.3, so the numbers don’t necessarily reflect what one would see in a production environment over a real network, but it should still be interesting to see how much of an impact each of the techniques have.</p>
    <div>
      <h3>Baseline</h3>
      <a href="#baseline">
        
      </a>
    </div>
    <p>Currently the code that implements QUIC in NGINX uses the <code>sendmsg()</code> system call to send a single UDP packet at a time.</p>
            <pre><code>ssize_t sendmsg(int sockfd, const struct msghdr *msg,
    int flags);</code></pre>
            <p>The <code>struct msghdr</code> carries a <code>struct iovec</code>, which can in turn carry multiple buffers. However, all of the buffers within a single iovec will be merged together into a single UDP datagram during transmission. The kernel will then take care of encapsulating the buffer in a UDP packet and sending it over the wire.</p>
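            <p>To make this concrete, here is a minimal, self-contained sketch (not the NGINX code; the <code>sendmsg_demo()</code> name and the loopback setup are purely illustrative) showing how two iovec buffers are coalesced into a single UDP datagram by one <code>sendmsg()</code> call:</p>

```c
/* Illustrative sketch: two iovec buffers, one sendmsg(), one datagram. */
#include <arpa/inet.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>
#include <unistd.h>

/* Returns the number of bytes received in ONE datagram (expect 10), or -1. */
int sendmsg_demo(void) {
    int rx = socket(AF_INET, SOCK_DGRAM, 0);
    int tx = socket(AF_INET, SOCK_DGRAM, 0);

    struct sockaddr_in addr = { .sin_family = AF_INET };
    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
    addr.sin_port = 0;                          /* let the kernel pick a port */
    bind(rx, (struct sockaddr *)&addr, sizeof(addr));

    socklen_t alen = sizeof(addr);
    getsockname(rx, (struct sockaddr *)&addr, &alen);

    /* Two separate buffers... */
    struct iovec iov[2] = {
        { .iov_base = "hello", .iov_len = 5 },
        { .iov_base = "world", .iov_len = 5 },
    };
    /* ...merged into a single datagram by a single sendmsg() call. */
    struct msghdr msg = {
        .msg_name    = &addr,
        .msg_namelen = sizeof(addr),
        .msg_iov     = iov,
        .msg_iovlen  = 2,
    };
    if (sendmsg(tx, &msg, 0) < 0)
        return -1;

    char buf[64];
    ssize_t n = recv(rx, buf, sizeof(buf), 0);  /* one recv == one datagram */
    close(tx);
    close(rx);

    if (n != 10 || memcmp(buf, "helloworld", 10) != 0)
        return -1;
    return (int)n;
}
```

A single <code>recv()</code> on the receiving socket returns the whole 10-byte payload, confirming that the two buffers went out as one datagram.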
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/774r0FpU47qMQ5bbxIOPL5/3560c09b55949e3c406ad958498e7fd4/sendmsg.png" />
            
            </figure><p>The throughput of this particular implementation tops out at around 80-90 MB/s, as measured by h2load when performing 10 sequential requests for a 100 MB resource.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4V8JtNxw1ogFryQ7xBVcgk/74bf3551df4837143a4cb2dbc53e3f84/sendmsg-chart.png" />
            
            </figure>
    <div>
      <h3>sendmmsg()</h3>
      <a href="#sendmmsg">
        
      </a>
    </div>
    <p>Because <code>sendmsg()</code> only sends a single UDP packet at a time, it needs to be invoked quite a lot in order to transmit all of the QUIC packets required to deliver the requested resources, as illustrated by the following bpftrace command:</p>
            <pre><code>% sudo bpftrace -p $(pgrep nginx) -e 'tracepoint:syscalls:sys_enter_sendm* { @[probe] = count(); }'
Attaching 2 probes...
 
 
@[tracepoint:syscalls:sys_enter_sendmsg]: 904539</code></pre>
            <p>Each of those system calls causes an expensive context switch between the application and the kernel, thus impacting throughput.</p><p>But while <code>sendmsg()</code> only transmits a single UDP packet at a time for each invocation, its close cousin <code>sendmmsg()</code> (note the additional “m” in the name) is able to batch multiple packets per system call:</p>
            <pre><code>int sendmmsg(int sockfd, struct mmsghdr *msgvec,
    unsigned int vlen, int flags);</code></pre>
            <p>Multiple <code>struct mmsghdr</code> structures can be passed to the kernel as an array, each in turn carrying a single <code>struct msghdr</code> with its own <code>struct iovec</code>, with each element in the <code>msgvec</code> array representing a single UDP datagram.</p>
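            <p>As a rough sketch of what this batching looks like in practice (again illustrative, not the NGINX code; <code>sendmmsg_demo()</code> is a made-up name), four small datagrams can be handed to the kernel with a single system call:</p>

```c
/* Illustrative sketch: four UDP datagrams submitted with one sendmmsg(). */
#define _GNU_SOURCE             /* sendmmsg() and struct mmsghdr are GNU-only */
#include <arpa/inet.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>
#include <unistd.h>

#define PKTS 4

/* Returns how many of the 4 datagrams the kernel accepted in one syscall. */
int sendmmsg_demo(void) {
    int rx = socket(AF_INET, SOCK_DGRAM, 0);
    int tx = socket(AF_INET, SOCK_DGRAM, 0);

    struct sockaddr_in addr = { .sin_family = AF_INET };
    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
    bind(rx, (struct sockaddr *)&addr, sizeof(addr));
    socklen_t alen = sizeof(addr);
    getsockname(rx, (struct sockaddr *)&addr, &alen);

    static char payload[PKTS][8] = { "pkt0", "pkt1", "pkt2", "pkt3" };
    struct iovec iov[PKTS];
    struct mmsghdr msgs[PKTS];
    memset(msgs, 0, sizeof(msgs));

    /* Each msgvec element is a complete msghdr: one element, one datagram. */
    for (int i = 0; i < PKTS; i++) {
        iov[i].iov_base = payload[i];
        iov[i].iov_len  = 4;
        msgs[i].msg_hdr.msg_name    = &addr;
        msgs[i].msg_hdr.msg_namelen = sizeof(addr);
        msgs[i].msg_hdr.msg_iov     = &iov[i];
        msgs[i].msg_hdr.msg_iovlen  = 1;
    }

    int sent = sendmmsg(tx, msgs, PKTS, 0);   /* one syscall, PKTS datagrams */

    close(tx);
    close(rx);
    return sent;
}
```

<code>sendmmsg()</code> returns the number of messages actually sent, so partial sends need to be handled by a real application; the sketch above just reports that number.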
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5QO4EdUwmU9MJhSCY2wwC1/ac6bc505e3b1f3b4d571910f13905131/sendmmsg.png" />
            
            </figure><p>Let's see what happens when NGINX is updated to use <code>sendmmsg()</code> to send QUIC packets:</p>
            <pre><code>% sudo bpftrace -p $(pgrep nginx) -e 'tracepoint:syscalls:sys_enter_sendm* { @[probe] = count(); }'
Attaching 2 probes...
 
 
@[tracepoint:syscalls:sys_enter_sendmsg]: 2437
@[tracepoint:syscalls:sys_enter_sendmmsg]: 15676</code></pre>
            <p>The number of system calls went down dramatically, which translates into an increase in throughput, though not quite as big as the decrease in syscalls:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6fv3ZaxYLZmteczFchjfiO/e0056c40dacea0422ed53edaf8158869/sendmmsg-chart.png" />
            
            </figure>
    <div>
      <h3>UDP segmentation offload</h3>
      <a href="#udp-segmentation-offload">
        
      </a>
    </div>
    <p>With <code>sendmsg()</code> as well as <code>sendmmsg()</code>, the application is responsible for separating each QUIC packet into its own buffer in order for the kernel to be able to transmit it. The implementation in NGINX uses static buffers, so there is no allocation overhead, but all of these buffers still need to be traversed by the kernel during transmission, which can add significant overhead.</p><p>Linux supports a feature, Generic Segmentation Offload (GSO), which allows the application to pass a single "super buffer" to the kernel, which will then take care of segmenting it into smaller packets. The kernel will try to postpone the segmentation as much as possible to reduce the overhead of traversing outgoing buffers (some NICs even support hardware segmentation, but it was not tested in this experiment due to lack of capable hardware). Originally GSO was only supported for TCP, but support for UDP GSO was recently added as well, in Linux 4.18.</p><p>This feature can be controlled using the <code>UDP_SEGMENT</code> socket option:</p>
            <pre><code>setsockopt(fd, SOL_UDP, UDP_SEGMENT, &amp;gso_size, sizeof(gso_size));</code></pre>
            <p>It can also be controlled via ancillary data, to set segmentation on a per-call basis for each <code>sendmsg()</code> invocation:</p>
            <pre><code>cm = CMSG_FIRSTHDR(&amp;msg);
cm-&gt;cmsg_level = SOL_UDP;
cm-&gt;cmsg_type = UDP_SEGMENT;
cm-&gt;cmsg_len = CMSG_LEN(sizeof(uint16_t));
*((uint16_t *) CMSG_DATA(cm)) = gso_size;</code></pre>
            <p>Here <code>gso_size</code> is the size of each segment that forms the "super buffer" passed to the kernel from the application. Once configured, the application can provide one large contiguous buffer containing a number of packets of <code>gso_size</code> length (as well as a final smaller packet), that will then be segmented by the kernel (or the NIC if hardware segmentation offloading is supported and enabled).</p>
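            <p>Putting the pieces together, the following sketch (illustrative only; <code>gso_demo()</code> is not from NGINX, and it assumes a Linux kernel with UDP GSO support, i.e. 4.18 or later) sends a single 3000-byte "super buffer" with a <code>gso_size</code> of 1200, which the kernel splits into three datagrams of 1200, 1200, and 600 bytes:</p>

```c
/* Illustrative sketch: one sendmsg() with UDP GSO; the kernel segments
 * a 3000-byte buffer into three datagrams. Requires Linux >= 4.18. */
#include <arpa/inet.h>
#include <netinet/udp.h>
#include <stdint.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/time.h>
#include <sys/uio.h>
#include <unistd.h>

/* Fallbacks matching the kernel ABI, in case the libc headers are old. */
#ifndef SOL_UDP
#define SOL_UDP 17
#endif
#ifndef UDP_SEGMENT
#define UDP_SEGMENT 103
#endif

/* Returns the number of datagrams that arrive at the receiver (expect 3). */
int gso_demo(void) {
    int rx = socket(AF_INET, SOCK_DGRAM, 0);
    int tx = socket(AF_INET, SOCK_DGRAM, 0);

    struct sockaddr_in addr = { .sin_family = AF_INET };
    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
    bind(rx, (struct sockaddr *)&addr, sizeof(addr));
    socklen_t alen = sizeof(addr);
    getsockname(rx, (struct sockaddr *)&addr, &alen);

    struct timeval tv = { .tv_usec = 200000 };  /* stop draining after 200 ms */
    setsockopt(rx, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof(tv));

    static char super_buf[3000];                /* 1200 + 1200 + 600 */
    struct iovec iov = { .iov_base = super_buf, .iov_len = sizeof(super_buf) };

    uint16_t gso_size = 1200;
    union {                                     /* aligned cmsg buffer */
        char buf[CMSG_SPACE(sizeof(uint16_t))];
        struct cmsghdr align;
    } ctrl = { { 0 } };

    struct msghdr msg = {
        .msg_name       = &addr,
        .msg_namelen    = sizeof(addr),
        .msg_iov        = &iov,
        .msg_iovlen     = 1,
        .msg_control    = ctrl.buf,
        .msg_controllen = sizeof(ctrl.buf),
    };

    struct cmsghdr *cm = CMSG_FIRSTHDR(&msg);
    cm->cmsg_level = SOL_UDP;
    cm->cmsg_type  = UDP_SEGMENT;
    cm->cmsg_len   = CMSG_LEN(sizeof(gso_size));
    memcpy(CMSG_DATA(cm), &gso_size, sizeof(gso_size));

    if (sendmsg(tx, &msg, 0) < 0)
        return -1;

    int count = 0;
    char buf[2048];
    while (recv(rx, buf, sizeof(buf), 0) > 0)   /* drain until timeout */
        count++;

    close(tx);
    close(rx);
    return count;
}
```

The receiver sees three separate datagrams even though the application issued a single system call with a single buffer.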
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3vQo11I0RupCQ4msUqj0Ve/dcec6ac6c0bea7c737aa9fa822e69d0a/sendmsg-gso.png" />
            
            </figure><p><a href="https://github.com/torvalds/linux/blob/80a0c2e511a97e11d82e0ec11564e2c3fe624b0d/include/linux/udp.h#L94">Up to 64 segments</a> can be batched with the <code>UDP_SEGMENT</code> option.</p><p>GSO with plain <code>sendmsg()</code> already delivers a significant improvement:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4q2OxEsgZcsw2JXc8JcAfk/a64c9b48cad41378122e7d7c5a88e67a/gso-chart.png" />
            
            </figure><p>And indeed the number of syscalls also went down significantly, compared to plain <code>sendmsg()</code>:</p>
            <pre><code>% sudo bpftrace -p $(pgrep nginx) -e 'tracepoint:syscalls:sys_enter_sendm* { @[probe] = count(); }'
Attaching 2 probes...
 
 
@[tracepoint:syscalls:sys_enter_sendmsg]: 18824</code></pre>
            <p>GSO can also be combined with <code>sendmmsg()</code> to deliver an even bigger improvement. The idea is that each <code>struct msghdr</code> can be segmented in the kernel by setting the <code>UDP_SEGMENT</code> option using ancillary data, allowing an application to pass multiple “super buffers”, each carrying up to 64 segments, to the kernel in a single system call.</p><p>The improvement is again fairly significant.</p>
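            <p>A sketch of this combination (hypothetical code, again assuming Linux 4.18+ UDP GSO support): two "super buffers", each carrying its own <code>UDP_SEGMENT</code> ancillary data, handed to the kernel in a single <code>sendmmsg()</code> call:</p>

```c
/* Illustrative sketch: sendmmsg() + UDP GSO. Two 2400-byte super buffers,
 * each segmented into two 1200-byte datagrams, submitted in one syscall. */
#define _GNU_SOURCE             /* sendmmsg() and struct mmsghdr are GNU-only */
#include <arpa/inet.h>
#include <netinet/udp.h>
#include <stdint.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/time.h>
#include <sys/uio.h>
#include <unistd.h>

#ifndef SOL_UDP
#define SOL_UDP 17
#endif
#ifndef UDP_SEGMENT
#define UDP_SEGMENT 103
#endif

#define BUFS 2

/* Returns the total number of datagrams the receiver sees (expect 4). */
int gso_sendmmsg_demo(void) {
    int rx = socket(AF_INET, SOCK_DGRAM, 0);
    int tx = socket(AF_INET, SOCK_DGRAM, 0);

    struct sockaddr_in addr = { .sin_family = AF_INET };
    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
    bind(rx, (struct sockaddr *)&addr, sizeof(addr));
    socklen_t alen = sizeof(addr);
    getsockname(rx, (struct sockaddr *)&addr, &alen);

    struct timeval tv = { .tv_usec = 200000 };  /* stop draining after 200 ms */
    setsockopt(rx, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof(tv));

    static char super_buf[BUFS][2400];          /* each = 2 x 1200 segments */
    struct iovec iov[BUFS];
    struct mmsghdr msgs[BUFS];
    uint16_t gso_size = 1200;
    union {                                     /* aligned per-message cmsg */
        char buf[CMSG_SPACE(sizeof(uint16_t))];
        struct cmsghdr align;
    } ctrl[BUFS];

    memset(msgs, 0, sizeof(msgs));
    memset(ctrl, 0, sizeof(ctrl));

    for (int i = 0; i < BUFS; i++) {
        iov[i].iov_base = super_buf[i];
        iov[i].iov_len  = sizeof(super_buf[i]);
        msgs[i].msg_hdr.msg_name       = &addr;
        msgs[i].msg_hdr.msg_namelen    = sizeof(addr);
        msgs[i].msg_hdr.msg_iov        = &iov[i];
        msgs[i].msg_hdr.msg_iovlen     = 1;
        msgs[i].msg_hdr.msg_control    = ctrl[i].buf;
        msgs[i].msg_hdr.msg_controllen = sizeof(ctrl[i].buf);

        struct cmsghdr *cm = CMSG_FIRSTHDR(&msgs[i].msg_hdr);
        cm->cmsg_level = SOL_UDP;
        cm->cmsg_type  = UDP_SEGMENT;
        cm->cmsg_len   = CMSG_LEN(sizeof(gso_size));
        memcpy(CMSG_DATA(cm), &gso_size, sizeof(gso_size));
    }

    if (sendmmsg(tx, msgs, BUFS, 0) != BUFS)    /* one syscall for both */
        return -1;

    int count = 0;
    char buf[2048];
    while (recv(rx, buf, sizeof(buf), 0) > 0)   /* drain until timeout */
        count++;

    close(tx);
    close(rx);
    return count;
}
```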
    <div>
      <h3>Evolving from AFAP</h3>
      <a href="#evolving-from-afap">
        
      </a>
    </div>
    <p>Transmitting packets as fast as possible is easy to reason about, and there's much fun to be had in optimizing applications for that, but in practice this is not always the best strategy when optimizing protocols for the Internet.</p><p>Bursty traffic is more likely to cause or be affected by congestion on any given network path, which will inevitably defeat any optimization implemented to increase transmission rates.</p><p>Packet pacing is an effective technique to squeeze out more performance from a network flow. The idea is that adding a short delay between each outgoing packet will smooth out bursty traffic and reduce the chance of congestion and packet loss. For TCP this was originally implemented in Linux via the fq packet scheduler, and later by the BBR congestion control algorithm implementation, which implements its own pacer.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1PkaZcKDkzzjUDhFLRT1jw/a4247010827f1763bd7560894e30938e/afap.png" />
            
            </figure><p>Due to the nature of current QUIC implementations, which reside entirely in user-space, pacing of QUIC packets conflicts with any of the techniques explored in this post, because pacing each packet separately during transmission will prevent any batching on the application side, and in turn batching will prevent pacing, as batched packets will be transmitted as fast as possible once received by the kernel.</p><p>However, Linux provides some facilities to offload the pacing to the kernel and give back some control to the application:</p><ul><li><p><b>SO_MAX_PACING_RATE</b>: an application can define this socket option to instruct the fq packet scheduler to pace outgoing packets up to the given rate. This works for UDP sockets as well, but it is yet to be seen how this can be integrated with QUIC, as a single UDP socket can be used for multiple QUIC connections (unlike TCP, where each connection has its own socket). In addition, this is not very flexible, and might not be ideal when implementing the BBR pacer.</p></li><li><p><b>SO_TXTIME / SCM_TXTIME</b>: an application can use these options to schedule transmission of specific packets at specific times, essentially instructing fq to delay packets until the provided timestamp is reached. This gives the application a lot more control, and can be easily integrated into <code>sendmsg()</code> as well as <code>sendmmsg()</code>. But it does not yet support specifying different times for each packet when GSO is used, as there is no way to define multiple timestamps for packets that need to be segmented (each segmented packet essentially ends up being sent at the same time anyway).</p></li></ul><p>While the performance gains achieved by using the techniques illustrated here are fairly significant, there are still open questions around how any of this will work with pacing, so more experimentation is required.</p> ]]></content:encoded>
            <category><![CDATA[QUIC]]></category>
            <category><![CDATA[UDP]]></category>
            <category><![CDATA[TCP]]></category>
            <category><![CDATA[HTTP3]]></category>
            <guid isPermaLink="false">3pwKBhG2s8cT4COiXLHTyT</guid>
            <dc:creator>Alessandro Ghedini</dc:creator>
        </item>
        <item>
            <title><![CDATA[Even faster connection establishment with QUIC 0-RTT resumption]]></title>
            <link>https://blog.cloudflare.com/even-faster-connection-establishment-with-quic-0-rtt-resumption/</link>
            <pubDate>Wed, 20 Nov 2019 16:30:00 GMT</pubDate>
            <description><![CDATA[ One of the more interesting features introduced by TLS 1.3, the latest revision of the TLS protocol, was the so-called “zero roundtrip time connection resumption”, a mode of operation that allows a client to start sending application data, such as HTTP requests ]]></description>
            <content:encoded><![CDATA[ <p>One of the more interesting features introduced by <a href="/rfc-8446-aka-tls-1-3/">TLS 1.3</a>, the latest revision of the TLS protocol, was the so-called “zero <a href="https://www.cloudflare.com/learning/cdn/glossary/round-trip-time-rtt/">roundtrip time</a> connection resumption”, a mode of operation that allows a client to start sending application data, such as HTTP requests, without having to wait for the TLS handshake to complete, thus reducing the latency penalty incurred in establishing a new connection.</p><p>The basic idea behind 0-RTT connection resumption is that if the client and server had previously established a TLS connection between each other, they can use information cached from that session to establish a new one without having to negotiate the connection’s parameters from scratch. Notably this allows the client to compute the private encryption keys required to protect application data before even talking to the server.</p><p>However, in the case of TLS, “zero roundtrip” only refers to the TLS handshake itself: the client and server are still required to first establish a TCP connection in order to be able to exchange TLS data.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7ni3h0ZnlDFEvNicARvJmV/061a4032d80cbf5c030feeecf890fcf9/HTTP-request-over-TCP-_3x.png" />
            
            </figure>
    <div>
      <h3>Zero means zero</h3>
      <a href="#zero-means-zero">
        
      </a>
    </div>
    <p><a href="/the-road-to-quic/">QUIC</a> goes a step further, and allows clients to send application data in the very first roundtrip of the connection, without requiring any other handshake to be completed beforehand.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5DCnCIKcCkYFvfHbx66Sxg/c620d9e4586b0a3d461ba4f9cf294e66/request-over-quic-0-RTT_3x.png" />
            
            </figure><p>After all, QUIC already <a href="/the-road-to-quic/#builtinsecurityandperformance">shaved a full round-trip off of a typical connection’s handshake</a> by merging the transport and cryptographic handshakes into one. By reducing the handshake by an additional roundtrip, QUIC achieves real 0-RTT connection establishment.</p><p>It literally can’t get any faster!</p>
    <div>
      <h3>Attack of the clones</h3>
      <a href="#attack-of-the-clones">
        
      </a>
    </div>
    <p>Unfortunately, 0-RTT connection resumption is not all smooth sailing, and it comes with caveats and risks, which is why <b>Cloudflare does not enable 0-RTT connection resumption by default</b>. Users should consider the risks involved and decide whether to use this feature or not.</p><p>For starters, 0-RTT connection resumption does not provide forward secrecy, meaning that a compromise of the secret parameters of a connection will trivially allow compromising the application data sent during the 0-RTT phase of new connections resumed from it. Data sent after the 0-RTT phase, meaning after the handshake has been completed, would still be safe though, as TLS 1.3 (and QUIC) will still perform the normal key exchange algorithm (which is forward secret) for data sent after the handshake completion.</p><p>More worryingly, application data sent during 0-RTT can be captured by an on-path attacker and then replayed multiple times to the same server. In many cases this is not a problem, as the attacker wouldn’t be able to decrypt the data, which is why 0-RTT connection resumption is useful, but in some cases this can be dangerous.</p><p>For example, imagine a bank that allows an authenticated user (e.g. using HTTP cookies, or other HTTP authentication mechanisms) to send money from their account to another user by making an HTTP request to a specific API endpoint. If an attacker was able to capture that request when 0-RTT connection resumption was used, they wouldn’t be able to see the plaintext and get the user’s credentials, because they wouldn’t know the secret key used to encrypt the data; however they could still potentially drain that user’s bank account by replaying the same request over and over:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5z9QMHK9VzFVOtVATVvE7B/b63202a2de73e6517ad493639b7d0ed2/Bank-API-replay-attack-_3x.png" />
            
            </figure><p>Of course this problem is not specific to banking APIs: any non-<a href="https://en.wikipedia.org/wiki/Idempotence">idempotent</a> request has the potential to cause undesired side effects, ranging from slight malfunctions to serious security breaches.</p><p>In order to help mitigate this risk, Cloudflare will always reject 0-RTT requests that are obviously not idempotent (like POST or PUT requests), but in the end it’s up to the application sitting behind Cloudflare to decide which requests can and cannot be allowed with 0-RTT connection resumption, as even innocuous-looking ones can have side effects on the origin server.</p><p>To help origins detect and potentially disallow specific requests, Cloudflare also follows the techniques described in <a href="https://tools.ietf.org/html/rfc8470">RFC8470</a>. Notably, Cloudflare will add the <code>Early-Data: 1</code> HTTP header to requests received during 0-RTT resumption that are forwarded to origins.</p><p>Origins able to understand this header can then decide to answer the request with the <a href="https://tools.ietf.org/html/rfc8470#section-5.2">425 (Too Early)</a> HTTP status code, which will instruct the client that originated the request to retry sending the same request but only after the TLS or QUIC handshake has fully completed, at which point there is no longer any risk of replay attacks. This could even be implemented as part of a <a href="https://workers.cloudflare.com/">Cloudflare Worker</a>.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5HFpNn8buQqXN5RiCVMgfD/5b6adf72a5d14f3a7772b4ae197e6950/425-too-early-_3x.png" />
            
            </figure><p>This makes it possible for origins to allow 0-RTT requests for endpoints that are safe, such as a website’s index page, which is where 0-RTT is most useful, as that is typically the first request a browser makes after establishing a connection, while still protecting other endpoints such as APIs and form submissions. But if an origin does not provide any of those non-idempotent endpoints, no action is required.</p>
    <div>
      <h3>One stop shop for all your 0-RTT needs</h3>
      <a href="#one-stop-shop-for-all-your-0-rtt-needs">
        
      </a>
    </div>
    <p>Just like we previously did for TLS 1.3, we now support 0-RTT resumption for QUIC as well. In honor of this event, we have dusted off the user-interface controls that allow Cloudflare users to enable this feature for their websites, and introduced a dedicated toggle to control whether 0-RTT connection resumption is enabled or not, which can be found under the “Network” tab on the Cloudflare dashboard:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6sYNvqFyRCrc88Esrkm3A/971767abe13746b70754b8b17f1dad18/2019-11-07-133312_3087x508_scrot.png" />
            
            </figure><p>When TLS 1.3 and/or QUIC (via the HTTP/3 toggle) are enabled, 0-RTT connection resumption will be automatically offered to clients that support it, and the replay mitigation mentioned above will also be applied to the connections making use of this feature.</p><p>In addition, if you are a user of our <a href="/experiment-with-http-3-using-nginx-and-quiche/">open-source HTTP/3 patch for NGINX</a>, after updating the patch to the latest version, you’ll be able to enable support for 0-RTT connection resumption in your own NGINX-based <a href="https://www.cloudflare.com/learning/performance/what-is-http3/">HTTP/3</a> deployment by using the built-in <a href="https://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_early_data">“ssl_early_data” option</a>, which will work for both TLS 1.3 and QUIC+HTTP/3.</p> ]]></content:encoded>
            <category><![CDATA[QUIC]]></category>
            <category><![CDATA[TLS]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[Speed & Reliability]]></category>
            <guid isPermaLink="false">3ljtGm0iifxH7GrGX4FSRN</guid>
            <dc:creator>Alessandro Ghedini</dc:creator>
        </item>
        <item>
            <title><![CDATA[Experiment with HTTP/3 using NGINX and quiche]]></title>
            <link>https://blog.cloudflare.com/experiment-with-http-3-using-nginx-and-quiche/</link>
            <pubDate>Thu, 17 Oct 2019 14:00:00 GMT</pubDate>
            <description><![CDATA[ Just a few weeks ago we announced the availability on our edge network of HTTP/3, the new revision of HTTP intended to improve security and performance on the Internet. Everyone can now enable HTTP/3 on their Cloudflare zone ]]></description>
            <content:encoded><![CDATA[ <p>Just a few weeks ago <a href="/http3-the-past-present-and-future/">we announced</a> the availability on our edge network of <a href="https://www.cloudflare.com/learning/performance/what-is-http3/">HTTP/3</a>, the new revision of HTTP intended to improve security and performance on the Internet. Everyone can now enable HTTP/3 on their Cloudflare zone and experiment with it using <a href="/http3-the-past-present-and-future/#using-google-chrome-as-an-http-3-client">Chrome Canary</a> as well as <a href="/http3-the-past-present-and-future/#using-curl">curl</a>, among other clients.</p><p>We have previously made available <a href="https://github.com/cloudflare/quiche/blob/master/examples/http3-server.rs">an example HTTP/3 server as part of the quiche project</a> to allow people to experiment with the protocol, but it’s quite limited in the functionality that it offers, and was never intended to replace other general-purpose web servers.</p><p>We are now happy to announce that <a href="/enjoy-a-slice-of-quic-and-rust/">our implementation of HTTP/3 and QUIC</a> can be integrated into your own installation of NGINX as well. This is made available <a href="https://github.com/cloudflare/quiche/tree/master/extras/nginx">as a patch</a> to NGINX, which can be applied to the upstream NGINX codebase and built directly.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/RGn7FpVUT1wQ1v5yu74c3/d6db309a2e2d99da184b3bbb123f3fb5/quiche-banner-copy_2x.png" />
            
            </figure><p>It’s important to note that <b>this is not officially supported or endorsed by the NGINX project</b>; it is just something that we, Cloudflare, want to make available to the wider community to help push adoption of QUIC and HTTP/3.</p>
    <div>
      <h3>Building</h3>
      <a href="#building">
        
      </a>
    </div>
    <p>The first step is to <a href="https://nginx.org/en/download.html">download and unpack the NGINX source code</a>. Note that the HTTP/3 and QUIC patch only works with the 1.16.x release branch (the latest stable release being 1.16.1).</p>
            <pre><code> % curl -O https://nginx.org/download/nginx-1.16.1.tar.gz
 % tar xvzf nginx-1.16.1.tar.gz</code></pre>
            <p>As well as quiche, the underlying implementation of HTTP/3 and QUIC:</p>
            <pre><code> % git clone --recursive https://github.com/cloudflare/quiche</code></pre>
            <p>Next you’ll need to apply the patch to NGINX:</p>
            <pre><code> % cd nginx-1.16.1
 % patch -p01 &lt; ../quiche/extras/nginx/nginx-1.16.patch</code></pre>
            <p>And finally build NGINX with HTTP/3 support enabled:</p>
            <pre><code> % ./configure                              \
       --prefix=$PWD                            \
       --with-http_ssl_module                   \
       --with-http_v2_module                    \
       --with-http_v3_module                    \
       --with-openssl=../quiche/deps/boringssl  \
       --with-quiche=../quiche
 % make</code></pre>
            <p>The above command instructs the NGINX build system to enable the HTTP/3 support (<code>--with-http_v3_module</code>) using the quiche library found in the path it was previously downloaded into (<code>--with-quiche=../quiche</code>), as well as TLS and HTTP/2. Additional build options can be added as needed.</p><p>You can check out the full instructions <a href="https://github.com/cloudflare/quiche/tree/master/extras/nginx#readme">here</a>.</p>
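            <p>Once <code>make</code> finishes, you can sanity-check that the patch took effect by asking the freshly built binary (located in <code>objs/</code> when building in-tree, as above) to print its configure arguments:</p>
            <pre><code> % ./objs/nginx -V</code></pre>
            <p>The output should list <code>--with-http_v3_module</code> and <code>--with-quiche=../quiche</code> among the configure arguments.</p>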
    <div>
      <h3>Running</h3>
      <a href="#running">
        
      </a>
    </div>
    <p>Once built, NGINX can be configured to accept incoming HTTP/3 connections by adding the <code>quic</code> and <code>reuseport</code> options to the <a href="https://nginx.org/en/docs/http/ngx_http_core_module.html#listen">listen</a> configuration directive.</p><p>Here is a minimal configuration example that you can start from:</p>
            <pre><code>events {
    worker_connections  1024;
}

http {
    server {
        # Enable QUIC and HTTP/3.
        listen 443 quic reuseport;

        # Enable HTTP/2 (optional).
        listen 443 ssl http2;

        ssl_certificate      cert.crt;
        ssl_certificate_key  cert.key;

        # Enable modern TLS versions (TLSv1.3 is required for QUIC).
        ssl_protocols TLSv1.2 TLSv1.3;
        
        # Add Alt-Svc header to negotiate HTTP/3.
        add_header alt-svc 'h3-23=":443"; ma=86400';
    }
}</code></pre>
            <p>This will enable both HTTP/2 and HTTP/3, on TCP port 443 and UDP port 443 respectively.</p><p>You can then use one of the available HTTP/3 clients (such as <a href="/http3-the-past-present-and-future/#using-google-chrome-as-an-http-3-client">Chrome Canary</a>, <a href="/http3-the-past-present-and-future/#using-curl">curl</a>, or even the <a href="/http3-the-past-present-and-future/#using-quiche-s-http3-client">example HTTP/3 client provided as part of quiche</a>) to connect to your NGINX instance using HTTP/3.</p><p>We are excited to make this available so that everyone can experiment and play with HTTP/3, but it’s important to note that <b>the implementation is still experimental</b> and it’s likely to have bugs as well as limitations in functionality. Feel free to submit a ticket to the <a href="https://github.com/cloudflare/quiche">quiche project</a> if you run into problems or find any bugs.</p> ]]></content:encoded>
            <category><![CDATA[NGINX]]></category>
            <category><![CDATA[QUIC]]></category>
            <category><![CDATA[Chrome]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[HTTP3]]></category>
            <guid isPermaLink="false">2M0hyPXVNiYWjSUGQRypv2</guid>
            <dc:creator>Alessandro Ghedini</dc:creator>
        </item>
        <item>
            <title><![CDATA[HTTP/3: the past, the present, and the future]]></title>
            <link>https://blog.cloudflare.com/http3-the-past-present-and-future/</link>
            <pubDate>Thu, 26 Sep 2019 13:00:00 GMT</pubDate>
            <description><![CDATA[ We are now happy to announce that QUIC and HTTP/3 support is available on the Cloudflare edge network. We’re excited to be joined in this announcement by Google Chrome and Mozilla Firefox, two of the leading browser vendors and partners in our effort to make the web faster and more reliable for all. ]]></description>
            <content:encoded><![CDATA[ <p>During last year’s Birthday Week <a href="/the-quicening/">we announced preliminary support for QUIC and HTTP/3</a> (or “HTTP over QUIC” as it was known back then), the new standard for the web, enabling faster, more reliable, and more secure connections to web endpoints like websites and <a href="https://www.cloudflare.com/learning/security/api/what-is-an-api/">APIs</a>. We also let our customers join a waiting list to try QUIC and <a href="https://www.cloudflare.com/learning/performance/what-is-http3/">HTTP/3</a> as soon as they became available.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5vl1aHmtQdfvJSyQEDaHJu/1f07700e6de58e6928debfc3e502fb6a/http3-tube_2x.png" />
            
            </figure><p>Since then, we’ve been working with industry peers through the <a href="https://ietf.org/">Internet Engineering Task Force</a>, including Google Chrome and Mozilla Firefox, to iterate on the HTTP/3 and QUIC standards documents. In parallel with the standards maturing, we’ve also worked on <a href="/enjoy-a-slice-of-quic-and-rust/">improving support</a> on our network.</p><p><b>We are now happy to announce that QUIC and HTTP/3 support is available on the Cloudflare edge network.</b> We’re excited to be joined in this announcement by Google Chrome and Mozilla Firefox, two of the leading browser vendors and partners in our effort to make the web faster and more reliable for all.</p><p>In the words of Ryan Hamilton, Staff Software Engineer at Google, “HTTP/3 should make the web better for everyone. The Chrome and Cloudflare teams have worked together closely to bring HTTP/3 and QUIC from nascent standards to widely adopted technologies for improving the web. Strong partnership between industry leaders is what makes Internet standards innovations possible, and we look forward to our continued work together.”</p><p>What does this mean for you, a Cloudflare customer who uses our services and edge network to make your web presence faster and more secure? Once HTTP/3 support is <a href="#how-do-i-enable-http-3-for-my-domain">enabled for your domain in the Cloudflare dashboard</a>, your customers can interact with your websites and APIs using HTTP/3. We’ve been steadily inviting customers on our HTTP/3 waiting list to turn on the feature (so keep an eye out for an email from us), and in the coming weeks we’ll make the feature available to everyone.</p><p>What does this announcement mean if you’re a user of the Internet interacting with sites and APIs through a browser and other clients? Starting today, you can <a href="#using-google-chrome-as-an-http-3-client">use Chrome Canary</a> to interact with Cloudflare and other servers over HTTP/3. 
For those of you looking for a command line client, <a href="#using-curl">curl also provides support for HTTP/3</a>. Instructions for using Chrome and curl with HTTP/3 follow later in this post.</p>
    <div>
      <h2>The Chicken and the Egg</h2>
      <a href="#the-chicken-and-the-egg">
        
      </a>
    </div>
    <p>Standards innovation on the Internet has historically been difficult because of a chicken and egg problem: which needs to come first, server support (like Cloudflare, or other large sources of response data) or client support (like browsers, operating systems, etc)? Both sides of a connection need to support a new communications protocol for it to be any use at all.</p><p>Cloudflare has a long history of driving web standards forward, from <a href="/introducing-http2/">HTTP/2</a> (the version of HTTP preceding HTTP/3), to <a href="https://www.cloudflare.com/learning/ssl/why-use-tls-1.3/">TLS 1.3</a>, to things like <a href="https://www.cloudflare.com/learning/ssl/what-is-encrypted-sni/">encrypted SNI</a>. We’ve pushed standards forward by partnering with like-minded organizations who share in our desire to help build a better Internet. Our efforts to move HTTP/3 into the mainstream are no different.</p><p>Throughout the HTTP/3 standards development process, we’ve been working closely with industry partners to build and validate client HTTP/3 support compatible with our edge support. We’re thrilled to be joined by Google Chrome and curl, both of which can be used today to make requests to the Cloudflare edge over HTTP/3. Mozilla Firefox expects to ship support in a nightly release soon as well.</p><p>Bringing this all together: today is a good day for Internet users; widespread rollout of HTTP/3 will mean a faster web experience for all, and today’s support is a large step toward that.</p><p>More importantly, today is a good day for the Internet: Chrome, curl, and Cloudflare (and soon Mozilla) rolling out experimental but functional support for HTTP/3 in quick succession shows that the Internet standards creation process works. 
Coordinated by the Internet Engineering Task Force, industry partners, competitors, and other key stakeholders can come together to craft standards that benefit the entire Internet, not just the behemoths.</p><p>Eric Rescorla, CTO of Firefox, summed it up nicely: “Developing a new network protocol is hard, and getting it right requires everyone to work together. Over the past few years, we've been working with Cloudflare and other industry partners to test TLS 1.3 and now HTTP/3 and QUIC. Cloudflare's early server-side support for these protocols has helped us work the interoperability kinks out of our client-side Firefox implementation. We look forward to advancing the security and performance of the Internet together.”</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6GH3p4lKpUIwOiWKDsgamN/6ab9009925489ca8a06ff935108401cc/HTTP3-partnership_2x-1.png" />
            
            </figure>
    <div>
      <h2>How did we get here?</h2>
      <a href="#how-did-we-get-here">
        
      </a>
    </div>
    <p>Before we dive deeper into HTTP/3, let’s have a quick look at the <a href="/http-3-from-root-to-tip/">evolution of HTTP over the years</a> in order to better understand why HTTP/3 is needed.</p><p>It all started back in 1996 with the publication of the <a href="https://tools.ietf.org/html/rfc1945">HTTP/1.0 specification</a> which defined the basic HTTP textual wire format as we know it today (for the purposes of this post I’m pretending HTTP/0.9 never existed). In HTTP/1.0 a new TCP connection is created for each request/response exchange between clients and servers, meaning that all requests incur a latency penalty as the TCP and <a href="https://www.cloudflare.com/learning/ssl/what-happens-in-a-tls-handshake/">TLS handshakes</a> are completed before each request.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/oo0toRnPpU4dLMrkXUBk7/cc8d46c65edd60f5d9fc18c282000b97/http-request-over-tcp-tls_2x.png" />
            
            </figure><p>Worse still, rather than sending all outstanding data as fast as possible once the connection is established, TCP enforces a warm-up period called “slow start”, which allows the TCP congestion control algorithm to determine the amount of data that can be in flight at any given moment before congestion on the network path occurs, and avoid flooding the network with packets it can’t handle. But because new connections have to go through the slow start process, they can’t use all of the network bandwidth available immediately.</p><p>The <a href="https://tools.ietf.org/html/rfc2616">HTTP/1.1 revision of the HTTP specification</a> tried to solve these problems a few years later by introducing the concept of “keep-alive” connections, that allow clients to reuse TCP connections, and thus amortize the cost of the initial connection establishment and slow start across multiple requests. But this was no silver bullet: while multiple requests could share the same connection, they still had to be serialized one after the other, so a client and server could only execute a single request/response exchange at any given time for each connection.</p><p>As the web evolved, browsers found themselves needing more and more concurrency when fetching and rendering web pages as the number of resources (CSS, JavaScript, images, …) required by each web site increased over the years. But since HTTP/1.1 only allowed clients to do one HTTP request/response exchange at a time, the only way to gain concurrency at the network layer was to use multiple TCP connections to the same origin in parallel, thus losing most of the benefits of keep-alive connections. 
While connections would still be reused to a certain (but lesser) extent, we were back at square one.</p><p>Finally, more than a decade later, came SPDY and then <a href="https://tools.ietf.org/html/rfc7540">HTTP/2</a>, which, among other things, introduced the concept of HTTP “streams”: an abstraction that allows HTTP implementations to concurrently multiplex different HTTP exchanges onto the same TCP connection, allowing browsers to more efficiently reuse TCP connections.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/gBysPThlyWyj8a339vwWI/c616564cc352fccac4eb7e1977ddfe28/Screen-Shot-2019-09-25-at-7.43.01-PM.png" />
            
            </figure><p>But, yet again, this was no silver bullet! HTTP/2 solves the original problem — inefficient use of a single TCP connection — since multiple requests/responses can now be transmitted over the same connection at the same time. However, all requests and responses are equally affected by packet loss (e.g. due to network congestion), even if the data that is lost only concerns a single request. This is because while the HTTP/2 layer can segregate different HTTP exchanges on separate streams, TCP has no knowledge of this abstraction, and all it sees is a stream of bytes with no particular meaning.</p><p>The role of TCP is to deliver the entire stream of bytes, in the correct order, from one endpoint to the other. When a TCP packet carrying some of those bytes is lost on the network path, it creates a gap in the stream and TCP needs to fill it by resending the affected packet when the loss is detected. While doing so, none of the successfully delivered bytes that follow the lost ones can be delivered to the application, even if they were not themselves lost and belong to a completely independent HTTP request. So they end up getting unnecessarily delayed as TCP cannot know whether the application would be able to process them without the missing bits. This problem is known as “head-of-line blocking”.</p>
    <div>
      <h2>Enter HTTP/3</h2>
      <a href="#enter-http-3">
        
      </a>
    </div>
    <p>This is where HTTP/3 comes into play: instead of using TCP as the transport layer for the session, it uses <a href="/the-road-to-quic/">QUIC, a new Internet transport protocol</a>, which, among other things, introduces streams as first-class citizens at the transport layer. QUIC streams share the same QUIC connection, so no additional handshakes and slow starts are required to create new ones, but QUIC streams are delivered independently such that in most cases packet loss affecting one stream doesn't affect others. This is possible because QUIC packets are encapsulated on top of <a href="https://www.cloudflare.com/learning/ddos/glossary/user-datagram-protocol-udp/">UDP datagrams</a>.</p><p>Using UDP allows much more flexibility compared to TCP, and enables QUIC implementations to live fully in user-space — updates to the protocol’s implementations are not tied to operating systems updates as is the case with TCP. With QUIC, HTTP-level streams can be simply mapped on top of QUIC streams to get all the benefits of HTTP/2 without the head-of-line blocking.</p><p>QUIC also combines the typical 3-way TCP handshake with <a href="/rfc-8446-aka-tls-1-3/">TLS 1.3</a>'s handshake. Combining these steps means that encryption and authentication are provided by default, and also enables faster connection establishment. In other words, even when a new QUIC connection is required for the initial request in an HTTP session, the latency incurred before data starts flowing is lower than that of TCP with TLS.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1fHLYlTE6rQeewwbb11hVH/1e56b0a3ad747f02222b96ebac3d37a3/http-request-over-quic_2x.png" />
            
            </figure><p>But why not just use HTTP/2 on top of QUIC, instead of creating a whole new HTTP revision? After all, HTTP/2 also offers the stream multiplexing feature. As it turns out, it’s somewhat more complicated than that.</p><p>While it’s true that some of the HTTP/2 features can be mapped on top of QUIC very easily, that’s not true for all of them. One in particular, <a href="/hpack-the-silent-killer-feature-of-http-2/">HTTP/2’s header compression scheme called HPACK</a>, heavily depends on the order in which different HTTP requests and responses are delivered to the endpoints. QUIC enforces delivery order of bytes within single streams, but does not guarantee ordering among different streams.</p><p>This behavior required the creation of a new HTTP header compression scheme, called QPACK, which fixes the problem but requires changes to the HTTP mapping. In addition, some of the features offered by HTTP/2 (like per-stream flow control) are already offered by QUIC itself, so they were dropped from HTTP/3 in order to remove unnecessary complexity from the protocol.</p>
    <div>
      <h2>HTTP/3, powered by a delicious quiche</h2>
      <a href="#http-3-powered-by-a-delicious-quiche">
        
      </a>
    </div>
    <p>QUIC and HTTP/3 are very exciting standards, promising to address many of the shortcomings of previous standards and ushering in a new era of performance on the web. So how do we go from exciting standards documents to working implementation?</p><p>Cloudflare's QUIC and HTTP/3 support is powered by quiche, <a href="/enjoy-a-slice-of-quic-and-rust/">our own open-source implementation written in Rust</a>.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3SjyP0JlJLJAUAGnLJmLm/07b8ae667df06953b1ee5e16014ecf3f/Screen-Shot-2019-09-25-at-7.39.59-PM.png" />
            
            </figure><p>You can find it on GitHub at <a href="https://github.com/cloudflare/quiche">github.com/cloudflare/quiche</a>.</p><p>We announced quiche a few months ago and since then have added support for the HTTP/3 protocol, on top of the existing QUIC support. We have designed quiche in such a way that it can now be used to implement HTTP/3 clients and servers or just plain QUIC ones.</p>
    <div>
      <h2>How do I enable HTTP/3 for my domain?</h2>
      <a href="#how-do-i-enable-http-3-for-my-domain">
        
      </a>
    </div>
    <p>As mentioned above, we have started onboarding customers who signed up for the waiting list. If you are on the waiting list and have received an email from us communicating that you can now enable the feature for your websites, you can simply go to the <a href="https://dash.cloudflare.com/?to=/:account/:zone/network">Cloudflare dashboard</a> and flip the switch from the "Network" tab manually:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6DADHjrJrzR3HCwKXT8G9m/7cefedde11594b83a23845c92359be0f/http3-toggle-1.png" />
            
            </figure><p>We expect to make the HTTP/3 feature available to all customers in the near future.</p><p>Once enabled, you can experiment with HTTP/3 in a number of ways:</p>
    <div>
      <h3>Using Google Chrome as an HTTP/3 client</h3>
      <a href="#using-google-chrome-as-an-http-3-client">
        
      </a>
    </div>
    <p>In order to use the Chrome browser to connect to your website over HTTP/3, you first need to download and install the <a href="https://www.google.com/chrome/canary/">latest Canary build</a>. Then all you need to do to enable HTTP/3 support is to start Chrome Canary with the “--enable-quic” and “--quic-version=h3-23” <a href="https://www.chromium.org/developers/how-tos/run-chromium-with-flags">command-line arguments</a>.</p><p>Once Chrome is started with the required arguments, you can just type your domain in the address bar, and see it loaded over HTTP/3 (you can use the Network tab in Chrome’s Developer Tools to check what protocol version was used). Note that due to how HTTP/3 is negotiated between the browser and the server, HTTP/3 might not be used for the first few connections to the domain, so you should try to reload the page a few times.</p><p>If this seems too complicated, don’t worry: as the HTTP/3 support in Chrome becomes more stable over time, enabling HTTP/3 will also become easier.</p><p>This is what the Network tab in the Developer Tools shows when browsing this very blog over HTTP/3:</p>
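            <p>For example, on macOS the invocation might look like this (the application path is an assumption and may differ depending on your platform and where Canary is installed):</p>
            <pre><code> % /Applications/Google\ Chrome\ Canary.app/Contents/MacOS/Google\ Chrome\ Canary \
     --enable-quic --quic-version=h3-23</code></pre>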
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5doD9EStpvkaCUGlV8iCyx/7615f6b8f52b126c7ac12028c2891444/Screen-Shot-2019-09-20-at-1.27.34-PM.png" />
            
            </figure><p>Note that due to the experimental nature of the HTTP/3 support in Chrome, the protocol is actually identified as “http2+quic/99” in Developer Tools, but don’t let that fool you, it is indeed HTTP/3.</p>
    <div>
      <h3>Using curl</h3>
      <a href="#using-curl">
        
      </a>
    </div>
    <p>The curl command-line tool also <a href="https://daniel.haxx.se/blog/2019/09/11/curl-7-66-0-the-parallel-http-3-future-is-here/">supports HTTP/3 as an experimental feature</a>. You’ll need to download the <a href="https://github.com/curl/curl">latest version from git</a> and <a href="https://github.com/curl/curl/blob/master/docs/HTTP3.md#quiche-version">follow the instructions on how to enable HTTP/3 support</a>.</p><p>If you're running macOS, we've also made it easy to install an HTTP/3 equipped version of curl via Homebrew:</p>
            <pre><code> % brew install --HEAD -s https://raw.githubusercontent.com/cloudflare/homebrew-cloudflare/master/curl.rb</code></pre>
            <p>To perform an HTTP/3 request, all you need to do is add the “--http3” command-line flag to a normal curl command:</p>
            <pre><code> % ./curl -I https://blog.cloudflare.com/ --http3
HTTP/3 200
date: Tue, 17 Sep 2019 12:27:07 GMT
content-type: text/html; charset=utf-8
set-cookie: __cfduid=d3fc7b95edd40bc69c7d894d296564df31568723227; expires=Wed, 16-Sep-20 12:27:07 GMT; path=/; domain=.blog.cloudflare.com; HttpOnly; Secure
x-powered-by: Express
cache-control: public, max-age=60
vary: Accept-Encoding
cf-cache-status: HIT
age: 57
expires: Tue, 17 Sep 2019 12:28:07 GMT
alt-svc: h3-23=":443"; ma=86400
expect-ct: max-age=604800, report-uri="https://report-uri.cloudflare.com/cdn-cgi/beacon/expect-ct"
server: cloudflare
cf-ray: 517b128df871bfe3-MAN</code></pre>
            
    <div>
      <h3>Using quiche’s http3-client</h3>
      <a href="#using-quiches-http3-client">
        
      </a>
    </div>
    <p>Finally, we also provide an example <a href="https://github.com/cloudflare/quiche/blob/master/examples/http3-client.rs">HTTP/3 command-line client</a> (as well as a command-line server) built on top of quiche, that you can use to experiment with HTTP/3.</p><p>To get it running, first clone quiche’s GitHub repository:</p>
            <pre><code>$ git clone --recursive https://github.com/cloudflare/quiche</code></pre>
            <p>Then build it. You need a working Rust and Cargo installation (we recommend using <a href="https://rustup.rs/">rustup</a> to easily set up a working Rust development environment).</p>
            <pre><code>$ cargo build --examples</code></pre>
            <p>And finally you can execute an HTTP/3 request:</p>
            <pre><code>$ RUST_LOG=info target/debug/examples/http3-client https://blog.cloudflare.com/</code></pre>
            
    <div>
      <h2>What’s next?</h2>
      <a href="#whats-next">
        
      </a>
    </div>
    <p>In the coming months we’ll be working on improving and optimizing our QUIC and <a href="https://www.cloudflare.com/learning/performance/what-is-http3/">HTTP/3 implementation</a>, and will eventually allow everyone to enable this new feature without having to go through a waiting list. We'll continue updating our implementation as standards evolve, which <b>may result in breaking changes</b> between draft versions of the standards.</p><p>Here are a few new features on our roadmap that we're particularly excited about:</p>
    <div>
      <h3>Connection migration</h3>
      <a href="#connection-migration">
        
      </a>
    </div>
    <p>One important feature that QUIC enables is seamless and transparent migration of connections between different networks (such as your home WiFi network and your carrier’s mobile network as you leave for work in the morning) without requiring a whole new connection to be created.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7KaBR6EaZ00sko7yvIPavs/182d187326680c8c92d34486acea0de1/Screen-Shot-2019-09-25-at-7.39.44-PM.png" />
            
            </figure><p>This feature will require some additional changes to our infrastructure, but it’s something we are excited to offer our customers in the future.</p>
    <div>
      <h3>Zero Round Trip Time Resumption</h3>
      <a href="#zero-round-trip-time-resumption">
        
      </a>
    </div>
    <p>Just like TLS 1.3, QUIC supports a <a href="/introducing-0-rtt/">mode of operation that allows clients to start sending HTTP requests before the connection handshake has completed</a>. We don’t yet support this feature in our QUIC deployment, but we’ll be working on making it available, just like we already do for our TLS 1.3 support.</p>
    <div>
      <h2>HTTP/3: it's alive!</h2>
      <a href="#http-3-its-alive">
        
      </a>
    </div>
    <p>We are excited to support HTTP/3 and allow our customers to experiment with it while efforts to standardize QUIC and HTTP/3 are still ongoing. We'll continue working alongside other organizations, including Google and Mozilla, to finalize the QUIC and HTTP/3 standards and encourage broad adoption.</p><p>Here's to a faster, more reliable, more secure web experience for all.</p> ]]></content:encoded>
            <category><![CDATA[Birthday Week]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[HTTP3]]></category>
            <category><![CDATA[QUIC]]></category>
            <category><![CDATA[Security]]></category>
            <guid isPermaLink="false">4cfvya4KDyDXaX5DdNkv9x</guid>
            <dc:creator>Alessandro Ghedini</dc:creator>
            <dc:creator>Rustam Lalkaka</dc:creator>
        </item>
        <item>
            <title><![CDATA[Enjoy a slice of QUIC, and Rust!]]></title>
            <link>https://blog.cloudflare.com/enjoy-a-slice-of-quic-and-rust/</link>
            <pubDate>Tue, 22 Jan 2019 16:26:07 GMT</pubDate>
            <description><![CDATA[ During last year’s Birthday Week we announced early support for QUIC, the next generation encrypted-by-default network transport protocol designed to secure and accelerate web traffic on the Internet. ]]></description>
            <content:encoded><![CDATA[ <p>During last year’s Birthday Week we announced <a href="/head-start-with-quic/">early support</a> for <a href="/the-road-to-quic/">QUIC</a>, the next generation encrypted-by-default network transport protocol designed to secure and accelerate web traffic on the Internet.</p><p>We are not quite ready to make this feature available to every Cloudflare customer yet, but while you wait we thought you might enjoy a slice of <a href="https://github.com/cloudflare/quiche">quiche</a>, our own open-source implementation of the QUIC protocol written in Rust.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3Lk6CTjnpUTbJJtfnEnpaU/d060799beb660787b20dfdce6f35cef8/quiche-1.png" />
            
            </figure><p>Quiche will allow us to keep on top of changes to the QUIC protocol as the standardization process progresses and experiment with new features more easily. Let’s have a quick look at it together.</p>
    <div>
      <h3>Simple and genuine ingredients</h3>
      <a href="#simple-and-genuine-ingredients">
        
      </a>
    </div>
    <p>The main design principle that guided quiche’s initial development was exposing most of the QUIC complexity to applications through a minimal and intuitive API, but without making too many assumptions about the application itself, in order to allow us to reuse the same library in different contexts.</p><p>For example, while we think Rust is great, most of the stack that deals with HTTP requests on Cloudflare’s edge network is still written in good ol’ C, which means that our QUIC implementation would need to be integrated into that.</p><p>The quiche API can process QUIC packets received from network sockets and generate packets to send back, but will not touch the sockets themselves. It also exposes <a href="https://github.com/cloudflare/quiche/blob/master/include/quiche.h">a thin layer</a> on top of the Rust API itself to make integration into C/C++ (and other languages) easier.</p><p>The application is responsible for fetching data from the network (e.g. via sockets), passing it to quiche, and sending the data that quiche generates back into the network. The application also needs to handle timers, with quiche telling it when to wake up (this is required, for example, to retransmit lost packets once the corresponding retransmission timeouts expire). This leaves the application free to decide how to best implement the I/O and event loop support, depending on the support offered by the operating system or the networking framework used.</p><p>Thanks to this we were able to integrate quiche into our NGINX fork (though this is not ready to be open-sourced just yet) without major changes to NGINX internals. Quiche can also be built <a href="https://github.com/curl/curl/pull/3314#issuecomment-455778313">together with cURL</a> to power cURL’s very early (and very experimental) QUIC support. 
And of course you can use quiche to implement QUIC <a href="https://github.com/cloudflare/quiche/blob/master/examples/client.rs">clients</a> and <a href="https://github.com/cloudflare/quiche/blob/master/examples/server.rs">servers</a> written in Rust as well.</p>
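The division of labor described above can be sketched with a toy stand-in. Nothing below is quiche’s actual API; the hypothetical `MockConn`, which just queues a reply for every inbound packet, exists only to show where the application’s sockets and event loop plug in while the library sticks to transforming byte buffers:

```rust
// Toy mock of the API shape described above: the "library" (MockConn) only
// transforms byte buffers; the application owns the sockets and timers.
// MockConn and its echo-style behavior are illustrative, not quiche's API.
struct MockConn {
    outgoing: Vec<Vec<u8>>,
}

impl MockConn {
    fn new() -> Self {
        MockConn { outgoing: Vec::new() }
    }

    // Feed a packet "received from the network" into the library.
    fn recv(&mut self, pkt: &[u8]) -> usize {
        // Pretend every inbound packet elicits a reply (here, the bytes reversed).
        self.outgoing.push(pkt.iter().rev().cloned().collect());
        pkt.len()
    }

    // Drain a packet the library wants sent, if any.
    fn send(&mut self, out: &mut [u8]) -> Option<usize> {
        let pkt = self.outgoing.pop()?;
        out[..pkt.len()].copy_from_slice(&pkt);
        Some(pkt.len())
    }
}

fn main() {
    let mut conn = MockConn::new();
    let mut out = [0u8; 1500];

    // Application event loop: read from socket -> conn.recv(),
    // then conn.send() -> write to socket, until nothing is pending.
    let inbound = [1u8, 2, 3]; // stand-in for bytes from socket.recv()
    conn.recv(&inbound);
    while let Some(n) = conn.send(&mut out) {
        println!("would send {} bytes", n); // stand-in for socket.send()
    }
}
```

A real event loop would additionally ask the connection for its next timeout and invoke a timeout handler when it fires, so retransmissions happen without the library ever touching the clock itself.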
    <div>
      <h3>More boring than ever</h3>
      <a href="#more-boring-than-ever">
        
      </a>
    </div>
    <p>A couple of years ago we <a href="/make-ssl-boring-again/">migrated our entire HTTPS stack to BoringSSL</a>, the crypto and TLS library developed by Google. This allowed us to streamline our stack (we’d previously had to maintain our own internal patches to OpenSSL for many of the features that BoringSSL offers out-of-the-box) as well as ship exciting new features more quickly.</p><p>For those not following the QUIC standardization process, the QUIC protocol itself makes use of <a href="/rfc-8446-aka-tls-1-3/">TLS 1.3</a> as part of its connection handshake, so it made sense that our QUIC implementation would also use BoringSSL to implement that part of the protocol.</p><p>As far as QUIC is concerned, the TLS library needs to provide negotiation of cryptographic parameters (including encryption secrets), which are then used by the QUIC layer itself to encrypt/decrypt the packets on the wire. The TLS record layer is replaced by QUIC framing to avoid overhead and duplication, so that TLS handshake messages are carried directly on top of encrypted QUIC packets.</p><p>This makes integrating with existing TLS implementations more challenging, since they would need to expose the raw TLS handshake messages as-is, without record layer or protection, which would be handled by quiche itself. BoringSSL offers a dedicated API that can be used by QUIC implementations, which required <a href="https://boringssl-review.googlesource.com/c/boringssl/+/29464">a</a> <a href="https://boringssl-review.googlesource.com/c/boringssl/+/33724">few</a> <a href="https://boringssl-review.googlesource.com/c/boringssl/+/33904">tweaks</a> along the way as you would expect with something so new and experimental, but was overall a breeze to integrate into quiche.</p>
    <div>
      <h3>One ring to rule them all</h3>
      <a href="#one-ring-to-rule-them-all">
        
      </a>
    </div>
    <p>While the TLS handshake is implemented using BoringSSL’s API directly (via Rust’s FFI support), to implement QUIC’s packet protection we decided to use <a href="https://github.com/briansmith/ring">ring</a>, a very popular Rust library that provides safe and fast cryptographic primitives.</p><p>Ring offers most of the same cryptographic primitives you’d get with BoringSSL’s libcrypto, but exposed through an intuitive and safe Rust API. In fact, ring uses some of the same fast implementations of cryptographic algorithms that BoringSSL also uses, but exposed through a nicer API.</p><p>However, QUIC’s use of cryptography is somewhat unique and, uhm, exotic: the packet’s payload protection uses standard AEAD (“authenticated encryption with associated data”) algorithms like AES-GCM and ChaCha20-Poly1305, but the protection for the packet’s header is different and was designed specifically for QUIC to prevent middle-boxes on the network from intercepting some of the packet’s metadata (like the packet number).</p><p>Ring didn’t originally expose the primitives required to implement QUIC’s header protection, but adding support for them was easy enough, and our changes, now <a href="https://github.com/briansmith/ring/pull/749">also</a> <a href="https://github.com/briansmith/ring/pull/756">open-source</a>, were officially released in ring v0.14.0 and are available for everyone to use.</p>
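As a rough sketch of what header protection does (not of how the real mask is computed), consider the toy Rust version below. QUIC derives a short mask from a sample of the packet ciphertext using AES or ChaCha20 and XORs it over the flags byte and packet-number bytes; the XOR-based `toy_mask` here is only an illustrative stand-in for that cipher step:

```rust
// Toy illustration of QUIC-style header protection: a 5-byte mask derived
// from a sample of the packet ciphertext is XORed over the first header
// byte and the packet-number bytes. Real QUIC derives the mask with AES or
// ChaCha20; this XOR "cipher" is a stand-in, not the real derivation.
fn toy_mask(hp_key: &[u8; 5], sample: &[u8; 5]) -> [u8; 5] {
    let mut mask = *hp_key;
    for i in 0..5 {
        mask[i] ^= sample[i];
    }
    mask
}

// Applies (or removes: XOR is its own inverse) protection in place, for a
// short-header packet whose packet number is the trailing `pn_len` bytes.
fn protect_header(header: &mut [u8], pn_len: usize, mask: &[u8; 5]) {
    header[0] ^= mask[0] & 0x1f; // only the low bits of the flags byte
    let pn_start = header.len() - pn_len;
    for i in 0..pn_len {
        header[pn_start + i] ^= mask[1 + i];
    }
}

fn main() {
    let mask = toy_mask(&[0xa0, 0xb1, 0xc2, 0xd3, 0xe4], &[1, 2, 3, 4, 5]);
    let mut header = [0x41, 0x07, 0x00, 0x2a]; // flags, conn ID, 2-byte packet number
    let original = header;
    protect_header(&mut header, 2, &mask); // on-path observers see this form
    assert_ne!(header, original);
    protect_header(&mut header, 2, &mask); // the peer removes the mask
    assert_eq!(header, original);
    println!("header round-trips: {:02x?}", header);
}
```

Because the mask depends on the packet’s own ciphertext, an observer who can’t decrypt the payload can’t compute the mask either, which is what keeps the packet number opaque to middle-boxes.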
    <div>
      <h3>Conclusion</h3>
      <a href="#conclusion">
        
      </a>
    </div>
    <p>While quiche is one of the more recent additions to the <a href="https://github.com/quicwg/base-drafts/wiki/Implementations">list of QUIC implementations</a> (its first commit only dates back 3 months or so), it’s already able to interoperate with the other more mature implementations and exercise many of QUIC’s features.</p><p>Quiche, just like QUIC itself, is not “done” (or perfect for that matter) yet. Bugs will be found and fixed, new exciting features implemented (and then more bugs will be found), and API compatibility broken, as we gain experience and learn from wider deployment of QUIC on the Internet.</p> ]]></content:encoded>
            <category><![CDATA[QUIC]]></category>
            <category><![CDATA[Rust]]></category>
            <guid isPermaLink="false">5DnlohGnzDmDOrXWOTE8vJ</guid>
            <dc:creator>Alessandro Ghedini</dc:creator>
        </item>
        <item>
            <title><![CDATA[Encrypt that SNI: Firefox edition]]></title>
            <link>https://blog.cloudflare.com/encrypt-that-sni-firefox-edition/</link>
            <pubDate>Thu, 18 Oct 2018 13:00:00 GMT</pubDate>
            <description><![CDATA[ A couple of weeks ago we announced support for the encrypted Server Name Indication (SNI) TLS extension (ESNI for short). As promised, our friends at Mozilla landed support for ESNI in Firefox Nightly. ]]></description>
            <content:encoded><![CDATA[ <p>A couple of weeks ago we <a href="https://blog.cloudflare.com/esni/">announced</a> support for the <a href="https://blog.cloudflare.com/encrypted-sni/">encrypted Server Name Indication (SNI) TLS extension</a> (ESNI for short). As promised, our friends at Mozilla <a href="https://blog.mozilla.org/security/2018/10/18/encrypted-sni-comes-to-firefox-nightly/">landed support for ESNI in Firefox Nightly</a>, so you can now browse Cloudflare websites without leaking the plaintext SNI TLS extension to on-path observers (ISPs, coffee-shop owners, firewalls, …). Today we'll show you how to enable it and how to get full marks on our <a href="https://www.encryptedsni.com/">Browsing Experience Security Check</a>.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6zyycnqY5HRKXntoXJyn0g/87a427fca18ae269c4ff55c5939cdf94/esni-3_3.5x-1.png" />
          </figure>
    <div>
      <h3>Here comes the night</h3>
      <a href="#here-comes-the-night">
        
      </a>
    </div>
    <p>The first step is to download and install the very latest <a href="https://www.mozilla.org/firefox/channel/desktop/#nightly">Firefox Nightly build</a>, or, if you have Nightly already installed, make sure it’s up to date.</p><p>When we announced our support for ESNI we also created a test page, <a href="https://encryptedsni.com/">https://encryptedsni.com</a>, which checks whether your browser and DNS configuration provide a more secure browsing experience by using secure DNS transport, DNSSEC validation, TLS 1.3 &amp; ESNI itself when connecting to it. Before you make any changes to your Firefox configuration, you might well see a result something like this:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2i7XABalwEra2pZXq0uaCQ/fbf7c039508cf3efe7fbfcf96d00ecca/encryptedsni_securedns_check_failed.png" />
          </figure><p>So, room for improvement! Next, head to the <a>about:config</a> page and look for the <code>network.security.esni.enabled</code> option (you can type the name in the search box at the top to filter out unrelated options), and switch it to true by double-clicking on its value.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5McCvsEx9Zvrz9QmrLOFfP/eecc257ff561f4641404dbe32081a923/firefox_enable_encryptedsni.png" />
          </figure><p>Now <a href="https://blog.cloudflare.com/encrypted-sni/">encrypted SNI is enabled</a> and will be automatically used when you visit websites that support it (including all websites on Cloudflare).</p><p>It’s important to note that, as explained in our blog post, you must also enable support for DNS over HTTPS (also known as “Trusted Recursive Resolver” in Firefox) in order to avoid leaking the websites visited through plaintext DNS queries. To do that with Firefox, you can simply follow the instructions on <a href="https://wiki.mozilla.org/Trusted_Recursive_Resolver">this page</a>.</p><p>Mozilla recommends setting up the Trusted Recursive Resolver in mode “2”, which means that if, for whatever reason, the DNS query to the TRR fails, it will be retried using the system’s DNS resolver. This is good to avoid breaking your web browsing due to DNS misconfigurations; however, Firefox will also fall back to the system resolver in case of a failed <a href="https://blog.cloudflare.com/dnssec-an-introduction/">DNSSEC</a> signature verification, which might affect users’ security and privacy due to the fact that the query will then be retried over plaintext DNS.</p><p>This is because any DNS failure from the resolver, including DNSSEC failures, is identified by the DNS SERVFAIL return code, which is not granular enough for Firefox to differentiate between failure scenarios. We are looking into options to address this on our <a href="https://1.1.1.1/">1.1.1.1</a> resolver, in order to give Firefox and other DNS clients more information on the type of DNS failure experienced, to avoid the fallback behaviour when appropriate.</p><p>Now that everything is in place, go ahead and visit our <a href="https://www.encryptedsni.com/">Browsing Experience Security Check</a> page, and click on the “Check My Browser” button. You should now see results something like this:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4BVLoxvwwTnBfFoGCqJIjF/834b8b0771d392e90f23f762551c77e1/encryptedsni_securedns_check_passed.png" />
          </figure><p>Note: As you make changes in <code>about:config</code> to the ESNI &amp; TRR settings, you will need to hard refresh the check page to ensure a new TLS connection is established. We plan to fix this in a future update.</p><p>To test for encrypted SNI support on your Cloudflare domain, you can visit the “/cdn-cgi/trace” page, for example, <a href="https://www.cloudflare.com/cdn-cgi/trace">https://www.cloudflare.com/cdn-cgi/trace</a> (replace <code>www.cloudflare.com</code> with your own domain). If the browser encrypted the SNI you should see <code>sni=encrypted</code> in the trace output.</p>
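If you script this check, a tiny helper can scan the fetched trace body, which is made up of plain `key=value` lines, for that marker. The non-`sni` field names in the sample below are illustrative:

```rust
// Scan a /cdn-cgi/trace response body (plain key=value lines) for the
// `sni=encrypted` marker described above.
fn sni_is_encrypted(trace: &str) -> bool {
    trace.lines().any(|line| line.trim() == "sni=encrypted")
}

fn main() {
    // Illustrative trace body; only the `sni` line matters here.
    let trace = "h=www.cloudflare.com\nip=198.51.100.7\ntls=TLSv1.3\nsni=encrypted\n";
    println!("SNI was encrypted: {}", sni_is_encrypted(trace));
}
```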
    <div>
      <h3>On the wire</h3>
      <a href="#on-the-wire">
        
      </a>
    </div>
    <p>You can also go a step further and <a href="https://www.wireshark.org/docs/wsdg_html_chunked/ChSrcObtain.html">download</a> and <a href="https://www.wireshark.org/docs/wsdg_html_chunked/ChSrcBuildFirstTime.html">build</a> the latest <a href="https://www.wireshark.org/">Wireshark</a> code from its git repository (this feature hasn’t landed in a stable release yet so building from source is required for now).</p><p>This will allow you to see what the encrypted SNI extension looks like on the wire, while you visit a website that supports ESNI (e.g. <a href="https://cloudflare.com/">https://cloudflare.com</a>).</p><p>This is how a normal TLS connection looks with a plaintext SNI:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6ATpi1p2wqPGGhxqOjaD23/91b20294cf31b7327c413adcdc107acf/unencrypted_sni_pcap-2.png" />
          </figure><p>And here it is again, but this time with the encrypted SNI extension:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4sai8zbi9oq5x4jsktOtIB/0cb8547ed4aa4b9fb629c775f0a52dc0/encrypted_sni_pcap.png" />
          </figure>
    <div>
      <h3>Fallback</h3>
      <a href="#fallback">
        
      </a>
    </div>
    <p>As mentioned in our earlier post, there may be cases when the DNS record fetched by the client doesn’t match a valid key owned by the TLS server, in which case the connection using ESNI would simply fail to be established.</p><p>This might happen for example if the authoritative DNS server and the TLS server somehow get out of sync (for example, the TLS server rotates its own key, but the DNS record is not updated accordingly). But this could also be caused by external parties, for example, a caching DNS resolver that doesn’t properly respect the TTL set by the authoritative server might serve an outdated ESNI record even though the authoritative server is up-to-date. When this happens, Firefox will fail to connect to the website.</p><p>The way we work around this problem on the Cloudflare edge network, is to simply make the TLS termination stack keep a list of valid ESNI keys for the past few hours, rather than just the latest and most recent key. This allows the TLS server to decrypt the encrypted SNI sent by a client even if a slightly outdated DNS record was used to produce it. The duration of the lifetime of ESNI keys needs to be balanced between increasing service availability, by keeping as many keys around as possible, and increasing security and forward secrecy of ESNI, which on the contrary requires keeping as few keys as possible.</p><p>There is some room for experimentation while the encrypted SNI specification is not finalized yet, and one proposed solution would allow the server to detect the failure and serve a fresh ESNI record to the client which in turn can then try to connect again using the newly received record without having to disable ESNI completely. 
But while this might seem easy, in practice a lot of things need to be taken into account: the server needs to serve a certificate to the client, so the client can make sure the connection is not being intercepted, but at the same time the server doesn’t know which certificate to serve because it can’t decrypt and inspect the SNI, which introduces the need for some sort of “fallback certificate”. Additionally, any such fallback mechanism would inevitably add an extra round-trip to the connection handshake, which would negate one of the main performance improvements introduced by TLS 1.3 (that is, shorter handshakes).</p>
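The multi-key workaround described above can be sketched as follows. XOR stands in for the real AEAD here: with a proper AEAD, decryption under the wrong key fails authentication, which this toy approximates with an ASCII hostname check. All names and the key-list shape are illustrative:

```rust
// Sketch of the multi-key fallback: the server keeps ESNI keys from the
// past few hours and tries each until one decrypts the extension.
// XOR is a stand-in for the real AEAD, purely for illustration.
fn try_decrypt(key: u8, ciphertext: &[u8]) -> Option<Vec<u8>> {
    let plain: Vec<u8> = ciphertext.iter().map(|b| b ^ key).collect();
    // Stand-in for AEAD authentication failure: reject anything that
    // doesn't look like an ASCII hostname.
    if plain.iter().all(|b| b.is_ascii_graphic()) {
        Some(plain)
    } else {
        None
    }
}

fn decrypt_sni(recent_keys: &[u8], ciphertext: &[u8]) -> Option<Vec<u8>> {
    // Newest key first, then older keys kept around for DNS caching delays.
    recent_keys.iter().find_map(|&k| try_decrypt(k, ciphertext))
}

fn main() {
    let old_key = 0xaa_u8;         // key the (stale) DNS record was built with
    let keys = [0x01_u8, old_key]; // current key first, previous hours after
    let ciphertext: Vec<u8> = b"example.com".iter().map(|b| b ^ old_key).collect();

    let sni = decrypt_sni(&keys, &ciphertext).expect("some recent key works");
    assert_eq!(&sni[..], &b"example.com"[..]);
    println!("decrypted SNI: {}", String::from_utf8(sni).unwrap());
}
```

The trade-off in the text maps directly onto the length of `recent_keys`: a longer list means fewer failed handshakes from stale DNS records, but a longer window in which a compromised key is still useful.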
    <div>
      <h3>Conclusion</h3>
      <a href="#conclusion">
        
      </a>
    </div>
    <p>On our part, we’ll continue to experiment and evolve our implementation as the specification evolves, to make encrypted SNI work best for our customers and users.</p> ]]></content:encoded>
            <category><![CDATA[Birthday Week]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[Cryptography]]></category>
            <category><![CDATA[TLS 1.3]]></category>
            <category><![CDATA[DNS]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[Privacy]]></category>
            <guid isPermaLink="false">3CdowETfSYeI0Jt27lcRwf</guid>
            <dc:creator>Alessandro Ghedini</dc:creator>
        </item>
        <item>
            <title><![CDATA[Encrypt it or lose it: how encrypted SNI works]]></title>
            <link>https://blog.cloudflare.com/encrypted-sni/</link>
            <pubDate>Mon, 24 Sep 2018 12:01:00 GMT</pubDate>
            <description><![CDATA[ Today we announced support for encrypted SNI, an extension to the TLS 1.3 protocol that improves privacy of Internet users. ]]></description>
            <content:encoded><![CDATA[ <p></p><p>Today we announced <a href="/esni">support for encrypted SNI</a>, <a href="https://tools.ietf.org/html/draft-ietf-tls-esni">an extension</a> to the <a href="/rfc-8446-aka-tls-1-3/">TLS 1.3</a> protocol that improves privacy of Internet users by preventing on-path observers, including ISPs, coffee shop owners and firewalls, from intercepting the TLS Server Name Indication (SNI) extension and using it to determine which websites users are visiting.</p><p>Encrypted SNI, together with other Internet security features already offered by Cloudflare for free, will make it harder to censor content and track users on the Internet. Read on to learn how it works.</p>
    <div>
      <h3>SNWhy?</h3>
      <a href="#snwhy">
        
      </a>
    </div>
    <p>The TLS Server Name Indication (SNI) extension, <a href="https://tools.ietf.org/html/rfc3546">originally standardized back in 2003</a>, lets servers host multiple TLS-enabled websites on the same set of IP addresses, by requiring clients to specify which site they want to connect to during the initial TLS handshake. Without SNI the server wouldn’t know, for example, which certificate to serve to the client, or which configuration to apply to the connection.</p><p>The client adds the SNI extension, containing the hostname of the site it’s connecting to, to the ClientHello message, which it sends to the server during the TLS handshake. Unfortunately the ClientHello message is sent unencrypted, because client and server don’t share an encryption key at that point.</p>
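For reference, this is roughly what those plaintext extension bytes look like. The sketch below follows the RFC 6066 layout (extension type 0 for server_name, then a length-prefixed server_name_list holding a single host_name entry):

```rust
// Builds the plaintext SNI extension bytes (RFC 6066) carried in the
// ClientHello: extension type 0, then a server_name_list with a single
// length-prefixed host_name entry.
fn encode_sni_extension(hostname: &str) -> Vec<u8> {
    let name = hostname.as_bytes();
    let mut out = Vec::with_capacity(name.len() + 9);
    out.extend_from_slice(&[0x00, 0x00]);                          // extension type: server_name (0)
    out.extend_from_slice(&(name.len() as u16 + 5).to_be_bytes()); // extension_data length
    out.extend_from_slice(&(name.len() as u16 + 3).to_be_bytes()); // server_name_list length
    out.push(0x00);                                                // name_type: host_name
    out.extend_from_slice(&(name.len() as u16).to_be_bytes());     // host_name length
    out.extend_from_slice(name);
    out
}

fn main() {
    // Every one of these bytes crosses the wire unencrypted in plain TLS.
    let ext = encode_sni_extension("cloudflare.com");
    println!("{:02x?}", ext);
}
```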
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/15sFb2PJWXZn3WBxjYOa7p/0f42a188a08641aaee1e18b82ce160a9/tls13_unencrypted_server_name_indication-2.png" />
            
            </figure><p><i>TLS 1.3 with Unencrypted SNI</i></p><p>This means that an on-path observer (say, an ISP, coffee shop owner, or a firewall) can intercept the plaintext ClientHello message, and determine which website the client is trying to connect to. That allows the observer to track which sites a user is visiting.</p><p>But with SNI encryption the client encrypts the SNI even though the rest of the ClientHello is sent in plaintext.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/596ebAAHizuuRJ4B7kuFEq/9cba63e4141b47dd0f5d0e8941157b58/tls13_encrypted_server_name_indication-1.png" />
            
            </figure><p><i>TLS 1.3 with Encrypted SNI</i></p><p>So how come the original SNI couldn’t be encrypted before, but now it can? Where does the encryption key come from if client and server haven’t negotiated one yet?</p>
    <div>
      <h3>If the chicken must come before the egg, where do you put the chicken?</h3>
      <a href="#if-the-chicken-must-come-before-the-egg-where-do-you-put-the-chicken">
        
      </a>
    </div>
    <p>As with <a href="https://datatracker.ietf.org/meeting/101/materials/slides-101-dnsop-sessa-the-dns-camel-01">many other Internet features</a> the answer is simply “DNS”.</p><p>The server publishes a <a href="https://en.wikipedia.org/wiki/Public-key_cryptography">public key</a> on a well-known DNS record, which can be fetched by the client before connecting (as it already does for A, AAAA and other records). The client then replaces the SNI extension in the ClientHello with an “encrypted SNI” extension, which is none other than the original SNI extension, but encrypted using a symmetric encryption key derived using the server’s public key, as described below. The server, which owns the private key and can derive the symmetric encryption key as well, can then decrypt the extension and therefore terminate the connection (or forward it to a backend server). Since only the client, and the server it’s connecting to, can derive the encryption key, the encrypted SNI cannot be decrypted and accessed by third parties.</p><p>It’s important to note that this is an extension to TLS version 1.3 and above, and doesn’t work with previous versions of the protocol. The reason is very simple: one of the changes introduced by TLS 1.3 (<a href="/you-get-tls-1-3-you-get-tls-1-3-everyone-gets-tls-1-3/">not without problems</a>) meant moving the Certificate message sent by the server to the encrypted portion of the TLS handshake (before 1.3, it was sent in plaintext). Without this fundamental change to the protocol, an attacker would still be able to determine the identity of the server by simply observing the plaintext certificate sent on the wire.</p><p>The underlying cryptographic machinery involves using the <a href="https://en.wikipedia.org/wiki/Diffie%E2%80%93Hellman_key_exchange">Diffie-Hellman key exchange algorithm</a> which allows client and server to generate a shared encryption key over an untrusted channel. 
The encrypted SNI encryption key is thus calculated on the client-side by using the server’s public key (which is actually the public portion of a Diffie-Hellman semi-static key share) and the private portion of an ephemeral Diffie-Hellman share generated by the client itself on the fly and discarded immediately after the ClientHello is sent to the server. Additional data (such as some of the cryptographic parameters sent by the client as part of its ClientHello message) is also mixed into the cryptographic process for good measure.</p><p>The client’s ESNI extension will then include not only the actual encrypted SNI bits, but also the client’s public key share, the cipher suite it used for encryption and the digest of the server’s ESNI DNS record. On the other side, the server uses its own private key share and the public portion of the client’s share to generate the encryption key and decrypt the extension.</p><p>While this may seem overly complicated, this ensures that the encryption key is cryptographically tied to the specific TLS session it was generated for, and cannot be reused across multiple connections. This prevents an attacker able to observe the encrypted extension sent by the client from simply capturing it and replaying it to the server in a separate session to unmask the identity of the website the user was trying to connect to (this is known as a “cut-and-paste” attack).</p><p>However, a compromise of the server’s private key would put all ESNI symmetric keys generated from it in jeopardy (which would allow observers to decrypt previously collected encrypted data), which is why Cloudflare’s own SNI encryption implementation rotates the server’s keys every hour to improve forward secrecy, but keeps track of the keys for the previous few hours to allow for DNS caching and replication delays, so that clients with slightly outdated keys can still use ESNI without problems (but eventually all keys are discarded and forgotten).</p>
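A toy Diffie-Hellman exchange over a small prime shows the core property being relied on here: both sides derive the same symmetric key from their own secret and the other side’s public share. Real ESNI uses elliptic-curve groups and an HKDF-based derivation step; the group parameters and secrets below are illustrative only:

```rust
// Toy Diffie-Hellman over a small prime group, illustrating how the
// server's semi-static share and the client's ephemeral share yield the
// same symmetric key on both sides. Real ESNI uses elliptic curves and
// HKDF; these numbers are toys.
fn mod_pow(mut base: u64, mut exp: u64, modulus: u64) -> u64 {
    let mut result = 1;
    base %= modulus;
    while exp > 0 {
        if exp & 1 == 1 {
            result = result * base % modulus;
        }
        base = base * base % modulus;
        exp >>= 1;
    }
    result
}

fn main() {
    let (p, g) = (2_147_483_647_u64, 5_u64); // toy group parameters
    let server_secret = 12_345; // semi-static, rotated every hour
    let client_secret = 67_890; // ephemeral, discarded after the ClientHello

    let server_share = mod_pow(g, server_secret, p); // published in DNS
    let client_share = mod_pow(g, client_secret, p); // sent in the ESNI extension

    let client_key = mod_pow(server_share, client_secret, p);
    let server_key = mod_pow(client_share, server_secret, p);
    assert_eq!(client_key, server_key); // both sides derive the same key
    println!("shared key: {}", client_key);
}
```

Because the client’s share is thrown away after one handshake, the derived key is unique to that session, which is exactly what defeats the cut-and-paste replay described above.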
    <div>
      <h3>But wait, DNS? For real?</h3>
      <a href="#but-wait-dns-for-real">
        
      </a>
    </div>
    <p>The observant reader might have realized that simply using <a href="https://www.cloudflare.com/learning/dns/what-is-dns/">DNS</a> (which is, by default, unencrypted) would make the whole encrypted SNI idea completely pointless: an on-path observer would be able to determine which website the client is connecting to by simply observing the plaintext DNS queries sent by the client itself, whether encrypted SNI was used or not.</p><p>But with the introduction of DNS features such as DNS over TLS (DoT) and DNS over HTTPS (DoH), and of public DNS resolvers that provide those features to their users (such as Cloudflare’s own <a href="/announcing-1111/">1.1.1.1</a>), DNS queries can now be encrypted and protected from the prying eyes of censors and trackers alike.</p><p>However, while responses from DoT/DoH DNS resolvers can be trusted, to a certain extent (evil resolvers notwithstanding), it might still be possible for a determined attacker to poison the resolver’s cache by intercepting its communication with the authoritative DNS server and injecting malicious data. That is, unless both the authoritative server and the resolver support <a href="https://www.cloudflare.com/dns/dnssec/">DNSSEC</a><sub>[1]</sub>. Incidentally, Cloudflare’s authoritative DNS servers can sign responses returned to resolvers, and the 1.1.1.1 resolver can verify them.</p>
    <div>
      <h3>What about the IP address?</h3>
      <a href="#what-about-the-ip-address">
        
      </a>
    </div>
    <p>While both DNS queries and the TLS SNI extension can now be protected from on-path attackers, it might still be possible to determine which websites users are visiting by simply looking at the destination IP addresses of the traffic originating from users’ devices. Some of our customers are protected from this to a certain degree thanks to the fact that many Cloudflare domains share the same sets of addresses, but this is not enough, and more work is required to protect end users to a larger degree. Stay tuned for more updates from Cloudflare on the subject in the future.</p>
    <div>
      <h3>Where do I sign up?</h3>
      <a href="#where-do-i-sign-up">
        
      </a>
    </div>
    <p>Encrypted SNI is now enabled for free on all Cloudflare zones using our name servers, so you don’t need to do anything to enable it on your Cloudflare website. On the browser side, our friends at Firefox tell us that they expect to add encrypted SNI support this week to <a href="https://www.mozilla.org/firefox/channel/desktop/">Firefox Nightly</a> (keep in mind that the encrypted SNI spec is still under development, so it’s not stable just yet).</p><p>By visiting <a href="https://encryptedsni.com">encryptedsni.com</a> you can check how secure your browsing experience is. Are you using secure DNS? Is your resolver validating DNSSEC signatures? Does your browser support TLS 1.3? Did your browser encrypt the SNI? If the answer to all those questions is “yes” then you can sleep peacefully knowing that your browsing is protected from prying eyes.</p>
    <div>
      <h3>Conclusion</h3>
      <a href="#conclusion">
        
      </a>
    </div>
    <p>Encrypted SNI, along with TLS 1.3, DNSSEC and DoT/DoH, plugs one of the few remaining holes that enable surveillance and censorship on the Internet. More work is still required to get to a surveillance-free Internet, but we are (slowly) getting there.</p><p>[1]: It's important to mention that DNSSEC could be disabled by BGP route hijacking between a DNS resolver and the TLD server. Last week we <a href="/rpki/">announced</a> our commitment to RPKI and if DNS resolvers and <a href="https://www.cloudflare.com/learning/dns/top-level-domain/">TLDs</a> also implement RPKI, this type of hijacking will be much more difficult.</p><p><a href="/subscribe/"><i>Subscribe to the blog</i></a><i> for daily updates on all our Birthday Week announcements.</i></p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4J9GqS6QbKbhfEaYr5EO0a/a524c0ca04e9a919cc052e55eb670c17/Cloudflare-Birthday-Week-3.png" />
            
            </figure><p></p> ]]></content:encoded>
            <category><![CDATA[Birthday Week]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[Cryptography]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[TLS]]></category>
            <category><![CDATA[TLS 1.3]]></category>
            <category><![CDATA[DNS]]></category>
            <category><![CDATA[Reliability]]></category>
            <category><![CDATA[Speed & Reliability]]></category>
            <guid isPermaLink="false">377BXgmPj3EgsOPaOyf1oG</guid>
            <dc:creator>Alessandro Ghedini</dc:creator>
        </item>
        <item>
            <title><![CDATA[The Road to QUIC]]></title>
            <link>https://blog.cloudflare.com/the-road-to-quic/</link>
            <pubDate>Thu, 26 Jul 2018 15:04:36 GMT</pubDate>
            <description><![CDATA[ QUIC (Quick UDP Internet Connections) is a new encrypted-by-default Internet transport protocol that provides a number of improvements designed to accelerate HTTP traffic as well as make it more secure, with the intended goal of eventually replacing TCP and TLS on the web. ]]></description>
            <content:encoded><![CDATA[ <p>QUIC (Quick UDP Internet Connections) is a new encrypted-by-default Internet transport protocol that provides a number of improvements designed to accelerate HTTP traffic as well as make it more secure, with the intended goal of eventually replacing TCP and TLS on the web. In this blog post we are going to outline some of the key features of QUIC and how they benefit the web, and also some of the challenges of supporting this radical new protocol.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3XUtUi1ckk243XN0PWk2zo/68dcab940f1bcbb37cb3d3b7f10d1ddb/QUIC-Badge-Dark-RGB-Horiz.png" />
            
            </figure><p>There are in fact two protocols that share the same name: “Google QUIC” (“gQUIC” for short), is the original protocol that was designed by Google engineers several years ago, which, after years of experimentation, has now been adopted by the <a href="https://ietf.org/">IETF</a> (Internet Engineering Task Force) for standardization.</p><p>“IETF QUIC” (just “QUIC” from now on) has already diverged from gQUIC quite significantly such that it can be considered a separate protocol. From the wire format of the packets, to the handshake and the mapping of HTTP, QUIC has improved the original gQUIC design thanks to open collaboration from many organizations and individuals, with the shared goal of making the Internet faster and more secure.</p><p>So, what are the improvements QUIC provides?</p>
    <div>
      <h3>Built-in security (and performance)</h3>
      <a href="#built-in-security-and-performance">
        
      </a>
    </div>
    <p>One of QUIC’s more radical deviations from the now venerable TCP is the stated design goal of providing a secure-by-default transport protocol. QUIC accomplishes this by providing security features, like authentication and encryption, that are typically handled by a higher-layer protocol (like TLS), within the transport protocol itself.</p><p>The initial QUIC handshake combines the typical three-way handshake that you get with TCP, with the TLS 1.3 handshake, which provides authentication of the end-points as well as negotiation of cryptographic parameters. For those familiar with the TLS protocol, QUIC replaces the TLS record layer with its own framing format, while keeping the same TLS handshake messages.</p><p>Not only does this ensure that the connection is always authenticated and encrypted, but it also makes the initial connection establishment faster as a result: the typical QUIC handshake only takes a single round-trip between client and server to complete, compared to the two round-trips required for the TCP and TLS 1.3 handshakes combined.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1CWL4LYn5pKIq6nJjHfPeK/0683aa058799594bf35d41605e05b4c1/http-request-over-tcp-tls_2x.png" />
            
            </figure><p> </p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5rT3ge707TKiFEIiyEL4IW/3588b3d9d434ca9b705d75c2a72de090/http-request-over-quic_2x.png" />
            
            </figure><p>But QUIC goes even further, and also encrypts additional connection metadata that could be abused by middle-boxes to interfere with connections. For example, packet numbers could be used by passive on-path attackers to correlate users’ activity over multiple network paths when connection migration is employed (see below). By encrypting packet numbers, QUIC ensures that they can't be used to correlate activity by any entity other than the end-points in the connection.</p><p>Encryption can also be an effective remedy to ossification, which makes flexibility built into a protocol (for example, being able to negotiate different versions of that protocol) impossible to use in practice due to wrong assumptions made by implementations (ossification is what <a href="/why-tls-1-3-isnt-in-browsers-yet/">delayed deployment of TLS 1.3</a> for so long, which <a href="/you-get-tls-1-3-you-get-tls-1-3-everyone-gets-tls-1-3">was only possible</a> after several changes, designed to prevent ossified middle-boxes from incorrectly blocking the new revision of the TLS protocol, were adopted).</p>
    <div>
      <h3>Head-of-line blocking</h3>
      <a href="#head-of-line-blocking">
        
      </a>
    </div>
    <p>One of the main improvements delivered by <a href="/introducing-http2/">HTTP/2</a> was the ability to multiplex different HTTP requests onto the same TCP connection. This allows HTTP/2 applications to process requests concurrently and better utilize the network bandwidth available to them.</p><p>This was a big improvement over the then status quo, which required applications to initiate multiple TCP+TLS connections if they wanted to process multiple HTTP/1.1 requests concurrently (e.g. when a browser needs to fetch both CSS and Javascript assets to render a web page). Creating new connections requires repeating the initial handshakes multiple times, as well as going through the initial congestion window ramp-up, which means that rendering of web pages is slowed down. Multiplexing HTTP exchanges avoids all that.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5FcudeYzlfQKPqRI8YXk45/d271d04cc9150d0debd301c6481157d5/multiplexing.svg" />
            
            </figure><p>This, however, has a downside: since multiple requests/responses are transmitted over the same TCP connection, they are all equally affected by packet loss (e.g. due to network congestion), even if the data that was lost only concerned a single request. This is called “head-of-line blocking”.</p><p>QUIC goes a bit deeper and provides first-class support for multiplexing: different HTTP streams can in turn be mapped to different QUIC transport streams. QUIC streams still share the same QUIC connection, so no additional handshakes are required and congestion state is shared, but they are delivered independently, so in most cases packet loss affecting one stream doesn't affect the others.</p><p>This can dramatically reduce the time required to, for example, render complete web pages (with CSS, Javascript, images, and other kinds of assets), particularly when crossing highly congested networks with high packet loss rates.</p>
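<p>A small toy model can show the difference. Below, one packet of stream "A" is lost and retransmitted a round-trip later; with a single ordered byte stream (as in TCP) everything behind the gap waits, while with per-stream ordering (as in QUIC) only the affected stream waits. The timings are made up for illustration:</p>

```python
# Toy model of head-of-line blocking. Times are in milliseconds; one
# packet of stream "A" is lost and arrives after a retransmission.
RTT = 50

packets = [  # (arrival_time_ms, stream, seq_within_stream)
    (0, "A", 0),
    (1, "B", 0),
    (RTT + 2, "A", 1),  # A#1 was lost, arrives after one extra RTT
    (3, "B", 1),
]

def completion_times(packets, shared_ordering):
    """When is each stream fully delivered to the application?
    shared_ordering=True models one ordered TCP byte stream: nothing
    behind a gap is delivered until the gap is filled."""
    done = {}
    if shared_ordering:
        # every stream is held back until the last (retransmitted) packet
        finish = max(t for t, _, _ in packets)
        for _, s, _ in packets:
            done[s] = finish
    else:
        # per-stream ordering only (QUIC): streams complete independently
        for t, s, _ in packets:
            done[s] = max(done.get(s, 0), t)
    return done

print(completion_times(packets, shared_ordering=True))   # both streams wait
print(completion_times(packets, shared_ordering=False))  # only "A" waits
```

In the shared-ordering case both streams complete at 52 ms; with independent streams, "B" completes at 3 ms.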
    <div>
      <h3>That easy, huh?</h3>
      <a href="#that-easy-uh">
        
      </a>
    </div>
    <p>In order to deliver on its promises, the QUIC protocol needs to break some of the assumptions that were taken for granted by many network applications, potentially making implementations and deployment of QUIC more difficult.</p><p>QUIC is designed to be delivered on top of UDP datagrams, to ease deployment and avoid problems with network appliances that drop packets from unknown protocols, since most appliances already support UDP. This also allows QUIC implementations to live in user-space, so that, for example, browsers will be able to implement new protocol features and ship them to their users without having to wait for operating system updates.</p><p>However, despite the intended goal of avoiding breakage, this design also makes preventing abuse and correctly routing packets to the right end-points more challenging.</p>
    <div>
      <h3>One NAT to bring them all and in the darkness bind them</h3>
      <a href="#one-nat-to-bring-them-all-and-in-the-darkness-bind-them">
        
      </a>
    </div>
    <p>Typical NAT routers can keep track of TCP connections passing through them by using the traditional 4-tuple (source IP address and port, and destination IP address and port): by observing TCP SYN, ACK and FIN packets transmitted over the network, they can detect when a new connection is established and when it is terminated. This allows them to precisely manage the lifetime of NAT bindings, the associations between the internal IP address and port and the external ones.</p><p>With QUIC this is not yet possible, since NAT routers deployed in the wild today do not understand QUIC, so they typically fall back to the default, less precise handling of UDP flows, which usually involves <a href="https://conferences.sigcomm.org/imc/2010/papers/p260.pdf">arbitrary, and at times very short, timeouts</a> that can affect long-running connections.</p><p>When a NAT rebinding happens (due to a timeout, for example), the end-point outside the NAT perimeter will see packets coming from a different source port than the one observed when the connection was originally established, which makes it impossible to track connections using only the 4-tuple.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2nozSxNZ6rsrj02Tb1IACy/fa90467738d8e89865b59be3ee4e5cf6/NAT-timeout-_2x.png" />
            
            </figure><p>And it's not just NAT! One of the features QUIC is intended to deliver is called “connection migration” and will allow QUIC end-points to migrate connections to different IP addresses and network paths at will. For example, a mobile client will be able to migrate QUIC connections between cellular data networks and WiFi when a known WiFi network becomes available (like when its user enters their favorite coffee shop).</p><p>QUIC tries to address this problem by introducing the concept of a connection ID: an arbitrary opaque blob of variable length, carried by QUIC packets, that can be used to identify a connection. End-points can use this ID to track the connections they are responsible for without needing to check the 4-tuple (in practice there might be multiple IDs identifying the same connection, for example to avoid linking different paths when connection migration is used, but that behavior is controlled by the end-points, not the middle-boxes).</p><p>However, this also poses a problem for network operators that use anycast addressing and <a href="/path-mtu-discovery-in-practice/">ECMP routing</a>, where a single destination IP address can potentially identify hundreds or even thousands of servers. Since the edge routers used by these networks also don't yet know how to handle QUIC traffic, UDP packets that belong to the same QUIC connection (that is, with the same connection ID) but have a different 4-tuple (due to NAT rebinding or connection migration) might end up being routed to different servers, thus breaking the connection.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3y7Ae7Hm6zUy6SyL315Oxa/366eee8f4d5a89458556b77c53aacecb/anycast-cdn.png" />
            
            </figure><p>In order to address this, network operators might need to employ smarter layer 4 load balancing solutions, which can be implemented in software and deployed without the need to touch edge routers (see for example Facebook's <a href="https://github.com/facebookincubator/katran">Katran</a> project).</p>
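<p>The core idea behind such a load balancer can be sketched in a few lines: hash something that survives NAT rebinding and migration (the connection ID) instead of the 4-tuple. Everything here is illustrative; backend names, addresses, and the connection ID are made up, and real systems like Katran are far more involved:</p>

```python
# Minimal sketch of connection-ID-based layer-4 routing. Hashing the
# QUIC connection ID keeps a flow pinned to the same backend even when
# the 4-tuple changes; hashing the 4-tuple does not.
import hashlib

BACKENDS = ["edge-1", "edge-2", "edge-3", "edge-4"]

def pick_backend(key: bytes) -> str:
    digest = hashlib.sha256(key).digest()
    return BACKENDS[int.from_bytes(digest[:4], "big") % len(BACKENDS)]

def route_by_4tuple(src_ip, src_port, dst_ip, dst_port):
    return pick_backend(f"{src_ip}:{src_port}->{dst_ip}:{dst_port}".encode())

def route_by_connection_id(conn_id: bytes) -> str:
    return pick_backend(conn_id)

conn_id = bytes.fromhex("c0ffee00deadbeef")

# A NAT rebinding changes the source port, so 4-tuple hashing may now
# pick a different backend and break the connection...
before = route_by_4tuple("198.51.100.7", 51000, "203.0.113.1", 443)
after = route_by_4tuple("198.51.100.7", 49123, "203.0.113.1", 443)

# ...while connection-ID hashing is stable across rebinding/migration:
assert route_by_connection_id(conn_id) == route_by_connection_id(conn_id)
```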
    <div>
      <h3>QPACK</h3>
      <a href="#qpack">
        
      </a>
    </div>
    <p>Another benefit introduced by HTTP/2 was <a href="/hpack-the-silent-killer-feature-of-http-2/">header compression (or HPACK)</a>, which allows HTTP/2 end-points to reduce the amount of data transmitted over the network by removing redundancies from HTTP requests and responses.</p><p>In particular, among other techniques, HPACK employs dynamic tables populated with headers that were sent (or received) in previous HTTP requests (or responses), allowing end-points to reference previously encountered headers in new requests (or responses), rather than having to transmit them all over again.</p><p>HPACK's dynamic tables need to be synchronized between the encoder (the party that sends an HTTP request or response) and the decoder (the one that receives them), otherwise the decoder will not be able to decode what it receives.</p><p>With HTTP/2 over TCP this synchronization is transparent: since the transport layer (TCP) takes care of delivering HTTP requests and responses in the same order they were sent, the instructions for updating the tables can simply be sent by the encoder as part of the request (or response) itself, making the encoding very simple. But for QUIC this is more complicated.</p><p>QUIC can deliver multiple HTTP requests (or responses) over different streams independently, which means that while it takes care of delivering data in order as far as a single stream is concerned, there are no ordering guarantees across multiple streams.</p><p>For example, if a client sends HTTP request A over QUIC stream A, and request B over stream B, it might happen, due to packet reordering or loss in the network, that request B is received by the server before request A, and if request B was encoded such that it referenced a header from request A, the server will be unable to decode it, since it hasn't yet seen request A.</p><p>In the gQUIC protocol this problem was solved by simply serializing all HTTP request and response headers (but not the bodies) over the same gQUIC stream, which meant headers would get delivered in order no matter what. This is a very simple scheme that allows implementations to reuse a lot of their existing HTTP/2 code, but on the other hand it increases the head-of-line blocking that QUIC was designed to reduce. The IETF QUIC working group thus designed a new mapping between HTTP and QUIC (“HTTP/QUIC”) as well as a new header compression scheme called “QPACK”.</p><p>In the latest draft of the HTTP/QUIC mapping and the QPACK spec, each HTTP request/response exchange uses its own bidirectional QUIC stream, so there's no head-of-line blocking. In addition, in order to support QPACK, each peer creates two additional unidirectional QUIC streams, one used to send QPACK table updates to the other peer, and one to acknowledge updates received by the other side. This way, a QPACK encoder can use a dynamic table reference only after it has been explicitly acknowledged by the decoder.</p>
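<p>The acknowledge-before-reference rule can be sketched with a toy encoder. This is not the real QPACK wire format; the class and method names are illustrative, and the point is only the decision it makes: reference a dynamic-table entry when (and only when) the decoder has acknowledged it, otherwise send the header literally:</p>

```python
# Toy sketch of the QPACK safety rule: an encoder may emit an indexed
# reference to a dynamic-table entry only after the decoder has
# acknowledged receiving it on the dedicated acknowledgement stream.

class ToyEncoder:
    def __init__(self):
        self.table = []     # dynamic-table entries inserted so far
        self.acked = set()  # indices the decoder has confirmed

    def insert(self, header):
        """Insert an entry (sent to the decoder on the updates stream)."""
        self.table.append(header)
        return len(self.table) - 1

    def on_ack(self, index):
        """Record an acknowledgement received from the decoder."""
        self.acked.add(index)

    def encode(self, header):
        if header in self.table:
            idx = self.table.index(header)
            if idx in self.acked:
                return ("indexed", idx)  # safe: the decoder surely has it
        return ("literal", header)       # fall back to sending it whole

enc = ToyEncoder()
i = enc.insert(("user-agent", "curl/7.61"))
print(enc.encode(("user-agent", "curl/7.61")))  # not acked yet -> literal
enc.on_ack(i)
print(enc.encode(("user-agent", "curl/7.61")))  # acked -> indexed reference
```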
    <div>
      <h3>Deflecting Reflection</h3>
      <a href="#deflecting-reflection">
        
      </a>
    </div>
    <p>A common problem among <a href="/ssdp-100gbps/">UDP-based</a> <a href="/memcrashed-major-amplification-attacks-from-port-11211/">protocols</a> is their susceptibility to <a href="/reflections-on-reflections/">reflection attacks</a>, where an attacker tricks an otherwise innocent server into sending large amounts of data to a third-party victim, by spoofing the source IP address of packets targeted to the server to make them look like they came from the victim.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7LfQICIemcnM4AgyhDb6kD/0649164a476896ca3a58de5390c08f09/ip-spoofing.png" />
            
            </figure><p>This kind of attack can be very effective when the response sent by the server happens to be larger than the request it received, in which case we speak of “amplification”.</p><p>TCP is not usually used for this kind of attack, because the initial packets transmitted during its handshake (SYN, SYN+ACK, …) have the same length, so they don’t provide any amplification potential.</p><p>QUIC’s handshake on the other hand is very asymmetrical: as with TLS, in its first flight the QUIC server generally sends its own certificate chain, which can be very large, while the client only has to send a few bytes (the TLS ClientHello message embedded into a QUIC packet). For this reason, the initial QUIC packet sent by a client has to be padded to a specific minimum length (even if the actual content of the packet is much smaller). However, this mitigation is still not sufficient, since the typical server response spans multiple packets and can thus still be far larger than the padded client packet.</p><p>The QUIC protocol also defines an explicit source-address verification mechanism, in which the server, rather than sending its long response, only sends a much smaller “retry” packet containing a unique cryptographic token that the client then has to echo back to the server inside a new initial packet. This way the server has higher confidence that the client is not spoofing its own source IP address (since it received the retry packet) and can complete the handshake. The downside of this mitigation is that it increases the initial handshake duration from a single round-trip to two.</p><p>An alternative solution involves reducing the server's response to the point where a reflection attack becomes less effective, for example by using <a href="/ecdsa-the-digital-signature-algorithm-of-a-better-internet/">ECDSA certificates</a> (which are typically much smaller than their RSA counterparts). We have also been experimenting with a mechanism for <a href="https://tools.ietf.org/html/draft-ietf-tls-certificate-compression">compressing TLS certificates</a> using off-the-shelf compression algorithms like zlib and brotli, a feature originally introduced by gQUIC but not currently available in TLS.</p>
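<p>Both mitigations can be sketched with a few lines of arithmetic and a toy token scheme. All the sizes below are assumptions for illustration, and the HMAC-based token is only a conceptual stand-in (the format of real retry tokens is left to each QUIC implementation):</p>

```python
# Amplification arithmetic plus a toy source-address validation token.
import hashlib
import hmac
import os

MIN_CLIENT_INITIAL = 1200  # client Initial padded to a minimum size (assumed)
RSA_FIRST_FLIGHT = 3600    # assumed server first flight with an RSA chain
ECDSA_FIRST_FLIGHT = 1100  # assumed smaller first flight with an ECDSA chain

def amplification(request_bytes: int, response_bytes: int) -> float:
    """How many bytes the victim receives per spoofed byte sent."""
    return response_bytes / request_bytes

print(f"RSA:   {amplification(MIN_CLIENT_INITIAL, RSA_FIRST_FLIGHT):.1f}x")
print(f"ECDSA: {amplification(MIN_CLIENT_INITIAL, ECDSA_FIRST_FLIGHT):.1f}x")

# Toy "retry" token: an HMAC over the claimed client address. Only a
# client that really received the token can echo a valid one back, so a
# spoofed source address fails validation.
SECRET = os.urandom(32)

def make_retry_token(client_addr: str) -> bytes:
    return hmac.new(SECRET, client_addr.encode(), hashlib.sha256).digest()

def token_valid(client_addr: str, token: bytes) -> bool:
    return hmac.compare_digest(make_retry_token(client_addr), token)

token = make_retry_token("198.51.100.7")
assert token_valid("198.51.100.7", token)     # genuine client
assert not token_valid("203.0.113.9", token)  # spoofed/victim address
```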
    <div>
      <h3>UDP performance</h3>
      <a href="#udp-performance">
        
      </a>
    </div>
    <p>One of the recurring issues with QUIC involves existing hardware and software deployed in the wild not being able to understand it. We've already looked at how QUIC tries to deal with network middle-boxes like routers, but another potentially problematic area is the performance of sending and receiving data over UDP on the QUIC end-points themselves. Over the years a lot of work has gone into optimizing TCP implementations as much as possible, including building off-loading capabilities in both software (like in operating systems) and hardware (like in network interfaces), but none of that is currently available for UDP.</p><p>However, it’s only a matter of time until QUIC implementations can take advantage of these capabilities as well. Look for example at the recent efforts to implement <a href="https://lwn.net/Articles/752184/">Generic Segmentation Offloading for UDP on Linux</a>, which would allow applications to bundle and transfer multiple UDP segments between user-space and the kernel-space networking stack at the cost of a single transfer (or close enough), as well as the effort to add <a href="https://lwn.net/Articles/655299/">zerocopy socket support on Linux</a>, which would allow applications to avoid the cost of copying user-space memory into kernel-space.</p>
    <div>
      <h3>Conclusion</h3>
      <a href="#conclusion">
        
      </a>
    </div>
    <p>Like HTTP/2 and TLS 1.3, QUIC is set to deliver a lot of new features designed to improve performance and security of web sites, as well as other Internet-based properties. The IETF working group is currently set to deliver the first version of the QUIC specifications by the end of the year and Cloudflare engineers are already hard at work to provide the benefits of QUIC to all of our customers.</p> ]]></content:encoded>
            <category><![CDATA[TLS]]></category>
            <category><![CDATA[HTTP2]]></category>
            <category><![CDATA[Speed & Reliability]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[QUIC]]></category>
            <category><![CDATA[IETF]]></category>
            <guid isPermaLink="false">4ZyUVtsRDEiNCkr2iwov88</guid>
            <dc:creator>Alessandro Ghedini</dc:creator>
        </item>
        <item>
            <title><![CDATA[You get TLS 1.3! You get TLS 1.3! Everyone gets TLS 1.3!]]></title>
            <link>https://blog.cloudflare.com/you-get-tls-1-3-you-get-tls-1-3-everyone-gets-tls-1-3/</link>
            <pubDate>Wed, 16 May 2018 17:28:07 GMT</pubDate>
            <description><![CDATA[ It's no secret that Cloudflare has been a big proponent of TLS 1.3, the newest edition of the TLS protocol that improves both speed and security, having made it available to our customers since 2016.  ]]></description>
            <content:encoded><![CDATA[ <p>It's no secret that Cloudflare has been a big proponent of <a href="/introducing-tls-1-3/">TLS 1.3</a>, the newest edition of the TLS protocol that improves both speed and security, having made it available to our customers since 2016. However, for the longest time TLS 1.3 was a work-in-progress, which meant that the feature was disabled by default in our customers’ dashboards, at least until <a href="/why-tls-1-3-isnt-in-browsers-yet/">all the kinks</a> in the protocol could be resolved.</p><p>With the specification <a href="https://www.ietf.org/mail-archive/web/tls/current/msg25837.html">finally nearing its official publication</a>, and after several years of work (as well as 28 draft versions), we are happy to announce that the TLS 1.3 feature on Cloudflare is out of beta and will be enabled by default for all new zones.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6uc6xUOlMR5H87HZejeEA1/9be622a3ef7c9d8160c1f084021a653c/Screen-Shot-2018-05-23-at-8.49.33-AM.png" />
            
            </figure><p>Custom image derived from <a href="https://youtu.be/8CAscBCdaQg?t=1m48s">YouTube video</a> courtesy of <a href="https://www.youtube.com/user/OWN">OWN</a></p><p>For our Free and Pro customers not much changes: they already had TLS 1.3 enabled by default from the start. We have also decided to disable the <a href="/introducing-0-rtt/">0-RTT feature</a> by default for these plans (it was previously enabled by default as well), due to <a href="https://twitter.com/grittygrease/status/991750903295164416">its inherent security trade-offs</a>. It will still be possible to explicitly enable it from the dashboard or the API (more on 0-RTT soon-ish in another blog post).</p><p>Our Business and Enterprise customers will now also get TLS 1.3 enabled by default for new zones (but will continue to have 0-RTT disabled). For existing Business customers that haven't made an explicit choice (that is, they haven't turned the feature on or off manually), we are also retroactively turning TLS 1.3 on.</p>
    <div>
      <h3>What happened to the middleboxes?</h3>
      <a href="#what-happened-to-the-middleboxes">
        
      </a>
    </div>
    <p>Back in December <a href="/why-tls-1-3-isnt-in-browsers-yet/">we blogged about why TLS 1.3 still wasn't being widely adopted</a>, the main reason being non-compliant middleboxes: network appliances designed to monitor and sometimes intercept HTTPS traffic.</p><p>Because the TLS protocol hadn’t been updated in a long time (TLS 1.2 came out back in 2008, with fairly minimal changes compared to TLS 1.1), wrong assumptions about the protocol made by these appliances meant that some of the more invasive changes in TLS 1.3, which broke those assumptions, caused the middleboxes to misbehave, in the worst cases breaking TLS connections passing through them.</p><p>Since then, new draft versions of the protocol have been discussed and published, providing additional measures (on top of the ones already adopted, like the “supported_versions” extension) to mitigate the impact caused by middleboxes. How, you ask? The trick was to modify the TLS 1.3 protocol to look more like previous TLS versions, but without impacting the improved performance and security benefits the new version provides.</p><p>For example, the ChangeCipherSpec message, which in previous versions of the protocol was used to notify the receiving party that subsequent records would be encrypted, was originally removed from TLS 1.3, since it no longer had any purpose after the handshake algorithm was streamlined; but in order to avoid confusing middleboxes that expected to see the message on the wire, it was reintroduced, even though the receiving endpoint will simply ignore it.</p><p>Another point of contention was the fact that some middleboxes expect to see the Certificate messages sent by servers (usually to identify the end server, sometimes for nefarious purposes), but since TLS 1.3 moved that message to the encrypted portion of the handshake, it became invisible to the snooping boxes. The trick there was to make the TLS 1.3 handshake look like it was <a href="/tls-session-resumption-full-speed-and-secure/">resuming a previous connection</a>, which means that, even in previous TLS versions, the Certificate message is omitted from plain-text communication. This was achieved by populating the previously deprecated "session_id" field in the ClientHello message with a bogus value.</p><p>Adopting these changes meant that, while the protocol itself lost a bit of its original elegance (but without losing any of the security and performance), major browsers could finally enable TLS 1.3 by default for all of their users: <a href="https://www.chromestatus.com/features/5712755738804224">Chrome enabled TLS 1.3 by default in version 65</a> while <a href="https://www.mozilla.org/en-US/firefox/60.0/releasenotes/">Firefox did so in version 60</a>.</p>
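<p>The two compatibility tweaks described above can be sketched in a few lines. The field layouts follow TLS (RFC 8446 for the session ID, the classic record header for ChangeCipherSpec); the function names and everything else are illustrative:</p>

```python
# Sketch of TLS 1.3 "middlebox compatibility mode" artifacts.
import os

def bogus_legacy_session_id() -> bytes:
    """Fill the otherwise-unused legacy_session_id field of the
    ClientHello with 32 random bytes, so on-path boxes see what looks
    like a session resumption from earlier TLS versions."""
    return os.urandom(32)

def dummy_ccs_record() -> bytes:
    """A ChangeCipherSpec record the peer simply ignores:
    type=change_cipher_spec(20), legacy_version=0x0303, length=1, body=0x01."""
    return bytes([0x14, 0x03, 0x03, 0x00, 0x01, 0x01])

assert len(bogus_legacy_session_id()) == 32
assert dummy_ccs_record() == bytes.fromhex("140303000101")
```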
    <div>
      <h3>Adoption</h3>
      <a href="#adoption">
        
      </a>
    </div>
    <p>We can now go back to our metrics and see what all of this means for general TLS 1.3 adoption.</p><p>Back in December, <a href="/why-tls-1-3-isnt-in-browsers-yet/">only 0.06% of TLS connections to Cloudflare websites used TLS 1.3</a>. Now, 5-6% do so, with this number steadily rising:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3jRyGwowWGzFifg16szfCW/2ef83fa44444667f43b8af1d773af02b/tls13_metric.png" />
            
            </figure><p>It’s worth noting that the current Firefox beta (v61) switched to using draft 28, from draft 23 (which Chrome also uses). The two draft versions are incompatible due to some minor wire changes adopted some time after draft 23 was published, but Cloudflare can speak both versions, so there won’t be a dip in adoption once Firefox 61 becomes stable. Once the final TLS 1.3 version (that is, draft 28) becomes an official RFC, we will also support it alongside the previous draft versions, to avoid leaving behind slow-to-update clients.</p>
    <div>
      <h3>Conclusion</h3>
      <a href="#conclusion">
        
      </a>
    </div>
    <p>The tremendous work required to specify, implement and deploy TLS 1.3 is finally starting to bear fruit, and adoption will without a doubt keep steadily increasing for some time: at the end of 2017 <a href="/our-predictions-for-2018/">our CTO predicted</a> that by the end of 2018 more than 50% of HTTPS connections will happen over TLS 1.3, and given the recent developments we are still confident that it is a reachable target.</p> ]]></content:encoded>
            <category><![CDATA[TLS]]></category>
            <category><![CDATA[TLS 1.3]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[Security]]></category>
            <guid isPermaLink="false">6d9VconNxieKz4HESOUy0</guid>
            <dc:creator>Alessandro Ghedini</dc:creator>
        </item>
        <item>
            <title><![CDATA[Make SSL boring again]]></title>
            <link>https://blog.cloudflare.com/make-ssl-boring-again/</link>
            <pubDate>Wed, 06 Dec 2017 14:00:00 GMT</pubDate>
            <description><![CDATA[ It may (or may not!) come as surprise, but a few months ago we migrated Cloudflare’s edge SSL connection termination stack to use BoringSSL: Google's crypto and SSL implementation that started as a fork of OpenSSL. ]]></description>
            <content:encoded><![CDATA[ <p>It may (or may not!) come as surprise, but a few months ago we migrated Cloudflare’s edge SSL connection termination stack to use <a href="https://boringssl.googlesource.com/boringssl/">BoringSSL</a>: Google's crypto and SSL implementation that started as a fork of OpenSSL.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5ocB2kjFK3yu7bQpWq7NCu/28291ba6a3d750709fe632d9f214ca95/2017-12-05-160528_621x216_scrot-1.png" />
            
            </figure><p>We dedicated several months of work to making this happen without negative impact on customer traffic. We had a few bumps along the way, and had to overcome some challenges, but we ended up in a better place than we were a few months ago.</p>
    <div>
      <h3>TLS 1.3</h3>
      <a href="#tls-1-3">
        
      </a>
    </div>
    <p>We have <a href="/introducing-tls-1-3/">already</a> <a href="/tls-1-3-overview-and-q-and-a/">blogged</a> <a href="/tls-1-3-explained-by-the-cloudflare-crypto-team-at-33c3/">extensively</a> about TLS 1.3. Our original TLS 1.3 stack required our main SSL termination software (which was based on OpenSSL) to hand off TCP connections to a separate system based on <a href="https://github.com/cloudflare/tls-tris">our fork of Go's crypto/tls standard library</a>, which was specifically developed to only handle TLS 1.3 connections. This proved handy as an experiment that we could roll out to our client base in relative safety.</p><p>However, over time, this separate system started to make our lives more complicated: most of our SSL-related business logic needed to be duplicated in the new system, which caused a few subtle bugs to pop up, and made it harder to roll out new features such as <a href="/introducing-tls-client-auth/">Client Auth</a> to all our clients.</p><p>As it happens, BoringSSL has supported TLS 1.3 for quite a long time (it was one of the first open source SSL implementations to work on this feature), so now all of our edge SSL traffic (including TLS 1.3 connections) is handled by the same system, with no duplication, no added complexity, and no increased latency. Yay!</p>
    <div>
      <h3>Fancy new crypto, part 1: X25519 for TLS 1.2 (and earlier)</h3>
      <a href="#fancy-new-crypto-part-1-x25519-for-tls-1-2-and-earlier">
        
      </a>
    </div>
    <p>When establishing an SSL connection, client and server negotiate connection-specific secret keys that will then be used to encrypt the application traffic. There are a few different methods for doing this, the most popular one being ECDH (Elliptic Curve Diffie–Hellman). Long story short, this depends on an <a href="/a-relatively-easy-to-understand-primer-on-elliptic-curve-cryptography/">elliptic curve</a> being negotiated between client and server.</p><p>For the longest time the only widely supported curves were the ones defined by NIST, until Daniel J. Bernstein proposed Curve25519 (X25519 is the mechanism used for ECDH based on Curve25519), which has quickly gained popularity and is now the default choice of many popular browsers (including Chrome).</p><p>This was already supported for TLS 1.3 connections, and with BoringSSL we are now able to support key negotiation based on X25519 at our edge for TLS 1.2 (and earlier) connections as well.</p><p>X25519 is now the second most popular elliptic curve algorithm being used on our network:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1bi0W8QN1x6ftLgdkeHx4A/4e9b0526e02faa55c06fdda2e3d7db13/curves-metrics-with-logo-1.png" />
            
            </figure>
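<p>The key-agreement principle underlying ECDH can be shown with classic Diffie-Hellman over integers mod p. This is a toy illustration only: the modulus below is a small Mersenne prime chosen for readability, not a secure parameter, and real X25519 operates on elliptic-curve points with carefully constant-time arithmetic:</p>

```python
# Toy Diffie-Hellman key agreement, illustrating the idea behind ECDH.
# NOT secure parameters; for illustration only.
import secrets

P = 2**127 - 1  # toy prime modulus (a Mersenne prime)
G = 5           # toy generator

a = secrets.randbelow(P - 3) + 2  # Alice's ephemeral secret
b = secrets.randbelow(P - 3) + 2  # Bob's ephemeral secret

A = pow(G, a, P)  # Alice sends A to Bob over the wire
B = pow(G, b, P)  # Bob sends B to Alice over the wire

# Each side combines its own secret with the other's public value and
# arrives at the same shared secret, which an eavesdropper (who saw
# only A and B) cannot feasibly compute.
shared_alice = pow(B, a, P)
shared_bob = pow(A, b, P)
assert shared_alice == shared_bob
```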
    <div>
      <h3>Fancy new crypto, part 2: RSA-PSS for TLS 1.2</h3>
      <a href="#fancy-new-crypto-part-2-rsa-pss-for-tls-1-2">
        
      </a>
    </div>
    <p>Another one of the changes introduced by TLS 1.3 is the adoption of the PSS padding scheme for RSA signatures (RSASSA-PSS). This replaces RSASSA-PKCS1-v1_5, which is more fragile and historically prone to security vulnerabilities, for all TLS 1.3 connections.</p><p>RSA PKCS#1 v1.5 has been known to be vulnerable to chosen ciphertext attacks since <a href="http://archiv.infsec.ethz.ch/education/fs08/secsem/bleichenbacher98.pdf">Bleichenbacher’s CRYPTO 98 paper</a>, which showed SSL/TLS to be vulnerable to this kind of attack as well.</p><p>The attacker exploits an “oracle”: in this case, a TLS server that allows them to determine whether a given ciphertext has been correctly padded under the rules of PKCS1-v1.5 or not. For example, if the server returns a different error for correct padding vs. incorrect padding, that information can be used as an oracle (this is how Bleichenbacher broke SSLv3 in 1998). If incorrect padding causes the handshake to take a measurably different amount of time compared to correct padding, that’s called a timing oracle.</p><p>If an attacker has access to an oracle, it can take as little as <a href="http://csf2012.seas.harvard.edu/5min_abstracts/MillionMessageAttack.pdf">15,000</a> messages to gain enough information to perform an RSA secret-key operation without possessing the secret key. This is enough for the attacker to either decrypt a ciphertext encrypted with RSA, or to forge a signature. Forging a signature allows the attacker to <a href="https://www.nds.rub.de/media/nds/veroeffentlichungen/2015/08/21/Tls13QuicAttacks.pdf">hijack TLS connections</a>, and decrypting a ciphertext allows the attacker to decrypt any connection that does not use <a href="/staying-on-top-of-tls-attacks/">forward secrecy</a>.</p><p>Since then, SSL/TLS implementations have adopted mitigations to prevent these attacks, but they are tricky to get right, as the recently published <a href="https://support.f5.com/csp/article/K21905460">F5 vulnerability</a> shows.</p><p>With the switch to BoringSSL we made RSA-PSS available to TLS 1.2 connections as well. This is already supported "in the wild", and is the preferred scheme of modern browsers like Chrome when dealing with RSA server certificates.</p>
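<p>What an "oracle" means here can be made concrete with a toy sketch. The padding check below follows the PKCS#1 v1.5 encryption-block format; the "server" is a made-up stand-in, and the point is only that answering differently for good and bad padding leaks one bit per query, which Bleichenbacher's attack accumulates into a full private-key operation:</p>

```python
# Conceptual sketch of a PKCS#1 v1.5 padding oracle (no real RSA here).

def pkcs1_v15_padding_ok(plaintext_block: bytes) -> bool:
    # Encryption padding: 0x00 0x02 || >=8 nonzero bytes || 0x00 || message
    if len(plaintext_block) < 11 or plaintext_block[0:2] != b"\x00\x02":
        return False
    try:
        sep = plaintext_block.index(0, 2)  # find the 0x00 separator
    except ValueError:
        return False
    return sep >= 10  # at least 8 bytes of nonzero padding before it

def leaky_server(block: bytes) -> str:
    # A server that answers differently for bad padding hands the
    # attacker one bit of information per query -- the oracle.
    return "ok" if pkcs1_v15_padding_ok(block) else "decrypt_error"

good = b"\x00\x02" + b"\x41" * 8 + b"\x00" + b"secret"
bad = b"\x00\x01" + b"\x41" * 8 + b"\x00" + b"secret"
print(leaky_server(good), leaky_server(bad))  # the difference *is* the leak
```

Mitigations make the server behave identically (same error, same timing) whether or not the padding was valid, starving the attacker of that bit.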
    <div>
      <h3>The dark side of the moon</h3>
      <a href="#the-dark-side-of-the-moon">
        
      </a>
    </div>
    <p>Besides all these exciting new features that we are now offering to all our clients, BoringSSL also has a few internal features that end users won't notice, but that made our life so much easier.</p><p>Some of our SSL features required special patches that we maintained in our internal OpenSSL fork; BoringSSL, however, provides replacements for these features (and more!) out of the box.</p><p>Some examples include its <a href="https://github.com/google/boringssl/blob/b2c312d670b9967cf881419758f0ec286b66a25f/include/openssl/ssl.h#L1123">private key callback</a> support, which we now use to implement <a href="/keyless-ssl-the-nitty-gritty-technical-details/">Keyless SSL</a>; its <a href="https://github.com/google/boringssl/blob/b2c312d670b9967cf881419758f0ec286b66a25f/include/openssl/ssl.h#L2004">asynchronous session lookup callback</a>, which we use to support <a href="/tls-session-resumption-full-speed-and-secure/">distributed session ID caches</a> (for session resumption with clients that, for whatever reason, don't support session tickets); its <a href="https://github.com/google/boringssl/blob/b2c312d670b9967cf881419758f0ec286b66a25f/include/openssl/ssl.h#L1385">equal-preference cipher grouping</a>, which allows us to offer <a href="/it-takes-two-to-chacha-poly/">ChaCha20-Poly1305 ciphers</a> alongside AES-GCM ones and let clients decide which they prefer; and its <a href="https://github.com/google/boringssl/blob/b2c312d670b9967cf881419758f0ec286b66a25f/include/openssl/ssl.h#L3353">"select_certificate" callback</a>, which we use for inspecting and logging ClientHellos, and for dynamically enabling features depending on the user’s configuration. (We were previously using the “cert_cb” callback for the latter, which is also supported by OpenSSL, but we ran into some limitations, like the fact that you can’t dynamically change the supported protocol versions with it, or that it is not executed during session resumption.)</p>
    <div>
      <h3>The case of the missing OCSP</h3>
      <a href="#the-case-of-the-missing-ocsp">
        
      </a>
    </div>
    <p>Apart from adding new features, the BoringSSL developers have also been busy <i>removing</i> features that most people don't care about, to make the codebase lighter and easier to maintain. For the most part this worked out very well: a huge amount of code has been removed from BoringSSL without anyone noticing.</p><p>However, one of the features that got the axe was OCSP. We relied heavily on it at our edge to offer OCSP stapling to all clients automatically, so to avoid losing this functionality we spent a few weeks building a replacement, and, surprise! we ended up with a far more reliable OCSP pipeline than the one we started with. You can read more about that work in <a href="/high-reliability-ocsp-stapling/">this blog post</a>.</p>
    <div>
      <h3>ChaCha20-Poly1305 draft</h3>
      <a href="#chacha20-poly1305-draft">
        
      </a>
    </div>
    <p>Another feature that was removed was support for the <a href="/it-takes-two-to-chacha-poly/">legacy ChaCha20-Poly1305 ciphers</a> (not to be confused with the ciphers later standardized in <a href="https://tools.ietf.org/html/rfc7905">RFC 7905</a>). These ciphers were deployed by some browsers before the standardization process finished, and ended up being incompatible with the ciphers eventually ratified.</p><p>We looked at our metrics and realized that a significant percentage of clients still relied on this feature: mostly older mobile clients that lack AES hardware offloading and never received software updates adding the newer ChaCha20 ciphers.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1RtyTjVTdSV5RtFn6pJKTF/dd9e482b0233834b26a93b9a185ced00/chacha-metrics-with-logo.png" />
            
            </figure><p>We decided to add support for these ciphers back to our own internal BoringSSL fork so that those older clients could still take advantage of them. We will keep monitoring our metrics and decide whether to remove them once the usage drops significantly.</p>
    <div>
      <h3>Slow Base64: veni, vidi, vici</h3>
      <a href="#slow-base64-veni-vidi-vici">
        
      </a>
    </div>
    <p>One somewhat annoying problem we noticed during a test deployment was an increase in the startup time of our NGINX instances. Armed with perf and flamegraphs, we looked into what was going on and realized the CPU was spending a ridiculous amount of time in BoringSSL’s base64 decoder.</p><p>It turned out that we were loading trusted CA certificates from disk (in PEM format, which uses base64) over and over and over in different parts of our NGINX configuration, and because of a <a href="https://github.com/google/boringssl/commit/536036abf46a13e52a43a92f6e44a87404e8755f#diff-c7192c0c5ad80a961243b0ad5c434176">change</a> in BoringSSL that was intended to make the base64 decoder constant-time, but also made it <a href="https://boringssl-review.googlesource.com/c/boringssl/+/16384#message-06ca2814d05ae91a486a2126e017cc38f2e514b3">several times slower</a> than the decoder in OpenSSL, our startup times suffered.</p><p>Of course, the astute reader might ask: why were you loading those certificates from disk multiple times in the first place? And indeed there was no particular reason, other than the fact that the problem went unnoticed until it actually became a problem. So we fixed our configuration to load the certificates from disk only in the configuration sections where they are actually needed, and lived happily ever after.</p>
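    <p>A minimal sketch of that fix (the path and server name below are illustrative, not our actual configuration): rather than referencing the CA bundle in every <code>server</code> block, which makes the base64 decoder re-parse the same PEM file at each mention during startup, reference it only in the blocks that actually verify client certificates:</p><pre><code># Only this server block verifies client certificates, so only it
# pays the PEM base64-decoding cost at startup.
server {
    listen 443 ssl;
    server_name mtls.example.com;
    ssl_client_certificate /etc/nginx/client-ca.pem;
    ssl_verify_client on;
}</code></pre>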
    <div>
      <h3>Conclusion</h3>
      <a href="#conclusion">
        
      </a>
    </div>
    <p>Despite a few hiccups, this whole process turned out to be fairly smooth, thanks also to the rock-solid stability of the BoringSSL codebase, not to mention its extensive documentation. Not only did we end up with a much better and more easily maintainable system than we had before, but we also managed to <a href="https://github.com/google/boringssl/commits?author=vkrasnov">contribute</a> a <a href="https://github.com/google/boringssl/commits?author=ghedo">little</a> back to the open-source community.</p><p>As a final note, we’d like to thank the BoringSSL developers for the great work they’ve poured into the project and for the help they provided us along the way.</p> ]]></content:encoded>
            <category><![CDATA[SSL]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[TLS 1.3]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[TLS]]></category>
            <guid isPermaLink="false">2ur99McG9vFBRl1LPt71Mn</guid>
            <dc:creator>Alessandro Ghedini</dc:creator>
        </item>
    </channel>
</rss>