
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/">
    <channel>
        <title><![CDATA[ The Cloudflare Blog ]]></title>
        <description><![CDATA[ Get the latest news on how Cloudflare products are built and the technologies they use, and join the teams helping to build a better Internet. ]]></description>
        <link>https://blog.cloudflare.com</link>
        <atom:link href="https://blog.cloudflare.com/" rel="self" type="application/rss+xml"/>
        <language>en-us</language>
        <image>
            <url>https://blog.cloudflare.com/favicon.png</url>
            <title>The Cloudflare Blog</title>
            <link>https://blog.cloudflare.com</link>
        </image>
        <lastBuildDate>Sat, 04 Apr 2026 13:54:54 GMT</lastBuildDate>
        <item>
            <title><![CDATA[Zero Trust WARP: tunneling with a MASQUE]]></title>
            <link>https://blog.cloudflare.com/zero-trust-warp-with-a-masque/</link>
            <pubDate>Wed, 06 Mar 2024 14:00:15 GMT</pubDate>
            <description><![CDATA[ This blog discusses the introduction of MASQUE to Zero Trust WARP and how Cloudflare One customers will benefit from this modern protocol ]]></description>
            <content:encoded><![CDATA[ 
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3gjB6Xaz5umz7Thed17Fb8/831d6d87a94f651c4f4803a6444d0f5c/image5-11.png" />
            
            </figure>
    <div>
      <h2>Slipping on the MASQUE</h2>
      <a href="#slipping-on-the-masque">
        
      </a>
    </div>
    <p>In June 2023, we <a href="/masque-building-a-new-protocol-into-cloudflare-warp/">told you</a> that we were building a new protocol, <a href="https://datatracker.ietf.org/wg/masque/about/">MASQUE</a>, into WARP. MASQUE is a fascinating protocol that extends the capabilities of <a href="https://www.cloudflare.com/learning/performance/what-is-http3/">HTTP/3</a> and leverages the unique properties of the QUIC transport protocol to efficiently proxy IP and UDP traffic without sacrificing performance or privacy.</p><p>At the same time, we’ve seen rising demand from <a href="https://www.cloudflare.com/learning/security/glossary/what-is-zero-trust/">Zero Trust</a> customers for features and solutions that only MASQUE can deliver. All customers want WARP traffic to look like HTTPS to avoid detection and blocking by firewalls, while a significant number of customers also require FIPS-compliant encryption. We have something good here, and it’s been proven elsewhere (more on that below), so we are building MASQUE into Zero Trust WARP and will be making it available to all of our Zero Trust customers — at WARP speed!</p><p>This blog post highlights some of the key benefits our Cloudflare One customers will realize with MASQUE.</p>
    <div>
      <h2>Before the MASQUE</h2>
      <a href="#before-the-masque">
        
      </a>
    </div>
    <p>Cloudflare is on a mission to help build a better Internet. And it is a journey we’ve been on with our device client and WARP for almost five years. The precursor to WARP was the 2018 launch of <a href="/announcing-1111/">1.1.1.1</a>, the Internet’s fastest, privacy-first consumer DNS service. WARP was introduced in 2019 with the <a href="/1111-warp-better-vpn/">announcement</a> of the 1.1.1.1 service with WARP, a high-performance, secure consumer DNS and VPN solution. Then in 2020, we <a href="/introducing-cloudflare-for-teams">introduced</a> Cloudflare’s Zero Trust platform and the Zero Trust version of WARP to help any IT organization secure their environment, featuring a suite of tools we first built to protect our own IT systems. Zero Trust WARP with MASQUE is the next step in our journey.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1zi7uOkKEYkgp6dpBwQRo4/cb0147f0558ed92bb83a0f61a4ebbacc/image4-14.png" />
            
            </figure>
    <div>
      <h2>The current state of WireGuard</h2>
      <a href="#the-current-state-of-wireguard">
        
      </a>
    </div>
    <p><a href="https://www.wireguard.com/">WireGuard</a> was the perfect choice for the 1.1.1.1 with WARP service in 2019. WireGuard is fast, simple, and secure. It was exactly what we needed at the time to guarantee our users’ privacy, and it has met all of our expectations. If we went back in time to do it all over again, we would make the same choice.</p><p>But the other side of the simplicity coin is a certain rigidity. We find ourselves wanting to extend WireGuard to deliver more capabilities to our Zero Trust customers, but WireGuard is not easily extended. Capabilities such as better session management, advanced congestion control, or simply the ability to use FIPS-compliant cipher suites are not options within WireGuard; these capabilities would have to be added on as proprietary extensions, if it were even possible to do so.</p><p>Plus, while WireGuard is popular in VPN solutions, it is not standards-based, and therefore not treated like a first-class citizen in the world of the Internet, where non-standard traffic can be blocked, sometimes intentionally, sometimes not. WireGuard uses a non-standard port, port 51820, by default. Zero Trust WARP changes this to use port 2408 for the WireGuard tunnel, but it’s still a non-standard port. For our customers who control their own firewalls, this is not an issue; they simply allow that traffic. But many public Wi-Fi providers, and many of the approximately 7,000 ISPs in the world, know nothing about WireGuard and block these ports. We’ve also faced situations where the ISP does know what WireGuard is and blocks it intentionally.</p><p>This can wreak havoc for roaming Zero Trust WARP users at their local coffee shop, in hotels, on planes, or anywhere else with captive portals or public Wi-Fi access, and even sometimes with their local ISP. The user is expecting reliable access with Zero Trust WARP, and is frustrated when their device is blocked from connecting to Cloudflare’s global network.</p><p>Now we have another proven technology — MASQUE — which uses and extends HTTP/3 and QUIC. Let’s do a quick review of these to better understand why Cloudflare believes MASQUE is the future.</p>
    <div>
      <h2>Unpacking the acronyms</h2>
      <a href="#unpacking-the-acronyms">
        
      </a>
    </div>
    <p>HTTP/3 and QUIC are among the most recent advancements in the evolution of the Internet, enabling faster, more reliable, and more secure connections to endpoints like websites and APIs. Cloudflare worked closely with industry peers through the <a href="https://www.ietf.org/">Internet Engineering Task Force</a> on the development of <a href="https://datatracker.ietf.org/doc/html/rfc9000">RFC 9000</a> for QUIC and <a href="https://datatracker.ietf.org/doc/html/rfc9114">RFC 9114</a> for HTTP/3. The technical background on the basic benefits of HTTP/3 and QUIC is reviewed in our 2019 blog post where we announced <a href="/http3-the-past-present-and-future/">QUIC and HTTP/3 availability</a> on Cloudflare’s global network.</p><p>Most relevant for Zero Trust WARP, QUIC delivers better performance on low-latency or high packet loss networks thanks to packet coalescing and multiplexing. QUIC packets in separate contexts during the handshake can be coalesced into the same UDP datagram, thus reducing the number of receive and system interrupts. With multiplexing, QUIC can carry multiple HTTP sessions within the same UDP connection. Zero Trust WARP also benefits from QUIC’s high level of privacy, with TLS 1.3 designed into the protocol.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1ARWf5TO9CaOucOU527M2X/b53da149e40b8c28fc812552cfcaca26/image2-11.png" />
            
            </figure><p>MASQUE unlocks QUIC’s potential for proxying by providing the application layer building blocks to support efficient tunneling of TCP and UDP traffic. In Zero Trust WARP, MASQUE will be used to establish a tunnel over HTTP/3, delivering the same capability as WireGuard tunneling does today. In the future, we’ll be in a position to add more value using MASQUE, leveraging Cloudflare’s ongoing participation in the <a href="https://datatracker.ietf.org/wg/masque/about/">MASQUE Working Group</a>. This blog post is a good read for those interested in <a href="/unlocking-quic-proxying-potential/">digging deeper into MASQUE</a>.</p><p>OK, so Cloudflare is going to use MASQUE for WARP. What does that mean to you, the Zero Trust customer?</p>
    <div>
      <h2>Proven reliability at scale</h2>
      <a href="#proven-reliability-at-scale">
        
      </a>
    </div>
    <p>Cloudflare’s network today spans more than 310 cities in over 120 countries, and interconnects with over 13,000 networks globally. HTTP/3 and QUIC were introduced to the Cloudflare network in 2019, the HTTP/3 standard was <a href="/cloudflare-view-http3-usage/">finalized in 2022</a>, and HTTP/3 represented about <a href="https://radar.cloudflare.com/adoption-and-usage?dateStart=2023-01-01&amp;dateEnd=2023-12-31#http-1x-vs-http-2-vs-http-3">30% of all HTTP traffic on our network in 2023</a>.</p><p>We are also using MASQUE for <a href="/icloud-private-relay/">iCloud Private Relay</a> and other Privacy Proxy partners. The services that power these partnerships, from our Rust-based <a href="/introducing-oxy/">proxy framework</a> to our open source <a href="https://github.com/cloudflare/quiche">QUIC implementation</a>, are already deployed globally in our network and have proven to be fast, resilient, and reliable.</p><p>Cloudflare is already operating MASQUE, HTTP/3, and QUIC reliably at scale. So we want you, our Zero Trust WARP users and Cloudflare One customers, to benefit from that same reliability and scale.</p>
    <div>
      <h2>Connect from anywhere</h2>
      <a href="#connect-from-anywhere">
        
      </a>
    </div>
    <p>Employees need to be able to connect from anywhere that has an Internet connection. But that can be a challenge, as many security engineers configure firewalls and other networking devices to block all ports by default, opening only the most well-known and common ports. As we pointed out earlier, this can be frustrating for the roaming Zero Trust WARP user.</p><p>We want to fix that for our users, and remove that frustration. HTTP/3 and QUIC deliver the perfect solution. QUIC is carried on top of UDP (<a href="https://www.iana.org/assignments/protocol-numbers/protocol-numbers.xhtml">protocol number 17</a>), while HTTP/3 uses <a href="https://www.iana.org/assignments/service-names-port-numbers/service-names-port-numbers.xhtml">port 443</a> for encrypted traffic. Both are well known, widely used, and very unlikely to be blocked.</p><p>We want our Zero Trust WARP users to reliably connect wherever they might be.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/53RZc92rNIUWscFuLuA13w/098b18464be4ee893d51786ff74a5bc4/image1-13.png" />
            
            </figure>
    <div>
      <h2>Compliant cipher suites</h2>
      <a href="#compliant-cipher-suites">
        
      </a>
    </div>
    <p>MASQUE leverages <a href="https://datatracker.ietf.org/doc/html/rfc8446">TLS 1.3</a> with QUIC, which provides a number of cipher suite choices. WireGuard also uses standard cipher suites. But some standards are more, let’s say, standard than others.</p><p>NIST, the <a href="https://www.nist.gov/">National Institute of Standards and Technology</a> and part of the US Department of Commerce, does a tremendous amount of work across the technology landscape. Of interest to us is the NIST research into network security that results in <a href="https://csrc.nist.gov/pubs/fips/140-2/upd2/final">FIPS 140-2</a> and similar publications. NIST studies individual cipher suites and publishes lists of those they recommend for use, recommendations that become requirements for US Government entities. Many other customers, both government and commercial, use these same recommendations as requirements.</p><p>Our first MASQUE implementation for Zero Trust WARP will use <a href="https://www.cloudflare.com/learning/ssl/why-use-tls-1.3/">TLS 1.3</a> and FIPS-compliant cipher suites.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/25Qc8qdJd78bngZqpH0Pv7/1541929144b5ed4d85ccca36e0787957/image3-12.png" />
            
            </figure>
    <div>
      <h2>How can I get Zero Trust WARP with MASQUE?</h2>
      <a href="#how-can-i-get-zero-trust-warp-with-masque">
        
      </a>
    </div>
    <p>Cloudflare engineers are hard at work implementing MASQUE for the mobile apps, the desktop clients, and the Cloudflare network. Progress has been good, and we will open this up for beta testing early in the second quarter of 2024 for Cloudflare One customers. Your account team will be reaching out with participation details.</p>
    <div>
      <h2>Continuing the journey with Zero Trust WARP</h2>
      <a href="#continuing-the-journey-with-zero-trust-warp">
        
      </a>
    </div>
    <p>Cloudflare launched WARP five years ago, and we’ve come a long way since. This introduction of MASQUE to Zero Trust WARP is a big step, one that will immediately deliver the benefits noted above. But there will be more — we believe MASQUE opens up new opportunities to leverage the capabilities of QUIC and HTTP/3 to build innovative <a href="https://www.cloudflare.com/zero-trust/solutions/">Zero Trust solutions</a>. And we’re also continuing to work on other new capabilities for our Zero Trust customers. Cloudflare is committed to continuing our mission to help build a better Internet, one that is more private and secure, scalable, reliable, and fast. And if you would like to join us in this exciting journey, check out our <a href="https://www.cloudflare.com/careers/jobs/">open positions</a>.</p> ]]></content:encoded>
            <category><![CDATA[Security Week]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[Cloudflare Access]]></category>
            <category><![CDATA[Better Internet]]></category>
            <category><![CDATA[WARP]]></category>
            <category><![CDATA[HTTP3]]></category>
            <category><![CDATA[TLS 1.3]]></category>
            <category><![CDATA[Privacy]]></category>
            <category><![CDATA[QUIC]]></category>
            <category><![CDATA[1.1.1.1]]></category>
            <guid isPermaLink="false">5sDoFBGGZJbT4D9pftVhXY</guid>
            <dc:creator>Dan Hall</dc:creator>
        </item>
        <item>
            <title><![CDATA[A cost-effective and extensible testbed for transport protocol development]]></title>
            <link>https://blog.cloudflare.com/a-cost-effective-and-extensible-testbed-for-transport-protocol-development/</link>
            <pubDate>Tue, 14 Jan 2020 16:07:15 GMT</pubDate>
            <description><![CDATA[ At Cloudflare, we develop protocols at multiple layers of the network stack. In the past, we focused on HTTP/1.1, HTTP/2, and TLS 1.3. Now, we are working on QUIC and HTTP/3, which are still in IETF draft, but gaining a lot of interest. ]]></description>
            <content:encoded><![CDATA[ <p><i>This was originally published on </i><a href="https://calendar.perfplanet.com/2019/how-to-develop-a-practical-transport-protocol/"><i>Perf Planet's 2019 Web Performance Calendar</i></a><i>.</i></p><p>At Cloudflare, we develop protocols at multiple layers of the network stack. In the past, we focused on HTTP/1.1, HTTP/2, and TLS 1.3. Now, we are working on <a href="/http3-the-past-present-and-future/">QUIC and HTTP/3</a>, which are still in IETF draft, but gaining a lot of interest.</p><p>QUIC is a secure and multiplexed transport protocol that aims to perform better than TCP under some network conditions. It is specified in a family of documents: a transport layer which specifies packet format and basic state machine, recovery and congestion control, security based on TLS 1.3, and an HTTP application layer mapping, which is now called <a href="https://www.cloudflare.com/learning/performance/what-is-http3/">HTTP/3</a>.</p><p>Let’s focus on the transport and recovery layer first. This layer provides a basis for what is sent on the wire (the packet binary format) and how we send it reliably. It includes how to open the connection, how to handshake a new secure session with the help of TLS, how to send data reliably, and how to react when there is packet loss or reordering of packets. It also includes flow control and congestion control to interact well with other transport protocols in the same network. With confidence in the basic transport and recovery layer, we can take a look at higher application layers such as HTTP/3.</p><p>To develop such a transport protocol, we need multiple stages of the development environment. Since this is a network protocol, it’s best to test in an actual physical network to see how it works on the wire. We may start the development using localhost, but after some time we may want to send and receive packets with other hosts. 
We can build a lab with a couple of virtual machines, using VirtualBox, VMware, or even Docker. We also have a local testing environment with a Linux VM. But sometimes these have a limited network (localhost only) or are noisy due to other processes on the same host or virtual machines.</p><p>The next step is to have a test lab: typically an isolated network focused on protocol analysis only, consisting of dedicated x86 hosts. Lab configuration is particularly important for testing various cases - there is no one-size-fits-all scenario for protocol testing. For example, EDGE is still running in production mobile networks, but LTE is dominant and 5G deployment is in its early stages. WiFi is very common these days. We want to test our protocol in all those environments. Of course, we can't buy every type of machine or have a very expensive network simulator for every type of environment, so using cheap hardware and an open source OS where we can configure similar environments is ideal.</p>
    <div>
      <h2>The QUIC Protocol Testing lab</h2>
      <a href="#the-quic-protocol-testing-lab">
        
      </a>
    </div>
    <p>The goal of the QUIC testing lab is to aid transport layer protocol development. To develop a transport protocol we need to have a way to control our network environment and a way to get as many different types of debugging data as possible. Also we need to get metrics for comparison with other protocols in production.</p><p>The QUIC Testing Lab has the following goals:</p><ul><li><p><b><i>Help with multiple transport protocol development</i></b>: Developing a new transport layer requires many iterations, from building and validating packets as per protocol spec, to making sure everything works fine under moderate load, to very harsh conditions such as low bandwidth and high packet loss. We need a way to run tests with various network conditions reproducibly in order to catch unexpected issues.</p></li><li><p><b><i>Debugging multiple transport protocol development</i></b>: Recording as much debugging info as we can is important for fixing bugs. Looking into packet captures definitely helps but we also need a detailed debugging log of the server and client to understand the what and why for each packet. For example, when a packet is sent, we want to know why. Is this because there is an application which wants to send some data? Or is this a retransmit of data previously known as lost? Or is this a loss probe which is not an actual packet loss but sent to see if the network is lossy?</p></li><li><p><b><i>Performance comparison between each protocol</i></b>: We want to understand the performance of a new protocol by comparison with existing protocols such as TCP, or with a previous version of the protocol under development. 
Also we want to test with varying parameters such as changing the congestion control mechanism, changing various timeouts, or changing the buffer sizes at various levels of the stack.</p></li><li><p><b><i>Finding a bottleneck or errors easily</i></b>: Running tests, we may see an unexpected error - a transfer that timed out, ended with an error, or was corrupted on the client side - so the harness needs to make sure every test runs correctly, by using a checksum of the original file to compare with what is actually downloaded, or by checking various error codes at the protocol or API level.</p></li></ul><p>When we have a test lab with separate hardware, we get the following benefits:</p><ul><li><p>We can configure the testing lab without public Internet access - safe and quiet.</p></li><li><p>Handy access to hardware and its console for maintenance purposes, or for adding or updating hardware.</p></li><li><p>We can try other CPU architectures. For clients we use the Raspberry Pi for regular testing because it uses the ARM architecture (32-bit or 64-bit), similar to modern smartphones. Testing on ARM helps with compatibility testing before moving to a smartphone OS.</p></li><li><p>We can add a real smartphone for testing, such as an Android phone or iPhone. We can test with WiFi, but these devices also support Ethernet, so we can test them with a wired network for better consistency.</p></li></ul>
    <div>
      <h2>Lab Configuration</h2>
      <a href="#lab-configuration">
        
      </a>
    </div>
    <p>Here is a diagram of our QUIC Protocol Testing Lab:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1O1CdD682XoQE9Q68bPVkh/a63bd6b8a1bafa516719cfcf0a82c033/Screenshot-2019-07-01-00.35.06.png" />
            
            </figure><p>This is a conceptual diagram and we need to configure a switch for connecting each machine. Currently, we have Raspberry Pis (2 and 3) as an Origin and a Client, small Intel x86 boxes for the Traffic Shaper and Edge server, plus Ethernet switches for interconnectivity.</p><ul><li><p>Origin is simply serving HTTP and HTTPS test objects using a web server. Client may download a file from Origin directly to simulate a download direct from a customer's origin server.</p></li><li><p>Client will download a test object from Origin or Edge, using a different protocol. In a typical configuration Client connects to Edge instead of Origin, so this is to simulate an edge server in the real world. For TCP/HTTP we are using the curl command line client and for QUIC, <a href="https://github.com/cloudflare/quiche">quiche’s</a> http3_client with some modification.</p></li><li><p>Edge is running Cloudflare's web server to serve HTTP/HTTPS via TCP and also the QUIC protocol using quiche. Edge server is installed with the same Linux kernel used on Cloudflare's production machines in order to have the same low level network stack.</p></li><li><p>Traffic Shaper is sitting between Client and Edge (and Origin), controlling network conditions. Currently we are using FreeBSD and ipfw + dummynet. Traffic shaping can also be done using Linux's netem, which provides additional network simulation features.</p></li></ul><p>The goal is to run tests with various network conditions, such as bandwidth, latency, and packet loss upstream and downstream. The lab is able to run a plaintext HTTP test, but currently our focus of testing is HTTPS over TCP and HTTP/3 over QUIC. Since QUIC runs over UDP, both TCP and UDP traffic need to be controlled.</p>
    <div>
      <h2>Test Automation and Visualization</h2>
      <a href="#test-automation-and-visualization">
        
      </a>
    </div>
    <p>In the lab, we have a script installed in Client, which can run a batch of tests with various configuration parameters - for each test combination, we can define a test configuration, including:</p><ul><li><p>Network Condition - Bandwidth, Latency, Packet Loss (upstream and downstream)</p></li></ul><p>For example, using the netem traffic shaper we can simulate an LTE network as below (<a href="https://www.cloudflare.com/learning/cdn/glossary/round-trip-time-rtt/">RTT</a>=50ms, BW=22Mbps upstream and downstream, with a BDP-sized queue):</p>
            <pre><code>$ tc qdisc add dev eth0 root handle 1:0 netem delay 25ms
$ tc qdisc add dev eth0 parent 1:1 handle 10: tbf rate 22mbit buffer 68750 limit 70000</code></pre>
            <ul><li><p>Test Object sizes - 1KB, 8KB, … 32MB</p></li><li><p>Test Protocols: HTTPS (TCP) and QUIC (UDP)</p></li><li><p>Number of runs and number of requests in a single connection</p></li></ul><p>The test script outputs a CSV file of results for importing into other tools for data processing and visualization - such as Google Sheets, Excel, or even a Jupyter notebook. It can also post the results to a database (ClickHouse in our case), so we can query and visualize them.</p><p>Sometimes a whole test combination takes a long time - the current standard test set, with simulated 2G, 3G, LTE, and WiFi conditions and various object sizes repeated 10 times for each request, may take several hours to run. Large object testing on a slow network takes most of the time, so sometimes we also need to run a limited test (e.g. testing LTE-like conditions only for a smoke test) for quick debugging.</p>
    <div>
      <h3>Chart using Google Sheets:</h3>
      <a href="#chart-using-google-sheets">
        
      </a>
    </div>
    <p>The following comparison chart shows the total transfer time in msec for TCP vs QUIC for different network conditions. The QUIC protocol used here is a development version.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2umJkG3YBHNnxyD2tJRHQA/c96ef4601d8d20c9760b45ad321e6135/Screen-Shot-2020-01-13-at-3.09.41-PM.png" />
            
            </figure>
    <div>
      <h2>Debugging and performance analysis using a smartphone</h2>
      <a href="#debugging-and-performance-analysis-using-of-a-smartphone">
        
      </a>
    </div>
    <p>Mobile devices have become a crucial part of our day-to-day lives, so testing the new transport protocol on mobile devices is critically important for mobile app performance. To facilitate that, we need a mobile test app which will proxy data over the new transport protocol under development. With this we have the ability to analyze protocol functionality and performance on mobile devices under different network conditions.</p><p>Adding a smartphone to the testbed mentioned above gives an advantage in terms of understanding real performance issues. The major smartphone operating systems, iOS and Android, have quite different networking stacks. Adding a smartphone to the testbed gives us the ability to understand these operating system network stacks in depth, which aids new protocol designs.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/RLA6wO7vjRol9o6nlzj34/7c1a7904a379b1e8079853c35597173c/Screen-Shot-2020-01-13-at-3.52.03-PM.png" />
            
            </figure><p>The above figure shows the network block diagram of another similar lab testbed used for protocol testing, where a smartphone is connected both wired and wirelessly. A Linux netem based traffic shaper sits in between the client and server, shaping the traffic. Various networking profiles are fed to the traffic shaper to mimic real world scenarios. The client can be either an Android or iOS based smartphone; the server is a vanilla web server serving static files. Client, server, and traffic shaper are all connected to the Internet along with the private lab network for management purposes.</p><p>The above lab has mobile devices for both Android and iOS, installed with a test app built with proprietary client proxy software for proxying data over the new transport protocol under development. The test app also has the ability to make HTTP requests over TCP for comparison purposes.</p><p>The Android or iOS test app can be used to issue multiple HTTPS requests of different object sizes, sequentially and concurrently, using TCP and QUIC as the underlying transport protocol. Later, TTOTAL (total transfer time) of each HTTPS request is used to compare TCP and QUIC performance over different network conditions. One such comparison is shown below:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1Eh2Kl4C9RI40oKf8Z3CJY/a318afd23e177895137ff481bba2dfe1/Screen-Shot-2020-01-13-at-4.08.23-PM.png" />
            
            </figure><p>The table above shows the total transfer time taken for TCP and QUIC requests over an LTE network profile, fetching different objects with different concurrency levels using the test app. Here TCP goes over the native OS network stack and QUIC goes over the Cloudflare QUIC stack.</p><p>Debugging network performance issues is hard when it comes to mobile devices. By adding an actual smartphone into the testbed itself, we have the ability to take packet captures at different layers. These are very critical in analyzing and understanding protocol performance.</p><p>It's easy and straightforward to capture packets and analyze them using the tcpdump tool on x86 boxes, but it's a challenge to capture packets on iOS and Android devices. On iOS devices, ‘rvictl’ lets us capture packets on an external interface. But ‘rvictl’ has some drawbacks, such as inaccurate timestamps. Since we are dealing with millisecond level events, timestamps need to be accurate to analyze the root cause of a problem.</p><p>We can capture packets on internal loopback interfaces on jailbroken iPhones and rooted Android devices. Jailbreaking a recent iOS device is nontrivial. We also need to make sure that autoupdate of any sort is disabled on such a phone, otherwise it would disable the jailbreak and we would have to start the whole process again. With a jailbroken phone we have root access to the device, which lets us take packet captures as needed using tcpdump.</p><p>Packet captures taken using jailbroken iOS devices or rooted Android devices connected to the lab testbed help us analyze performance bottlenecks and improve protocol performance.</p><p>iOS and Android devices have different network stacks in their core operating systems. These packet captures also help us understand the network stacks of these mobile devices; for example, on iOS devices, packets punted through the loopback interface had a mysterious delay of 5 to 7ms.</p>
    <div>
      <h2>Conclusion</h2>
      <a href="#conclusion">
        
      </a>
    </div>
    <p>Cloudflare is actively involved in helping to drive forward the QUIC and HTTP/3 standards by testing and optimizing these new protocols in simulated real world environments. By simulating a wide variety of networks we are working on our mission of Helping Build a Better Internet. For everyone, everywhere.</p><p><i>Would like to thank SangJo Lee, Hiren Panchasara, Lucas Pardue and Sreeni Tellakula for their contributions.</i></p> ]]></content:encoded>
            <category><![CDATA[HTTP3]]></category>
            <category><![CDATA[QUIC]]></category>
            <category><![CDATA[TLS 1.3]]></category>
            <category><![CDATA[TCP]]></category>
            <guid isPermaLink="false">58abpUfUPAE7n3X9TDOpyt</guid>
            <dc:creator>Lohith Bellad</dc:creator>
            <dc:creator>Junho Choi</dc:creator>
        </item>
        <item>
            <title><![CDATA[Know your SCM_RIGHTS]]></title>
            <link>https://blog.cloudflare.com/know-your-scm_rights/</link>
            <pubDate>Thu, 29 Nov 2018 09:54:22 GMT</pubDate>
            <description><![CDATA[ As TLS 1.3 was ratified earlier this year, I was recollecting how we got started with it here at Cloudflare. We made the decision to be early adopters of TLS 1.3 a little over two years ago. It was a very important decision, and we took it very seriously. ]]></description>
            <content:encoded><![CDATA[ <p>As TLS 1.3 was ratified earlier <a href="/rfc-8446-aka-tls-1-3/">this year</a>, I was recollecting how we got started with it here at Cloudflare. We made the decision to be early adopters of <a href="/introducing-tls-1-3/">TLS 1.3</a> a little over two years ago. It was a very important decision, and we took it very seriously.</p><p>It is no secret that Cloudflare uses <a href="/end-of-the-road-for-cloudflare-nginx/">nginx</a> to handle user traffic. A little less known fact is that we have several instances of nginx running. I won’t go into detail, but there is one instance whose job is to accept connections on port 443, and proxy them to another instance of nginx that actually handles the requests. It has pretty limited functionality otherwise. We fondly call it nginx-ssl.</p><p>Back then we were using OpenSSL for TLS and crypto in nginx, but OpenSSL (and BoringSSL) had yet to announce a timeline for TLS 1.3 support, so we had to implement our own TLS 1.3 stack. Obviously we wanted an implementation that would not affect any customer or client that did not enable TLS 1.3. We also needed something that we could iterate on quickly, because the spec was very fluid back then, and something we could release frequently without worrying about the rest of the Cloudflare stack.</p><p>The obvious solution was to implement it on top of OpenSSL. The OpenSSL version we were using was 1.0.2, but not only were we looking ahead to replace it with version 1.1.0 or with <a href="/make-ssl-boring-again/">BoringSSL</a> (which we eventually did), it was so ingrained in our stack and so fragile that we wouldn’t be able to achieve our stated goals without risking serious bugs.</p><p>Instead, Filippo Valsorda and Brendan McMillion suggested that the easier path would be to implement TLS 1.3 on top of the Go TLS library and make a Go replica of nginx-ssl (go-ssl). 
Go is very easy to iterate and prototype in, has a powerful standard library, and we had a great pool of Go talent to draw on, so it made a lot of sense. Thus <a href="https://github.com/cloudflare/tls-tris">tls-tris</a> was born.</p><p>The question remained: how would we have Go handle only TLS 1.3 while letting nginx handle all prior versions of TLS?</p><p>And herein lies the problem. Both TLS 1.3 and older versions of TLS communicate on port 443, and it is common knowledge that only one application can listen on a given TCP port. That application is nginx, which would still handle the bulk of the TLS traffic. We could pipe all the TCP data into another connection in Go, effectively creating an additional proxy layer, but where is the fun in that? It also seemed a little inefficient.</p>
    <div>
      <h2>Meet SCM_RIGHTS</h2>
      <a href="#meet-scm_rights">
        
      </a>
    </div>
    <p>So how do you make two different processes, written in two different programming languages, share the same TCP socket?</p><p>Fortunately, Linux (or rather UNIX) provides us with just the tool that we need. You can use UNIX-domain sockets to pass file descriptors between applications, and, like everything else in UNIX, connections are files. Looking at <code>man 7 unix</code> we see the following:</p>
            <pre><code>   Ancillary messages
       Ancillary  data  is  sent and received using sendmsg(2) and recvmsg(2).
       For historical reasons the ancillary message  types  listed  below  are
       specified with a SOL_SOCKET type even though they are AF_UNIX specific.
       To send them  set  the  cmsg_level  field  of  the  struct  cmsghdr  to
       SOL_SOCKET  and  the cmsg_type field to the type.  For more information
       see cmsg(3).

       SCM_RIGHTS
              Send or receive a set of  open  file  descriptors  from  another
              process.  The data portion contains an integer array of the file
              descriptors.  The passed file descriptors behave as though  they
              have been created with dup(2).</code></pre>
            <blockquote><p>Technically you do not send “file descriptors”. The “file descriptors” you handle in the code are simply indices into the process’s local file descriptor table, which in turn points into the OS’s open file table, which finally points to the vnode representing the file. Thus the “file descriptor” observed by the other process will most likely have a different numeric value, despite pointing to the same file.</p></blockquote><p>We can also check <code>man 3 cmsg</code> as suggested, to find a handy example of how to use SCM_RIGHTS:</p>
            <pre><code>   struct msghdr msg = { 0 };
   struct cmsghdr *cmsg;
   int myfds[NUM_FD];  /* Contains the file descriptors to pass */
   int *fdptr;
   union {         /* Ancillary data buffer, wrapped in a union
                      in order to ensure it is suitably aligned */
       char buf[CMSG_SPACE(sizeof(myfds))];
       struct cmsghdr align;
   } u;

   msg.msg_control = u.buf;
   msg.msg_controllen = sizeof(u.buf);
   cmsg = CMSG_FIRSTHDR(&amp;msg);
   cmsg-&gt;cmsg_level = SOL_SOCKET;
   cmsg-&gt;cmsg_type = SCM_RIGHTS;
   cmsg-&gt;cmsg_len = CMSG_LEN(sizeof(int) * NUM_FD);
   fdptr = (int *) CMSG_DATA(cmsg);    /* Initialize the payload */
   memcpy(fdptr, myfds, NUM_FD * sizeof(int));</code></pre>
            <p>And that is what we decided to use. We let OpenSSL read the “Client Hello” message from an established TCP connection. If the “Client Hello” indicated TLS version 1.3, we would use SCM_RIGHTS to send the connection to the Go process. The Go process would in turn try to parse the rest of the “Client Hello”: if successful, it would proceed with the TLS 1.3 connection, and upon failure it would hand the file descriptor back to OpenSSL to handle as usual.</p><p>So how exactly do you implement something like that?</p><p>Since in our case we established that the C process will listen for TCP connections, our other process will have to listen on a UNIX socket for the connections C wants to forward.</p><p>For example in Go:</p>
            <pre><code>type scmListener struct {
	*net.UnixListener
}

type scmConn struct {
	*net.UnixConn
}

var path = "/tmp/scm_example.sock"

func listenSCM() (*scmListener, error) {
	syscall.Unlink(path)

	addr, err := net.ResolveUnixAddr("unix", path)
	if err != nil {
		return nil, err
	}

	ul, err := net.ListenUnix("unix", addr)
	if err != nil {
		return nil, err
	}

	err = os.Chmod(path, 0777)
	if err != nil {
		return nil, err
	}

	return &amp;scmListener{ul}, nil
}

func (l *scmListener) Accept() (*scmConn, error) {
	uc, err := l.AcceptUnix()
	if err != nil {
		return nil, err
	}
	return &amp;scmConn{uc}, nil
}</code></pre>
            <p>Then in the C process, for each connection we want to pass, we will connect to that socket first:</p>
            <pre><code>int connect_unix()
{
    struct sockaddr_un addr = {.sun_family = AF_UNIX,
                               .sun_path = "/tmp/scm_example.sock"};

    int unix_sock = socket(AF_UNIX, SOCK_STREAM, 0);
    if (unix_sock == -1)
        return -1;

    if (connect(unix_sock, (struct sockaddr *)&amp;addr, sizeof(addr)) == -1)
    {
        close(unix_sock);
        return -1;
    }

    return unix_sock;
}</code></pre>
            <p>To actually pass a file descriptor we utilize the example from <code>man 3 cmsg</code>:</p>
            <pre><code>int send_fd(int unix_sock, int fd)
{
    struct iovec iov = {.iov_base = ":)", // Must send at least one byte
                        .iov_len = 2};

    union {
        char buf[CMSG_SPACE(sizeof(fd))];
        struct cmsghdr align;
    } u;

    struct msghdr msg = {.msg_iov = &amp;iov,
                         .msg_iovlen = 1,
                         .msg_control = u.buf,
                         .msg_controllen = sizeof(u.buf)};

    struct cmsghdr *cmsg = CMSG_FIRSTHDR(&amp;msg);
    *cmsg = (struct cmsghdr){.cmsg_level = SOL_SOCKET,
                             .cmsg_type = SCM_RIGHTS,
                             .cmsg_len = CMSG_LEN(sizeof(fd))};

    memcpy(CMSG_DATA(cmsg), &amp;fd, sizeof(fd));

    return sendmsg(unix_sock, &amp;msg, 0);
}</code></pre>
            <p>Then to receive the file descriptor in Go:</p>
            <pre><code>func (c *scmConn) ReadFD() (*os.File, error) {
	msg, oob := make([]byte, 2), make([]byte, 128)

	_, oobn, _, _, err := c.ReadMsgUnix(msg, oob)
	if err != nil {
		return nil, err
	}

	cmsgs, err := syscall.ParseSocketControlMessage(oob[0:oobn])
	if err != nil {
		return nil, err
	} else if len(cmsgs) != 1 {
		return nil, errors.New("invalid number of cmsgs received")
	}

	fds, err := syscall.ParseUnixRights(&amp;cmsgs[0])
	if err != nil {
		return nil, err
	} else if len(fds) != 1 {
		return nil, errors.New("invalid number of fds received")
	}

	fd := os.NewFile(uintptr(fds[0]), "")
	if fd == nil {
		return nil, errors.New("could not open fd")
	}

	return fd, nil
}</code></pre>
            
    <div>
      <h2>Rust</h2>
      <a href="#rust">
        
      </a>
    </div>
    <p>We can also do this in Rust. Although the standard library in Rust does not yet support this use of UNIX sockets, it does let you call the C library via the <a href="https://rust-lang.github.io/libc/x86_64-unknown-linux-gnu/libc/">libc</a> crate. Warning: unsafe code ahead!</p><p>First we want to implement some UNIX socket functionality in Rust:</p>
            <pre><code>use libc::*;
use std::io::prelude::*;
use std::net::TcpStream;
use std::os::unix::io::FromRawFd;
use std::os::unix::io::RawFd;

fn errno_str() -&gt; String {
    let strerr = unsafe { strerror(*__error()) };
    let c_str = unsafe { std::ffi::CStr::from_ptr(strerr) };
    c_str.to_string_lossy().into_owned()
}

pub struct UNIXSocket {
    fd: RawFd,
}

pub struct UNIXConn {
    fd: RawFd,
}

impl Drop for UNIXSocket {
    fn drop(&amp;mut self) {
        unsafe { close(self.fd) };
    }
}

impl Drop for UNIXConn {
    fn drop(&amp;mut self) {
        unsafe { close(self.fd) };
    }
}

impl UNIXSocket {
    pub fn new() -&gt; Result&lt;UNIXSocket, String&gt; {
        match unsafe { socket(AF_UNIX, SOCK_STREAM, 0) } {
            -1 =&gt; Err(errno_str()),
            fd @ _ =&gt; Ok(UNIXSocket { fd }),
        }
    }

    pub fn bind(self, address: &amp;str) -&gt; Result&lt;UNIXSocket, String&gt; {
        assert!(address.len() &lt; 104);

        let mut addr = sockaddr_un {
            sun_len: std::mem::size_of::&lt;sockaddr_un&gt;() as u8,
            sun_family: AF_UNIX as u8,
            sun_path: [0; 104],
        };

        for (i, c) in address.chars().enumerate() {
            addr.sun_path[i] = c as i8;
        }

        match unsafe {
            unlink(&amp;addr.sun_path as *const i8);
            bind(
                self.fd,
                &amp;addr as *const sockaddr_un as *const sockaddr,
                std::mem::size_of::&lt;sockaddr_un&gt;() as u32,
            )
        } {
            -1 =&gt; Err(errno_str()),
            _ =&gt; Ok(self),
        }
    }

    pub fn listen(self) -&gt; Result&lt;UNIXSocket, String&gt; {
        match unsafe { listen(self.fd, 50) } {
            -1 =&gt; Err(errno_str()),
            _ =&gt; Ok(self),
        }
    }

    pub fn accept(&amp;self) -&gt; Result&lt;UNIXConn, String&gt; {
        match unsafe { accept(self.fd, std::ptr::null_mut(), std::ptr::null_mut()) } {
            -1 =&gt; Err(errno_str()),
            fd @ _ =&gt; Ok(UNIXConn { fd }),
        }
    }
}</code></pre>
            <p>And the code to extract the file descriptor:</p>
            <pre><code>#[repr(C)]
pub struct ScmCmsgHeader {
    cmsg_len: c_uint,
    cmsg_level: c_int,
    cmsg_type: c_int,
    fd: c_int,
}

impl UNIXConn {
    pub fn recv_fd(&amp;self) -&gt; Result&lt;RawFd, String&gt; {
        let mut iov = iovec {
            iov_base: std::ptr::null_mut(),
            iov_len: 0,
        };

        let mut scm = ScmCmsgHeader {
            cmsg_len: 0,
            cmsg_level: 0,
            cmsg_type: 0,
            fd: 0,
        };

        let mut mhdr = msghdr {
            msg_name: std::ptr::null_mut(),
            msg_namelen: 0,
            msg_iov: &amp;mut iov as *mut iovec,
            msg_iovlen: 1,
            msg_control: &amp;mut scm as *mut ScmCmsgHeader as *mut c_void,
            msg_controllen: std::mem::size_of::&lt;ScmCmsgHeader&gt;() as u32,
            msg_flags: 0,
        };

        let n = unsafe { recvmsg(self.fd, &amp;mut mhdr, 0) };

        if n == -1
            || scm.cmsg_len as usize != std::mem::size_of::&lt;ScmCmsgHeader&gt;()
            || scm.cmsg_level != SOL_SOCKET
            || scm.cmsg_type != SCM_RIGHTS
        {
            Err("Invalid SCM message".to_string())
        } else {
            Ok(scm.fd)
        }
    }
}</code></pre>
            
    <div>
      <h2>Conclusion</h2>
      <a href="#conclusion">
        
      </a>
    </div>
    <p>SCM_RIGHTS is a very powerful tool that can be used for many purposes. In our case we used it to introduce a new service in a non-obtrusive fashion. Other uses may be:</p><ul><li><p>A/B testing</p></li><li><p>Phasing out an old C-based service in favor of a new Go or Rust one</p></li><li><p>Passing connections from a privileged process to an unprivileged one</p></li></ul><p>And more.</p><p>You can find the full example <a href="https://github.com/vkrasnov/scm_sample">here</a>.</p> ]]></content:encoded>
            <category><![CDATA[TLS 1.3]]></category>
            <category><![CDATA[TLS]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[Linux]]></category>
            <category><![CDATA[TCP]]></category>
            <category><![CDATA[OpenSSL]]></category>
            <category><![CDATA[Go]]></category>
            <category><![CDATA[NGINX]]></category>
            <guid isPermaLink="false">6FPYG7UeMVoKyzz6mA3tpB</guid>
            <dc:creator>Vlad Krasnov</dc:creator>
        </item>
        <item>
            <title><![CDATA[Encrypt that SNI: Firefox edition]]></title>
            <link>https://blog.cloudflare.com/encrypt-that-sni-firefox-edition/</link>
            <pubDate>Thu, 18 Oct 2018 13:00:00 GMT</pubDate>
            <description><![CDATA[ A couple of weeks ago we announced support for the encrypted Server Name Indication (SNI) TLS extension (ESNI for short). As promised, our friends at Mozilla landed support for ESNI in Firefox Nightl.  ]]></description>
            <content:encoded><![CDATA[ <p>A couple of weeks ago we <a href="https://blog.cloudflare.com/esni/">announced</a> support for the <a href="https://blog.cloudflare.com/encrypted-sni/">encrypted Server Name Indication (SNI) TLS extension</a> (ESNI for short). As promised, our friends at Mozilla <a href="https://blog.mozilla.org/security/2018/10/18/encrypted-sni-comes-to-firefox-nightly/">landed support for ESNI in Firefox Nightly</a>, so you can now browse Cloudflare websites without leaking the plaintext SNI TLS extension to on-path observers (ISPs, coffee-shop owners, firewalls, …). Today we'll show you how to enable it and how to get full marks on our <a href="https://www.encryptedsni.com/">Browsing Experience Security Check</a>.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6zyycnqY5HRKXntoXJyn0g/87a427fca18ae269c4ff55c5939cdf94/esni-3_3.5x-1.png" />
          </figure>
    <div>
      <h3>Here comes the night</h3>
      <a href="#here-comes-the-night">
        
      </a>
    </div>
    <p>The first step is to download and install the very latest <a href="https://www.mozilla.org/firefox/channel/desktop/#nightly">Firefox Nightly build</a>, or, if you have Nightly already installed, make sure it’s up to date.</p><p>When we announced our support for ESNI we also created a test page: point your browser to <a href="https://encryptedsni.com/">https://encryptedsni.com</a>, which checks whether your browser / DNS configuration provides a more secure browsing experience by using secure DNS transport, DNSSEC validation, TLS 1.3 &amp; ESNI itself when it connects to our test page. Before you make any changes to your Firefox configuration, you might well see a result something like this:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2i7XABalwEra2pZXq0uaCQ/fbf7c039508cf3efe7fbfcf96d00ecca/encryptedsni_securedns_check_failed.png" />
          </figure><p>So, room for improvement! Next, head to the <a>about:config</a> page and look for the <code>network.security.esni.enabled</code> option (you can type the name in the search box at the top to filter out unrelated options), and switch it to true by double clicking on its value.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5McCvsEx9Zvrz9QmrLOFfP/eecc257ff561f4641404dbe32081a923/firefox_enable_encryptedsni.png" />
          </figure><p>Now <a href="https://blog.cloudflare.com/encrypted-sni/">encrypted SNI is enabled</a> and will be automatically used when you visit websites that support it (including all websites on Cloudflare).</p><p>It’s important to note that, as explained in our blog post, you must also enable support for DNS over HTTPS (also known as “Trusted Recursive Resolver” in Firefox) in order to avoid leaking the websites you visit through plaintext DNS queries. To do that with Firefox, you can simply follow the instructions on <a href="https://wiki.mozilla.org/Trusted_Recursive_Resolver">this page</a>.</p><p>Mozilla recommends setting up the Trusted Recursive Resolver in mode “2”, which means that if, for whatever reason, the DNS query to the TRR fails, it will be retried using the system’s DNS resolver. This is good to avoid breaking your web browsing due to DNS misconfigurations; however, Firefox will also fall back to the system resolver in case of a failed <a href="https://blog.cloudflare.com/dnssec-an-introduction/">DNSSEC</a> signature verification, which might affect users’ security and privacy, since the query will then be retried over plaintext DNS.</p><p>This is because any DNS failure, including a DNSSEC failure, is reported by the resolver with the DNS SERVFAIL return code, which is not granular enough for Firefox to differentiate between failure scenarios. We are looking into options to address this on our <a href="https://1.1.1.1/">1.1.1.1</a> resolver, in order to give Firefox and other DNS clients more information on the type of DNS failure experienced, so the fallback behaviour can be avoided when appropriate.</p><p>Now that everything is in place, go ahead and visit our <a href="https://www.encryptedsni.com/">Browsing Experience Security Check</a> page, and click on the “Check My Browser” button. You should now see results something like this:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4BVLoxvwwTnBfFoGCqJIjF/834b8b0771d392e90f23f762551c77e1/encryptedsni_securedns_check_passed.png" />
          </figure><p>Note: As you make changes in <code>about:config</code> to the ESNI &amp; TRR settings, you will need to hard refresh the check page to ensure a new TLS connection is established. We plan to fix this in a future update.</p><p>To test for encrypted SNI support on your Cloudflare domain, you can visit the “/cdn-cgi/trace” page, for example, <a href="https://www.cloudflare.com/cdn-cgi/trace">https://www.cloudflare.com/cdn-cgi/trace</a> (replace <code>www.cloudflare.com</code> with your own domain). If the browser encrypted the SNI you should see <code>sni=encrypted</code> in the trace output.</p>
    <div>
      <h3>On the wire</h3>
      <a href="#on-the-wire">
        
      </a>
    </div>
    <p>You can also go a step further and <a href="https://www.wireshark.org/docs/wsdg_html_chunked/ChSrcObtain.html">download</a> and <a href="https://www.wireshark.org/docs/wsdg_html_chunked/ChSrcBuildFirstTime.html">build</a> the latest <a href="https://www.wireshark.org/">Wireshark</a> code from its git repository (this feature hasn’t landed in a stable release yet so building from source is required for now).</p><p>This will allow you to see what the encrypted SNI extension looks like on the wire, while you visit a website that supports ESNI (e.g. <a href="https://cloudflare.com/">https://cloudflare.com</a>).</p><p>This is how a normal TLS connection looks with a plaintext SNI:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6ATpi1p2wqPGGhxqOjaD23/91b20294cf31b7327c413adcdc107acf/unencrypted_sni_pcap-2.png" />
          </figure><p>And here it is again, but this time with the encrypted SNI extension:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4sai8zbi9oq5x4jsktOtIB/0cb8547ed4aa4b9fb629c775f0a52dc0/encrypted_sni_pcap.png" />
          </figure>
    <div>
      <h3>Fallback</h3>
      <a href="#fallback">
        
      </a>
    </div>
    <p>As mentioned in our earlier post, there may be cases when the DNS record fetched by the client doesn’t match a valid key owned by the TLS server, in which case the connection using ESNI would simply fail to be established.</p><p>This might happen if the authoritative DNS server and the TLS server somehow get out of sync (for example, the TLS server rotates its own key, but the DNS record is not updated accordingly). But it could also be caused by external parties: for example, a caching DNS resolver that doesn’t properly respect the TTL set by the authoritative server might serve an outdated ESNI record even though the authoritative server is up-to-date. When this happens, Firefox will fail to connect to the website.</p><p>The way we work around this problem on the Cloudflare edge network is to make the TLS termination stack keep a list of valid ESNI keys for the past few hours, rather than just the most recent key. This allows the TLS server to decrypt the encrypted SNI sent by a client even if a slightly outdated DNS record was used to produce it. The lifetime of ESNI keys is a trade-off between service availability, which favors keeping as many keys around as possible, and the security and forward secrecy of ESNI, which on the contrary requires keeping as few keys as possible.</p><p>There is some room for experimentation while the encrypted SNI specification is not finalized yet, and one proposed solution would allow the server to detect the failure and serve a fresh ESNI record to the client, which in turn can try to connect again using the newly received record without having to disable ESNI completely. 
But while this might seem easy, in practice a lot of things need to be taken into account: the server needs to serve a certificate to the client, so the client can make sure the connection is not being intercepted, but at the same time the server doesn’t know which certificate to serve because it can’t decrypt and inspect the SNI, which introduces the need for some sort of “fallback certificate”. Additionally any such fallback mechanism would inevitably add an additional round-trip to the connection handshake which would negate one of the main performance improvements introduced by TLS 1.3 (that is, shorter handshakes).</p>
    <div>
      <h3>Conclusion</h3>
      <a href="#conclusion">
        
      </a>
    </div>
    <p>On our part, we’ll continue to experiment and evolve our implementation as the specification evolves, to make encrypted SNI work best for our customers and users.</p> ]]></content:encoded>
            <category><![CDATA[Birthday Week]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[Cryptography]]></category>
            <category><![CDATA[TLS 1.3]]></category>
            <category><![CDATA[DNS]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[Privacy]]></category>
            <guid isPermaLink="false">3CdowETfSYeI0Jt27lcRwf</guid>
            <dc:creator>Alessandro Ghedini</dc:creator>
        </item>
        <item>
            <title><![CDATA[Encrypt it or lose it: how encrypted SNI works]]></title>
            <link>https://blog.cloudflare.com/encrypted-sni/</link>
            <pubDate>Mon, 24 Sep 2018 12:01:00 GMT</pubDate>
            <description><![CDATA[ Today we announced support for encrypted SNI, an extension to the TLS 1.3 protocol that improves privacy of Internet users. ]]></description>
            <content:encoded><![CDATA[ <p></p><p>Today we announced <a href="/esni">support for encrypted SNI</a>, <a href="https://tools.ietf.org/html/draft-ietf-tls-esni">an extension</a> to the <a href="/rfc-8446-aka-tls-1-3/">TLS 1.3</a> protocol that improves privacy of Internet users by preventing on-path observers, including ISPs, coffee shop owners and firewalls, from intercepting the TLS Server Name Indication (SNI) extension and using it to determine which websites users are visiting.</p><p>Encrypted SNI, together with other Internet security features already offered by Cloudflare for free, will make it harder to censor content and track users on the Internet. Read on to learn how it works.</p>
    <div>
      <h3>SNWhy?</h3>
      <a href="#snwhy">
        
      </a>
    </div>
    <p>The TLS Server Name Indication (SNI) extension, <a href="https://tools.ietf.org/html/rfc3546">originally standardized back in 2003</a>, lets servers host multiple TLS-enabled websites on the same set of IP addresses, by requiring clients to specify which site they want to connect to during the initial TLS handshake. Without SNI the server wouldn’t know, for example, which certificate to serve to the client, or which configuration to apply to the connection.</p><p>The client adds the SNI extension, containing the hostname of the site it’s connecting to, to the ClientHello message, which it sends to the server at the start of the TLS handshake. Unfortunately the ClientHello is sent unencrypted, because client and server don’t share an encryption key at that point.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/15sFb2PJWXZn3WBxjYOa7p/0f42a188a08641aaee1e18b82ce160a9/tls13_unencrypted_server_name_indication-2.png" />
            
            </figure><p><i>TLS 1.3 with Unencrypted SNI</i></p><p>This means that an on-path observer (say, an ISP, coffee shop owner, or a firewall) can intercept the plaintext ClientHello message and determine which website the client is trying to connect to. That allows the observer to track which sites a user is visiting.</p><p>With encrypted SNI, the client encrypts the SNI extension, while the rest of the ClientHello is still sent in plaintext.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/596ebAAHizuuRJ4B7kuFEq/9cba63e4141b47dd0f5d0e8941157b58/tls13_encrypted_server_name_indication-1.png" />
            
            </figure><p><i>TLS 1.3 with Encrypted SNI</i></p><p>So how come the original SNI couldn’t be encrypted before, but now it can? Where does the encryption key come from if client and server haven’t negotiated one yet?</p>
    <div>
      <h3>If the chicken must come before the egg, where do you put the chicken?</h3>
      <a href="#if-the-chicken-must-come-before-the-egg-where-do-you-put-the-chicken">
        
      </a>
    </div>
    <p>As with <a href="https://datatracker.ietf.org/meeting/101/materials/slides-101-dnsop-sessa-the-dns-camel-01">many other Internet features</a> the answer is simply “DNS”.</p><p>The server publishes a <a href="https://en.wikipedia.org/wiki/Public-key_cryptography">public key</a> on a well-known DNS record, which can be fetched by the client before connecting (as it already does for A, AAAA and other records). The client then replaces the SNI extension in the ClientHello with an “encrypted SNI” extension, which is none other than the original SNI extension, but encrypted using a symmetric encryption key derived using the server’s public key, as described below. The server, which owns the private key and can derive the symmetric encryption key as well, can then decrypt the extension and therefore terminate the connection (or forward it to a backend server). Since only the client, and the server it’s connecting to, can derive the encryption key, the encrypted SNI cannot be decrypted and accessed by third parties.</p><p>It’s important to note that this is an extension to TLS version 1.3 and above, and doesn’t work with previous versions of the protocol. The reason is very simple: one of the changes introduced by TLS 1.3 (<a href="/you-get-tls-1-3-you-get-tls-1-3-everyone-gets-tls-1-3/">not without problems</a>) meant moving the Certificate message sent by the server to the encrypted portion of the TLS handshake (before 1.3, it was sent in plaintext). Without this fundamental change to the protocol, an attacker would still be able to determine the identity of the server by simply observing the plaintext certificate sent on the wire.</p><p>The underlying cryptographic machinery involves using the <a href="https://en.wikipedia.org/wiki/Diffie%E2%80%93Hellman_key_exchange">Diffie-Hellman key exchange algorithm</a> which allows client and server to generate a shared encryption key over an untrusted channel. 
The encrypted SNI encryption key is thus calculated on the client side by using the server’s public key (which is actually the public portion of a Diffie-Hellman semi-static key share) and the private portion of an ephemeral Diffie-Hellman share generated by the client itself on the fly and discarded immediately after the ClientHello is sent to the server. Additional data (such as some of the cryptographic parameters sent by the client as part of its ClientHello message) is also mixed into the cryptographic process for good measure.</p><p>The client’s ESNI extension will then include not only the actual encrypted SNI bits, but also the client’s public key share, the cipher suite it used for encryption, and the digest of the server’s ESNI DNS record. On the other side, the server uses its own private key share and the public portion of the client’s share to generate the encryption key and decrypt the extension.</p><p>While this may seem overly complicated, it ensures that the encryption key is cryptographically tied to the specific TLS session it was generated for, and cannot be reused across multiple connections. This prevents an attacker able to observe the encrypted extension sent by the client from simply capturing it and replaying it to the server in a separate session to unmask the identity of the website the user was trying to connect to (this is known as a “cut-and-paste” attack).</p><p>However, a compromise of the server’s private key would put all ESNI symmetric keys derived from it in jeopardy, allowing observers to decrypt previously collected encrypted data. This is why Cloudflare’s own SNI encryption implementation rotates the server’s keys every hour to improve forward secrecy, but keeps track of the keys for the previous few hours to allow for DNS caching and replication delays, so that clients with slightly outdated keys can still use ESNI without problems (eventually all keys are discarded and forgotten).</p>
    <div>
      <h3>But wait, DNS? For real?</h3>
      <a href="#but-wait-dns-for-real">
        
      </a>
    </div>
    <p>The observant reader might have realized that simply using <a href="https://www.cloudflare.com/learning/dns/what-is-dns/">DNS</a> (which is, by default, unencrypted) would make the whole encrypted SNI idea completely pointless: an on-path observer would be able to determine which website the client is connecting to by simply observing the plaintext DNS queries sent by the client itself, whether encrypted SNI was used or not.</p><p>But with the introduction of DNS features such as DNS over TLS (DoT) and DNS over HTTPS (DoH), and of public DNS resolvers that provide those features to their users (such as Cloudflare’s own <a href="/announcing-1111/">1.1.1.1</a>), DNS queries can now be encrypted and protected from the prying eyes of censors and trackers alike.</p><p>However, while responses from DoT/DoH DNS resolvers can be trusted, to a certain extent (evil resolvers notwithstanding), it might still be possible for a determined attacker to poison the resolver’s cache by intercepting its communication with the authoritative DNS server and injecting malicious data. That is, unless both the authoritative server and the resolver support <a href="https://www.cloudflare.com/dns/dnssec/">DNSSEC</a><sub>[1]</sub>. Incidentally, Cloudflare’s authoritative DNS servers can sign responses returned to resolvers, and the 1.1.1.1 resolver can verify them.</p>
    <div>
      <h3>What about the IP address?</h3>
      <a href="#what-about-the-ip-address">
        
      </a>
    </div>
    <p>While both DNS queries and the TLS SNI extension can now be protected from on-path attackers, it might still be possible to determine which websites users are visiting by simply looking at the destination IP addresses of the traffic originating from users’ devices. Some of our customers are protected from this to a certain degree by the fact that many Cloudflare domains share the same sets of addresses, but this is not enough, and more work is required to better protect end users. Stay tuned for more updates from Cloudflare on the subject in the future.</p>
    <div>
      <h3>Where do I sign up?</h3>
      <a href="#where-do-i-sign-up">
        
      </a>
    </div>
    <p>Encrypted SNI is now enabled for free on all Cloudflare zones using our name servers, so you don’t need to do anything to enable it on your Cloudflare website. On the browser side, our friends at Mozilla tell us that they expect to add encrypted SNI support this week to <a href="https://www.mozilla.org/firefox/channel/desktop/">Firefox Nightly</a> (keep in mind that the encrypted SNI spec is still under development, so it’s not stable just yet).</p><p>By visiting <a href="https://encryptedsni.com">encryptedsni.com</a> you can check how secure your browsing experience is. Are you using secure DNS? Is your resolver validating DNSSEC signatures? Does your browser support TLS 1.3? Did your browser encrypt the SNI? If the answer to all those questions is “yes”, then you can sleep peacefully knowing that your browsing is protected from prying eyes.</p>
    <div>
      <h3>Conclusion</h3>
      <a href="#conclusion">
        
      </a>
    </div>
    <p>Encrypted SNI, along with TLS 1.3, DNSSEC and DoT/DoH, plugs one of the few remaining holes that enable surveillance and censorship on the Internet. More work is still required to get to a surveillance-free Internet, but we are (slowly) getting there.</p><p>[1]: It's important to mention that DNSSEC could be disabled by BGP route hijacking between a DNS resolver and the TLD server. Last week we <a href="/rpki/">announced</a> our commitment to RPKI and if DNS resolvers and <a href="https://www.cloudflare.com/learning/dns/top-level-domain/">TLDs</a> also implement RPKI, this type of hijacking will be much more difficult.</p><p><a href="/subscribe/"><i>Subscribe to the blog</i></a><i> for daily updates on all our Birthday Week announcements.</i></p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4J9GqS6QbKbhfEaYr5EO0a/a524c0ca04e9a919cc052e55eb670c17/Cloudflare-Birthday-Week-3.png" />
            
            </figure><p></p> ]]></content:encoded>
            <category><![CDATA[Birthday Week]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[Cryptography]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[TLS]]></category>
            <category><![CDATA[TLS 1.3]]></category>
            <category><![CDATA[DNS]]></category>
            <category><![CDATA[Reliability]]></category>
            <category><![CDATA[Speed & Reliability]]></category>
            <guid isPermaLink="false">377BXgmPj3EgsOPaOyf1oG</guid>
            <dc:creator>Alessandro Ghedini</dc:creator>
        </item>
        <item>
            <title><![CDATA[A Detailed Look at RFC 8446 (a.k.a. TLS 1.3)]]></title>
            <link>https://blog.cloudflare.com/rfc-8446-aka-tls-1-3/</link>
            <pubDate>Fri, 10 Aug 2018 23:00:00 GMT</pubDate>
            <description><![CDATA[ TLS 1.3 (RFC 8446) was published today. This article provides a deep dive into the changes introduced in TLS 1.3 and its impact on the future of internet security. ]]></description>
            <content:encoded><![CDATA[ <p>For the last five years, the Internet Engineering Task Force (IETF), the standards body that defines internet protocols, has been working on standardizing the latest version of one of its most important security protocols: Transport Layer Security (TLS). TLS is used to secure the web (and much more!), providing encryption and ensuring the authenticity of every HTTPS website and API. The latest version of TLS, TLS 1.3 (<a href="https://www.rfc-editor.org/rfc/pdfrfc/rfc8446.txt.pdf">RFC 8446</a>), was published today. It is the first major overhaul of the protocol, bringing significant security and performance improvements. This article provides a deep dive into the changes introduced in TLS 1.3 and its impact on the future of internet security.</p>
    <div>
      <h3>An evolution</h3>
      <a href="#an-evolution">
        
      </a>
    </div>
    <p>One major way Cloudflare provides <a href="https://www.cloudflare.com/application-services/solutions/api-security/">security</a> is by supporting HTTPS for websites and web services such as APIs. With HTTPS (the “S” stands for secure) the communication between your browser and the server travels over an encrypted and authenticated channel. Serving your content over HTTPS instead of HTTP provides confidence to the visitor that the content they see is presented by the legitimate content owner and that the communication is safe from eavesdropping. This is a big deal in a world where online privacy is more important than ever.</p><p>The machinery under the hood that makes HTTPS secure is a protocol called TLS. It has its roots in a protocol called Secure Sockets Layer (SSL) developed in the mid-nineties at Netscape. By the end of the 1990s, Netscape handed SSL over to the IETF, who renamed it TLS and have been the stewards of the protocol ever since. Many people still refer to web encryption as SSL, even though the vast majority of services have switched over to supporting TLS only. The term SSL continues to have popular appeal and Cloudflare has kept the term alive through product names like <a href="/keyless-ssl-the-nitty-gritty-technical-details/">Keyless SSL</a> and <a href="/introducing-universal-ssl/">Universal SSL</a>.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/59tFn3me3Oe6OcjT24CqYF/22a662ccc88b06adc516449b8e2be657/image5.png" />
            
            </figure><p>In the IETF, protocols are called RFCs. TLS 1.0 was RFC 2246, TLS 1.1 was RFC 4346, and TLS 1.2 was RFC 5246. Today, TLS 1.3 was published as RFC 8446. RFCs are generally numbered in the order they’re published, so keeping 46 as part of the RFC number is a nice touch.</p>
    <div>
      <h3>TLS 1.2 wears parachute pants and shoulder pads</h3>
      <a href="#tls-1-2-wears-parachute-pants-and-shoulder-pads">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5p8wOtJF3L8LEprZDoO0z7/c4742d6066b73c33e4f5e98afddb83ff/image11.jpg" />
            
            </figure><p><a href="https://memegenerator.net/Mc-Hammer-Pants">MC Hammer</a>, like SSL, was popular in the 90s</p><p>Over the last few years, TLS has seen its fair share of problems. First of all, there have been problems with the code that implements TLS, including <a href="/answering-the-critical-question-can-you-get-private-ssl-keys-using-heartbleed/">Heartbleed</a>, <a href="https://www.imperialviolet.org/2014/09/26/pkcs1.html">BERserk</a>, <a href="https://gotofail.com/">goto fail;</a>, and more. These issues are not fundamental to the protocol and mostly resulted from a lack of testing. Tools like <a href="https://github.com/RUB-NDS/TLS-Attacker">TLS Attacker</a> and <a href="https://security.googleblog.com/2016/12/project-wycheproof.html">Project Wycheproof</a> have helped improve the robustness of TLS implementations, but the more challenging problems faced by TLS have had to do with the protocol itself.</p><p>TLS was designed by engineers using tools from mathematicians. Many of the early design decisions from the days of SSL were made using heuristics and an incomplete understanding of how to design robust security protocols. That said, this isn’t the fault of the protocol designers (Paul Kocher, Phil Karlton, Alan Freier, Tim Dierks, Christopher Allen and others), as the entire industry was still learning how to do this properly. When TLS was designed, formal papers on the design of secure authentication protocols like Hugo Krawczyk’s landmark <a href="http://webee.technion.ac.il/~hugo/sigma-pdf.pdf">SIGMA</a> paper were still years away. TLS was 90s crypto: it meant well and seemed cool at the time, but the modern cryptographer’s design palette has moved on.</p><p>Many of the design flaws were discovered using <a href="https://en.wikipedia.org/wiki/Formal_verification">formal verification</a>. Academics attempted to prove certain security properties of TLS, but instead found counter-examples that were turned into real vulnerabilities. 
These weaknesses range from the purely theoretical (<a href="https://access.redhat.com/articles/2112261">SLOTH</a> and <a href="https://eprint.iacr.org/2018/298.pdf">CurveSwap</a>), to feasible for highly resourced attackers (<a href="https://weakdh.org/imperfect-forward-secrecy-ccs15.pdf">WeakDH</a>, <a href="/logjam-the-latest-tls-vulnerability-explained/">LogJam</a>, <a href="https://censys.io/blog/freak">FREAK</a>, <a href="https://nakedsecurity.sophos.com/2016/08/25/anatomy-of-a-cryptographic-collision-the-sweet32-attack/">SWEET32</a>), to practical and dangerous (<a href="https://en.wikipedia.org/wiki/POODLE">POODLE</a>, <a href="https://robotattack.org/">ROBOT</a>).</p>
    <div>
      <h3>TLS 1.2 is slow</h3>
      <a href="#tls-1-2-is-slow">
        
      </a>
    </div>
    <p>Encryption has always been important online, but historically it was only used for things like logging in or sending credit card information, leaving most other data exposed. There has been a major trend in the last few years towards using HTTPS for all traffic on the Internet. This has the positive effect of protecting more of what we do online from eavesdroppers and <a href="/an-introduction-to-javascript-based-ddos/">injection attacks</a>, but has the downside that new connections get a bit slower.</p><p>For a browser and web server to agree on a key, they need to exchange cryptographic data. The exchange, called the “handshake” in TLS, has remained largely unchanged since TLS was standardized in 1999. The handshake requires two additional round-trips between the browser and the server before encrypted data can be sent (or one when resuming a previous connection). The additional cost of the TLS handshake for HTTPS results in a noticeable hit to latency compared to HTTP alone. This additional delay can negatively impact performance-focused applications.</p>
    <div>
      <h3>Defining TLS 1.3</h3>
      <a href="#defining-tls-1-3">
        
      </a>
    </div>
    <p>Unsatisfied with the outdated design of TLS 1.2 and its two-round-trip overhead, the IETF set about defining a new version of TLS. In August 2013, Eric Rescorla laid out <a href="https://www.ietf.org/proceedings/87/slides/slides-87-tls-5.pdf">a wishlist of features</a> for the new protocol.</p><p>After <a href="https://www.ietf.org/mail-archive/web/tls/current/msg20938.html">some debate</a>, it was decided that this new version of TLS was to be called TLS 1.3. The main issues that drove the design of TLS 1.3 were mostly the same as those presented five years ago:</p><ul><li><p>reducing handshake latency</p></li><li><p>encrypting more of the handshake</p></li><li><p>improving resiliency to cross-protocol attacks</p></li><li><p>removing legacy features</p></li></ul><p>The specification was shaped by volunteers through an open design process, and after four years of diligent work and vigorous debate, TLS 1.3 is now in its final form: RFC 8446. As adoption increases, the new protocol will make the internet both faster and more secure.</p><p>In this blog post I will focus on the two main advantages TLS 1.3 has over previous versions: security and performance.</p>
    <div>
      <h3>Trimming the hedges</h3>
      <a href="#trimming-the-hedges">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/57PQK3ofbneOYgOmlwNm4Y/e5c3c319903504002330efec0fc06db2/image10.jpg" />
            
            </figure><p><a href="https://commons.wikimedia.org/wiki/File:Williton_Highbridge_Nursery_topiary_garden.jpg">Creative Commons Attribution-Share Alike 3.0</a></p><p>In the last two decades, we as a society have learned a lot about how to write secure cryptographic protocols. The parade of cleverly-named attacks from POODLE to Lucky13 to SLOTH to LogJam showed that even TLS 1.2 contains antiquated ideas from the early days of cryptographic design. One of the design goals of TLS 1.3 was to correct previous mistakes by removing potentially dangerous design elements.</p>
    <div>
      <h4>Fixing key exchange</h4>
      <a href="#fixing-key-exchange">
        
      </a>
    </div>
    <p>TLS is a so-called “hybrid” cryptosystem. This means it uses both symmetric key cryptography (encryption and decryption keys are the same) and public key cryptography (encryption and decryption keys are different). Hybrid schemes are the predominant form of encryption used on the Internet and are used in <a href="https://en.wikipedia.org/wiki/Secure_Shell">SSH</a>, <a href="https://en.wikipedia.org/wiki/IPsec">IPsec</a>, <a href="https://en.wikipedia.org/wiki/Signal_Protocol">Signal</a>, <a href="https://www.wireguard.com/">WireGuard</a> and other protocols. In hybrid cryptosystems, public key cryptography is used to establish a shared secret between both parties, and the shared secret is used to create symmetric keys that can be used to encrypt the data exchanged.</p><p>As a general rule, public key crypto is slow and expensive (microseconds to milliseconds per operation) and symmetric key crypto is fast and cheap (nanoseconds per operation). Hybrid encryption schemes let you send a lot of encrypted data with very little overhead by only doing the expensive part once. Much of the work in TLS 1.3 has been about improving the part of the handshake, where public keys are used to establish symmetric keys.</p>
    <div>
      <h4>RSA key exchange</h4>
      <a href="#rsa-key-exchange">
        
      </a>
    </div>
    <p>The public key portion of TLS is about establishing a shared secret. There are two main ways of doing this with public key cryptography. The simpler way is with public-key encryption: one party encrypts the shared secret with the other party’s public key and sends it along. The other party then uses its private key to decrypt the shared secret and ... voila! They both share the same secret. This technique was discovered in 1977 by Rivest, Shamir and Adleman and is called RSA key exchange. In TLS’s RSA key exchange, the shared secret is decided by the client, who then encrypts it to the server’s public key (extracted from the certificate) and sends it to the server.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6vFfpyW3dU5vLUnbrzOGk8/5b9c70ff88d0da6d3210fc37f13e8184/image4.png" />
            
            </figure><p>The other form of key exchange available in TLS is based on another form of public-key cryptography, invented by Diffie and Hellman in 1976, so-called Diffie-Hellman key agreement. In Diffie-Hellman, the client and server both start by creating a public-private key pair. They then send the public portion of their key share to the other party. When each party receives the public key share of the other, they combine it with their own private key and end up with the same value: the pre-main secret. The server then uses a digital signature to ensure the exchange hasn’t been tampered with. This key exchange is called “ephemeral” if the client and server both choose a new key pair for every exchange.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6tjgSGvMVdzh3LZvt1HZT1/98031ef05fdc4353af60d27062fdb67a/image3.png" />
            
            </figure><p>Both modes result in the client and server having a shared secret, but RSA mode has a serious downside: it’s not <a href="/staying-on-top-of-tls-attacks/">forward secret</a>. That means that if someone records the encrypted conversation and then gets ahold of the RSA private key of the server, they can decrypt the conversation. This even applies if the conversation was recorded and the key is obtained some time well into the future. In a world where national governments are recording encrypted conversations and using exploits like <a href="https://en.wikipedia.org/wiki/Heartbleed">Heartbleed</a> to steal private keys, this is a realistic threat.</p><p>RSA key exchange has been problematic for some time, and not just because it’s not forward-secret. It’s also notoriously difficult to do correctly. In 1998, Daniel Bleichenbacher discovered a vulnerability in the way RSA encryption was done in SSL and created what’s called the “million-message attack,” which allows an attacker to perform an RSA private key operation with a server’s private key by sending a million or so well-crafted messages and looking for differences in the error codes returned. The attack has been refined over the years and in some cases only requires thousands of messages, making it feasible to do from a laptop. It was recently discovered that major websites (including facebook.com) were also vulnerable to a variant of Bleichenbacher’s attack called the <a href="https://robotattack.org/">ROBOT attack</a> as recently as 2017.</p><p>To reduce the risks caused by non-forward secret connections and million-message attacks, RSA encryption was removed from TLS 1.3, leaving ephemeral Diffie-Hellman as the only key exchange mechanism. Removing RSA key exchange brings other advantages, as we will discuss in the performance section below.</p>
    <div>
      <h4>Diffie-Hellman named groups</h4>
      <a href="#diffie-hellman-named-groups">
        
      </a>
    </div>
    <p>When it comes to cryptography, giving too many options leads to the wrong option being chosen. This principle is most evident when it comes to choosing Diffie-Hellman parameters. In previous versions of TLS, the choice of the Diffie-Hellman parameters was up to the participants. This resulted in some implementations choosing incorrectly, resulting in vulnerable implementations being deployed. TLS 1.3 takes this choice away.</p><p>Diffie-Hellman is a powerful tool, but not all Diffie-Hellman parameters are “safe” to use. The security of Diffie-Hellman depends on the difficulty of a specific mathematical problem called the <a href="https://en.wikipedia.org/wiki/Discrete_logarithm">discrete logarithm problem</a>. If you can solve the discrete logarithm problem for a set of parameters, you can extract the private key and break the security of the protocol. Generally speaking, the bigger the numbers used, the harder it is to solve the discrete logarithm problem. So if you choose small DH parameters, you’re in trouble.</p><p>The LogJam and WeakDH attacks of 2015 showed that many TLS servers could be tricked into using small numbers for Diffie-Hellman, allowing an attacker to break the security of the protocol and decrypt conversations.</p><p>Diffie-Hellman also requires the parameters to have certain other mathematical properties. In 2016, Antonio Sanso found an <a href="http://arstechnica.com/security/2016/01/high-severity-bug-in-openssl-allows-attackers-to-decrypt-https-traffic/">issue in OpenSSL</a> where parameters were chosen that lacked the right mathematical properties, resulting in another vulnerability.</p><p>TLS 1.3 takes the opinionated route, restricting the Diffie-Hellman parameters to ones that are known to be secure. However, it still leaves several options; permitting only one option makes it difficult to update TLS in case these parameters are found to be insecure some time in the future.</p>
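<p>A quick way to see why parameter size matters is to brute-force the discrete logarithm in a deliberately tiny group. The numbers below are invented for illustration; the named groups in TLS 1.3 use primes of 2048 bits or more (or elliptic curves), where this search is hopeless.</p>

```python
# Toy discrete-log attack: with small Diffie-Hellman parameters,
# recovering the private key is a simple exhaustive search.
p, g = 2039, 7        # tiny prime for illustration; never use in practice
secret = 1234         # the "private key" we will recover
public = pow(g, secret, p)

def brute_force_dlog(target, g, p):
    # Try exponents one by one until g^x mod p matches the public value
    x, acc = 0, 1
    while acc != target:
        acc = acc * g % p
        x += 1
    return x

recovered = brute_force_dlog(public, g, p)
```

<p>Against a properly sized group the same search would take longer than the age of the universe, which is exactly the property attacks like LogJam and WeakDH defeated by tricking servers into using small parameters.</p>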
    <div>
      <h3>Fixing ciphers</h3>
      <a href="#fixing-ciphers">
        
      </a>
    </div>
    <p>The other half of a hybrid crypto scheme is the actual encryption of data. This is done by combining an authentication code and a symmetric cipher for which each party knows the key. As I’ll describe, there are many ways to encrypt data, most of which are wrong.</p>
    <div>
      <h4>CBC mode ciphers</h4>
      <a href="#cbc-mode-ciphers">
        
      </a>
    </div>
    <p>In the last section we described TLS as a hybrid encryption scheme, with a public key part and a symmetric key part. The public key part is not the only one that has caused trouble over the years. The symmetric key portion has also had its fair share of issues. In any secure communication scheme, you need both encryption (to keep things private) and integrity (to make sure people don’t modify, add, or delete pieces of the conversation). Symmetric key encryption is used to provide both encryption and integrity, but in TLS 1.2 and earlier, these two pieces were combined in the wrong way, leading to security vulnerabilities.</p><p>An algorithm that performs symmetric encryption and decryption is called a symmetric cipher. Symmetric ciphers usually come in two main forms: block ciphers and stream ciphers.</p><p>A stream cipher takes a fixed-size key and uses it to create a stream of pseudo-random data of arbitrary length, called a key stream. To encrypt with a stream cipher, you take your message and combine it with the key stream by XORing each bit of the key stream with the corresponding bit of your message. To decrypt, you take the encrypted message and XOR it with the key stream again. Examples of pure stream ciphers are RC4 and ChaCha20. Stream ciphers are popular because they’re simple to implement and fast in software.</p><p>A block cipher is different from a stream cipher in that it only encrypts fixed-sized messages. If you want to encrypt a message that is shorter or longer than the block size, you have to do a bit of work. For shorter messages, you have to add some extra data (padding) to the end of the message. For longer messages, you can split your message into blocks the cipher can encrypt and then use a block cipher mode to combine the pieces. Alternatively, you can turn your block cipher into a stream cipher by encrypting a sequence of counters with the block cipher and using the result as the key stream; this is called “counter mode”. One popular way of encrypting arbitrary-length data with a block cipher is a mode called cipher block chaining (CBC).</p>
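<p>A minimal counter-mode sketch looks like this. SHA-256 stands in for the block cipher here (a real implementation would use something like AES), and the key, nonce, and message are invented for illustration.</p>

```python
import hashlib

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Counter mode: "encrypt" a running counter to produce a keystream.
    # SHA-256 is a stand-in for a real block cipher (illustration only).
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor(data: bytes, ks: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, ks))

key, nonce = b"k" * 32, b"n" * 12
msg = b"attack at dawn"
ct = xor(msg, keystream(key, nonce, len(msg)))
pt = xor(ct, keystream(key, nonce, len(ct)))   # decryption is the same XOR
```

<p>Note that decryption is the same operation as encryption: both sides simply XOR with the same keystream.</p>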
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5hHGcSStKo5bHWDn64PXHq/4801697c668fc061eab0c0ab57c2fdd8/image9.png" />
            
            </figure><p></p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/510G6uPdPGbwTJHcsGqPSc/726945c085f0912e1307e18ef4393563/image7.png" />
            
            </figure><p>In order to prevent people from tampering with data, encryption is not enough. Data also needs to be integrity-protected. For CBC-mode ciphers, this is done using something called a message-authentication code (MAC), which is like a fancy checksum with a key. Cryptographically strong MACs have the property that finding a MAC value that matches an input is practically impossible unless you know the secret key. There are two ways to combine MACs and CBC-mode ciphers. Either you encrypt first and then MAC the ciphertext, or you MAC the plaintext first and then encrypt the whole thing. In TLS, they chose the latter, MAC-then-Encrypt, which turned out to be the wrong choice.</p><p>You can blame this choice for <a href="https://www.youtube.com/watch?v=-_8-2pDFvmg">BEAST</a>, as well as a slew of padding oracle vulnerabilities such as <a href="http://www.isg.rhul.ac.uk/tls/Lucky13.html">Lucky 13</a> and <a href="https://eprint.iacr.org/2015/1129">Lucky Microseconds</a>. Read my previous post on <a href="/padding-oracles-and-the-decline-of-cbc-mode-ciphersuites/">padding oracle attacks</a> for a comprehensive explanation of these flaws. The interaction between CBC mode and padding was also the cause of the widely publicized <a href="/sslv3-support-disabled-by-default-due-to-vulnerability/">POODLE vulnerability</a> in SSLv3 and some implementations of TLS.</p><p>RC4 is a classic stream cipher designed by Ron Rivest (the “R” of RSA) that was broadly supported since the early days of TLS. In 2013, it was found to have <a href="http://www.isg.rhul.ac.uk/tls/">measurable biases</a> that could be leveraged to allow attackers to decrypt messages.</p>
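<p>The safer ordering, Encrypt-then-MAC, can be sketched with the standard library’s <code>hmac</code> module. The toy XOR “cipher” below is a stand-in for a real one; the point is that the receiver verifies the tag over the ciphertext and rejects tampered records before any decryption (or padding processing) happens, which is what closes the door on padding oracles.</p>

```python
import hashlib, hmac, os

enc_key, mac_key = os.urandom(32), os.urandom(32)   # independent keys

def stream(key, n):
    # Toy keystream (stand-in for a real cipher)
    out, i = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + i.to_bytes(4, "big")).digest()
        i += 1
    return out[:n]

def encrypt(data):
    return bytes(a ^ b for a, b in zip(data, stream(enc_key, len(data))))

# Encrypt-then-MAC: the tag covers the ciphertext, not the plaintext
ct = encrypt(b"hello")
tag = hmac.new(mac_key, ct, hashlib.sha256).digest()

def receive(ct, tag):
    expected = hmac.new(mac_key, ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("bad record MAC")   # rejected before any decryption
    return encrypt(ct)                       # XOR stream: same op decrypts
```
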
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7A6uMNALwNJtA0GSKOkxOL/5c9692baac1ab202e0e4d7f79e4ae8f2/image2.png" />
            
            </figure><p>AEAD Mode</p><p>In TLS 1.3, all the troublesome ciphers and cipher modes have been removed. You can no longer use CBC-mode ciphers or insecure stream ciphers such as RC4. The only type of symmetric crypto allowed in TLS 1.3 is a new construction called <a href="/it-takes-two-to-chacha-poly/">AEAD (authenticated encryption with additional data)</a>, which combines encryption and integrity into one seamless operation.</p>
    <div>
      <h3>Fixing digital signatures</h3>
      <a href="#fixing-digital-signatures">
        
      </a>
    </div>
    <p>Another important part of TLS is authentication. In every connection, the server authenticates itself to the client using a digital certificate, which has a public key. In RSA-encryption mode, the server proves its ownership of the private key by decrypting the pre-main secret and computing a MAC over the transcript of the conversation. In Diffie-Hellman mode, the server proves ownership of the private key using a digital signature. If you’ve been following this blog post so far, it should be easy to guess that this was done incorrectly too.</p>
    <div>
      <h4>PKCS#1v1.5</h4>
      <a href="#pkcs-1v1-5">
        
      </a>
    </div>
    <p>Daniel Bleichenbacher has made a living identifying problems with RSA in TLS. In 2006, he devised a pen-and-paper attack against RSA signatures as used in TLS. It was later discovered that major TLS implementations, including those of NSS and OpenSSL, <a href="https://www.ietf.org/mail-archive/web/openpgp/current/msg00999.html">were vulnerable to this attack</a>. This issue again had to do with how difficult it is to implement padding correctly, in this case, the PKCS#1 v1.5 padding used in RSA signatures. In TLS 1.3, PKCS#1 v1.5 is removed in favor of the newer design <a href="https://en.wikipedia.org/wiki/Probabilistic_signature_scheme">RSA-PSS</a>.</p>
    <div>
      <h4>Signing the entire transcript</h4>
      <a href="#signing-the-entire-transcript">
        
      </a>
    </div>
    <p>We described earlier how the server uses a digital signature to prove that the key exchange hasn’t been tampered with. In TLS 1.2 and earlier, the server’s signature only covers part of the handshake. The other parts of the handshake, specifically the parts that are used to negotiate which symmetric cipher to use, are not signed by the private key. Instead, a symmetric MAC is used to ensure that the handshake was not tampered with. This oversight resulted in a number of high-profile vulnerabilities (FREAK, LogJam, etc.). In TLS 1.3 these are prevented because the server signs the entire handshake transcript.</p>
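<p>The idea can be sketched in a few lines: hash every handshake message into one transcript hash and sign that. HMAC stands in here for the server’s asymmetric signature, and the message strings are invented for illustration.</p>

```python
import hashlib, hmac

server_key = b"server-signing-key"   # stand-in for the server's private key

def transcript_hash(messages):
    # One running hash over every handshake message, in order
    h = hashlib.sha256()
    for m in messages:
        h.update(m)
    return h.digest()

def sign(messages):
    # HMAC stands in for the asymmetric signature a real server would use
    return hmac.new(server_key, transcript_hash(messages), hashlib.sha256).digest()

handshake = [b"ClientHello: ciphers=AES-GCM,CHACHA20", b"ServerHello: cipher=AES-GCM"]
sig = sign(handshake)

# A downgrade attempt rewrites the advertised ciphers; because the whole
# transcript is signed, the signature no longer matches.
tampered = [b"ClientHello: ciphers=EXPORT-RC4", handshake[1]]
assert sign(tampered) != sig
```
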
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2gJewngm3kPtCgXDvl7O4q/340d28439e4eaac4cd176359dfa19900/image1.png" />
            
            </figure><p>The FREAK, LogJam and CurveSwap attacks took advantage of two things:</p><ol><li><p>the fact that intentionally weak ciphers from the 1990s (called export ciphers) were still supported in many browsers and servers, and</p></li><li><p>the fact that the part of the handshake used to negotiate which cipher was used was not digitally signed.</p></li></ol><p>The on-path attacker can swap out the supported ciphers (or supported groups, or supported curves) from the client with an easily crackable choice that the server supports. They then break the key and forge two finished messages to make both parties think they’ve agreed on a transcript.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6PHIoZN6sxu78eUdoLWcbz/4f4a1fbbff72785aa2be24c5c9872e8f/image13.png" />
            
            </figure><p>These attacks are called downgrade attacks, and they allow attackers to force two participants to use the weakest cipher supported by both parties, even if more secure ciphers are supported. In this style of attack, the perpetrator sits in the middle of the handshake and changes the list of supported ciphers advertised from the client to the server to only include weak export ciphers. The server then chooses one of the weak ciphers, and the attacker figures out the key with a brute-force attack, allowing the attacker to forge the MACs on the handshake. In TLS 1.3, this type of downgrade attack is impossible because the server now signs the entire handshake, including the cipher negotiation.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/P4S0oZBnuJvkG23ljrAmN/a658f4a88dddcf2019fa22567e150a53/image14.png" />
            
            </figure>
    <div>
      <h3>Better living through simplification</h3>
      <a href="#better-living-through-simplification">
        
      </a>
    </div>
    <p>TLS 1.3 is a much more elegant and secure protocol with the removal of the insecure features listed above. This hedge-trimming allowed the protocol to be simplified in ways that make it easier to understand, and faster.</p>
    <div>
      <h4>No more take-out menu</h4>
      <a href="#no-more-take-out-menu">
        
      </a>
    </div>
    <p>In previous versions of TLS, the main negotiation mechanism was the ciphersuite. A ciphersuite encompassed almost everything that could be negotiated about a connection:</p><ul><li><p>type of certificates supported</p></li><li><p>hash function used for deriving keys (e.g., SHA1, SHA256, ...)</p></li><li><p>MAC function (e.g., HMAC with SHA1, SHA256, …)</p></li><li><p>key exchange algorithm (e.g., RSA, ECDHE, …)</p></li><li><p>cipher (e.g., AES, RC4, ...)</p></li><li><p>cipher mode, if applicable (e.g., CBC)</p></li></ul><p>Ciphersuites in previous versions of TLS had grown into monstrously large alphabet soups. Examples of commonly used cipher suites are: DHE-RC4-MD5 or ECDHE-ECDSA-AES-GCM-SHA256. Each ciphersuite was represented by a code point in a table maintained by an organization called the Internet Assigned Numbers Authority (IANA). Every time a new cipher was introduced, a new set of combinations needed to be added to the list. This resulted in a combinatorial explosion of code points representing every valid choice of these parameters. It had become a bit of a mess.</p>
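<p>The combinatorial problem is easy to demonstrate. The option lists below are a small invented sample, but they show how legacy ciphersuites multiply while orthogonal lists, as in TLS 1.3, merely add:</p>

```python
from itertools import product

key_exchange = ["RSA", "DHE", "ECDHE"]
signature    = ["RSA", "DSA", "ECDSA"]
cipher       = ["RC4", "3DES-CBC", "AES-CBC", "AES-GCM"]
mac_or_hash  = ["MD5", "SHA1", "SHA256"]

# Legacy style: every valid combination needs its own IANA code point
legacy_suites = list(product(key_exchange, signature, cipher, mac_or_hash))
print(len(legacy_suites))   # 108 code points for just these options

# TLS 1.3 style: three orthogonal lists, negotiated independently
tls13_options = len(cipher) + len(key_exchange) + len(signature)
print(tls13_options)        # grows additively, not multiplicatively
```
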
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2W8o2jcAOgb3EUcGitVN8b/59d24160803d908ed4417494d57ea288/image8.png" />
            
            </figure><p>TLS 1.2</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/171emSynzRJ1MSpJv08jc1/18c165eb9c379b48418a5f25a47131e1/image16.png" />
            
            </figure><p></p><p>TLS 1.3</p><p>TLS 1.3 removes many of these legacy features, allowing for a clean split between three orthogonal negotiations:</p><ul><li><p>Cipher + HKDF Hash</p></li><li><p>Key Exchange</p></li><li><p>Signature Algorithm</p></li></ul>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5eHObkLXwOPPw9MEbaxSsc/8e20132ec65ebb83b1e43528711fe05d/image6.png" />
            
            </figure><p>This simplified cipher suite negotiation and radically reduced set of negotiation parameters open up a new possibility: the TLS 1.3 handshake latency can drop from two round trips to just one, a performance boost that should help make TLS 1.3 popular and widely adopted.</p>
    <div>
      <h3>Performance</h3>
      <a href="#performance">
        
      </a>
    </div>
    <p>When establishing a new connection to a server that you haven’t seen before, it takes two <a href="https://www.cloudflare.com/learning/cdn/glossary/round-trip-time-rtt/">round-trips</a> before data can be sent on the connection. This is not particularly noticeable in locations where the server and client are geographically close to each other, but it can make a big difference on mobile networks where latency can be as high as 200ms, an amount that is noticeable for humans.</p>
    <div>
      <h3>1-RTT mode</h3>
      <a href="#1-rtt-mode">
        
      </a>
    </div>
    <p>TLS 1.3 now has a radically simpler cipher negotiation model and a reduced set of key agreement options (no RSA, no user-defined DH parameters). This means that every connection will use a DH-based key agreement and the parameters supported by the server are likely easy to guess (ECDHE with X25519 or P-256). Because of this limited set of choices, the client can simply choose to send DH key shares in the first message instead of waiting until the server has confirmed which key shares it is willing to support. That way, the server can learn the shared secret and send encrypted data one round trip earlier. Chrome’s implementation of TLS 1.3, for example, sends an X25519 keyshare in the first message to the server.</p>
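As a toy illustration of why this works, here is a finite-field Diffie-Hellman exchange in Python with deliberately insecure demo parameters (real TLS 1.3 uses groups like X25519 or P-256, not these numbers): once the shares have crossed the wire, both sides hold the same secret with no further round trips, which is what lets the client send its share speculatively in the first message.

```python
import secrets

# Toy finite-field Diffie-Hellman with INSECURE demo parameters (p is far
# too small for real use). It illustrates why a client can send its key
# share in the very first message: once both sides have exchanged shares,
# each can compute the same shared secret with no further round trips.
p, g = 0xFFFFFFFB, 5   # hypothetical demo group, NOT a real TLS group

client_priv = secrets.randbelow(p - 2) + 1
client_share = pow(g, client_priv, p)   # sent along with the ClientHello

server_priv = secrets.randbelow(p - 2) + 1
server_share = pow(g, server_priv, p)   # sent back with the ServerHello

# Both sides now derive the same secret, one round trip after the start.
client_secret = pow(server_share, client_priv, p)
server_secret = pow(client_share, server_priv, p)
assert client_secret == server_secret
```

If the server did not support the guessed group, it would answer with a HelloRetryRequest instead, costing the extra round trip only in that rare case.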
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3E3tuAB7cL1jf7HXHLegge/461301a79e282e3034a6acc1bb537e49/image3.png" />
            
            </figure>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5xa8AA1zO4jZ4yjPcIukcM/a29464a13527710055cd6031cae54c92/image15.png" />
            
            </figure><p>In the rare situation that the server does not support one of the key shares sent by the client, the server can send a new message, the HelloRetryRequest, to let the client know which groups it supports. Because the list has been trimmed down so much, this is not expected to be a common occurrence.</p>
    <div>
      <h3>0-RTT resumption</h3>
      <a href="#0-rtt-resumption">
        
      </a>
    </div>
    <p>A further optimization was inspired by the <a href="https://docs.google.com/document/u/1/d/1g5nIXAIkN_Y-7XJW5K45IblHd_L2f5LTaDUDwvZ5L6g/edit">QUIC protocol</a>. It lets clients send encrypted data in their first message to the server, resulting in no additional latency cost compared to unencrypted HTTP. This is a big deal, and once TLS 1.3 is widely deployed, the encrypted web is sure to feel much snappier than before.</p><p>In TLS 1.2, there are two ways to resume a connection: <a href="/tls-session-resumption-full-speed-and-secure/">session ids and session tickets</a>. In TLS 1.3 these are combined to form a new mode called PSK (pre-shared key) resumption. The idea is that after a session is established, the client and server can derive a shared secret called the “resumption main secret”. This can either be stored on the server with an id (session id style) or encrypted by a key known only to the server (session ticket style). This session ticket is sent to the client and redeemed when resuming a connection.</p><p>For resumed connections, both parties share a resumption main secret so key exchange is not necessary except for providing forward secrecy. The next time the client connects to the server, it can take the secret from the previous session and use it to encrypt application data to send to the server, along with the session ticket. Something as amazing as sending encrypted data on the first flight does come with its drawbacks.</p>
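The secret-derivation step can be sketched with the HKDF extract-and-expand construction (RFC 5869) that underlies the TLS 1.3 key schedule. The inputs and label below are illustrative stand-ins, not the real TLS 1.3 schedule:

```python
import hmac, hashlib

# Sketch of HKDF (RFC 5869), the extract-and-expand construction that
# TLS 1.3 key derivation is built on. Labels and inputs here are
# illustrative, not the exact TLS 1.3 key schedule.
def hkdf_extract(salt: bytes, ikm: bytes) -> bytes:
    # Default salt is HashLen zero bytes when none is provided.
    return hmac.new(salt or b"\x00" * 32, ikm, hashlib.sha256).digest()

def hkdf_expand(prk: bytes, info: bytes, length: int) -> bytes:
    okm, block = b"", b""
    for i in range(1, -(-length // 32) + 1):   # ceil(length / HashLen) blocks
        block = hmac.new(prk, block + info + bytes([i]), hashlib.sha256).digest()
        okm += block
    return okm[:length]

# Derive a resumption-style secret from a made-up handshake secret.
master = hkdf_extract(b"", b"example shared secret from the handshake")
resumption_secret = hkdf_expand(master, b"illustrative resumption label", 32)
```

Both endpoints run the same derivation over the same transcript, so each arrives at the same resumption secret without sending it on the wire.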
    <div>
      <h3>Replayability</h3>
      <a href="#replayability">
        
      </a>
    </div>
    <p>There is no interactivity in 0-RTT data. It’s sent by the client, and consumed by the server without any interaction. This is great for performance, but comes at a cost: replayability. If an attacker captures a 0-RTT packet that was sent to a server, they can replay it and there’s a chance that the server will accept it as valid. This can have interesting negative consequences.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4aI0oRVRPjH8lmqKPfo2Uu/66828a933209d66d8f8ac0db0d77d54d/0-rtt-attack-_2x.png" />
            
            </figure><p>An example of dangerous replayed data is anything that changes state on the server. If you increment a counter, perform a database transaction, or do anything that has a permanent effect, it’s risky to put it in 0-RTT data.</p><p>As a client, you can try to protect against this by only putting “safe” requests into the 0-RTT data. In this context, “safe” means that the request won’t change server state. In HTTP, different methods are supposed to have different semantics. HTTP GET requests are supposed to be safe, so a browser can usually protect HTTPS servers against replay attacks by only sending GET requests in 0-RTT. Since most page loads start with a GET of “/”, this results in faster page load time.</p><p>Problems start to happen when data sent in 0-RTT is used for state-changing requests. To help protect against this failure case, TLS 1.3 also includes the time elapsed value in the session ticket. If this diverges too much, the client is either approaching the speed of light, or the value has been replayed. In either case, it’s prudent for the server to reject the 0-RTT data.</p><p>For more details about <a href="/introducing-0-rtt/">0-RTT, and the improvements to session resumption</a> in TLS 1.3, check out this previous blog post.</p>
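A server-side freshness check along these lines might look like the following sketch; the function name and tolerance window are hypothetical, but the idea is the one described above: compare the client-reported ticket age with what the server's own clock says it should be.

```python
import time

# Sketch of the server-side freshness check described above. Names and
# the tolerance window are hypothetical; a real implementation compares
# the client-reported ticket age against the server's own clock.
MAX_SKEW_SECONDS = 10.0

def accept_0rtt(ticket_issued_at: float, client_reported_age: float,
                now: float) -> bool:
    expected_age = now - ticket_issued_at
    # If the claimed age diverges too far from reality, the data may be a
    # replay: reject 0-RTT and fall back to a full 1-RTT handshake.
    return abs(expected_age - client_reported_age) <= MAX_SKEW_SECONDS

issued = time.time() - 5.0
assert accept_0rtt(issued, 5.0, time.time())            # fresh: accept
assert not accept_0rtt(issued, 5.0, time.time() + 60)   # stale: reject
```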
    <div>
      <h3>Deployability</h3>
      <a href="#deployability">
        
      </a>
    </div>
    <p>TLS 1.3 was a radical departure from TLS 1.2 and earlier, but in order to be deployed widely, it had to be backwards compatible with existing software. One of the reasons TLS 1.3 has taken so long to go from draft to final publication was the fact that some existing software (namely middleboxes) wasn’t playing nicely with the new changes. Even minor changes to the TLS 1.3 protocol that were visible on the wire (such as eliminating the redundant ChangeCipherSpec message, or bumping the version from 0x0303 to 0x0304) ended up causing connection issues for some people.</p><p>Despite the fact that future flexibility was built into the TLS spec, some implementations made incorrect assumptions about how to handle future TLS versions. The phenomenon responsible for these failures is called <i>ossification</i> and I explore it more fully in the context of TLS in my previous post about <a href="/why-tls-1-3-isnt-in-browsers-yet/">why TLS 1.3 isn’t deployed yet</a>. To accommodate these broken implementations, TLS 1.3 was modified to look a lot like TLS 1.2 session resumption (at least on the wire). This resulted in a much more functional, but less aesthetically pleasing protocol. This is the price you pay for upgrading one of the most widely deployed protocols online.</p>
    <div>
      <h3>Conclusions</h3>
      <a href="#conclusions">
        
      </a>
    </div>
    <p>TLS 1.3 is a modern security protocol built with modern tools like <a href="http://tls13tamarin.github.io/TLS13Tamarin/">formal</a> <a href="https://eprint.iacr.org/2016/081">analysis</a> that retains its backwards compatibility. It has been tested widely and iterated upon using real-world deployment data. It’s a cleaner, faster, and more secure protocol ready to become the de facto two-party encryption protocol online. Draft 28 of TLS 1.3 is enabled by default for <a href="/you-get-tls-1-3-you-get-tls-1-3-everyone-gets-tls-1-3/">all Cloudflare customers</a>, and we will be rolling out the final version soon.</p><p>Publishing TLS 1.3 is a huge accomplishment. It is one of the best recent examples of how it is possible to take 20 years of deployed legacy code and change it on the fly, resulting in a better Internet for everyone. TLS 1.3 has been debated and analyzed for the last three years and it’s now ready for prime time. Welcome, RFC 8446.</p>
            <category><![CDATA[Encryption]]></category>
            <category><![CDATA[TLS]]></category>
            <category><![CDATA[TLS 1.3]]></category>
            <category><![CDATA[HTTPS]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[Research]]></category>
            <category><![CDATA[Cryptography]]></category>
            <guid isPermaLink="false">2sBEBduE1Y7lYRV2e70E5m</guid>
            <dc:creator>Nick Sullivan</dc:creator>
        </item>
        <item>
            <title><![CDATA[You get TLS 1.3! You get TLS 1.3! Everyone gets TLS 1.3!]]></title>
            <link>https://blog.cloudflare.com/you-get-tls-1-3-you-get-tls-1-3-everyone-gets-tls-1-3/</link>
            <pubDate>Wed, 16 May 2018 17:28:07 GMT</pubDate>
            <description><![CDATA[ It's no secret that Cloudflare has been a big proponent of TLS 1.3, the newest edition of the TLS protocol that improves both speed and security, since we have made it available to our customers starting in 2016.  ]]></description>
            <content:encoded><![CDATA[ <p>It's no secret that Cloudflare has been a big proponent of <a href="/introducing-tls-1-3/">TLS 1.3</a>, the newest edition of the TLS protocol that improves both speed and security, since we have made it available to our customers starting in 2016. However, for the longest time TLS 1.3 has been a work-in-progress which meant that the feature was disabled by default in our customers’ dashboards, at least until <a href="/why-tls-1-3-isnt-in-browsers-yet/">all the kinks</a> in the protocol could be resolved.</p><p>With the specification <a href="https://www.ietf.org/mail-archive/web/tls/current/msg25837.html">finally nearing its official publication</a>, and after several years of work (as well as 28 draft versions), we are happy to announce that the TLS 1.3 feature on Cloudflare is out of beta and will be enabled by default for all new zones.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6uc6xUOlMR5H87HZejeEA1/9be622a3ef7c9d8160c1f084021a653c/Screen-Shot-2018-05-23-at-8.49.33-AM.png" />
            
            </figure><p>Custom image derived from <a href="https://youtu.be/8CAscBCdaQg?t=1m48s">YouTube video</a> courtesy of <a href="https://www.youtube.com/user/OWN">OWN</a></p><p>For our Free and Pro customers, not much changes: they already had TLS 1.3 enabled by default from the start. We have also decided to disable the <a href="/introducing-0-rtt/">0-RTT feature</a> by default for these plans (it was previously enabled by default as well), due to <a href="https://twitter.com/grittygrease/status/991750903295164416">its inherent security properties</a>. It will still be possible to explicitly enable it from the dashboard or the API (more on 0-RTT soon-ish in another blog post).</p><p>Our Business and Enterprise customers will now also get TLS 1.3 enabled by default for new zones (but will continue to have 0-RTT disabled). For existing Business customers that haven't made an explicit choice (that is, they haven't turned the feature on or off manually), we are also retroactively turning TLS 1.3 on.</p>
    <div>
      <h3>What happened to the middleboxes?</h3>
      <a href="#what-happened-to-the-middleboxes">
        
      </a>
    </div>
    <p>Back in December <a href="/why-tls-1-3-isnt-in-browsers-yet/">we blogged about why TLS 1.3 still wasn't being widely adopted</a>, the main reason being non-compliant middleboxes, network appliances designed to monitor and sometimes intercept HTTPS traffic.</p><p>Because the TLS protocol hadn’t been updated in a long time (TLS 1.2 came out back in 2008, with fairly minimal changes compared to TLS 1.1), these appliances had baked in wrong assumptions about the protocol. Some of the more invasive changes in TLS 1.3 broke those assumptions and caused the middleboxes to misbehave, in the worst cases breaking TLS connections passing through them.</p><p>Since then, new draft versions of the protocol have been discussed and published, providing additional measures (on top of the ones already adopted, like the “supported_versions” extension) to mitigate the impact caused by middleboxes. How, you ask? The trick was to modify the TLS 1.3 protocol to look more like previous TLS versions, but without sacrificing the improved performance and security benefits the new version provides.</p><p>For example, the ChangeCipherSpec handshake message, which in previous versions of the protocol was used to notify the receiving party that subsequent records would be encrypted, was originally removed from TLS 1.3 since it no longer served any purpose once the handshake algorithm was streamlined. But to avoid confusing middleboxes that expected to see the message on the wire, it was reintroduced, even though the receiving endpoint will just ignore it.</p><p>Another point of contention was the fact that some middleboxes expect to see the Certificate messages sent by servers (usually to identify the end server, sometimes for nefarious purposes), but since TLS 1.3 moved that message to the encrypted portion of the handshake, it became invisible to the snooping boxes. The trick there was to make the TLS 1.3 handshake look like it was <a href="/tls-session-resumption-full-speed-and-secure/">resuming a previous connection</a>, which means that, even in previous TLS versions, the Certificate message is omitted from plain text communication. This was achieved by populating the previously deprecated "session_id" field in the ClientHello message with a bogus value.</p><p>Adopting these changes meant that, while the protocol itself lost a bit of its original elegance (but without losing any of the security and performance), major browsers could finally enable TLS 1.3 by default for all of their users: <a href="https://www.chromestatus.com/features/5712755738804224">Chrome enabled TLS 1.3 by default in version 65</a> while <a href="https://www.mozilla.org/en-US/firefox/60.0/releasenotes/">Firefox did so in version 60</a>.</p>
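Putting the two compatibility tricks together, a sketch in Python (informal field names; a dict stands in for the real wire encoding) might look like:

```python
import secrets

# Illustrative sketch of TLS 1.3 "middlebox compatibility" choices: the
# ClientHello pins the legacy version field to TLS 1.2 and fills the
# deprecated session_id with a random value, so on the wire the handshake
# resembles a TLS 1.2 session resumption. Field names are informal.
def compat_client_hello(supported_versions):
    return {
        "legacy_version": (3, 3),                      # always "TLS 1.2" on the wire
        "legacy_session_id": secrets.token_bytes(32),  # bogus, ignored by the server
        "supported_versions": supported_versions,      # where negotiation really happens
    }

hello = compat_client_hello([(3, 4), (3, 3)])   # offer TLS 1.3, fall back to 1.2
```

A middlebox parsing this sees a familiar-looking TLS 1.2 resumption attempt, while a TLS 1.3 server reads the real offer out of the extension.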
    <div>
      <h3>Adoption</h3>
      <a href="#adoption">
        
      </a>
    </div>
    <p>We can now go back to our metrics and see what all of this means for general TLS 1.3 adoption.</p><p>Back in December, <a href="/why-tls-1-3-isnt-in-browsers-yet/">only 0.06% of TLS connections to Cloudflare websites used TLS 1.3</a>. Now, 5-6% do so, with this number steadily rising:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3jRyGwowWGzFifg16szfCW/2ef83fa44444667f43b8af1d773af02b/tls13_metric.png" />
            
            </figure><p>It’s worth noting that the current Firefox beta (v61) switched to using draft 28, from draft 23 (which Chrome also uses). The two draft versions are incompatible due to some minor wire changes that were adopted some time after draft 23 was published, but Cloudflare can speak both versions so there won’t be a dip in adoption once Firefox 61 becomes stable. Once the final TLS 1.3 version (that is, draft 28) becomes an official RFC, we will also support that alongside the previous draft versions, to avoid leaving behind slow-to-update clients.</p>
    <div>
      <h3>Conclusion</h3>
      <a href="#conclusion">
        
      </a>
    </div>
    <p>The tremendous work required to specify, implement and deploy TLS 1.3 is finally starting to bear fruit, and adoption will without a doubt keep steadily increasing for some time: at the end of 2017 <a href="/our-predictions-for-2018/">our CTO predicted</a> that by the end of 2018 more than 50% of HTTPS connections would happen over TLS 1.3, and given the recent developments, we are still confident that it is a reachable target.</p>
            <category><![CDATA[TLS]]></category>
            <category><![CDATA[TLS 1.3]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[Security]]></category>
            <guid isPermaLink="false">6d9VconNxieKz4HESOUy0</guid>
            <dc:creator>Alessandro Ghedini</dc:creator>
        </item>
        <item>
            <title><![CDATA[Why TLS 1.3 isn't in browsers yet]]></title>
            <link>https://blog.cloudflare.com/why-tls-1-3-isnt-in-browsers-yet/</link>
            <pubDate>Tue, 26 Dec 2017 20:30:00 GMT</pubDate>
            <description><![CDATA[ Upgrading a security protocol in an ecosystem as complex as the Internet is difficult. You need to update clients and servers and make sure everything in between continues to work correctly. The Internet is in the middle of such an upgrade right now.  ]]></description>
            <content:encoded><![CDATA[ 
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4X5RY5RIBFMtAApEoeXPqO/accd858973a81426156cbde6907b0b09/split-1.jpg" />
            
            </figure><p>Upgrading a security protocol in an ecosystem as complex as the Internet is difficult. You need to update clients and servers and make sure everything in between continues to work correctly. The Internet is in the middle of such an upgrade right now. Transport Layer Security (TLS), the protocol that keeps web browsing confidential (and many people persist in calling SSL), is getting its first major overhaul with the introduction of TLS 1.3. Last year, Cloudflare was the first major provider to support <a href="/introducing-tls-1-3/">TLS 1.3</a> by default on the server side. We expected the client side would follow suit and be enabled in all major browsers soon thereafter. It has been over a year since Cloudflare’s TLS 1.3 launch and still, none of the major browsers have enabled TLS 1.3 by default.</p><p>The reductive answer to why TLS 1.3 hasn’t been deployed yet is <i>middleboxes</i>: network appliances designed to monitor and sometimes intercept HTTPS traffic inside corporate environments and mobile networks. Some of these middleboxes implemented TLS 1.2 incorrectly and now that’s blocking browsers from releasing TLS 1.3. However, simply blaming network appliance vendors would be disingenuous. The deeper truth of the story is that TLS 1.3, as it was originally designed, was incompatible with the way the Internet has evolved over time. How and why this happened is the multifaceted question I will be exploring in this blog post.</p><p>To help support this discussion with data, we built a tool to help check if your network is compatible with TLS 1.3:<a href="https://tls13.mitm.watch/">https://tls13.mitm.watch/</a></p>
    <div>
      <h3>How version negotiation used to work in TLS</h3>
      <a href="#how-version-negotiation-used-to-work-in-tls">
        
      </a>
    </div>
    <p>The Transport Layer Security protocol, TLS, is the workhorse that enables secure web browsing with HTTPS. The TLS protocol was adapted from an earlier protocol, Secure Sockets Layer (SSL), in the late 1990s. TLS currently has three versions: 1.0, 1.1 and 1.2. The protocol is very flexible and can evolve over time in different ways. Minor changes can be incorporated as <a href="https://blog.susanka.eu/what-are-tls-extensions/">“extensions”</a> (such as OCSP and Certificate Transparency) while larger and more fundamental changes often require a new version. TLS 1.3 is by far the largest change to the protocol in its history, completely revamping the cryptography and introducing features like <a href="/introducing-0-rtt/">0-RTT</a>.</p><p>Not every client and server supports the same version of TLS—that would make it impossible to upgrade the protocol—so most support multiple versions simultaneously. In order to agree on a common version for a connection, clients and servers negotiate. TLS version negotiation is very simple. The client tells the server the <i>newest</i> version of the protocol that it supports and the server replies back with the newest version of the protocol that they both support.</p><p>Versions in TLS are represented as two-byte values. Since TLS was adapted from SSLv3, the literal version numbers used in the protocol are just increments of the minor version:</p><ul><li><p>SSLv3 is 3.0</p></li><li><p>TLS 1.0 is 3.1</p></li><li><p>TLS 1.1 is 3.2</p></li><li><p>TLS 1.2 is 3.3, etc.</p></li></ul><p>When connecting to a server with TLS, the client sends its highest supported version at the beginning of the connection:</p>
            <pre><code>(3, 3) → server</code></pre>
            <p>A server that understands the same version can reply back with a message starting with the same version bytes.</p>
            <pre><code>(3, 3) → server
client ← (3, 3)</code></pre>
            <p>Or, if the server only knows an older version of the protocol, it can reply with an older version. For example, if the server only speaks TLS 1.0, it can reply with:</p>
            <pre><code>(3, 3) → server
client ← (3, 1)</code></pre>
            <p>If the client supports the version returned by the server, then they continue using that version of TLS and establish a secure connection. If the client and server don’t share a common version, the connection fails.</p><p>This negotiation was designed to be future-compatible. If a client sends a higher version than the server supports, the server should still be able to reply with whatever version the server supports. For example, if a client sends (3, 8) to a modern-day TLS 1.2-capable server, it should just reply back with (3, 3) and the handshake will continue as TLS 1.2.</p><p>Pretty simple, right? As it turns out, some servers didn’t implement this correctly, and this led to a chain of events that exposed web users to a serious security vulnerability.</p>
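The rule described above fits in a few lines. This Python sketch (names are illustrative) mirrors the examples, using the same (major, minor) byte pairs:

```python
# Sketch of the pre-TLS 1.3 negotiation rule described above: the server
# replies with the newest version both sides support. Versions use the
# (major, minor) byte pairs from the text, e.g. (3, 3) for TLS 1.2.
SERVER_MAX = (3, 3)   # a TLS 1.2-capable server

def negotiate(client_version):
    # A correct server tolerates any higher, unknown client version and
    # simply answers with the newest version it knows.
    return min(client_version, SERVER_MAX)

assert negotiate((3, 3)) == (3, 3)   # both speak TLS 1.2
assert negotiate((3, 1)) == (3, 1)   # old client: answer with TLS 1.0
assert negotiate((3, 8)) == (3, 3)   # future client: still reply TLS 1.2
```

The faulty servers described next effectively failed the third case, disconnecting instead of replying with the version they knew.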
    <div>
      <h3>POODLE and the downgrade dance</h3>
      <a href="#poodle-and-the-downgrade-dance">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4fK2Yxs4OrlHhnpuqcLms4/b04f3af721d9c19fb5f19f40a31f4360/poodle-6102_960_720.jpg" />
            
            </figure><p><a href="https://pixabay.com/en/poodle-metal-iron-silhouette-6102/">CC0 Creative Commons</a></p><p>The last major upgrade to TLS was the introduction of TLS 1.2. During the roll-out of TLS 1.2 in browsers, it was found that some TLS 1.0 servers did not implement version negotiation correctly. When a client connected with a TLS connection advertising support for TLS 1.2, the faulty servers would disconnect instead of negotiating a version of TLS that they understood (like TLS 1.0).</p><p>Browsers had three options to deal with this situation:</p><ol><li><p>enable TLS 1.2, and a percentage of websites would stop working</p></li><li><p>delay the deployment of TLS 1.2 until these servers are fixed</p></li><li><p>retry with an older version of TLS if the connection fails</p></li></ol><p>One expectation that people have about their browsers is that when they are updated, websites keep working. The misbehaving servers were far too numerous to just break with an update, eliminating option 1). By this point, TLS 1.2 had been around for a year and these servers were still broken; waiting longer wasn’t going to solve the situation, eliminating option 2). This left option 3) as the only viable choice.</p><p>To both enable TLS 1.2 and keep their users happy, most browsers implemented what’s called an “insecure downgrade”. When a connection to a site failed, they would try again with TLS 1.1, then with TLS 1.0, then, if that failed, SSLv3. This downgrade logic “fixed” these broken servers... at the cost of slow connection establishment.</p><p>However, insecure downgrades are called <i>insecure</i> for a reason. Client downgrades are triggered by a specific type of network failure, one that can be easily spoofed. From the client’s perspective, there’s no way to tell whether this failure was caused by a faulty server or by an attacker on the network path of the connection. This means that network attackers can inject fake network failures and trick a client into connecting to a server with SSLv3, even if both support a newer protocol. At this point, there were no severe publicly known vulnerabilities in SSLv3, so this didn’t seem like a big problem. Then POODLE happened.</p><p>In October 2014, Bodo Möller published <a href="https://security.googleblog.com/2014/10/this-poodle-bites-exploiting-ssl-30.html">POODLE</a>, a serious vulnerability in the SSLv3 protocol that allows an in-network attacker to reveal encrypted information with minimal effort. Because TLS 1.0 was so widely deployed on both clients and servers, very few connections on the web used SSLv3, which should have kept them safe. However, the insecure downgrade feature made it possible for attackers to downgrade any connection to SSLv3 if both parties supported it (which most of them did). The availability of this “downgrade dance” vector turned the risk posed by POODLE from a curiosity into a serious issue affecting most of the web.</p>
    <div>
      <h3>Fixing POODLE and removing insecure downgrade</h3>
      <a href="#fixing-poodle-and-removing-insecure-downgrade">
        
      </a>
    </div>
    <p>The fixes for POODLE were:</p><ol><li><p>disable SSLv3 on the client or the server</p></li><li><p>enable a new TLS feature called <a href="https://tools.ietf.org/html/rfc7507">SCSV</a></p></li></ol><p>SCSV was conveniently proposed by Bodo Möller and Adam Langley a few months before POODLE. Simply put, with SCSV the client adds an indicator, called a <i>downgrade canary</i>, to its client hello message if it is retrying due to a network error. If the server supports a newer version than the one advertised in the client hello and sees this canary, it can assume a downgrade attack is happening and close the connection. This is a nice feature, but it is optional and requires both clients and servers to update, leaving many web users exposed.</p><p>Browsers immediately removed support for SSLv3, with very little impact other than breaking some SSLv3-only websites. Users with older browsers had to depend on web servers disabling SSLv3. Cloudflare <a href="/sslv3-support-disabled-by-default-due-to-vulnerability/">did this immediately</a> for its customers, and so did most sites, but even in late 2017 <a href="https://www.ssllabs.com/ssl-pulse/">over 10% of sites measured by SSL Pulse still supported SSLv3</a>.</p><p>Turning off SSLv3 was a feasible solution to POODLE because SSLv3 was not critical to the Internet. This raises the question: what happens if there’s a serious vulnerability in TLS 1.0? TLS 1.0 is still very widely used and depended on; turning it off in the browser would lock out around 10% of users. Also, despite SCSV being a nice solution to insecure downgrades, <a href="https://www.ssllabs.com/ssl-pulse/">many servers don’t support it</a>. The only option to ensure the safety of users against a future issue in TLS 1.0 is to disable insecure fallback.</p><p>After several years of bad performance due to clients having to reconnect multiple times, the majority of the websites that did not implement version negotiation correctly fixed their servers. This made it possible for some browsers to remove the insecure fallback: <a href="https://www.mozilla.org/en-US/firefox/37.0/releasenotes/">Firefox in 2015</a> and <a href="https://developers.google.com/web/updates/2016/03/chrome-50-deprecations#remove_insecure_tls_version_fallback">Chrome in 2016</a>. Because of this, Chrome and Firefox users are now in a safer position in the event of a new TLS 1.0 vulnerability.</p>
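A server-side SCSV check might be sketched like this; the structure is illustrative, and only the 0x5600 code point comes from RFC 7507:

```python
# Sketch of the server-side SCSV check (RFC 7507). A client appends the
# TLS_FALLBACK_SCSV value to its cipher list when retrying after a
# network error; a server that supports a newer version than the one
# offered treats the canary as evidence of a downgrade attack.
TLS_FALLBACK_SCSV = 0x5600   # code point assigned by RFC 7507
SERVER_MAX = (3, 3)          # this server speaks up to TLS 1.2

def check_downgrade(client_version, cipher_list):
    if TLS_FALLBACK_SCSV in cipher_list and client_version < SERVER_MAX:
        # The client fell back even though we support something newer:
        # close the connection instead of completing the handshake.
        raise ConnectionError("inappropriate fallback detected")

# A genuine old client offering TLS 1.0 without the canary is fine.
check_downgrade((3, 1), [0xC02B])
# A retried, downgraded connection carrying the canary is rejected.
try:
    check_downgrade((3, 1), [0xC02B, TLS_FALLBACK_SCSV])
    raise AssertionError("downgrade not detected")
except ConnectionError:
    pass
```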
    <div>
      <h3>Introducing a new version (again)</h3>
      <a href="#introducing-a-new-version-again">
        
      </a>
    </div>
    <p>Designing a protocol for future compatibility is hard; it’s easy to make mistakes if there isn’t a feedback mechanism. This is the case with TLS version negotiation: nothing breaks if you implement it wrong, just hypothetical future versions. There is no signal to developers that an implementation is flawed, so mistakes can happen without being noticed. That is, until a new version of the protocol is deployed and your implementation fails, but by then the code is deployed and it could take years for everyone to upgrade. The fact that some server implementations failed to handle a “future” version of TLS correctly should have been expected.</p><p>TLS version negotiation, though simple to explain, is actually an example of a protocol design antipattern. It demonstrates a phenomenon in protocol design called <i>ossification</i>. If a protocol is designed with a flexible structure, but that flexibility is never used in practice, some implementation is going to assume it is constant. Adam Langley <a href="https://www.imperialviolet.org/2014/02/27/tlssymmetriccrypto.html">compares the phenomenon to rust</a>. If you rarely open a door, its hinge is more likely to rust shut. Protocol negotiation in TLS is such a hinge.</p><p>Around the time of TLS 1.2’s deployment, discussions were beginning about designing an ambitious new version of TLS. It was going to be called TLS 1.3, and the version number was naturally chosen as 3.4 (or (3, 4)). By mid-2016, the TLS 1.3 draft had been through 15 iterations, and the version number and negotiation were basically set. At this point, browsers had removed the insecure fallback. It was assumed that servers that weren’t able to do TLS version negotiation correctly had learned the lessons of POODLE and finally implemented version negotiation correctly. This turned out not to be the case. Again.</p><p>When presented with a client hello with version 3.4, a large percentage of TLS 1.2-capable servers would disconnect instead of replying with 3.3. Internet scans by Hanno Böck, David Benjamin, SSL Labs, and others confirmed that the failure rate for TLS 1.3 was very high, over 3% in many measurements. This was the exact same situation faced during the upgrade to TLS 1.2. History was repeating itself.</p><p>This unexpected setback caused a crisis of sorts for the people involved in the protocol’s design. Browsers did not want to re-enable the insecure downgrade and fight the uphill battle of oiling the protocol negotiation joint again for the next half-decade. But without a downgrade, using TLS 1.3 as written would break 3% of the Internet for their users. What could be done?</p><p>The controversial choice was to accept a proposal from David Benjamin to make the first TLS 1.3 message (the client hello) look like TLS 1.2. The version number from the client was changed back to (3, 3) and a new “supported_versions” extension was introduced with the list of supported versions inside. The server would return a server hello starting with (3, 4) if TLS 1.3 was supported and (3, 3) or earlier otherwise. Draft 16 of TLS 1.3 contained this new and “improved” protocol negotiation logic.</p><p>And this worked. Servers were for the most part tolerant of this change and easily fell back to TLS 1.2, ignoring the new extension. But this was not the end of the story. Ossification is a fickle phenomenon. It not only affects clients and servers, but also everything on the network that interacts with a protocol.</p>
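The reworked negotiation can be sketched as follows, with illustrative names and the same (major, minor) version pairs as in the earlier examples: a TLS 1.3-aware server reads the real offer from the extension, while a legacy server ignores it and sees an ordinary TLS 1.2 client hello.

```python
# Sketch of version selection with the "supported_versions" extension:
# the client hello advertises (3, 3) in the legacy version field, and the
# real list travels in the extension. Function names are illustrative.
def select_version(server_supported, client_hello):
    ext = client_hello.get("supported_versions")
    if ext:  # TLS 1.3-aware client: pick the highest mutually supported
        for v in sorted(ext, reverse=True):
            if v in server_supported:
                return v
        return None  # no overlap: the handshake fails
    # Legacy client: fall back to the old minimum-of-maximums rule.
    return min(client_hello["legacy_version"], max(server_supported))

hello = {"legacy_version": (3, 3), "supported_versions": [(3, 4), (3, 3)]}
assert select_version({(3, 3), (3, 4)}, hello) == (3, 4)   # negotiates TLS 1.3
assert select_version({(3, 3)}, hello) == (3, 3)           # falls back to 1.2
```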
    <div>
      <h3>The real world is full of middleboxes</h3>
      <a href="#the-real-world-is-full-of-middleboxes">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/47fsWCuUoGyhvHj0mKoSxg/ae37b8a52bfff83574e2b59ce7d6481e/Ramen_Box_Wall_at_Anime_Midwest_2014.jpg" />
            
            </figure><p><a href="https://en.wikipedia.org/wiki/File:Ramen_Box_Wall_at_Anime_Midwest_2014.jpg">CC BY-SA 3.0</a></p><p>Readers of our blog may remember a post from earlier this year on <a href="/understanding-the-prevalence-of-web-traffic-interception/">HTTPS interception</a>. In it, we discussed a study that measured how many “secure” HTTPS connections were actually intercepted and decrypted by inspection software or hardware somewhere between the browser and the web server. There are also passive inspection middleboxes that parse TLS and either block or divert connections based on visible data, such as ISPs that use hostname information (<a href="https://en.wikipedia.org/wiki/Server_Name_Indication">SNI</a>) from TLS connections to block “banned” websites.</p><p>In order to inspect traffic, these network appliances need to implement some or all of TLS. Every new device that supports TLS introduces a TLS implementation that makes assumptions about how the protocol should act. The more implementations there are, the more likely it is that ossification occurs.</p><p>In February of 2017, both Chrome and Firefox started enabling TLS 1.3 for a percentage of their users. The results were unexpectedly horrible. A much higher percentage of TLS 1.3 connections were failing than expected. For some users, no matter what the website, TLS 1.2 worked but TLS 1.3 did not.</p><p>Success rates for TLS 1.3 Draft 18</p>
            <pre><code>Firefox &amp; Cloudflare
97.8% for TLS 1.2
96.1% for TLS 1.3

Chrome &amp; Gmail
98.3% for TLS 1.2
92.3% for TLS 1.3</code></pre>
            <p>After some investigation, it was found that some widely deployed middleboxes, both of the intercepting and passive variety, were causing connections to fail.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/69FNZ8lQO5MBFxPjmGPYfm/8d337aa54dd37e517bbf8be988022f49/image1-1.png" />
            
            </figure><p>Because TLS has generally looked the same throughout its history, some network appliances made assumptions about how the protocol would evolve over time. When faced with a new version that violated these assumptions, these appliances failed in various ways.</p><p>Some features of TLS that were changed in TLS 1.3 were merely cosmetic. Things like the ChangeCipherSpec, session_id, and compression fields that had been part of the protocol since SSLv3 were removed. These fields turned out to be treated as essential features of TLS by some of these middleboxes, and removing them caused connection failures to skyrocket.</p><p>If a protocol is in use for long enough with a similar enough format, people building tools around that protocol will assume the format is constant. This is often not an intentional choice by developers, but an unintended consequence of how a protocol is used in practice. Developers of network devices may not understand every protocol used on the Internet, so they often test against what they see on the network. If a part of a protocol that is supposed to be flexible never changes in practice, someone will assume it is a constant. This is more likely the more implementations are created.</p><p>It would be disingenuous to put all of the blame on the specific implementers of these middleboxes. Yes, they created faulty implementations of TLS, but another way to think about it is that the original design of TLS lent itself to this type of failure. Implementers implement to the reality of the protocol, not the intention of the protocol’s designer or the text of the specification. In complex ecosystems with multiple implementers, unused joints rust shut.</p><p>Removing features that had been part of a protocol for 20 years and expecting it to simply “work” was wishful thinking.</p>
    <div>
      <h3>Making TLS 1.3 work</h3>
      <a href="#making-tls-1-3-work">
        
      </a>
    </div>
    <p>At the IETF meeting in Singapore last month, a new change was proposed to TLS 1.3 to help resolve this issue. These changes were based on an idea from Kyle Nekritz of Facebook: make TLS 1.3 look like TLS 1.2 to middleboxes. This change re-introduces many of the parts of the protocol that were removed (session_id, ChangeCipherSpec, an empty compression field), and introduces some other changes that make TLS 1.3 look like TLS 1.2 in all the ways that matter to the broken middleboxes.</p><p>Several iterations of these changes were developed by <a href="/make-ssl-boring-again/">BoringSSL</a> and tested in Chrome over a series of months. Facebook also performed some similar experiments and the two teams converged on the same set of changes.</p><p>Chrome Experiment Success Rates</p>
            <pre><code>TLS 1.2 - 98.6%
Experimental changes (PR 1091 on Github) - 98.8%</code></pre>
            <p>Firefox Experiment Success Rates</p>
            <pre><code>TLS 1.2 - 98.42%
Experimental changes (PR 1091 on Github) - 98.37%</code></pre>
            <p>These experiments showed that it was possible to modify TLS 1.3 to be compatible with middleboxes. They also demonstrated the ossification phenomenon. As we described in a previous section, the only thing that could have rusted shut in the client hello, the version negotiation, rusted shut. This resulted in Draft 16, which moved the version negotiation to an extension. As an intermediary between the client and the server, middleboxes also care about the server hello message. This message had many more hinges that were thought to be flexible but turned out not to be. Almost all of them had rusted shut. The new “middlebox-friendly” changes took this reality into account. These experimental changes were incorporated into the specification in <a href="https://tools.ietf.org/html/draft-ietf-tls-tls13-22">TLS 1.3 Draft 22</a>.</p>
    <div>
      <h3>Making sure this doesn’t happen again</h3>
      <a href="#making-sure-this-doesnt-happen-again">
        
      </a>
    </div>
    <p>The original protocol negotiation mechanism is unrecoverably burnt. That means it likely can’t be used in a future version of TLS without significant breakage. However, many of the other protocol negotiation features are still flexible, such as ciphersuite selection and extensions. It would be great to keep it this way.</p><p>Last year, Adam Langley wrote a great blog post about cryptographic protocol design (<a href="https://www.imperialviolet.org/2016/05/16/agility.html">https://www.imperialviolet.org/2016/05/16/agility.html</a>) that covers similar ground to this blog post. In his post, he proposes the adage “have one joint and keep it well oiled.” This is great advice for protocol designers. Ossification is real, as we have seen with TLS 1.3.</p><p>Along these lines, David Benjamin proposed a way to keep the most important joints in TLS oiled. His <a href="https://tools.ietf.org/html/draft-ietf-tls-grease-00">GREASE proposal</a> for TLS is designed to throw in random values where a protocol should be tolerant of new values. If popular implementations intersperse unknown ciphers, extensions, and versions in real-world deployments, then implementers will be forced to handle them correctly. GREASE is like WD-40 for the Internet.</p><p>One thing to note is that GREASE is intended to prevent servers from ossifying, not middleboxes, so there is still potential for more types of greasing in TLS.</p><p>Even with GREASE, some servers were still being found intolerant to TLS 1.3 as late as December 2017. GREASE only identifies servers that are intolerant to unknown extensions, but some servers may still be intolerant to specific extension ids. For example, RSA's BSAFE library used the extension id 40 for an experimental extension called "extended_random", associated with a <a href="/how-the-nsa-may-have-put-a-backdoor-in-rsas-cryptography-a-technical-primer/">theorized NSA backdoor</a>. 
Extension id 40 happens to be the extension id used for TLS 1.3 key shares. David Benjamin <a href="https://www.ietf.org/mail-archive/web/tls/current/msg25168.html">reported</a> that this library is still in use by some printers, which causes them to be TLS 1.3 intolerant. Matthew Green has a <a href="https://blog.cryptographyengineering.com/2017/12/19/the-strange-story-of-extended-random/">detailed write-up</a> of this compatibility issue.</p>
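<p>The reserved GREASE code points follow a simple pattern, later standardized in RFC 8701: sixteen values of the form 0x0A0A, 0x1A1A, …, 0xFAFA, with no two values sharing a byte. A minimal sketch of how a client might generate them and sprinkle one into its cipher list (function names here are illustrative, not from any real TLS stack):</p>

```python
import random

def grease_values():
    """The 16 reserved GREASE code points: 0x0A0A, 0x1A1A, ..., 0xFAFA."""
    return [((n << 4) | 0x0A) * 0x0101 for n in range(16)]

def greased_ciphers(real_ciphers):
    """Prepend a random GREASE value; a correct peer must simply ignore it."""
    return [random.choice(grease_values())] + list(real_ciphers)

print([hex(v) for v in grease_values()[:3]])  # ['0xa0a', '0x1a1a', '0x2a2a']
```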
    <div>
      <h3>Help us understand the issue</h3>
      <a href="#help-us-understand-the-issue">
        
      </a>
    </div>
    <p>Cloudflare has been working with the Mozilla Firefox team to help measure this phenomenon, and Google and Facebook have been doing their own measurements. These experiments are hard to perform because the developers need to get protocol variants into browsers, which can take the entire release cycle of the browser (often months) to reach the users seeing issues. Cloudflare now supports the latest (hopefully) middlebox-compatible TLS 1.3 draft version (draft 22), but there’s always a chance we find a middlebox that is incompatible with this version.</p><p>To sidestep the browser release cycle, we took a shortcut. We <a href="https://tls13.mitm.watch">built a website</a> that you can use to see if TLS 1.3 works from your browser’s vantage point. This test was built by our Crypto intern, Peter Wu. It uses Adobe Flash, because that’s the only widespread cross-platform way to get access to raw sockets from a browser.</p><p>How it works:</p><ul><li><p>We cross-compiled our Golang-based TLS 1.3 client library (tls-tris) to JavaScript</p></li><li><p>We built a JavaScript library (called jssock) that connects tls-tris to the low-level socket interface exposed through Adobe Flash</p></li><li><p>We connect to a remote server using TLS 1.2 and TLS 1.3 and compare the results</p></li></ul><p>If there is a mismatch, we gather information from the connection on both sides and send it back for analysis.</p><p>If you see a failure, <a>let us know</a>! If you’re in a corporate environment, share the middlebox information; if you’re on a residential network, tell us who your ISP is.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7Axq62MNEuKIwQmxQtCTi6/d91d0f0e56f705c4f89271f3e3776b26/Screen-Shot-2017-12-26-at-11.14.28-AM.jpg" />
            
            </figure><p>We’re excited for TLS 1.3 to finally be enabled by default in browsers. This experiment will hopefully help prove that the latest changes make it safe for users to upgrade.</p> ]]></content:encoded>
            <category><![CDATA[TLS]]></category>
            <category><![CDATA[TLS 1.3]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[Research]]></category>
            <category><![CDATA[Cryptography]]></category>
            <guid isPermaLink="false">5kv7ooxy0pFmxIAkvDxFsB</guid>
            <dc:creator>Nick Sullivan</dc:creator>
        </item>
        <item>
            <title><![CDATA[TLS 1.3 is going to save us all, and other reasons why IoT is still insecure]]></title>
            <link>https://blog.cloudflare.com/why-iot-is-insecure/</link>
            <pubDate>Sun, 24 Dec 2017 16:57:44 GMT</pubDate>
            <description><![CDATA[ As I’m writing this, four DDoS attacks are ongoing and being automatically mitigated by Gatebot. Cloudflare’s job is to get attacked. Our network gets attacked constantly. ]]></description>
            <content:encoded><![CDATA[ <p>As I’m writing this, four DDoS attacks are ongoing and being automatically mitigated by <a href="/meet-gatebot-a-bot-that-allows-us-to-sleep/">Gatebot</a>. Cloudflare’s job is to get attacked. Our network gets attacked constantly.</p><p>Around the fall of 2016, we started seeing DDoS attacks that looked a little <a href="/say-cheese-a-snapshot-of-the-massive-ddos-attacks-coming-from-iot-cameras/">different than usual</a>. One attack we saw around that time had traffic coming from 52,467 unique IP addresses. The clients weren’t servers or desktop computers; when we tried to connect to the clients over port 80, we got the login pages to CCTV cameras.</p><p>Obviously it’s important to lock down IoT devices so that they can’t be co-opted into evil botnet armies, but when we talk to some IoT developers, we hear a few concerning security patterns. We’ll dive into two problematic areas and their solutions: software updates and TLS.</p>
    <div>
      <h3>The Trouble With Updates</h3>
      <a href="#the-trouble-with-updates">
        
      </a>
    </div>
    <p>With PCs, the end user is ultimately responsible for securing their devices. People understand that they need to update their computers and phones. <a href="https://www.macrumors.com/2017/01/05/ios-10-installed-on-76-percent-of-ios-devices/">Just 4 months after Apple released iOS 10, it was installed on 76% of active devices</a>.</p><p>People just don’t know that they are supposed to update IoT <i>things</i> the way they update their computers, because they’ve never had to update those things in the past. My parents are never going to install a software update for their thermometer.</p><p>And the problem gets worse over time. The longer a device stays on an older software version, the less likely it will be compatible with the newer version. At some point, an update may not be possible anymore. This is a very real concern, as the shelf life of a connected thing can be 10 years in the case of a kitchen appliance - how often do you buy a refrigerator?</p><p>This is if the device can be patched at all. First, devices that are low on battery are programmed not to receive updates, because updating is too draining on the battery. Second, IoT devices are too lightweight to run a full operating system; they run just a compiled binary on firmware, which means there’s a limit to the code that can later be pushed to them. Some devices cannot receive specific patches.</p><p>The other thing we hear about updates from IoT developers is that they are often afraid to push a new update, because it could mean breaking hundreds of thousands of devices at once.</p><p>All this may not seem like a big deal - ok, so a toaster can get hacked, so what - but two very real things are at stake. First, every device that’s an easy target makes it easier to turn other applications into targets. 
Second, once someone is sitting on a device, they are inside your network, which puts at stake any traffic sent over the wire.</p><p>The security model that worked for the PC doesn’t work for IoT — the end user can’t be responsible, and patching isn’t reliable. We need something else. What’s the solution?</p><p>Traffic to an IoT device passes through many different networks: the transit provider from the application server, the <a href="https://www.cloudflare.com/learning/cdn/what-is-a-cdn/">content delivery network</a> used to deliver device traffic, and the ISP to the building where the device sits.</p><p>It is at those network layers that protection can be added. As IoT device traffic moves through these networks, packets can be filtered to let in only good traffic. Even if a device is running vulnerable code, filters added at the network level can keep hackers out.</p>
    <div>
      <h3>The Trouble With TLS</h3>
      <a href="#the-trouble-with-tls">
        
      </a>
    </div>
    <p>TLS is used in two ways in IoT devices. First, TLS is used to encrypt data in transit. This provides data privacy and makes it harder to reverse engineer the communications used by the device. Second, devices store client <a href="https://www.cloudflare.com/application-services/products/ssl/">TLS certificates</a> that are used to authenticate the devices to the application, making it one step harder to fake a device.</p><p>There are three problems developers run into when they want to implement TLS in IoT. The first is that while IoT traffic needs to be quick and lightweight, TLS adds an additional two round trips to the start of every session. The second is that certificates can be large files, and device memory is limited in IoT. And the third is that some of the protocols being developed for IoT are plaintext by default.</p>
    <div>
      <h3>TLS Isn’t Lightweight</h3>
      <a href="#tls-isnt-lightweight">
        
      </a>
    </div>
    <p>IoT devices run on low-power chips. An IoT device may have only 256 or 512 KB of RAM and often needs to conserve battery. Such devices constantly send and receive small amounts of data. Imagine an Internet-connected wind sensor: it measures wind speed and, every 30 seconds, sends the new wind speed to the application server. It’s just a few bytes of data it needs to get over the wire, and it wants to do so with as little overhead as possible to conserve RAM and battery life.</p><p>Here’s an HTTP POST to do that:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/10XN1xsCbML3sq6IUtHQlk/dd09f91c6593270de299f38e9490caad/Screen-Shot-2017-12-23-at-8.39.11-PM.png" />
            
            </figure><p>But let’s say the same device is going to use TLS. Here’s what the same POST looks like with the TLS handshake — this is with TLS 1.2:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4XYspSKePX6SnuugQdO58g/89efbf761a75c8a90df6d4627dc5439d/Screen-Shot-2017-12-23-at-8.40.00-PM.png" />
            
            </figure><p>Depending on the distance between the device and the application server and the latency of the server, this can add hundreds of milliseconds. The solution is likely the newest version of TLS, TLS 1.3.</p><p>TLS 1.3 eliminates a complete round trip in the TLS handshake, which makes TLS much lighter and faster. It cuts the number of round trips in the handshake in half by predicting which key agreement protocol and algorithm the server will decide to use, and sending those guessed parameters and the key share directly in the client hello. If the server likes the guess, it sends back its own key share for the same algorithm, and the whole handshake is done.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2LB8N0MKT0QFOFfZL3S91v/240f1a0436802f14c6c736f558b5f9f6/Screen-Shot-2017-12-23-at-8.50.04-PM.png" />
            
            </figure><p>If the same IoT device talks to the same server again, there’s actually <a href="/introducing-0-rtt/">no round trip at all</a>. The parameters chosen in the initial handshake are sent alongside application data in the first packet.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2BVBCXVFfkOtvxxvXO0M1Y/1b850b51f228b6d70540a4fbea3f3aa8/Screen-Shot-2017-12-24-at-8.23.08-AM.png" />
            
            </figure><p>Why isn’t every IoT device using 1.3 today? TLS 1.3 is still actively being developed in the IETF standards track, and while Chrome (as of version 56 in January) and Firefox (as of version 52 in March) support 1.3, not everything does. The biggest problem today is middleboxes used by ISPs and enterprises that panic when they see a 1.3 handshake and close the connection. This also happened when the world was upgrading to TLS 1.2 and middleboxes only understood TLS 1.1, so it’s just a matter of time.</p>
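<p>The round-trip savings described above can be put into rough numbers. This is a back-of-the-envelope sketch with an assumed round-trip time; it counts the TCP handshake as one round trip before any TLS messages flow.</p>

```python
RTT_MS = 100  # assumed device-to-server round-trip time, in milliseconds

# Round trips before the first byte of application data,
# counting 1 RTT for the TCP handshake itself.
ROUND_TRIPS = {
    "plain TCP": 1,
    "TLS 1.2": 1 + 2,     # two extra handshake round trips
    "TLS 1.3": 1 + 1,     # one extra round trip
    "TLS 1.3 0-RTT": 1,   # data rides in the first flight on resumption
}

for proto, rtts in ROUND_TRIPS.items():
    print(f"{proto}: {rtts * RTT_MS} ms before application data")
```

For a sensor that only sends a few bytes every 30 seconds, the difference between 300 ms and 100 ms of setup time per connection is significant.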
    <div>
      <h3>TLS Certificate Size</h3>
      <a href="#tls-certificate-size">
        
      </a>
    </div>
    <p>In a TLS handshake, the server can use a server-side TLS certificate to authenticate itself to the client, and the client can use a client-side certificate to authenticate itself to the server. Devices often store certificates to authenticate themselves to the application server. However, device memory is often limited in IoT, and certificates can be large. What can we do?</p><p>Most certificates today use the RSA algorithm, which has been around since the 1970s. The certificates are large because RSA keys need to be large to be secure: typically 1,024 to 2,048 bits. However, a newer algorithm based on elliptic curve cryptography, in wide use since the early 2000s, can solve this problem. With elliptic curve cryptography we can use much smaller keys with the same level of security as a larger RSA key, and save space on the device.</p>
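<p>To make the size difference concrete, here are the commonly cited NIST key-size equivalences (illustrative figures; exact recommendations vary by source and year):</p>

```python
# security level in bits -> (RSA modulus bits, elliptic-curve key bits),
# per the commonly cited NIST SP 800-57 equivalences
EQUIVALENT_KEY_BITS = {
    112: (2048, 224),
    128: (3072, 256),
    192: (7680, 384),
}

for level, (rsa_bits, ec_bits) in EQUIVALENT_KEY_BITS.items():
    print(f"{level}-bit security: RSA {rsa_bits} vs EC {ec_bits} bits "
          f"(~{rsa_bits // ec_bits}x fewer key bits)")
```

On a device with 256 KB of RAM, shaving a certificate chain down by this ratio is the difference between fitting and not fitting.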
    <div>
      <h3>Default Plaintext IoT Protocols</h3>
      <a href="#default-plaintext-iot-protocols">
        
      </a>
    </div>
    <p>IoT devices need to be lightweight, so two emerging protocols are replacing HTTP as the dominant transfer protocol for some IoT devices: MQTT and CoAP.</p><p>MQTT is a pub/sub protocol that has been around for almost 20 years. In MQTT, a proxy server acts as a broker. An IoT device or web app publishes a message to the broker, and the broker distributes those messages to all the other IoT devices that need to receive that message.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/E4huJmd0Qc5gRa4N5kuxH/1116aa56a7893b95bd62909c577d525b/Screen-Shot-2017-12-23-at-8.52.51-PM.png" />
            
            </figure><p>When MQTT was written almost 20 years ago, it was intentionally designed without security. It was built for oil and gas companies that were just sending sensor data, and no one thought that data needed to be encrypted.</p><p>CoAP was standardized just three years ago. It has all the same methods as HTTP, but it runs over UDP, so it’s really light.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2PLPc3UEGFJ8RVGsEWpGpk/e073dc56e23e24c449aa96f97e1f9fa8/Screen-Shot-2017-12-23-at-8.53.03-PM.png" />
            
            </figure><p>The problem is, if you want to add TLS (DTLS really, because CoAP runs over UDP), it is no longer light.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7Ih9SbeRbtRJk4OqCZGXgx/698e1d764c758df0cebcd832c5aefa63/Screen-Shot-2017-12-23-at-8.53.41-PM.png" />
            
            </figure>
    <div>
      <h3>The Future</h3>
      <a href="#the-future">
        
      </a>
    </div>
    <p>It will be quite interesting to see how update mechanisms and TLS implementations change as the number of deployed IoT devices continues to grow. If this type of thing interests you, <a href="https://www.cloudflare.com/careers/">come join us</a>.</p> ]]></content:encoded>
            <category><![CDATA[IoT]]></category>
            <category><![CDATA[TLS 1.3]]></category>
            <category><![CDATA[TLS]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[Gatebot]]></category>
            <category><![CDATA[DDoS]]></category>
            <category><![CDATA[Cryptography]]></category>
            <guid isPermaLink="false">3u3LeCl6VoYIZx3NbtHAa0</guid>
            <dc:creator>Dani Grant</dc:creator>
        </item>
        <item>
            <title><![CDATA[CAA of the Wild: Supporting a New Standard]]></title>
            <link>https://blog.cloudflare.com/caa-of-the-wild/</link>
            <pubDate>Thu, 07 Dec 2017 14:00:00 GMT</pubDate>
            <description><![CDATA[ One thing we take pride in at Cloudflare is embracing new protocols and standards that help make the Internet faster and safer. Sometimes this means that we’ll launch support for experimental features or standards still under active development, as we did with TLS 1.3. ]]></description>
            <content:encoded><![CDATA[ <p>One thing we take pride in at Cloudflare is embracing new protocols and standards that help make the Internet faster and safer. Sometimes this means that we’ll launch support for experimental features or standards still under active development, <a href="/introducing-tls-1-3/">as we did</a> with TLS 1.3. Due to the not-quite-final nature of some of these features, we limit the availability at the onset to only the most ardent users so we can observe how these cutting-edge features behave in the wild. Some of our observations have helped the community propose revisions to the corresponding RFCs.</p><p>We began supporting the DNS <a href="https://tools.ietf.org/html/rfc6844">Certification Authority Authorization (CAA) Resource Record</a> in June behind a beta flag. Our goal in doing so was to see how the presence of these records would affect <a href="https://www.cloudflare.com/application-services/products/ssl/">SSL certificate</a> issuance by publicly-trusted certification authorities. We also wanted to do so in advance of the <a href="https://cabforum.org/2017/03/08/ballot-187-make-caa-checking-mandatory/">8 September 2017 enforcement date</a> for mandatory CAA checking at certificate issuance time, without introducing a new and externally unproven behavior to millions of Cloudflare customers at once. This beta period has provided invaluable insight as to how CAA records have changed and will continue to change the commercial public-key infrastructure (PKI) ecosystem.</p><p>As of today, we’ve removed this beta flag and all users are welcome to add CAA records as they see fit—without having to first contact support. 
Note that if you’ve got Universal SSL enabled, we’ll automatically augment your CAA records to allow issuance from our CA partners; if you’d like to disable Universal SSL and provide your own certificates, you’re welcome to do that too. Below are some additional details on CAA, the purpose of this record type, and how its use has evolved since it was first introduced. If you’d rather just jump to the details of our implementation, <a href="#caa-and-cloudflare">click here</a> and we’ll take you to the relevant section of the post.</p>
    <div>
      <h4>The Publicly-Trusted PKI Ecosystem — Abridged</h4>
      <a href="#the-publicly-trusted-pki-ecosystem-abridged">
        
      </a>
    </div>
    <p>Before diving into CAA it’s helpful to understand the purpose of a public key infrastructure (PKI). Quite simply, PKI is a framework that’s used to secure communications between parties over an insecure public network. In “web PKI”, the PKI system that’s used to secure communications between your web browser and this blog (for example), the TLS protocol is used with SSL certificates and private keys to protect against eavesdropping and tampering.</p><p>While TLS handles the sanctity of the connection, ensuring that nobody can snoop on or mess with HTTPS requests, how does your browser know it’s talking to the actual owner of blog.cloudflare.com and not some imposter? Anyone with access to OpenSSL or a similar tool can generate a certificate purporting to be valid for this hostname but fortunately your browser only trusts certificates issued (or “signed by”) by certain well-known parties.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/tTMIE50NuXq1J2phXhoPr/3d2d302cf9e84a7dbdf6e268f9b55360/illustration-ssl-certificate.png" />
            
            </figure><p>These well-known parties are known as certification authorities (CAs). The private and public key that form the certificate for <code>blog.cloudflare.com</code> were generated on Cloudflare hardware, but the stamp of approval—the signature—was placed on the certificate by a CA. When your browser receives this “leaf” certificate, it follows the issuer all the way to a “root” that it trusts, validating the signatures along the way and deciding whether to accept the certificate as valid for the requested hostname.</p><p>Before placing this stamp of approval, CAs are supposed to take steps to ensure that the certificate requester can demonstrate control over the hostname. (In some cases, as you’ll learn below, this is not always the case, and is one of the reasons that CAA was introduced.)</p>
    <div>
      <h4>Anthropogenic Threats</h4>
      <a href="#anthropogenic-threats">
        
      </a>
    </div>
    <p>Given that people are imperfect beings, prone to making mistakes or poor judgement calls, it should come as a surprise to no one that the PKI ecosystem has a fairly blemished track record when it comes to maintaining trust. Clients, CAs, servers, and certificate requesters are all created or operated by people who have made mistakes.</p><p><i>Jurassic Park. 1993, Steven Spielberg [Film] Universal Pictures.</i></p><p>Client providers have been known to <a href="https://en.wikipedia.org/wiki/Superfish">add compromising certificates to the local trust store</a> or <a href="/understanding-the-prevalence-of-web-traffic-interception/">install software to intercept secure connections</a>; servers have been demonstrated to <a href="/answering-the-critical-question-can-you-get-private-ssl-keys-using-heartbleed/">leak private keys</a> or <a href="/microsoft-tls-downgrade-schannel-bug/">be unable to properly rotate session ticket keys</a>; CAs have <a href="https://security.googleblog.com/2016/10/distrusting-wosign-and-startcom.html">knowingly mis-issued certificates</a> or <a href="https://security.googleblog.com/2017/09/chromes-plan-to-distrust-symantec.html">failed to validate hostname ownership or control reliably</a>; and individuals requesting certificates include phishers creating convincing imposter versions of popular domains and <a href="https://bugzilla.mozilla.org/show_bug.cgi?id=1332714">obtaining valid and trusted certificates</a>. No party in this ecosystem is entirely without blame for the diminished sense of trust.</p><p>Of these many problems (most of which have already been addressed or are in the process of being resolved), perhaps the most unsettling is the willful mis-issuance of certificates by trusted CAs. 
When a CA whose certificates are present in trust stores knowingly issues certificates to parties who haven't demonstrated ownership, or issues outside the parameters defined by the <a href="https://cabforum.org/about-us/">CA/Browser Forum</a> (a voluntary and democratic governing body for publicly-trusted certificates), it severely undermines the value of that trust store, all certificates issued by that CA, and the publicly-trusted PKI ecosystem as a whole.</p>
    <div>
      <h4>Solving One Problem...</h4>
      <a href="#solving-one-problem">
        
      </a>
    </div>
    <p>To help prevent future mis-issuance by publicly trusted CAs, a new DNS resource record was proposed by those CAs to help reduce the risk of certificate mis-issuance: The Certification Authority Authorization (CAA) Resource Record.</p><p>The general idea is that the owner of any given domain (e.g., example.com) would add CAA records at their authoritative DNS provider, specifying one or more CAs who are authorized to issue certificates for their domain.</p><p><a href="https://tools.ietf.org/html/rfc6844">RFC6844</a> currently specifies three property tags for CAA records: <code>issue</code>, <code>issuewild</code>, and <code>iodef</code>.</p><ul><li><p>The <code>issue</code> property tag specifies CAs who are authorized to issue certificates for a domain. For example, the record <code>example.com. CAA 0 issue "certification-authority.net"</code> allows the "Certification Authority" CA to issue certificates for example.com.</p></li><li><p>The <code>issuewild</code> property tag specifies CAs that are only allowed to issue certificates that specify a wildcard domain. E.g., the record <code>example.com. CAA 0 issuewild "certification-authority.net"</code> only allows the "Certification Authority" CA to issue certificates containing wildcard domains, such as <code>*.example.com</code>.</p></li><li><p>The <code>iodef</code> property tag specifies a means of reporting certificate issue requests or cases of certificate issuance for the corresponding domain that violate the security policy of the issuer or the domain name holder. E.g., the record <code>example.com. 
CAA 0 iodef "mailto:example@example.com"</code> instructs the issuing CA to send violation reports via email, at the time of attempted issuance, to the address provided.</p></li></ul><p>CAA records with the <code>issue</code> and <code>issuewild</code> tags are additive: if more than one is returned in response to a <a href="https://www.cloudflare.com/learning/dns/what-is-dns/">DNS query</a> for a given hostname, the CAs specified in all of those records are considered authorized.</p><p>If the authoritative DNS provider does not yet support CAA records, or none are present in the zone file, the issuing CA is still authorized to issue, largely preserving the issuance behavior from before CAA records were an adopted standard.</p><p>As of 8 September 2017, all publicly-trusted CAs are required to check CAA at issuance time for all certificates issued, thereby enabling certificate requestors (domain owners) to dictate which CAs can issue certificates for their domain.</p>
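<p>To make these semantics concrete, here is a minimal sketch in Go of how a CA might evaluate <code>issue</code>/<code>issuewild</code> records for a requested certificate. The record struct and function names are ours, this is not any CA's actual implementation, and RFC6844 has additional rules (for example, how <code>issue</code> applies to wildcard requests when no <code>issuewild</code> record exists) that are deliberately omitted here:</p>

```go
package main

import "fmt"

// CAARecord models a parsed CAA record, e.g. `example.com. CAA 0 issue "ca.net"`.
type CAARecord struct {
	Tag   string // "issue", "issuewild", or "iodef"
	Value string // the CA domain named in the record
}

// mayIssue reports whether the CA named caDomain is authorized to issue,
// following the additive semantics described above: any matching issue
// record grants authorization, and an empty record set leaves issuance
// unrestricted. For wildcard requests this simplified sketch consults
// only issuewild records.
func mayIssue(records []CAARecord, caDomain string, wildcard bool) bool {
	if len(records) == 0 {
		return true // no CAA records: any CA may issue
	}
	tag := "issue"
	if wildcard {
		tag = "issuewild"
	}
	for _, r := range records {
		if r.Tag == tag && r.Value == caDomain {
			return true
		}
	}
	return false
}

func main() {
	records := []CAARecord{{Tag: "issue", Value: "certification-authority.net"}}
	fmt.Println(mayIssue(records, "certification-authority.net", false)) // true
	fmt.Println(mayIssue(records, "other-ca.example", false))            // false
}
```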
    <div>
      <h4>... with More Problems.</h4>
      <a href="#with-more-problems">
        
      </a>
    </div>
    <p>RFC6844 specifies a very curious CAA record processing algorithm:</p>
            <pre><code>The search for a CAA record climbs the DNS name tree from the
   specified label up to but not including the DNS root '.'.

   Given a request for a specific domain X, or a request for a wildcard
   domain *.X, the relevant record set R(X) is determined as follows:

   Let CAA(X) be the record set returned in response to performing a CAA
   record query on the label X, P(X) be the DNS label immediately above
   X in the DNS hierarchy, and A(X) be the target of a CNAME or DNAME
   alias record specified at the label X.

   o  If CAA(X) is not empty, R(X) = CAA(X), otherwise

   o  If A(X) is not null, and R(A(X)) is not empty, then R(X) =
      R(A(X)), otherwise

   o  If X is not a top-level domain, then R(X) = R(P(X)), otherwise

   o  R(X) is empty.</code></pre>
            <p>While the algorithm above is hard to parse at first, the example that immediately follows it in the RFC is much easier to comprehend:</p>
            <pre><code>For example, if a certificate is requested for X.Y.Z the issuer will
   search for the relevant CAA record set in the following order:

      X.Y.Z

      Alias (X.Y.Z)

      Y.Z

      Alias (Y.Z)

      Z

      Alias (Z)

      Return Empty</code></pre>
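<p>The climbing portion of this search can be sketched in a few lines of Go. The zone map below stands in for live DNS queries, and alias (CNAME/DNAME) chasing is omitted; this is an illustration of the R(X) search, not a CA's production code:</p>

```go
package main

import (
	"fmt"
	"strings"
)

// zone maps fully-qualified labels (without trailing dots) to their CAA
// record values; a real implementation would issue live DNS queries.
type zone map[string][]string

// relevantRecordSet walks the DNS name tree from label X up to (but not
// including) the DNS root, returning the first non-empty CAA record set
// it finds, per the R(X) search above. CNAME/DNAME alias handling is
// omitted for brevity.
func relevantRecordSet(z zone, x string) []string {
	labels := strings.Split(x, ".")
	for i := range labels {
		name := strings.Join(labels[i:], ".")
		if caa := z[name]; len(caa) > 0 {
			return caa
		}
	}
	return nil // R(X) is empty
}

func main() {
	z := zone{"example.com": {`0 issue "certification-authority.net"`}}
	// Checks test.blog.example.com, blog.example.com, example.com, com in turn.
	fmt.Println(relevantRecordSet(z, "test.blog.example.com")) // [0 issue "certification-authority.net"]
}
```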
            <p>In plain English, this means that if the owner of example.com requests a certificate for <code>test.blog.example.com</code>, the issuing CA must</p><ol><li><p>Query for a CAA record at <code>test.blog.example.com.</code>. If a CAA record exists for this hostname, the issuing CA stops checking for CAA records and issues accordingly. If no CAA record exists for this hostname and this hostname exists as an A or AAAA record, the CA then moves up the DNS tree to the next highest label.</p></li><li><p>Query for a CAA record at <code>blog.example.com.</code>. Just like the first check, if no CAA record exists for this hostname and this hostname exists as an A or AAAA record, the CA then continues traversing the DNS tree.</p></li><li><p>Query for a CAA record at <code>example.com.</code></p></li><li><p>Query for a CAA record at <code>com.</code></p></li></ol><p>At the end of the last step, the issuing CA has climbed the entire DNS tree (excluding the root) checking for CAA records. This functionality allows a domain owner to create CAA records at the root of their domain and have those records apply to any and all subdomains.</p><p>However, the CAA record processing algorithm has an additional check if the hostname exists as a CNAME (or DNAME) record. In this case, the issuing CA must also check the <b>target</b> of the CNAME record. Revisiting the example above for <code>test.blog.example.com.</code> where this hostname exists as a CNAME record, the issuing CA must</p><ol><li><p>Query for a CAA record at <code>test.blog.example.com.</code>. Since <code>test.blog.example.com.</code> exists as the CNAME <code>test.blog.example.com. 
CNAME test.blog-provider.net.</code>, the issuing CA must next check the target for the presence of a CAA record before climbing the DNS tree.</p></li><li><p>Query for a CAA record at <code>test.blog-provider.net.</code></p></li></ol><p>The issuing CA in this example is only at step two in the CAA processing algorithm and it has already come across two separate issues.</p><p>First, the issuing CA has checked the hostname requested on the certificate (<code>test.blog.example.com.</code>) and since that hostname exists as a CNAME record, it has also checked the target of that record (<code>test.blog-provider.net.</code>). However, if <code>test.blog-provider.net.</code> itself is also a CNAME record, the CAA record processing algorithm states that the issuing CA must check the target of that CNAME as well.</p><p>In this case it is fairly simple to create a CNAME loop (or very long CNAME chain) either via an accidental misconfiguration or with malicious intent to prevent the issuing CA from completing the CAA check.</p><p>Second, <code>example.com</code> and <code>blog-provider.net</code> might not be owned and operated by the same entity or even exist in the same network. 
The RFC authors appear to be operating under the assumption that CNAME records are still used as they were in <a href="https://tools.ietf.org/html/rfc1034#3.6.2">November 1987</a>:</p><blockquote><p>A CNAME RR identifies its owner name as an alias, and specifies the corresponding canonical name in the RDATA section of the RR.</p></blockquote><p>This behavior may have been true thirty years ago, but consider the number and prevalence of SaaS providers in 2017; who truly is authoritative for the content addressed by a DNS record, the content creator (and likely <a href="https://www.cloudflare.com/learning/dns/glossary/what-is-a-domain-name/">domain name</a> owner who is using a CNAME) or the SaaS provider to whom the content creator subscribes?</p><p>Many if not all major service or application providers include clauses in their respective terms of service with regards to content added by the user to their account for the service. Intellectual property must be user-generated, properly licensed, or otherwise lawfully obtained. As content uploaded by users is difficult to control, liability limitations, indemnification, and service termination are all commonly used when the intellectual property added to the service is owned by another party, is unlawful, or is outside the definition of acceptable use. From a content perspective within the scope of a service provider's terms of service, it’s difficult to consider the service provider canonical for the content hosted there.</p><p>Using a real world example, there are currently 401,716 CNAME records in the zone files for domains on Cloudflare whose target is <code>ghs.google.com</code>. <a href="https://support.google.com/blogger/answer/58317?hl=en">This hostname</a> is given to subscribers of Google's Blogger service to use as the target of a CNAME so that a Blogger subscriber may use their own domain name in front of Google's service. 
With a single CAA record, Google could dictate that only one CA, or no CA at all, may issue certificates for nearly half a million blogs with vanity domain names, unless each of those hostnames creates its own CAA records to allow issuance.</p><p>Even without following CNAMEs to their targets, the DNS climbing algorithm is not without problems. Operators of <a href="https://en.wikipedia.org/wiki/Top-level_domain">top-level domains</a> may decide that only certain issuing CAs are trustworthy, or that certain CAs are advantageous for business, and create CAA records allowing only those CAs to issue. We already see this in action today with the pseudo-top-level domain <a href="https://nl.eu.org/">nl.eu.org</a>, which has the CAA record <code>nl.eu.org. CAA 0 issue "letsencrypt.org"</code>; this allows issuance only through Let's Encrypt, even when no CAA records are present at the subdomains of <code>nl.eu.org</code>.</p><p>The authors of RFC6844 were also unwilling to secure their record and its use from potential on-path attacks:</p><blockquote><p>Use of DNSSEC to authenticate CAA RRs is strongly RECOMMENDED but not required.</p></blockquote><p>Without DNSSEC, DNS responses are returned to the requestor in plaintext. Anyone in a privileged network position (a recursive DNS provider or an ISP) could alter the response to a CAA query to allow or deny issuance as desired. 
As only about <a href="http://scoreboard.verisignlabs.com/">820,000</a> <code>.com</code> domain names out of the more than <a href="http://research.domaintools.com/statistics/tld-counts/">130 million</a> registered <code>.com</code> domain names are secured with DNSSEC, perhaps the low adoption rates influenced the decision not to make DNSSEC mandatory with CAA.</p><p>Moving beyond the RFC, the CA/Browser Forum's <a href="https://cabforum.org/wp-content/uploads/CA-Browser-Forum-BR-1.5.0.pdf">Baseline Requirements</a> (BR) attempt to clarify the behavior a CA should follow when checking for CAA.</p><blockquote><p>CAs are permitted to treat a record lookup failure as permission to issue if:</p></blockquote><ul><li><p>the failure is outside the CA's infrastructure;</p></li><li><p>the lookup has been retried at least once; and</p></li><li><p>the domain's zone does not have a DNSSEC validation chain to the ICANN root</p></li></ul><p>Effectively, this means that if the CAA query returns a <code>SERVFAIL</code> or <code>REFUSED</code> response, or times out, the CA is permitted to issue, regardless of whether or not a CAA record exists, as long as the query fails more than once while attempting to issue a certificate. However, multiple CAs have told us that DNS lookup failures will prevent issuance regardless of the above conditions in the BR. In this case, something as benign as a transient network error could result in a denial of issuance; worse, any recursor that doesn’t understand CAA queries will prevent issuance entirely. We actually saw this with Comcast’s resolvers and reported the bug to their DNS provider.</p><p>There’s an additional security gap in that neither the RFC nor the BR indicates where the issuing CA should query for CAA records. Within the current standards, it is acceptable to query any DNS recursor for these records, not just the authoritative DNS provider for the domain. 
For example, an issuing CA could query <a href="https://www.cloudflare.com/cloudflare-vs-google-dns/">Google’s Public DNS</a> or a DNS recursor provided by their ISP for these responses. This enables compromised DNS recursors, or ones run by rogue operators, to alter these responses, either denying issuance or allowing issuance by a CA not approved by the domain owner. The RFC and BR should be amended so that an issuing CA must always query these records at the authoritative provider to close this gap.</p>
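<p>Paraphrased as code (the struct fields and function names here are ours, not the BR's), the three Baseline Requirements conditions quoted above combine into a single predicate:</p>

```go
package main

import "fmt"

// LookupFailure describes a failed CAA lookup from the issuing CA's view.
type LookupFailure struct {
	InsideCAInfrastructure bool // did the failure originate inside the CA's own infrastructure?
	Retries                int  // how many times the lookup was retried
	DNSSECChainToICANNRoot bool // does the zone validate under DNSSEC to the ICANN root?
}

// mayTreatAsPermission paraphrases the Baseline Requirements: a CA may
// treat a CAA lookup failure as permission to issue only if the failure
// is outside its infrastructure, the lookup was retried at least once,
// and the zone has no DNSSEC validation chain to the ICANN root.
func mayTreatAsPermission(f LookupFailure) bool {
	return !f.InsideCAInfrastructure && f.Retries >= 1 && !f.DNSSECChainToICANNRoot
}

func main() {
	f := LookupFailure{InsideCAInfrastructure: false, Retries: 1, DNSSECChainToICANNRoot: false}
	fmt.Println(mayTreatAsPermission(f)) // true
}
```

<p>Note that, as described above, some CAs apply a stricter policy in practice and refuse to issue on any lookup failure.</p>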
    <div>
      <h4>CAA and Cloudflare</h4>
      <a href="#caa-and-cloudflare">
        
      </a>
    </div>
    <p>As of today, CAA records are no longer in beta and all customers are able to add CAA records for their zones. This can be done in the DNS tab of the Cloudflare dashboard or via our <a href="https://api.cloudflare.com/#dns-records-for-a-zone-create-dns-record">API</a>.</p><p>When creating CAA records in the dashboard, an additional modal appears to help clarify the different CAA tag options and to format the record correctly, since an incorrectly formatted record would prevent every CA from issuing certificates.</p><p>Cloudflare is in a unique position to be able to complete SSL validation for a domain and have certificates issued on the domain owner's behalf. This is how Cloudflare is able to automatically provision and have our <a href="/introducing-universal-ssl/">Universal SSL</a> certificates issued for every domain active on Cloudflare for free.</p><p>Cloudflare partners with multiple CAs to issue certificates for our managed SSL products: Universal SSL, <a href="/dedicated-ssl-certificates/">Dedicated Certificates</a>, and <a href="/introducing-ssl-for-saas/">SSL for SaaS</a>. Since CAA checking is now mandatory for publicly-trusted CAs, when a user adds one or more CAA records for their domain in the Cloudflare dashboard, Cloudflare automatically adds the requisite CAA records to the zone so that our partner CAs can continue to issue certificates for each of our SSL products.</p><p>Some site owners may want to manage their own SSL certificates in order to comply with their own standard operating procedures or policies. Alternatively, domain owners may only want to trust specific CAs that are not the CAs Cloudflare currently partners with to issue Universal SSL certificates. 
In either case, users now have the ability to disable Universal SSL.</p><p>When Universal SSL is disabled, the CAA records added to allow our partner CAs to issue certificates are deleted and all Universal SSL certificates available for the zone are removed from our edge. Dedicated certificates, custom certificates, as well as <a href="https://www.cloudflare.com/application-services/products/ssl-for-saas-providers/">SSL for SaaS certificates</a> are all individually managed by each customer and can be added or removed as needed.</p>
    <div>
      <h4>A Bright and Hopefully Not-Too-Distant Future</h4>
      <a href="#a-bright-and-hopefully-not-too-distant-future">
        
      </a>
    </div>
    <p>CAA as it exists today does very little to <a href="https://www.cloudflare.com/application-services/products/securitycenter/">reduce the attack surface</a> around certificate issuance while making it more difficult for well-intentioned parties to participate. That said, CAA is a young standard, recently adopted by the web PKI community, and many individuals and organizations are actively working to address the gaps in the current RFC and to make the overall CAA experience better for both certificate requesters and CAs.</p><p>To start, an <a href="https://www.rfc-editor.org/errata/eid5065">errata report</a> exists to clarify the CAA record processing algorithm and to reduce the degree to which the targets of CNAME records should be checked. Similarly, the DNS tree-climbing behavior of the CAA record processing algorithm is still <a href="https://mailarchive.ietf.org/arch/search/?email_list=spasm&amp;gbt=1&amp;index=QkL2PKWUadWpXBZULetCbk-9ViU">up for debate</a>. There are also <a href="https://mailarchive.ietf.org/arch/search/?email_list=spasm&amp;gbt=1&amp;index=ewf8iAv2cqS1_5ZmvfuK57goLYI">active discussions</a> around implementation issues, such as the recognition that some authoritative DNS resolvers incorrectly sign empty responses to CAA queries when DNSSEC is enabled for a zone, and how to handle these cases in a way that would still allow CAs to issue. <a href="https://www.ietf.org/id/draft-hoffman-andrews-caa-simplification-02.txt">Proposals exist</a> suggesting new tags or property fields in CAA records, such as a CA requiring an account number in the property value for issue tags, or only allowing specific validation types (e.g., DV, OV, or EV). The Electronic Frontier Foundation (EFF) has expressed interest in hardening CAs against non-cryptographic attacks, particularly with a focus on the domain validation process for obtaining certificates. 
While not directly pertaining to CAA, any such hardening proposals might increase reliance on CAA, or obviate the need for it altogether.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/35HzvskNIkHkztXA9Eq8fp/79076092e3540a211612ff53bc723d5b/standards.png" />
            
            </figure><p> <a href="https://creativecommons.org/licenses/by-nc/2.5/">CC BY-NC 2.5</a> by <a href="https://xkcd.com/about/">xkcd.com</a></p><p>As with all internet standards, none are perfect and all are far from permanent—CAA included. Being in the position to implement new standards at scale, seeing what effect adoption of those standards has, and working with the Internet community to address any issues or gaps is a privilege and allows us to live up to our mission of building a better Internet.</p><p>We're thrilled to be involved in efforts to make CAA (and any other standard) better for anyone and everyone.</p> ]]></content:encoded>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[SSL]]></category>
            <category><![CDATA[DNS]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[TLS 1.3]]></category>
            <category><![CDATA[TLS]]></category>
            <category><![CDATA[Reliability]]></category>
            <category><![CDATA[Universal SSL]]></category>
            <category><![CDATA[Certificate Authority]]></category>
            <guid isPermaLink="false">3mWSshxk3v9PwoHKztcjAp</guid>
            <dc:creator>Max Nystrom</dc:creator>
        </item>
        <item>
            <title><![CDATA[Make SSL boring again]]></title>
            <link>https://blog.cloudflare.com/make-ssl-boring-again/</link>
            <pubDate>Wed, 06 Dec 2017 14:00:00 GMT</pubDate>
            <description><![CDATA[ It may (or may not!) come as a surprise, but a few months ago we migrated Cloudflare’s edge SSL connection termination stack to use BoringSSL: Google's crypto and SSL implementation that started as a fork of OpenSSL. ]]></description>
            <content:encoded><![CDATA[ <p>It may (or may not!) come as a surprise, but a few months ago we migrated Cloudflare’s edge SSL connection termination stack to use <a href="https://boringssl.googlesource.com/boringssl/">BoringSSL</a>: Google's crypto and SSL implementation that started as a fork of OpenSSL.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5ocB2kjFK3yu7bQpWq7NCu/28291ba6a3d750709fe632d9f214ca95/2017-12-05-160528_621x216_scrot-1.png" />
            
            </figure><p>We dedicated several months of work to make this happen without negative impact on customer traffic. We had a few bumps along the way, and had to overcome some challenges, but we ended up in a better place than we were a few months ago.</p>
    <div>
      <h3>TLS 1.3</h3>
      <a href="#tls-1-3">
        
      </a>
    </div>
    <p>We have <a href="/introducing-tls-1-3/">already</a> <a href="/tls-1-3-overview-and-q-and-a/">blogged</a> <a href="/tls-1-3-explained-by-the-cloudflare-crypto-team-at-33c3/">extensively</a> about TLS 1.3. Our original TLS 1.3 stack required our main SSL termination software (which was based on OpenSSL) to hand off TCP connections to a separate system based on <a href="https://github.com/cloudflare/tls-tris">our fork of Go's crypto/tls standard library</a>, which was specifically developed to only handle TLS 1.3 connections. This proved handy as an experiment that we could roll out to our client base in relative safety.</p><p>However, over time, this separate system started to make our lives more complicated: most of our SSL-related business logic needed to be duplicated in the new system, which caused a few subtle bugs to pop up, and made it harder to roll out new features such as <a href="/introducing-tls-client-auth/">Client Auth</a> to all our clients.</p><p>As it happens, BoringSSL has supported TLS 1.3 for quite a long time (it was one of the first open source SSL implementations to work on this feature), so now all of our edge SSL traffic (including TLS 1.3 connections) is handled by the same system, with no duplication, no added complexity, and no increased latency. Yay!</p>
    <div>
      <h3>Fancy new crypto, part 1: X25519 for TLS 1.2 (and earlier)</h3>
      <a href="#fancy-new-crypto-part-1-x25519-for-tls-1-2-and-earlier">
        
      </a>
    </div>
    <p>When establishing an SSL connection, client and server will negotiate connection-specific secret keys that will then be used to encrypt the application traffic. There are a few different methods for doing this, the most popular one being ECDH (Elliptic Curve Diffie–Hellman). Long story short, this depends on an <a href="/a-relatively-easy-to-understand-primer-on-elliptic-curve-cryptography/">elliptic curve</a> being negotiated between client and server.</p><p>For the longest time, the only widely supported curves available were the ones defined by NIST, until Daniel J. Bernstein proposed Curve25519 (X25519 is the mechanism used for ECDH based on Curve25519), which has quickly gained popularity and is now the default choice of many popular browsers (including Chrome).</p><p>This was already supported for TLS 1.3 connections, and with BoringSSL we are now able to support key negotiation based on X25519 at our edge for TLS 1.2 (and earlier) connections as well.</p><p>X25519 is now the second most popular elliptic curve algorithm in use on our network:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1bi0W8QN1x6ftLgdkeHx4A/4e9b0526e02faa55c06fdda2e3d7db13/curves-metrics-with-logo-1.png" />
            
            </figure>
    <div>
      <h3>Fancy new crypto, part 2: RSA-PSS for TLS 1.2</h3>
      <a href="#fancy-new-crypto-part-2-rsa-pss-for-tls-1-2">
        
      </a>
    </div>
    <p>Another one of the changes introduced by TLS 1.3 is the adoption of the PSS padding scheme for RSA signatures (RSASSA-PSS). For all TLS 1.3 connections, this replaces RSASSA-PKCS1-v1.5, a scheme that is more fragile and has historically been prone to security vulnerabilities.</p><p>RSA PKCS#1 v1.5 has been known to be vulnerable to chosen-ciphertext attacks since <a href="http://archiv.infsec.ethz.ch/education/fs08/secsem/bleichenbacher98.pdf">Bleichenbacher’s CRYPTO 98 paper</a>, which showed SSL/TLS to be vulnerable to this kind of attack as well.</p><p>The attacker exploits an “oracle”, in this case a TLS server that allows them to determine whether a given ciphertext has been correctly padded under the rules of PKCS1-v1.5 or not. For example, if the server returns a different error for correct padding vs. incorrect padding, that information can be used as an oracle (this is how Bleichenbacher broke SSLv3 in 1998). If incorrect padding causes the handshake to take a measurably different amount of time compared to correct padding, that’s called a timing oracle.</p><p>If an attacker has access to an oracle, it can take as few as <a href="http://csf2012.seas.harvard.edu/5min_abstracts/MillionMessageAttack.pdf">15,000</a> messages to gain enough information to perform an RSA secret-key operation without possessing the secret key. This is enough for the attacker to either decrypt a ciphertext encrypted with RSA, or to forge a signature. 
Forging a signature allows the attacker to <a href="https://www.nds.rub.de/media/nds/veroeffentlichungen/2015/08/21/Tls13QuicAttacks.pdf">hijack TLS connections</a>, and decrypting a ciphertext allows the attacker to decrypt any connection that does not use <a href="/staying-on-top-of-tls-attacks/">forward secrecy</a>.</p><p>Since then, SSL/TLS implementations have adopted mitigations to prevent these attacks, but they are tricky to get right, as the recently published <a href="https://support.f5.com/csp/article/K21905460">F5 vulnerability</a> shows.</p><p>With the switch to BoringSSL we made RSA PSS available to TLS 1.2 connections as well. This is already supported "in the wild", and is the scheme preferred by modern browsers such as Chrome when dealing with RSA server certificates.</p>
    <div>
      <h3>The dark side of the moon</h3>
      <a href="#the-dark-side-of-the-moon">
        
      </a>
    </div>
    <p>Besides all these new exciting features that we are now offering to all our clients, BoringSSL also has a few internal features that end users won't notice, but that made our life so much easier.</p><p>Some of our SSL features required special patches that we maintained in our internal OpenSSL fork; BoringSSL provides replacements for these features (and more!) out of the box. Some examples include:</p><ul><li><p>its <a href="https://github.com/google/boringssl/blob/b2c312d670b9967cf881419758f0ec286b66a25f/include/openssl/ssl.h#L1123">private key callback</a> support, which we now use to implement <a href="/keyless-ssl-the-nitty-gritty-technical-details/">Keyless SSL</a>;</p></li><li><p>its <a href="https://github.com/google/boringssl/blob/b2c312d670b9967cf881419758f0ec286b66a25f/include/openssl/ssl.h#L2004">asynchronous session lookup callback</a>, which we use to support <a href="/tls-session-resumption-full-speed-and-secure/">distributed session ID caches</a> (for session resumption with clients that, for whatever reason, don't support session tickets);</p></li><li><p>its <a href="https://github.com/google/boringssl/blob/b2c312d670b9967cf881419758f0ec286b66a25f/include/openssl/ssl.h#L1385">equal-preference cipher grouping</a>, which allows us to offer <a href="/it-takes-two-to-chacha-poly/">ChaCha20-Poly1305 ciphers</a> alongside AES GCM ones and let clients decide which they prefer;</p></li><li><p>its <a href="https://github.com/google/boringssl/blob/b2c312d670b9967cf881419758f0ec286b66a25f/include/openssl/ssl.h#L3353">"select_certificate" callback</a>, which we use for inspecting and logging ClientHellos, and for dynamically enabling features depending on the user’s configuration. (We were previously using the “cert_cb” callback for the latter, which is also supported by OpenSSL, but we ran into some limitations: you can’t dynamically change the supported protocol versions with it, and it is not executed during session resumption.)</p></li></ul>
    <div>
      <h3>The case of the missing OCSP</h3>
      <a href="#the-case-of-the-missing-ocsp">
        
      </a>
    </div>
    <p>Apart from adding new features, the BoringSSL developers have also been busy working on <i>removing</i> features that most people don't care about, to make the codebase lighter and easier to maintain. For the most part this worked out very well: a huge amount of code has been removed from BoringSSL without anyone noticing.</p><p>However one of the features that also got the axe was OCSP. We relied heavily on this feature at our edge to offer OCSP stapling to all clients automatically. So in order to avoid losing this functionality we spent a few weeks working on a replacement, and, surprise! we ended up with a far more reliable OCSP pipeline than when we started. You can read more about the work we did in <a href="/high-reliability-ocsp-stapling/">this blog post</a>.</p>
    <div>
      <h3>ChaCha20-Poly1305 draft</h3>
      <a href="#chacha20-poly1305-draft">
        
      </a>
    </div>
    <p>Another feature that was removed was support for the <a href="/it-takes-two-to-chacha-poly/">legacy ChaCha20-Poly1305 ciphers</a> (not to be confused with the ciphers standardized in <a href="https://tools.ietf.org/html/rfc7905">RFC7905</a>). These ciphers were deployed by some browsers before the standardization process finished, and ended up being incompatible with the ciphers later ratified.</p><p>We looked at our metrics and realized that a significant percentage of clients still relied on this feature: mostly older mobile clients that don't have AES hardware offloading and that never received software updates adding the newer ChaCha20 ciphers.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1RtyTjVTdSV5RtFn6pJKTF/dd9e482b0233834b26a93b9a185ced00/chacha-metrics-with-logo.png" />
            
            </figure><p>We decided to add support for these ciphers back to our own internal BoringSSL fork so that those older clients could still take advantage of them. We will keep monitoring our metrics and decide whether to remove them once the usage drops significantly.</p>
    <div>
      <h3>Slow Base64: veni, vidi, vici</h3>
      <a href="#slow-base64-veni-vidi-vici">
        
      </a>
    </div>
    <p>One somewhat annoying problem we noticed during a test deployment was an increase in the startup time of our NGINX instances. Armed with perf and flamegraphs, we looked into what was going on and realized the CPU was spending a ridiculous amount of time in BoringSSL’s base64 decoder.</p><p>It turns out that we were loading trusted CA certificates from disk (in PEM format, which uses base64) over and over and over in different parts of our NGINX configuration, and because of a <a href="https://github.com/google/boringssl/commit/536036abf46a13e52a43a92f6e44a87404e8755f#diff-c7192c0c5ad80a961243b0ad5c434176">change</a> in BoringSSL that was intended to make the base64 decoder constant-time, but that also made it <a href="https://boringssl-review.googlesource.com/c/boringssl/+/16384#message-06ca2814d05ae91a486a2126e017cc38f2e514b3">several times slower</a> than the decoder in OpenSSL, our startup times suffered.</p><p>Of course the astute reader might ask: why were you loading those certificates from disk multiple times in the first place? And indeed there was no particular reason, other than the fact that the problem went unnoticed until it actually became a problem. So we fixed our configuration to only load the certificates from disk in the configuration sections where they are actually needed, and lived happily ever after.</p>
    <div>
      <h3>Conclusion</h3>
      <a href="#conclusion">
        
      </a>
    </div>
    <p>Despite a few hiccups, this whole process turned out to be fairly smooth, thanks also to the rock-solid stability of the BoringSSL codebase, not to mention its extensive documentation. Not only did we end up with a much better and more easily maintainable system than we had before, but we also managed to <a href="https://github.com/google/boringssl/commits?author=vkrasnov">contribute</a> a <a href="https://github.com/google/boringssl/commits?author=ghedo">little</a> back to the open-source community.</p><p>As a final note, we’d like to thank the BoringSSL developers for the great work they poured into the project and for the help they provided us along the way.</p> ]]></content:encoded>
            <category><![CDATA[SSL]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[TLS 1.3]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[TLS]]></category>
            <guid isPermaLink="false">2ur99McG9vFBRl1LPt71Mn</guid>
            <dc:creator>Alessandro Ghedini</dc:creator>
        </item>
        <item>
            <title><![CDATA[SIDH in Go for quantum-resistant TLS 1.3]]></title>
            <link>https://blog.cloudflare.com/sidh-go/</link>
            <pubDate>Fri, 01 Sep 2017 20:48:00 GMT</pubDate>
            <description><![CDATA[ Most of today's cryptography is designed to be secure against an adversary with enormous amounts of computational power. This means estimating how much work certain computations require, and choosing cryptographic parameters based on our best estimates. ]]></description>
            <content:encoded><![CDATA[ <p><i>This blog post presents a new cryptography primitive that we open sourced: </i><a href="https://github.com/cloudflare/p751sidh"><i>p751sidh</i></a><i>.</i></p>
    <div>
      <h3>The Quantum Threat</h3>
      <a href="#the-quantum-threat">
        
      </a>
    </div>
    <p>Most of today's cryptography is designed to be secure against an adversary with enormous amounts of computational power. This means estimating how much work certain computations (such as factoring a number, or finding a discrete logarithm) require, and choosing cryptographic parameters based on our best estimate of how much work would be required to break the system.</p><p>If it were possible to build a large-scale <a href="https://en.wikipedia.org/wiki/Quantum_computing">quantum computer</a>, many of the problems whose difficulty we rely on for security would no longer be difficult to solve. While it remains unknown whether large-scale quantum computers are possible (see <a href="http://www.ams.org/journals/notices/201605/rnoti-p508.pdf">this article</a> for a good overview), it's a sufficient risk that there's wide interest in developing <i>quantum-resistant</i> (or <i>post-quantum</i>) cryptography: cryptography that works on ordinary computers we have today, but which is secure against a possible quantum computer.</p><p>At Cloudflare, our biggest use of cryptography is TLS, which we use both for serving our customers' websites (all Cloudflare sites get free HTTPS), as well as for internal inter-datacenter communication on our backend.</p><p>In the TLS context, we want to create a secure connection between a client and a server. 
There are basically three cryptographic problems here:</p><ol><li><p>Authenticity: the server needs to prove to the client that it is the real server (and optionally, the client can prove to the server that it's the real client);</p></li><li><p>Key agreement: the server and client need to agree, over an insecure connection, on a temporary shared secret known only to them;</p></li><li><p>Symmetric encryption: the server and client need to use their shared secret to encrypt the data they want to send over a secure connection.</p></li></ol><p>Authenticity protects against active attackers, but because quantum computers aren't yet believed to exist, the main risk is a retroactive attack: for instance, a nation-state adversary (let's say, "NSA" for short) could record encrypted traffic, wait to build a quantum computer, and try to decrypt past traffic. Moreover, quantum algorithms seem to give only a small speedup against symmetric encryption, so the "key" problem to solve is #2, <i>quantum-resistant key agreement</i>.</p><p>This is an active area of research, both in the design of new cryptosystems and in their implementation and deployment. For instance, last year, Google <a href="https://security.googleblog.com/2016/07/experimenting-with-post-quantum.html">concluded an experiment</a> using a lattice-based key exchange in Chrome. Lattice-based cryptosystems are an extremely promising family of quantum-resistant algorithms. Their security relies on well-studied computational problems, and they are computationally efficient. However, they have large key sizes and can require extra communication (which can necessitate additional round-trips in protocols like TLS).</p><p>Another family of cryptosystems are <i>supersingular isogeny</i> systems, in particular <i>supersingular isogeny Diffie-Hellman</i> (SIDH). In contrast to lattice-based systems, they rely on more exotic computational problems and are much more computationally expensive. 
However, they have much smaller key sizes and do not require extra communication: SIDH fits perfectly into TLS 1.3's key agreement mechanism.</p><p>TLS 1.3 is the latest version of the TLS protocol. This summer, I've been working at Cloudflare on an experiment for a <i>quantum-resistant</i> version of TLS 1.3 using a hybrid key agreement combining X25519 and supersingular isogeny Diffie-Hellman (SIDH). To achieve this, I implemented a TLS 1.3 client in Go (as part of Cloudflare's <a href="https://github.com/cloudflare/tls-tris">tls-tris</a>), implemented SIDH in Go for the <code>amd64</code> architecture, and combined the SIDH implementation with the TLS 1.3 key agreement mechanism to perform a quantum-resistant TLS 1.3 handshake. This extends previous work by Microsoft Research on a SIDH-based key exchange for TLS 1.2, discussed below.</p>
    <div>
      <h3>Diffie-Hellman key agreement in TLS 1.3</h3>
      <a href="#diffie-hellman-key-agreement-in-tls-1-3">
        
      </a>
    </div>
    <p>In the most recent version of TLS, TLS 1.3, the key agreement mechanism (part 2) is cleanly separated from the authentication mechanism (part 1). TLS 1.3 does key agreement using <a href="https://en.wikipedia.org/wiki/Diffie%E2%80%93Hellman_key_exchange">Diffie-Hellman</a>, usually with an elliptic curve group. Before diving into the quantum-resistant version, let's review how Diffie-Hellman (DH) works, and how it works in the context of TLS 1.3.</p><p>In Diffie-Hellman, we have two parties, Alice and Bob, wishing to establish a shared secret. They fix an <a href="https://en.wikipedia.org/wiki/Abelian_group">abelian group</a> <code>G</code> of prime order <code>p</code>, written additively, as well as a generator <code>P</code> of <code>G</code> (the basepoint). Alice then selects a uniformly random integer <code>a</code> in the range <code>[0,p)</code>. This determines a multiplication-by-<code>a</code> map, usually denoted <code>[a] : G -&gt; G</code>. Alice computes <code>[a]P</code>, the image of the basepoint under her map, and sends it to Bob. Similarly, Bob chooses a random integer <code>b</code> in the range <code>[0,p)</code>, determining the map <code>[b]</code>, computes <code>[b]P</code>, and sends it to Alice. Alice and Bob then agree on a shared secret <code>[ab]P</code>, which Alice computes as <code>[a]([b]P)</code> and Bob computes as <code>[b]([a]P)</code>:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2ylTypDJrBnx7ePlqXUy9z/53b9ef381ae71e6645e91875df6faa27/dh_diagram_1024.jpg" />
            
            </figure><p>(Here I'm describing the process in terms of maps, in order to show similarity with SIDH later).</p><p>In the TLS 1.3 context, this works as follows. A client initiates a connection by sending a TLS ClientHello message, which contains (among other data) a list of DH groups supported by the client, as well as "keyshares" (i.e., the <code>[a]P</code> values) for some (or all) of these groups.</p><p>The server selects one of the DH groups supported by both the server and the client. In the happy case, the server selects a group the client provided a keyshare for, and sends back a ServerHello message containing the server's keyshare. From this point on, all handshake messages between the client and server, such as certificates, extensions, etc., are encrypted using a "handshake secret" derived from the keyshares. (In the unhappy case, where the client did not provide an acceptable keyshare, the server asks the client to retry, forcing an extra round-trip).</p><p>Application data is later encrypted with a key derived from the handshake secret, as well as other data, so the security of the application data depends on the security of the key agreement. However, all existing DH groups in TLS are vulnerable to quantum algorithms.</p>
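<p>The commutativity that makes the scheme work can be checked with a toy example. The sketch below uses the multiplicative group modulo a small prime rather than an elliptic curve group, and the numbers are far too small to be secure; it only shows that Alice and Bob arrive at the same secret:</p>

```go
package main

import "fmt"

// expMod computes base^exp mod m by square-and-multiply.
func expMod(base, exp, m uint64) uint64 {
	result := uint64(1)
	base %= m
	for ; exp > 0; exp >>= 1 {
		if exp&1 == 1 {
			result = result * base % m
		}
		base = base * base % m
	}
	return result
}

func main() {
	const p = 2147483647 // small prime modulus (2^31 - 1); toy-sized, not secure
	const g = 7          // public generator, playing the role of the basepoint P

	a := uint64(123456) // Alice's secret scalar
	b := uint64(654321) // Bob's secret scalar

	A := expMod(g, a, p) // Alice's public value, analogous to [a]P
	B := expMod(g, b, p) // Bob's public value, analogous to [b]P

	sharedAlice := expMod(B, a, p) // Alice applies her secret to Bob's value
	sharedBob := expMod(A, b, p)   // Bob applies his secret to Alice's value

	fmt.Println(sharedAlice == sharedBob) // prints true: both hold the analogue of [ab]P
}
```

<p>In the additive elliptic-curve notation above, <code>expMod(g, a, p)</code> plays the role of computing <code>[a]P</code>.</p>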
    <div>
      <h3>Supersingular-isogeny Diffie-Hellman</h3>
      <a href="#supersingular-isogeny-diffie-hellman">
        
      </a>
    </div>
    <p>SIDH, proposed in 2011 by Luca De Feo and David Jao, is a relatively recent proposal for using elliptic curves to build a quantum-resistant Diffie-Hellman scheme.</p><p>Roughly speaking, rather than working within a single elliptic curve group, SIDH works within a family of related, "isogenous" elliptic curves.</p><p>An <i>isogeny</i> is a map <code>phi : E_1 -&gt; E_2</code> of elliptic curves which sends the identity element of the source curve <code>E_1</code> to the identity of the target curve <code>E_2</code>. It turns out that for every isogeny <code>phi: E_1 -&gt; E_2</code>, there's a dual isogeny <code>psi: E_2 -&gt; E_1</code>, so we can say that two curves are <i>isogenous</i> if they're linked by an isogeny.</p><p>Now we can consider an <i>isogeny graph</i>, whose edges are isogenies and whose vertices are elliptic curves. Instead of choosing secret multiplication-by-<code>n</code> maps to move around inside one elliptic curve, Alice and Bob choose secret isogenies to move around inside a family of isogenous curves (i.e., they choose a random path through the isogeny graph), and the security of the system is related to the difficulty of computing isogenies between arbitrary curves.</p><p>The resulting diagram is slightly more complicated, but structurally similar to the one above:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6yH9scwz585cUkMWgFT5Pa/bc4315826352945e63e4867c541a763d/sidh_phi_prim_kernel.jpg" />
            
            </figure><p>What exactly is going on here? The starting curve <code>E_0</code>, as well as the points <code>P_A, Q_A, P_B, Q_B</code>, are system parameters.</p><p>An isogeny is uniquely determined by its kernel (the subgroup of points on the source curve which the isogeny maps to the identity point of the target curve). To choose a secret isogeny <code>phi_A</code>, Alice chooses secret scalars <code>m_A, n_A</code>, which determine a secret point <code>[m_A]P_A + [n_A]Q_A</code>; that point generates the kernel subgroup <code>&lt;[m_A]P_A + [n_A]Q_A&gt;</code> and hence determines her secret isogeny <code>phi_A</code>. Alice evaluates <code>phi_A</code> at the points <code>P_B</code>, <code>Q_B</code>, and sends the image curve <code>E_A</code>, <code>phi_A(P_B)</code>, and <code>phi_A(Q_B)</code> to Bob, who does the same steps with <code>A</code> and <code>B</code> swapped.</p><p>Next, Alice uses <code>E_B</code>, <code>phi_B(P_A)</code>, <code>phi_B(Q_A)</code> to construct an isogeny <code>phi'_A</code> with kernel <code>&lt;[m_A]phi_B(P_A) + [n_A]phi_B(Q_A)&gt;</code>, while Bob uses <code>E_A</code>, <code>phi_A(P_B)</code>, <code>phi_A(Q_B)</code> to construct an isogeny <code>phi'_B</code> with kernel <code>&lt;[m_B]phi_A(P_B) + [n_B]phi_A(Q_B)&gt;</code>.</p><p>Now <code>phi'_A</code> maps to the curve <code>E_AB</code>, while <code>phi'_B</code> maps to the curve <code>E_BA</code>. The curves <code>E_AB</code> and <code>E_BA</code> are isomorphic. 
Elliptic curves are classified up to isomorphism by a number called the <i>j-invariant</i>, so <code>j(E_AB) = j(E_BA)</code>; this common value is the shared secret between Alice and Bob.</p><p>A detailed technical explanation of this process can be found in the <a href="https://eprint.iacr.org/2011/506">expansion to the SIDH paper</a> by Luca De Feo, David Jao, and Jérôme Plût (the diagram above is Figure 1 of that paper), and an explanation in terms of rocket ships traveling through supersingular space-time can be found in <a href="https://gist.github.com/defeo/163444a53252ba90cca6a3b550e6dd31">this article</a> by Luca De Feo. Alternately, there's <a href="https://www.youtube.com/watch?v=ctP24WKusX0">a recording here</a> of Deirdre Connolly's talk at the February 2017 Cloudflare Crypto Meetup.</p><p>In 2016, Craig Costello, Patrick Longa, and Michael Naehrig, at Microsoft Research, published a paper on <a href="https://eprint.iacr.org/2016/413">efficient algorithms for SIDH</a>, applying optimization techniques from high-speed ECC to the original SIDH proposal.</p><p>They also published a constant-time, optimized <a href="https://github.com/Microsoft/PQCrypto-SIDH">implementation</a> written in C and assembly, and a <a href="https://github.com/dconnolly/sidh-for-openssl-patch">patch to OpenSSL</a> to create SIDH ciphersuites for TLS 1.2. My Go implementation builds on their work (both algorithms and code), as discussed below.</p>
    <div>
      <h3>SIDH key agreement in Go TLS</h3>
      <a href="#sidh-key-agreement-in-go-tls">
        
      </a>
    </div>
    <p>The SIDH implementation in the <a href="https://github.com/cloudflare/p751sidh"><code>p751sidh</code> package</a> has two parts: an outer <code>p751sidh</code> package containing SIDH functionality, and an inner <code>p751toolbox</code> package providing low-level functionality.</p><p>Because SIDH is implemented in terms of operations in a large finite field, the performance of the field arithmetic is critical to the performance of the protocol. Unfortunately, this requires writing assembly, because writing high-performance arithmetic is not possible in Go — it's simply not a design goal of the language. (There are a few reasons, most notably that there's no way to directly compute the (128-bit) product of 64-bit integers.)</p><p>The code is partially derived from the Microsoft Research implementation mentioned above. In particular, the field arithmetic is ported from the MSR assembly, and the implementation strategy follows their paper. (I experimented with a prototype implementation of field arithmetic using AVX2 and unsaturated limbs, but decided not to use it, since it got similar performance at the cost of less portability and more power use).</p><p>The assembly code for the lowest level field arithmetic is oriented around pointers to fixed-size buffers; this is wrapped in a Go API modeled after the <code>big.Int</code> API. To test that the code behaves correctly, I used Go's <code>testing/quick</code> package to write property-based tests, which generate random field elements and compare the results of various operations against the same operations using <code>big.Int</code>.</p><p>Curve and isogeny functions are implemented using the Go API, and the outer-level SIDH functions achieve performance comparable to the MSR implementation. In rough benchmarks, the Go implementation appears to be within 2-6% of the MSR implementation. 
The entire implementation is constant-time.</p><p>Concretely, on a T460s, Skylake i7-6600U @2.6GHz<a href="#fn1">[1]</a>, key generation and shared secret computations take 11-13ms. Note that unlike classic Diffie-Hellman, Alice and Bob's computations are slightly different, so they have different timings.</p>
            <pre><code>BenchmarkAliceKeyGen             	     11,709,778 ns/op
BenchmarkBobKeyGen               	     13,073,380 ns/op
BenchmarkSharedSecretAlice       	     11,256,985 ns/op
BenchmarkSharedSecretBob         	     12,984,817 ns/op</code></pre>
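<p>To make the <code>testing/quick</code> strategy mentioned above concrete, here is a sketch of such a property-based test. The field below is a hypothetical toy (addition modulo a small prime), not the real 751-bit limb arithmetic, but the cross-check against <code>big.Int</code> has the same shape:</p>

```go
package main

import (
	"fmt"
	"math/big"
	"testing/quick"
)

// Toy modulus standing in for the 751-bit prime; illustration only.
const q = 1000003

// addMod is the "hand-written" arithmetic under test.
func addMod(x, y uint32) uint32 {
	return uint32((uint64(x%q) + uint64(y%q)) % q)
}

func main() {
	bigQ := big.NewInt(q)
	// Property: addMod agrees with big.Int on randomly generated inputs.
	property := func(x, y uint32) bool {
		want := new(big.Int).Add(
			big.NewInt(int64(x%q)),
			big.NewInt(int64(y%q)),
		)
		want.Mod(want, bigQ)
		return uint64(addMod(x, y)) == want.Uint64()
	}
	if err := quick.Check(property, nil); err != nil {
		fmt.Println("mismatch found:", err)
		return
	}
	fmt.Println("addMod agrees with big.Int")
}
```

<p>In a real test file this would live in a <code>func TestAddMod(t *testing.T)</code> with <code>quick.Check</code> reporting failures through <code>t</code>; the property itself is the same.</p>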
            <p>This is much more computationally expensive than a conventional ECDH key agreement, or a lattice-based key agreement. However, from the point of view of latency, this might not be so bad. For example, 12 milliseconds is roughly the round-trip time between Paris and Amsterdam, and so a key agreement requiring extra communication could easily take longer, even if the computations were less expensive.</p><p>Because SIDH is still new and unproven, the TLS integration performs a hybrid key exchange: it sends both an X25519 keyshare and an SIDH keyshare, performs both X25519 and SIDH shared secret computations, and feeds both shared secrets into the TLS key derivation mechanism. This ensures that even if SIDH turns out to be broken, the key agreement is at least as secure as X25519.</p><p>The TLS component is implemented as part of <a href="https://github.com/cloudflare/tls-tris"><code>tls-tris</code></a>, Cloudflare's fork of Go's <code>crypto/tls</code> package, which has a partial implementation of TLS 1.3, Draft 18. Because <code>tris</code> didn't support client functionality, I implemented a basic TLS 1.3 client before working on SIDH.</p><p>The hybrid key exchange is specified using the group identifier <code>0xFE24</code>. The <code>0xFE</code> places it in the private-use reserved codeblock <code>0xFE00..0xFEFF</code>, since standardizing SIDH would be premature at this time; the number <code>24</code> was chosen due to its deep <a href="https://en.wikipedia.org/wiki/24_(number)#In_mathematics">mathematical</a> <a href="http://math.ucr.edu/home/baez/numbers/24.pdf">significance</a> and connection to <a href="https://en.wikipedia.org/wiki/Monstrous_moonshine">moonshine</a>.</p><p>The entire SIDH integration takes <a href="https://github.com/cloudflare/tls-tris/commit/e47a4e7e1a5e11d1a3adab392248399ae7426695">less than 100 lines of code.</a></p>
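<p>The hybrid idea described above can be sketched in a few lines. In the real tls-tris integration the concatenated secrets are fed into TLS 1.3's HKDF-based key schedule; the hash-of-concatenation below is a simplified stand-in for that derivation step:</p>

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// hybridPremaster combines a classical and a post-quantum shared secret.
// An attacker must break BOTH components to recover the output, so the
// scheme is at least as strong as X25519 even if SIDH falls.
// (Simplified sketch: the actual TLS 1.3 integration feeds the
// concatenated secrets into the HKDF-based key schedule instead.)
func hybridPremaster(x25519Secret, sidhSecret []byte) [32]byte {
	combined := make([]byte, 0, len(x25519Secret)+len(sidhSecret))
	combined = append(combined, x25519Secret...)
	combined = append(combined, sidhSecret...)
	return sha256.Sum256(combined)
}

func main() {
	// Placeholder byte strings standing in for the real shared secrets.
	x := []byte("x25519-shared-secret")
	s := []byte("sidh-shared-secret-j-invariant")
	key := hybridPremaster(x, s)
	fmt.Printf("combined secret prefix: %x\n", key[:8])
}
```

<p>Because the digest depends on both inputs, learning only one of the two shared secrets tells an attacker nothing useful about the output.</p>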
    <div>
      <h3>Misassembly hazards in Go assembly</h3>
      <a href="#misassembly-hazards-in-go-assembly">
        
      </a>
    </div>
    <p>The Microsoft Research SIDH implementation provides x64 assembly for field arithmetic, but Go's assembler uses <a href="https://golang.org/doc/asm">a bespoke syntax derived from Plan 9</a>, so reusing their assembly meant porting it to Go assembly.</p><p>When I first did this, the code produced incorrect results, even though all the instructions should have been exactly the same. I was eventually able to find the problem by disassembling the generated Go binary, and comparing to the original assembly.</p><p>The original assembly was roughly of the form</p>
            <pre><code>  ...
  sbb    r10, rax
  movq   rax, 0
  sbb    rax, 0
  ...</code></pre>
            <p>The <code>sbb dst, src</code> instruction is "subtract with borrow"; it reads the carry flag <code>CF</code> and sets <code>dst = dst - (src + CF)</code>, setting <code>CF = 1</code> if the subtraction underflows (that is, if the original <code>dst &lt; src + CF</code>) and <code>CF = 0</code> otherwise. So, this code is supposed to set the <code>rax</code> register to <code>0</code> if the first subtraction did not underflow, and to <code>1111...11</code> if it did. (This value is used later in the computation as a mask.) However, writing</p>
            <pre><code>  ...
  SBBQ    AX, R10
  MOVQ    $0, AX
  SBBQ    $0, AX
  ...</code></pre>
            <p>does not have the same result. The reason is that the Go assembler misassembles the <code>MOVQ $0, AX</code> instruction to <code>xor eax, eax</code>. This instruction has a shorter encoding. Unfortunately, it also has different behaviour: it clears the carry flag, breaking the program.</p><p>The reason this happens is that <code>MOV</code> in Go assembly is declared to be a "pseudoinstruction", which does not necessarily correspond to a literal <code>mov</code> instruction. Unfortunately, there's no specification of which instructions are pseudoinstructions, and what their behaviour is — <code>MOV</code> in Go assembly is defined to clobber flags, but this isn't documented outside of compiler internals.</p><p>To work around this issue, we can drop literal bytes into the instruction stream. In this case, we write</p>
            <pre><code>  #define ZERO_AX_WITHOUT_CLOBBERING_FLAGS BYTE   $0xB8; BYTE $0; BYTE $0; BYTE $0; BYTE $0;
  ...
  SBBQ    AX, R10
  ZERO_AX_WITHOUT_CLOBBERING_FLAGS
  SBBQ    $0, AX</code></pre>
            <p>to insert the bytes encoding the <code>mov eax, 0</code> instruction, which leaves the carry flag intact.</p>
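<p>For readers who prefer to see these semantics outside of assembly: versions of Go newer than the one this post targets expose subtract-with-borrow directly via <code>math/bits.Sub64</code>, so the pattern above (a borrow-chained subtraction whose final borrow becomes an all-ones or all-zero mask) can be sketched in pure Go:</p>

```go
package main

import (
	"fmt"
	"math/bits"
)

// sub128 subtracts two 128-bit values held as 64-bit limbs, chaining the
// borrow exactly as consecutive sbb instructions do, and derives the
// underflow mask that the assembly builds in rax.
func sub128(aLo, aHi, bLo, bHi uint64) (lo, hi, mask uint64) {
	var borrow uint64
	lo, borrow = bits.Sub64(aLo, bLo, 0)
	hi, borrow = bits.Sub64(aHi, bHi, borrow)
	// borrow is 0 or 1; negating it in uint64 arithmetic yields
	// 0x0000...0 or 0xFFFF...F, the mask used later in the computation.
	mask = -borrow
	return
}

func main() {
	// 9 - (2^64 + 5): underflows, so the mask is all ones.
	_, _, mask := sub128(9, 0, 5, 1)
	fmt.Printf("underflow mask:    %016x\n", mask) // ffffffffffffffff

	// (2^64 + 5) - 9: no underflow, so the mask is zero.
	_, _, mask = sub128(5, 1, 9, 0)
	fmt.Printf("no-underflow mask: %016x\n", mask) // 0000000000000000
}
```

<p>Unlike the hand-written assembly, the compiler is responsible here for not clobbering the borrow between the two <code>Sub64</code> calls, which is exactly the guarantee the misassembled <code>MOVQ</code> silently broke.</p>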
    <div>
      <h3>Source Code</h3>
      <a href="#source-code">
        
      </a>
    </div>
    <p>This implementation is still experimental, and should not be used in production without review. The computational cost of SIDH may keep it from being practical for short-lived client connections (at least in the near term). However, it may be suitable for long-lived connections, such as inter-datacenter connections, where the cost of the handshake is amortized over the length of the connection.</p><p>To find out more, the SIDH implementation can be found on GitHub as the <a href="https://github.com/cloudflare/p751sidh"><code>p751sidh</code> package</a>. The TLS integration can be found on my <a href="https://github.com/cloudflare/tls-tris/tree/hdevalence/sidh"><code>hdevalence/sidh</code> branch</a> of <a href="https://github.com/cloudflare/tls-tris"><code>tls-tris</code></a>.</p><p>Thanks to Craig Costello, Diego Aranha, Deirdre Connolly, Nick Sullivan, Watson Ladd, Filippo Valsorda, and George Tankersley for their advice, comments, and discussions.</p><hr /><ol><li><p>Unfortunately, Lenovo's BIOS does not allow disabling Turbo Boost. <a href="#fnref1">↩︎</a></p></li></ol> ]]></content:encoded>
            <category><![CDATA[TLS 1.3]]></category>
            <category><![CDATA[TLS]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[Programming]]></category>
            <category><![CDATA[Cryptography]]></category>
            <category><![CDATA[Research]]></category>
            <category><![CDATA[Post-Quantum]]></category>
            <guid isPermaLink="false">Q8E7P5bXIbEbdoK0zOdXR</guid>
            <dc:creator>Henry de Valence</dc:creator>
        </item>
        <item>
            <title><![CDATA[Introducing Zero Round Trip Time Resumption (0-RTT)]]></title>
            <link>https://blog.cloudflare.com/introducing-0-rtt/</link>
            <pubDate>Wed, 15 Mar 2017 14:00:41 GMT</pubDate>
            <description><![CDATA[ Cloudflare’s mission is to help build a faster and more secure Internet. Over the last several years, the Internet Engineering Task Force (IETF) has been working on a new version of TLS, the protocol that powers the secure web. ]]></description>
            <content:encoded><![CDATA[ <blockquote><p><sub><i>NOTE: This post has been updated to reflect that the feature is no longer enabled by default. Further details </i></sub><a href="https://developers.cloudflare.com/speed/optimization/protocol/0-rtt-connection-resumption/"><sub><i>outlined here</i></sub></a><sub><i>.</i></sub></p></blockquote><p>Cloudflare’s mission is to help build a faster and more secure Internet. Over the last several years, the Internet Engineering Task Force (IETF) has been working on a new version of TLS, the protocol that powers the secure web. <a href="/introducing-tls-1-3/">Last September</a>, Cloudflare was the first service provider to enable people to use this new version of the protocol, <a href="/tag/tls%201.3/">TLS 1.3</a>, improving security and performance for millions of customers.</p><p>Today we are introducing another performance-enhancing feature: zero <a href="https://www.cloudflare.com/learning/cdn/glossary/round-trip-time-rtt/">round trip</a> time resumption, abbreviated as <i>0-RTT</i>. About 60% of the connections we see are from people who are visiting a site for the first time or revisiting after an extended period of time. TLS 1.3 speeds up these connections significantly. The remaining 40% of connections are from visitors who have recently visited a site and are resuming a previous connection. For these resumed connections, standard TLS 1.3 is safer but no faster than any previous version of TLS. 0-RTT changes this. It dramatically speeds up resumed connections, leading to a faster and smoother web experience for web sites that you visit regularly. This speed boost is especially noticeable on mobile networks.</p><p>0-RTT is <b>not</b> enabled by default for sites on Cloudflare’s free service. For paid customers, it can be enabled in the Crypto app in the Cloudflare dashboard.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7jnAtBFcCo19Qvyjxa2aVI/fe6a5395324791599304b84c0d5baa38/image05.jpg" />
            
            </figure><p>This is an experimental feature, and therefore subject to change.</p><p>If you're just looking for a live demo, click <a href="#demo">here</a>.</p>
    <div>
      <h3>The cost of latency</h3>
      <a href="#the-cost-of-latency">
        
      </a>
    </div>
    <p>A big component of web performance is transmission latency. Simply put, transmission latency is the amount of time it takes for a message to get from one party to another over a network. Lower latency means snappier web pages and more responsive APIs; when it comes to responsiveness, every millisecond counts.</p><p>The diagram below comes from a recent latency test of Cloudflare’s network using the RIPE Atlas project. In the experiment, hundreds of probes from around the world sent a single <a href="https://en.wikipedia.org/wiki/Ping_(networking_utility)">“ping”</a> message to Cloudflare and measured the time it took to get an answer in reply. This time is a good approximation of how long it takes for data to make the round trip from the probe to the server and back, so-called <i>round-trip latency</i>.</p>
            <figure>
            <a href="https://atlas.ripe.net/measurements/7883648/">
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3eiU71K5ZPEX2slK1J1mYb/68441b9ae1d68d8cb9ef7579e84625fc/image03.png" />
            </a>
            </figure><p>Latency is usually measured in milliseconds or thousandths of a second. A thousandth of a second may not seem like a long time, but milliseconds add up quickly. It’s <a href="https://www.nngroup.com/articles/response-times-3-important-limits/">generally accepted</a> that the threshold over which humans no longer perceive something as instantaneous is 100ms. Anything above 100ms will seem fast, but not immediate. For example, Usain Bolt’s reaction time out of the starting blocks in the hundred meter sprint is around <a href="https://www.nytimes.com/interactive/2016/08/13/sports/olympics/can-you-beat-usain-bolt-out-of-the-blocks.html">155ms</a>, a good reference point for thinking about latency. Let’s use 155ms, a fast but human-perceptible amount of time, as a unit of time measurement. Call 155ms “one Bolt.”</p><p>The map above shows that most probes have very low round-trip latency (&lt;20ms) to Cloudflare’s global network. However, for a percentage of probes, the time it takes to reach the nearest Cloudflare data center is much longer, in some cases exceeding 300ms (or two Bolts!).</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/59Z9OOkEOcRLQzaJM6A2ZB/4ee8f0a294467572f88e092df1dffbc9/image01.jpg" />
            
            </figure><p><sub>Creative Commons Attribution 2.0 Generic [</sub><a href="https://commons.wikimedia.org/wiki/File:London_2012_200m_heat_1_start.jpg"><sub>Nick J. Webb</sub></a><sub>]</sub></p><p>Connections that travel over longer distances have higher latency. Data travel speed is limited by the speed of light. When Cloudflare opens a new data center in a <a href="/panama-expands-cloudflare-network-to-50-countries/">new city</a>, latency is reduced for people in the surrounding areas when visiting sites that use Cloudflare. This improvement is often simply because data has a shorter distance to travel.</p><p>Geographic proximity is not the only contributor to latency. WiFi and cellular networks can add <a href="https://hpbn.co/wifi/#measuring-your-wifi-first-hop-latency">tens</a> or even <a href="https://opensignal.com/blog/2014/03/10/lte-latency-how-does-it-compare-to-other-technologies/">hundreds</a> of milliseconds to transmission latency. For example, using a 3G cellular network adds around 1.5 Bolts to every transmission. Satellite Internet connections are even worse, adding up to <a href="https://arstechnica.com/information-technology/2013/02/satellite-internet-faster-than-advertised-but-latency-still-awful/">4 Bolts to every transmission</a>.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1wIWoXVW0V1AugzJb4lVoh/3269fe660fdf6bebaa8577de42d98432/boltchart.jpg" />
            
            </figure><p>Round-trip latency makes an especially big difference for HTTPS. When making a secure connection to a server, there is an additional set-up phase that can require up to three messages to make the round trip between the client and the server before the first request can even be sent. For a visitor 250ms away, this can result in an excruciating one second (1000ms) delay before a site starts loading. During this time Usain Bolt has run 10 meters and you’re still waiting for a web page. TLS 1.3 and 0-RTT can’t reduce the round-trip latency of a transmission, but they can reduce the number of round trips required for setting up an HTTPS connection.</p>
    <div>
      <h3>HTTPS round trips</h3>
      <a href="#https-round-trips">
        
      </a>
    </div>
    <p>For a browser to download a web page over HTTPS, there is some setup that goes on behind the scenes. Here are the 4 phases that need to happen the first time your browser tries to access a site.</p>
    <div>
      <h4>Phase 1: DNS Lookup</h4>
      <a href="#phase-1-dns-lookup">
        
      </a>
    </div>
    <p>Your browser needs to convert the hostname of the website (say blog.cloudflare.com) into an Internet IP address (like 2400:cb00:2048:1::6813:c166 or 104.19.193.102) before it can connect to it. DNS resolvers operated by your ISP usually cache the IP address for popular domains, and latency to your ISP is fairly low, so this step often takes a negligible amount of time.</p>
    <div>
      <h4>Phase 2: TCP Handshake (1 round trip)</h4>
      <a href="#phase-2-tcp-handshake-1-round-trip">
        
      </a>
    </div>
    <p>The next step is to establish a TCP connection to the server. This phase consists of the client sending a SYN packet to the server, and the server responding with a SYN-ACK packet. The details don’t matter as much as the fact that this requires data to be sent from client to server and back. This takes one round trip.</p>
    <div>
      <h4>Phase 3: TLS Handshake (2 round trips)</h4>
      <a href="#phase-3-tls-handshake-2-round-trips">
        
      </a>
    </div>
    <p>In this phase, the client and server exchange cryptographic key material and set up an encrypted connection. For TLS 1.2 and earlier, this takes <a href="/keyless-ssl-the-nitty-gritty-technical-details/">two round trips</a>.</p>
    <div>
      <h4>Phase 4: HTTP (1 round trip)</h4>
      <a href="#phase-4-http-1-round-trip">
        
      </a>
    </div>
    <p>Once the TLS connection has been established, your browser can send an encrypted HTTP request using it. This can be a GET request for a specific URL such as /. The server will respond with an HTTP response containing the webpage’s HTML and the browser will start displaying the page.</p><p>Assuming DNS is instantaneous, this leaves 4 round trips before the browser can start showing the page. If you’re visiting a site you’ve recently connected to, the TLS handshake phase can be shortened from two round trips to one with <a href="/tls-session-resumption-full-speed-and-secure/">TLS session resumption</a>.</p><p>This leaves the following minimum wait times:</p><ul><li><p>New Connection: 4 RTT + DNS</p></li><li><p>Resumed Connection: 3 RTT + DNS</p></li></ul>
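<p>As a sanity check on these counts, here is the wait-time arithmetic for a hypothetical client with 250ms of round-trip latency:</p>

```go
package main

import "fmt"

func main() {
	const rtt = 250 // ms; example round-trip latency to the server

	// Round trips before the first HTTP response, DNS assumed instant.
	cases := []struct {
		name       string
		roundTrips int
	}{
		{"new connection (TCP + 2xTLS + HTTP)", 4},
		{"resumed connection (TCP + 1xTLS + HTTP)", 3},
	}
	for _, c := range cases {
		fmt.Printf("%s: %d RTT = %d ms\n", c.name, c.roundTrips, c.roundTrips*rtt)
	}
}
```

<p>At 250ms per round trip, a fresh pre-TLS-1.3 connection spends a full second in setup before any content arrives, which is the one-second delay described earlier.</p>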
    <div>
      <h3>How do TLS 1.3 and 0-RTT improve connection times?</h3>
      <a href="#how-do-tls-1-3-and-0-rtt-improve-connection-times">
        
      </a>
    </div>
    <p>One of the biggest advantages of TLS 1.3 over earlier versions is that it only requires one round trip to set up the connection, resumed or not. This provides a significant speed up for new connections, but none for resumed connections. Our measurements show that around 40% of HTTPS connections are resumptions (either via session IDs or session tickets). With 0-RTT, a round trip can be eliminated for most of that 40%.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/53zGSmQOGgFg0ARVyqQu0B/8f7c5096eb40fe509a5991c29b314a06/image04.jpg" />
            
            </figure><p>TLS connection reuse by time of day.</p><p>To summarize the performance differences:</p><p><i>TLS 1.2 (and earlier)</i></p><ul><li><p>New Connection: 4 RTT + DNS</p></li><li><p>Resumed Connection: 3 RTT + DNS</p></li></ul><p><i>TLS 1.3</i></p><ul><li><p>New Connection: 3 RTT + DNS</p></li><li><p>Resumed Connection: 3 RTT + DNS</p></li></ul><p><i>TLS 1.3 + 0-RTT</i></p><ul><li><p>New Connection: 3 RTT + DNS</p></li><li><p>Resumed Connection: 2 RTT + DNS</p></li></ul><p>The performance gains are huge.</p>
    <div>
      <h3>0-RTT in action</h3>
      <a href="#0-rtt-in-action">
        
      </a>
    </div>
    <p>Both <a href="https://www.mozilla.org/en-US/firefox/channel/desktop/">Firefox Beta</a> and <a href="https://www.google.com/chrome/browser/beta.html">Chrome Beta</a> have TLS 1.3 enabled by default. The stable versions of Chrome and Firefox also ship with TLS 1.3 support, but it has to be enabled manually for now. The only browsers that support 0-RTT as of March 2017 are Firefox Nightly and Aurora. To enable it, do the following:</p><ul><li><p>Enter <code>about:config</code> in the address bar</p></li><li><p>Ensure <code>security.tls.version.max</code> is 4 (this enables TLS 1.3)</p></li><li><p>Set <code>security.tls.enable_0rtt_data</code> to true</p></li></ul><p>This demo loads an image from a server that runs the Cloudflare TLS 1.3 0-RTT proxy. In order to emphasize the latency differences, we used Cloudflare's new <a href="https://www.cloudflare.com/load-balancing/">DNS Load Balancer</a> to direct you to a far away server. If the image is loaded over 0-RTT it will be served orange, otherwise black, based on the <code>CF-0RTT-Unique</code> header.</p><p>The image is loaded twice: with and without a query string. 
0-RTT is disabled transparently when a query string is used to prevent replays.</p><p>The connection is pre-warmed, Keep-Alives are off and caching is disabled to simulate the first request of a resumed connection.</p><hr /><p>To see what’s going on under the hood, take a look in Firefox’s Developer Tools. We’ve taken a screenshot of a version of this demo as run by a user in San Francisco. In the first screenshot, the image is served with TLS 1.3, in the second with TLS 1.3 and 0-RTT.</p>
            <figure>
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4IFlqIKCZ4NeVGJv3uuidK/6b46ab2c2cdc758cf452e0c2b50c8289/image02.jpg" />
            </figure><p>In the top image, you can see that the blue “Waiting” bar is around 250ms shorter than it is for the second image. This 250ms represents the time it took for the extra round trip between the browser and the server. If you’re in San Francisco, 0-RTT enables the image to load about a quarter of a second faster than it would have otherwise.</p>
    <div>
      <h3>What’s the catch?</h3>
      <a href="#whats-the-catch">
        
      </a>
    </div>
    <p>0-RTT is cutting edge protocol technology. With it, encrypted HTTPS requests become just as fast as unencrypted HTTP requests. This sort of breakthrough comes at a cost: the security properties that TLS provides to 0-RTT requests are slightly weaker than those it provides to regular requests. However, this weakness is manageable, and applications and websites that follow HTTP semantics shouldn’t have anything to worry about. The weakness has to do with replays.</p><p>Unlike any other requests sent over TLS, requests sent as part of 0-RTT resumption are vulnerable to what’s called a replay attack. If an attacker has access to your encrypted connection, they can take a copy of the encrypted 0-RTT data (containing your first request) and send it to the server again pretending to be you. This can result in the server seeing repeated requests from you when you only sent one.</p><p>This doesn’t sound like a big deal until you consider that HTTP requests are used for more than just downloading web pages. For example, HTTP requests can trigger transfers of money. If someone makes a request to their bank to “pay $1000 to Craig” and that request is replayed, it could cause Craig to be paid multiple times. A good deal if you’re Craig.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6npVssOtArI92M8EEGPgvM/05162655ebe86e376b1abd35d934c4ba/image00.jpg" />
            
            </figure><p>Luckily, the example above is somewhat contrived. Applications need to be replay safe to work with modern browsers, whether they support 0-RTT or not. Browsers replay data all the time due to normal network glitches, and researchers from Google have even shown that attackers can <a href="https://vnhacker.blogspot.com/2015/12/bad-life-advice-never-give-up-replay.html">trick the browser into replaying requests</a> in almost any circumstance by triggering a particular type of network error. In order to be resilient against this reality, well-designed web applications that handle sensitive requests use application-layer mechanisms to prevent replayed requests from affecting them.</p><p>Although web applications should be replay resilient, that’s not always the reality. To protect these applications from malicious replays, Cloudflare took an extremely conservative approach to choosing which 0-RTT requests would be answered. Specifically, only <b>GET requests with no query parameters</b> are answered over 0-RTT. According to the HTTP specification, GET requests are supposed to be <a href="https://www.w3.org/Protocols/rfc2616/rfc2616-sec9.html">idempotent</a>, meaning that they don’t change the state on the server and shouldn’t be used for things like funds transfer. We also implement a maximum size of 0-RTT requests, and limit how long they can be replayed.</p><p>Furthermore, Cloudflare can uniquely identify connection resumption attempts, so we relay this information to the origin by adding an extra header to 0-RTT requests. This header uniquely identifies the request, so if one gets repeated, the origin will know it's a replay attack.</p><p>Here’s what the header looks like:</p>
            <pre><code>Cf-0rtt-Unique: 37033bcb6b42d2bcf08af3b8dbae305a</code></pre>
            <p>The hexadecimal value is derived from a piece of data called a <a href="https://tlswg.github.io/tls13-spec/#pre-shared-key-extension">PSK binder</a>, which is unique per 0-RTT request.</p><p>Generally speaking, 0-RTT is safe for most web sites and applications. If your web application does strange things and you’re concerned about its replay safety, consider not using 0-RTT until you can be certain that there are no negative effects.</p>
    <div>
      <h3>Conclusion</h3>
      <a href="#conclusion">
        
      </a>
    </div>
    <p>TLS 1.3 is a big step forward for web performance and security. By combining TLS 1.3 with 0-RTT, the performance gains are even more dramatic. Combine this with HTTP/2 and the encrypted web has never been faster, especially on mobile networks. Cloudflare is happy to be the first to introduce this feature on a wide scale.</p> ]]></content:encoded>
            <category><![CDATA[TLS]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[TLS 1.3]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[Cryptography]]></category>
            <guid isPermaLink="false">7hwYueM2wLGedpCkezUfnw</guid>
            <dc:creator>Nick Sullivan</dc:creator>
        </item>
        <item>
            <title><![CDATA[NCC Group's Cryptography Services audits our Go TLS 1.3 stack]]></title>
            <link>https://blog.cloudflare.com/ncc-groups-cryptography-services-audit-of-tls-1-3/</link>
            <pubDate>Wed, 15 Feb 2017 00:49:19 GMT</pubDate>
            <description><![CDATA[ The Cloudflare TLS 1.3 beta is run by a Go implementation of the protocol based on the Go standard library, crypto/tls.  ]]></description>
            <content:encoded><![CDATA[ <p><i>The following is a guest post from Scott Stender</i></p><p>The Cloudflare <a href="/tls-1-3-explained-by-the-cloudflare-crypto-team-at-33c3/">TLS 1.3 beta</a> is run by a Go implementation of the protocol based on the Go standard library, <code>crypto/tls</code>. Starting from that excellent Go codebase allowed us to quickly start experimenting, to be the first wide server deployment of the protocol, and to effectively track the changes to the specification draft.</p><p>Of course, the security of a TLS implementation is critical, so we engaged NCC Group's Cryptography Services to perform an audit at the end of 2016.</p><p>You can find the codebase <a href="https://github.com/cloudflare/tls-tris">on the Cloudflare GitHub</a>. It's a drop-in replacement for <code>crypto/tls</code> and comes with a <code>go</code> wrapper to patch the standard library as needed.</p><p>The code is developed in the open but is currently targeted only at internal use: the repository is frequently rebased and the API is not guaranteed to be stable or fully documented. You can take <a href="https://gist.github.com/FiloSottile/37d6516af411582e2aa35a981bf12102">a sneak peek at the API here</a>.</p><p>The final goal is to upstream the patches to the Go project so that all users of the Go standard library benefit from it. You can <a href="https://go-review.googlesource.com/q/status:open+project:go+branch:dev.tls">follow the process here</a>.</p><p>Below we republish the article about the audit that first appeared <a href="https://www.nccgroup.trust/us/about-us/newsroom-and-events/blog/2017/february/ncc-groups-cryptography-services-complete-an-audit-of-cloudfares-tls13/">on the NCC Group's blog</a>.</p>
    <div>
      <h3>NCC Group's Cryptography Services Complete an Audit of Cloudflare's TLS1.3</h3>
      <a href="#ncc-groups-cryptography-services-complete-an-audit-of-cloudflares-tls1-3">
        
      </a>
    </div>
    <p>NCC Group's Cryptography Services practice recently completed a two-week audit of Cloudflare's TLS 1.3 implementation. The audit took place between November 11, 2016 and December 9, 2016.</p><p>The TLS standard was last updated almost ten years ago and this version brings new features and a simplified handshake to the protocol. Many old cryptographic algorithms have been replaced with more modern ones, key exchanges have forward secrecy by default, the handshake phase will be faster, certificates will be able to enjoy security-proven signature schemes, MAC-then-Encrypt constructions are out—the weakest features of older TLS versions have been updated or removed.</p><p>Cryptography Services analyzed Cloudflare's TLS 1.3 implementation for protocol-level flaws and for deviations from the draft specification. The team found a small number of issues during the review—all of which were promptly fixed—and was pleased with the quality of the code.</p><p>Cloudflare built their implementation of TLS 1.3 on the Go programming language's standard TLS library, making use of the existing base to correctly and safely parse TLS packets. While building on top of older versions can be challenging, Cloudflare has added TLS 1.3 code in a safe and segregated way, with new defenses against downgrade attacks being added in the final implementation of the specification. This permits support for older versions of TLS while being free from unexpected conflicts or downgrades.</p><p>Using Go and its standard libraries enables Cloudflare to avoid common implementation issues stemming from vulnerable strcpy and memcpy operations, pointer arithmetic and manual memory management while providing a best-in-class crypto API.</p><p>Cloudflare implemented a conservative subset of the TLS 1.3 specification. State-of-the-art algorithms, such as Curve25519, are given priority over legacy algorithms. Session resumption is limited to the forward secure option. 
Cloudflare's implementation also considers efficiency, using AES-GCM when it detects accelerated hardware support and the faster-in-software ChaCha20-Poly1305 in its absence.</p><p>There is still work to be done before TLS 1.3 enjoys large scale adoption. Cloudflare is paving the way with its reliable server implementation of TLS 1.3, and Firefox and Chrome's client implementations make end-to-end testing of the draft specification possible. NCC Group applauds the work of the IETF and these early implementers.</p> ]]></content:encoded>
            <category><![CDATA[TLS 1.3]]></category>
            <category><![CDATA[Beta]]></category>
            <category><![CDATA[Security]]></category>
            <guid isPermaLink="false">7Ku7L3U2Iem97ina6czjj5</guid>
            <dc:creator>Guest Author</dc:creator>
        </item>
        <item>
            <title><![CDATA[TLS 1.3 explained by the Cloudflare Crypto Team at 33c3]]></title>
            <link>https://blog.cloudflare.com/tls-1-3-explained-by-the-cloudflare-crypto-team-at-33c3/</link>
            <pubDate>Wed, 01 Feb 2017 14:57:00 GMT</pubDate>
            <description><![CDATA[ Nick Sullivan and I gave a talk about TLS 1.3 at 33c3, the latest Chaos Communication Congress. The congress, attended by more than 13,000 hackers in Hamburg, has been one of the hallmark events of the security community for more than 30 years. ]]></description>
            <content:encoded><![CDATA[ <p><a href="/author/nick-sullivan/">Nick Sullivan</a> and I gave a talk about <a href="/tag/tls%201.3/">TLS 1.3</a> at <a href="https://events.ccc.de/tag/33c3/">33c3</a>, the latest Chaos Communication Congress. The congress, attended by more than 13,000 hackers in Hamburg, has been one of the hallmark events of the security community for more than 30 years.</p><p>You can watch the recording below, or <a href="https://media.ccc.de/v/33c3-8348-deploying_tls_1_3_the_great_the_good_and_the_bad">download it in multiple formats and languages on the CCC website</a>.</p><p>The talk introduces TLS 1.3 and explains how it works in technical detail, why it is faster and more secure, and touches on its history and current status.</p><p>The <a href="https://speakerdeck.com/filosottile/tls-1-dot-3-at-33c3">slide deck is also online</a>.</p><p>This was an expanded and updated version of the <a href="/tls-1-3-overview-and-q-and-a/">internal talk previously transcribed on this blog</a>.</p>
    <div>
      <h3>TLS 1.3 hits Chrome and Firefox Stable</h3>
      <a href="#tls-1-3-hits-chrome-and-firefox-stable">
        
      </a>
    </div>
    <p>In related news, TLS 1.3 is reaching a percentage of Chrome and Firefox users this week, so websites with the Cloudflare TLS 1.3 beta enabled will load faster and more securely for all those new users.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/lIyLFsHXlAipFcgZ1nPWr/e71e81c8a7849214051b75430e1c169e/Screen-Shot-2017-01-30-at-20.14.53.png" />
            
            </figure><p>You can enable the TLS 1.3 beta from the Crypto section of your control panel.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7jji24riIIZQ2OEC6Xc93r/88d0ae02211b14fd407c065c5880ad31/image00.png" />
            
            </figure> ]]></content:encoded>
            <category><![CDATA[TLS 1.3]]></category>
            <category><![CDATA[TLS]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[Chrome]]></category>
            <category><![CDATA[Firefox]]></category>
            <category><![CDATA[Beta]]></category>
            <category><![CDATA[Cryptography]]></category>
            <guid isPermaLink="false">2zdHVDhrFKGUtMgVjYallG</guid>
            <dc:creator>Filippo Valsorda</dc:creator>
        </item>
        <item>
            <title><![CDATA[Cloudflare Crypto Meetup #4: November 22]]></title>
            <link>https://blog.cloudflare.com/cloudflare-crypto-meetup-4-november-22-2/</link>
            <pubDate>Wed, 02 Nov 2016 23:45:38 GMT</pubDate>
            <description><![CDATA[ Come join us at Cloudflare HQ in San Francisco on Tuesday, November 22 for another cryptography meetup. We had such a great time at the last one, we decided to host another. ]]></description>
            <content:encoded><![CDATA[ <p>Come join us at Cloudflare HQ in San Francisco on Tuesday, November 22 for another cryptography meetup. We had such a great time at the <a href="/cloudflare-crypto-meetup-april-21-2016/">last one</a>, we decided to host another.</p><p>We’ll start the evening at 6:00p.m. with time for networking, followed by short talks from leading experts starting at 6:30p.m. Pizza and beer are provided! RSVP <a href="https://www.meetup.com/CloudFlare-Meetups/events/235308865/"><b>here</b></a>.</p><p>Here are the confirmed speakers:</p>
    <div>
      <h3>Emily Stark</h3>
      <a href="#emily-stark">
        
      </a>
    </div>
    <p>Emily Stark is a software engineer on the Google Chrome security team, where she focuses on making TLS more usable and secure. She spends lots of time analyzing field data about the HTTPS ecosystem and improving web platform features like Referrer Policy and Content Security Policy that help developers migrate their sites to HTTPS. She has also worked on the DevTools security panel and the browser plumbing that supports other security UI surfaces like the omnibox. (That green lock icon is more complicated than you'd think!)</p><p>Previously, she was a core developer at Meteor Development Group, where she worked on web framework security and internal infrastructure, and a graduate student researching client-side cryptography in web browsers. Emily has a master's degree from MIT and a bachelor's degree from Stanford, both in computer science.</p>
    <div>
      <h4>How hard is it to send an HTTPS request?</h4>
      <a href="#how-hard-is-it-to-send-an-https-request">
        
      </a>
    </div>
    <p>In theory, sending an HTTP request over TLS is a simple matter of performing a handshake and validating a certificate chain. In practice, however, client, server, and network misconfigurations get in the way, causing hundreds of millions of HTTPS errors in Chrome every month. This talk will describe projects and results from the Chrome security team's work to understand these errors, fix as many of them as possible, and prevent new problems from cropping up.</p>
    <div>
      <h3>Jon McLachlan</h3>
      <a href="#jon-mclachlan">
        
      </a>
    </div>
    <p>Jon McLachlan leads security engineering at Symphony. Prior to joining Symphony he was a senior security engineer at Apple where he worked on foundational protection mechanisms for iTunes as well as digital rights management. Jon has 8 patents to his name, all in the area of security. He is a passionate biker and husband.</p><p>Symphony has taken a unique approach to delivering enterprise-grade secure communications. During the presentation you will learn how most cloud services today require an act of faith on behalf of the customer who must ‘trust’ the service provider. Symphony’s approach means that Symphony employees are unable to see customer data – it’s protected using keys controlled by customer-owned infrastructure. Symphony sets a new benchmark for safe and secure cloud communications.</p>
    <div>
      <h4>An Introduction to Symphony Security</h4>
      <a href="#an-introduction-to-symphony-security">
        
      </a>
    </div>
    <p>An Introduction to Symphony Security will explore how we selected our threat model, some of the technical challenges that the threat model presented, and how we overcame these technical challenges to deliver a secure, seamless communication platform.</p>
    <div>
      <h3>Special Guest</h3>
      <a href="#special-guest">
        
      </a>
    </div>
    <p>Eric Rescorla from Mozilla will be speaking about TLS 1.3.</p>
    <div>
      <h3>Videos from last time</h3>
      <a href="#videos-from-last-time">
        
      </a>
    </div>
    <p>We had a great turnout last time. Here are some videos of the talks:</p><p>Brian Warner: <a href="https://www.youtube.com/watch?v=v5vaBaHR9gQ">Magic Wormhole</a>.</p><p>Zakir Durumeric: <a href="https://www.youtube.com/watch?v=6ZND55Rcq4I">Real World Email Delivery Security</a>.</p> ]]></content:encoded>
            <category><![CDATA[Events]]></category>
            <category><![CDATA[TLS 1.3]]></category>
            <category><![CDATA[TLS]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[MeetUp]]></category>
            <category><![CDATA[Cloudflare Meetups]]></category>
            <category><![CDATA[Cryptography]]></category>
            <guid isPermaLink="false">3rkbgBVN6dRllxNP8qs4EF</guid>
            <dc:creator>Nick Sullivan</dc:creator>
        </item>
        <item>
            <title><![CDATA[TLS nonce-nse]]></title>
            <link>https://blog.cloudflare.com/tls-nonce-nse/</link>
            <pubDate>Wed, 12 Oct 2016 15:05:00 GMT</pubDate>
            <description><![CDATA[ One of the base principles of cryptography is that you can't just encrypt multiple messages with the same key. At the very least, what will happen is that two messages that have identical plaintext will also have identical ciphertext, which is a dangerous leak.  ]]></description>
            <content:encoded><![CDATA[ <p>One of the base principles of cryptography is that you can't <i>just</i> encrypt multiple messages with the same key. At the very least, what will happen is that two messages that have identical plaintext will also have identical ciphertext, which is a dangerous leak. (This is similar to why you can't encrypt blocks with <a href="https://blog.filippo.io/the-ecb-penguin/">ECB</a>.)</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7e62kSNRA1U4fnYOyOzcnf/f7170ef03a53336e04f0ce640fe779fe/19fq1n.jpg" />
            
            </figure><p>If you think about it, a pure encryption function is just like any other pure computer function: deterministic. Given the same set of inputs (key and message) it will always return the same output (the encrypted message). And we don't want an attacker to be able to tell that two encrypted messages came from the same plaintext.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2ypDxOdFtrCyJKATmJ8MrZ/6993281f817e294e4c93aded559575b4/Nonces.001-1.png" />
            
            </figure><p>The solution is the use of IVs (Initialization Vectors) or nonces (numbers used once). These are byte strings that are different for each encrypted message. They are the source of non-determinism that is needed to make duplicates indistinguishable. They are usually not secret, and are typically prepended to the ciphertext, since they are necessary for decryption.</p><p>The distinction between IVs and nonces is controversial and not binary. Different encryption schemes require different properties to be secure: some just need them to never repeat, in which case we commonly refer to them as nonces; some also need them to be random, or even unpredictable, in which case we commonly call them IVs.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3zLMUUtK3IC6m9cEqDEeN8/b1e3f972e89120f0251a86eb9b9043dd/Nonces.002-1.png" />
            
            </figure>
    <div>
      <h3>Nonces in TLS</h3>
      <a href="#nonces-in-tls">
        
      </a>
    </div>
    <p>TLS at its core is about encrypting a stream of packets, or more properly "records". The initial handshake takes care of authenticating the connection and generating the keys, but then it's up to the record layer to encrypt many records with that same key. Enter nonces.</p><p>Nonce management can be a hard problem, but TLS is close to the best case: keys are never reused across connections, and the records have sequence numbers that both sides keep track of. However, it took the protocol a few revisions to fully take advantage of this.</p><p>The resulting landscape is a bit confusing (including one or two attack names):</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6SKHBZfapSldYN2czxNNuH/7f2d30dd41d885dc13b76d9ac6668c35/Nonces-table.png" />
            
            </figure>
    <div>
      <h4>RC4 and stream ciphers</h4>
      <a href="#rc4-and-stream-ciphers">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/VnXTZTwdpRaoyqySKVcDn/ae6e025045313ccfc7da498555196386/Nonces-RC4-black.png" />
            
            </figure><p>RC4 is a stream cipher, so it doesn't have to treat records separately. The cipher generates a continuous keystream which is XOR'd with the plaintexts as if they were just portions of one big message. Hence, there are no nonces.</p><p>RC4 <a href="/tag/rc4/">is broken</a> and was removed from TLS 1.3.</p>
    <div>
      <h4>CBC in TLS 1.0</h4>
      <a href="#cbc-in-tls-1-0">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4qMGKsTc0nc47WCKPrCvjk/f5dbe96ec072da2153b4549c5094d454/Nonces-CBC-1.0-black-1.png" />
            
            </figure><p>CBC in TLS 1.0 works similarly to RC4: the cipher is instantiated once, and then the records are encrypted as part of one continuous message.</p><p>Sadly that means that the IV for the next record <a href="https://en.wikipedia.org/wiki/Block_cipher_mode_of_operation#Cipher_Block_Chaining_.28CBC.29">is the last block of ciphertext of the previous record</a>, which the attacker can observe. Being able to predict the IV breaks CBC security, and that led to the <a href="https://www.imperialviolet.org/2011/09/23/chromeandbeast.html">BEAST attack</a>. BEAST is mitigated by <a href="https://www.imperialviolet.org/2012/01/15/beastfollowup.html">splitting records in two</a>, which effectively randomizes the IV, but this is a client-side fix, out of the server's control.</p>
    <div>
      <h4>CBC in TLS 1.1+</h4>
      <a href="#cbc-in-tls-1-1">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5OaNhFttOtRZjYfEgeIq6v/e7a957af57c1116317a13befa7d5f6b6/Nonces-CBC-explicit-black.png" />
            
            </figure><p>TLS 1.1 fixed BEAST by simply making IVs explicit, sending the IV with each record (with the network overhead that comes with that).</p><p>AES-CBC IVs are 16 bytes (128 bits), so using random bytes is sufficient to prevent collisions.</p><p>CBC has <a href="/yet-another-padding-oracle-in-openssl-cbc-ciphersuites/">other nasty design issues</a> and has been removed in TLS 1.3.</p>
    <div>
      <h4>TLS 1.2 GCM</h4>
      <a href="#tls-1-2-gcm">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3bbl3iFzHYooGKXgbGZYCo/389f3e082c1097f97933391d8c7ed9fd/Nonces-GCM-black-2.png" />
            
            </figure><p>TLS 1.2 inherited the 1.1 explicit IVs. It also introduced <a href="/it-takes-two-to-chacha-poly/">AEADs</a> like AES-GCM. The record nonce in 1.2 AES-GCM is a concatenation of a fixed per-connection IV (4 bytes, derived at the same time as the key) and an explicit per-record nonce (8 bytes, sent on the wire).</p><p>Since <a href="https://en.wikipedia.org/wiki/Birthday_problem">8 random bytes is too short to guarantee uniqueness</a>, 1.2 GCM implementations have to use the sequence number or a counter. If you are thinking "but what sense does it make to use an explicit IV, sent on the wire, which is just the sequence number that both parties know anyway", well... yeah.</p><p>Implementations not using a counter/sequence-based AES-GCM nonce were found to be indeed vulnerable by the "<a href="https://github.com/nonce-disrespect/nonce-disrespect">Nonce-Disrespecting Adversaries</a>" paper.</p>
    <div>
      <h4>TLS 1.3</h4>
      <a href="#tls-1-3">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6X6R46MkjuADDzdspGpNAp/313ba89f66a1a304c169d1e9cd801eaf/Nonces-1.3-black-1.png" />
            
            </figure><p>TLS 1.3 finally took advantage of the sequential nature of TLS records and removed the free-form explicit IVs. It uses instead a combination of a fixed per-connection IV (derived at the same time as the key) and the sequence number, XORed—not concatenated.</p><p>This way the entire nonce length is random-looking, nonces can never be reused as the sequence number monotonically increases, and there is no network overhead.</p>
    <div>
      <h4>ChaCha20-Poly1305</h4>
      <a href="#chacha20-poly1305">
        
      </a>
    </div>
    <p>The <a href="/do-the-chacha-better-mobile-performance-with-cryptography/">ChaCha20-Poly1305 ciphersuite</a> uses the same "fixed IV XORed with the sequence number" scheme as TLS 1.3, even when used in TLS 1.2.</p><p>While 1.3 AEADs and 1.2 ChaCha20 use the same nonce scheme, when used in 1.2 ChaCha20 still puts the sequence number, type, version and length in the additional authenticated data. 1.3 makes all those either implicit or part of the encrypted payload.</p>
    <div>
      <h3>To recap</h3>
      <a href="#to-recap">
        
      </a>
    </div>
    <ul><li><p>RC4 is a stream cipher, so it has no per-record nonce.</p></li><li><p>CBC in TLS 1.0 used to work similarly to RC4. Sadly, that was vulnerable to BEAST.</p></li><li><p>TLS 1.1 fixed BEAST by simply making IVs explicit and random.</p></li><li><p>TLS 1.2 AES-GCM uses a concatenation of a fixed IV and an explicit sequential nonce.</p></li><li><p>TLS 1.3 finally uses a simple fixed IV XORed with the sequence number.</p></li><li><p>ChaCha20-Poly1305 uses the same scheme as TLS 1.3 even when used in TLS 1.2.</p></li></ul>
    <div>
      <h2>Nonce misuse resistance</h2>
      <a href="#nonce-misuse-resistance">
        
      </a>
    </div>
    <p>In the introduction we used the case of a pair of identical message and key to illustrate the most intuitive issue of missing or reused nonces. However, depending on the cipher, other things can go wrong when the same nonce is reused, or is predictable.</p><p>A repeated nonce often entirely breaks the security properties of the connection. For example, AES-GCM <a href="https://github.com/nonce-disrespect/nonce-disrespect">leaks the authentication key altogether</a>, allowing an attacker to fake packets and inject data.</p><p>As part of the trend of making cryptography primitives less dangerous for implementers to use, research is focusing on mitigating the adverse consequences of nonce reuse. The property of these new schemes is called <a href="https://www.lvh.io/posts/nonce-misuse-resistance-101.html">nonce misuse resistance</a>.</p><p>However, they still have to see wider adoption and standardization, which is why a solid protocol design like the one in TLS 1.3 is critical to prevent this class of attacks.</p><p><i>Does painting overviews of technical topics like this sound satisfying to you? </i><a href="https://www.cloudflare.com/join-our-team/"><i>We are hiring in London, Austin (TX), Champaign (IL), San Francisco and Singapore</i></a><i>!</i></p>
            <category><![CDATA[TLS]]></category>
            <category><![CDATA[TLS 1.3]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[Encryption]]></category>
            <category><![CDATA[Cryptography]]></category>
            <guid isPermaLink="false">1xqvUPkstNMoVVXMQ3fy5C</guid>
            <dc:creator>Filippo Valsorda</dc:creator>
        </item>
        <item>
            <title><![CDATA[An overview of TLS 1.3 and Q&A]]></title>
            <link>https://blog.cloudflare.com/tls-1-3-overview-and-q-and-a/</link>
            <pubDate>Fri, 23 Sep 2016 16:01:18 GMT</pubDate>
            <description><![CDATA[ The CloudFlare London office hosts weekly internal Tech Talks (with free lunch picked by the speaker). My recent one was an explanation of the latest version of TLS, 1.3, how it works and why it's faster and safer. ]]></description>
            <content:encoded><![CDATA[ <p>The CloudFlare London office hosts weekly internal Tech Talks (with free lunch picked by the speaker). My recent one was an explanation of the latest version of <a href="https://www.cloudflare.com/ssl/">TLS, 1.3</a>, how it works and why it's faster and safer.</p><p>You can <a href="https://vimeo.com/177333631">watch the complete talk</a> below or just read my summarized transcript.</p><p><i>Update: you might want to watch my more recent and extended </i><a href="/tls-1-3-explained-by-the-cloudflare-crypto-team-at-33c3/"><i>33c3 talk</i></a><i> instead.</i></p><p><b>The Q&amp;A session is open!</b> Send us your questions about TLS 1.3 at <a>tls13@cloudflare.com</a> or leave them in the Disqus comments below and I'll answer them in an upcoming blog post.</p>
    <div>
      <h4>Summarized transcript</h4>
      <a href="#summarized-transcript">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7Bq6YFjnvg3SB2kKRjfNsr/4be9ff4c51df8eb9b9dbd9eb6f499f13/TLS-1.3.003.png" />
            
            </figure><p>To understand why TLS 1.3 is awesome, we need to take a step back and look at how TLS 1.2 works. In particular we will look at modern TLS 1.2, the kind that a recent browser would use when connecting to the CloudFlare edge.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/8ohjI9yrzu2eEm0U6iOu9/654853ffc8128cef9c4f17ac6dfb28cb/TLS-1.3.004.png" />
            
            </figure><p>The client starts by sending a message called the <code>ClientHello</code> that essentially says "hey, I want to speak TLS 1.2, with one of these cipher suites".</p><p>The server receives that and answers with a <code>ServerHello</code> that says "sure, let's speak TLS 1.2, and I pick <i>this</i> cipher suite".</p><p>Along with that the server sends its <i>key share</i>. The specifics of this key share change based on what cipher suite was selected. When using ECDHE, key shares are mixed with the <a href="/a-relatively-easy-to-understand-primer-on-elliptic-curve-cryptography/">Elliptic Curve Diffie Hellman</a> algorithm.</p><p>The important part to understand is that for the client and server to agree on a cryptographic key, they need to receive each other's portion, or share.</p><p>Finally, the server sends the website certificate (signed by the CA) and a signature on portions of <code>ClientHello</code> and <code>ServerHello</code>, including the key share, so that the client knows that those are authentic.</p><p>The client receives all that, and <i>then</i> generates its own key share, mixes it with the server key share, and thus generates the encryption keys for the session.</p><p>Finally, the client sends the server its key share, enables encryption and sends a <code>Finished</code> message (which is a hash of a transcript of what happened so far). The server does the same: it mixes the key shares to get the key and sends its own <code>Finished</code> message.</p><p>At that point we are done, and we can finally send useful data encrypted on the connection.</p><p>Notice that this takes two round-trips between the client and the server before the HTTP request can be transferred. And <a href="https://www.cloudflare.com/learning/cdn/glossary/round-trip-time-rtt/">round-trips on the Internet</a> can be slow.</p>
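<p>The "mixing" of key shares can be sketched with a toy finite-field Diffie-Hellman exchange. This is purely illustrative: real TLS 1.2 deployments use ECDHE over standardized curves, and the tiny prime below offers no security at all.</p>

```python
# Toy finite-field Diffie-Hellman: each side publishes a "key share"
# and mixes the peer's share with its own ephemeral secret.
# Illustrative only -- the prime is far too small for real use.
import hashlib
import secrets

p = 2**61 - 1  # a Mersenne prime; real groups are 2048+ bits or elliptic curves
g = 2

a = secrets.randbelow(p - 3) + 2        # client's ephemeral secret
b = secrets.randbelow(p - 3) + 2        # server's ephemeral secret
client_share = pow(g, a, p)             # sent to the server
server_share = pow(g, b, p)             # sent to the client

# Each side mixes the peer's share with its own secret...
client_secret = pow(server_share, a, p)
server_secret = pow(client_share, b, p)
assert client_secret == server_secret   # ...and both derive the same value

# The shared value is then run through a KDF to produce session keys
session_key = hashlib.sha256(client_secret.to_bytes(8, "big")).digest()
```

<p>The point to take away is that neither secret ever crosses the wire: only the shares do, and each side needs the other's share before it can compute anything.</p>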
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3d2jAu0W5lMg2PQm8zrdtz/221af7783bc17706e2d3a2587ecf1312/TLS-1.3.006.png" />
            
            </figure><p>Enter TLS 1.3. While TLS 1.0, 1.1 and 1.2 are not that different, 1.3 is a big jump.</p><p>Most importantly, establishing a TLS 1.3 connection takes <b>one less round-trip</b>.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1hr3R0qLxh2XIklK8lNJdh/8b4cbbea91b113ba182294c1bef64695/TLS-1.3.007.png" />
            
            </figure><p>In TLS 1.3, a client starts by sending not only the <code>ClientHello</code> and the list of supported ciphers, but it also makes a guess as to which key agreement algorithm the server will choose, and <b>sends a key share for that</b>.</p><p>(<i>Note: the video calls the key agreement algorithm "cipher suite". In the meantime the specification has been changed to disjoin supported cipher suites like AES-GCM-SHA256 and supported key agreements like ECDHE P-256.</i>)</p><p>And that saves us a round trip, because as soon as the server selects the cipher suite and key agreement algorithm, it's ready to generate the key, as it already has the client key share. So it can switch to encrypted packets one whole round-trip in advance.</p><p>So the server sends the <code>ServerHello</code>, its key share, the certificate (now encrypted, since it has a key!), and already the <code>Finished</code> message.</p><p>The client receives all that, generates the keys using the key share, checks the certificate and <code>Finished</code>, and it's immediately ready to send the HTTP request after only one round-trip, which can be worth hundreds of milliseconds.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5Uvkj3F2TDTPupYTuJOCdE/03d16a27196bb78a5cafebecf12e9b34/TLS-1.3.009.png" />
            
            </figure><p>One existing way to speed up TLS connections is called resumption. It's what happens when the client has connected to that server before, and uses what they remember from the last time to cut short the handshake.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7Kb8uoWeYCfHtOsR4jKTig/4e5ea4f6cdd0fd2498db35e1c51ecdac/TLS-1.3.010.png" />
            
            </figure><p>How this worked in TLS 1.2 is that servers would send the client either a <a href="/tls-session-resumption-full-speed-and-secure/">Session ID or a Session Ticket</a>. The former is just a reference number that the server can trace back to a session, while the latter is an encrypted serialized session which allows the server not to keep state.</p><p>The next time the client would connect, it would send the Session ID or Ticket in the <code>ClientHello</code>, and the server would go like "hey, I know you, we have agreed on a key already", skip the whole key shares dance, and jump straight to <code>Finished</code>, saving a round-trip.</p>
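<p>A stateless Session Ticket can be sketched as session state sealed under a server-held key. The sketch below only authenticates the state with an HMAC for brevity (a real implementation would encrypt it with an AEAD), and the key and field names are made-up placeholders.</p>

```python
# Toy stateless session ticket: the server seals session state under a
# ticket key so it needn't keep per-client state. Real tickets are
# AEAD-encrypted; this sketch only HMACs the state for brevity, and
# TICKET_KEY is a made-up placeholder (rotate real keys often).
import base64
import hashlib
import hmac
import json

TICKET_KEY = b"hypothetical-ticket-key-32bytes!"

def issue_ticket(session_state: dict) -> bytes:
    blob = json.dumps(session_state, sort_keys=True).encode()
    tag = hmac.new(TICKET_KEY, blob, hashlib.sha256).digest()
    return base64.b64encode(tag + blob)

def redeem_ticket(ticket: bytes) -> dict:
    raw = base64.b64decode(ticket)
    tag, blob = raw[:32], raw[32:]
    expected = hmac.new(TICKET_KEY, blob, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("bad ticket")
    return json.loads(blob)

ticket = issue_ticket({"cipher": "AES-128-GCM-SHA256", "psk": "c0ffee"})
assert redeem_ticket(ticket)["psk"] == "c0ffee"
```

<p>Because everything the server needs is inside the ticket, any server holding the ticket key can resume the session, which is exactly why compromise of that key is so serious (more on that below).</p>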
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3T989Gfy9u4y0b5WlC2YyJ/926f71e94f528ae25d275bad8399e0fd/TLS-1.3.011.png" />
            
            </figure><p>So, we have a way to do 1-RTT connections in 1.2 if the client has connected before, which is very common. Then what does 1.3 gain us? When resumption is available, <b>1.3 allows us to do 0-RTT connections</b>, again saving one round trip and ending up with no round trip at all.</p><p>If you have connected to a 1.3 server before you can immediately start sending encrypted data, like an HTTP request, without any round-trip at all, making TLS essentially <b>zero overhead</b>.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4x3MsWrJJB60co3tWBTcfD/cc3976fa1688d6bcb39f4fdae04c06ae/TLS-1.3.012.png" />
            
            </figure><p>When a 1.3 client connects to a 1.3 server they agree on a resumption key (or PSK, pre-shared key), and the server gives the client a Session Ticket that will help it remember it. The Ticket can be an encrypted copy of the PSK—to avoid state—or a reference number.</p><p>The next time the client connects, it sends the Session Ticket in the <code>ClientHello</code> and then immediately, without waiting for any round trip, sends the HTTP request encrypted with the PSK. The server figures out the PSK from the Session Ticket and uses that to decrypt the 0-RTT data.</p><p>The client also sends a key share, so that client and server can switch to a new fresh key for the actual HTTP response and the rest of the connection.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3bxXx5kuJvmtOYmxa1JNcy/1cdbfb3ee46d508d6eabcafd44e043b7/TLS-1.3.013.png" />
            
            </figure><p>0-RTT comes with a couple of caveats.</p><p>Since the PSK is not agreed upon with a fresh round of Diffie Hellman, it does not provide Forward Secrecy against a compromise of the Session Ticket key. That is, if in a year an attacker somehow obtains the Session Ticket key, it can decrypt the Session Ticket, obtain the PSK and decrypt the 0-RTT data the client sent (but not the rest of the connection).</p><p>This is why it's important to rotate often and not persist Session Ticket keys (CloudFlare rotates these keys hourly).</p><p>TLS 1.2 has never provided any Forward Secrecy against a compromise of the Session Ticket key at all, so even with 0-RTT 1.3 is an improvement upon 1.2.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/79qHz08hcnfJAk8ogogTfH/1822898b0fc6f0262a2b9c7ae9df0eba/TLS-1.3.014.png" />
            
            </figure><p>More problematic are replay attacks.</p><p>Since servers are stateless with Session Tickets, they have no way to know if a packet of 0-RTT data was already sent before.</p><p>Imagine that the 0-RTT data a client sent is not an HTTP GET ("hey, send me this page") but instead an HTTP POST executing a transaction like "hey, send Filippo 50$". If I'm in the middle I can intercept that <code>ClientHello</code>+0-RTT packet, and then re-send it to the server 100 times. No need to know any key. I now have 5000$.</p><p>Every time, the server will see a Session Ticket, unwrap it to find the PSK, use the PSK to decrypt the 0-RTT data, and find the HTTP POST inside, with no way to know something is fishy.</p><p>The solution is that servers must not execute non-<i>idempotent</i> operations received in 0-RTT data. Instead, in those cases they should force the client to perform a full 1-RTT handshake. That protects from replay since each <code>ClientHello</code> and <code>ServerHello</code> come with a Random value and connections have sequence numbers, so there's no way to replay recorded traffic verbatim.</p><p>Thankfully, most of the time the first request a client sends is not a state-changing transaction but something idempotent like a GET.</p>
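<p>The server-side rule reduces to a one-line check. This is a sketch: real servers typically also let applications opt individual routes out of early data.</p>

```python
# The 0-RTT safety rule in miniature: only safe, idempotent requests may
# be served from early data; anything else must wait for the full 1-RTT
# handshake. (A sketch -- the method list is the usual "safe" HTTP set.)
IDEMPOTENT_METHODS = {"GET", "HEAD", "OPTIONS"}

def accept_in_early_data(method: str) -> bool:
    return method.upper() in IDEMPOTENT_METHODS

assert accept_in_early_data("GET")       # safe to serve at 0-RTT
assert not accept_in_early_data("POST")  # "send Filippo 50$" must not replay
```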
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2oeeDc9HROItV41GXxqggs/12f35c007af3f031ab681968ff807dad/TLS-1.3.016.png" />
            
            </figure>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4u8bypRISwPgvFeXhAZfuU/bbcdaf44a4b89aa05390b5e85a4a1a5a/TLS-1.3.017.png" />
            
            </figure>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/45Ejce1oMIH6MiWRDtalo3/537497589b0bf99b28ec8b8488c8279e/TLS-1.3.018.png" />
            
            </figure><p>TLS 1.3 is not only good for cutting a round-trip. It's also better, more robust crypto all around.</p><p>Most importantly, many things were removed. 1.3 marked a shift in the design approach: it used to be the case that the TLS committee would accept any proposal that made sense, and implementations like OpenSSL would add support for it. Think, for example, of Heartbeats, the rarely used feature that caused <a href="/the-results-of-the-cloudflare-challenge/">Heartbleed</a>.</p><p>In 1.3, everything was scrutinized for being really necessary and secure, and scrapped otherwise. A lot of things are gone:</p><ul><li><p>the old <a href="/keyless-ssl-the-nitty-gritty-technical-details/">static RSA handshake without Diffie Hellman</a>, which doesn't offer Forward Secrecy</p></li><li><p>the <a href="/padding-oracles-and-the-decline-of-cbc-mode-ciphersuites/">CBC MAC-then-Encrypt modes</a>, which were responsible for Vaudenay, Lucky13, POODLE, <a href="/yet-another-padding-oracle-in-openssl-cbc-ciphersuites/">LuckyMinus20</a>... replaced by <a href="/go-crypto-bridging-the-performance-gap/">AEADs</a></p></li><li><p>weak primitives like <a href="/killing-rc4-the-long-goodbye/">RC4</a>, SHA1, MD5</p></li><li><p>compression</p></li><li><p>renegotiation</p></li><li><p>custom FFDHE groups</p></li><li><p>RSA PKCS#1v1.5</p></li><li><p>explicit nonces</p></li></ul><p>We'll go over these in more detail in future blog posts.</p><p>Some of these were not necessarily broken by design, but they were dangerous, hard to implement correctly and easy to get wrong. The excellent new trend in TLS 1.3, and in cryptography in general, is to make mistakes less likely at the design stage, since humans are not perfect.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/56pPwUsCFfumH2fYMeL8FN/35aa6d17ae21394702ce0122e6bdfb5c/TLS-1.3.019.png" />
            
            </figure><p>A new version of a protocol obviously can't dictate how older implementations behave, and 1.3 can't improve the security of 1.2 systems. So how do you make sure that if tomorrow TLS 1.2 is completely broken, a client and server that both support 1.2 and 1.3 can't be tricked into using 1.2 by a proxy?</p><p>A MitM could change the <code>ClientHello</code> to say "I want to talk at most TLS 1.2", and then use whichever attack it discovered to make the 1.2 connection succeed even if it tampered with a piece of the handshake.</p><p>1.3 has a clever solution to this: if a 1.3 server has to use 1.2 because it looks like the client doesn't support 1.3, it will "hide a message" in the Server Random value. A client that only speaks 1.2 will completely ignore it, but a client that supports 1.3 would know to look for it, and would discover that it's being tricked into downgrading to 1.2.</p><p>The Server Random is signed with the certificate in 1.2, so it's impossible to fake even if pieces of 1.2 are broken. This is very important because it will allow us to keep supporting 1.2 in the future even if it's found to be weaker, unlike what we had to do with <a href="/sslv3-support-disabled-by-default-due-to-vulnerability/">SSLv3 after POODLE</a>. With 1.3 we will know for sure that clients that can do any better are not being put at risk, allowing us to make sure <a href="/ensuring-that-the-web-is-for-everyone/">the Internet is for Everyone</a>.</p>
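<p>The "hidden message" was eventually standardized (in RFC 8446, section 4.1.3) as a fixed marker in the last eight bytes of the 32-byte Server Random. A 1.3-aware client can check for it along these lines (a sketch of the check, not a full handshake implementation):</p>

```python
# Downgrade sentinel per RFC 8446 §4.1.3: a TLS 1.3-capable server that
# is forced down to TLS 1.2 overwrites the last eight bytes of its
# ServerHello.random with a fixed marker that 1.3 clients look for.
DOWNGRADE_TLS12 = bytes.fromhex("444f574e47524401")  # b"DOWNGRD\x01"

def downgrade_detected(server_random: bytes) -> bool:
    assert len(server_random) == 32
    return server_random[-8:] == DOWNGRADE_TLS12

# A server random without the marker raises no alarm (zeros here just
# stand in for deterministic "no marker" bytes)...
assert not downgrade_detected(bytes(32))
# ...while a 1.3 server pushed down to 1.2 plants it for clients to find.
assert downgrade_detected(bytes(24) + DOWNGRADE_TLS12)
```

<p>Because the Server Random is covered by the server's signature, a MitM can't strip the marker without breaking the 1.2 handshake itself.</p>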
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2Q7LYbjQBvS2WP0PUvkmdk/6fb34646631b248c2d3750b4b2c198c5/TLS-1.3.020.png" />
            
            </figure><p>So this is TLS 1.3. Meant to be a solid, safe, robust, simple, essential foundation for Internet encryption for the years to come. And it's faster, so that no one will have performance reasons not to implement it.</p><p>TLS 1.3 is still a draft and it might change before being finalized, but at CloudFlare we are actively developing a 1.3 stack compatible with current experimental browsers, so <a href="/introducing-tls-1-3/">everyone can get it today</a>.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3xQPlSJ4uBQaS60OpFogxD/7967fae7cd0f8766658a02c0534b95c5/TLS-1.3.023.png" />
            
            </figure><p>The TLS 1.3 spec is <a href="https://github.com/tlswg/tls13-spec">on GitHub</a>, so anyone can contribute. Just while making the slides for this presentation I noticed I was having a hard time understanding a system because a diagram was missing some details, so I submitted a PR to fix it. How easy is that!?</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3Ecgvi8YtzFKhcoQjoPLzU/cb6a281fbd8246c453bc145ca36a7b56/TLS-1.3.026.png" />
            
            </figure><p>Like any talk, at the end there's the Q&amp;A. Send your questions to <a>tls13@cloudflare.com</a> or leave them in the Disqus comments below and I'll answer them in an upcoming blog post!</p> ]]></content:encoded>
            <category><![CDATA[TLS]]></category>
            <category><![CDATA[TLS 1.3]]></category>
            <category><![CDATA[Events]]></category>
            <category><![CDATA[United Kingdom]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[Cryptography]]></category>
            <guid isPermaLink="false">6CKPHn0MEFqMdmC3vvaDvo</guid>
            <dc:creator>Filippo Valsorda</dc:creator>
        </item>
        <item>
            <title><![CDATA[Opportunistic Encryption: Bringing HTTP/2 to the unencrypted web]]></title>
            <link>https://blog.cloudflare.com/opportunistic-encryption-bringing-http-2-to-the-unencrypted-web/</link>
            <pubDate>Wed, 21 Sep 2016 15:51:28 GMT</pubDate>
            <description><![CDATA[ Encrypting the web is not an easy task. Various complexities prevent websites from migrating from HTTP to HTTPS, including mixed content, which can prevent sites from functioning with HTTPS.  ]]></description>
            <content:encoded><![CDATA[ <p>Encrypting the web is not an easy task. Various complexities prevent websites from migrating from HTTP to HTTPS, including mixed content, which can prevent sites from functioning with HTTPS.</p><p>Opportunistic Encryption provides websites that have not yet moved to HTTPS with an additional level of security and the performance benefits of HTTP/2. Users will not see a security indicator for HTTPS in the address bar when visiting a site using Opportunistic Encryption, but the connection from the browser to the server is encrypted.</p><p>In December 2015, CloudFlare introduced <a href="https://www.cloudflare.com/http2/">HTTP/2</a>, the latest version of HTTP, which can result in improved performance for websites. HTTP/2 can’t be used without encryption, and before now, that meant HTTPS. Opportunistic Encryption, based on an <a href="http://httpwg.org/http-extensions/draft-ietf-httpbis-http2-encryption.html">IETF draft</a>, enables servers to accept HTTP requests over an encrypted connection, allowing HTTP/2 connections for non-HTTPS sites. This is a first.</p><p>Combined with <a href="/introducing-tls-1-3/">TLS 1.3</a> and <a href="/announcing-support-for-http-2-server-push-2/">HTTP/2 Server Push</a>, Opportunistic Encryption can result in significant performance gains, while also providing security benefits.</p><p>Opportunistic Encryption is now available to all CloudFlare customers, enabled by default for Free and Pro plans. The option is available in the Crypto tab of the CloudFlare dashboard:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/53Z7Qo7rd4AQlJC0kikjXS/6323937c3bdbbe82b69abf176dc467fb/Screen-Shot-2016-09-21-at-16.19.03.png" />
            
            </figure>
    <div>
      <h3>How it works</h3>
      <a href="#how-it-works">
        
      </a>
    </div>
    <p>Opportunistic Encryption uses <a href="http://httpwg.org/http-extensions/alt-svc.html">HTTP Alternative Services</a>, a mechanism that allows servers to tell clients that the service they are accessing is available at another network location or over another protocol. When a supporting browser makes a request to a CloudFlare site with Opportunistic Encryption enabled, CloudFlare adds an Alternative Service header to indicate that the site is available over HTTP/2 (or SPDY) on port 443.</p><p>For customers with HTTP/2 enabled:</p>
            <pre><code>Alt-Svc: h2=":443"; ma=60</code></pre>
            <p>For customers with HTTP/2 disabled:</p>
            <pre><code>Alt-Svc: spdy/3.1=":443"; ma=60</code></pre>
            <p>This header simply states that the domain can be authoritatively accessed using HTTP/2 (“h2”) or SPDY 3.1 (“spdy/3.1”) at the same network address, over port 443 (the standard HTTPS port). The field “ma” (max-age) indicates how long in seconds the client should remember the existence of the alternative service.</p>
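<p>Parsing the header is straightforward. The sketch below handles only the simple single-service form shown above; a production parser should follow the full grammar in RFC 7838.</p>

```python
# Minimal Alt-Svc parser for the single-service form shown above.
# A sketch only -- RFC 7838 allows multiple comma-separated services,
# quoted parameters, and "clear" values that this does not handle.
def parse_alt_svc(value: str) -> dict:
    parts = [p.strip() for p in value.split(";")]
    proto, authority = parts[0].split("=", 1)
    entry = {"protocol": proto, "authority": authority.strip('"')}
    for param in parts[1:]:
        k, v = param.split("=", 1)
        entry[k] = v
    return entry

svc = parse_alt_svc('h2=":443"; ma=60')
assert svc == {"protocol": "h2", "authority": ":443", "ma": "60"}
```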
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2QNLkeEtnUSqIk1sg0yDZn/3eb1c11409ba9f950127ffaba673447e/2829700354_4cb63ac45e_b.jpg" />
            
            </figure><p><a href="https://creativecommons.org/licenses/by-sa/2.0/">CC BY-SA 2.0</a> <a href="https://www.flickr.com/photos/jabberwocky381/2829700354/in/photolist-5j3Xk9-9ozwgL-CyXzMW-b9NypP-af3UAc-c5VG2j-7tCoKg-8iRnKw-4F2xWk-4F6MDS-g5xKwj-6t8Gpg-e7gmMh-ahbP86-aheB15-aheAzE-8ucqdq-aheBcf-DETnGc-8ogV2T-9PNooC-9PKwkx-9PNots-2hpkp-F3qYFL-xdQqLr-9PKwe2-9PKwhT-Fe87Tv-Fe88gK-GnUy7Z-HigQiP-H9VVcS-HigRaP-H9VUFw-HigQPZ-H9VVdy-H9VUA1-H9VV9W-HigPYR-H9VULm-HigRat-HigQUt-H9VVRC-HigQXV-HigQxg-H9VV6u-H9VUuQ-frEFs1-HfkgMN">image</a> by <a href="https://www.flickr.com/photos/jabberwocky381/">Evan Jackson</a></p><p>When Firefox (or any other browser that supports Opportunistic Encryption) receives an “h2” Alt-Svc header, it knows that the site is available using HTTP/2 over TLS on port 443. For any subsequent HTTP requests to that domain, the browser will connect using TLS on port 443, where the server will present a certificate for the domain signed by a trusted certificate authority. The browser will then validate the certificate. If the connection succeeds, the browser will send the requests over that connection using HTTP/2.</p><p>Opportunistic requests will contain “http” in the <a href="https://http2.github.io/http2-spec/#HttpRequest"><code>:scheme</code> pseudo-header</a> instead of “https”. From a bit-on-the-wire perspective, this pseudo-header is the only difference between HTTP requests with Opportunistic Encryption over TLS and HTTPS. However, there is a big difference between how browsers treat assets fetched using HTTP vs. HTTPS URLs (as discussed below).</p><p>HTTP Alternative Services is a relatively new but widely used mechanism. For example, Alt-Svc is used by Google to advertise support for their experimental transport protocol, <a href="https://www.chromium.org/quic">QUIC</a>, to browsers in much the same way as we use it to advertise support for Opportunistic Encryption.</p>
    <div>
      <h2>Why not just use HTTPS?</h2>
      <a href="#why-not-just-use-https">
        
      </a>
    </div>
    <p>CloudFlare enables HTTPS by default for customers on <a href="https://www.cloudflare.com/plans/">all plans</a> using Universal SSL. However, some sites choose to continue to allow access to their sites via unencrypted HTTP. The main reason for this is mixed content. If a site contains references to HTTP images, or makes requests to HTTP URLs via embedded scripts, browsers will present warnings or even block requests outright, often breaking the functionality of the site.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/53q02AwHvDVUSNdgdmG2eX/2927aaaf2f4df9cebf7a6fcf4549c837/16467022408_789bc66bfb_z.jpg" />
            
            </figure><p><a href="https://creativecommons.org/licenses/by/2.0/">CC BY 2.0</a> <a href="https://www.flickr.com/photos/blondinrikard/16467022408/in/photolist-r68N4s-wdqoLL-wL6trG-vTynKb-vTyjDm-wbh17E-wawCrA-vw2iz8-wbofNT-w8RLP5-wboR8p-wscuys-wbgUu9-wPHQUT-wtmaGV-wsdj2U-wbo7Fv-wbgW9m-wtm9RM-wbgfb3-wqyuDm-wboXM6-wbp7jX-wsTgrP-wbALUc-wujVgK-vea2uj-wscmum-veiEag-vea12j-veiFGV-wbgLWS-wbaXWX-veiH66-wqznYf-vea193-vTyvb1-wucvcL-AKwG91-wbaZmk-vTFwsg-vTynJC-wucAZE-wawzEJ-AKCQWi-wawoMm-wbaWhV-wbaVVH-wbb8hv-wujW96">image</a> by <a href="https://www.flickr.com/photos/blondinrikard/">Blondinrikard Fröberg</a></p><p>Making sure a site can be fully migrated to HTTPS can be a manual and time-consuming process. It can require someone to manually inspect every page on a site or set up a <a href="https://content-security-policy.com/">Content Security Policy (CSP)</a> reporting infrastructure, a complex task. Even after all this work, fixing mixed content issues may require changes in middleware or content management software that can’t be easily updated. Later this week, we’ll introduce Automatic HTTPS Rewrites, which helps fix mixed content for many sites, but not all. Some mixed content can’t be fixed because the included third-party resources (such as ads) are not available over HTTPS. Websites that can’t update fully to HTTPS will benefit most from Opportunistic Encryption.</p><p>With Opportunistic Encryption, supporting browsers can choose to access an HTTP site using HTTP/2 over an encrypted connection instead of HTTP/1.1 over plaintext (the default).</p>
    <div>
      <h2>Security Benefits</h2>
      <a href="#security-benefits">
        
      </a>
    </div>
    <p>It’s no secret that network operators have access to the data that travels through their equipment. This access can be used to modify data: ISPs have been caught injecting unwanted data (<a href="http://arstechnica.com/tech-policy/2013/04/how-a-banner-ad-for-hs-ok/">such as advertisements</a> and <a href="http://www.theverge.com/2016/3/7/11173010/verizon-supercookie-fine-1-3-million-fcc">tracking cookies</a>) into unencrypted requests. Countries <a href="https://en.wikipedia.org/wiki/Internet_censorship_in_India">routinely filter content</a> by inspecting HTTP headers in unencrypted traffic, and <a href="https://citizenlab.org/2015/04/chinas-great-cannon/">China’s Great Cannon</a> injected malicious code into unencrypted websites. Access to data in transit can also be used to perform dragnet surveillance, where vast swaths of data are collected and then <a href="https://www.theguardian.com/commentisfree/2013/jul/15/crux-nsa-collect-it-all">shipped to a central location for analysis</a>.</p><p>Opportunistic Encryption does not fully protect against attackers who can simply remove the header that signals support for Opportunistic Encryption to the browser. However, once an opportunistically encrypted connection is established, all requests sent over the connection are encrypted and cannot be read (or modified) by prying eyes.</p>
    <div>
      <h2>Terminology is hard</h2>
      <a href="#terminology-is-hard">
        
      </a>
    </div>
    <p>Tim Berners-Lee initiated the development of HTTP in the late 1980s to facilitate the transfer of documents from servers to clients. Both websites and browsers were rudimentary compared to today’s web ecosystem. The concept of web security was practically non-existent.</p><p>From <a href="https://www.w3.org/History/1989/proposal.html">the original 1989 paper</a>:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2HwLp3ipQLBPQrABrqkM2/35384da5c3bdc5791949e85c1e5cb50c/Screen-Shot-2016-09-21-at-16.47.03.png" />
            
            </figure><p>As the use cases of the web expanded to include sensitive data transactions, some security was needed. Multiple encryption schemes were developed to help secure HTTP, including <a href="http://www.homeport.org/~adam/shttp.html">S-HTTP</a> and the eventual winner, HTTPS.</p><p>Originally, the difference between HTTP and HTTPS was one of layering. In HTTP, messages were written to the network directly, and in HTTPS, a secure connection was established between the client and server using SSL (an encryption and authentication protocol), and standard HTTP messages were written to the encrypted connection. Browsers signaled HTTP websites with an open lock icon, whereas HTTPS websites received a closed lock. Later, SSL evolved into TLS, although people sometimes <a href="https://www.google.com/trends/explore?q=ssl,tls,https">still refer to it as SSL</a>.</p><p>As websites became much more complex, embedded scripts and dynamic content became commonplace. Serving insecure content on a secure web page was identified as a risk and HTTPS started to take on a more nuanced meaning. Rather than just HTTP on an encrypted connection, HTTPS meant secure HTTP. For example, cookies became a popular way to store state in the client for managing web sessions. Cookies obtained over a secure connection were not allowed to be sent over insecure HTTP or modified by data obtained over HTTP. New privacy-sensitive features now require HTTPS (such as the <a href="https://developers.google.com/web/updates/2016/04/geolocation-on-secure-contexts-only">Location API</a>). 
This distinction between HTTP and HTTPS has been further codified by the W3C, the standards body in charge of the web, in their <a href="https://w3c.github.io/webappsec-secure-contexts/">Secure Contexts</a> document.</p><p>To break it down:</p><ul><li><p>HTTP is a protocol for transferring hypertext</p></li><li><p>TLS/SSL is a protocol for encrypted communication</p></li><li><p>HTTPS is a protocol for transferring secure hypertext</p></li></ul><table><tr><td><p></p></td><td><p><b>HTTP</b></p></td><td><p><b>HTTPS</b></p></td></tr><tr><td><p>Unencrypted</p></td><td><p><b>✔️</b></p></td><td><p>❌</p></td></tr><tr><td><p>Encrypted with TLS</p></td><td><p>✔️</p></td><td><p>✔️</p></td></tr></table><p>Opportunistic Encryption is the bottom left ✔️. While only HTTPS sites are treated as secure by the browser (as indicated by a green lock security indicator), encrypted HTTP is preferable to unencrypted HTTP.</p>
    <div>
      <h2>Browser support</h2>
      <a href="#browser-support">
        
      </a>
    </div>
    <p>All versions of Firefox since Firefox 38 (May 2015) have supported Opportunistic Encryption in its original form (without certificate validation). Firefox has recently added support for certificate validation in Firefox Nightly and will support it in an upcoming official release. We believe that Opportunistic Encryption is a meaningful advance in web security. We hope that other browsers follow Firefox’s lead and enable Opportunistic Encryption.</p><p>To be clear, Opportunistic Encryption is not a replacement for HTTPS. HTTPS should always be used when both strong encryption and authentication are required. For sites that don’t have the resources to fully move to HTTPS, Opportunistic Encryption can help, providing both added security and performance. Every bit counts.</p> ]]></content:encoded>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[HTTP2]]></category>
            <category><![CDATA[HTTPS]]></category>
            <category><![CDATA[TLS]]></category>
            <category><![CDATA[TLS 1.3]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[Crypto Week]]></category>
            <guid isPermaLink="false">4H5mFEHNBeu1B8biX0q6UO</guid>
            <dc:creator>Nick Sullivan</dc:creator>
        </item>
        <item>
            <title><![CDATA[Introducing TLS 1.3]]></title>
            <link>https://blog.cloudflare.com/introducing-tls-1-3/</link>
            <pubDate>Tue, 20 Sep 2016 13:04:52 GMT</pubDate>
            <description><![CDATA[ The encrypted Internet is about to become a whole lot snappier. When it comes to browsing, we’ve been driving around in a beat-up car from the 90s for a while. Little does anyone know, we’re all about to trade in our station wagons for a smoking new sports car.  ]]></description>
            <content:encoded><![CDATA[ <p><b><i>CloudFlare is turbocharging the encrypted internet</i></b></p><p>The encrypted Internet is about to become a whole lot snappier. When it comes to browsing, we’ve been driving around in a beat-up car from the 90s for a while. Little does anyone know, we’re all about to trade in our station wagons for a smoking new sports car. The reason for this speed boost is TLS 1.3, a new encryption protocol that improves both speed and security for Internet users everywhere. As of today, TLS 1.3 is available to all CloudFlare customers.</p>
    <div>
      <h3>The Encrypted Internet</h3>
      <a href="#the-encrypted-internet">
        
      </a>
    </div>
    <p>Many of the major web properties you visit are encrypted, which is indicated by the padlock icon and the presence of “https” instead of “http” in the address bar. The “s” stands for secure. When you connect to an HTTPS site, the communication between you and the website is encrypted, which makes browsing the web dramatically more secure, protecting your communication from prying eyes and the <a href="/an-introduction-to-javascript-based-ddos/">injection of malicious code</a>. HTTPS is not only used by websites; it also secures the majority of APIs and mobile application backends.</p><p>The underlying technology that enables secure communication on the Internet is a protocol called Transport Layer Security (TLS). TLS is an evolution of Secure Sockets Layer (SSL), a protocol developed by Netscape in the 1990s. The Internet Engineering Task Force (IETF), a standards body, has been in charge of defining the protocol, which has gone through several iterations. The latest version, TLS 1.2, was standardized in 2008 and is currently supported by the majority of browsers and HTTPS-enabled web services.</p><p>TLS 1.2 can be secure when configured correctly, but its age has begun to show. In the last few years, several high-profile attacks have been published that revealed flaws in the protocol. Eight years is a long time in computer security, so the IETF has been working on a new version of the protocol, TLS 1.3, which should be finalized by the end of 2016.</p><p>TLS 1.3 is a major overhaul and has two main advantages over previous versions:</p><ul><li><p>Enhanced security</p></li><li><p>Improved speed</p></li></ul>
    <div>
      <h3>Enhanced Security</h3>
      <a href="#enhanced-security">
        
      </a>
    </div>
    <p>Most of the attacks on TLS from the last few years targeted vestigial pieces of the protocol left around from the 90s. TLS 1.2 is highly configurable, and vulnerable sites simply failed to disable the older features in hopes of being compatible with old browsers.</p><p>TLS 1.3 embraces the “less is more” philosophy, removing support for older broken forms of cryptography. That means you can’t turn on the potentially vulnerable stuff, even if you try. The list of TLS 1.2 features that have been removed is extensive, and most of the exiled features have been associated with high-profile attacks. These include:</p><ul><li><p>RSA key transport — <a href="/staying-on-top-of-tls-attacks/">Doesn’t provide forward secrecy</a></p></li><li><p>CBC mode ciphers — Responsible for <a href="/taming-beast-better-ssl-now-available-across/">BEAST</a> and <a href="https://en.wikipedia.org/wiki/Lucky_Thirteen_attack">Lucky 13</a></p></li><li><p>RC4 stream cipher — <a href="/killing-rc4-the-long-goodbye/">Not secure for use in HTTPS</a></p></li><li><p>SHA-1 hash function — <a href="/why-its-harder-to-forge-a-sha-1-certificate-than-it-is-to-find-a-sha-1-collision/">Deprecated in favor of SHA-2</a></p></li><li><p>Arbitrary Diffie-Hellman groups — <a href="http://blog.intothesymmetry.com/2016/01/openssl-key-recovery-attack-on-dh-small.html">CVE-2016-0701</a></p></li><li><p>Export ciphers — Responsible for <a href="https://freakattack.com/">FREAK</a> and <a href="/logjam-the-latest-tls-vulnerability-explained/">LogJam</a></p></li></ul>
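<p>One practical consequence for developers: because none of these legacy mechanisms exist in TLS 1.3 at all, requiring it rules them out in one stroke. Here is a minimal sketch using Python's standard ssl module (an illustration, not CloudFlare's configuration; it assumes Python 3.7+ built against a TLS 1.3-capable OpenSSL):</p>

```python
import ssl

# Requiring TLS 1.3 as the minimum version implicitly excludes RSA key
# transport, CBC-mode ciphers, RC4, SHA-1-based suites, weak DH groups,
# and export ciphers: none of them are part of TLS 1.3.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_3

print(ctx.minimum_version == ssl.TLSVersion.TLSv1_3)  # True
```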
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/49hiXpSgZTn0owXevua6P6/00129c545ce0979a6bab5b34a8da5d32/image02.jpg" />
            
            </figure><p>CC 2.0 Generic <a href="https://www.flickr.com/photos/aquamech-utah/24445365953">Aqua Mechanical</a></p><p>TLS 1.3 removes the “bad crypto smell” of these legacy features, making it less likely that attacks on previous versions of the protocol will affect TLS 1.3. This streamlining also makes TLS 1.3 much simpler to configure for server operators. A secondary side effect of the update is that the protocol can be made much faster, resulting in a better web browsing experience.</p>
    <div>
      <h3>Improved Speed</h3>
      <a href="#improved-speed">
        
      </a>
    </div>
    <p>Fast page load times are critical to the success of web services. Amazon famously <a href="https://www.gigaspaces.com/blog/amazon-found-every-100ms-of-latency-cost-them-1-in-sales/">found that every additional 100ms of page load time decreases sales by 1%</a>. A major component of page load time is latency: the time it takes to send data between the browser and the web server.</p><p>The impact of latency is especially noticeable for:</p><ul><li><p>users on mobile devices</p></li><li><p>users that are geographically far away from the server</p></li></ul><p>A message from Sydney to New York and back can take over 200ms, enough for <a href="https://hpbn.co/primer-on-web-performance/#speed-performance-and-human-perception">humans to notice</a>. Mobile browsing can also increase latency for a connection. Sending a message over a modern 4G mobile network routinely adds over 100ms of latency to requests. 3G networks, which are still common in Europe, <a href="https://opensignal.com/blog/2014/03/10/lte-latency-how-does-it-compare-to-other-technologies/">add over 200ms of additional latency</a>. Even home WiFi connections and ISPs add dozens of milliseconds to requests. This additional latency can make mobile browsing feel slow. Unfortunately, encryption can make these slow connections seem slower. TLS 1.3 helps improve this situation.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2vhUThCtQG95GVCJoweEzN/351c7a3163f0404416606e98d24e6301/handshake-1443446541etN.jpg" />
            
            </figure><p>To send a message to an encrypted site, you must first establish shared cryptographic keys. This process is called a cryptographic handshake. It requires special messages to be sent back and forth between the browser and the website. The TLS handshake happens behind the scenes whenever you connect to an encrypted site with your browser.</p><p>With TLS 1.2, two <a href="https://www.cloudflare.com/learning/cdn/glossary/round-trip-time-rtt/">round-trips</a> are needed to complete the handshake before the request can be sent. Accessing a site over a mobile network can add more than half a second to its load time. With TLS 1.3, the initial handshake is cut in half, requiring only one round-trip. That’s like going from a fast station wagon (0-60mph in 10 seconds) to a Tesla Model S (5 seconds).</p>
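<p>The arithmetic behind that claim is simple enough to sketch. The round-trip time below is illustrative, borrowed from the 3G figure quoted earlier:</p>

```python
# Back-of-the-envelope cost of the TLS handshake before the first
# request can be sent. The 200 ms RTT is an illustrative 3G figure.
def handshake_ms(rtt_ms: float, round_trips: int) -> float:
    """Latency spent on TLS handshake round trips."""
    return rtt_ms * round_trips

rtt = 200.0                    # one round trip on a slow mobile link
tls12 = handshake_ms(rtt, 2)   # TLS 1.2: two round trips -> 400.0 ms
tls13 = handshake_ms(rtt, 1)   # TLS 1.3: one round trip -> 200.0 ms
print(tls12 - tls13)           # 200.0 ms saved per fresh connection
```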
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/9Dw3e5yPlJrQ54CaR7UE1/842b2e25dee71ce2d6fd49eac1fc7e59/image01.jpg" />
            
            </figure><p>If the round-trip time for a connection is around 100ms, the speed boost from TLS 1.3 is enough to take sites that seem <a href="https://hpbn.co/primer-on-latency-and-bandwidth/#speed-of-light-and-propagation-latency">“sluggish”</a> (over 300ms), and turn them into sites that load comfortably fast (under 300ms).</p><p>The more efficient handshake is only possible because some of the legacy features present in TLS 1.2 were removed from the protocol. TLS 1.3 has the additional advantage that for sites you’ve visited recently, you can send data on the first message to the server. This is called “zero round trip” mode (0-RTT) and will result in even faster load times. CloudFlare is planning to support TLS 1.3 0-RTT in the coming weeks.</p>
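<p>Extending the same back-of-the-envelope model, 0-RTT removes the remaining handshake round trip on resumed connections (numbers are illustrative):</p>

```python
# Illustrative only: handshake round trips before the first request,
# at an assumed 100 ms round-trip time.
rtt = 100.0
fresh_tls13 = 1 * rtt   # fresh TLS 1.3 connection: one round trip
zero_rtt = 0 * rtt      # 0-RTT resumption: request rides the first flight
print(fresh_tls13 - zero_rtt)  # 100.0 ms shaved off repeat visits
```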
    <div>
      <h3>Live for all</h3>
      <a href="#live-for-all">
        
      </a>
    </div>
    <p>TLS 1.3 is a huge step forward for web security and performance. It’s available to all CloudFlare customers, and enabled by default for all Free and Pro customers. You will find the toggle to enable/disable TLS 1.3 in the Crypto tab of the CloudFlare dashboard.</p>
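<p>If you'd rather verify support from code than from the dashboard, Python's standard ssl module can report whether your local OpenSSL build speaks TLS 1.3, and which version a live connection actually negotiates. A hedged sketch (Python 3.7+; the function needs network access and any HTTPS hostname):</p>

```python
import socket
import ssl

# Does the local OpenSSL build support TLS 1.3 at all?
print(ssl.HAS_TLSv1_3)

# Which TLS version does a given site actually negotiate? Substitute
# your own hostname; this opens a real connection on port 443.
def negotiated_version(host: str, port: int = 443) -> str:
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.version()  # e.g. "TLSv1.3"
```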
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2gBklXLwNX8MsBrkyjoqup/96d12ceb76d73c581d91f75453f20d29/image00.png" />
            
            </figure><p>The TLS 1.3 specification is still being polished, but the IETF is very close to settling on a final version of the protocol. The major browsers Firefox and Chrome have implemented preliminary versions of TLS 1.3 in their developer releases (<a href="https://nightly.mozilla.org/">Firefox Nightly</a> and <a href="https://www.google.com/chrome/browser/canary.html">Chrome Canary</a>), and we’re updating our implementation to match the current version in these browsers until the specification is final. Expect a faster, safer web browsing experience as more browsers enable TLS 1.3 by default in the coming months.</p><hr /><p><b>How to enable TLS 1.3 in your browser</b></p><p>Firefox Nightly</p><ul><li><p>Install and run Firefox Nightly: <a href="https://nightly.mozilla.org/">https://nightly.mozilla.org/</a></p></li><li><p>Enter "about:config" in the address bar</p></li><li><p>Set security.tls.version.max from 3 to 4</p></li><li><p>Restart the browser</p></li></ul><p>Chrome Canary</p><ul><li><p>Install and run Chrome Canary: <a href="https://www.google.com/chrome/browser/canary.html">https://www.google.com/chrome/browser/canary.html</a></p></li><li><p>Enter "chrome://flags/" in the address bar</p></li><li><p>Go to "Maximum TLS version enabled" and select "TLS 1.3"</p></li><li><p>Restart the browser</p></li></ul> ]]></content:encoded>
            <category><![CDATA[TLS 1.3]]></category>
            <category><![CDATA[TLS]]></category>
            <category><![CDATA[HTTPS]]></category>
            <category><![CDATA[RSA]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[Chrome]]></category>
            <category><![CDATA[Crypto Week]]></category>
            <guid isPermaLink="false">1J6z7Vg0DdDXcGHtmBJuNz</guid>
            <dc:creator>Nick Sullivan</dc:creator>
        </item>
    </channel>
</rss>