
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/">
    <channel>
        <title><![CDATA[ The Cloudflare Blog ]]></title>
        <description><![CDATA[ Get the latest news on how products at Cloudflare are built, technologies used, and join the teams helping to build a better Internet. ]]></description>
        <link>https://blog.cloudflare.com</link>
        <atom:link href="https://blog.cloudflare.com/" rel="self" type="application/rss+xml"/>
        <language>en-us</language>
        <image>
            <url>https://blog.cloudflare.com/favicon.png</url>
            <title>The Cloudflare Blog</title>
            <link>https://blog.cloudflare.com</link>
        </image>
        <lastBuildDate>Sun, 05 Apr 2026 18:17:36 GMT</lastBuildDate>
        <item>
            <title><![CDATA[HTTPS-only for Cloudflare APIs: shutting the door on cleartext traffic]]></title>
            <link>https://blog.cloudflare.com/https-only-for-cloudflare-apis-shutting-the-door-on-cleartext-traffic/</link>
            <pubDate>Thu, 20 Mar 2025 13:00:00 GMT</pubDate>
            <description><![CDATA[ We are closing the cleartext HTTP ports entirely for Cloudflare API traffic. This prevents the risk of clients unintentionally leaking their secret API keys in cleartext during the initial request.  ]]></description>
            <content:encoded><![CDATA[ <p>Connections made over cleartext HTTP ports risk exposing sensitive information because the data is transmitted unencrypted and can be intercepted by network intermediaries, such as ISPs, Wi-Fi hotspot providers, or malicious actors on the same network. It’s common for servers to either <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Redirections"><u>redirect</u></a> or return a <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/403"><u>403 (Forbidden)</u></a> response to close the HTTP connection and enforce the use of HTTPS by clients. However, by the time this occurs, it may be too late, because sensitive information, such as an API token, may have already been <a href="https://jviide.iki.fi/http-redirects"><u>transmitted in cleartext</u></a> in the initial client request. This data is exposed before the server has a chance to redirect the client or reject the connection.</p><p>A better approach is to refuse the underlying cleartext connection by closing the <a href="https://developers.cloudflare.com/fundamentals/reference/network-ports/"><u>network ports</u></a> used for plaintext HTTP, and that’s exactly what we’re going to do for our customers.</p><p><b>Today we’re announcing that we’re closing all of the </b><a href="https://developers.cloudflare.com/fundamentals/reference/network-ports/#network-ports-compatible-with-cloudflares-proxy:~:text=HTTP%20ports%20supported%20by%20Cloudflare"><b><u>HTTP ports</u></b></a><b> on api.cloudflare.com.</b> We’re also making changes so that api.cloudflare.com can change IP addresses dynamically, in line with ongoing efforts to <a href="https://blog.cloudflare.com/addressing-agility/"><u>decouple names from IP addresses</u></a> and reliably <a href="https://blog.cloudflare.com/topaz-policy-engine-design/"><u>manage</u></a> addresses in our authoritative DNS. This will enhance the agility and flexibility of our API endpoint management. Customers relying on static IP addresses for our API endpoints will be notified in advance to prevent any potential availability issues.</p><p>In addition to taking this first step to secure Cloudflare API traffic, we’ll release the ability for customers to opt in to safely disabling all HTTP port traffic for their websites on Cloudflare. We expect to make this free security feature available in the last quarter of 2025.</p><p>We have <a href="https://blog.cloudflare.com/introducing-universal-ssl/"><u>consistently</u></a> <a href="https://blog.cloudflare.com/enforce-web-policy-with-hypertext-strict-transport-security-hsts/"><u>advocated</u></a> for <a href="https://blog.cloudflare.com/post-quantum-for-all/"><u>strong encryption standards</u></a> to safeguard users’ data and privacy online. As part of our ongoing commitment to enhancing Internet security, this blog post details our efforts to <i>enforce</i> HTTPS-only connections across our global network.</p>
    <div>
      <h3>Understanding the problem</h3>
      <a href="#understanding-the-problem">
        
      </a>
    </div>
    <p>We already provide an “<a href="https://developers.cloudflare.com/ssl/edge-certificates/additional-options/always-use-https/"><u>Always Use HTTPS</u></a>” setting that can be used to redirect all visitor traffic on our customers’ domains (and subdomains) from HTTP (plaintext) to HTTPS (encrypted). For instance, when a user clicks on an HTTP version of the URL on the site (http://www.example.com), we issue an HTTP 3XX redirection status code to immediately redirect the request to the corresponding HTTPS version (https://www.example.com) of the page. While this works well for most scenarios, there’s a subtle but important risk factor: What happens if the initial plaintext HTTP request (before the redirection) contains sensitive user information?</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2mXYZL0JRZOb8J6Tqm4mCj/1b24b76335ad9cf3f3b630ef31868f6c/1.png" />
          </figure><p><sup><i>Initial plaintext HTTP request is exposed to the network before the server can redirect to the secure HTTPS connection.</i></sup></p><p>Third parties or intermediaries on shared networks could intercept sensitive data from the first plaintext HTTP request, or even carry out a <a href="https://blog.cloudflare.com/monsters-in-the-middleboxes/"><u>Monster-in-the-Middle (MITM)</u></a> attack by impersonating the web server.</p><p>One may ask if <a href="https://developers.cloudflare.com/ssl/edge-certificates/additional-options/http-strict-transport-security/"><u>HTTP Strict Transport Security (HSTS)</u></a> would partially alleviate this concern by ensuring that, after the first request, visitors can only access the website over HTTPS without needing a redirect. While this does reduce the window of opportunity for an adversary, the first request still remains exposed. Additionally, HSTS is not applicable by default for most non-user-facing use cases, such as API traffic from stateless clients. Many API clients don’t retain browser-like state or remember HSTS headers they've encountered. It is quite <a href="https://jviide.iki.fi/http-redirects"><u>common practice</u></a> for API calls to be redirected from HTTP to HTTPS, and hence have their initial request exposed to the network.</p><p>Therefore, in line with our <a href="https://blog.cloudflare.com/dogfooding-from-home/"><u>culture of dogfooding</u></a>, we evaluated the accessibility of the Cloudflare API (<a href="https://api.cloudflare.com"><u>api.cloudflare.com</u></a>) over <a href="https://developers.cloudflare.com/fundamentals/reference/network-ports/#:~:text=ports%20listed%20below.-,HTTP,-ports%20supported%20by"><u>HTTP ports (80, and others)</u></a>. In that regard, imagine a client making an initial request to our API endpoint that includes their <i>secret API key</i>. 
While we outright reject all plaintext connections with a 403 Forbidden response instead of redirecting for API traffic — clearly indicating that “<i>Cloudflare API is only accessible over TLS”</i> — this rejection still happens at the <a href="https://www.cloudflare.com/learning/ddos/what-is-layer-7/">application layer</a>. By that point, the API key may have already been exposed over the network before we can even reject the request. We do have a notification mechanism in place to alert customers and rotate their API keys accordingly, but a stronger approach would be to eliminate the exposure entirely. We have an opportunity to improve!</p>
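<p>To make the exposure concrete, here is a minimal, self-contained sketch (a hypothetical toy server and client, not Cloudflare’s stack): the server has already received the full cleartext request, secret included, before it can respond at all.</p>

```python
# Toy demonstration (hypothetical server and client, not Cloudflare's stack):
# the server receives the full cleartext request, secret included, before it
# can issue the HTTPS redirect.
import socket
import threading

captured = {}

def toy_redirect_server(listener):
    conn, _ = listener.accept()
    with conn:
        # The request, including any Authorization header, has already
        # crossed the network in cleartext by the time we read it here.
        request = conn.recv(4096).decode()
        for line in request.splitlines():
            if line.lower().startswith("authorization:"):
                captured["leaked"] = line.split(":", 1)[1].strip()
        # Only now can the server redirect the client to HTTPS.
        conn.sendall(b"HTTP/1.1 301 Moved Permanently\r\n"
                     b"Location: https://www.example.com/\r\n\r\n")

listener = socket.create_server(("127.0.0.1", 0))
port = listener.getsockname()[1]
server = threading.Thread(target=toy_redirect_server, args=(listener,))
server.start()

# A client mistakenly uses http:// and sends its API token in the first request.
with socket.create_connection(("127.0.0.1", port)) as client:
    client.sendall(b"GET / HTTP/1.1\r\nHost: www.example.com\r\n"
                   b"Authorization: Bearer SECRET-TOKEN\r\n\r\n")
    response = client.recv(4096).decode()

server.join()
listener.close()
print(captured["leaked"])  # the secret was exposed before the redirect arrived
```

<p>Refusing the connection at the port level, rather than answering it, is the only way to stop this request from being sent in the first place.</p>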
    <div>
      <h3>A better approach to API security</h3>
      <a href="#a-better-approach-to-api-security">
        
      </a>
    </div>
    <p>Any API key or token exposed in plaintext on the public Internet should be considered compromised. We can either address exposure after it occurs or prevent it entirely. The reactive approach involves continuously tracking and revoking compromised credentials, requiring active management to rotate each one. For example, when a plaintext HTTP request is made to our API endpoints, we detect exposed tokens by scanning for 'Authorization' header values.</p><p>In contrast, a preventive approach is stronger and more effective, stopping exposure before it happens. Instead of relying on the API service application to react after receiving potentially sensitive cleartext data, we can preemptively refuse the underlying connection at the <a href="https://www.cloudflare.com/learning/ddos/glossary/open-systems-interconnection-model-osi/"><u>transport layer</u></a>, before any HTTP or application-layer data is exchanged. The <i>preventative </i>approach can be achieved by closing all plaintext HTTP ports for API traffic on our global network. The added benefit is that this is operationally much simpler: by eliminating cleartext traffic, there's no need for key rotation.</p>
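<p>As an illustration of the reactive approach (hypothetical code, not Cloudflare’s actual detection pipeline), a scanner might pull the Authorization value out of a raw cleartext request so the credential can be flagged for rotation:</p>

```python
# Hypothetical sketch of the reactive approach (not Cloudflare's detection
# code): pull the Authorization value out of a cleartext request so the
# credential can be flagged for rotation.
def find_exposed_credential(raw_request):
    """Return the Authorization header value from a raw HTTP request, if any."""
    head = raw_request.partition(b"\r\n\r\n")[0]
    for line in head.split(b"\r\n")[1:]:  # skip the request line itself
        name, _, value = line.partition(b":")
        if name.strip().lower() == b"authorization":
            return value.strip().decode()
    return None

leaked = find_exposed_credential(
    b"GET /client/v4/user HTTP/1.1\r\n"
    b"Host: api.cloudflare.com\r\n"
    b"Authorization: Bearer SECRET-TOKEN\r\n\r\n"
)
print(leaked)  # this token must now be treated as compromised and rotated
```
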
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1PYo3GMZjQ4LbfUHXNOVpj/2341da1d926e077624563358bd5034ef/2.png" />
          </figure><p><sup><i>The transport layer carries the application layer data on top.</i></sup></p><p>To explain why this works: an application-layer request requires an underlying transport connection, like TCP or QUIC, to be established first. The combination of a port number and an IP address serves as a transport layer identifier for creating the underlying transport channel. Ports direct network traffic to the correct application-layer process — for example, port 80 is designated for plaintext HTTP, while port 443 is used for encrypted HTTPS. By disabling the HTTP cleartext server-side port, we prevent that transport channel from being established during the initial "<a href="https://www.cloudflare.com/learning/ddos/glossary/tcp-ip/"><u>handshake</u></a>" phase of the connection — before any application data, such as a secret API key, leaves the client’s machine.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/13fKw8cHkHlsLOzlXJYenr/c9156f67ae99cfdc74dc5917ebc1e5bb/3.png" />
          </figure><p><sup><i>Both TCP and QUIC transport layer handshakes are a prerequisite for HTTPS application data exchange on the web.</i></sup></p><p>Therefore, closing the HTTP interface entirely for API traffic gives a strong and visible <b>fast-failure</b> signal to developers who might be mistakenly accessing <code>http://…</code> instead of <code>https://…</code> with their secret API keys in the first request — a simple one-letter omission, but one with serious implications.</p><p>In theory, this is a simple change, but at Cloudflare’s global scale, implementing it required careful planning and execution. We’d like to share the steps we took to make this transition.</p>
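<p>A small local experiment illustrates this fast failure. When nothing listens on a port (used here as a stand-in for a closed HTTP port), the TCP handshake itself is refused, and the client’s request, API key included, never leaves the machine:</p>

```python
# Local stand-in for a closed HTTP port (illustrative; assumes nothing listens
# on the probed port): the TCP handshake is refused, so no application data,
# such as an API key, ever leaves the client.
import socket

# Grab a local port with no listener by binding it and releasing it immediately.
probe = socket.socket()
probe.bind(("127.0.0.1", 0))
closed_port = probe.getsockname()[1]
probe.close()

try:
    socket.create_connection(("127.0.0.1", closed_port), timeout=2)
    outcome = "connected"
except ConnectionRefusedError:
    # The reset arrives during the handshake; the request was never sent.
    outcome = "refused during handshake"

print(outcome)
```
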
    <div>
      <h3>Understanding the scope</h3>
      <a href="#understanding-the-scope">
        
      </a>
    </div>
    <p>In an ideal scenario, we could simply close all cleartext HTTP ports on our network. However, two key challenges prevent this. First, as shown in the <a href="https://radar.cloudflare.com/adoption-and-usage#http-vs-https"><u>Cloudflare Radar</u></a> figure below, about 2-3% of requests from “likely human” clients to our global network are over plaintext HTTP. While modern browsers prominently warn users about insecure HTTP connections and <a href="https://support.mozilla.org/en-US/kb/https-only-prefs"><u>offer features to silently upgrade to HTTPS</u></a>, this protection doesn't extend to the broader ecosystem of connected devices. IoT devices with limited processing power, automated API clients, or legacy software stacks often lack such safeguards entirely. In fact, when filtering on plaintext HTTP traffic that is “likely automated”, the share <a href="https://radar.cloudflare.com/explorer?dataSet=http&amp;groupBy=http_protocol&amp;filters=botClass%253DLikely_Automated"><u>rises to over 16%</u></a>! We continue to see a wide variety of legacy clients accessing resources over plaintext connections. This trend is not confined to specific networks, but is observable globally.</p><p>Closing HTTP ports, like port 80, across our entire IP address space would block such clients entirely, causing a major disruption in services. While we plan to cautiously start by implementing the change on Cloudflare's API IP addresses, it’s not enough. Therefore, our goal is to ensure all of our customers’ API traffic benefits from this change as well.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1OfUjwkP9iMdjymjtJX7tL/4cd278faf71f610c43239cc41d8f6fba/4.png" />
          </figure><p><sup><i>Breakdown of HTTP and HTTPS for ‘human’ connections</i></sup></p><p>The second challenge relates to limitations posed by the longstanding <a href="https://en.wikipedia.org/wiki/Berkeley_sockets"><u>BSD Sockets API</u></a> at the server-side, which we have addressed using <a href="https://blog.cloudflare.com/tubular-fixing-the-socket-api-with-ebpf/"><u>Tubular</u></a>, a tool that inspects every connection terminated by a server and decides which application should receive it. Operators historically have faced a challenging dilemma: either listen to the same ports across many IP addresses using a single socket (scalable but inflexible), or maintain individual sockets for each IP address (flexible but unscalable). Luckily, Tubular has allowed us to <a href="https://blog.cloudflare.com/its-crowded-in-here/"><u>resolve this using 'bindings'</u></a>, which decouples sockets from specific IP:port pairs. This creates efficient pathways for managing endpoints throughout our systems at scale, enabling us to handle both HTTP and HTTPS traffic intelligently without the traditional limitations of socket architecture.</p><p>Step 0, then, is about provisioning both IPv4 and IPv6 address space on our network that by default has all HTTP ports closed. Tubular enables us to configure and manage these IP addresses differently than others for our endpoints. Additionally, <a href="https://blog.cloudflare.com/addressing-agility/"><u>Addressing Agility</u></a> and <a href="https://blog.cloudflare.com/topaz-policy-engine-design/"><u>Topaz</u></a> enable us to assign these addresses dynamically, and safely, for opted-in domains.</p>
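<p>The historical dilemma can be sketched with the plain BSD sockets API that Tubular works around (illustrative code only): a single wildcard socket covers every local address but cannot treat them differently, while per-address sockets are flexible but do not scale to millions of IPs.</p>

```python
# Sketch of the historical trade-off (illustrative only): one wildcard socket
# versus one socket per address, using the plain BSD sockets API.
import socket

# Scalable but inflexible: a single socket answers for every local address,
# so all addresses must share one configuration.
wildcard = socket.create_server(("0.0.0.0", 0))
wildcard_addr = wildcard.getsockname()[0]

# Flexible but unscalable: one socket per IP:port pair allows per-address
# behavior, but millions of addresses would need millions of sockets.
per_ip = socket.create_server(("127.0.0.1", 0))
per_ip_addr = per_ip.getsockname()[0]

print(wildcard_addr, per_ip_addr)
wildcard.close()
per_ip.close()
```
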
    <div>
      <h3>Moving from strategy to execution</h3>
      <a href="#moving-from-strategy-to-execution">
        
      </a>
    </div>
    <p>In the past, our legacy stack would have made this transition challenging, but today’s Cloudflare possesses the appropriate tools to deliver a scalable solution, rather than addressing it on a domain-by-domain basis.</p><p>Using Tubular, we were able to bind our new set of anycast IP prefixes to our TLS-terminating proxies across the globe. To ensure that no plaintext HTTP traffic is served on these IP addresses, we extended our global <a href="https://en.wikipedia.org/wiki/Iptables"><u>iptables</u></a> firewall configuration to reject any inbound packets on HTTP ports.</p>
            <pre><code>iptables -A INPUT -p tcp -d &lt;IP_ADDRESS_BLOCK&gt; --dport &lt;HTTP_PORT&gt; \
  -j REJECT --reject-with tcp-reset

iptables -A INPUT -p udp -d &lt;IP_ADDRESS_BLOCK&gt; --dport &lt;HTTP_PORT&gt; \
  -j REJECT --reject-with icmp-port-unreachable</code></pre>
            <p>As a result, any connections to these IP addresses on HTTP ports are filtered and rejected at the transport layer, eliminating the need for state management at the application layer by our web proxies.</p><p>The next logical step is to update the <a href="https://www.cloudflare.com/learning/dns/what-is-dns/"><u>DNS assignments</u></a> so that API traffic is routed over the <i>correct</i> IP addresses. In our case, we encoded a new DNS policy for API traffic for the HTTPS-only interface as a declarative <a href="https://blog.cloudflare.com/topaz-policy-engine-design/"><u>Topaz program</u></a> in our authoritative DNS server:</p>
            <pre><code>- name: https_only
  exclusive: true
  config: |
    (config
      ([traffic_class "API"]
       [ipv4 (ipv4_address "192.0.2.1")] # Example IPv4 address
       [ipv6 (ipv6_address "2001:DB8::1:1")] # Example IPv6 address
       [t (ttl 300)]))
  match: |
    (= query_domain_class traffic_class)
  response: |
    (response (list ipv4) (list ipv6) t)</code></pre>
            <p>The above policy encodes that for any DNS query targeting the ‘API traffic’ class, we return the respective HTTPS-only interface IP addresses. Topaz’s safety guarantees ensure <i>exclusivity</i>, preventing other DNS policies from inadvertently matching the same queries and misrouting domains that expect plaintext HTTP to HTTPS-only IPs.</p><p>api.cloudflare.com is the first domain to be added to our HTTPS-only API traffic class, with other applicable endpoints to follow.</p>
    <div>
      <h3>Opting-in your API endpoints</h3>
      <a href="#opting-in-your-api-endpoints">
        
      </a>
    </div>
    <p>As we said above, we've started with api.cloudflare.com and our internal API endpoints to thoroughly monitor any side effects on our own systems before extending this feature to customer domains. We have deployed these changes gradually across all data centers, leveraging Topaz’s flexibility to target subsets of traffic, minimizing disruptions, and ensuring a smooth transition.</p><p>To monitor unencrypted connections for your domains, before blocking access using the feature, you can review the relevant analytics on the Cloudflare dashboard. Log in, select your account and domain, and navigate to the "<a href="https://developers.cloudflare.com/analytics/types-of-analytics/#account-analytics-beta"><u>Analytics &amp; Logs</u></a>" section. There, under the "<i>Traffic Served Over SSL</i>" subsection, you will find a breakdown of encrypted and unencrypted traffic for your site. That data can help provide a baseline for assessing the volume of plaintext HTTP connections for your site that will be blocked when you opt in. After opting in, no traffic for your site should be served over plaintext HTTP, so that number should drop to zero.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4YjvOU3XQqj1Y2Kfv2jIL3/97178a99d17f8938bc3ec53704bbc4b8/5.png" />
          </figure><p><i>Snapshot of ‘Traffic Served Over SSL’ section on Cloudflare dashboard</i></p><p>Towards the last quarter of 2025, we will provide customers the ability to opt in their domains using the dashboard or API (similar to enabling the Always Use HTTPS feature). Stay tuned!</p>
    <div>
      <h3>Wrapping up</h3>
      <a href="#wrapping-up">
        
      </a>
    </div>
    <p>Starting today, any unencrypted connection to api.cloudflare.com will be completely rejected. Developers should <b>no longer</b> expect a 403 Forbidden response for HTTP connections, as we will prevent the underlying connection from being established by closing the HTTP interface entirely. Only secure HTTPS connections will be accepted.</p><p>We are also making updates to transition api.cloudflare.com away from its static IP addresses in the future. As part of that change, we will be discontinuing support for <a href="https://developers.cloudflare.com/ssl/reference/browser-compatibility/#non-sni-support"><u>non-SNI</u></a> legacy clients for the Cloudflare API specifically — currently, an average of just 0.55% of TLS connections to the Cloudflare API do not include an <a href="https://www.cloudflare.com/learning/ssl/what-is-sni/">SNI</a> value. These non-SNI connections are initiated by a small number of accounts. We are committed to coordinating this transition and will work closely with the affected customers before implementing the change. This initiative aligns with our goal of enhancing the agility and reliability of our API endpoints.</p><p>Beyond the Cloudflare API use case, we're also exploring other areas where it's safe to close plaintext traffic ports. While the long tail of unencrypted traffic may persist for a while, it shouldn’t be forced on every site.</p><p>In the meantime, a small step like this can have a big impact in helping make a better Internet, and we are working hard to reliably bring this feature to your domains. We believe security should be free for all!</p> ]]></content:encoded>
            <category><![CDATA[Security Week]]></category>
            <category><![CDATA[HTTPS]]></category>
            <category><![CDATA[API]]></category>
            <category><![CDATA[Addressing]]></category>
            <category><![CDATA[Research]]></category>
            <guid isPermaLink="false">RqjV9vQoPNX8txlmrju6d</guid>
            <dc:creator>Suleman Ahmad</dc:creator>
            <dc:creator>Ash Pallarito</dc:creator>
            <dc:creator>Algin Martin</dc:creator>
        </item>
        <item>
            <title><![CDATA[Introducing HTTP request traffic insights on Cloudflare Radar ]]></title>
            <link>https://blog.cloudflare.com/http-requests-on-cloudflare-radar/</link>
            <pubDate>Tue, 13 Aug 2024 14:00:00 GMT</pubDate>
            <description><![CDATA[ The traffic graphs on Cloudflare Radar have been enhanced to include HTTP request traffic. This new metric complements the existing bytes-based “HTTP traffic” view, and the new graphs can be found on Radar’s Overview and Traffic pages. ]]></description>
            <content:encoded><![CDATA[ <p>Historically, <a href="https://radar.cloudflare.com/traffic"><u>traffic graphs on Cloudflare Radar</u></a> have displayed two metrics: total traffic and <a href="https://www.cloudflare.com/learning/ddos/glossary/hypertext-transfer-protocol-http/"><u>HTTP</u></a> traffic. These graphs show normalized traffic volumes measured in bytes, derived from aggregated <a href="https://www.kentik.com/kentipedia/what-is-netflow-overview/"><u>NetFlow</u></a> data. (NetFlow is a protocol used to collect metadata about IP traffic flows traversing network devices.) Today, we’re adding an additional metric that reflects the number of HTTP requests, normalized over the same time period. By comparing bytes with requests, readers can gain additional insights into traffic patterns and user behavior. Below, we review how this new data has been incorporated into Radar, and explore HTTP request traffic in more detail.</p><p>Note that while we refer to “HTTP request traffic” in this post and on Radar, the term encompasses requests made in the clear over HTTP <b>and</b> over encrypted connections using HTTPS – the latter accounts for <a href="https://radar.cloudflare.com/adoption-and-usage?dateStart=2024-07-01&amp;dateEnd=2024-07-31"><u>~95% of all requests to Cloudflare during July 2024</u></a>.</p>
    <div>
      <h2>New and updated graphs</h2>
      <a href="#new-and-updated-graphs">
        
      </a>
    </div>
    <p>Graphs including HTTP request-based traffic data have been added to the Overview and Traffic sections on Cloudflare Radar. On the <a href="https://radar.cloudflare.com/"><u>Overview</u></a> page, the “Traffic trends” graph now includes a drop-down selector at the upper right, where you can choose between “Total &amp; HTTP bytes” and “HTTP requests &amp; bytes”. We explore the distinction between these further in the following sections.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6JcsgZtjziThbeKMxSUWrT/2abd7b9aee9920c6f2b58e675254f1b7/Screenshot_2024-08-09_at_11.04.05_AM.png" />
          </figure><p></p><p>The default “Total &amp; HTTP bytes” selection displays a time series graph, showing total bytes and HTTP bytes traffic over time, as Radar has done for several years now.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4scdHiTWAUgBgF0mfibZ8c/2f82024f4e8b5a96f9795839b5e7e492/2493-3.png" />
          </figure><p>
</p><p>Selecting “HTTP requests &amp; bytes” from the dropdown switches the view to a time series graph that shows HTTP request traffic and HTTP bytes traffic over time. In both graphs, users can click on a metric in the legend to deselect it and remove it from the graph. These (de)selections are maintained when a user chooses to download or save a graph.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/oQim0jPqAIzfmPZC1ASOB/12fc22dfc52a12b0c362df98f44451d6/2493-4.png" />
          </figure><p></p><p>In addition, we’ve added a “Protocols” summary next to the graph that shows the share of bytes over the selected time period that HTTP accounts for, and the remaining aggregate share associated with the protocols used by other non-HTTP Cloudflare services (such as DNS, WARP, etc.). For most locations or ASNs, HTTP traffic will comprise the majority share of bytes-based traffic.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4k0EgZW5fnUpOuEdIt0BE/4cfd8bd682a11ada069008c541aca07e/Screenshot_2024-08-09_at_11.03.48_AM.png" />
          </figure><p></p><p>On Radar’s <a href="https://radar.cloudflare.com/traffic"><u>Traffic</u></a> page, we have added the HTTP requests metric to the “Traffic volume” graph at the top of the page, allowing you to see how request volume has changed during the selected time period as compared to the previous period, in addition to the changes in the bytes-based metrics.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5AMP7mv3K6Zk1DooGateDx/af663823183a0d06c676a873cdfbf59e/2493-6.png" />
          </figure><p></p><p>A new standalone request-based “HTTP traffic” graph was also added to the Traffic page, just below the bytes-based “Traffic trends” graph. This new graph shows normalized HTTP request traffic volume across the selected time period, and by default, also compares it with the previous time period.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4xVJOTfmnBtYJPzFuvi3iY/0a179dc7b6a8924efc5f2ece94690535/2493-7.png" />
          </figure><p></p><p>Similar to other Radar graphs, these new HTTP request-based graphs can also be downloaded, copied to the clipboard, or embedded in other websites – just click on the share icon.</p><p>As always, the underlying data is also available through the Radar API. The <a href="https://developers.cloudflare.com/api/operations/radar-get-http-timeseries"><u>“HTTP requests Time Series” API endpoint</u></a> returns normalized HTTP request time series data across the specified time period for the requested location or <a href="https://www.cloudflare.com/learning/network-layer/what-is-an-autonomous-system/"><u>autonomous system (ASN)</u></a>.</p>
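<p>As a hedged sketch, a query against the “HTTP requests Time Series” endpoint might be assembled as follows. The path and parameter names are taken from the API reference linked above and may change, so verify them against the current docs; <code>YOUR_API_TOKEN</code> is a placeholder.</p>

```python
# Sketch of building a query against the Radar "HTTP requests Time Series"
# endpoint. The path and parameter names ("location", "dateRange") follow the
# API reference linked above but should be verified against the current docs;
# YOUR_API_TOKEN is a placeholder.
from urllib.parse import urlencode
from urllib.request import Request

BASE = "https://api.cloudflare.com/client/v4/radar/http/timeseries"

def radar_http_timeseries_request(token, location="PT", date_range="7d"):
    """Build (but do not send) a request for normalized HTTP request volume."""
    query = urlencode({"location": location, "dateRange": date_range, "format": "json"})
    return Request(BASE + "?" + query,
                   headers={"Authorization": "Bearer " + token})

req = radar_http_timeseries_request("YOUR_API_TOKEN")
print(req.full_url)
# Send with urllib.request.urlopen(req) to retrieve the JSON time series.
```
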
    <div>
      <h2>What is HTTP request traffic?</h2>
      <a href="#what-is-http-request-traffic">
        
      </a>
    </div>
    <p>An HTTP <a href="https://httpwg.org/specs/rfc9110.html#GET"><u>GET</u></a> request is a message sent from a client (such as your web browser) to a web server (such as one operated by Cloudflare), asking for a particular resource (file). In addition to returning the requested resource, which could range from a single-pixel GIF accounting for just a few bytes, to an API call that returns a few kilobytes of data, to a multi-gigabyte software package, the Web server also returns a set of <a href="https://developer.mozilla.org/en-US/docs/Glossary/Response_header"><u>headers</u></a>, which can include information about the <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Type"><u>content type</u></a>, the <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Last-Modified"><u>last time the resource was modified</u></a>, <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Cookie"><u>cookie</u></a> information, <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Cache-Control"><u>cacheability</u></a>, and more. While GET requests account for the overwhelming majority of HTTP request traffic, such traffic also includes other <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Methods"><u>HTTP request methods</u></a> including HEAD, POST, PUT, and more.</p><p>Cloudflare temporarily logs HTTP requests received by our network, including associated <a href="https://developer.mozilla.org/en-US/docs/Glossary/Request_header"><u>header</u></a> information and “metadata” about the request, such as the <a href="https://developers.cloudflare.com/bots/concepts/bot-score/"><u>bot score</u></a> computed for the request and the associated <a href="https://developers.cloudflare.com/logs/reference/log-fields/zone/http_requests/#cachecachestatus"><u>cache status</u></a>. 
Request logs for a customer’s web properties are <a href="https://developers.cloudflare.com/logs/"><u>available for them to download</u></a>, and after processing and analysis, this data is also presented in the <a href="https://developers.cloudflare.com/analytics/account-and-zone-analytics/"><u>Analytics</u></a> section of the Cloudflare dashboard. The HTTP request data now available on Radar is based on a sample of this log data, aggregated across Cloudflare’s global customer base.</p>
    <div>
      <h2>The value of request-based traffic insights</h2>
      <a href="#the-value-of-request-based-traffic-insights">
        
      </a>
    </div>
    <p>Cloudflare Radar already has HTTP data, so why add more? One key reason for analyzing and including HTTP request traffic is resilience. Having multiple sources of truth with respect to HTTP traffic allows us to better and more quickly distinguish between real events (such as an Internet disruption in a given country or network) and data pipeline issues.</p><p>While bytes-based metrics provide a reasonable proxy for human (user) behavior, especially with respect to activity surrounding Internet disruptions, request-based metrics provide an even better perspective. A lot of HTTP traffic involves relatively small responses – especially API traffic, which now <a href="https://blog.cloudflare.com/application-security-report-2024-update"><u>accounts for 60%</u></a> of all traffic. Furthermore, response sizes can vary widely, ranging from a single-pixel GIF accounting for just a few bytes, to an API call that returns a few kilobytes of data, to a multi-gigabyte software package.</p><p>To that end, the scope of user activity may be insufficiently reflected by a bytes-based metric, or buried in the noise, whereas request activity provides a cleaner signal and a more direct proxy for user activity. This is especially important as we examine the restoration of connectivity after an Internet disruption, attempting to ascertain when activity has returned to “expected” pre-disruption levels.</p><p>Finally, incorporating request-based traffic insights into Radar is simply extending the way that the data is already being used on the site. All of the graphs, maps, and tables presented on Radar’s <a href="https://radar.cloudflare.com/adoption-and-usage"><u>Adoption &amp; Usage</u></a> page, are based on analysis of HTTP request traffic, making use of information contained within request headers (such as HTTP version or user agent) or characteristics of the underlying connection (such as IP version).</p>
    <div>
      <h2>Bytes vs requests – what’s the difference?</h2>
      <a href="#bytes-vs-requests-whats-the-difference">
        
      </a>
    </div>
    <p>The current “HTTP traffic” view aggregates the bytes associated with HTTP requests to Cloudflare’s <a href="https://www.cloudflare.com/en-gb/learning/cdn/what-is-a-cdn/"><u>content delivery (CDN)</u></a> services from the selected location or <a href="https://www.cloudflare.com/learning/network-layer/what-is-an-autonomous-system/"><u>autonomous system (ASN)</u></a>. “Total traffic” aggregates this HTTP traffic along with the traffic associated with other Cloudflare services, including our <a href="https://one.one.one.one/dns/"><u>1.1.1.1 DNS resolver</u></a>, <a href="https://www.cloudflare.com/application-services/products/dns/"><u>authoritative DNS</u></a>, <a href="https://one.one.one.one/"><u>WARP</u></a>, and <a href="https://developers.cloudflare.com/spectrum/"><u>Spectrum</u></a>, among others. (While Spectrum, WARP, and 1.1.1.1 also carry HTTP traffic, the share of HTTP traffic carried by these services is opaque to Radar, and isn't accounted for as part of the HTTP traffic calculations.)</p><p>The bytes associated with a given request include the size of the request, the size of the headers associated with the response, and the size of the response itself. As noted above, the size of a file returned in response to a request can vary widely, depending on what was requested. The shape of the HTTP requests and HTTP bytes lines may be quite similar, but the potential variability in response sizes (in aggregate) can cause the lines to diverge, sometimes significantly so. For example, if an application regularly makes background requests to check for updates, the availability and subsequent download of a large file containing a software update would cause a spike in the HTTP bytes line, while the HTTP requests pattern remained consistent. </p><p>As another example, consider the graph below, capturing HTTP requests and bytes traffic trends for Portugal during the first week of August. 
HTTP bytes traffic initially grows each day between 06:00 and 09:00 UTC (07:00 - 10:00 local summer time), increases much more slowly until around 19:00 UTC (20:00 local summer time), and then increases rapidly before peaking around 21:00 UTC (22:00 local time). This suggests that content consumed during the workday is lighter in terms of bytes (such as API traffic, as discussed above), while evening traffic is more byte-heavy (possibly due to increased consumption of media content). In contrast, after starting to increase around 06:00 UTC (07:00 local summer time), request traffic generally sees three successively higher peaks each day – occurring around 10:00, 14:00, and 21:00 UTC respectively (11:00, 15:00, and 22:00 local summer time). These peaks are most pronounced on weekdays, but are still apparent on weekend days as well, suggesting regular patterns of user activity at those times.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3LSadrlwBTmm091qayB6sS/2dc5afb1ce0470f50cfc81325e729100/2493-8.png" />
          </figure><p>When looking at the “HTTP requests &amp; bytes” graphs on Radar, it is important to remember that they show two different metrics, and as such, only their shape over time is comparable, not their relative sizes. (As both metrics are normalized on a 0 to 1 (Max) scale, the lines on the graph are scaled relative to the maximum normalized value of each metric, including the previous period.)</p>
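<p>As an illustrative sketch (not Radar’s actual implementation), this kind of 0 to 1 (Max) normalization can be expressed in a few lines of Python; the traffic values below are made up:</p>

```python
def max_normalize(series):
    """Scale a series so its largest value maps to 1.0 ("0 to 1 (Max)" scaling)."""
    peak = max(series)
    return [value / peak for value in series]

# Hypothetical hourly values for two metrics with very different magnitudes.
http_requests = [120, 180, 240, 200]        # request counts
http_bytes = [3.0e9, 4.5e9, 9.0e9, 5.0e9]   # bytes transferred

# After normalization, only the shape of each line over time is comparable;
# the relative sizes of the two metrics are deliberately lost.
norm_requests = max_normalize(http_requests)  # peaks at 1.0
norm_bytes = max_normalize(http_bytes)        # also peaks at 1.0
```

<p>Plotting the two normalized series on the same axes is what allows their shapes to be compared despite the absolute values differing by many orders of magnitude.</p>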
    <div>
      <h2>Conclusion</h2>
      <a href="#conclusion">
        
      </a>
    </div>
    <p>The addition of HTTP request metrics to Cloudflare Radar brings additional visibility to traffic trends at a global, location, and network level, complementing the existing bytes-based HTTP traffic metrics. Derived from traffic to customer web properties, these new metrics can be found on Radar’s Overview and Traffic pages.</p><p>In addition to HTTP traffic trends, visit <a href="https://radar.cloudflare.com/"><u>Cloudflare Radar</u></a> for additional insights around Internet disruptions, routing issues, attacks, domain popularity, and Internet quality. Follow us on social media at <a href="https://x.com/CloudflareRadar"><u>@CloudflareRadar</u></a> (X), <a href="https://noc.social/@cloudflareradar"><u>noc.social/@cloudflareradar</u></a> (Mastodon), and <a href="https://bsky.app/profile/radar.cloudflare.com"><u>radar.cloudflare.com</u></a> (Bluesky), or <a><u>contact us via email</u></a>.</p> ]]></content:encoded>
            <category><![CDATA[Radar]]></category>
            <category><![CDATA[Trends]]></category>
            <category><![CDATA[Internet Traffic]]></category>
            <category><![CDATA[HTTPS]]></category>
            <category><![CDATA[Consumer Services]]></category>
            <guid isPermaLink="false">6fI16wZ1kKoXv4VV5pIJ9O</guid>
            <dc:creator>David Belson</dc:creator>
        </item>
        <item>
            <title><![CDATA[Speeding up HTTPS and HTTP/3 negotiation with... DNS]]></title>
            <link>https://blog.cloudflare.com/speeding-up-https-and-http-3-negotiation-with-dns/</link>
            <pubDate>Wed, 30 Sep 2020 13:00:00 GMT</pubDate>
            <description><![CDATA[ A look at a new DNS resource record intended to speed-up negotiation of HTTP security and performance features and how it will help make the web faster. ]]></description>
            <content:encoded><![CDATA[ <p>In late June 2019, Cloudflare's resolver team noticed a spike in DNS requests for the 65479 Resource Record thanks to data exposed through <a href="/introducing-cloudflare-radar/">our new Radar service</a>. We began investigating and found these to be a part of <a href="https://developer.apple.com/videos/play/wwdc2020/10111/">Apple’s iOS 14 beta release</a> where they were testing out a new SVCB/HTTPS record type.</p><p>Once we saw that Apple was requesting this record type, and while the iOS 14 beta was still ongoing, we rolled out support across the Cloudflare customer base.</p><p>This blog post explains what this new record type does and its significance, but there’s also a deeper story: Cloudflare customers get automatic support for new protocols like this.</p><p>That means that today if you’ve enabled HTTP/3 on an Apple device running iOS 14, when it needs to talk to a Cloudflare customer (say you browse to a Cloudflare-protected website, or use an app whose API is on Cloudflare) it can find the best way of making that connection automatically.</p><p>And if you’re a Cloudflare customer you have to do… absolutely nothing… to give Apple users the best connection to your Internet property.</p>
    <div>
      <h3>Negotiating HTTP security and performance</h3>
      <a href="#negotiating-http-security-and-performance">
        
      </a>
    </div>
    <p>Whenever a user types a URL in the browser box without specifying a scheme (like “https://” or “http://”), the browser cannot assume, without prior knowledge such as a <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Strict-Transport-Security">Strict-Transport-Security (HSTS)</a> cache or preload list entry, whether the requested website supports HTTPS or not. The browser will first try to fetch the resources using plaintext HTTP, and only if the website redirects to an HTTPS URL, or specifies an HSTS policy in the initial HTTP response, will the browser fetch the resource again over a secure connection.</p>
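    <p>Concretely, that upgrade typically arrives as a redirect in the first (plaintext) response, with an HSTS policy then set on the subsequent secure response; the status codes and values here are illustrative:</p>
            <pre><code>HTTP/1.1 301 Moved Permanently
Location: https://example.com/

HTTP/1.1 200 OK
Strict-Transport-Security: max-age=31536000</code></pre>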
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7iz2JFuI19whcuN931y8pk/8d46dd114b946940a6cdcd49502d7b07/image4.gif" />
            
            </figure><p>This means that the latency incurred in fetching the initial resource (say, the index page of a website) is doubled, due to the fact that the browser needs to re-establish the connection over TLS and request the resource all over again. But worse still, the initial request is leaked to the network in plaintext, which could potentially be modified by malicious on-path attackers (think of all those unsecured public WiFi networks) to redirect the user to a completely different website. In practical terms, this weakness is sometimes used by said unsecured public WiFi network operators to sneak advertisements into people’s browsers.</p><p>Unfortunately, that’s not the full extent of it. This problem also impacts <a href="/http3-the-past-present-and-future/">HTTP/3</a>, the newest revision of the HTTP protocol that provides increased performance and security. <a href="https://www.cloudflare.com/learning/performance/what-is-http3/">HTTP/3</a> is advertised using the <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Alt-Svc">Alt-Svc</a> HTTP header, which is only returned after the browser has already contacted the origin using a different and potentially less performant HTTP version. The browser ends up missing out on using faster HTTP/3 on its first visit to the website (although it does store the knowledge for later visits).</p>
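            <p>Concretely, an origin that supports HTTP/3 advertises it with a response header along these lines (the port and max-age values here are illustrative):</p>
            <pre><code>Alt-Svc: h3=":443"; ma=86400</code></pre>
            <p>The client only learns this after the first response has already arrived over an older HTTP version.</p>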
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1ulWkZdFUIrgAg7WSKuOwc/acf044f1633f1116567dfd1a0c3d15e0/image2-18.png" />
            
            </figure><p>The fundamental problem comes from the fact that negotiation of HTTP-related parameters (such as whether HTTPS or HTTP/3 can be used) is done through HTTP itself (either via a redirect, HSTS and/or Alt-Svc headers). This leads to a chicken and egg problem where the client needs to use the most basic HTTP configuration that has the best chance of succeeding for the initial request. In most cases this means using plaintext HTTP/1.1. Only after it learns of parameters can it change its configuration for the following requests.</p><p>But before the browser can even attempt to connect to the website, it first needs to resolve the website’s domain to an IP address via DNS. This presents an opportunity: what if additional information required to establish a connection could be provided, in addition to IP addresses, with DNS?</p><p>That’s what we’re excited to be announcing today: Cloudflare has rolled out initial support for HTTPS records to our edge network. Cloudflare’s DNS servers will now automatically generate HTTPS records on the fly to advertise whether a particular zone supports HTTP/3 and/or HTTP/2, based on whether those features are enabled on the zone.</p>
    <div>
      <h3>Service Bindings via DNS</h3>
      <a href="#service-bindings-via-dns">
        
      </a>
    </div>
    <p><a href="https://tools.ietf.org/html/draft-ietf-dnsop-svcb-https-01">The new proposal</a>, currently under discussion at the Internet Engineering Task Force (IETF), defines a family of DNS resource record types (“SVCB”) that can be used to negotiate parameters for a variety of application protocols.</p><p>The generic DNS record “SVCB” can be instantiated into records specific to different protocols. The draft specification defines one such instance called “HTTPS”, specific to the HTTP protocol, which can be used not only to signal to the client that it can connect over a secure connection (skipping the initial unsecured request), but also to advertise the different HTTP versions supported by the website. In the future, potentially even more features could be advertised.</p>
            <pre><code>example.com 3600 IN HTTPS 1 . alpn="h3,h2"</code></pre>
            <p>The DNS record above advertises support for the HTTP/3 and HTTP/2 protocols for the example.com origin.</p><p>This is best used alongside DNS over HTTPS or DNS over TLS, and DNSSEC, to prevent malicious actors from manipulating the record.</p><p>The client will need to fetch not only the typical A and AAAA records to get the origin’s IP addresses, but also the HTTPS record. It can of course do these lookups in parallel to avoid additional latency at the start of the connection, but this could potentially lead to A/AAAA and HTTPS responses diverging from each other. For example, in cases where the origin makes use of <a href="https://www.cloudflare.com/learning/performance/what-is-dns-load-balancing/">DNS load-balancing</a>: if an origin can be served by multiple <a href="https://www.cloudflare.com/learning/cdn/what-is-a-cdn/">CDNs</a> it might happen that the responses for A and/or AAAA records come from one CDN, while the HTTPS record comes from another. In some cases this can lead to failures when connecting to the origin (say, if the HTTPS record from one of the CDNs advertises support for HTTP/3, but the CDN the client ends up connecting to doesn’t support it).</p><p>The SVCB and HTTPS records solve this by providing the IP addresses directly, without the need for the client to look at A and AAAA records. This is done via the “ipv4hint” and “ipv6hint” parameters that can optionally be added to these records, which provide lists of IPv4 and IPv6 addresses that can be used by the client in lieu of the addresses specified in A and AAAA records. Of course, clients will still need to query the A and AAAA records, to support cases where no SVCB or HTTPS record is available, but these IP hints provide an additional layer of robustness.</p>
            <pre><code>example.com 3600 IN HTTPS 1 . alpn="h3,h2" ipv4hint="192.0.2.1" ipv6hint="2001:db8::1"</code></pre>
            <p>In addition to all this, SVCB and HTTPS can also be used to define alternative endpoints that are authoritative for a service, in a similar vein to SRV records:</p>
            <pre><code>example.com 3600 IN HTTPS 1 example.net alpn="h3,h2"
example.com 3600 IN HTTPS 2 example.org alpn="h2"</code></pre>
            <p>In this case the “example.com” HTTPS service can be provided by both “example.net” (which supports both HTTP/3 and HTTP/2, in addition to HTTP/1.x) and “example.org” (which only supports HTTP/2 and HTTP/1.x). The client will first need to fetch A and AAAA records for “example.net” or “example.org” before being able to connect, which might increase the connection latency, but the service operator can make use of the IP hint parameters discussed above in this case as well, to reduce the number of DNS lookups the client needs to perform.</p><p>This means that SVCB and HTTPS records might finally provide a way for SRV-like functionality to be supported by popular browsers and other clients that have historically not supported SRV records.</p>
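<p>A client’s choice among such records can be sketched as follows: prefer the record with the lowest SvcPriority whose advertised ALPN set overlaps the protocols the client supports. This is an illustrative Python sketch using a made-up record representation, not a real DNS library’s API:</p>

```python
# Hypothetical record representation: (svc_priority, target_name, alpn_list).
# Purely to illustrate the priority-based selection logic described above.
RECORDS = [
    (1, "example.net", ["h3", "h2"]),
    (2, "example.org", ["h2"]),
]

def pick_endpoint(records, client_alpns):
    """Return the lowest-priority target whose ALPNs overlap the client's."""
    for priority, target, alpns in sorted(records):
        if set(alpns) & set(client_alpns):
            return target
    return None  # no usable record: fall back to plain A/AAAA resolution

pick_endpoint(RECORDS, ["h3", "h2"])  # -> "example.net"
pick_endpoint(RECORDS, ["h2"])        # -> "example.net" (it also offers h2)
```

<p>An HTTP/2-only client still lands on “example.net” here because that endpoint advertises “h2” as well; “example.org” would only be chosen if the higher-priority endpoint offered no protocol the client supports.</p>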
    <div>
      <h3>There is always room at the top (apex)</h3>
      <a href="#there-is-always-room-at-the-top-apex">
        
      </a>
    </div>
    <p>When setting up a website on the Internet, it’s common practice to use a “www” subdomain (like in “<a href="http://www.cloudflare.com">www.cloudflare.com</a>”) to identify the site, as well as the “apex” (or “root”) of the domain (in this case, “cloudflare.com”). In order to avoid duplicating the DNS configuration for both domains, the “www” subdomain can typically be configured as a <a href="/introducing-cname-flattening-rfc-compliant-cnames-at-a-domains-root/#cnamesforthewin">CNAME (Canonical Name) record</a>, that is, a record that aliases one name to another.</p>
            <pre><code>cloudflare.com.   3600 IN A 192.0.2.1
cloudflare.com.   3600 IN AAAA 2001:db8::1
www               3600 IN CNAME cloudflare.com.</code></pre>
            <p>This way the list of IP addresses of the websites won’t need to be duplicated all over again, but clients requesting A and/or AAAA records for “<a href="http://www.cloudflare.com">www.cloudflare.com</a>” will still get the same results as “cloudflare.com”.</p><p>However, there are some cases where using a CNAME might seem like the best option, but ends up subtly breaking the DNS configuration for a website. For example when setting up services such as <a href="https://docs.gitlab.com/ee/user/project/pages/">GitLab Pages</a>, <a href="https://docs.github.com/en/github/working-with-github-pages">GitHub Pages</a> or <a href="https://www.netlify.com/">Netlify</a> with a custom domain, the user is generally asked to add an A (and sometimes AAAA) record to the DNS configuration for their domain. Those IP addresses are hard-coded in users’ configurations, which means that if the provider of the service ever decides to change the addresses (or add new ones), even if just to provide some form of load-balancing, all of their users will need to manually change their configuration.</p><p>Using a CNAME to a more stable domain which can then have variable A and AAAA records might seem like a better option, and some of these providers do support that, but it’s important to note that this generally only works for subdomains (like “www” in the previous example) and not apex records. This is because the DNS specification that defines CNAME records states that when a CNAME is defined for a particular name, there can’t be any other records associated with it. This is fine for subdomains, but apex records will need to have additional records defined, such as SOA and NS, for the DNS configuration to work properly and could also have records such as MX to make sure emails get properly delivered. 
In practical terms, this means that defining a CNAME record at the apex of a domain might appear to be working fine in some cases, but be subtly broken in ways that are not immediately apparent.</p><p>But what does this all have to do with SVCB and HTTPS records? Well, it turns out that those records can also solve this problem, by defining an alternative format called “alias form” that behaves in the same manner as a CNAME in all the useful ways, but without the annoying historical baggage. A domain operator will be able to define a record such as:</p>
            <pre><code>example.com. 3600 IN HTTPS 0 example.org.</code></pre>
            <p>and expect it to work as if a CNAME was defined, but without the subtle side-effects.</p>
    <div>
      <h3>One more thing</h3>
      <a href="#one-more-thing">
        
      </a>
    </div>
    <p><a href="/encrypted-sni/">Encrypted SNI</a> is an extension to TLS intended to improve the privacy of users on the Internet. You might remember how it makes use of a custom DNS record to advertise the server’s public key share, which clients then use to derive the secret key necessary to actually encrypt the SNI. In newer revisions of the specification (which is now called “Encrypted ClientHello” or “ECH”) the custom TXT record used previously is simply replaced by a new parameter, called “echconfig”, for the SVCB and HTTPS records.</p><p>This means that SVCB/HTTPS are a requirement to support newer revisions of Encrypted SNI/Encrypted ClientHello. More on this later this year.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4EBQrn9Y1mOFSs0PtAWtri/f77c27a9e29ef4492e11e0b055e4eacf/image1-28.png" />
            
            </figure>
    <div>
      <h3>What now?</h3>
      <a href="#what-now">
        
      </a>
    </div>
    <p>This all sounds great, but what does it actually mean for Cloudflare customers? As mentioned earlier, we have enabled initial support for HTTPS records across our edge network. Cloudflare’s DNS servers will automatically generate HTTPS records on the fly to advertise whether a particular zone supports HTTP/3 and/or HTTP/2, based on whether those features are enabled on the zone, and we will later also add Encrypted ClientHello support.</p><p>Thanks to Cloudflare’s large network that spans millions of web properties (<a href="https://w3techs.com/technologies/history_overview/dns_server">we happen to be one of the most popular DNS providers</a>), serving these records on our customers' behalf will help build a more secure and performant Internet for anyone that is using a supporting client.</p><p>Adopting new protocols requires cooperation between multiple parties. We have been working with various browsers and clients to increase the support and adoption of HTTPS records. Over the last few weeks, Apple’s iOS 14 release has included <a href="https://mailarchive.ietf.org/arch/msg/dnsop/eeP4H9fli712JPWnEMvDg1sLEfg/">client support for HTTPS records</a>, allowing connections to be upgraded to QUIC when the HTTP/3 parameter is returned in the DNS record. Apple has reported that so far, of the population that has manually enabled HTTP/3 on iOS 14, 8% of the QUIC connections had the HTTPS record response.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7tcR9Q2DOynyF2kGZDNwKz/47cb2bc54fdd25f8778e3cd6f94d773e/image3-15.png" />
            
            </figure><p>Other browser vendors, such as <a href="https://groups.google.com/a/chromium.org/forum/#!msg/blink-dev/brZTXr6-2PU/g0g8wWwCAwAJ">Google</a> and <a href="https://bugzilla.mozilla.org/show_bug.cgi?id=1634793">Mozilla</a>, are also working on shipping support for HTTPS records to their users, and we hope to be hearing more on this front soon.</p> ]]></content:encoded>
            <category><![CDATA[Birthday Week]]></category>
            <category><![CDATA[HTTPS]]></category>
            <category><![CDATA[HTTP3]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[Performance]]></category>
            <guid isPermaLink="false">7KRAY9gQONDIoKm8SZ3a8y</guid>
            <dc:creator>Alessandro Ghedini</dc:creator>
        </item>
        <item>
            <title><![CDATA[Securing infrastructure at scale with Cloudflare Access]]></title>
            <link>https://blog.cloudflare.com/access-wildcard-subdomain/</link>
            <pubDate>Fri, 19 Jul 2019 13:00:00 GMT</pubDate>
            <description><![CDATA[ I rarely have to deal with the hassle of using a corporate VPN and I hope it remains this way. As a new member of the Cloudflare team, that seems possible. Coworkers who joined a few years ago did not have that same luck. They had to use a VPN to get any work done. What changed? ]]></description>
            <content:encoded><![CDATA[ <p>I rarely have to deal with the hassle of using a corporate VPN and I hope it remains this way. As a new member of the Cloudflare team, that seems possible. Coworkers who joined a few years ago did not have that same luck. They had to use a VPN to get any work done. What changed?</p><p>Cloudflare released <a href="https://www.cloudflare.com/products/cloudflare-access/">Access</a>, and now we’re able to do our work without ever needing a VPN again. Access is a way to control access to your internal applications and infrastructure. Today, we’re releasing a new feature to help you replace your VPN by deploying Access at an even greater scale.</p>
    <div>
      <h3>Access in an instant</h3>
      <a href="#access-in-an-instant">
        
      </a>
    </div>
    <p>Access replaces a corporate VPN by evaluating every request made to a resource secured behind Access. Administrators can make web applications, remote desktops, and physical servers available at dedicated URLs, configured as DNS records in Cloudflare. These tools are protected via access policies, set by the account owner, so that only authenticated users can access those resources. End users can authenticate over both HTTPS and SSH. They’re prompted to log in with their SSO credentials and Access redirects them to the application or server.</p><p>For your team, Access makes your internal web applications and servers in your infrastructure feel as seamless to reach as your SaaS tools. Originally we built Access to replace our own corporate VPN. In practice, this became the fastest way to control who can reach different pieces of our own infrastructure. However, administrators configuring Access were required to create a discrete policy for each application/hostname. Now, administrators don’t have to create a dedicated policy for each new resource secured by Access; one policy will cover every URL protected.</p><p>When Access launched, the product’s primary use case was to secure internal web applications. Creating unique rules for each was tedious, but manageable. Access has since become a centralized way to secure infrastructure in many environments. Now that companies are using Access to secure hundreds of resources, that method of building policies no longer fits.</p><p>Starting today, Access users can build policies using a wildcard subdomain, replacing dozens or even hundreds of bespoke rules with a single policy and removing a typical bottleneck. With a wildcard, the same ruleset will now automatically apply to any subdomain your team generates that is gated by Access.</p>
    <div>
      <h3>How can teams deploy at scale with wildcard subdomains?</h3>
      <a href="#how-can-teams-deploy-at-scale-with-wildcard-subdomains">
        
      </a>
    </div>
    <p>Administrators can secure their infrastructure with a wildcard policy in the Cloudflare dashboard. With Access enabled, Cloudflare adds identity-based evaluation to that traffic.</p><p>In the Access dashboard, you can now build a rule to secure any subdomain of the site you added to Cloudflare. Create a new policy and enter a wildcard tag (“*”) into the subdomain field. You can then configure rules, at a granular level, using your identity provider to control who can reach any subdomain of that apex domain.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/53vsHAF0xd7GPynqsfL7hs/a22e729e336eb64db5221cd0bb357120/Screen-Shot-2019-06-04-at-3.12.54-PM-1.png" />
            
            </figure><p>This new policy will propagate to all 180 of Cloudflare’s data centers in seconds and any new subdomains created will be protected.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/52jTgporUjkRHR7XswwS80/142509a3be2c6229cde114946607960c/M4-Full-Login-Flow--1--1.gif" />
            
            </figure>
    <div>
      <h3>How are teams using it?</h3>
      <a href="#how-are-teams-using-it">
        
      </a>
    </div>
    <p>Since releasing this feature in a closed beta, we’ve seen teams use it to gate access to their infrastructure in several new ways. Many teams use Access to secure dev and staging environments of sites that are being developed before they hit production. Whether for QA or collaboration with partner agencies, Access helps make it possible to share sites quickly with a layer of authentication. With wildcard subdomains, teams are deploying dozens of versions of new sites at new URLs without needing to touch the Access dashboard.</p><p>For example, an administrator can create a policy for “*.example.com” and then developers can deploy iterations of sites at “dev-1.example.com” and “dev-2.example.com” and both inherit the global Access policy.</p><p>The feature is also helping teams lock down their entire hybrid, on-premise, or public cloud infrastructure with the Access SSH feature. Teams can assign dynamic subdomains to their entire fleet of servers, regardless of environment, and developers and engineers can reach them over an SSH connection without a VPN. Administrators can now bring infrastructure online, in an entirely new environment, without additional or custom security rules.</p>
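<p>Conceptually, the wildcard match can be sketched with Python’s fnmatch. This is an illustration only, not Access’s actual policy engine; note that fnmatch’s “*” also matches across dots, which may differ from the real matching semantics:</p>

```python
from fnmatch import fnmatch

def policy_matches(hostname, pattern):
    """Return True if the hostname falls under the wildcard policy pattern."""
    return fnmatch(hostname, pattern)

policy = "*.example.com"
policy_matches("dev-1.example.com", policy)  # True
policy_matches("dev-2.example.com", policy)  # True
policy_matches("example.com", policy)        # False: the apex is not a subdomain
```

<p>This is why new deployments like “dev-3.example.com” inherit the policy with no dashboard changes: the rule is evaluated against the hostname of each request, not against a fixed list of sites.</p>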
    <div>
      <h3>What about creating DNS records?</h3>
      <a href="#what-about-creating-dns-records">
        
      </a>
    </div>
    <p>Cloudflare Access requires users to associate a resource with a domain or subdomain. While the wildcard policy will cover all subdomains, teams will still need to connect their servers to the Cloudflare network and generate DNS records for those services.</p><p>Argo Tunnel can reduce that burden significantly. <a href="https://developers.cloudflare.com/argo-tunnel/quickstart/">Argo Tunnel</a> lets you expose a server to the Internet without opening any inbound ports. The service runs a lightweight daemon on your server that initiates outbound tunnels to the Cloudflare network.</p><p>Instead of managing DNS, network, and firewall complexity, Argo Tunnel helps administrators serve traffic from their origin through Cloudflare with a single command. That single command will generate the DNS record in Cloudflare automatically, allowing you to focus your time on building and managing your infrastructure.</p>
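    <p>At the time of writing, a single invocation looks roughly like this (the hostname and local service URL are placeholders):</p>
            <pre><code>cloudflared tunnel --hostname app.example.com --url http://localhost:8000</code></pre>
            <p>Argo Tunnel then creates the DNS record for the chosen hostname automatically.</p>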
    <div>
      <h3>What’s next?</h3>
      <a href="#whats-next">
        
      </a>
    </div>
    <p>More teams are adopting a hybrid or multi-cloud model for deploying their infrastructure. In the past, these teams were left with just two options for securing those resources: peering a VPN with each provider or relying on custom IAM flows with each environment. In the end, both of these solutions were not only costly but also difficult to manage.</p><p>While infrastructure benefits from becoming distributed, security is something that is best when controlled in a single place. Access can consolidate how a team controls who can reach their entire fleet of servers and services.</p>
            <category><![CDATA[Cloudflare Access]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[HTTPS]]></category>
            <category><![CDATA[VPN]]></category>
            <guid isPermaLink="false">6Izr90dFwda4FlRyRe9Isc</guid>
            <dc:creator>Jeremy Bernick</dc:creator>
        </item>
        <item>
            <title><![CDATA[A Tale of Two (APT) Transports]]></title>
            <link>https://blog.cloudflare.com/apt-transports/</link>
            <pubDate>Thu, 18 Jul 2019 14:12:38 GMT</pubDate>
            <description><![CDATA[ Securing access to your APT repositories is critical. At Cloudflare, like in most organizations, we used a legacy VPN to lock down who could reach our internal software repositories. However, a network perimeter model lacks a number of features that we consider critical to a team’s security. ]]></description>
            <content:encoded><![CDATA[ <p>Securing access to your APT repositories is critical. At Cloudflare, like in most organizations, we used a legacy VPN to lock down who could reach our internal software repositories. However, a network perimeter model lacks a number of features that we consider critical to a team’s security.</p><p>As a company, we’ve been moving our internal infrastructure to our own zero-trust platform, Cloudflare Access. Access added SaaS-like convenience to the on-premise tools we managed. We started with web applications and then moved resources we need to reach over SSH behind the Access gateway, for example Git or user-SSH access. However, we still needed to handle how services communicate with our internal APT repository.</p><p>We recently <a href="https://github.com/cloudflare/apt-transport-cloudflared">open sourced a new APT transport</a> which allows customers to protect their private APT repositories using <a href="https://www.cloudflare.com/products/cloudflare-access/">Cloudflare Access</a>. In this post, we’ll outline the history of APT tooling, APT transports and introduce our new APT transport for Cloudflare Access.</p>
    <div>
      <h2>A brief history of APT</h2>
      <a href="#a-brief-history-of-apt">
        
      </a>
    </div>
    <p><a href="https://en.wikipedia.org/wiki/APT_(Package_Manager)">Advanced Package Tool</a>, or APT, simplifies the installation and removal of software on Debian and related Linux distributions. Originally released in 1998, APT was to Debian what the App Store was to modern smartphones - a decade ahead of its time!</p><p>APT sits atop the lower-level <a href="https://en.wikipedia.org/wiki/Dpkg">dpkg</a> tool, which is used to install, query, and remove <a href="https://en.wikipedia.org/wiki/Deb_(file_format)">.deb packages</a> - the primary software packaging format in <a href="https://www.debian.org/doc/debian-policy/ch-binary.html">Debian</a> and related Linux distributions such as Ubuntu. With dpkg, packaging and managing software installed on your system became easier - but it didn’t solve the problem of distributing packages, whether over the Internet or via local media; at the time of inception, it was commonplace to install packages from a <a href="https://en.wikipedia.org/wiki/CD-ROM">CD-ROM</a>.</p><p>APT introduced the concept of repositories - a mechanism for storing and indexing a collection of .deb packages. APT supports connecting to multiple repositories for finding packages and automatically resolving package dependencies. The way APT connects to said repositories is via a “transport” - a mechanism for communicating between the APT client and its repository source (more on this later).</p>
    <div>
      <h2>APT over the Internet</h2>
      <a href="#apt-over-the-internet">
        
      </a>
    </div>
    <p>Prior to version 1.5, APT did not include support for HTTPS - if you wanted to install a package over the Internet, your connection was not encrypted. This reduces privacy - an attacker snooping traffic could determine the specific package versions your system is installing. It also exposes you to on-path attacks, where an attacker could, for example, exploit a remote code execution vulnerability. Just 6 months ago, we saw an example of the latter with <a href="https://nvd.nist.gov/vuln/detail/CVE-2019-3462">CVE-2019-3462</a>.</p><p>Enter the APT HTTPS transport - an optional transport you can install to add support for connecting to repositories over HTTPS. Once installed, users need to configure their APT <a href="https://manpages.debian.org/stretch/apt/sources.list.5.en.html">sources.list</a> with repositories using HTTPS.</p><p>The challenge here, of course, is that the most common way to install this transport is via APT and HTTP - a classic bootstrapping problem! An alternative is to download the .deb package via curl and install it via dpkg. You’ll find the links to apt-transport-https binaries for Stretch <a href="https://packages.debian.org/stretch/apt-transport-https#pdownload">here</a> - once you have the URL path for your system architecture, you can download it from the <a href="https://deb.debian.org/">deb.debian.org</a> mirror-redirector over HTTPS, e.g. for amd64 (a.k.a. x86_64):</p>
            <pre><code>curl -o apt-transport-https.deb -L https://deb.debian.org/debian/pool/main/a/apt/apt-transport-https_1.4.9_amd64.deb 
HASH=c8c4366d1912ff8223615891397a78b44f313b0a2f15a970a82abe48460490cb &amp;&amp; echo "$HASH  apt-transport-https.deb" | sha256sum -c &amp;&amp; sudo dpkg -i apt-transport-https.deb</code></pre>
            <p>To confirm which APT transports are installed on your system, you can list each “method binary” that is installed:</p>
            <pre><code>ls /usr/lib/apt/methods</code></pre>
            <p>With apt-transport-https installed you should now see ‘https’ in that list.</p>
    <div>
      <h3>The state of APT &amp; HTTPS on Debian</h3>
      <a href="#the-state-of-apt-https-on-debian">
        
      </a>
    </div>
    <p>You may be wondering how relevant this APT HTTPS transport is today. Given the prevalence of HTTPS on the web today, I was surprised when I found out exactly how relevant it is.</p><p>Up until a couple of weeks ago, Debian <a href="https://www.debian.org/releases/stretch/">Stretch</a> (9.x) was the current stable release; 9.0 was first released in June 2017 - and the latest version (9.9) includes <a href="https://packages.debian.org/stretch/apt">apt 1.4.9</a> by default - meaning that <b>securing your APT communication for Debian Stretch requires installing the optional </b><a href="https://packages.debian.org/stretch/apt-transport-https"><b>apt-transport-https</b></a><b> package</b>.</p><p>Thankfully, on July 6 of this year, Debian released the latest version - <a href="https://www.debian.org/releases/buster/">Buster</a> - which currently includes <a href="https://packages.debian.org/buster/apt">apt 1.8.2</a> with HTTPS support built-in by default, negating the need for installing the <a href="https://packages.debian.org/buster/apt-transport-https">apt-transport-https</a> package - and removing the bootstrapping challenge of installing HTTPS support via HTTPS!</p>
    <div>
      <h3>BYO HTTPS APT Repository</h3>
      <a href="#byo-https-apt-repository">
        
      </a>
    </div>
    <p>A powerful feature of APT is the ability to run your own repository. You can mirror a public repository to improve performance or protect against an outage. And if you’re producing your own software packages, you can run your own repository to simplify distribution and installation of your software for your users.</p><p>If you have your own APT repository and you’re looking to secure it with HTTPS we’ve offered free <a href="/introducing-universal-ssl/">Universal SSL since 2014</a> and last year introduced a way to <a href="/how-to-make-your-site-https-only/">require it site-wide automatically</a> with one click. You’ll get the benefits of <a href="https://www.cloudflare.com/ddos/">DDoS attack protection</a>, a <a href="https://www.cloudflare.com/cdn/">Global CDN with Caching</a>, and <a href="https://www.cloudflare.com/analytics/">Analytics</a>.</p><p>But what if you’re looking for more than just HTTPS for your APT repository? For companies operating private APT repositories, authentication of your APT repository may be a challenge. This is where our new, custom APT transport comes in.</p>
    <div>
      <h3>Building custom transports</h3>
      <a href="#building-custom-transports">
        
      </a>
    </div>
    <p>The system design of APT is powerful in that it supports extensibility via Transport executables, but how does this mechanism work?</p><p>When APT attempts to connect to a repository, it finds the executable which matches the “scheme” from the repository URL (e.g. an “https://” prefix on a repository results in the “https” executable being called).</p><p>APT then uses the common Linux standard streams: stdin, stdout, and stderr. It communicates via stdin/stdout using a set of plain-text Messages, which follow <a href="https://tools.ietf.org/html/rfc822">IETF RFC #822</a> (the same format that .deb “Package” files use).</p><p>Examples of input messages include “600 URI Acquire”, and examples of output messages include “200 URI Start” and “201 URI Done”:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2NUk7sB64WQWZJS3G4m3MS/dee24a134a4372b36837f54b46772500/apt-transport-internals-1.png" />
            
            </figure><p>If you’re interested in building your own transport, check out the <a href="http://www.fifi.org/doc/libapt-pkg-doc/method.html/ch2.html">APT method interface spec</a> for more implementation details.</p>
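<p>To make the framing concrete, here is a minimal sketch in Go (the language cloudflared and our transport are written in) of parsing one such message: a status line followed by colon-separated fields, terminated by a blank line. The <code>Message</code> type and field names below are illustrative, not code from the real transport.</p>

```go
package main

import (
	"fmt"
	"strings"
)

// Message is one RFC-822-style message in the APT method protocol:
// a status line such as "600 URI Acquire", followed by "Key: Value"
// fields, terminated by a blank line.
type Message struct {
	Status string
	Fields map[string]string
}

// parseMessage parses a single protocol message from its raw text.
// This is an illustrative sketch of the framing, not code from the
// real apt-transport-cloudflared implementation.
func parseMessage(raw string) Message {
	lines := strings.Split(strings.TrimRight(raw, "\n"), "\n")
	msg := Message{Status: lines[0], Fields: map[string]string{}}
	for _, line := range lines[1:] {
		if key, value, ok := strings.Cut(line, ": "); ok {
			msg.Fields[key] = value
		}
	}
	return msg
}

func main() {
	raw := "600 URI Acquire\nURI: cfd://apt.example.com/pool/pkg.deb\nFilename: /tmp/pkg.deb\n"
	m := parseMessage(raw)
	fmt.Println(m.Status)        // 600 URI Acquire
	fmt.Println(m.Fields["URI"]) // cfd://apt.example.com/pool/pkg.deb
}
```

<p>A real method binary would loop over stdin, reading one such message at a time, and write its own “200 URI Start” and “201 URI Done” messages back on stdout.</p>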
    <div>
      <h3>APT meets Access</h3>
      <a href="#apt-meets-access">
        
      </a>
    </div>
    <p>Cloudflare prioritizes dogfooding our own products early and often. The Access product has given our internal DevTools team a chance to work closely with the product team as we build features that help solve use cases across our organization. We’ve deployed new features internally, gathered feedback, improved them, and then released them to our customers. For example, we’ve been able to iterate on tools for Access like the <a href="https://github.com/cloudflare/cloudflare-access-for-atlassian">Atlassian SSO</a> plugin and the <a href="https://developers.cloudflare.com/access/ssh/">SSH feature</a>, as collaborative efforts between DevTools and the Access team.</p><p>Our DevTools team wanted to take the same dogfooding approach to protect our internal APT repository with Access. We knew this would require a custom APT transport to support generating the required tokens and passing the correct headers in HTTPS requests to our internal APT repository server. We decided to build and test our own transport that both generated the necessary tokens and passed the correct headers to allow us to place our repository behind Access.</p><p>After months of internal use, we’re excited to announce that we have recently open-sourced our custom APT transport, so our customers can also secure their APT repositories by enabling authentication via Cloudflare Access.</p><p>By protecting your APT repository with <a href="https://www.cloudflare.com/products/cloudflare-access/">Cloudflare Access</a>, you can support authenticating users via Single-Sign On (SSO) providers, defining comprehensive access-control policies, and monitoring access and change logs.</p><p>Our APT transport leverages another Open Source tool we provide, <a href="https://github.com/cloudflare/cloudflared">cloudflared</a>, which enables users to connect to your Cloudflare-protected domain securely.</p>
    <div>
      <h3>Securing your APT Repository</h3>
      <a href="#securing-your-apt-repository">
        
      </a>
    </div>
    <p>To use our APT transport, you’ll need an APT repository that’s protected by Cloudflare Access. Our instructions (below) for using our transport will use apt.example.com as a hostname.</p><p>To use our APT transport with your own web-based APT repository, refer to our <a href="https://developers.cloudflare.com/access/setting-up-access/">Setting Up Access</a> guide.</p>
    <div>
      <h3>APT Transport Installation</h3>
      <a href="#apt-transport-installation">
        
      </a>
    </div>
    <p>To install from source, both tools require Go - once you <a href="https://golang.org/dl/">install Go</a>, you can install `cloudflared` and our APT transport with four commands:</p>
            <pre><code>go get github.com/cloudflare/cloudflared/cmd/cloudflared
sudo cp ${GOPATH:-~/go}/bin/cloudflared /usr/local/bin/cloudflared
go get github.com/cloudflare/apt-transport-cloudflared/cmd/cfd
sudo cp ${GOPATH:-~/go}/bin/cfd /usr/lib/apt/methods/cfd</code></pre>
            <p>The above commands should place the cloudflared executable in <i>/usr/local/bin</i> (which should be on your PATH), and the APT transport binary in the required <i>/usr/lib/apt/methods</i> directory.</p><p>To confirm cloudflared is on your path, run:</p>
            <pre><code>which cloudflared</code></pre>
            <p>The above command should return <i>/usr/local/bin/cloudflared</i>.</p><p>Now that the custom transport is installed, to start using it simply configure an APT source using the cfd:// scheme rather than https://, e.g.:</p>
            <pre><code>$ cat /etc/apt/sources.list.d/example.list 
deb [arch=amd64] cfd://apt.example.com/v2/stretch stable common</code></pre>
            <p>Next time you run `apt-get update` and `apt-get install`, a browser window will open asking you to log in via Cloudflare Access, and your package will be retrieved using the token returned by `cloudflared`.</p>
    <div>
      <h4>Fetching a GPG Key over Access</h4>
      <a href="#fetching-a-gpg-key-over-access">
        
      </a>
    </div>
    <p>Usually, private APT repositories will use <a href="https://wiki.debian.org/SecureApt">SecureApt</a> and have their own GPG public key that users must install to verify the integrity of data retrieved from that repository.</p><p>Users can also leverage cloudflared for securely downloading and installing those keys, e.g:</p>
            <pre><code>cloudflared access login https://apt.example.com
cloudflared access curl https://apt.example.com/public.gpg | sudo apt-key add -</code></pre>
            <p>The first command will open your web browser allowing you to authenticate for your domain. The second command wraps curl to download the GPG key, and hands it off to `apt-key add`.</p>
    <div>
      <h4>Cloudflare Access on "headless" servers</h4>
      <a href="#cloudflare-access-on-headless-servers">
        
      </a>
    </div>
    <p>If you’re looking to deploy APT repositories protected by Cloudflare Access to non-user-facing machines (a.k.a. “headless” servers), opening a browser does not work. The good news is since February, <a href="/give-your-automated-services-credentials-with-access-service-tokens/">Cloudflare Access supports service tokens</a> - and we’ve built support for them into our APT transport from day one.</p><p>If you’d like to use service tokens with our APT transport, it’s as simple as placing the token in a file in the correct path; because the machine already has a token, there is also no dependency on `cloudflared` for authentication. You can find details on how to set-up a service token in the APT transport <a href="https://github.com/cloudflare/apt-transport-cloudflared/blob/master/README.md">README</a>.</p>
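<p>Under the hood, service-token authentication amounts to attaching the token credentials as request headers. As a rough sketch (the header names follow the Cloudflare Access documentation; the URL and token values here are placeholders, and this is not the transport’s actual code):</p>

```go
package main

import (
	"fmt"
	"net/http"
)

// newServiceTokenRequest builds an HTTP request carrying Cloudflare
// Access service-token credentials. The header names follow the
// Cloudflare Access documentation; the URL and token values used in
// main are placeholders for illustration only.
func newServiceTokenRequest(url, clientID, clientSecret string) (*http.Request, error) {
	req, err := http.NewRequest("GET", url, nil)
	if err != nil {
		return nil, err
	}
	req.Header.Set("CF-Access-Client-Id", clientID)
	req.Header.Set("CF-Access-Client-Secret", clientSecret)
	return req, nil
}

func main() {
	req, err := newServiceTokenRequest("https://apt.example.com/dists/stable/InRelease", "my-id.access", "my-secret")
	if err != nil {
		panic(err)
	}
	fmt.Println(req.Header.Get("CF-Access-Client-Id")) // my-id.access
}
```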
    <div>
      <h2>What’s next?</h2>
      <a href="#whats-next">
        
      </a>
    </div>
    <p>As demonstrated, you can get started using our APT transport today - we’d love to hear your feedback on this!</p><p>This work came out of an internal dogfooding effort, and we’re currently experimenting with additional packaging formats and tooling. If you’re interested in seeing support for another format or tool, please reach out.</p> ]]></content:encoded>
            <category><![CDATA[Cloudflare Access]]></category>
            <category><![CDATA[Linux]]></category>
            <category><![CDATA[HTTPS]]></category>
            <category><![CDATA[Security]]></category>
            <guid isPermaLink="false">7BN18nAy8ehyHVX2OVcSlU</guid>
            <dc:creator>Ryan Djurovich</dc:creator>
        </item>
        <item>
            <title><![CDATA[Securing Certificate Issuance using Multipath Domain Control Validation]]></title>
            <link>https://blog.cloudflare.com/secure-certificate-issuance/</link>
            <pubDate>Tue, 18 Jun 2019 13:00:00 GMT</pubDate>
            <description><![CDATA[ Trust on the Internet is underpinned by the Public Key Infrastructure (PKI). PKI grants servers the ability to securely serve websites by issuing digital certificates, providing the foundation for encrypted and authentic communication.  ]]></description>
            <content:encoded><![CDATA[ <p>This blog post is part of <a href="/welcome-to-crypto-week-2019/">Crypto Week 2019</a>.</p><p>Trust on the Internet is underpinned by the Public Key Infrastructure (PKI). PKI grants servers the ability to securely serve websites by issuing digital certificates, providing the foundation for encrypted and authentic communication.</p><p>Certificates make HTTPS encryption possible by using the public key in the certificate to verify server identity. HTTPS is especially important for websites that transmit sensitive data, such as banking credentials or private messages. Thankfully, modern browsers, such as Google Chrome, flag websites not secured using HTTPS by marking them “Not secure,” allowing users to be more security conscious of the websites they visit.</p><p>This blog post introduces a new, free tool Cloudflare offers to CAs so they can further secure certificate issuance. But before we dive in too deep, let’s talk about where certificates come from.</p>
    <div>
      <h3>Certificate Authorities</h3>
      <a href="#certificate-authorities">
        
      </a>
    </div>
    <p>Certificate Authorities (CAs) are the institutions responsible for issuing certificates.</p><p>When issuing a certificate for any given domain, they use Domain Control Validation (DCV) to verify that the entity requesting a certificate for the domain is the legitimate owner of the domain. With DCV the domain owner:</p><ol><li><p>creates a DNS resource record for a domain;</p></li><li><p>uploads a document to the web server located at that domain; OR</p></li><li><p>proves ownership of the domain’s administrative email account.</p></li></ol><p>The DCV process prevents adversaries from obtaining private-key and certificate pairs for domains not owned by the requestor.  </p><p>Preventing adversaries from acquiring this pair is critical: if an incorrectly issued certificate and private-key pair wind up in an adversary’s hands, they could pose as the victim’s domain and serve sensitive HTTPS traffic. This violates our existing trust of the Internet, and compromises private data on a potentially massive scale.</p><p>For example, an adversary that tricks a CA into mis-issuing a certificate for gmail.com could then perform TLS handshakes while pretending to be Google, and exfiltrate cookies and login information to gain access to the victim’s Gmail account. The risks of certificate mis-issuance are clearly severe.</p>
    <div>
      <h3>Domain Control Validation</h3>
      <a href="#domain-control-validation">
        
      </a>
    </div>
    <p>To prevent attacks like this, CAs only issue a certificate after performing DCV. One way of validating domain ownership is through HTTP validation, done by uploading a text file to a specific HTTP endpoint on the webserver they want to secure.  Another DCV method is done using email verification, where an email with a validation code link is sent to the administrative contact for the domain.</p>
    <div>
      <h3>HTTP Validation</h3>
      <a href="#http-validation">
        
      </a>
    </div>
    <p>Suppose Alice <a href="https://www.cloudflare.com/learning/dns/how-to-buy-a-domain-name/">buys</a> the domain name aliceswonderland.com and wants to get a dedicated certificate for this domain. Alice chooses to use Let’s Encrypt as her certificate authority. First, Alice must generate her own private key and create a certificate signing request (CSR). She sends the CSR to Let’s Encrypt, but the CA won’t issue a certificate for that CSR and private key until they know Alice owns aliceswonderland.com. Alice can then choose to prove that she owns this domain through HTTP validation.</p><p>When Let’s Encrypt performs DCV over HTTP, they require Alice to place a randomly named file in the <code>/.well-known/acme-challenge</code> path for her website. The CA must retrieve the text file by sending an HTTP <code>GET</code> request to <code>http://aliceswonderland.com/.well-known/acme-challenge/&lt;random_filename&gt;</code>. An expected value must be present on this endpoint for DCV to succeed.</p><p>For HTTP validation, Alice would upload a file to <code>http://aliceswonderland.com/.well-known/acme-challenge/YnV0dHNz</code></p><p>where the body contains:</p>
            <pre><code>curl http://aliceswonderland.com/.well-known/acme-challenge/YnV0dHNz

GET /.well-known/acme-challenge/YnV0dHNz
Host: aliceswonderland.com

HTTP/1.1 200 OK
Content-Type: application/octet-stream

YnV0dHNz.TEST_CLIENT_KEY</code></pre>
            <p>The CA instructs Alice to use the Base64 token <code>YnV0dHNz</code>. <code>TEST_CLIENT_KEY</code> is an account-linked key that only the certificate requestor and the CA know. The CA uses this field combination to verify that the certificate requestor actually owns the domain. Afterwards, Alice can get her certificate for her website!</p>
    <div>
      <h3>DNS Validation</h3>
      <a href="#dns-validation">
        
      </a>
    </div>
    <p>Another way users can validate domain ownership is to add a DNS TXT record containing a verification string or <i>token</i> from the CA to their domain’s resource records. For example, here’s a domain for an enterprise validating itself to Google:</p>
            <pre><code>$ dig TXT aliceswonderland.com
aliceswonderland.com.	 28 IN TXT "google-site-verification=COanvvo4CIfihirYW6C0jGMUt2zogbE_lC6YBsfvV-U"</code></pre>
            <p>Here, Alice chooses to create a TXT DNS resource record with a specific token value. A Google CA can verify the presence of this token to validate that Alice actually owns her website.</p>
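<p>The check itself is simple: fetch the domain’s TXT records and look for the expected token. A minimal sketch in Go (in a real validator the records would come from a DNS lookup such as <code>net.LookupTXT</code>; they are passed in directly here so the example stays self-contained):</p>

```go
package main

import (
	"fmt"
	"strings"
)

// hasToken reports whether any TXT record carries the CA's expected
// verification token - the core check a CA performs during DNS
// validation. In a real validator the records would come from a DNS
// lookup (e.g. net.LookupTXT).
func hasToken(records []string, token string) bool {
	for _, record := range records {
		if strings.TrimSpace(record) == token {
			return true
		}
	}
	return false
}

func main() {
	records := []string{"google-site-verification=COanvvo4CIfihirYW6C0jGMUt2zogbE_lC6YBsfvV-U"}
	fmt.Println(hasToken(records, "google-site-verification=COanvvo4CIfihirYW6C0jGMUt2zogbE_lC6YBsfvV-U")) // true
	fmt.Println(hasToken(records, "google-site-verification=forged"))                                      // false
}
```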
    <div>
      <h3>Types of BGP Hijacking Attacks</h3>
      <a href="#types-of-bgp-hijacking-attacks">
        
      </a>
    </div>
    <p>Certificate issuance is required for servers to securely communicate with clients. This is why it’s so important that the process responsible for issuing certificates is also secure. Unfortunately, this is not always the case.</p><p>Researchers at Princeton University recently discovered that common DCV methods are vulnerable to attacks executed by network-level adversaries. If Border Gateway Protocol (BGP) is the “postal service” of the Internet responsible for delivering data through the most efficient routes, then Autonomous Systems (AS) are individual post office branches that represent an Internet network run by a single organization. Sometimes network-level adversaries advertise false routes over BGP to steal traffic, especially if that traffic contains something important, like a domain’s certificate.</p><p><a href="https://www.princeton.edu/~pmittal/publications/bgp-tls-usenix18.pdf"><i>Bamboozling Certificate Authorities with BGP</i></a> highlights five types of attacks that can be orchestrated during the DCV process to obtain a certificate for a domain the adversary does not own. After implementing these attacks, the authors were able to (ethically) obtain certificates for domains they did not own from the top five CAs: Let’s Encrypt, GoDaddy, Comodo, Symantec, and GlobalSign. But how did they do it?</p>
    <div>
      <h3>Attacking the Domain Control Validation Process</h3>
      <a href="#attacking-the-domain-control-validation-process">
        
      </a>
    </div>
    <p>There are two main approaches to attacking the DCV process with BGP hijacking:</p><ol><li><p>Sub-Prefix Attack</p></li><li><p>Equally-Specific-Prefix Attack</p></li></ol><p>These attacks create a vulnerability when an adversary sends a certificate signing request for a victim’s domain to a CA. When the CA verifies the network resources using an <code>HTTP GET</code>  request (as discussed earlier), the adversary then uses BGP attacks to hijack traffic to the victim’s domain in a way that the CA’s request is rerouted to the adversary and not the domain owner. To understand how these attacks are conducted, we first need to do a little bit of math.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/55IOsCmOIvDukAEqazUFq9/2a480249d3d70e5a1b271367f9eeb719/Domain-Control-Validation-Process_1.5x.png" />
            
            </figure><p>Every device on the Internet uses an IP (Internet Protocol) address as a numerical identifier. IPv6 addresses contain 128 bits and follow a slash notation to indicate the size of the prefix. So, in the network address <b>2001:DB8:1000::/48</b>, “<b>/48</b>” refers to how many bits the network prefix contains. This means that 80 bits remain for host addresses, for a total of 2<sup>80</sup> host addresses. The smaller the prefix number, the more host addresses remain in the network. With this knowledge, let’s jump into the attacks!</p>
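<p>The arithmetic generalizes: the number of host addresses is 2 raised to the number of bits left after the prefix. A quick sketch:</p>

```go
package main

import (
	"fmt"
	"math/big"
)

// hostAddresses returns how many addresses a prefix of the given
// length leaves for hosts: 2 raised to the number of remaining bits.
func hostAddresses(totalBits, prefixLen int) *big.Int {
	return new(big.Int).Lsh(big.NewInt(1), uint(totalBits-prefixLen))
}

func main() {
	// An IPv6 /48 leaves 128-48 = 80 host bits.
	fmt.Println(hostAddresses(128, 48)) // 1208925819614629174706176 (2^80)
	// An IPv4 /24 leaves 32-24 = 8 host bits.
	fmt.Println(hostAddresses(32, 24)) // 256
}
```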
    <div>
      <h4>Attack one: Sub-Prefix Attack</h4>
      <a href="#attack-one-sub-prefix-attack">
        
      </a>
    </div>
    <p>When BGP announces a route, routers always prefer to follow the more specific route. So if <b>2001:DB8::/32</b> and <b>2001:DB8:1000::/48</b> are advertised, the router will use the latter as it is the more specific prefix. This becomes a problem when an adversary announces a more specific prefix covering the victim’s IP address. Let’s say the IP address for our victim, leagueofentropy.com, is <b>2001:DB8:1000::1</b> and announced as <b>2001:DB8::/32</b>. If an adversary announces the prefix <b>2001:DB8:1000::/48</b>, then they will capture the victim’s traffic, launching a <i>sub-prefix hijack attack</i>.</p><p>In an IPv4 attack, such as the <a href="/bgp-leaks-and-crypto-currencies/">attack</a> during April 2018, this was /24 and /23 announcements, with the more specific /24 being announced by the nefarious entity. In IPv6, this could be a /48 and /47 announcement. In both scenarios, /24s and /48s are the smallest blocks allowed to be routed globally. In the diagram below, <b>/47</b> is Texas and <b>/48</b> is the more specific Austin, Texas. The new (but nefarious) routes overrode the existing routes for portions of the Internet. The attacker then ran a nefarious DNS server on the normal IP addresses with DNS records pointing at a new nefarious web server instead of the existing server. This attracted the traffic destined for the victim’s domain within the area the nefarious routes were being propagated. This attack succeeded because a more specific prefix is always preferred by the receiving routers.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1u8kdog1lBPYcQsjARPOkV/03c3da0661363f6fe10e589673c9aeab/1-Traditional-Equally-specific-Sub-Prefix-Attack-KC-0A-_3x.png" />
            
            </figure>
    <div>
      <h4>Attack two: Equally-Specific-Prefix Attack</h4>
      <a href="#attack-two-equally-specific-prefix-attack">
        
      </a>
    </div>
    <p>In the last attack, the adversary was able to hijack traffic by offering a more specific announcement, but what if the victim’s prefix is <b>/48</b> and a sub-prefix attack is not viable? In this case, an attacker would launch an <b>equally-specific-prefix hijack</b>, where the attacker announces the same prefix as the victim. This means that each AS chooses between the victim’s and the adversary’s announcements based on properties like path length. As a result, this attack only ever intercepts a portion of the traffic.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3BKdC4cWYQO25VBryOZey1/828259a83952227bffd27fc4b655bc94/2-Traditional-EquallyPrefix-Attack-0A-KC_3x.png" />
            
            </figure><p>There are more advanced attacks that are covered in more depth in the paper. They are fundamentally similar attacks but are more stealthy.</p><p>Once an attacker has successfully obtained a bogus certificate for a domain that they do not own, they can perform a convincing attack where they pose as the victim’s domain and are able to decrypt and intercept the victim’s TLS traffic. The ability to decrypt the TLS traffic allows the adversary to completely Monster-in-the-Middle (MITM) encrypted TLS traffic and reroute Internet traffic destined for the victim’s domain to the adversary. To increase the stealthiness of the attack, the adversary will continue to forward traffic through the victim’s domain to perform the attack in an undetected manner.</p>
    <div>
      <h3>DNS Spoofing</h3>
      <a href="#dns-spoofing">
        
      </a>
    </div>
    <p>Another way an adversary can gain control of a domain is by spoofing DNS traffic by using a source IP address that belongs to a DNS nameserver. Because anyone can modify their packets’ outbound IP addresses, an adversary can fake the IP address of any DNS nameserver involved in resolving the victim’s domain, and impersonate a nameserver when responding to a CA.</p><p>This attack is more sophisticated than simply spamming a CA with falsified DNS responses. Because each <a href="https://www.cloudflare.com/learning/dns/what-is-dns/">DNS query</a> has its own randomized query identifier and source port, a fake DNS response must match the query’s identifiers to be convincing. Because these identifiers are random, making a spoofed response with the correct identifiers is extremely difficult.</p><p>Adversaries can fragment User Datagram Protocol (UDP) DNS packets so that identifying DNS response information (like the random DNS query identifier) is delivered in one packet, while the actual answer section follows in another packet. This way, the adversary only needs to spoof the second, answer-bearing fragment of the response to a legitimate DNS query.</p><p>Say an adversary wants to get a mis-issued certificate for victim.com by forcing packet fragmentation and spoofing DNS validation. The adversary sends a DNS nameserver for victim.com an ICMP “fragmentation needed” packet with a small Maximum Transmission Unit, or maximum byte size. This gets the nameserver to start fragmenting DNS responses. When the CA sends a DNS query to a nameserver for victim.com asking for victim.com’s TXT records, the nameserver will fragment the response into the two packets described above: the first contains the query ID and source port, which the adversary cannot spoof, and the second contains the answer section, which the adversary can spoof. The adversary can continually send a spoofed answer to the CA throughout the DNS validation process, in the hopes of sliding their spoofed answer in before the CA receives the real answer from the nameserver.</p><p>In doing so, the answer section of a DNS response (the important part!) can be falsified, and an adversary can trick a CA into mis-issuing a certificate.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7LqZ3us0OcigAJXbwPyIbf/656eba59808bab008b5694eb195525c2/DNS-Spoofing_3x.png" />
            
            </figure>
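<p>A quick back-of-the-envelope count shows why blind spoofing without fragmentation is so hard (the ephemeral port range below is an assumption; the exact range varies by operating system):</p>

```go
package main

import "fmt"

func main() {
	// A blind (non-fragmenting) spoofer must guess both the 16-bit DNS
	// query ID and the randomized UDP source port of the query.
	queryIDs := 1 << 16     // 65,536 possible query identifiers
	ports := 65536 - 1024   // assumed ephemeral port range 1024-65535
	fmt.Println(queryIDs * ports) // 4227858432 combinations (~4.2 billion)
}
```

<p>Fragmentation attacks matter precisely because they sidestep this guesswork: the fragment carrying the query ID and port arrives legitimately, and only the answer-bearing fragment needs to be forged.</p>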
    <div>
      <h3>Solution</h3>
      <a href="#solution">
        
      </a>
    </div>
    <p>At first glance, one could think a Certificate Transparency log could expose a mis-issued certificate and allow a CA to quickly revoke it. CT logs, however, can take up to 24 hours to include newly issued certificates, and certificate revocation can be inconsistently followed among different browsers. We need a solution that allows CAs to proactively prevent these attacks, not retroactively address them.</p><p>We’re excited to announce that Cloudflare provides CAs a free API to leverage our global network to perform DCV from multiple vantage points around the world. This API bolsters the DCV process against BGP hijacking and off-path DNS attacks.</p><p>Given that Cloudflare runs 175+ datacenters around the world, we are in a unique position to perform DCV from multiple vantage points. Each datacenter has a unique path to DNS nameservers or HTTP endpoints, which means that successful hijacking of a BGP route can only affect a subset of DCV requests, further hampering BGP hijacks. And since we use RPKI, we actually sign and verify BGP routes.</p><p>This DCV checker additionally protects CAs against off-path DNS spoofing attacks. An additional feature that we built into the service that helps protect against off-path attackers is DNS query source IP randomization. By making the source IP unpredictable to the attacker, it becomes more challenging to spoof the second fragment of the forged DNS response to the DCV validation agent.</p><p>By comparing multiple DCV results collected over multiple paths, our DCV API makes it virtually impossible for an adversary to mislead a CA into thinking they own a domain when they actually don’t. 
CAs can use our tool to ensure that they only issue certificates to rightful domain owners.</p><p>Our multipath DCV checker consists of two services:</p><ol><li><p>DCV agents responsible for performing DCV out of a specific datacenter, and</p></li><li><p>a DCV orchestrator that handles multipath DCV requests from CAs and dispatches them to a subset of DCV agents.</p></li></ol><p>When a CA wants to ensure that DCV occurred without being intercepted, it can send a request to our API specifying the type of DCV to perform and its parameters.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6yKDxJFYuvSllzdqYBDWfx/8c7e27f099e2bf94b54df5bf1810e9f6/Mulitpath-DCV_3x.png" />
            
            </figure><p>The DCV orchestrator then forwards each request to a random subset of over 20 DCV agents in different datacenters. Each DCV agent performs the DCV request and forwards the result to the DCV orchestrator, which aggregates what each agent observed and returns it to the CA.</p><p>This approach can also be generalized to performing multipath queries over DNS records, like Certificate Authority Authorization (CAA) records. CAA records authorize CAs to issue certificates for a domain, so spoofing them to trick unauthorized CAs into issuing certificates is another attack vector that multipath observation prevents.</p><p>As we were developing our multipath checker, we were in contact with the Princeton research group that introduced the proof-of-concept (PoC) of certificate mis-issuance through BGP hijacking attacks. Prateek Mittal, coauthor of the <i>Bamboozling Certificate Authorities with BGP</i> paper, wrote:</p><blockquote><p>“Our analysis shows that domain validation from multiple vantage points significantly mitigates the impact of localized BGP attacks. We recommend that all certificate authorities adopt this approach to enhance web security. A particularly attractive feature of Cloudflare’s implementation of this defense is that Cloudflare has access to a vast number of vantage points on the Internet, which significantly enhances the robustness of domain control validation.”</p></blockquote><p>Our DCV checker follows our belief that trust on the Internet must be distributed, and vetted through third-party analysis (like that provided by Cloudflare) to ensure consistency and security. This tool joins our <a href="/introducing-certificate-transparency-and-nimbus/">pre-existing Certificate Transparency monitor</a> as a set of services CAs are welcome to use in improving the accountability of certificate issuance.</p>
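<p>The aggregation idea boils down to a simple check: if any vantage point saw something different, the validation is suspect. A hypothetical sketch of that core comparison (this is not Cloudflare’s actual orchestrator logic or API):</p>

```go
package main

import "fmt"

// agree reports whether every vantage point observed the same DCV
// token. Because a BGP hijack typically affects only some paths,
// disagreement across vantage points signals a possible attack.
// Hypothetical sketch, not Cloudflare's orchestrator code.
func agree(observations []string) bool {
	for _, o := range observations {
		if o != observations[0] {
			return false
		}
	}
	return true
}

func main() {
	fmt.Println(agree([]string{"tok123", "tok123", "tok123"})) // true - consistent, safe to proceed
	fmt.Println(agree([]string{"tok123", "forged", "tok123"})) // false - possible hijack on one path
}
```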
    <div>
      <h3>An Opportunity to Dogfood</h3>
      <a href="#an-opportunity-to-dogfood">
        
      </a>
    </div>
    <p>Building our multipath DCV checker also allowed us to <a href="https://en.wikipedia.org/wiki/Eating_your_own_dog_food"><i>dogfood</i></a> multiple Cloudflare products.</p><p>The DCV orchestrator as a simple fetcher and aggregator was a fantastic candidate for <a href="https://developers.cloudflare.com/workers/">Cloudflare Workers</a>. We <a href="/generating-documentation-for-typescript-projects/">implemented the orchestrator in TypeScript</a> using this post as a guide, and created a typed, reliable orchestrator service that was easy to deploy and iterate on. Hooray that we don’t have to maintain our own <code>dcv-orchestrator</code>  server!</p><p>We use <a href="https://developers.cloudflare.com/argo-tunnel/">Argo Tunnel</a> to allow Cloudflare Workers to contact DCV agents. Argo Tunnel allows us to easily and securely expose our DCV agents to the Workers environment. Since Cloudflare has approximately 175 datacenters running DCV agents, we expose many services through Argo Tunnel, and have had the opportunity to load test Argo Tunnel as a power user with a wide variety of origins. Argo Tunnel readily handled this influx of new origins!</p>
    <div>
      <h3>Getting Access to the Multipath DCV Checker</h3>
      <a href="#getting-access-to-the-multipath-dcv-checker">
        
      </a>
    </div>
    <p>If you or your organization is interested in trying our DCV checker, email <a href="mailto:dcv@cloudflare.com">dcv@cloudflare.com</a> and let us know! We’d love to hear more about how multipath querying and validation bolsters the security of your certificate issuance.</p><p>As a new class of BGP and IP spoofing attacks threatens to undermine PKI fundamentals, it’s important that website owners advocate for multipath validation when they are issued certificates. We encourage all CAs to use multipath validation, whether it is Cloudflare’s or their own. Jacob Hoffman-Andrews, Tech Lead, Let’s Encrypt, wrote:</p><blockquote><p>“BGP hijacking is one of the big challenges the web PKI still needs to solve, and we think multipath validation can be part of the solution. We’re testing out our own implementation and we encourage other CAs to pursue multipath as well.”</p></blockquote><p>Hopefully in the future, website owners will look at multipath validation support when selecting a CA.</p> ]]></content:encoded>
            <category><![CDATA[Crypto Week]]></category>
            <category><![CDATA[Cryptography]]></category>
            <category><![CDATA[DNS]]></category>
            <category><![CDATA[BGP]]></category>
            <category><![CDATA[TLS]]></category>
            <category><![CDATA[CFSSL]]></category>
            <category><![CDATA[HTTPS]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[Speed & Reliability]]></category>
            <guid isPermaLink="false">142PyPkCaDGbaxHJsIruoK</guid>
            <dc:creator>Dina Kozlov</dc:creator>
            <dc:creator>Gabbi Fisher</dc:creator>
        </item>
        <item>
            <title><![CDATA[Welcome to Crypto Week 2019]]></title>
            <link>https://blog.cloudflare.com/welcome-to-crypto-week-2019/</link>
            <pubDate>Sun, 16 Jun 2019 17:07:57 GMT</pubDate>
            <description><![CDATA[ The Internet is an extraordinarily complex and evolving ecosystem. Its constituent protocols range from the ancient and archaic (hello FTP) to the modern and sleek (meet WireGuard), with a fair bit of everything in between.  ]]></description>
            <content:encoded><![CDATA[ 
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/15NFCZdp2cpRHNfys7tpAq/15487537909975789f358f467d314eb0/image21.png" />
            
            </figure><p>The Internet is an extraordinarily complex and evolving ecosystem. Its constituent protocols range from the ancient and archaic (hello <a href="https://developers.cloudflare.com/spectrum/getting-started/ftp/">FTP</a>) to the modern and sleek (meet <a href="/boringtun-userspace-wireguard-rust/">WireGuard</a>), with a fair bit of everything in between. This evolution is ongoing, and as one of the <a href="https://bgp.he.net/report/exchanges#_participants">most connected</a> networks on the Internet, Cloudflare has a duty to be a good steward of this ecosystem. We take this responsibility to heart: Cloudflare’s mission is to help build a better Internet. In this spirit, we are very proud to announce Crypto Week 2019.</p><p>Every day this week we’ll announce a new project or service that uses modern cryptography to build a more secure, trustworthy Internet. Everything we release this week will be free and immediately useful. This blog is a fun exploration of the themes of the week.</p><ul><li><p>Monday: <a href="/league-of-entropy/"><b>The League of Entropy</b></a><b>, </b><a href="/inside-the-entropy/"><b>Inside the Entropy</b></a></p></li><li><p>Tuesday: <a href="/secure-certificate-issuance/"><b>Securing Certificate Issuance using Multipath Domain Control Validation</b></a></p></li><li><p>Wednesday: <a href="/cloudflare-ethereum-gateway/"><b>Cloudflare's Ethereum Gateway</b></a><b>, </b><a href="/continuing-to-improve-our-ipfs-gateway/"><b>Continuing to Improve our IPFS Gateway</b></a></p></li><li><p>Thursday: <a href="/the-quantum-menace/"><b>The Quantum Menace</b></a>, <a href="/towards-post-quantum-cryptography-in-tls/"><b>Towards Post-Quantum Cryptography in TLS</b></a>, <a href="/introducing-circl/"><b>Introducing CIRCL: An Advanced Cryptographic Library</b></a></p></li><li><p>Friday: <a href="/secure-time/"><b>Introducing time.cloudflare.com</b></a></p></li></ul>
    <div>
      <h3>The Internet of the Future</h3>
      <a href="#the-internet-of-the-future">
        
      </a>
    </div>
    <p>Many pieces of the Internet in use today were designed in a different era with different assumptions. The Internet’s success is based on strong foundations that support constant reassessment and improvement. Sometimes these improvements require deploying new protocols.</p><p>Performing an upgrade on a system as large and decentralized as the Internet can’t be done by decree:</p><ul><li><p>There are too many economic, cultural, political, and technological factors at play.</p></li><li><p>Changes must be compatible with existing systems and protocols to even be considered for adoption.</p></li><li><p>To gain traction, new protocols must provide tangible improvements for users. Nobody wants to install an update that doesn’t improve their experience!</p></li></ul><p><b>The last time the Internet had a complete reboot and upgrade</b> was during <a href="https://www.internetsociety.org/blog/2016/09/final-report-on-tcpip-migration-in-1983/">TCP/IP flag day</a> <b>in 1983</b>. Back then, the Internet (called ARPANET) had fewer than ten thousand hosts! To have an Internet-wide flag day today to switch over to a core new protocol is inconceivable; the scale and diversity of the components involved are far too great. Too much would break. It’s challenging enough to deprecate <a href="https://dnsflagday.net/2019/">outmoded functionality</a>. In some ways, the open Internet is a victim of its own success. The bigger a system grows and the longer it stays the same, the <a href="/why-tls-1-3-isnt-in-browsers-yet/">harder it is to change</a>. The Internet is like a massive barge: it takes forever to steer in a different direction and it’s carrying a lot of garbage.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5qyclW7jXghND7AAOoygOv/483e2764aa861229b7487235355208b0/image16.jpg" />
            
            </figure><p>ARPANET, 1983 (<a href="https://www.computerhistory.org/internethistory/1980s/">Computer History Museum</a>)</p><p>As you would expect, many of the warts of the early Internet still remain. Both academic security researchers and real-life adversaries are still finding and exploiting vulnerabilities in the system. Many vulnerabilities are due to the fact that most of the protocols in use on the Internet have a weak notion of trust inherited from the early days. With 50 hosts online, it’s relatively easy to trust everyone, but in a world-scale system, that trust breaks down in fascinating ways. The primary tool to scale trust is cryptography, which helps provide some measure of accountability, though it has its own complexities.</p><p>In an ideal world, the Internet would provide a trustworthy substrate for human communication and commerce. Some people naïvely assume that this is the natural direction the evolution of the Internet will follow. However, constant improvement is not a given. 
<b>It’s possible that the Internet of the future will actually be</b> <b><i>worse</i></b> <b>than the Internet today: less open, less secure, less private, less</b> <b><i>trustworthy</i></b><b>.</b> There are strong incentives to weaken the Internet on a fundamental level by <a href="https://www.ispreview.co.uk/index.php/2019/04/google-uk-isps-and-gov-battle-over-encrypted-dns-and-censorship.html">Governments</a>, by businesses <a href="https://www.theatlantic.com/technology/archive/2017/03/encryption-wont-stop-your-internet-provider-from-spying-on-you/521208/">such as ISPs</a>, and even by the <a href="https://www.cyberscoop.com/tls-1-3-weakness-financial-industry-ietf/">financial institutions</a> entrusted with our personal data.</p><p>In a system with as many stakeholders as the Internet, <b>real change requires principled commitment from all invested parties.</b> At Cloudflare, we believe everyone is entitled to an Internet built on a solid foundation of trust. <b>Crypto Week</b> is our way of helping nudge the Internet’s evolution in a more trust-oriented direction. Each announcement this week helps bring the Internet of the future to the present in a tangible way.</p>
    <div>
      <h3>Ongoing Internet Upgrades</h3>
      <a href="#ongoing-internet-upgrades">
        
      </a>
    </div>
    <p>Before we explore the Internet of the future, let’s explore some of the previous and ongoing attempts to upgrade the Internet’s fundamental protocols.</p>
    <div>
      <h4>Routing Security</h4>
      <a href="#routing-security">
        
      </a>
    </div>
    <p>As we highlighted in <a href="/crypto-week-2018/">last year’s Crypto Week</a>, <b>one of the weak links on the Internet is routing</b>. Not all networks are directly connected.</p><p>To send data from one place to another, <b>you might have to rely on intermediary networks to pass your data along.</b> A packet sent from one host to another <b>may have to be passed through up to a dozen of these intermediary networks.</b> <i>No single network knows the full path the data will have to take to get to its destination; it only knows which network to pass it to next.</i> <b>The protocol that determines how packets are routed is called the</b> <a href="https://www.cloudflare.com/learning/security/glossary/what-is-bgp/"><b>Border Gateway Protocol (BGP)</b></a><b>.</b> Generally speaking, networks use BGP to announce to each other which addresses they know how to route packets for and (depending on a set of complex rules) these networks share what they learn with their neighbors.</p><p>Unfortunately, <b>BGP is completely insecure:</b></p><ul><li><p><b>Any network can announce any set of addresses to any other network,</b> even addresses they don’t control. This leads to a phenomenon called <i>BGP hijacking</i>, where networks are tricked into sending data to the wrong network.</p></li><li><p><b>A BGP hijack</b> is most often caused by accidental misconfiguration, but <b>can also be the result of malice on the network operator’s part</b>.</p></li><li><p><b>During a BGP hijack, a network inappropriately announces a set of addresses to other networks</b>, which results in packets destined for the announced addresses being routed through the illegitimate network.</p></li></ul>
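The preference that makes BGP hijacking so effective is easy to see in miniature: routers pick the most specific matching prefix, so an unauthorized, more specific announcement wins automatically. A small sketch using made-up AS numbers and documentation prefixes:

```python
import ipaddress

def best_route(table, destination):
    """Pick the route whose prefix is most specific (longest match) —
    this is how routers choose among competing BGP announcements."""
    dest = ipaddress.ip_address(destination)
    candidates = [(net, asn) for net, asn in table if dest in net]
    return max(candidates, key=lambda r: r[0].prefixlen)[1]

# The legitimate operator (AS64500) announces 198.51.100.0/22.
table = [(ipaddress.ip_network("198.51.100.0/22"), "AS64500")]
print(best_route(table, "198.51.100.10"))  # AS64500

# A hijacker (AS64666) announces a more specific /24 covering the same
# space; with no origin validation, routers prefer it automatically.
table.append((ipaddress.ip_network("198.51.100.0/24"), "AS64666"))
print(best_route(table, "198.51.100.10"))  # AS64666
```

Nothing in the protocol itself asks whether AS64666 is entitled to that prefix — which is exactly the gap RPKI addresses.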
    <div>
      <h4>Understanding the risk</h4>
      <a href="#understanding-the-risk">
        
      </a>
    </div>
    <p>If the packets represent unencrypted data, this can be a big problem as it <b>allows the hijacker to read or even change the data:</b></p><ul><li><p>In 2018, a rogue network <a href="/bgp-leaks-and-crypto-currencies/">hijacked the addresses of a service called MyEtherWallet</a>; financial transactions were routed through the attacker’s network, which modified them, <b>resulting in the theft of over a hundred thousand dollars’ worth of cryptocurrency.</b></p></li></ul>
    <div>
      <h4>Mitigating the risk</h4>
      <a href="#mitigating-the-risk">
        
      </a>
    </div>
    <p>The <a href="/tag/rpki/">Resource Public Key Infrastructure (RPKI)</a> system helps bring some trust to BGP by <b>enabling networks to utilize cryptography to digitally sign network routes with certificates, making BGP hijacking much more difficult.</b></p><ul><li><p>This enables participants of the network to gain assurances about the authenticity of route advertisements. <a href="/introducing-certificate-transparency-and-nimbus/">Certificate Transparency</a> (CT) is a tool that enables additional trust for certificate-based systems. Cloudflare operates the <a href="https://ct.cloudflare.com/logs/cirrus">Cirrus CT log</a> to support RPKI.</p></li></ul><p>Since we announced our support of RPKI last year, routing security has made big strides. More routes are signed, more networks validate RPKI, and the <a href="https://github.com/cloudflare/cfrpki">software ecosystem has matured</a>, but this work is not complete. Most networks are still vulnerable to BGP hijacking. For example, <a href="https://www.cnet.com/news/how-pakistan-knocked-youtube-offline-and-how-to-make-sure-it-never-happens-again/">Pakistan knocked YouTube offline with a BGP hijack</a> back in 2008, and could likely do the same today. Adoption here is driven less by providing a benefit to users than by reducing systemic risk, which is not the strongest motivating factor for adopting a complex new technology. Full routing security on the Internet could take decades.</p>
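RPKI's core check can be sketched as route origin validation against a signed Route Origin Authorization (ROA): an announcement is valid only if it originates from the authorized ASN and is no more specific than the ROA allows. The ROA data here is hypothetical:

```python
import ipaddress

def rpki_validate(roas, prefix, origin_asn):
    """Classify a BGP announcement against signed ROAs:
    'valid', 'invalid', or 'unknown' (no covering ROA)."""
    net = ipaddress.ip_network(prefix)
    covered = False
    for roa_prefix, max_len, roa_asn in roas:
        roa_net = ipaddress.ip_network(roa_prefix)
        if net.subnet_of(roa_net):
            covered = True
            # Both the origin ASN and the prefix length must be authorized.
            if origin_asn == roa_asn and net.prefixlen <= max_len:
                return "valid"
    return "invalid" if covered else "unknown"

# Hypothetical ROA: AS64500 may originate 198.51.100.0/22, down to /23.
roas = [("198.51.100.0/22", 23, "AS64500")]
print(rpki_validate(roas, "198.51.100.0/22", "AS64500"))  # valid
print(rpki_validate(roas, "198.51.100.0/24", "AS64666"))  # invalid
```

A router performing route origin validation drops or deprefers the "invalid" announcement, which is what blunts the more-specific-prefix hijack.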
    <div>
      <h3>DNS Security</h3>
      <a href="#dns-security">
        
      </a>
    </div>
    <p>The <a href="https://www.cloudflare.com/learning/dns/what-is-dns/">Domain Name System (DNS)</a> is the phone book of the Internet. Or, for anyone under 25 who doesn’t remember phone books, it’s the system that takes hostnames (like cloudflare.com or facebook.com) and returns the Internet address where that host can be found. For example, as of this publication, <a href="http://www.cloudflare.com">www.cloudflare.com</a> is 104.17.209.9 and 104.17.210.9 (IPv4) and 2606:4700::c629:d7a2, 2606:4700::c629:d6a2 (IPv6). Like BGP, <b>DNS is completely insecure. Queries and responses sent unencrypted over the Internet are modifiable by anyone on the path.</b></p><p>There are many ongoing attempts to add security to DNS, such as:</p><ul><li><p><a href="https://www.cloudflare.com/learning/dns/dnssec/ecdsa-and-dnssec/">DNSSEC</a> that <b>adds a chain of digital signatures to DNS responses</b></p></li><li><p>DoT/DoH that <b>wraps DNS queries in the TLS encryption protocol</b> (more on that later)</p></li></ul><p>Both technologies are slowly gaining adoption, but have a long way to go.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1dE9CBbFXOXM7eXg22tDnr/d5aac38c166f0b58eccaddaf77cf0c8d/DNSSEC-adoption-over-time-1.png" />
            
            </figure><p>DNSSEC-signed responses served by Cloudflare</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/186wxeg2yS6VN7fRqMeBS6/cd8261cbee791770fdbdde5b8e856974/DoT_DoH.png" />
            
            </figure><p>Cloudflare’s 1.1.1.1 resolver queries are already over 5% DoT/DoH</p><p>Just like RPKI, <b>securing DNS comes with a performance cost,</b> making it less attractive to users. However,</p><ul><li><p><b>Services like 1.1.1.1 provide</b> <a href="https://www.dnsperf.com/dns-provider/cloudflare"><b>extremely fast DNS</b></a>, which means that for many users, <a href="https://blog.mozilla.org/futurereleases/2019/04/02/dns-over-https-doh-update-recent-testing-results-and-next-steps/">encrypted DNS is faster than the unencrypted DNS</a> from their ISP.</p></li><li><p>This <b>performance improvement makes it appealing for users</b> of privacy-conscious applications, like Firefox and Cloudflare’s 1.1.1.1 app, to adopt secure DNS.</p></li></ul>
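To see why unencrypted DNS is so easy to observe and modify, it helps to look at the wire format: a classic query is just a few plaintext bytes, with every label of the hostname readable by anyone on the path. A minimal sketch of the RFC 1035 message format:

```python
import struct

def build_dns_query(hostname, qid=0x1234):
    """Assemble a minimal DNS query for an A record (RFC 1035 wire format)."""
    # Header: ID, flags (RD=1), 1 question, 0 answer/authority/additional.
    header = struct.pack(">HHHHHH", qid, 0x0100, 1, 0, 0, 0)
    # QNAME: each label prefixed by its length, terminated by a zero byte.
    qname = b"".join(bytes([len(label)]) + label.encode()
                     for label in hostname.split(".")) + b"\x00"
    question = qname + struct.pack(">HH", 1, 1)  # QTYPE=A, QCLASS=IN
    return header + question

packet = build_dns_query("blog.cloudflare.com")
# Over classic UDP port 53 these bytes travel unencrypted, so every label
# of the queried name is visible to any on-path observer:
print(b"cloudflare" in packet)  # True
```

DoT and DoH leave this message format alone and simply wrap it in TLS, so an on-path observer sees only ciphertext; DNSSEC instead leaves the transport alone and signs the response data.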
    <div>
      <h3>The Web</h3>
      <a href="#the-web">
        
      </a>
    </div>
    <p><b>Transport Layer Security (TLS)</b> is a cryptographic protocol that gives two parties the ability to communicate over an encrypted and authenticated channel. <b>TLS protects communications from eavesdroppers even in the event of a BGP hijack.</b> TLS is what puts the “S” in <b>HTTPS</b>. TLS protects web browsing against multiple types of network adversaries.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6SY5pZINknbDszMPvOf77c/84fb41510e25717243e5ff06d631eeb3/past-connection-1.png" />
            
            </figure><p>Requests hop from network to network over the Internet</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2SrnXAdMqSONKHaYJVarm7/5b3a59fd2159f3d19704cf267dc30cc0/MITM-past-copy-2.png" />
            
            </figure><p>For unauthenticated protocols, an attacker on the path can impersonate the server</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/H8ticQEIC5ebrX7WXsmPW/23e71675bd585233362bd62d9f6840e4/BGP-hijack-past-1.png" />
            
            </figure><p>Attackers can use BGP hijacking to change the path so that communication can be intercepted</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4gg6sYN8sqXSKeTPETI4b0/347ddfea0a9de130aa69726f1dd870da/PKI-validated-connectio-1.png" />
            
            </figure><p>Authenticated protocols are protected from interception attacks</p><p>The adoption of TLS on the web is partially driven by the fact that:</p><ul><li><p><b>It’s easy and free for websites to get an authentication certificate</b> (via <a href="https://letsencrypt.org/">Let’s Encrypt</a>, <a href="/introducing-universal-ssl/">Universal SSL</a>, etc.)</p></li><li><p>Browsers make TLS adoption appealing to website operators by <b>only supporting new web features such as HTTP/2 over HTTPS.</b></p></li></ul><p>This has led to the rapid adoption of HTTPS over the last five years.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/78UVeYpDvOgQp0CQIyb20R/97d29db95d234771da008b5f83dfc7dc/image12.jpg" />
            
            </figure><p>HTTPS adoption curve (<a href="https://transparencyreport.google.com/https/overview">from Google Chrome</a>)</p><p>To further that adoption, TLS recently got an upgrade in TLS 1.3, <b>making it faster </b><i><b>and</b></i><b> more secure (a combination we love)</b>. It’s taking over the Internet!</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1NFORFUn1FFn10RTJ5HUCS/a21ff9ea0eaa41916107fa9522b13f38/tls.13-adoption-1.png" />
            
            </figure><p>TLS 1.3 adoption over the last 12 months (from Cloudflare's perspective)</p><p>Despite this fantastic progress in the adoption of security for routing, DNS, and the web, <b>there are still gaps in the trust model of the Internet.</b> There are other things needed to help build the Internet of the future. To find and identify these gaps, we lean on research experts.</p>
    <div>
      <h3>Research Farm to Table</h3>
      <a href="#research-farm-to-table">
        
      </a>
    </div>
    <p>Cryptographic security on the Internet is a hot topic, and many flaws and issues have recently been pointed out in academic journals. Researchers often <b>study the vulnerabilities of the past and ask:</b></p><ul><li><p>What other critical components of the Internet have the same flaws?</p></li><li><p>What underlying assumptions can subvert trust in these existing systems?</p></li></ul><p>The answers to these questions help us decide what to tackle next. Some recent research topics we’ve learned about include:</p><ul><li><p>Quantum Computing</p></li><li><p>Attacks on Time Synchronization</p></li><li><p>DNS attacks affecting Certificate issuance</p></li><li><p>Scaling distributed trust</p></li></ul><p>Cloudflare keeps abreast of these developments and we do what we can to bring these new ideas to the Internet at large. In this respect, we’re truly standing on the shoulders of giants.</p>
    <div>
      <h3>Future-proofing Internet Cryptography</h3>
      <a href="#future-proofing-internet-cryptography">
        
      </a>
    </div>
    <p>The new protocols we are currently deploying (RPKI, DNSSEC, DoT/DoH, TLS 1.3) rely on cryptographic algorithms first published in the 1970s and 1980s.</p><ul><li><p>The security of these algorithms is based on hard mathematical problems in the field of number theory, such as <a href="https://en.wikipedia.org/wiki/RSA_(cryptosystem)">factoring</a> and the <a href="/a-relatively-easy-to-understand-primer-on-elliptic-curve-cryptography/">elliptic curve discrete logarithm</a> problem.</p></li><li><p>If you can solve the hard problem, you can crack the code. <b>Using a bigger key makes the problem harder,</b> making it more difficult to break, <b>but also slows performance.</b></p></li></ul><p>Modern Internet protocols typically pick keys large enough to make it infeasible to break with <a href="https://whatis.techtarget.com/definition/classical-computing">classical computers</a>, but no larger. <b>The sweet spot is around 128 bits of security, meaning a computer has to do approximately 2</b>¹²⁸ <b>operations to break it.</b></p><p><a href="https://eprint.iacr.org/2013/635.pdf">Arjen Lenstra and others</a> created a useful measure of security levels by <b>comparing the amount of energy it takes to break a key to the amount of water you can boil</b> using that much energy. You can think of this as the electric bill you’d get if you run a computer long enough to crack the key.</p><ul><li><p><b>35-bit security is “Teaspoon security”</b> – It takes about the same amount of energy to break a 35-bit key as it does to boil a teaspoon of water (pretty easy).</p></li></ul>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2iNGrNB4yH2iwZY14nQyWB/7bdf9a8e1874fe0444f532823e22fcd3/image20.png" />
            
            </figure><ul><li><p><b>65 bits gets you up to “Pool security” –</b> The energy needed to boil the average amount of water in a swimming pool.</p></li></ul>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1xzgnzDf3dwHcNWdTfAyeC/de62077e9a936e937a78c2dbc4874ece/image8.png" />
            
            </figure><ul><li><p><b>105-bit security is “Sea Security”</b> – The energy needed to boil the Mediterranean Sea.</p></li></ul>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5CYdmr4aGJ67Wq0j9t3iXR/2df460cc061c0c747ab66be9a866beed/8reIkbszxaKMxOsDDEzOB4ljqnVtQdJBQsYEz-uL-AZnNL0jUKSd4CbSAz-yS9tvpi_ki1JoYZ_-ZktMSbqRtDSVFMjHvsyBtgmc2rPuiDr9b-Fj6DvEJvLF7tWP.png" />
            
            </figure><ul><li><p><b>114-bit security is “Global Security” –</b> The energy needed to boil all water on Earth.</p></li></ul>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/58XtfEmY4rpTkx7Y71AnDW/b1b31eaf34b318cc83bdcad3c20d85ff/image14.png" />
            
            </figure><ul><li><p><b>128-bit security is safely beyond that</b> <b>of Global Security</b> – Anything larger is excessive.</p></li><li><p><b>256-bit security corresponds to “Universal Security”</b> – The estimated mass-energy of the observable universe. So, if you ever hear someone suggest 256-bit AES, you know they mean business.</p></li></ul>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5oWF3OLLr5WVWIga7mMGf8/d21426fe5df41b9c85bc6b8eca2e4331/image18.png" />
            
            </figure>
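Since each extra bit of security doubles the attack energy, the water volumes in the analogy above should roughly double per bit as well. A quick sanity check with ballpark volumes (our rough figures, not the paper's exact data) shows the scale is consistent to within about an order of magnitude:

```python
# Rough volumes in cubic metres — ballpark estimates, not Lenstra's figures.
TEASPOON = 5e-6
LEVELS = {
    65: 2.5e3,     # large swimming pool
    105: 3.75e15,  # Mediterranean Sea
    114: 1.4e18,   # all water on Earth
}

def predicted_volume(bits, base_bits=35, base_volume=TEASPOON):
    """Each extra bit doubles the attack energy, so the boilable water
    volume should also double per bit, starting from the 35-bit teaspoon."""
    return base_volume * 2 ** (bits - base_bits)

for bits, actual in LEVELS.items():
    ratio = predicted_volume(bits) / actual
    # The doubling rule lands within roughly one order of magnitude:
    print(f"{bits}-bit: predicted/actual = {ratio:.1f}")
```

That the same doubling rule stretches from a teaspoon to the planet's oceans in just 79 bits is the point of the analogy: exponential growth makes 128-bit keys comfortably out of reach for classical attackers.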
    <div>
      <h3>Post-Quantum of Solace</h3>
      <a href="#post-quantum-of-solace">
        
      </a>
    </div>
    <p>As far as we know, <b>the cryptographic algorithms we use today are functionally uncrackable</b> by any known technique running on classical computers. <b>Quantum computers change this calculus.</b> Instead of transistors and bits, a quantum computer uses the effects of <a href="https://en.wikipedia.org/wiki/Quantum_entanglement">quantum mechanics</a> to perform calculations that just aren’t possible with classical computers. As you can imagine, quantum computers are very difficult to build. However, despite large-scale quantum computers not existing quite yet, computer scientists have already developed algorithms that can only run efficiently on quantum computers. Surprisingly, it turns out that <b>with a sufficiently powerful quantum computer, most of the hard mathematical problems we rely on for Internet security become easy!</b></p><p>Although there are still <a href="https://www.quantamagazine.org/gil-kalais-argument-against-quantum-computers-20180207/">quantum-skeptics</a> out there, <a href="http://fortune.com/2018/12/15/quantum-computer-security-encryption/">some experts</a> <b>estimate that within 15–30 years these large quantum computers will exist, which poses a risk to every security protocol online.</b> Progress is moving quickly; every few months a more powerful quantum computer <a href="https://en.wikipedia.org/wiki/Timeline_of_quantum_computing">is announced</a>.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/281v5SkqTn89jPkEnFjkPE/587423b66380be1051c67f2f17263e69/image1-2.png" />
            
            </figure><p>Luckily, there are cryptography algorithms that rely on different hard math problems that seem to be resistant to attack from quantum computers. These math problems form the basis of so-called <i>quantum-resistant</i> (or <i>post-quantum</i>) cryptography algorithms that can run on classical computers. These algorithms can be used as substitutes for most of our current quantum-vulnerable algorithms.</p><ul><li><p>Some quantum-resistant algorithms (such as <a href="https://en.wikipedia.org/wiki/McEliece_cryptosystem">McEliece</a> and <a href="https://en.wikipedia.org/wiki/Lamport_signature">Lamport Signatures</a>) were invented decades ago, but there’s a reason they aren’t in common use: they <b>lack some of the nice properties of the algorithms we’re currently using, such as key size and efficiency.</b></p></li><li><p>Some quantum-resistant algorithms <b>require much larger keys to provide 128-bit security</b></p></li><li><p>Some are very <b>CPU intensive</b>,</p></li><li><p>And some just <b>haven’t been studied enough to know if they’re secure.</b></p></li></ul><p>It is possible to swap our current set of quantum-vulnerable algorithms with new quantum-resistant algorithms, but it’s a daunting engineering task. With widely deployed <a href="https://en.wikipedia.org/wiki/IPsec">protocols</a>, it is hard to make the transition from something fast and small to something slower, bigger or more complicated without providing concrete user benefits. <b>When exploring new quantum-resistant algorithms, minimizing user impact is of utmost importance</b> to encourage adoption. 
This is a big deal, because almost all the protocols we use to protect the Internet are vulnerable to quantum computers.</p><p>Cryptography-breaking quantum computing is still in the distant future, but we must start the transition to ensure that today’s secure communications are safe from tomorrow’s quantum-powered onlookers; however, that’s not the most <i>timely</i> problem with the Internet. We haven’t addressed that...yet.</p>
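The trade-offs above are easy to see in the oldest quantum-resistant scheme mentioned, Lamport signatures: security rests only on a hash function, but keys are kilobytes in size and each key can sign exactly one message. A toy sketch, not production code:

```python
import hashlib
import secrets

H = lambda b: hashlib.sha256(b).digest()

def lamport_keygen(bits=256):
    """One secret pair per message-digest bit; the public key is their
    hashes. Security rests only on the hash function (quantum-resistant),
    but each key may sign exactly one message, and keys are large."""
    sk = [(secrets.token_bytes(32), secrets.token_bytes(32)) for _ in range(bits)]
    pk = [(H(a), H(b)) for a, b in sk]
    return sk, pk

def lamport_sign(message, sk):
    # Reveal the secret value matching each bit of the message digest.
    digest = int.from_bytes(H(message), "big")
    return [sk[i][(digest >> i) & 1] for i in range(len(sk))]

def lamport_verify(message, sig, pk):
    digest = int.from_bytes(H(message), "big")
    return all(H(sig[i]) == pk[i][(digest >> i) & 1] for i in range(len(pk)))

sk, pk = lamport_keygen()
sig = lamport_sign(b"crypto week 2019", sk)
print(lamport_verify(b"crypto week 2019", sig, pk))  # True
```

Note the cost: this public key alone is 16 KB (512 hashes of 32 bytes), versus 32 bytes for a modern elliptic-curve key — a concrete example of why swapping in quantum-resistant algorithms is not a drop-in change.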
    <div>
      <h3>Attacking time</h3>
      <a href="#attacking-time">
        
      </a>
    </div>
    <p>Just like DNS, BGP, and HTTP, <b>the Network Time Protocol (NTP) is fundamental to how the Internet works</b>. And like these other protocols, it is <b>completely insecure</b>.</p><ul><li><p>Last year, <b>Cloudflare introduced</b> <a href="/roughtime/"><b>Roughtime</b></a> support as a mechanism for computers to access the current time from a trusted server in an authenticated way.</p></li><li><p>Roughtime is powerful because it <b>provides a way to distribute trust among multiple time servers</b> so that if one server attempts to lie about the time, it will be caught.</p></li></ul><p>However, Roughtime is not exactly a secure drop-in replacement for NTP.</p><ul><li><p><b>Roughtime lacks the complex mechanisms of NTP</b> that allow it to compensate for network latency and yet maintain precise time, especially if the time servers are remote. This leads to <b>imprecise time</b>.</p></li><li><p>Roughtime also <b>involves expensive cryptography that can further reduce precision</b>. This lack of precision makes Roughtime useful for browsers and other systems that need coarse time to validate certificates (most certificates are valid for 3 months or more), but some systems (such as those used for financial trading) require precision to the millisecond or below.</p></li></ul><p>With Roughtime we supported the time protocol of the future, but there are things we can do to help improve the health of security online <i>today</i>.</p>
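Roughtime's trust distribution comes from chaining: each query's nonce commits to the previous server's reply, so a server that lies about the time leaves non-repudiable evidence that anyone holding the transcript can check. A simplified sketch — real Roughtime uses Ed25519 signatures and Merkle trees, and the server stubs here are hypothetical:

```python
import hashlib
import secrets

def next_nonce(prev_reply, blind):
    """Derive the next query's nonce from the previous signed reply plus a
    random blind, committing the chain to everything seen so far."""
    return hashlib.sha512(prev_reply + blind).digest()

def query_chain(servers):
    """Query each server in turn; each reply is folded into the next nonce."""
    nonce = secrets.token_bytes(64)
    transcript = []
    for server in servers:
        reply = server(nonce)  # stand-in for a signed (time, nonce) response
        blind = secrets.token_bytes(64)
        transcript.append((nonce, reply, blind))
        nonce = next_nonce(reply, blind)
    return transcript

def verify_chain(transcript):
    """Re-derive every nonce: if any reply was swapped out after the fact,
    the chain of commitments no longer lines up."""
    for (_, reply, blind), (nxt_nonce, _, _) in zip(transcript, transcript[1:]):
        if next_nonce(reply, blind) != nxt_nonce:
            return False
    return True
```

Because a dishonest server cannot predict the blinded nonce it will be asked to sign, it cannot later deny having produced a bogus timestamp that sits in the middle of the chain.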
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/76dqfHQzRcGyA09D55f46W/35be1f73c91f7d4e128b9582638b80ab/image2-3.png" />
            
            </figure><p>Some academic researchers, including Aanchal Malhotra of Boston University, have <a href="https://www.cs.bu.edu/~goldbe/NTPattack.html">demonstrated</a> a variety of attacks against NTP, including <b>BGP hijacking and off-path User Datagram Protocol (UDP) attacks.</b></p><ul><li><p>Some of these attacks can be avoided by connecting to an NTP server that is close to you on the Internet.</p></li><li><p>However, to bring cryptographic trust to time while maintaining precision, we need something in between NTP and Roughtime.</p></li><li><p>To solve this, it’s natural to turn to the same system of trust that enabled us to patch HTTP and DNS: Web PKI.</p></li></ul>
    <div>
      <h3>Attacking the Web PKI</h3>
      <a href="#attacking-the-web-pki">
        
      </a>
    </div>
    <p>The Web PKI is similar to the RPKI, but is more widely visible since it relates to websites rather than routing tables.</p><ul><li><p>If you’ve ever clicked the lock icon on your browser’s address bar, you’ve interacted with it.</p></li><li><p>The PKI relies on a set of trusted organizations called Certificate Authorities (CAs) to issue certificates to websites and web services.</p></li><li><p>Websites use these certificates to authenticate themselves to clients as <a href="https://www.cloudflare.com/learning/ssl/transport-layer-security-tls/">part of the TLS protocol</a> in HTTPS.</p></li></ul>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/wzqfOvaCcj0TCr3ezbOY3/f87fc64402bee2de14b4c4ba5d0b93bb/pki-validated.png" />
            
            </figure><p>TLS provides encryption and integrity from the client to the server with the help of a digital certificate </p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1bOHTpPBXfi0VeYoJikyfD/cbc3e67f2294e322436a5b542bb3644e/attack-against-PKI-validated-connectio-2.png" />
            
            </figure><p>TLS connections are safe against MITM, because the client doesn’t trust the attacker’s certificate</p><p>While we were all patting ourselves on the back for moving the web to HTTPS, <a href="https://dl.acm.org/citation.cfm?id=3243790">some</a> <a href="https://www.princeton.edu/~pmittal/publications/bgp-tls-usenix18.pdf">researchers</a> managed to find and exploit <b>a weakness in the system: the process for getting HTTPS certificates.</b></p><p>Certificate Authorities (CAs) use a process called <b>domain control validation (DCV) to ensure that they only issue certificates to website owners who legitimately request them.</b></p><ul><li><p>Some CAs do this validation manually, which is secure, but <b>can’t scale to the total number of websites deployed today.</b></p></li><li><p>More progressive CAs have <b>automated this validation process, but rely on insecure methods</b> (HTTP and DNS) to validate domain ownership.</p></li></ul><p>Without ubiquitous cryptography in place (DNSSEC may never reach 100% deployment), there is <b>no completely secure way to bootstrap this system</b>. So, let’s look at how to distribute trust using other methods.</p><p><b>One tool at our disposal is the distributed nature of the Cloudflare network.</b></p><p>Cloudflare is global. We have locations all over the world connected to dozens of networks. That means we have different <i>vantage points</i>, resulting in different ways to traverse networks. This diversity can prove an advantage when dealing with BGP hijacking, since <b>an attacker would have to hijack multiple routes from multiple locations to affect all the traffic between Cloudflare and other distributed parts of the Internet.</b> The natural diversity of the network raises the cost of the attacks.</p><p>Maintaining a distributed set of connections to the Internet and using them as a quorum is a powerful way to distribute trust, with or without cryptography.</p>
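The automated-but-insecure DCV flow described above looks roughly like this — a sketch in the spirit of ACME's HTTP-01 challenge, where the well-known path and the `fetch` stub are illustrative, and the plain-HTTP fetch is exactly the step a BGP hijacker can intercept:

```python
import secrets

def issue_challenge():
    """The CA generates a random token the applicant must publish."""
    return secrets.token_urlsafe(32)

def http_dcv_check(fetch, domain, token):
    """Automated HTTP-based DCV: the CA fetches a well-known path and
    compares the published token. `fetch` stands in for a plain-HTTP GET —
    the insecure step, since a hijacker who attracts the route to the
    domain's address can answer this request themselves."""
    body = fetch(domain, f"/.well-known/validation/{token}")
    return body == token

# Simulated applicant web server that has published the token:
token = issue_challenge()
site = {f"/.well-known/validation/{token}": token}
fetch = lambda domain, path: site.get(path, "")
print(http_dcv_check(fetch, "example.com", token))  # True
```

Performed once from a single vantage point, this check is only as trustworthy as the route it travels; performed from many vantage points, a localized hijack fails the quorum.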
    <div>
      <h3>Distributed Trust</h3>
      <a href="#distributed-trust">
        
      </a>
    </div>
    <p>This idea of distributing the source of trust is powerful. Last year we announced the <b>Distributed Web Gateway</b> that</p><ul><li><p>Enables users to access content on the InterPlanetary File System (IPFS), a network structured to <b>reduce the trust placed in any single party.</b></p></li><li><p>Even if a participant of the network is compromised, <b>it can’t be used to distribute compromised content</b> because the network is content-addressed.</p></li><li><p>However, using content-based addressing is <b>not the only way to distribute trust between multiple independent parties.</b></p></li></ul><p>Another way to distribute trust is to literally <b>split authority between multiple independent parties</b>. <a href="/red-october-cloudflares-open-source-implementation-of-the-two-man-rule/">We’ve explored this topic before.</a> In the context of Internet services, this means ensuring that no single server can authenticate itself to a client on its own. For example,</p><ul><li><p>In HTTPS the server’s private key is the lynchpin of its security. Compromising the owner of the private key (by <a href="https://www.theguardian.com/world/2013/oct/03/lavabit-ladar-levison-fbi-encryption-keys-snowden">hook</a> or by <a href="https://www.symantec.com/connect/blogs/how-attackers-steal-private-keys-digital-certificates">crook</a>) <b>gives an attacker the ability to impersonate (spoof) that service</b>. This single point of failure <b>puts services at risk.</b> You can mitigate this risk by distributing the authority to authenticate the service between multiple independently-operated services.</p></li></ul>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3wuX22HpjVAbiwcwEDojxB/df21a1462febcf64f1a613ab075a104a/TLS-server-compromise-1.png" />
            
            </figure><p>TLS doesn’t protect against server compromise</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/34nZukSMSjU2QDwNr5zkhf/dd5a6024a01c4dc0c6c6153e0205d91c/future-distributed-trust-copy-2-1.png" />
            
            </figure><p>With distributed trust, multiple parties combine to protect the connection</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/29B6lr8FV35nqA8Bw9TgtL/34fb894d8d74dc083424398f1e65e80f/future-distributed-trust.png" />
            
            </figure><p>An attacker that has compromised one of the servers cannot break the security of the system‌‌</p><p>The Internet barge is old and slow, and we’ve only been able to improve it through the meticulous process of patching it piece by piece. Another option is to build new secure systems on top of this insecure foundation. IPFS is doing this, and IPFS is not alone in its design. <b>There has been more research into secure systems with decentralized trust in the last ten years than ever before</b>.</p><p>The result is radical new protocols and designs that use exotic new algorithms. These protocols do not supplant those at the core of the Internet (like TCP/IP), but instead, they sit on top of the existing Internet infrastructure, enabling new applications, much like HTTP did for the web.</p>
    <div>
      <h3>Gaining Traction</h3>
      <a href="#gaining-traction">
        
      </a>
    </div>
    <p>Some of the most innovative technical projects were considered failures because <b>they couldn’t attract users.</b> New technology has to bring tangible benefits to users to sustain it: useful functionality, content, and a decent user experience. Distributed projects, such as IPFS and others, are gaining popularity, but have not found mass adoption. This is a chicken-and-egg problem. New protocols have a high barrier to entry—<b>users have to install new software</b>—and because of the small audience, there is less incentive to create compelling content. <b>Decentralization and distributed trust are nice security features to have, but they are not products</b>. Users still need to get some benefit out of using the platform.</p><p>An example of a system to break this cycle is the web. In 1992 the web was hardly a cornucopia of awesomeness. <b>What helped drive the dominance of the web was its users.</b></p><ul><li><p>The growth of the user base meant <b>more incentive for people to build services</b>, and the availability of more services attracted more users. It was a virtuous cycle.</p></li><li><p>It’s hard for a platform to gain momentum, but once the cycle starts, a flywheel effect kicks in to help the platform grow.</p></li></ul><p>The <a href="https://www.cloudflare.com/distributed-web-gateway/">Distributed Web Gateway</a> project Cloudflare launched last year in Crypto Week is our way of exploring what happens if we try to kickstart that flywheel. By providing a secure, reliable, and fast interface from the classic web with its two billion users to the content on the distributed web, we give the fledgling ecosystem an audience.</p><ul><li><p><b>If the advantages provided by building on the distributed web are appealing to users, then the larger audience will help these services grow in popularity</b>.</p></li><li><p>This is somewhat reminiscent of how IPv6 gained adoption. 
It started as a niche technology only accessible using IPv4-to-IPv6 translation services.</p></li><li><p>IPv6 adoption has now <a href="https://www.internetsociety.org/resources/2018/state-of-ipv6-deployment-2018/">grown so much</a> that it is becoming a requirement for new services. For example, <b>Apple is</b> <a href="https://developer.apple.com/support/ipv6/"><b>requiring</b></a> <b>that all apps work in IPv6-only contexts.</b></p></li></ul><p>Eventually, as user-side implementations of distributed web technologies improve, people may move to using the distributed web natively rather than through an HTTP gateway. Or they may not! By leveraging Cloudflare’s global network to <b>give users access to new technologies based on distributed trust, we give these technologies a better chance at gaining adoption.</b></p>
    <div>
      <h3>Happy Crypto Week</h3>
      <a href="#happy-crypto-week">
        
      </a>
    </div>
    <p>At Cloudflare, we always support new technologies that help make the Internet better. Part of helping make a better Internet is scaling the systems of trust that underpin web browsing and protecting them from attack. We provide the tools to create better systems of assurance with fewer points of vulnerability. We work with academic security researchers to get a vision of the future and engineer away vulnerabilities before they can become widespread. It’s a constant journey.</p><p>Cloudflare knows that none of this is possible without the work of researchers. From award-winning researchers publishing papers in top journals to the blog posts of clever hobbyists, dedicated and curious people are moving the state of knowledge of the world forward. However, the push to publish new and novel research sometimes holds researchers back from committing enough time and resources to fully realize their ideas. Great research can be powerful on its own, but it can have an even broader impact when combined with practical applications. We relish the opportunity to stand on the shoulders of these giants and use our engineering know-how and global reach to expand on their work to help build a better Internet.</p><p>So, to all of you dedicated researchers, <b>thank you for your work!</b> Crypto Week is yours as much as ours. If you’re working on something interesting and you want help to bring the results of your research to the broader Internet, please contact us at <a>ask-research@cloudflare.com</a>. We want to help you realize your dream of making the Internet safe and trustworthy.</p><p>If you're a research-oriented <a href="https://boards.greenhouse.io/cloudflare/jobs/1346216">engineering manager</a> or student, we're also hiring in <a href="https://boards.greenhouse.io/cloudflare/jobs/1025810">London</a> and <a href="https://boards.greenhouse.io/cloudflare/jobs/608495">San Francisco</a>!</p> ]]></content:encoded>
            <category><![CDATA[Crypto Week]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[BGP]]></category>
            <category><![CDATA[Cryptography]]></category>
            <category><![CDATA[RPKI]]></category>
            <category><![CDATA[DNS]]></category>
            <category><![CDATA[HTTPS]]></category>
            <guid isPermaLink="false">2Cs84t1yRSnIXcoIIszCGj</guid>
            <dc:creator>Nick Sullivan</dc:creator>
        </item>
        <item>
            <title><![CDATA[Monsters in the Middleboxes: Introducing Two New Tools for Detecting HTTPS Interception]]></title>
            <link>https://blog.cloudflare.com/monsters-in-the-middleboxes/</link>
            <pubDate>Mon, 18 Mar 2019 17:47:50 GMT</pubDate>
            <description><![CDATA[ The practice of HTTPS interception continues to be commonplace on the Internet. This blog post discusses types of monster-in-the-middle devices and software, and how to detect them. ]]></description>
            <content:encoded><![CDATA[ <p></p><p>The practice of HTTPS interception continues to be commonplace on the Internet. HTTPS interception has encountered scrutiny, most notably in the 2017 study “<a href="https://jhalderm.com/pub/papers/interception-ndss17.pdf">The Security Impact of HTTPS Interception</a>” and the United States Computer Emergency Readiness Team (US-CERT)  <a href="https://www.us-cert.gov/ncas/alerts/TA17-075A">warning</a> that the technique weakens security. In this blog post, we provide a brief recap of HTTPS interception and introduce two new tools:</p><ol><li><p><a href="https://github.com/cloudflare/mitmengine">MITMEngine</a>, an open-source library for HTTPS interception detection, and</p></li><li><p><a href="https://malcolm.cloudflare.com/">MALCOLM</a>, a dashboard displaying metrics about HTTPS interception we observe on Cloudflare’s network.</p></li></ol><p>In a basic HTTPS connection, a browser (client) establishes a TLS connection directly to an origin server to send requests and download content. However, many connections on the Internet are not directly from a browser to the server serving the website, but instead traverse through some type of proxy or middlebox (a “monster-in-the-middle” or MITM). There are many reasons for this behavior, both malicious and benign.</p>
    <div>
      <h3>Types of HTTPS Interception, as Demonstrated by Various Monsters in the Middle</h3>
      <a href="#types-of-https-interception-as-demonstrated-by-various-monsters-in-the-middle">
        
      </a>
    </div>
    <p>One common class of HTTPS interceptor is the TLS-terminating forward proxy. (These are a subset of all forward proxies; non-TLS-terminating forward proxies forward TLS connections without any ability to inspect encrypted traffic.) A TLS-terminating forward proxy sits in front of a client in a TLS connection, transparently forwarding and possibly modifying traffic from the browser to the destination server. To do this, the proxy must terminate the TLS connection from the client, and then (hopefully) re-encrypt and forward the payload to the destination server over a new TLS connection. To allow the connection to be intercepted without a browser certificate warning appearing at the client, forward proxies often require users to install a root certificate on their machine so that the proxy can generate and present a trusted certificate for the destination to the browser. On corporate-managed devices, these root certificates are often installed by network administrators without user intervention.</p>
    <div>
      <h2>Antivirus and Corporate Proxies</h2>
      <a href="#antivirus-and-corporate-proxies">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/lQlpoDWmUQ7mvaOQjOCzR/cf1df0af814a7ba373072b727102c5dd/my-stapler-_2x.png" />
            
            </figure><p>There are legitimate reasons for a client to connect through a forward proxy, such as allowing antivirus software or a corporate proxy to inspect otherwise encrypted data entering and leaving a local network in order to detect inappropriate content, malware, and data breaches. The Blue Coat data loss prevention tools offered by Symantec are one example. In this case, HTTPS interception occurs to check if an employee is leaking sensitive information before sending the request to the intended destination.</p>
    <div>
      <h2>Malware Proxies</h2>
      <a href="#malware-proxies">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2VTM6e5HggvoJsREQ4t2BX/3670abf896ee660c9aef85f658346fff/business-sasquatch_2x.png" />
            
            </figure><p>Malicious forward proxies, however, might insert advertisements into web pages or <a href="https://www.cloudflare.com/learning/security/what-is-data-exfiltration/">exfiltrate private user information</a>. Malware like <a href="https://www.us-cert.gov/ncas/alerts/TA15-051A">Superfish</a> inserts targeted ads into encrypted traffic, which requires intercepting HTTPS traffic and modifying the content in the response given to a client.</p>
    <div>
      <h2>Leaky Proxies</h2>
      <a href="#leaky-proxies">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4sW8l4AlgtkLvFVsF0RiYe/94b9c16eb8fc3ad8bf47596c4817dbc7/blabbermouth_2x.png" />
            
            </figure><p>Any TLS-terminating forward proxy, whether it’s well-intentioned or not, also risks exposing private information and opens the door to spoofing. When a proxy root certificate is installed, Internet browsers lose the ability to validate the connection end-to-end, and must trust the proxy to maintain the security of the connection to ensure that sensitive data is protected. Some proxies re-encrypt and forward traffic to destinations using less secure TLS parameters.</p><p>Proxies can also require the installation of vendor root certificates that can be easily abused by other malicious parties. In November 2018, a line of Sennheiser wireless headphones required users to install a <a href="https://arstechnica.com/information-technology/2018/11/sennheiser-discloses-monumental-blunder-that-cripples-https-on-pcs-and-macs/">root certificate which used insecure parameters</a>. This root certificate could allow any adversary to impersonate websites and send spoofed responses to machines with this certificate, as well as observe otherwise encrypted data.</p><p>TLS-terminating forward proxies could even trust root certificates considered insecure, like Symantec’s CA. If poorly implemented, any TLS-terminating forward proxy can become a widespread attack vector, leaking private information or allowing for response spoofing.</p>
    <div>
      <h2>Reverse Proxies</h2>
      <a href="#reverse-proxies">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/10jWDwqO36EovaMh7yezP0/8cbc6e9d12d744f1e6f294944ce788a0/speedy-_2x.png" />
            
            </figure><p>Reverse proxies also sit between users and origin servers. Reverse proxies (such as Cloudflare and <a href="https://www.cloudflare.com/cloudflare-vs-akamai/">Akamai</a>) act on behalf of origin servers, caching static data to improve the speed of content delivery and offering security services such as DDoS mitigation. Critically, reverse proxies do not require special root certificates to be installed on user devices, since browsers establish connections directly to the reverse proxy to download content that is hosted at the origin server. Reverse proxies are often used by origin servers to improve the security of client HTTPS connections (for example, by enforcing strict security policies and using the <a href="/rfc-8446-aka-tls-1-3/">newest security protocols like TLS 1.3</a>). In this case, reverse proxies are intermediaries that provide better performance and security to TLS connections.</p>
    <div>
      <h2>Why Continue Examining HTTPS Interception?</h2>
      <a href="#why-continue-examining-https-interception">
        
      </a>
    </div>
    <p><a href="/understanding-the-prevalence-of-web-traffic-interception/">In a previous blog post, we argued that HTTPS interception is prevalent on the Internet</a> and that it often degrades the security of Internet connections. A server that refuses to negotiate weak cryptographic parameters should be safe from many of the risks of degraded connection security, but there are plenty of reasons why a server operator may want to know if HTTPS traffic from its clients has been intercepted.</p><p>First, detecting HTTPS interception can help a server to identify suspicious or potentially vulnerable clients connecting to its network. A server can use this knowledge to notify legitimate users that their connection security might be degraded or compromised. HTTPS interception also increases the attack surface area of the system, and presents an attractive target for attackers to gain access to sensitive connection data.</p><p>Second, the presence of content inspection systems can not only weaken the security of TLS connections, but it can hinder the <a href="/why-tls-1-3-isnt-in-browsers-yet/">adoption of new innovations and improvements to TLS</a>.  Users connecting through older middleboxes may have their connections downgraded to older versions of TLS the middleboxes still support, and may not receive the security, privacy, and performance benefits of new TLS versions, even if newer versions are supported by both the browser and the server.</p>
    <div>
      <h2>Introducing MITMEngine: Cloudflare’s HTTPS Interception Detector</h2>
      <a href="#introducing-mitmengine-cloudflares-https-interception-detector">
        
      </a>
    </div>
    <p>Many TLS client implementations can be uniquely identified by features of the Client Hello message such as the supported version, cipher suites, extensions, elliptic curves, point formats, compression, and signature algorithms. The technique introduced by “<a href="https://jhalderm.com/pub/papers/interception-ndss17.pdf">The Security Impact of HTTPS Interception</a>” is to construct TLS Client Hello <i>signatures</i> for common browser and middlebox implementations. Then, to identify HTTPS requests that have been intercepted, a server can look up the signature corresponding to the request’s HTTP User Agent, and check if the request’s Client Hello message matches the signature. A mismatch indicates either a spoofed User Agent or an intercepted HTTPS connection. The server can also compare the request’s Client Hello to those of known HTTPS interception tools to understand which interceptors are responsible for intercepting the traffic.</p><p>The <a href="https://caddyserver.com/docs/mitm-detection">Caddy Server MITM Detection</a> tool is based on these heuristics and implements support for a limited set of browser versions. However, we wanted a tool that could be easily applied to the broad set of TLS implementations that Cloudflare supports, with the following goals:</p><ul><li><p>Maintainability: It should be easy to add support for new browsers and to update existing browser signatures when browser updates are released.</p></li><li><p>Flexibility: Signatures should be able to capture a wide variety of TLS client behavior without being overly broad. 
For example, signatures should be able to account for the <a href="https://tools.ietf.org/html/draft-davidben-tls-grease-01">GREASE</a> values sent in modern versions of Chrome.</p></li><li><p>Performance: Per-request MITM detection should be cheap so that the system can be deployed at scale.</p></li></ul><p>To accomplish these goals, the Cryptography team at Cloudflare developed <a href="https://github.com/cloudflare/mitmengine">MITMEngine</a>, an open-source HTTPS interception detector. MITMEngine is a Golang library that ingests User Agents and TLS Client Hello fingerprints, then returns the likelihood of HTTPS interception and the factors used to identify interception. To learn how to use MITMEngine, check out the project on GitHub.</p><p>MITMEngine works by comparing the values in an observed TLS Client Hello to a set of known browser Client Hellos. The fields compared include:</p><ul><li><p>TLS version,</p></li><li><p>Cipher suites,</p></li><li><p>Extensions and their values,</p></li><li><p>Supported elliptic curve groups, and</p></li><li><p>Elliptic curve point formats.</p></li></ul><p>When given a pair of User Agent and observed TLS Client Hello, MITMEngine detects differences between the given Client Hello and the one expected for the presented User Agent. For example, consider the following User Agent:</p>
            <pre><code>Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko)
Chrome/47.0.2526.111 Safari/537.36</code></pre>
            <p>This User Agent corresponds to Chrome 47 running on Windows 7. The paired TLS Client Hello includes the following cipher suites, displayed below as a hex dump:</p>
            <pre><code>0000  c0 2b c0 2f 00 9e c0 0a  c0 14 00 39 c0 09 c0 13   .+./.... ...9....
0010  00 33 00 9c 00 35 00 2f  00 0a                     .3...5./ ..</code></pre>
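            <p>In the wire format, each cipher suite is a two-byte, big-endian ID. As a simplified sketch (the helper below is hypothetical and assumes the cipher suite bytes have already been extracted from the Client Hello), the dump above can be decoded like so:</p>

```go
package main

import "fmt"

// parseCipherSuites interprets a run of raw bytes from a TLS Client
// Hello as a list of two-byte, big-endian cipher suite IDs.
func parseCipherSuites(raw []byte) []uint16 {
	suites := make([]uint16, 0, len(raw)/2)
	for i := 0; i+1 < len(raw); i += 2 {
		suites = append(suites, uint16(raw[i])<<8|uint16(raw[i+1]))
	}
	return suites
}

func main() {
	// The 26 bytes from the hex dump above: 13 cipher suites.
	raw := []byte{
		0xc0, 0x2b, 0xc0, 0x2f, 0x00, 0x9e, 0xc0, 0x0a, 0xc0, 0x14,
		0x00, 0x39, 0xc0, 0x09, 0xc0, 0x13, 0x00, 0x33, 0x00, 0x9c,
		0x00, 0x35, 0x00, 0x2f, 0x00, 0x0a,
	}
	for _, s := range parseCipherSuites(raw) {
		fmt.Printf("0x%04x\n", s) // prints 0xc02b, 0xc02f, ...
	}
}
```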
            <p>These cipher suites translate to the following list (and order) of 13 ciphers:</p>
            <pre><code>TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 (0xc02b)
TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 (0xc02f)
TLS_DHE_RSA_WITH_AES_128_GCM_SHA256 (0x009e)
TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA (0xc00a)
TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA (0xc014)
TLS_DHE_RSA_WITH_AES_256_CBC_SHA (0x0039)
TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA (0xc009)
TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA (0xc013)
TLS_DHE_RSA_WITH_AES_128_CBC_SHA (0x0033)
TLS_RSA_WITH_AES_128_GCM_SHA256 (0x009c)
TLS_RSA_WITH_AES_256_CBC_SHA (0x0035)
TLS_RSA_WITH_AES_128_CBC_SHA (0x002f)
TLS_RSA_WITH_3DES_EDE_CBC_SHA (0x000a)</code></pre>
            <p>The reference TLS Client Hello cipher suites for Chrome 47 are the following:</p>
            <pre><code>0000  c0 2b c0 2f 00 9e cc 14  cc 13 c0 0a c0 14 00 39   .+./.... .......9
0010  c0 09 c0 13 00 33 00 9c  00 35 00 2f 00 0a         .....3.. .5./..</code></pre>
            <p>Looking closely, we see that the cipher suite list for the observed traffic is shorter than we expect for Chrome 47; two cipher suites have been removed, though the remaining ones appear in the same order. The two missing cipher suites are</p>
            <pre><code>TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256 (0xcc14)
TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 (0xcc13)</code></pre>
            <p>Chrome prioritizes these two ChaCha ciphers above AES-CBC ciphers, a good choice given that <a href="/padding-oracles-and-the-decline-of-cbc-mode-ciphersuites/">CBC (cipher block chaining) mode is vulnerable to padding oracle attacks</a>. It looks like the traffic we received underwent HTTPS interception, and the interceptor potentially didn't support ChaCha ciphers.</p><p>Using contextual clues such as the cipher suites used, as well as additional User Agent text, we can also detect which software was used to intercept the HTTPS connection. In this case, MITMEngine recognizes that the fingerprint observed actually matches a fingerprint collected from Sophos antivirus software, and indicates that this software is likely the cause of this interception.</p><p>We welcome contributions to MITMEngine. We are particularly interested in collecting more fingerprints of MITM software and browser TLS Client Hellos, because MITMEngine depends on these reference fingerprints to detect HTTPS interception. Contributing these fingerprints is as simple as opening <a href="https://www.wireshark.org/">Wireshark</a>, capturing a pcap file with a TLS Client Hello, and submitting the pcap file in a PR. More instructions on how to contribute can be found in the <a href="https://github.com/cloudflare/mitmengine">MITMEngine documentation</a>.</p>
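            <p>The cipher suite comparison described above can be sketched in a few lines of Go (MITMEngine itself is written in Go). This is a simplified, hypothetical illustration of diffing an observed cipher suite list against a reference fingerprint; it is not MITMEngine's actual API, and the helper name is invented:</p>

```go
package main

import "fmt"

// missingSuites returns the cipher suite IDs that appear in the
// reference fingerprint but not in the observed Client Hello.
// Missing suites, with the remainder still in reference order, are
// the kind of signal that suggests a middlebox rewrote the handshake.
func missingSuites(reference, observed []uint16) []uint16 {
	seen := make(map[uint16]bool, len(observed))
	for _, s := range observed {
		seen[s] = true
	}
	var missing []uint16
	for _, s := range reference {
		if !seen[s] {
			missing = append(missing, s)
		}
	}
	return missing
}

func main() {
	// Reference cipher suites for Chrome 47, from the hex dump above.
	reference := []uint16{
		0xc02b, 0xc02f, 0x009e, 0xcc14, 0xcc13, 0xc00a, 0xc014, 0x0039,
		0xc009, 0xc013, 0x0033, 0x009c, 0x0035, 0x002f, 0x000a,
	}
	// Cipher suites observed in the intercepted connection.
	observed := []uint16{
		0xc02b, 0xc02f, 0x009e, 0xc00a, 0xc014, 0x0039, 0xc009, 0xc013,
		0x0033, 0x009c, 0x0035, 0x002f, 0x000a,
	}
	for _, s := range missingSuites(reference, observed) {
		fmt.Printf("missing cipher suite: 0x%04x\n", s)
	}
}
```

            <p>Run against the two dumps above, this reports the two ChaCha suites (0xcc14 and 0xcc13) as missing. A real detector also compares TLS versions, extensions, supported curves, and point formats, as described earlier.</p>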
    <div>
      <h2>Observing HTTPS Interception on Cloudflare’s Network with MALCOLM</h2>
      <a href="#observing-https-interception-on-cloudflares-network-with-malcolm">
        
      </a>
    </div>
    <p>To complement MITMEngine, we also built a dashboard, <a href="https://malcolm.cloudflare.com/">MALCOLM</a>, to apply MITMEngine to a sample of Cloudflare’s overall traffic and observe HTTPS interception in the requests hitting our network. Recent MALCOLM data incorporates a fresh set of reference TLS Client Hellos, so readers will notice that the percentage of "unknown" instances of HTTPS interception has decreased from February 2019 to March 2019.</p><p>In this section, we compare HTTPS interception statistics from MALCOLM to the 2017 study “<a href="https://jhalderm.com/pub/papers/interception-ndss17.pdf">The Security Impact of HTTPS Interception</a>”. With this data, we can see the changes in HTTPS interception patterns observed by Cloudflare over the past two years.</p><p>Using MALCOLM, let’s see how HTTPS connections have been intercepted as of late. This MALCOLM data was collected between March 12 and March 13, 2019.</p><p>The 2017 study found that 10.9% of Cloudflare-bound TLS Client Hellos had been intercepted. MALCOLM shows that the rate of interception has increased substantially, to 18.6%:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/308RiVWeL9fqieIEzt7DsR/1de5978ad5319a33e0f6d670a2fbf69c/1.png" />
            
            </figure><p>This result, however, is likely inflated compared to the results of the 2017 study. The 2017 study considered all traffic that went through Cloudflare, regardless of whether it had a recognizable User Agent or not. MALCOLM only considers results with recognizable User Agents that could be identified by <a href="https://github.com/avct/uasurfer">uasurfer</a>, a Golang library for parsing User Agent strings. Indeed, when we don’t screen out TLS Client Hellos with unidentified User Agents, we see that 11.3% of requests are considered intercepted, an increase of 0.4 percentage points over the 2017 figure. Overall, the prevalence of HTTPS interception activity does not seem to have changed much over the past two years.</p><p>Next, we examine the prevalence of HTTPS interception by browser and operating system. The paper presented the following table. We’re interested in finding the most popular browsers and most frequently intercepted browsers.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3HJcWioFHJY1gSjUGbM0Fo/aa06678c9b4cac6d7c4a71b4a601c39c/2-1.png" />
            
            </figure><p>MALCOLM yields the following statistics for all traffic by browsers. MALCOLM presents mobile and desktop browsers as a single item. This can be broken into separate views for desktop and mobile using the filters on the dashboard.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6ueWD4onXq5T1fTh2ZULTt/746be2f8618fe4c3979ba6f8bc4ac61c/3.png" />
            
            </figure><p>Chrome usage has expanded substantially since 2017, while usage of Safari, IE, and Firefox has fallen somewhat (here, IE includes Edge). Examining the most frequently intercepted browsers, we see the following results:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2DoOG6gtyJKloQ6f7N2YM/a7259a93ac04c0a687cc6c9865dbefa1/NC5wbmc-.png" />
            
            </figure><p>We see above that Chrome again accounts for a larger percentage of intercepted traffic, likely owing to growth in Chrome’s general popularity. As a result, HTTPS interception rates for other browsers, like Internet Explorer, have fallen as IE is less frequently used. MALCOLM also highlights the prevalence of other browsers that have their traffic intercepted, namely UCBrowser, a browser common in China.</p><p>Now, we examine the most common operating systems observed in Cloudflare’s traffic:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/33AywIjUylFoFgSgdXG0LX/97790997880300e3cd9ed30a0101c0d1/6.png" />
            
            </figure><p>Android use has clearly increased over the past two years as smartphones become people’s primary devices for accessing the Internet. Windows also remains a common operating system.</p><p>As Android becomes more popular, the likelihood of HTTPS interception occurring on Android devices also has increased substantially:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7D4pH8O2VNOUfPZLiV6bV9/1ea61b5d21caa3d35b65ab5f0dcf33af/aW1hZ2UucG5n.png" />
            
            </figure><p>Since 2017, Android devices have overtaken Windows devices as the most frequently intercepted.</p><p>As more of the world’s Internet consumption occurs through mobile devices, it’s important to acknowledge that simply changing platforms and browsers has not affected the prevalence of HTTPS interception.</p>
    <div>
      <h2>Conclusion</h2>
      <a href="#conclusion">
        
      </a>
    </div>
    <p>Using MITMEngine and MALCOLM, we’ve been able to continuously track the state of HTTPS interception on over 10% of Internet traffic. It’s imperative that we track the status of HTTPS interception to give us foresight when deploying new security measures and detecting breaking changes in security protocols. Tracking HTTPS interception also helps us contribute to our broader mission of “helping to build a better Internet” by keeping tabs on software that possibly weakens good security practices.</p><p>Interested in exploring more HTTPS interception data? Here are a couple of next steps:</p><ol><li><p>Check out <a href="https://malcolm.cloudflare.com/">MALCOLM</a>, click on a couple of percentage bars to apply filters, and share any interesting HTTPS interception patterns you see!</p></li><li><p>Experiment with <a href="https://github.com/cloudflare/mitmengine">MITMEngine</a> today, and see if TLS connections to your website have been impacted by HTTPS interception.</p></li><li><p>Contribute to MITMEngine!</p></li></ol><p></p> ]]></content:encoded>
            <category><![CDATA[HTTPS]]></category>
            <category><![CDATA[Malware]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[Speed & Reliability]]></category>
            <category><![CDATA[Research]]></category>
            <guid isPermaLink="false">1Pl12Ah2e26vZxqTeuN3vm</guid>
            <dc:creator>Gabbi Fisher</dc:creator>
            <dc:creator>Luke Valenta</dc:creator>
        </item>
        <item>
            <title><![CDATA[Real URLs for AMP Cached Content Using Cloudflare Workers]]></title>
            <link>https://blog.cloudflare.com/real-urls-for-amp-cached-content-using-cloudflare-workers/</link>
            <pubDate>Tue, 13 Nov 2018 19:33:00 GMT</pubDate>
            <description><![CDATA[ As Cloudflare Workers matures, we continue to push ourselves to develop and deploy important features using them. Today, we’re excited to announce support for HTTP signed exchanges, generated by Cloudflare Workers! ]]></description>
            <content:encoded><![CDATA[ 
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5zK61KytdjS5NqmHTcWfrX/25a160e62d1bf1058ef6b61558fe9f60/amp-share-copy_4x.png" />
            
            </figure><p>Today, we’re excited to announce our solution for arguably the biggest issue affecting Accelerated Mobile Pages (AMP): the inability to use real origin URLs when serving AMP-cached content. To allow AMP caches to serve content under its origin URL, we implemented HTTP signed exchanges, which extend authenticity and integrity to content cached and served on behalf of a publisher. This logic lives on <a href="https://www.cloudflare.com/products/cloudflare-workers/">Cloudflare Workers</a>, meaning that adding HTTP signed exchanges to your content is just a simple Workers application away. Publishers on Cloudflare can now take advantage of AMP performance and have AMP caches serve content with their origin URLs. We're thrilled to use Workers as a core component of this solution.</p><p>HTTP signed exchanges are a crucial component of the emerging Web Packaging standard, a set of protocols used to package websites for distribution through optimized delivery systems like Google AMP. This announcement comes just in time for Chrome Dev Summit 2018, where our colleague Rustam Lalkaka spoke about our efforts to advance the Web Packaging standard.</p>
    <div>
      <h3>What is Web Packaging and Why Does it Matter?</h3>
      <a href="#what-is-web-packaging-and-why-does-it-matter">
        
      </a>
    </div>
    <p>You may already see the need for Web Packaging on a daily basis. On your smartphone, perhaps you’ve searched for Christmas greens, visited 1-800-Flowers directly from Google, and have been surprised to see content served under the URL <a href="https://google.com/amp/1800flowers.com/blog/flower-facts/types-of-christmas-greens/amp">https://google.com/amp/1800flowers.com/blog/flower-facts/types-of-christmas-greens/amp</a>. This is an instance of AMP in action, where Google serves cached content so your desired web page loads faster.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6kgYK09foa4H185JC2AaO4/26c0a7ef1039fd33516c1653bb7d2f70/ezgif-noAMP.gif" />
            
            </figure><p>Visiting 1-800 Flowers through AMP without HTTP signed exchange</p><p>Google cannot serve cached content under publisher URLs for clear security reasons. To securely present content from a URL, a <a href="https://www.cloudflare.com/application-services/products/ssl/">TLS certificate</a> for its domain is required. Google cannot provide 1-800-Flowers’ certificate on the vendor’s behalf, because it does not have the corresponding private key. Additionally, Google cannot, and should not be able to, sign content using the private key that corresponds to 1-800-Flowers’ certificate.</p><p>The inability to use original content URLs with AMP posed some serious issues. First, the google.com/amp URL prefix can strip URLs of their meaning. To the frustration of publishers, their content is no longer directly attributed to them by a URL (let alone a certificate). The publisher can no longer prove the integrity and authenticity of content served on their behalf.</p><p>Second, for web browsers the lack of a publisher’s URL can call the integrity and authenticity of a cached webpage into question. Namely, there’s no clear way to prove that this response is a cached version of an actual page published by 1-800-Flowers. Additionally, cookies are managed by third-party providers like Google instead of the publisher.</p><p>Enter Web Packaging, a <a href="https://github.com/WICG/webpackage">collection of specifications</a> for “packaging” website content with information like certificates and their validity. The <a href="https://wicg.github.io/webpackage/draft-yasskin-http-origin-signed-responses.html">HTTP signed exchanges specification</a> allows third-party caches to cache and service HTTPS requests with proof of integrity and authenticity.</p>
    <div>
      <h3>HTTP Signed Exchanges: Extending Trust with Cryptography</h3>
      <a href="#http-signed-exchanges-extending-trust-with-cryptography">
        
      </a>
    </div>
    <p>In the pre-AMP days, people expected to find a webpage’s content at one definitive URL. The publisher, who owns the domain of the definitive URL, would present a visitor with a certificate that corresponds to this domain and contains a public key.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6GeUoh4OBPc3gUuywTcGAE/651a5a6638d6eef794fc9bf6462983db/step-one_4x.png" />
            
            </figure><p>The publisher would use the corresponding private key to sign a cryptographic handshake, which is used to derive shared symmetric keys that are used to encrypt the content and protect its integrity.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/39tMMzRKPaWwXidW4fxyDw/edaf20ec675376ec08c0a9069a794d02/step-2_4x.png" />
            
            </figure><p>The visitor would then receive content encrypted and signed by the shared key.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2rOJWcnNjs0ps5CFQIDvZI/60fb1846557c2f9587e4d555ce03151a/step-3_4x.png" />
            
            </figure><p>The visitor’s browser then uses the shared key to verify the response’s signature and, in turn, the authenticity and integrity of the content received.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/17OtcMYs9DYkDblDZVcQj5/54cda01c719370ebf2546ca0d3652f2d/step-4_4x.png" />
            
            </figure><p>With services like AMP, however, online content may correspond to more than one URL. This introduces a problem: while only one domain actually corresponds to the webpage’s publisher, multiple domains can be responsible for serving a webpage. If a publisher allows AMP services to cache and serve their webpages, they must be able to sign their content even when AMP caches serve it for them. Only then can AMP-cached content prove its legitimacy.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/70Tx1Lvaj6AIDzjWUJPuXi/c1d9d474af8f2e1bb50aa0a6a145a9c5/step-4-copy-4_4x.png" />
            
            </figure><p>HTTP signed exchanges directly address the problem of extending publisher signatures to services like AMP. This <a href="https://wicg.github.io/webpackage/draft-yasskin-http-origin-signed-responses.html">IETF draft</a> specifies how publishers may sign an HTTP request/response pair (an exchange). With a signed exchange, the publisher can assure the integrity and authenticity of a response to a specific request even before the client makes the request. Given a signed exchange, the publisher authorizes intermediates (like Google’s AMP Cache) to forward the exchanges; the intermediate responds to a given request with the corresponding response in the signed HTTP request/response pair. A browser can then verify the exchange signature to assert the intermediate response’s integrity and authenticity.</p><p>This is like handing out an answer key to a quiz signed by the instructor. Having a signed answer sheet is just as good as getting the answer from the teacher in real time.</p>
    <div>
      <h3>The Technical Details</h3>
      <a href="#the-technical-details">
        
      </a>
    </div>
    <p>An HTTP signed exchange is generated by the following steps. First, the publisher uses <a href="https://tools.ietf.org/id/draft-thomson-http-mice-03.txt">MICE</a> (Merkle Integrity Content Encoding) to provide a concise proof of integrity for the response included in the exchange. To start, the response is split into blocks of a fixed record size. Take, for example, a message ABCD, which is divided into record-size blocks A, B, C, and D. The first step in constructing a proof of integrity is to take the last block, D, and compute the following:</p>
            <pre><code>proof(D) = SHA-256(D || 0x0)</code></pre>
            <p>This produces proof(D). Then, all consequent proof values for blocks are computed as follows:</p>
            <pre><code>proof(C) = SHA-256(C || proof(D) || 0x1)
proof(B) = SHA-256(B || proof(C) || 0x1)
proof(A) = SHA-256(A || proof(B) || 0x1)</code></pre>
            <p>Generating these proofs builds the following tree:</p>
            <pre><code>      proof(A)
         /\
        /  \
       /    \
      A    proof(B)
            /\
           /  \
          /    \
         B    proof(C)
                /\
               /  \
              /    \
             C    proof(D)
                    |
                    |
                    D</code></pre>
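<p>The proof chain above translates almost line for line into code. Here is an illustrative Python sketch of these formulas (not a spec-complete mi-sha256 implementation; the real encoding also transmits the record size along with the encoded body):</p>

```python
import hashlib

def mice_digest(message: bytes, record_size: int) -> bytes:
    """Compute the integrity-proof tree head for a message.

    Mirrors the formulas above: the last block is hashed with a 0x00
    terminator, and each earlier block folds in the next block's proof
    with a 0x01 terminator.
    """
    # Split the message into record-size blocks.
    blocks = [message[i:i + record_size]
              for i in range(0, len(message), record_size)]
    proof = hashlib.sha256(blocks[-1] + b"\x00").digest()   # proof(D)
    for block in reversed(blocks[:-1]):                     # proof(C), proof(B), proof(A)
        proof = hashlib.sha256(block + proof + b"\x01").digest()
    return proof  # the 256-bit tree head carried in the Digest header
```

<p>For the four-block example, <code>mice_digest(b"ABCD", 1)</code> reproduces proof(A); a recipient of the full response recomputes the same value to verify integrity.</p>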
            <p>As such, proof(A) is a 256-bit digest that a person who receives the real response should be able to recompute for themselves. If a recipient can recompute a tree head value identical to proof(A), they can verify the integrity of the response they received. In fact, this digest plays a similar role to the tree head of a <a href="/introducing-certificate-transparency-and-nimbus/">Merkle Tree</a>, which is recomputed and compared to the presented tree head to verify the membership of a particular node. The MICE-generated digest is stored in the Digest header of the response.</p><p>Next, the publisher serializes the headers and payloads of a request/response pair into <a href="https://tools.ietf.org/html/rfc7049">CBOR</a> (Concise Binary Object Representation). CBOR’s key-value storage is structurally similar to JSON, but creates smaller message sizes.</p><p>Finally, the publisher signs the CBOR-encoded request/response pair using the private key associated with the publisher’s certificate. This becomes the value of the sig parameter in the HTTP signed exchange.</p><p>The final HTTP signed exchange appears like the following:</p>
            <pre><code>sig=*MEUCIQDXlI2gN3RNBlgFiuRNFpZXcDIaUpX6HIEwcZEc0cZYLAIga9DsVOMM+g5YpwEBdGW3sS+bvnmAJJiSMwhuBdqp5UY=*;  
integrity="digest/mi-sha256";  
validity-url="https://example.com/resource.validity.1511128380";  
cert-url="https://example.com/oldcerts";  
cert-sha256=*W7uB969dFW3Mb5ZefPS9Tq5ZbH5iSmOILpjv2qEArmI=*;  
date=1511128380; expires=1511733180</code></pre>
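<p>For illustration, the parameters above can be pulled apart with a naive split (a hypothetical helper; the real grammar for this header is stricter than splitting on ';' and '='):</p>

```python
def parse_signature_params(header: str) -> dict:
    """Split a signed-exchange signature header into its parameters.

    A simplified sketch for reading the example above only; it ignores
    the stricter structured-header rules a real parser must follow.
    """
    params = {}
    for part in header.split(";"):
        key, _, value = part.strip().partition("=")
        # Drop the *...* (binary) and "..." (string) value delimiters.
        params[key] = value.strip('*"')
    return params
```

<p>Given the example header, <code>parse_signature_params(...)["integrity"]</code> returns <code>digest/mi-sha256</code>, and the <code>date</code> and <code>expires</code> parameters bound the signature's validity window.</p>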
            <p>Services like AMP can send signed exchanges by using a new HTTP response format that includes the signature above in addition to the original response.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/30wTgi8lZo4TnZN0nsg73U/6de57d124bdd34b656633929962221b0/step-4-copy-3_4x.png" />
            
            </figure><p>When this signature is included in an AMP-cached response, a browser can verify the legitimacy of this response. First, the browser confirms that the certificate provided in cert-url corresponds to the request’s domain and is still valid. It next uses the certificate’s public key, as well as the headers and body values of the request/response pair, to check the authenticity of the signature, sig. The browser then checks the integrity of the response using the given integrity algorithm, digest/mi-sha256 (aka MICE), and the contents of the Digest header. Now the browser can confirm that a response provided by a third party has the integrity and authenticity of the content’s original publisher.</p><p>After all this behind-the-scenes work, the browser can now present the original URL of the content instead of one prefixed by google.com/amp. Yippee to solving one of AMP’s most substantial pain points!</p>
    <div>
      <h3>Generating HTTP Signed Exchanges with Workers</h3>
      <a href="#generating-http-signed-exchanges-with-workers">
        
      </a>
    </div>
    <p>From the overview above, the process of generating an HTTP signed exchange is clearly involved. What if there were a way to automate the generation of HTTP signed exchanges and have services like AMP automatically pick them up? With Cloudflare Workers… we found a way you could have your HTTP origin exchange cake and eat it too!</p><p>We have already implemented HTTP signed exchanges for one of our customers, <a href="https://www.1800flowers.com/">1-800-Flowers</a>. Code deployed in a Cloudflare Worker is responsible for fetching and generating the information necessary to create this HTTP signed exchange.</p><p>This Worker integrates with Google AMP’s automatic caching. When Google’s search crawler crawls a site, it will request a signed exchange from the same URL if the site’s initial response includes Vary: AMP-Cache-Transform. Our HTTP signed exchange Worker checks whether we can generate a signed exchange and whether the current document is valid AMP. If both checks pass, that Vary header is returned. After Google’s crawler sees this Vary header in the response, it will send another request with the following two headers:</p>
            <pre><code>AMP-Cache-Transform: google
Accept: application/signed-exchange;v=b2</code></pre>
            <p>When our implementation sees these header values, it will attempt to generate and return an HTTP response with Content-Type: application/signed-exchange;v=b2.</p><p>Now that Google has cached this page with the signed exchange produced by our Worker, the requested page will appear with the publisher’s URL instead of Google’s AMP Cache URL. Success!</p><p>If you’d like to see HTTP signed exchanges in action on 1-800-Flowers, follow these steps:</p><ol><li><p>Install/open Chrome Beta for Android. (It should be version 71+.)</p></li><li><p>Go to <a href="https://goo.gl/webpackagedemo">goo.gl/webpackagedemo</a>.</p></li><li><p>Search for “Christmas greens.”</p></li><li><p>Click on the 1-800-Flowers link -- it should be about 3 spots down with the AMP icon next to it. Along the way to getting there you should see a blue box that says "Results with the AMP icon use web packaging technology." If you see a different message, double check that you are using the correct Chrome Beta.</p></li></ol><p>An example of AMP in action for 1-800-Flowers:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5j6PfikkSSKXgrgSRmvYFh/9af3b955be231c64435517d2466af692/ezgif-w-amp.gif" />
            
            </figure><p>Visiting 1-800 Flowers through AMP with HTTP signed exchange</p>
    <div>
      <h3>The Future: Deploying HTTP Signed Exchanges as a Worker App</h3>
      <a href="#the-future-deploying-http-signed-exchanges-as-a-worker-app">
        
      </a>
    </div>
    <p>Phew. There’s clearly a lot of infrastructure for publishers to build for distributing AMP content. Thankfully Cloudflare has <a href="https://www.cloudflare.com/network/">one of the largest networks in the world</a>, and we now have the ability to execute JavaScript at the edge with <a href="https://www.cloudflare.com/products/cloudflare-workers/">Cloudflare Workers</a>. We have developed a prototype Worker that generates these exchanges, on the fly, for any domain. If you’d like to start experimenting with signed exchanges, <a href="https://www.cloudflare.com/website-optimization/ampersand/">we’d love to talk</a>!</p><p>Soon, we will release this as a Cloudflare Worker application to our AMP customers. We’re excited to bring a better AMP experience to Internet users and advance the Web Packaging standard. Stay tuned!</p>
    <div>
      <h3>The Big Picture</h3>
      <a href="#the-big-picture">
        
      </a>
    </div>
    <p>Web Packaging is not simply a technology that helps fix the URL for AMP pages; it’s a fundamental shift in the way that publishing works online. For the entire history of the web up until this point, publishers have relied on transport layer security (TLS) to ensure that the content that they send to readers is authentic. TLS is great for protecting communication from attackers, but it does not provide any public verifiability. This means that if a website serves a specific piece of content to a specific user, that user has no way of proving that to the outside world. This is problematic when it comes to efforts to archive the web.</p><p>Services like the Internet Archive crawl websites and keep a copy of what the website returns, but who’s to say they haven’t modified it? And who’s to say that the site didn’t serve a different version of the site to the crawler than it did to a set of readers? Web Packaging fixes this issue by allowing sites to digitally sign the actual content, not just the cryptographic keys used to transport data. This subtle change enables a profoundly new ability that we never knew we needed: the ability to record and archive content on the Internet in a trustworthy way, something online publishing has so far lacked. If Web Packaging takes off as a general technology, it could be the first step in creating a trusted digital record for future generations to look back on.</p><p>Excited about the future of Web Packaging and AMP? Check out <a href="https://www.cloudflare.com/website-optimization/ampersand/">Cloudflare Ampersand</a> to see how we're implementing this future.</p>
            <category><![CDATA[Serverless]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[JavaScript]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[AMP]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[HTTPS]]></category>
            <category><![CDATA[Mobile]]></category>
            <category><![CDATA[Programming]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <guid isPermaLink="false">2vBadQs4BUhz2xKzJQ5Bll</guid>
            <dc:creator>Gabbi Fisher</dc:creator>
            <dc:creator>Avery Harnish</dc:creator>
        </item>
        <item>
            <title><![CDATA[The QUICening]]></title>
            <link>https://blog.cloudflare.com/the-quicening/</link>
            <pubDate>Tue, 25 Sep 2018 12:00:00 GMT</pubDate>
            <description><![CDATA[ Six o’clock already, I was just in the middle of a dream, now I’m up, awake, looking at my Twitter stream. As I do that the Twitter app is making multiple API calls over HTTPS to Twitter’s servers somewhere on the Internet. ]]></description>
            <content:encoded><![CDATA[ <p>Six o’clock already, I was just in the middle of a dream, now I’m up, awake, looking at my Twitter stream. As I do that the Twitter app is making multiple API calls over HTTPS to Twitter’s servers somewhere on the Internet.</p><p>Those HTTPS connections are running over TCP via my home WiFi and broadband connection. All’s well inside the house, the WiFi connection is interference free thanks to my eero system, the broadband connection is stable and so there’s no packet loss, and my broadband provider’s connection to Twitter’s servers is also loss free.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2TYEDkSskxA3liOv6VxN4K/ba0ff5d2cd688fddfbe31f18193bb9cb/happy-home-.svg" />
            
            </figure><p>Those are the perfect conditions for HTTPS running over TCP. Not a packet dropped, not a bit of jitter, no congestion. It’s even the perfect conditions for HTTP/2 where multiple streams of requests and responses are being sent from my phone to websites and APIs as I boot my morning. Unlike HTTP/1.1, HTTP/2 is able to use a single TCP connection for multiple, simultaneously in flight requests. That has a significant speed advantage over the old way (one request after another per TCP connection) when conditions are good.</p><p>But I have to catch an early train, got to be to work by nine, so I step out of the front door and my phone silently and smoothly switches from my home WiFi to 4G. All’s not well inside the phone’s apps though. The TCP connections in use between Chrome and apps, and websites and APIs are suddenly silent. Those HTTPS connections are in trouble and about to fail; errors are going to occur deep inside apps. I’m going to see sluggish response from my phone.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2kBQqHqSUvvn7Ucnc0lxCs/e4eca9863819afc21b67e57b8d616980/dropped-tcp-.svg" />
            
            </figure><p>The IP address associated with my phone has abruptly changed as I go from home to roam. TCP connections either stall or get dropped resulting in a weird delay while internal timers inform apps that connections have disappeared or as connections are re-established. It’s irritating, because it takes me so long just to figure out what I'm gonna wear, and now I’m waiting for an app that worked fine moments ago.</p><p>The same thing will happen multiple times on my trip as I jump around the cell towers and service providers along the route. It might be tempting to blame it on the train, but it’s really that the Internet was never meant to work this way. We weren’t meant to be carrying around pocket supercomputers that roam across lossy, noisy networks all the while trying to remain productive while complaining about sub-second delays in app response time.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2dsnywDKpe0un0wFXB7T7Y/96eaf483e734328bd6d077bee917d545/full-commute.svg" />
            
            </figure><p>One proposed solution to these problems is QUIC: a new way to send packets across the Internet that takes into account what a messy place the Internet really is. A place where people don’t stand still and use the same IP address all the time (the horror!), a place where packets get lost because of radio reflections off concrete buildings (how awful!), a place with no Waze (how terrible!) where congestion comes and goes without a live map.</p><p>QUIC tries to make an HTTPS connection between a computer (phone) and server work reliably despite poor conditions. It does this with a collection of technologies.</p><p>The first is UDP to replace TCP. UDP is widely used for fire-and-forget protocols where packets are sent but their arrival or ordering is not guaranteed (TCP provides the opposite: it guarantees arrival order and delivery but at a cost). Because UDP doesn’t have TCP’s guarantees, it lets developers build new protocols on top of it that do guarantee delivery and ordering while incorporating features that TCP lacks.</p><p>One such feature is end-to-end encryption. All QUIC connections are fully encrypted. Another proposed feature is forward error correction, or FEC. When NASA’s Deep Space Network talks to the Voyager 2 spacecraft (which recently left our solar system) it transmits messages that become garbled crossing 17.6 billion km of space (that’s about 11 billion miles). Voyager 2 can’t send back the equivalent of “Say again?” when it receives a garbled message, so the messages sent to Voyager 2 contain error-correcting codes that allow it to reconstruct the message from the mess.</p><p>Similarly, QUIC plans to incorporate error-correcting codes that allow missing data to be reconstructed. Although an app or server can send the “Say again?” message, it’s faster if an error-correcting code makes that unnecessary. 
The result is snappy apps and websites even in difficult Internet conditions.</p><p>QUIC also solves the HTTP/2 HoL problem. HoL is head-of-line blocking: because HTTP/2 sits on top of TCP, and TCP guarantees delivery order, a single lost packet forces the entire TCP connection to wait while it is retransmitted. That’s OK if only one stream of data is passing over the TCP connection, but for efficiency it’s better to have multiple streams per connection. Sadly that means all streams wait when a packet gets lost. QUIC solves that because it doesn’t rely on TCP for delivery and ordering and can make an intelligent decision about which streams need to wait and which can continue when a packet goes astray.</p><p>Finally, one of the slower parts of a standard HTTP/2 over TCP connection is the very beginning. When the app or browser makes a connection there’s an initial handshake at the TCP level followed by a handshake to establish encryption. Over a high-latency connection (say on a mobile phone on 3G) that creates a noticeable delay. Since QUIC controls all aspects of the connection, it merges connection establishment and encryption into a single handshake.</p>
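<p>The forward error correction idea mentioned above can be illustrated with the simplest possible scheme: one XOR parity packet per group (a toy example for intuition, not QUIC’s actual proposal):</p>

```python
from functools import reduce

def xor_parity(packets):
    """XOR a group of equal-length packets into one parity packet."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), packets)

def recover_missing(survivors, parity):
    """Rebuild the single lost packet: XOR the parity with every survivor."""
    return xor_parity(survivors + [parity])
```

<p>If packets A, B, C and their parity A ^ B ^ C are sent and B is lost, the receiver computes A ^ C ^ parity and gets B back with no round trip, which is exactly why FEC beats a “Say again?” on high-latency links.</p>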
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2dL6kbAv4uFdYpp7WcUTDM/ab0fb0209a086ae42b1660e90f061aa6/full-commute-copy.svg" />
            
            </figure><p>Hopefully, this blog post has helped you see the operation of HTTPS on the real, messy, roaming Internet in a different light. Nick’s more <a href="/head-start-with-quic/">technical blog</a> will tell you how to test out QUIC for yourself. Visit <a href="https://cloudflare-quic.com">https://cloudflare-quic.com</a> to get started.</p><p>If you want to join the early access program for QUIC from Cloudflare you’ll find a button on the <a href="https://dash.cloudflare.com?zone=network">Network</a> tab in the Cloudflare Dashboard.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/JonvDzU5JUlYCBTI1pYNV/687c1d86a678a13738d608ac6a925e7e/image4-1.png" />
            
            </figure><p>As we did with TLS 1.3, we’ll be working closely with the IETF as QUIC develops and will continually roll out the latest versions of the standard as they are created. We look forward to the day when all your connections are QUIC!</p><p><a href="/subscribe/"><i>Subscribe to the blog</i></a><i> for daily updates on all our Birthday Week announcements.</i></p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4qBoeuxUM8tPVqGrVUgV1c/f0cb67075597e6c7807afbbc0a807c15/Cloudflare-Birthday-Week-7.png" />
            
            </figure> ]]></content:encoded>
            <category><![CDATA[Birthday Week]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[QUIC]]></category>
            <category><![CDATA[HTTPS]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[Beta]]></category>
            <guid isPermaLink="false">1j6UWopUfEiaG6T8LTE0Wm</guid>
            <dc:creator>John Graham-Cumming</dc:creator>
        </item>
        <item>
            <title><![CDATA[Encrypting SNI: Fixing One of the Core Internet Bugs]]></title>
            <link>https://blog.cloudflare.com/esni/</link>
            <pubDate>Mon, 24 Sep 2018 12:00:00 GMT</pubDate>
            <description><![CDATA[ Cloudflare launched on September 27, 2010. Since then, we've considered September 27th our birthday. This Thursday we'll be turning 8 years old.
Ever since our first birthday, we've used the occasion  ]]></description>
            <content:encoded><![CDATA[ 
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2KlxHjVa3dQkN8o9aUwBhS/8e02971c49b45811314c1393b0f5a761/Cloudflare-Birthday-Week-2.png" />
            
            </figure><p>Cloudflare <a href="https://www.youtube.com/watch?v=bAc_5gMwzuM">launched</a> on September 27, 2010. Since then, we've considered September 27th our birthday. This Thursday we'll be turning 8 years old.</p><p>Ever since our first birthday, we've used the occasion to launch new products or services. Over the years we came to the conclusion that the right thing to do to celebrate our birthday wasn't so much to launch products that we could make money from but instead to do things that were gifts back to our users and the Internet in general. My cofounder Michelle <a href="/cloudflare-turns-8/">wrote about this tradition in a great blog post yesterday</a>.</p><p>Personally, one of my proudest moments at Cloudflare came on our birthday in 2014 when we made <a href="/introducing-universal-ssl/">HTTPS support free for all our users</a>. At the time, people called us crazy — literally and repeatedly. Frankly, internally we had significant debates about whether we were crazy since encryption was the primary reason why people upgraded from a free account to a paid account.</p><p>But it was the right thing to do. The fact that encryption wasn't built into the web from the beginning was, in our mind, a bug. Today, almost exactly four years later, the web is nearly 80% encrypted thanks to leadership from great projects like Let's Encrypt, the browser teams at Google, Apple, Microsoft, and Mozilla, and the fact that more and more hosting and SaaS providers have built in support for HTTPS at no cost. I'm proud of the fact that we were a leader in helping start that trend.</p><p>Today is another day I expect to look back on and be proud of because today we hope to help start a new trend to make the encrypted web more private and secure. To understand that, you have to understand a bit about why the encrypted web as it exists today still leaks a lot of your browsing history.</p>
    <div>
      <h2>How Private Is Your Browsing History?</h2>
      <a href="#how-private-is-your-browsing-history">
        
      </a>
    </div>
    <p>The expectation when you visit a site over HTTPS is that no one listening on the line between you and where your connection terminates can see what you're doing. And to some extent, that's true. If you visit your bank's website, HTTPS is effective at keeping the contents sent to or from the site (for example, your username and password or the balance of your bank account) from being leaked to your ISP or anyone else monitoring your network connection.</p><p>While the contents sent to or received from an HTTPS site are protected, the fact that you visited the site can be observed easily in a couple of ways. Traditionally, one of these has been via <a href="https://www.cloudflare.com/learning/dns/what-is-dns/">DNS</a>. DNS queries are, by default, unencrypted, so your ISP or anyone else can see where you're going online. That's why, last April, we launched <a href="https://one.one.one.one/">1.1.1.1</a> — a free (and <a href="https://www.dnsperf.com/#!dns-resolvers">screaming fast</a>) public DNS resolver with support for DNS over TLS and DNS over HTTPS.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4fWcc2LzN2BjMRcEisXaNq/bfe31ed6b02c14c773ff21e0ef3efefd/Cloudflare_resolver-1111-april-to-sept-2018.png" />
            
            </figure><p><a href="https://one.one.one.one/">1.1.1.1</a> has been a huge success and we've significantly increased the percentage of DNS queries sent over an encrypted connection. Critics, however, rightly pointed out that the identity of the sites that you visit still can leak in other ways. The most problematic is something called the Server Name Indication (SNI) extension.</p>
    <div>
      <h2>Why SNI?</h2>
      <a href="#why-sni">
        
      </a>
    </div>
    <p>Fundamentally, SNI exists in order to allow you to host multiple encrypted websites on a single IP address. Early browsers didn't include the SNI extension. As a result, when a request was made to establish an HTTPS connection, the web server didn't have much information to go on and could only hand back a single <a href="https://www.cloudflare.com/application-services/products/ssl/">SSL certificate</a> per IP address the web server was listening on.</p><p>One solution to this problem was to create certificates with multiple Subject Alternative Names (SANs). These certificates would encrypt traffic for multiple domains that could all be hosted on the same IP. This is how Cloudflare handles HTTPS traffic from older browsers that don't support SNI. We limit that feature to our paying customers, however, for the same reason that SANs aren't a great solution: they're a hack, a pain to manage, and can slow down performance if they include too many domains.</p><p>The more scalable solution was SNI. The analogy that makes sense to me is to think of a postal mail envelope. The contents inside the envelope are protected and can't be seen by the postal carrier. However, outside the envelope is the street address, which the postal carrier uses to bring the envelope to the right building. On the Internet, a web server's IP address is the equivalent of the street address.</p><p>However, if you live in a multi-unit building, a street address alone isn't enough to get the envelope to the right recipient. To supplement the street address you include an apartment number or recipient's name. That's the equivalent of SNI. If a web server hosts multiple domains, SNI ensures that a request is routed to the correct site so that the right SSL certificate can be returned to be able to encrypt and decrypt any content.</p>
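<p>To see why SNI matters for privacy, it helps to look at the bytes on the wire. A minimal sketch of the server_name extension layout from RFC 6066 (illustration only, not a full TLS implementation) shows the hostname sitting in cleartext inside the ClientHello:</p>

```python
import struct

def sni_extension(hostname: str) -> bytes:
    """Encode a TLS server_name extension (RFC 6066) for one hostname.

    The hostname travels as raw ASCII bytes in the ClientHello,
    before any encryption has been negotiated.
    """
    name = hostname.encode("ascii")
    # ServerName entry: name_type 0 (host_name) + 2-byte length + the name.
    entry = struct.pack("!BH", 0, len(name)) + name
    # ServerNameList: 2-byte list length, then the entries.
    server_name_list = struct.pack("!H", len(entry)) + entry
    # Extension: type 0 (server_name) + 2-byte data length + the list.
    return struct.pack("!HH", 0, len(server_name_list)) + server_name_list
```

<p>Anyone on the path can search these bytes for the destination hostname — no decryption required — which is the leak this post sets out to close.</p>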
    <div>
      <h2>Nosey Networks</h2>
      <a href="#nosey-networks">
        
      </a>
    </div>
    <p>The specification for SNI was introduced by the IETF in 2003 and browsers rolled out support over the next few years. At the time, it seemed like an acceptable tradeoff. The vast majority of Internet traffic was unencrypted. Adding a TLS extension that made it easier to support encryption seemed like a great trade even if that extension itself wasn't encrypted.</p><p>But, today, as HTTPS covers nearly 80% of all web traffic, the fact that SNI leaks every site you visit online to your ISP and anyone else listening on the line has become a glaring privacy hole. Knowing what sites you visit can build a very accurate picture of who you are, creating both privacy and security risks.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/c8iRSUxulX6UwI9piOFPV/b474340b44191a2f633cbb3c8c4efcd3/Cloudflare_https_with_plaintext_dns_tls12_plaintext_sni.png" />
            
            </figure><p>In the United States, ISPs were briefly restricted in their ability to gather customer browsing data under FCC rules passed at the end of the Obama administration. ISPs, however, lobbied Congress and, in April 2017, President Trump signed a Congressional Resolution repealing those protections. As ISPs increasingly <a href="https://arstechnica.com/information-technology/2017/06/oath-verizon-completes-4-5-billion-buy-of-yahoo-and-merges-it-with-aol/">acquire media companies</a> and <a href="https://www.appnexus.com/company/pressroom/att-to-acquire-appnexus">ad targeting businesses</a>, being able to mine the data flowing through their pipes is an increasingly attractive business for them and an increasingly troubling privacy threat to all of us.</p>
    <div>
      <h2>Closing the SNI Privacy Hole</h2>
      <a href="#closing-the-sni-privacy-hole">
        
      </a>
    </div>
    <p>On May 3, about a month after we launched <a href="https://one.one.one.one/">1.1.1.1</a>, I was reading a review of our new service. While the article praised the fact that <a href="https://one.one.one.one/">1.1.1.1</a> was privacy-oriented, it somewhat nihilistically concluded that it was all for naught because ISPs could still spy on you by monitoring SNI. Frustrated, I dashed off an email to some of Cloudflare's engineers and the senior team at Mozilla, with whom we'd been working on a project to help encrypt DNS. I concluded my email:</p><blockquote><p>My simple PRD: if Firefox connects to a Cloudflare IP then we'd give you a public key to use to encrypt the SNI entry before sending it to us. How does it scale to other providers? Dunno, but we have to start somewhere. Rough consensus and running code, right?</p></blockquote><p>It turned out to be <a href="/encrypted-sni">a bit more complex than that</a>. However, today I'm proud to announce that Encrypted SNI (ESNI) is live across Cloudflare's network. Later this week we expect Mozilla's Firefox to become the first browser to support the new protocol in their Nightly release. In the months to come, the plan is for it to go mainstream. And it's not just Mozilla. There's been significant interest from all the major browser makers and I'm hopeful they'll all add support for ESNI over time.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6m8eyH5dCGo5BQfLPfexbz/92ef550e0368ec79ce665f492d083234/Cloudflare_https_with_secure_dns_tls13_encrytped_sni-1.png" />
            
            </figure>
    <div>
      <h2>Hoping to Start Another Trend</h2>
      <a href="#hoping-to-start-another-trend">
        
      </a>
    </div>
    <p>While we're the first to support ESNI, we haven't done this alone. We worked on ESNI with great teams from Apple, Fastly, Mozilla, and others across the industry who, like us, are concerned about Internet privacy. While Cloudflare is the first content network to support ESNI, this isn't a proprietary protocol. It's being worked on as an <a href="https://datatracker.ietf.org/doc/draft-ietf-tls-esni/?include_text=1">IETF Draft RFC</a> and we are hopeful others will help us formalize the draft and implement the standard as well. If you're curious about the technical details behind ESNI, you can learn more from the <a href="/encrypted-sni/">great blog post just published by my colleague Alessandro Ghedini</a>. Finally, when browser support starts to launch later this week you can test this from our <a href="https://encryptedsni.com">handy ESNI testing tool</a>.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/XTA8EuEynvYfTJQCyRATk/53384a3abb066250e9437b24f8cfcbb8/Cloudflare_esni-2.png" />
            
            </figure><p>I'm proud that four years ago we helped start a trend that today has led to nearly all of the web being encrypted. Today, I hope we are again helping start a trend, this time to make the encrypted web even more private and secure.</p><p><a href="/subscribe/"><i>Subscribe to the blog</i></a><i> for daily updates on all our Birthday Week announcements.</i></p> ]]></content:encoded>
            <category><![CDATA[Birthday Week]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[Privacy]]></category>
            <category><![CDATA[HTTPS]]></category>
            <category><![CDATA[Reliability]]></category>
            <category><![CDATA[1.1.1.1]]></category>
            <category><![CDATA[DNS]]></category>
            <guid isPermaLink="false">217e9J3VWa9ZTuRMrfi0Tj</guid>
            <dc:creator>Matthew Prince</dc:creator>
        </item>
        <item>
            <title><![CDATA[Encrypting SNI: How One of the Internet's Big Bugs Was Fixed]]></title>
            <link>https://blog.cloudflare.com/esni-de/</link>
            <pubDate>Mon, 24 Sep 2018 12:00:00 GMT</pubDate>
            <description><![CDATA[ Cloudflare launched on September 27, 2010. Since then, we have considered September 27 our birthday. This coming Thursday we will turn 8 years old. ]]></description>
            <content:encoded><![CDATA[ 
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1wWqJe3PftUIK7AHsrN98x/005a6f1e9aa489d4a91ddf673f60659a/Cloudflare-Birthday-Week-2.png" />
            
            </figure><p>Cloudflare <a href="https://www.youtube.com/watch?v=bAc_5gMwzuM">launched</a> on September 27, 2010. Since then, we have considered September 27 our birthday. This coming Thursday we will turn 8 years old.</p><p>Since our first birthday, we have used the occasion to launch new products or services. Over the years we came to the conclusion that, to celebrate our birthday, we shouldn't launch products we could make money from, but should instead give our users and the Internet at large a gift. My co-founder Michelle <a href="/cloudflare-turns-8/">wrote a great blog post about this tradition yesterday</a>.</p><p>For me personally, one of my proudest moments at Cloudflare was when, for our birthday in 2014, we made <a href="/introducing-universal-ssl/">HTTPS support free for all of our users</a>. At the time, people called us crazy, literally and repeatedly. Honestly, we also had some internal debates about whether we were crazy, because encryption was the main reason people upgraded from a free account to a paid one.</p><p>But it was the right thing to do. The fact that encryption wasn't built into the web from the start was, in our view, a bug. Today, almost exactly four years later, the web is nearly 80% encrypted, thanks to the leadership of great projects like Let's Encrypt, the browser teams at Google, Apple, Microsoft, and Mozilla, and the fact that more and more hosting and SaaS providers offer free support for HTTPS. I'm proud that we played a leading role in starting that trend.</p><p>Today is another day I hope to look back on with pride, because today we will hopefully start a new trend that makes the encrypted web even more private and secure. To understand it, you first need to understand why the encrypted web, as it exists today, still leaks a large part of your browsing history.</p>
    <div>
      <h3><b>How Private Is Your Browsing History?</b></h3>
      <a href="#wie-privat-ist-ihr-browserverlauf">
        
      </a>
    </div>
    <p>When you visit a website over HTTPS, you expect that no one eavesdropping on the connection between you and your destination can see what you are doing. And, to an extent, that is true. When you visit your bank's website, HTTPS ensures that the content sent to or received from the site (e.g., your username and password, or your bank account balance) is not exposed to your ISP or anyone else monitoring your network connection.</p><p>While the content sent to or received from an HTTPS page is protected, the fact that you visited the page can easily be tracked in several ways. One way has traditionally been through DNS. DNS queries are unencrypted by default, so your ISP or anyone else can see where you go online. That is why, last April, we launched <a href="https://one.one.one.one/">1.1.1.1</a>, a free (and <a href="https://www.dnsperf.com/#!dns-resolvers">insanely fast</a>) public DNS resolver with support for DNS-over-TLS and DNS-over-HTTPS.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2cArjVGpgqcKvpcikHi9tq/2a99675f24c5276b9fd95835b26d4e3b/Cloudflare_resolver-1111-april-to-sept-2018.png" />
            
            </figure><p><a href="https://one.one.one.one/">1.1.1.1</a> has been a huge success and we have significantly increased the percentage of DNS queries sent over an encrypted connection. Critics, however, rightly pointed out that the identity of the sites you visit can still leak through other weaknesses. The most problematic of these is the so-called Server Name Indication (SNI) extension.</p>
    <div>
      <h3><b>Why SNI?</b></h3>
      <a href="#warum-sni">
        
      </a>
    </div>
    <p>Fundamentally, SNI exists so that you can host multiple encrypted websites on a single IP address. Early browsers did not include the SNI extension. As a result, when a request was made to establish an HTTPS connection, the web server did not have much information to go on and could only return a single SSL certificate per IP address it was listening on.</p><p>One solution to this problem was to create certificates with multiple <i>Subject Alternative Names</i> (SANs). These certificates encrypt traffic for multiple domains that can all be hosted on the same IP. This is how Cloudflare handles HTTPS traffic from older browsers that don't support SNI. We limit that feature to our paying customers, however, for the same reason that SANs aren't a great solution: they are a hack, hard to manage, and can slow down performance if they include too many domains.</p><p>SNI was the more scalable solution. A good analogy, in my view, is a postal envelope. The contents inside the envelope are protected and not visible to the mail carrier. Outside the envelope, however, is the street address, which the mail carrier uses to bring the envelope to the right building. On the Internet, a web server's IP address is the equivalent of the street address.</p><p>However, if you live in a multi-unit building, the street address alone isn't enough to get the envelope to the right recipient. To supplement the address, you include an apartment number or the recipient's name. That is the equivalent of SNI. If a web server hosts multiple domains, SNI ensures that the request is routed to the correct site so that the right SSL certificate can be returned to encrypt and decrypt all of the content.</p>
    <div>
      <h3><b>Nosey Networks</b></h3>
      <a href="#neugierige-netzwerke">
        
      </a>
    </div>
    <p>The specification for SNI was introduced by the IETF in 2003, and browsers rolled out support over the following years. At the time, it seemed like an acceptable tradeoff. The vast majority of Internet traffic was unencrypted. Adding a TLS extension that made it easier to support encryption seemed like a great step, even if that extension itself wasn't encrypted.</p><p>But today, HTTPS accounts for nearly 80% of all web traffic. The fact that SNI leaks every website you visit online to your ISP and anyone else eavesdropping on the connection has become a glaring privacy hole. Knowing which websites you visit can paint a very accurate picture of who you are, creating both privacy and security risks.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1g5hZdym2SbTSehzlbzHXp/0312b28baa77dfdfb5b0da24e4e83dc0/Cloudflare_https_with_plaintext_dns_tls12_plaintext_sni.png" />
            
            </figure><p>In the United States, ISPs were briefly restricted in their ability to collect data about their customers' browsing behavior under FCC rules passed shortly before the end of the Obama administration. ISPs, however, successfully lobbied Congress, and in April 2017 President Trump signed a Congressional Resolution repealing those protections. As ISPs increasingly acquire <a href="https://arstechnica.com/information-technology/2017/06/oath-verizon-completes-4-5-billion-buy-of-yahoo-and-merges-it-with-aol/">media</a> and <a href="https://www.appnexus.com/company/pressroom/att-to-acquire-appnexus">ad targeting companies</a>, they gain the ability to mine the data flowing through their pipes. For those companies that is an increasingly attractive business; for all of us, it is an increasingly troubling threat to privacy.</p>
    <div>
      <h3><b>Closing the SNI Privacy Hole</b></h3>
      <a href="#das-sni-privatspharenleck-schliessen">
        
      </a>
    </div>
    <p>On May 3, about a month after the launch of <a href="https://one.one.one.one/">1.1.1.1</a>, I read a review of our new service. While the article praised the fact that <a href="https://one.one.one.one/">1.1.1.1</a> helps protect privacy, it somewhat nihilistically concluded that it was all for naught, because ISPs could still spy on us by monitoring SNI. Frustrated, I dashed off an email to some Cloudflare engineers and the senior team at Mozilla, with whom we were working on a project to encrypt DNS. I closed my email with:</p><blockquote><p>My simple PRD: if Firefox connects to a Cloudflare IP, then we'd give it a public key it can use to encrypt the SNI entry before sending it to us. How does it scale to other providers? No idea, but we have to start somewhere. Rough consensus and running code, right?</p></blockquote><p>It turned out to be <a href="/encrypted-sni">a bit more complex than that</a>. Today, however, I am proud to announce that Encrypted SNI (ESNI) is live across Cloudflare's entire network. We expect that later this week Mozilla's Firefox will become the first browser to support the new protocol, in its Nightly release. In the coming months, the plan is for it to go mainstream. And it's not just Mozilla. There is great interest from all the major browser vendors, and I am confident that each of them will support ESNI over time.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7kMgBtfX1zoyC275LnKoVs/703408a4cfeb7ce1959574a007b3a9ce/Cloudflare_https_with_secure_dns_tls13_encrytped_sni-1.png" />
            
            </figure>
    <div>
      <h3><b>Hoping to Start Another Trend</b></h3>
      <a href="#in-der-hoffnung-einen-weiteren-trend-zu-starten">
        
      </a>
    </div>
    <p>While we are the first to support ESNI, we did not develop it alone. We worked on ESNI together with great teams from Apple, Fastly, Mozilla, and others across the industry who, like us, care about privacy on the Internet. And while Cloudflare is the first content network to support ESNI, it is not a proprietary protocol. It is being worked on as an <a href="https://datatracker.ietf.org/doc/draft-ietf-tls-esni/?include_text=1">IETF Draft RFC</a>, and we hope others will help us formalize the draft and implement the standard as well. If you are curious about the technical details behind ESNI, you can learn more from the <a href="/encrypted-sni/">great blog post my colleague Alessandro Ghedini has just published</a>. And when browser support starts to launch at the end of this week, you can test it with our <a href="https://encryptedsni.com/">handy ESNI testing tool</a>.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1G1SYDC7ap5s5vmT4XkP0h/d8b59712c00928361869ddf815ba51ac/Cloudflare_esni-2.png" />
            
            </figure><p>Four years ago, I was proud that we contributed to a trend that has led to nearly the entire web being encrypted today. I hope that today we are again contributing to a trend, this time to make the encrypted web even more private and even more secure.</p><p><a href="/subscribe/"><i>Subscribe to the blog</i></a> <i>for daily updates on all of our Birthday Week announcements.</i></p> ]]></content:encoded>
            <category><![CDATA[Birthday Week]]></category>
            <category><![CDATA[HTTPS]]></category>
            <category><![CDATA[DNS]]></category>
            <category><![CDATA[Privacy]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[1.1.1.1]]></category>
            <guid isPermaLink="false">2plnCwUypgnfMv1s2hNVXP</guid>
            <dc:creator>Matthew Prince</dc:creator>
        </item>
        <item>
            <title><![CDATA[Cloudflare goes InterPlanetary - Introducing Cloudflare’s IPFS Gateway]]></title>
            <link>https://blog.cloudflare.com/distributed-web-gateway/</link>
            <pubDate>Mon, 17 Sep 2018 13:01:00 GMT</pubDate>
            <description><![CDATA[ Today we’re excited to introduce Cloudflare’s IPFS Gateway, an easy way to access content from the InterPlanetary File System (IPFS) that doesn’t require installing and running any special software on your computer. ]]></description>
            <content:encoded><![CDATA[ <p></p><p>Today we’re excited to introduce Cloudflare’s IPFS Gateway, an easy way to access content from the InterPlanetary File System (IPFS) that doesn’t require installing and running any special software on your computer. We hope that our gateway, hosted at <a href="https://cloudflare-ipfs.com">cloudflare-ipfs.com</a>, will serve as the platform for many new highly-reliable and security-enhanced web applications. The IPFS Gateway is the first product to be released as part of our <a href="https://www.cloudflare.com/distributed-web-gateway">Distributed Web Gateway</a> project, which will eventually encompass all of our efforts to support new distributed web technologies.</p><p>This post will provide a brief introduction to IPFS. We’ve also written an accompanying blog post <a href="/e2e-integrity">describing what we’ve built</a> on top of our gateway, as well as <a href="https://developers.cloudflare.com/distributed-web/">documentation</a> on how to serve your own content through our gateway with your own custom hostname.</p>
    <div>
      <h3>Quick Primer on IPFS</h3>
      <a href="#quick-primer-on-ipfs">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3hS4Q3j1BBgCA4If6fFImz/d25f3b24017cb8dfcfa9208a68f8ed03/spaaaace-ipfs_3.5x-1.png" />
            
            </figure><p>Usually, when you access a website from your browser, your browser tracks down the origin server (or servers) that are the ultimate, centralized repository for the website’s content. It then sends a request from your computer to that origin server, wherever it is in the world, and that server sends the content back to your computer. This system has served the Internet well for decades, but there’s a pretty big downside: centralization makes it impossible to keep content online any longer than the origin servers that host it. If that origin server is hacked or taken out by a natural disaster, the content is unavailable. If the site owner decides to take it down, the content is gone. In short, mirroring is not a first-class concept in most platforms (<a href="https://www.cloudflare.com/always-online/">Cloudflare’s Always Online</a> is a notable exception).</p><p>The InterPlanetary File System aims to change that. IPFS is a peer-to-peer file system composed of thousands of computers around the world, each of which stores files on behalf of the network. These files can be anything: cat pictures, 3D models, or even entire websites. Over 5,000,000,000 files have already been uploaded to <a href="https://cloudflare-ipfs.com/ipfs/QmWimYyZHzChb35EYojGduWHBdhf9SD5NHqf8MjZ4n3Qrr/Filecoin-Primer.7-25.pdf">IPFS</a>.</p>
    <div>
      <h3>IPFS vs. Traditional Web</h3>
      <a href="#ipfs-vs-traditional-web">
        
      </a>
    </div>
    <p>There are two key differences between IPFS and the web as we think of it today.</p><p>The first is that with IPFS anyone can cache and serve any content—for free. Right now, with the traditional web, most people rely on big hosting providers in remote locations to store content and serve it to the rest of the web. If you want to set up a website, you have to pay one of these major services to do this for you. With IPFS, anyone can sign up their computer to be a node in the system and start serving data. It doesn’t matter if you’re working on a Raspberry Pi or running the world’s biggest server. You can still be a productive node in the system.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/66pmjeMDzYrBDczI8gesbH/8b9b8588bd3bf911aa71bc5e87bbd671/Decentralized-Network-1.png" />
            
            </figure><p>The second key difference is that data is content-addressed, rather than location-addressed. That’s a bit of a subtle difference, but the ramifications are substantial, so it’s worth breaking down.</p><p>Currently, when you open your browser and navigate to example.com, you’re telling the browser “fetch me the data stored at example.com’s IP address” (this happens to be 93.184.216.34). That IP address marks where the content you want is stored in the network. You then send a request to the server at that IP address for the “example.com” content and the server sends back the relevant information. So at the most basic level, you tell the network where to look and the network sends back what it found.</p><p>IPFS turns that on its head.</p><p>With IPFS, every single block of data stored in the system is addressed by a cryptographic hash of its contents, i.e., a long string of letters and numbers that is unique to that block. When you want a piece of data in IPFS, you request it by its hash. So rather than asking the network “get me the content stored at 93.184.216.34,” you ask “get me the content that has a hash value of <code>QmXnnyufdzAWL5CqZ2RnSNgPbvCc1ALT73s6epPrRnZ1Xy</code>.” (<code>QmXnnyufdzAWL5CqZ2RnSNgPbvCc1ALT73s6epPrRnZ1Xy</code> happens to be the hash of a .txt file containing the string “I’m trying out IPFS”).</p>
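<p>A toy sketch (hypothetical code, not the actual IPFS implementation) makes the inversion clear: in a content-addressed store, the lookup key is derived from the data itself, so it doesn’t matter which node answers the request.</p>

```python
import hashlib

class ContentStore:
    """Toy content-addressed store: the address of a block is the
    SHA-256 hash of its bytes, not a location."""
    def __init__(self):
        self._blocks = {}

    def put(self, data: bytes) -> str:
        addr = hashlib.sha256(data).hexdigest()
        self._blocks[addr] = data
        return addr

    def get(self, addr: str) -> bytes:
        return self._blocks[addr]

store = ContentStore()
addr = store.put(b"I'm trying out IPFS")
assert store.get(addr) == b"I'm trying out IPFS"
# Identical content always yields the identical address, wherever it lives:
assert ContentStore().put(b"I'm trying out IPFS") == addr
```

<p>A location-addressed store, by contrast, would key blocks by an IP address or URL, and two copies of the same data would have two unrelated keys.</p>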
    <div>
      <h3>How is this so different?</h3>
      <a href="#how-is-this-so-different">
        
      </a>
    </div>
    <p>Remember that with IPFS, you tell the network what to look for, and the network figures out where to look.</p>
    <div>
      <h3>Why does this matter?</h3>
      <a href="#why-does-this-matter">
        
      </a>
    </div>
    <p>First off, it makes the network more resilient. The content with a hash of <code>QmXnnyufdzAWL5CqZ2RnSNgPbvCc1ALT73s6epPrRnZ1Xy</code> could be stored on dozens of nodes, so if one node that was caching that content goes down, the network will just look for the content on another node.</p><p>Second, it introduces an automatic level of security. Let’s say you know the hash value of a file you want. So you ask the network, “get me the file with hash <code>QmXnnyufdzAWL5CqZ2RnSNgPbvCc1ALT73s6epPrRnZ1Xy</code>” (the example.txt file from above). The network responds and sends the data. When you receive all the data, you can rehash it. If the data was changed at all in transit, the hash value you get will be different from the hash you asked for. You can think of the hash as a unique fingerprint for the file. If you’re sent back a different file than you were expecting to receive, it’s going to have a different fingerprint. This means that the system has a built-in way of knowing whether or not content has been tampered with.</p>
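<p>A minimal sketch of that check (simplified here to a raw SHA-256 digest rather than a full IPFS multihash):</p>

```python
import hashlib

def verify(expected_hex: str, received: bytes) -> bool:
    # Rehash whatever arrived; any in-transit tampering changes the digest.
    return hashlib.sha256(received).hexdigest() == expected_hex

original = b"I'm trying out IPFS"
addr = hashlib.sha256(original).hexdigest()

assert verify(addr, original)                     # untampered data checks out
assert not verify(addr, b"I'm trying out IPFS!")  # a single changed byte fails
```
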
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/14TxfLu3bArvharLWoFCJY/64e2fadd810da7c0a51149d8f71a9f95/ipfs-blog-post-image-1-copy_3.5x.png" />
            
            </figure>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4hO98vSSGsM8wh0r4w10GO/d4c1d501a571e241c1632a4645cfc8a3/ipfs-blog-post-image-2-copy_3.5x.png" />
            
            </figure>
    <div>
      <h3>A Note on IPFS Addresses and Cryptographic Hashes</h3>
      <a href="#a-note-on-ipfs-addresses-and-cryptographic-hashes">
        
      </a>
    </div>
    <p>Since we’ve spent some time going over why this content-addressed system is so special, it’s worth talking a little bit about how the IPFS addresses are built. Every address in IPFS is a <a href="https://github.com/multiformats/multihash">multihash</a>, which means that the address combines information about both the hashing algorithm used and the hash output into one string. IPFS multihashes have three distinct parts: the first byte of the multihash indicates which hashing algorithm has been used to produce the hash; the second byte indicates the length of the hash; and the remaining bytes are the value output by the hash function. By default, IPFS uses the <a href="https://en.wikipedia.org/wiki/SHA-2">SHA-256</a> algorithm, which produces a 32-byte hash. This is represented by the string “Qm” in <a href="https://en.wikipedia.org/wiki/Base58">Base58</a> (the default encoding for IPFS addresses), which is why all the example IPFS addresses in this post are of the form “Qm…”.</p><p>While SHA-256 is the standard algorithm used today, this multihash format allows the IPFS protocol to support addresses produced by other hashing algorithms. This allows the IPFS network to move to a different algorithm, should the world discover flaws with SHA-256 sometime in the future. If someone hashed a file with another algorithm, the address of that file would start with some characters other than “Qm”.</p><p>The good news is that, at least for now, SHA-256 is believed to have a number of qualities that make it a strong cryptographic hashing algorithm. The most important of these is that SHA-256 is collision resistant. A collision occurs when there are two different files that produce the same hash when run through the SHA-256 algorithm. To understand why it’s important to prevent collisions, consider this short scenario.
Imagine some IPFS user, Alice, uploads a file with some hash, and another user, Bob, uploads a different file that happens to produce the exact same hash. If this happened, there would be two different files in the network with the exact same address. So if some third person, Carol, sent out an IPFS request for the content at that address, she wouldn't necessarily know whether she was going to receive Bob’s file or Alice’s file.</p><p>SHA-256 makes collisions extremely unlikely. Because SHA-256 computes a 256-bit hash, there are 2^256 possible IPFS addresses that the algorithm could produce. Hence, the chance that there are two files in IPFS that produce a collision is low. Very low. If you’re interested in more details, the <a href="https://en.wikipedia.org/wiki/Birthday_attack#Mathematics">birthday attack</a> Wikipedia page has a cool table showing exactly how unlikely collisions are, given a sufficiently strong hashing algorithm.</p>
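<p>Here’s a short Python sketch of that byte layout (illustrative only: real IPFS hashes a protobuf-wrapped block, so this won’t reproduce the exact addresses in this post, but the 0x12/0x20 multihash header is exactly why they all start with “Qm”):</p>

```python
import hashlib

ALPHABET = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def base58(data: bytes) -> str:
    """Base58 encoding, the default encoding for IPFS addresses."""
    n = int.from_bytes(data, "big")
    out = ""
    while n:
        n, r = divmod(n, 58)
        out = ALPHABET[r] + out
    # each leading zero byte is encoded as a leading '1'
    return "1" * (len(data) - len(data.lstrip(b"\x00"))) + out

def multihash_b58(content: bytes) -> str:
    digest = hashlib.sha256(content).digest()
    # multihash header: 0x12 = sha2-256 algorithm, 0x20 = 32-byte digest
    return base58(bytes([0x12, 0x20]) + digest)

addr = multihash_b58(b"I'm trying out IPFS")
assert addr.startswith("Qm") and len(addr) == 46
```

<p>Because the two header bytes are always 0x12 0x20 for SHA-256, every such address Base58-encodes to a 46-character string beginning with “Qm”.</p>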
    <div>
      <h3>How exactly do you access content on IPFS?</h3>
      <a href="#how-exactly-do-you-access-content-on-ipfs">
        
      </a>
    </div>
    <p>Now that we’ve walked through all the details of what IPFS is, you’re probably wondering how to use it. There are a number of ways to access content that’s been stored in the IPFS network, but we’re going to address two popular ones here. The first way is to download IPFS onto your computer. This turns your machine into a node of the IPFS network, and it’s the best way to interact with the network if you want to get down in the weeds. If you’re interested in playing around with IPFS, the Go implementation can be downloaded <a href="https://ipfs.io/docs/install/">here</a>.</p><p>But what if you want access to content that’s stored on IPFS without the hassle of operating a node locally on your machine? That’s where IPFS gateways come into play. IPFS gateways are third-party nodes that fetch content from the IPFS network and serve it to you over <a href="https://www.cloudflare.com/learning/ssl/what-is-https/">HTTPS</a>. To use a gateway, you don’t need to download any software or type any code. You simply open up a browser and type in the gateway’s name and the hash of the content you’re looking for, and the gateway will serve the content in your browser.</p><p>Say you know you want to access the example.txt file from before, which has the hash <code>QmXnnyufdzAWL5CqZ2RnSNgPbvCc1ALT73s6epPrRnZ1Xy</code>, and there’s a public gateway that is accessible at <code>https://example-gateway.com</code>.</p><p>To access that content, all you need to do is open a browser and type</p>
            <pre><code>https://example-gateway.com/ipfs/QmXnnyufdzAWL5CqZ2RnSNgPbvCc1ALT73s6epPrRnZ1Xy</code></pre>
            <p>and you’ll get back the data stored at that hash. The combination of the /ipfs/ prefix and the hash is referred to as the file path. You always need to provide a full file path to access content stored in IPFS.</p>
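<p>In script form, building that file path looks like this (a sketch; the gateway name is the placeholder from above, and the resulting URL can be fetched with any ordinary HTTP client):</p>

```python
def gateway_url(gateway: str, content_hash: str) -> str:
    # full file path = the /ipfs/ prefix plus the content hash
    return f"{gateway}/ipfs/{content_hash}"

url = gateway_url("https://example-gateway.com",
                  "QmXnnyufdzAWL5CqZ2RnSNgPbvCc1ALT73s6epPrRnZ1Xy")
assert url == ("https://example-gateway.com/ipfs/"
               "QmXnnyufdzAWL5CqZ2RnSNgPbvCc1ALT73s6epPrRnZ1Xy")
```
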
    <div>
      <h3>What can you do with Cloudflare’s Gateway?</h3>
      <a href="#what-can-you-do-with-cloudflares-gateway">
        
      </a>
    </div>
    <p>At the most basic level, you can access any of the billions of files stored on IPFS from your browser. But that’s not the only cool thing you can do. Using Cloudflare’s gateway, you can also build a website that’s hosted entirely on IPFS, but still available to your users at a custom domain name. Plus, we’ll issue any website connected to our gateway a <a href="https://www.cloudflare.com/application-services/products/ssl/">free SSL certificate</a>, ensuring that each of those websites is secure from snooping and manipulation. For more on that, check out the <a href="https://developers.cloudflare.com/distributed-web/">Distributed Web Gateway developer docs</a>.</p><p>As a fun example, we’ve taken the <a href="http://www.kiwix.org/">Kiwix</a> archives of all the different StackExchange websites and built a distributed search engine on top of them using only IPFS. Check it out <a href="https://ipfs-sec.stackexchange.cloudflare-ipfs.com/">here</a>.</p>
    <div>
      <h3>Dealing with abuse</h3>
      <a href="#dealing-with-abuse">
        
      </a>
    </div>
    <p>IPFS is a peer-to-peer network, so there is the possibility of users sharing abusive content. This is not something we support or condone. However, just like how Cloudflare works with more traditional customers, Cloudflare’s IPFS gateway is simply a cache in front of IPFS. Cloudflare does not have the ability to modify or remove content from the IPFS network. If any abusive content is found that is served by the Cloudflare IPFS gateway, you can use the standard abuse reporting mechanism described <a href="https://www.cloudflare.com/abuse/">here</a>.</p>
    <div>
      <h3>Embracing a distributed future</h3>
      <a href="#embracing-a-distributed-future">
        
      </a>
    </div>
    <p>IPFS is only one of a family of technologies that are embracing a new, decentralized vision of the web. Cloudflare is excited about the possibilities introduced by these new technologies and we see our gateway as a tool to help bridge the gap between the traditional web and the new generation of distributed web technologies headlined by IPFS. By enabling everyday people to explore IPFS content in their browser, we make the ecosystem stronger and support its growth. Just like when Cloudflare launched back in 2010 and changed the game for web properties by providing the <a href="https://www.cloudflare.com/security/">security</a>, <a href="https://www.cloudflare.com/performance/">performance</a>, and <a href="https://www.cloudflare.com/performance/ensure-application-availability/">availability</a> that was previously only available to the Internet giants, we think the IPFS gateway will provide the same boost to content on the distributed web.</p><p>Dieter Shirley, CTO of Dapper Labs and Co-founder of CryptoKitties said the following:</p><blockquote><p>We’ve wanted to store CryptoKitty art on IPFS since we launched, but the tech just wasn’t ready yet. Cloudflare’s announcement turns IPFS from a promising experiment into a robust tool for commercial deployment. Great stuff!</p></blockquote><p>The IPFS gateway is exciting, but it’s not the end of the road. There are other equally interesting distributed web technologies that could benefit from Cloudflare’s massive global network and we’re currently exploring these possibilities. If you’re interested in helping build a better internet with Cloudflare, <a href="https://www.cloudflare.com/careers/"><b>we’re hiring!</b></a></p><p><a href="/subscribe/"><i>Subscribe to the blog</i></a><i> for daily updates on our announcements.</i></p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2kIF9JJRHMU2pmS0vA2xbc/4261d639ac630d4c0f55e676621ddd51/Crypto-Week-1-1.png" />
            
            </figure> ]]></content:encoded>
            <category><![CDATA[Crypto Week]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[IPFS]]></category>
            <category><![CDATA[Universal SSL]]></category>
            <category><![CDATA[SSL]]></category>
            <category><![CDATA[DNSSEC]]></category>
            <category><![CDATA[HTTPS]]></category>
            <category><![CDATA[Research]]></category>
            <category><![CDATA[Cryptography]]></category>
            <guid isPermaLink="false">3gBDsqNt0ufJh5O7aQBBxd</guid>
            <dc:creator>Andy Parker</dc:creator>
        </item>
        <item>
            <title><![CDATA[Welcome to Crypto Week]]></title>
            <link>https://blog.cloudflare.com/crypto-week-2018/</link>
            <pubDate>Mon, 17 Sep 2018 13:00:00 GMT</pubDate>
            <description><![CDATA[ The Internet isn’t perfect. It was put together piecemeal through publicly funded research, private investment, and organic growth that has left us with an imperfect tapestry. ]]></description>
            <content:encoded><![CDATA[ <p></p><p>The Internet is an amazing invention. We marvel at how it connects people, connects ideas, and makes the world smaller. But the Internet isn’t perfect. It was put together piecemeal through publicly funded research, private investment, and organic growth that has left us with an imperfect tapestry. It’s also evolving. People are constantly developing creative applications and finding new uses for existing Internet technology. Issues like privacy and security that were afterthoughts in the early days of the Internet are now supremely important. People are being tracked and monetized, websites and web services are being attacked in interesting new ways, and the fundamental system of trust the Internet is built on is showing signs of age. The Internet needs an upgrade, and one of the tools that can make things better is cryptography.</p><p>Every day this week, Cloudflare will be announcing support for a new technology that uses cryptography to make the Internet better (hint: <a href="/subscribe/">subscribe to the blog</a> to make sure you don't miss any of the news). Everything we are announcing this week is free to use and provides a meaningful step towards supporting a new capability or structural reinforcement. So why are we doing this? Because it’s good for the users and good for the Internet. Welcome to Crypto Week!</p>
    <div>
      <h3>Day 1: Distributed Web Gateway</h3>
      <a href="#day-1-distributed-web-gateway">
        
      </a>
    </div>
    <ul><li><p><a href="/distributed-web-gateway/">Cloudflare goes InterPlanetary - Introducing Cloudflare’s IPFS Gateway</a></p></li><li><p><a href="/e2e-integrity/">End-to-End Integrity with IPFS</a></p></li></ul>
    <div>
      <h3>Day 2: DNSSEC</h3>
      <a href="#day-2-dnssec">
        
      </a>
    </div>
    <ul><li><p><a href="/automatically-provision-and-maintain-dnssec/">Expanding DNSSEC Adoption</a></p></li></ul>
    <div>
      <h3>Day 3: RPKI</h3>
      <a href="#day-3-rpki">
        
      </a>
    </div>
    <ul><li><p><a href="/rpki/">RPKI - The required cryptographic upgrade to BGP routing</a></p></li><li><p><a href="/rpki-details/">RPKI and BGP: our path to securing Internet Routing</a></p></li></ul>
    <div>
      <h3>Day 4: Onion Routing</h3>
      <a href="#day-4-onion-routing">
        
      </a>
    </div>
    <ul><li><p><a href="/cloudflare-onion-service/">Introducing the Cloudflare Onion Service</a></p></li></ul>
    <div>
      <h3>Day 5: Roughtime</h3>
      <a href="#day-5-roughtime">
        
      </a>
    </div>
    <ul><li><p><a href="/roughtime/">Roughtime: Securing Time with Digital Signatures</a></p></li></ul>
    <div>
      <h2>A more trustworthy Internet</h2>
      <a href="#a-more-trustworthy-internet">
        
      </a>
    </div>
    <p>Everything we do online depends on a relationship between users, services, and networks that is supported by some sort of trust mechanism. These relationships can be physical (I plug my router into yours), contractual (I paid a <a href="https://www.cloudflare.com/learning/dns/glossary/what-is-a-domain-name-registrar/">registrar</a> for this <a href="https://www.cloudflare.com/learning/dns/glossary/what-is-a-domain-name/">domain name</a>), or reliant on a trusted third party (I sent a message to my friend on iMessage via Apple). The simple act of visiting a website involves hundreds of trust relationships, some explicit and some implicit. The sheer size of the Internet and number of parties involved make trust online incredibly complex. Cryptography is a tool that can be used to encode, enforce, and, most importantly, scale these trust relationships.</p><p>To illustrate this, let’s break down what happens when you visit a website. But before we can do this, we need to know the jargon.</p><ul><li><p><b>Autonomous Systems (100 thousand or so active)</b>: An AS corresponds to a network provider connected to the Internet. Each has a unique Autonomous System Number (ASN).</p></li><li><p><b>IP ranges (1 million or so)</b>: Each AS is assigned a set of numbers called IP addresses. Each of these IP addresses can be used by the AS to identify a computer on its network when connecting to other networks on the Internet. These addresses are assigned by the Regional Internet Registries (RIR), of which there are 5. Data sent from one IP address to another hops from one AS to another based on a “route” that is determined by a protocol called BGP.</p></li><li><p><b>Domain names (&gt;1 billion)</b>: Domain names are the human-readable names that correspond to Internet services (like “cloudflare.com” or “mail.google.com”). 
These Internet services are accessed via the Internet by connecting to their IP address, which can be obtained from their domain name via the <a href="https://www.cloudflare.com/learning/dns/what-is-dns/">Domain Name System (DNS)</a>.</p></li><li><p><b>Content (infinite)</b>: The main use case of the Internet is to enable the transfer of specific pieces of data from one point on the network to another. This data can be of any form or type.</p></li></ul>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7wDRty2HT3gbpxOlTNAgm5/98e8e5b4168678a79bb091946813a845/name-to-asn-_3.5x.png" />
            
            </figure><p>When you type a website such as blog.cloudflare.com into your browser, a number of things happen. First, a (recursive) DNS service is contacted to get the IP address of the site. This DNS server is configured by your ISP when you connect to the Internet, or it can be a public service such as 1.1.1.1 or 8.8.8.8. A query to the DNS service travels from network to network along a path determined by BGP announcements. If the recursive DNS server does not know the answer to the query, then it contacts the appropriate authoritative DNS services, starting with a root DNS server, down to a top level domain server (such as com or org), down to the DNS server that is authoritative for the domain. Once the DNS query has been answered, the browser sends an HTTP request to that IP address (traversing a sequence of networks), and in response, the server sends the content of the website.</p>
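<p>The delegation chain can be sketched as a toy resolver. Everything below is a mock: real resolvers speak the DNS wire protocol and cache answers, and the server names and the 192.0.2.1 answer are illustrative placeholders (192.0.2.1 is a reserved documentation address):</p>

```python
# Toy model of recursive DNS resolution: root -> TLD -> authoritative server.
ROOT = {"com": "com-tld-server"}  # the root knows which servers handle each TLD
TLD = {"com-tld-server": {"cloudflare.com": "cloudflare-auth-server"}}
AUTH = {"cloudflare-auth-server": {"blog.cloudflare.com": "192.0.2.1"}}

def resolve(name: str) -> str:
    """Walk root -> TLD -> authoritative, as a recursive resolver does on a cache miss."""
    labels = name.split(".")
    tld_server = ROOT[labels[-1]]                       # ask the root about "com"
    auth_server = TLD[tld_server][".".join(labels[-2:])]  # ask the TLD about "cloudflare.com"
    return AUTH[auth_server][name]                      # ask the authoritative server

print(resolve("blog.cloudflare.com"))  # -> 192.0.2.1
```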
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2JHDdXKcnw2lce7hUVVvjP/a76212811897165d5e081bdc56be32a4/colorful-crypto-overview--copy-3_3.5x.png" />
            
            </figure><p>So what’s the problem with this picture? For one, every DNS query and every network hop needs to be trusted in order to trust the content of the site. Any DNS query could be modified, a network could advertise an IP that belongs to another network, and any machine along the path could modify the content. When the Internet was small, there were mechanisms to combat this sort of subterfuge. Network operators had a personal relationship with each other and could punish bad behavior, but given the number of networks in existence (<a href="https://www.cidr-report.org/as2.0/autnums.html">almost 400,000 as of this week</a>), this is becoming difficult to scale.</p><p>Cryptography is a tool that can encode these trust relationships and make the whole system reliant on hard math rather than physical handshakes and hopes.</p>
    <div>
      <h3>Building a taller tower of turtles</h3>
      <a href="#building-a-taller-tower-of-turtles">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4thrwRrwuZ1ctzlstdLLqv/4e0c8c5b89d4faa9f8ff3c5d76251f01/turtles.jpg" />
            
            </figure><p><a href="https://www.flickr.com/photos/wwarby/2499825928">Attribution 2.0 Generic (CC BY 2.0)</a></p><p>The two main tools that cryptography provides to help solve this problem are cryptographic hashes and digital signatures.</p><p>A hash function is a way to take any piece of data and transform it into a fixed-length string of data, called a digest or hash. A hash function is considered cryptographically strong if it is computationally infeasible (read: very hard) to find two inputs that result in the same digest, and if changing even one bit of the input results in a completely different digest. The most popular hash function that is considered secure is SHA-256, which has 256-bit outputs. For example, the SHA-256 hash of the word “crypto” is</p><p><code>DA2F073E06F78938166F247273729DFE465BF7E46105C13CE7CC651047BF0CA4</code></p><p>And the SHA-256 hash of “crypt0” is</p><p><code>7BA359D3742595F38347A0409331FF3C8F3C91FF855CA277CB8F1A3A0C0829C4</code></p><p>The other main tool is digital signatures. A digital signature is a value that can only be computed by someone with a private key, but can be verified by anyone with the corresponding public key. Digital signatures are a way for a private key holder to “sign,” or attest to the authenticity of, a specific message in a way that anyone can validate.</p><p>These two tools can be combined to solidify the trust relationships on the Internet. By giving private keys to the trusted parties who are responsible for defining the relationships between ASs, IPs, domain names and content, you can create chains of trust that can be publicly verified. Rather than hope and pray, these relationships can be validated in real time at scale.</p><p>Let’s take our webpage loading example and see where digital signatures can be applied.</p><p><b>Routing</b>. Time-bound delegation of trust is defined through a system called the RPKI. 
RPKI defines an object called a Resource Certificate, an attestation that states that a given IP range belongs to a specific ASN for a given period of time, digitally signed by the RIR responsible for assigning the IP range. Networks share routes via BGP, and if a route is advertised for an IP that does not conform to the Resource Certificate, the network can choose not to accept it.</p>
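<p>A minimal sketch of that acceptance check, using hypothetical ROA data (the ASNs and prefixes below are reserved documentation values; a real validator consumes cryptographically signed RPKI objects and also honors validity periods and maximum prefix lengths, which this toy omits):</p>

```python
import ipaddress

# Hypothetical Route Origin Authorizations: (covered prefix, authorized origin ASN).
ROAS = [
    (ipaddress.ip_network("198.51.100.0/24"), 64496),
    (ipaddress.ip_network("203.0.113.0/24"), 64497),
]

def origin_is_valid(prefix: str, origin_asn: int) -> bool:
    """Accept a BGP announcement only if a ROA covers the prefix with that origin ASN."""
    net = ipaddress.ip_network(prefix)
    return any(
        net.subnet_of(roa_net) and origin_asn == roa_asn
        for roa_net, roa_asn in ROAS
    )

print(origin_is_valid("198.51.100.0/24", 64496))  # -> True  (legitimate announcement)
print(origin_is_valid("198.51.100.0/24", 64511))  # -> False (wrong origin AS: a hijack attempt)
```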
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2GgcQBjG2ecuqivy9h7SCe/f6bea1c0abb365d2cab805c221d00ae6/roas_3x.png" />
            
            </figure><p><b>DNS.</b> Adding cryptographic assurance to routing is powerful, but if a network adversary can change the content of the data (such as the DNS responses), then the system is still at risk. DNSSEC is a system built to provide a trusted link between names and IP addresses. The root of trust in DNSSEC is the DNS root key, which is managed with an <a href="https://www.iana.org/dnssec/ceremonies">elaborate signing ceremony</a>.</p><p><b>HTTPS</b>. When you connect to a site, not only do you want it to be coming from the right host, you also want the content to be private. The Web PKI is a system that issues certificates to sites, allowing you to bind the domain name to a time-bounded private key. Because there are many CAs, additional accountability systems like <a href="/introducing-certificate-transparency-and-nimbus/">certificate transparency</a> need to be involved to help keep the system in check.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6oSYpHOMRKf2EtzXWvcTbE/5b1100ad17540b9946cdbf91604a72fc/connection-to-asn_3.5x.png" />
            
            </figure><p>This cryptographic scaffolding turns the Internet into an encoded system of trust. With these systems in place, Internet users no longer need to trust every network and party involved in this diagram, they only need to trust the RIRs, DNSSEC and the CAs (and know the correct time).</p><p>This week we’ll be making some announcements that help strengthen this system of accountability.</p>
    <div>
      <h2>Privacy and integrity with friends</h2>
      <a href="#privacy-and-integrity-with-friends">
        
      </a>
    </div>
    <p>The Internet is great because it connects us to each other, but the details of how it connects us are important. The technical choices made when the Internet was designed come with some interesting human implications.</p><p>One implication is <b>trackability</b>. Your IP address is contained on every packet you send over the Internet. This acts as a unique identifier for anyone (corporations, governments, etc.) to track what you do online. Furthermore, if you connect to a server, that server’s identity is sent in plaintext on the request <b>even over HTTPS</b>, revealing your browsing patterns to any intermediary who cares to look.</p><p>Another implication is <b>malleability</b>. Resources on the Internet are defined by <i>where</i> they are, not <i>what</i> they are. If you want to go to CNN or BBC, then you connect to the server for cnn.com or bbc.co.uk and validate the certificate to make sure it’s the right site. But once you’ve made the connection, there’s no good way to know that the actual content is what you expect it to be. If the server is hacked, it could send you anything, including dangerous malicious code. HTTPS is a secure pipe, but there’s no inherent way to make sure what gets sent through the pipe is what you expect.</p><p>Trackability and malleability are not inherent features of interconnectedness. It is possible to design networks that don’t have these downsides. It is also possible to build new networks with better characteristics on top of the existing Internet. The key ingredient is cryptography.</p>
    <div>
      <h3>Tracking-resilient networking</h3>
      <a href="#tracking-resilient-networking">
        
      </a>
    </div>
    <p>One of the networks built on top of the Internet that provides good privacy properties is Tor. The Tor network is run by a group of users who allow their computers to be used to route traffic for other members of the network. Using cryptography, it is possible to route traffic from one place to another without points along the path knowing both the source and the destination at the same time. This is called <a href="https://en.wikipedia.org/wiki/Onion_routing">onion routing</a> because it involves multiple layers of encryption, like an onion. Traffic coming out of the Tor network is “anonymous” because it could have come from anyone connected to the network. Everyone just blends in, making it hard to track individuals.</p>
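<p>The layering can be sketched with a toy cipher. To be clear, the XOR “encryption” below is only a stand-in to show the structure; real onion routing uses proper public-key cryptography at each hop:</p>

```python
from itertools import cycle

def xor(key: bytes, data: bytes) -> bytes:
    """Toy symmetric 'cipher' standing in for real encryption -- never use for real traffic."""
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

# Each relay on the path has its own key; the sender wraps the message in one
# layer per relay, innermost layer (the exit node's) first.
relay_keys = [b"entry-key", b"middle-key", b"exit-key"]
message = b"hello from the onion"

packet = message
for key in reversed(relay_keys):
    packet = xor(key, packet)

# Each relay peels exactly one layer. Only the exit node recovers the message,
# and no single relay sees both the source and the destination in the clear.
for key in relay_keys:
    packet = xor(key, packet)

print(packet)  # -> b'hello from the onion'
```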
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/68kxhYzrB1vuZ1FogJ2MgK/e11e433e5f39a95cd6ef9ed4009b5c8b/Tor-Onion-Cloudflare.png" />
            
            </figure><p>Similarly, web services can use onion routing to serve content inside the Tor network without revealing their location to visitors. Instead of using a hostname to identify their network location, so-called onion services use a cryptographic public key as their address. There are hundreds of onion services in use, including the one <a href="/welcome-hidden-resolver/">we use for 1.1.1.1</a> or the one in <a href="https://en.wikipedia.org/wiki/Facebookcorewwwi.onion">use by Facebook</a>.</p><p>Troubles occur at the boundary between the Tor network and the rest of the Internet. This is especially true for users attempting to access services that rely on abuse prevention mechanisms based on reputation. Since Tor is used by both privacy-conscious users and malicious bots, connections from both get lumped together and, as the expression goes, one bad apple ruins the bunch. This unfortunately exposes legitimate visitors to anti-abuse mechanisms like CAPTCHAs. Tools like <a href="/cloudflare-supports-privacy-pass/">Privacy Pass</a> help reduce this burden but don’t eliminate it completely. This week we’ll be announcing a new way to improve this situation.</p>
    <div>
      <h3>Bringing integrity to content delivery</h3>
      <a href="#bringing-integrity-to-content-delivery">
        
      </a>
    </div>
    <p>Let’s revisit the issue of malleability: the fact that you can’t always trust the other side of a connection to send you the content you expect. There are technologies that allow users to ensure the integrity of content without trusting the server. One such technology is a feature of HTML called <a href="https://www.w3.org/TR/SRI/">Subresource Integrity (SRI)</a>. SRI allows a webpage with sub-resources (like a script or stylesheet) to embed a unique cryptographic hash into the page so that when the sub-resource is loaded, it is checked to see that it matches the expected value. This protects the site from loading unexpected scripts from third parties, <a href="/an-introduction-to-javascript-based-ddos/">a known attack vector</a>.</p><p>Another idea is to flip this on its head: what if instead of fetching a piece of content from a specific location on the network, you asked the network to find a piece of content that matches a given hash? By addressing resources based on their actual content rather than by location, it’s possible to create a network in which you can fetch content from anywhere on the network and still know it’s authentic. This idea is called <i>content addressing</i> and there are networks built on top of the Internet that use it. These content-addressed networks, based on protocols such as <a href="https://ipfs.io/">IPFS</a> and <a href="https://datproject.org/">DAT</a>, are blazing a trail for a new trend in Internet applications called the Distributed Web. With Distributed Web applications, malleability is no longer an issue, opening up a new set of possibilities.</p>
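<p>As a sketch, here is how an SRI value can be computed, assuming the commonly used sha384 variant (the script contents and file name are hypothetical; browsers perform the same computation over the fetched sub-resource and compare it to the value embedded in the page):</p>

```python
import base64
import hashlib

def sri_digest(resource: bytes) -> str:
    """Compute a Subresource Integrity value: base64 of the SHA-384 digest of the bytes."""
    return "sha384-" + base64.b64encode(hashlib.sha384(resource).digest()).decode()

script = b"console.log('hello');"  # stand-in for the actual sub-resource bytes
print(sri_digest(script))
# The page then embeds this value, e.g.:
#   <script src="app.js" integrity="sha384-..." crossorigin="anonymous"></script>
# If the fetched bytes change by even one bit, the digest no longer matches
# and the browser refuses to run the script.
```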
    <div>
      <h3>Combining strengths</h3>
      <a href="#combining-strengths">
        
      </a>
    </div>
    <p>Networks based on cryptographic principles, like Tor and IPFS, have one major downside compared to networks based on names: usability. Humans are exceptionally bad at remembering or distinguishing between cryptographically-relevant numbers. Take, for instance, the New York Times’ onion address:</p><p><code>https://www.nytimes3xbfgragh.onion/</code></p><p>This could easily be confused with similar-looking onion addresses, such as</p><p><code>https://www.nytimes3xfkdbgfg.onion/</code></p><p>which may be controlled by a malicious actor.</p><p>Content-addressed networks are even worse from the perspective of regular people. For example, there is a snapshot of the Turkish version of Wikipedia on IPFS with the hash:</p><p><code>QmT5NvUtoM5nWFfrQdVrFtvGfKFmG7AHE8P34isapyhCxX</code></p><p>Try typing this hash into your browser without making a mistake.</p><p>These naming issues are things Cloudflare is perfectly positioned to help solve. First, by putting the hash address of an IPFS site in the DNS (and adding DNSSEC for trust), you can give your site a traditional hostname while maintaining a chain of trust.</p><p>Second, by enabling browsers to use a traditional DNS name to access the web through onion services, you can provide safer access to your site for Tor users, with the added benefit of being better able to distinguish between bots and humans. With Cloudflare as the glue, it is possible to connect both standard Internet and Tor users to websites and services on both the traditional web and the distributed web.</p>
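<p>One common way to publish such a name-to-hash mapping is a DNS TXT record following the DNSLink convention. A sketch of building one, using a hypothetical hostname and the Wikipedia snapshot hash from above:</p>

```python
def dnslink_record(domain: str, ipfs_hash: str) -> tuple[str, str]:
    """Return the (record name, record value) pair for a DNSLink-style TXT record."""
    return (f"_dnslink.{domain}", f"dnslink=/ipfs/{ipfs_hash}")

name, value = dnslink_record(
    "wikipedia.example.com",  # hypothetical hostname for the snapshot
    "QmT5NvUtoM5nWFfrQdVrFtvGfKFmG7AHE8P34isapyhCxX",
)
print(name)   # -> _dnslink.wikipedia.example.com
print(value)  # -> dnslink=/ipfs/QmT5NvUtoM5nWFfrQdVrFtvGfKFmG7AHE8P34isapyhCxX
```

<p>With this record in place (and signed with DNSSEC), a gateway can look up the human-readable name and resolve it to the content hash on the visitor’s behalf.</p>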
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/rzDTSMsIy1c5frGel8Eff/ea82dba88874f5772d6f7bdc3cc54776/bowtie-diagram-crypto-week-2018-v02_medium-1.gif" />
            
            </figure><p>This is the promise of Crypto Week: using cryptographic guarantees to make a stronger, more trustworthy and more private internet without sacrificing usability.</p>
    <div>
      <h2>Happy Crypto Week</h2>
      <a href="#happy-crypto-week">
        
      </a>
    </div>
    <p>In conclusion, we’re working on many cutting-edge technologies based on cryptography and applying them to <a href="https://blog.cloudflare.com/50-years-of-the-internet-work-in-progress-to-a-better-internet/">make the Internet better</a>. The first announcement today is the launch of Cloudflare's <a href="/distributed-web-gateway/">Distributed Web Gateway</a> and <a href="/e2e-integrity/">browser extension</a>. Keep tabs on the Cloudflare blog for exciting updates as the week progresses.</p><p>I’m very proud of Crypto Week, which was made possible by the work of a dedicated team, including several brilliant interns. If this type of work is interesting to you, Cloudflare is hiring for the <a href="https://boards.greenhouse.io/cloudflare/jobs/634967?gh_jid=634967">crypto team</a> and <a href="https://www.cloudflare.com/careers/">others</a>!</p>
            <category><![CDATA[Crypto Week]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[IPFS]]></category>
            <category><![CDATA[HTTPS]]></category>
            <category><![CDATA[Tor]]></category>
            <category><![CDATA[DNS]]></category>
            <category><![CDATA[Reliability]]></category>
            <category><![CDATA[Cryptography]]></category>
            <guid isPermaLink="false">2NzZFGM5fxcJ3xnCx2v7jD</guid>
            <dc:creator>Nick Sullivan</dc:creator>
        </item>
        <item>
            <title><![CDATA[A Detailed Look at RFC 8446 (a.k.a. TLS 1.3)]]></title>
            <link>https://blog.cloudflare.com/rfc-8446-aka-tls-1-3/</link>
            <pubDate>Fri, 10 Aug 2018 23:00:00 GMT</pubDate>
            <description><![CDATA[ TLS 1.3 (RFC 8446) was published today. This article provides a deep dive into the changes introduced in TLS 1.3 and its impact on the future of internet security. ]]></description>
            <content:encoded><![CDATA[ <p>For the last five years, the Internet Engineering Task Force (IETF), the standards body that defines internet protocols, has been working on standardizing the latest version of one of its most important security protocols: Transport Layer Security (TLS). TLS is used to secure the web (and much more!), providing encryption and ensuring the authenticity of every HTTPS website and API. The latest version of TLS, TLS 1.3 (<a href="https://www.rfc-editor.org/rfc/pdfrfc/rfc8446.txt.pdf">RFC 8446</a>) was published today. It is the first major overhaul of the protocol, bringing significant security and performance improvements. This article provides a deep dive into the changes introduced in TLS 1.3 and its impact on the future of internet security.</p>
    <div>
      <h3>An evolution</h3>
      <a href="#an-evolution">
        
      </a>
    </div>
    <p>One major way Cloudflare provides <a href="https://www.cloudflare.com/application-services/solutions/api-security/">security</a> is by supporting HTTPS for websites and web services such as APIs. With HTTPS (the “S” stands for secure) the communication between your browser and the server travels over an encrypted and authenticated channel. Serving your content over HTTPS instead of HTTP provides confidence to the visitor that the content they see is presented by the legitimate content owner and that the communication is safe from eavesdropping. This is a big deal in a world where online privacy is more important than ever.</p><p>The machinery under the hood that makes HTTPS secure is a protocol called TLS. It has its roots in a protocol called Secure Sockets Layer (SSL) developed in the mid-nineties at Netscape. By the end of the 1990s, Netscape handed SSL over to the IETF, who renamed it TLS and have been the stewards of the protocol ever since. Many people still refer to web encryption as SSL, even though the vast majority of services have switched over to supporting TLS only. The term SSL continues to have popular appeal and Cloudflare has kept the term alive through product names like <a href="/keyless-ssl-the-nitty-gritty-technical-details/">Keyless SSL</a> and <a href="/introducing-universal-ssl/">Universal SSL</a>.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/59tFn3me3Oe6OcjT24CqYF/22a662ccc88b06adc516449b8e2be657/image5.png" />
            
            </figure><p>In the IETF, protocol specifications are published as RFCs. TLS 1.0 was RFC 2246, TLS 1.1 was RFC 4346, and TLS 1.2 was RFC 5246. Today, TLS 1.3 was published as RFC 8446. RFCs are generally published in order, so keeping 46 as part of the RFC number is a nice touch.</p>
    <div>
      <h3>TLS 1.2 wears parachute pants and shoulder pads</h3>
      <a href="#tls-1-2-wears-parachute-pants-and-shoulder-pads">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5p8wOtJF3L8LEprZDoO0z7/c4742d6066b73c33e4f5e98afddb83ff/image11.jpg" />
            
            </figure><p><a href="https://memegenerator.net/Mc-Hammer-Pants">MC Hammer</a>, like SSL, was popular in the 90s</p><p>Over the last few years, TLS has seen its fair share of problems. First of all, there have been problems with the code that implements TLS, including <a href="/answering-the-critical-question-can-you-get-private-ssl-keys-using-heartbleed/">Heartbleed</a>, <a href="https://www.imperialviolet.org/2014/09/26/pkcs1.html">BERserk</a>, <a href="https://gotofail.com/">goto fail;</a>, and more. These issues are not fundamental to the protocol and mostly resulted from a lack of testing. Tools like <a href="https://github.com/RUB-NDS/TLS-Attacker">TLS Attacker</a> and <a href="https://security.googleblog.com/2016/12/project-wycheproof.html">Project Wycheproof</a> have helped improve the robustness of TLS implementations, but the more challenging problems faced by TLS have had to do with the protocol itself.</p><p>TLS was designed by engineers using tools from mathematicians. Many of the early design decisions from the days of SSL were made using heuristics and an incomplete understanding of how to design robust security protocols. That said, this isn’t the fault of the protocol designers (Paul Kocher, Phil Karlton, Alan Freier, Tim Dierks, Christopher Allen and others), as the entire industry was still learning how to do this properly. When TLS was designed, formal papers on the design of secure authentication protocols like Hugo Krawczyk’s landmark <a href="http://webee.technion.ac.il/~hugo/sigma-pdf.pdf">SIGMA</a> paper were still years away. TLS was 90s crypto: It meant well and seemed cool at the time, but the modern cryptographer’s design palette has moved on.</p><p>Many of the design flaws were discovered using <a href="https://en.wikipedia.org/wiki/Formal_verification">formal verification</a>. Academics attempted to prove certain security properties of TLS, but instead found counter-examples that were turned into real vulnerabilities. 
These weaknesses range from the purely theoretical (<a href="https://access.redhat.com/articles/2112261">SLOTH</a> and <a href="https://eprint.iacr.org/2018/298.pdf">CurveSwap</a>), to feasible for highly resourced attackers (<a href="https://weakdh.org/imperfect-forward-secrecy-ccs15.pdf">WeakDH</a>, <a href="/logjam-the-latest-tls-vulnerability-explained/">LogJam</a>, <a href="https://censys.io/blog/freak">FREAK</a>, <a href="https://nakedsecurity.sophos.com/2016/08/25/anatomy-of-a-cryptographic-collision-the-sweet32-attack/">SWEET32</a>), to practical and dangerous (<a href="https://en.wikipedia.org/wiki/POODLE">POODLE</a>, <a href="https://robotattack.org/">ROBOT</a>).</p>
    <div>
      <h3>TLS 1.2 is slow</h3>
      <a href="#tls-1-2-is-slow">
        
      </a>
    </div>
    <p>Encryption has always been important online, but historically it was only used for things like logging in or sending credit card information, leaving most other data exposed. There has been a major trend in the last few years towards using HTTPS for all traffic on the Internet. This has the positive effect of protecting more of what we do online from eavesdroppers and <a href="/an-introduction-to-javascript-based-ddos/">injection attacks</a>, but has the downside that new connections get a bit slower.</p><p>For a browser and web server to agree on a key, they need to exchange cryptographic data. The exchange, called the “handshake” in TLS, has remained largely unchanged since TLS was standardized in 1999. The handshake requires two additional round-trips between the browser and the server before encrypted data can be sent (or one when resuming a previous connection). The additional cost of the TLS handshake for HTTPS results in a noticeable hit to latency compared to HTTP alone. This additional delay can negatively impact performance-focused applications.</p>
    <div>
      <h3>Defining TLS 1.3</h3>
      <a href="#defining-tls-1-3">
        
      </a>
    </div>
    <p>Unsatisfied with the outdated design of TLS 1.2 and its two-round-trip overhead, the IETF set about defining a new version of TLS. In August 2013, Eric Rescorla laid out a <a href="https://www.ietf.org/proceedings/87/slides/slides-87-tls-5.pdf">wishlist of features</a> for the new protocol.</p><p>After <a href="https://www.ietf.org/mail-archive/web/tls/current/msg20938.html">some debate</a>, it was decided that this new version of TLS was to be called TLS 1.3. The main issues that drove the design of TLS 1.3 were mostly the same as those presented five years ago:</p><ul><li><p>reducing handshake latency</p></li><li><p>encrypting more of the handshake</p></li><li><p>improving resiliency to cross-protocol attacks</p></li><li><p>removing legacy features</p></li></ul><p>The specification was shaped by volunteers through an open design process, and after four years of diligent work and vigorous debate, TLS 1.3 is now in its final form: RFC 8446. As adoption increases, the new protocol will make the Internet both faster and more secure.</p><p>In this blog post I will focus on the two main advantages TLS 1.3 has over previous versions: security and performance.</p>
    <div>
      <h3>Trimming the hedges</h3>
      <a href="#trimming-the-hedges">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/57PQK3ofbneOYgOmlwNm4Y/e5c3c319903504002330efec0fc06db2/image10.jpg" />
            
            </figure><p><a href="https://commons.wikimedia.org/wiki/File:Williton_Highbridge_Nursery_topiary_garden.jpg">Creative Commons Attribution-Share Alike 3.0</a></p><p>In the last two decades, we as a society have learned a lot about how to write secure cryptographic protocols. The parade of cleverly-named attacks from POODLE to Lucky13 to SLOTH to LogJam showed that even TLS 1.2 contains antiquated ideas from the early days of cryptographic design. One of the design goals of TLS 1.3 was to correct previous mistakes by removing potentially dangerous design elements.</p>
    <div>
      <h4>Fixing key exchange</h4>
      <a href="#fixing-key-exchange">
        
      </a>
    </div>
    <p>TLS is a so-called “hybrid” cryptosystem. This means it uses both symmetric key cryptography (encryption and decryption keys are the same) and public key cryptography (encryption and decryption keys are different). Hybrid schemes are the predominant form of encryption used on the Internet and are used in <a href="https://en.wikipedia.org/wiki/Secure_Shell">SSH</a>, <a href="https://en.wikipedia.org/wiki/IPsec">IPsec</a>, <a href="https://en.wikipedia.org/wiki/Signal_Protocol">Signal</a>, <a href="https://www.wireguard.com/">WireGuard</a> and other protocols. In hybrid cryptosystems, public key cryptography is used to establish a shared secret between both parties, and the shared secret is used to create symmetric keys that can be used to encrypt the data exchanged.</p><p>As a general rule, public key crypto is slow and expensive (microseconds to milliseconds per operation) and symmetric key crypto is fast and cheap (nanoseconds per operation). Hybrid encryption schemes let you send a lot of encrypted data with very little overhead by only doing the expensive part once. Much of the work in TLS 1.3 has been about improving the part of the handshake where public keys are used to establish symmetric keys.</p>
    <div>
      <h4>RSA key exchange</h4>
      <a href="#rsa-key-exchange">
        
      </a>
    </div>
    <p>The public key portion of TLS is about establishing a shared secret. There are two main ways of doing this with public key cryptography. The simpler way is with public-key encryption: one party encrypts the shared secret with the other party’s public key and sends it along. The other party then uses its private key to decrypt the shared secret and ... voila! They both share the same secret. This technique was discovered in 1977 by Rivest, Shamir and Adleman and is called RSA key exchange. In TLS’s RSA key exchange, the shared secret is decided by the client, who then encrypts it to the server’s public key (extracted from the certificate) and sends it to the server.</p>
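<p>As a toy illustration of the RSA key-exchange flow above, here is textbook (unpadded) RSA with deliberately tiny primes. This is a sketch only: real RSA needs large keys and careful padding, and that padding is precisely where TLS implementations went wrong.</p>

```python
# Toy RSA key exchange: the client picks a secret and encrypts it to the
# server's public key. Tiny primes for illustration only -- never do this
# in production (real RSA needs >= 2048-bit keys and careful padding).
from math import gcd

# Server key generation (normally done once; the public key lives in the
# server's certificate).
p, q = 61, 53
n = p * q                      # public modulus
phi = (p - 1) * (q - 1)
e = 17                         # public exponent, coprime with phi
assert gcd(e, phi) == 1
d = pow(e, -1, phi)            # private exponent (Python 3.8+)

# Client: choose the shared secret and encrypt it to the server's public key.
shared_secret = 42
ciphertext = pow(shared_secret, e, n)

# Server: decrypt with the private key. Both sides now hold the same secret.
recovered = pow(ciphertext, d, n)
print(recovered)  # -> 42
```

<p>Note that nothing here is forward secret: anyone who later obtains <code>d</code> can decrypt a recorded <code>ciphertext</code>, which is one reason TLS 1.3 dropped this mode.</p>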
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6vFfpyW3dU5vLUnbrzOGk8/5b9c70ff88d0da6d3210fc37f13e8184/image4.png" />
            
            </figure><p>The other form of key exchange available in TLS is based on another form of public-key cryptography, invented by Diffie and Hellman in 1976, so-called Diffie-Hellman key agreement. In Diffie-Hellman, the client and server both start by creating a public-private key pair. They then send the public portion of their key share to the other party. When each party receives the public key share of the other, they combine it with their own private key and end up with the same value: the pre-main secret. The server then uses a digital signature to ensure the exchange hasn’t been tampered with. This key exchange is called “ephemeral” if the client and server both choose a new key pair for every exchange.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6tjgSGvMVdzh3LZvt1HZT1/98031ef05fdc4353af60d27062fdb67a/image3.png" />
            
            </figure><p>Both modes result in the client and server having a shared secret, but RSA mode has a serious downside: it’s not <a href="/staying-on-top-of-tls-attacks/">forward secret</a>. That means that if someone records the encrypted conversation and then gets ahold of the RSA private key of the server, they can decrypt the conversation. This even applies if the conversation was recorded and the key is obtained well into the future. In a world where national governments are recording encrypted conversations and using exploits like <a href="https://en.wikipedia.org/wiki/Heartbleed">Heartbleed</a> to steal private keys, this is a realistic threat.</p><p>RSA key exchange has been problematic for some time, and not just because it’s not forward-secret. It’s also notoriously difficult to do correctly. In 1998, Daniel Bleichenbacher discovered a vulnerability in the way RSA encryption was done in SSL and created what’s called the “million-message attack,” which allows an attacker to perform an RSA private key operation with a server’s private key by sending a million or so well-crafted messages and looking for differences in the error codes returned. The attack has been refined over the years and in some cases only requires thousands of messages, making it feasible to do from a laptop. As recently as 2017, major websites (including facebook.com) were found to be vulnerable to a variant of Bleichenbacher’s attack called the <a href="https://robotattack.org/">ROBOT attack</a>.</p><p>To reduce the risks caused by non-forward secret connections and million-message attacks, RSA encryption was removed from TLS 1.3, leaving ephemeral Diffie-Hellman as the only key exchange mechanism. Removing RSA key exchange brings other advantages, as we will discuss in the performance section below.</p>
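<p>The ephemeral Diffie-Hellman exchange described above can be sketched in a few lines. The tiny 23-element group is for readability only; real deployments use named groups such as X25519 or a 2048-bit MODP group.</p>

```python
# Ephemeral Diffie-Hellman sketch: each side makes a fresh key pair,
# exchanges only the public halves, and both arrive at the same secret.
import secrets

p, g = 23, 5                        # toy group parameters (never use in practice)

# Each party picks an ephemeral private key and derives a public key share.
a = secrets.randbelow(p - 2) + 1    # client private key
b = secrets.randbelow(p - 2) + 1    # server private key
A = pow(g, a, p)                    # client public key share
B = pow(g, b, p)                    # server public key share

# After swapping A and B over the wire, both compute the same shared value.
client_secret = pow(B, a, p)        # (g^b)^a mod p
server_secret = pow(A, b, p)        # (g^a)^b mod p
assert client_secret == server_secret
```

<p>Because the private values <code>a</code> and <code>b</code> are discarded after the handshake, recording the traffic and later stealing a long-term key reveals nothing about the session: that is forward secrecy.</p>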
    <div>
      <h4>Diffie-Hellman named groups</h4>
      <a href="#diffie-hellman-named-groups">
        
      </a>
    </div>
    <p>When it comes to cryptography, giving too many options leads to the wrong option being chosen. This principle is most evident when it comes to choosing Diffie-Hellman parameters. In previous versions of TLS, the choice of the Diffie-Hellman parameters was up to the participants. Some implementations chose their parameters incorrectly, and vulnerable deployments were the result. TLS 1.3 takes this choice away.</p><p>Diffie-Hellman is a powerful tool, but not all Diffie-Hellman parameters are “safe” to use. The security of Diffie-Hellman depends on the difficulty of a specific mathematical problem called the <a href="https://en.wikipedia.org/wiki/Discrete_logarithm">discrete logarithm problem</a>. If you can solve the discrete logarithm problem for a set of parameters, you can extract the private key and break the security of the protocol. Generally speaking, the bigger the numbers used, the harder it is to solve the discrete logarithm problem. So if you choose small DH parameters, you’re in trouble.</p><p>The LogJam and WeakDH attacks of 2015 showed that many TLS servers could be tricked into using small numbers for Diffie-Hellman, allowing an attacker to break the security of the protocol and decrypt conversations.</p><p>Diffie-Hellman also requires the parameters to have certain other mathematical properties. In 2016, Antonio Sanso found an <a href="http://arstechnica.com/security/2016/01/high-severity-bug-in-openssl-allows-attackers-to-decrypt-https-traffic/">issue in OpenSSL</a> where parameters were chosen that lacked the right mathematical properties, resulting in another vulnerability.</p><p>TLS 1.3 takes the opinionated route, restricting the Diffie-Hellman parameters to ones that are known to be secure. However, it still leaves several options; permitting only one option makes it difficult to update TLS in case these parameters are found to be insecure some time in the future.</p>
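<p>To see why parameter size matters: with a small modulus, the discrete logarithm, and with it the private key, falls to a simple exhaustive search. LogJam used far better algorithms against 512-bit export-grade parameters, but the principle is the same. A toy sketch:</p>

```python
# With a small modulus, the discrete logarithm can be brute-forced,
# recovering the private key from the public key share alone.
p, g = 23, 5                 # dangerously small toy parameters
private = 6
public = pow(g, private, p)  # what an eavesdropper sees on the wire

def brute_force_dlog(target: int, g: int, p: int):
    """Recover x such that g^x = target (mod p) by exhaustive search."""
    for x in range(p):
        if pow(g, x, p) == target:
            return x
    return None

recovered = brute_force_dlog(public, g, p)
print(recovered)  # -> 6
```

<p>With properly sized parameters this search space is astronomically large, which is the entire security argument; with export-grade parameters it was merely expensive, and with toy parameters it is instant.</p>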
    <div>
      <h3>Fixing ciphers</h3>
      <a href="#fixing-ciphers">
        
      </a>
    </div>
    <p>The other half of a hybrid crypto scheme is the actual encryption of data. This is done by combining a message authentication code and a symmetric cipher for which each party knows the key. As I’ll describe, there are many ways to encrypt data, most of which are wrong.</p>
    <div>
      <h4>CBC mode ciphers</h4>
      <a href="#cbc-mode-ciphers">
        
      </a>
    </div>
    <p>In the last section we described TLS as a hybrid encryption scheme, with a public key part and a symmetric key part. The public key part is not the only one that has caused trouble over the years. The symmetric key portion has also had its fair share of issues. In any secure communication scheme, you need both encryption (to keep things private) and integrity (to make sure people don’t modify, add, or delete pieces of the conversation). Symmetric key encryption is used to provide both encryption and integrity, but in TLS 1.2 and earlier, these two pieces were combined in the wrong way, leading to security vulnerabilities.</p><p>An algorithm that performs symmetric encryption and decryption is called a symmetric cipher. Symmetric ciphers usually come in two main forms: block ciphers and stream ciphers.</p><p>A stream cipher takes a fixed-size key and uses it to create a stream of pseudo-random data of arbitrary length, called a key stream. To encrypt with a stream cipher, you take your message and combine it with the key stream by XORing each bit of the key stream with the corresponding bit of your message. To decrypt, you take the encrypted message and XOR it with the key stream. Examples of pure stream ciphers are RC4 and ChaCha20. Stream ciphers are popular because they’re simple to implement and fast in software.</p><p>A block cipher is different from a stream cipher because it only encrypts fixed-sized messages. If you want to encrypt a message that is shorter or longer than the block size, you have to do a bit of work. For shorter messages, you have to add some extra data to the end of the message. For longer messages, you can split your message into blocks the cipher can encrypt and use a block cipher mode to combine the pieces, or you can turn your block cipher into a stream cipher by encrypting a sequence of counters and using that as the key stream. This is called “counter mode”. 
One popular way of encrypting arbitrary length data with a block cipher is a mode called cipher block chaining (CBC).</p>
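<p>The XOR keystream idea can be sketched as follows. SHA-256 stands in for the block cipher purely to keep the example self-contained; real TLS uses actual ciphers such as AES or ChaCha20.</p>

```python
# Counter-mode keystream sketch: derive a keystream by "encrypting" an
# incrementing counter under the key, then XOR it with the message.
# SHA-256 is a stand-in for a real block cipher -- illustration only.
import hashlib

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor_cipher(key: bytes, nonce: bytes, data: bytes) -> bytes:
    ks = keystream(key, nonce, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

key, nonce = b"k" * 32, b"n" * 12
msg = b"attack at dawn"
ct = xor_cipher(key, nonce, msg)   # encrypt
pt = xor_cipher(key, nonce, ct)    # decrypt: XOR is its own inverse
assert pt == msg
```

<p>This provides confidentiality only: without a MAC, an attacker can flip ciphertext bits and flip the corresponding plaintext bits undetected, which is why integrity protection is needed as well.</p>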
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5hHGcSStKo5bHWDn64PXHq/4801697c668fc061eab0c0ab57c2fdd8/image9.png" />
            
            </figure><p></p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/510G6uPdPGbwTJHcsGqPSc/726945c085f0912e1307e18ef4393563/image7.png" />
            
            </figure><p>In order to prevent people from tampering with data, encryption is not enough. Data also needs to be integrity-protected. For CBC-mode ciphers, this is done using something called a message-authentication code (MAC), which is like a fancy checksum with a key. Cryptographically strong MACs have the property that finding a MAC value that matches an input is practically impossible unless you know the secret key. There are two ways to combine MACs and CBC-mode ciphers. Either you encrypt first and then MAC the ciphertext, or you MAC the plaintext first and then encrypt the whole thing. In TLS, they chose the latter, MAC-then-Encrypt, which turned out to be the wrong choice.</p><p>You can blame this choice for <a href="https://www.youtube.com/watch?v=-_8-2pDFvmg">BEAST</a>, as well as a slew of padding oracle vulnerabilities such as <a href="http://www.isg.rhul.ac.uk/tls/Lucky13.html">Lucky 13</a> and <a href="https://eprint.iacr.org/2015/1129">Lucky Microseconds</a>. Read my previous post on <a href="/padding-oracles-and-the-decline-of-cbc-mode-ciphersuites/">padding oracle attacks</a> for a comprehensive explanation of these flaws. The interaction between CBC mode and padding was also the cause of the widely publicized <a href="/sslv3-support-disabled-by-default-due-to-vulnerability/">POODLE vulnerability</a> in SSLv3 and some implementations of TLS.</p><p>RC4 is a classic stream cipher designed by Ron Rivest (the “R” of RSA) that was broadly supported since the early days of TLS. In 2013, it was found to have <a href="http://www.isg.rhul.ac.uk/tls/">measurable biases</a> that could be leveraged to allow attackers to decrypt messages.</p>
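<p>The safer ordering, Encrypt-then-MAC, authenticates the ciphertext and rejects tampering before any decryption or padding logic ever runs. A minimal sketch using HMAC-SHA256 (the function names here are illustrative, not from any TLS implementation):</p>

```python
# Encrypt-then-MAC sketch: MAC the ciphertext and verify the tag *before*
# decrypting, so padding-oracle-style attacks never reach the plaintext
# path. (TLS 1.2's CBC suites used MAC-then-Encrypt, the reverse order.)
import hmac, hashlib

def protect(mac_key: bytes, ciphertext: bytes) -> bytes:
    tag = hmac.new(mac_key, ciphertext, hashlib.sha256).digest()
    return ciphertext + tag

def verify(mac_key: bytes, record: bytes) -> bytes:
    ciphertext, tag = record[:-32], record[-32:]
    expected = hmac.new(mac_key, ciphertext, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):   # constant-time comparison
        raise ValueError("bad record MAC")       # rejected before decryption
    return ciphertext

mac_key = b"m" * 32
record = protect(mac_key, b"opaque-ciphertext-bytes")
assert verify(mac_key, record) == b"opaque-ciphertext-bytes"

# A single flipped bit is caught by the MAC check.
tampered = bytes([record[0] ^ 1]) + record[1:]
try:
    verify(mac_key, tampered)
except ValueError:
    print("tamper detected")
```
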
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7A6uMNALwNJtA0GSKOkxOL/5c9692baac1ab202e0e4d7f79e4ae8f2/image2.png" />
            
            </figure><p>AEAD Mode</p><p>In TLS 1.3, all the troublesome ciphers and cipher modes have been removed. You can no longer use CBC-mode ciphers or insecure stream ciphers such as RC4. The only type of symmetric crypto allowed in TLS 1.3 is a new construction called <a href="/it-takes-two-to-chacha-poly/">AEAD (authenticated encryption with additional data)</a>, which combines encryption and integrity into one seamless operation.</p>
    <div>
      <h3>Fixing digital signatures</h3>
      <a href="#fixing-digital-signatures">
        
      </a>
    </div>
    <p>Another important part of TLS is authentication. In every connection, the server authenticates itself to the client using a digital certificate, which has a public key. In RSA-encryption mode, the server proves its ownership of the private key by decrypting the pre-main secret and computing a MAC over the transcript of the conversation. In Diffie-Hellman mode, the server proves ownership of the private key using a digital signature. If you’ve been following this blog post so far, it should be easy to guess that this was done incorrectly too.</p>
    <div>
      <h4>PKCS#1v1.5</h4>
      <a href="#pkcs-1v1-5">
        
      </a>
    </div>
    <p>Daniel Bleichenbacher has made a living identifying problems with RSA in TLS. In 2006, he devised a pen-and-paper attack against RSA signatures as used in TLS. It was later discovered that major TLS implementations including those of NSS and OpenSSL <a href="https://www.ietf.org/mail-archive/web/openpgp/current/msg00999.html">were vulnerable to this attack</a>. This issue again had to do with how difficult it is to implement padding correctly, in this case, the PKCS#1 v1.5 padding used in RSA signatures. In TLS 1.3, PKCS#1 v1.5 is removed in favor of the newer design <a href="https://en.wikipedia.org/wiki/Probabilistic_signature_scheme">RSA-PSS</a>.</p>
    <div>
      <h4>Signing the entire transcript</h4>
      <a href="#signing-the-entire-transcript">
        
      </a>
    </div>
    <p>We described earlier how the server uses a digital signature to prove that the key exchange hasn’t been tampered with. In TLS 1.2 and earlier, the server’s signature only covers part of the handshake. The other parts of the handshake, specifically the parts that are used to negotiate which symmetric cipher to use, are not signed by the private key. Instead, a symmetric MAC is used to ensure that the handshake was not tampered with. This oversight resulted in a number of high-profile vulnerabilities (FREAK, LogJam, etc.). In TLS 1.3 these are prevented because the server signs the entire handshake transcript.</p>
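<p>The idea of signing the whole transcript can be pictured as feeding every handshake message into one running hash and signing the final digest. In the sketch below, HMAC stands in for the server's real certificate signature (RSA-PSS or ECDSA) purely to keep the example self-contained; the message strings are illustrative.</p>

```python
# Transcript-hash sketch: every handshake message is absorbed into one
# running hash, and the server signs the final digest. Any modification
# to any message changes the digest, so the signature stops verifying.
import hmac, hashlib

def transcript_hash(messages) -> bytes:
    h = hashlib.sha256()
    for m in messages:
        h.update(m)                  # order and content both matter
    return h.digest()

handshake = [b"ClientHello: ciphers=[TLS_AES_128_GCM_SHA256]",
             b"ServerHello: cipher=TLS_AES_128_GCM_SHA256",
             b"Certificate", b"key-share-data"]

signing_key = b"s" * 32              # stand-in for the certificate private key
signature = hmac.new(signing_key, transcript_hash(handshake),
                     hashlib.sha256).digest()

# A downgrade attacker who rewrites the advertised cipher list changes the
# transcript hash, so the server's signature no longer matches.
tampered = [b"ClientHello: ciphers=[EXPORT-RC4-40]"] + handshake[1:]
assert transcript_hash(tampered) != transcript_hash(handshake)
```
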
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2gJewngm3kPtCgXDvl7O4q/340d28439e4eaac4cd176359dfa19900/image1.png" />
            
            </figure><p>The FREAK, LogJam and CurveSwap attacks took advantage of two things:</p><ol><li><p>the fact that intentionally weak ciphers from the 1990s (called export ciphers) were still supported in many browsers and servers, and</p></li><li><p>the fact that the part of the handshake used to negotiate which cipher was used was not digitally signed.</p></li></ol><p>The on-path attacker can swap out the supported ciphers (or supported groups, or supported curves) from the client with an easily crackable choice that the server supports. They then break the key and forge two finished messages to make both parties think they’ve agreed on a transcript.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6PHIoZN6sxu78eUdoLWcbz/4f4a1fbbff72785aa2be24c5c9872e8f/image13.png" />
            
            </figure><p>These attacks are called downgrade attacks, and they allow attackers to force two participants to use the weakest cipher supported by both parties, even if more secure ciphers are supported. In this style of attack, the perpetrator sits in the middle of the handshake and changes the list of supported ciphers advertised from the client to the server to only include weak export ciphers. The server then chooses one of the weak ciphers, and the attacker figures out the key with a brute-force attack, allowing the attacker to forge the MACs on the handshake. In TLS 1.3, this type of downgrade attack is impossible because the server now signs the entire handshake, including the cipher negotiation.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/P4S0oZBnuJvkG23ljrAmN/a658f4a88dddcf2019fa22567e150a53/image14.png" />
            
            </figure>
    <div>
      <h3>Better living through simplification</h3>
      <a href="#better-living-through-simplification">
        
      </a>
    </div>
    <p>TLS 1.3 is a much more elegant and secure protocol with the removal of the insecure features listed above. This hedge-trimming allowed the protocol to be simplified in ways that make it easier to understand, and faster.</p>
    <div>
      <h4>No more take-out menu</h4>
      <a href="#no-more-take-out-menu">
        
      </a>
    </div>
    <p>In previous versions of TLS, the main negotiation mechanism was the ciphersuite. A ciphersuite encompassed almost everything that could be negotiated about a connection:</p><ul><li><p>type of certificates supported</p></li><li><p>hash function used for deriving keys (e.g., SHA1, SHA256, ...)</p></li><li><p>MAC function (e.g., HMAC with SHA1, SHA256, …)</p></li><li><p>key exchange algorithm (e.g., RSA, ECDHE, …)</p></li><li><p>cipher (e.g., AES, RC4, ...)</p></li><li><p>cipher mode, if applicable (e.g., CBC)</p></li></ul><p>Ciphersuites in previous versions of TLS had grown into a monstrous alphabet soup. Commonly used ciphersuites include DHE-RC4-MD5 and ECDHE-ECDSA-AES-GCM-SHA256. Each ciphersuite was represented by a code point in a table maintained by an organization called the Internet Assigned Numbers Authority (IANA). Every time a new cipher was introduced, a new set of combinations needed to be added to the list. This resulted in a combinatorial explosion of code points representing every valid choice of these parameters. It had become a bit of a mess.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2W8o2jcAOgb3EUcGitVN8b/59d24160803d908ed4417494d57ea288/image8.png" />
            
            </figure><p>TLS 1.2</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/171emSynzRJ1MSpJv08jc1/18c165eb9c379b48418a5f25a47131e1/image16.png" />
            
            </figure><p></p><p>TLS 1.3</p><p>TLS 1.3 removes many of these legacy features, allowing for a clean split between three orthogonal negotiations:</p><ul><li><p>Cipher + HKDF Hash</p></li><li><p>Key Exchange</p></li><li><p>Signature Algorithm</p></li></ul>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5eHObkLXwOPPw9MEbaxSsc/8e20132ec65ebb83b1e43528711fe05d/image6.png" />
            
            </figure><p>This simplified ciphersuite negotiation and radically reduced set of negotiation parameters open up a new possibility: the TLS 1.3 handshake latency can drop from two round-trips to just one, providing the performance boost that will help TLS 1.3 become popular and widely adopted.</p>
    <div>
      <h3>Performance</h3>
      <a href="#performance">
        
      </a>
    </div>
    <p>When establishing a new connection to a server that you haven’t seen before, it takes two <a href="https://www.cloudflare.com/learning/cdn/glossary/round-trip-time-rtt/">round-trips</a> before data can be sent on the connection. This is not particularly noticeable in locations where the server and client are geographically close to each other, but it can make a big difference on mobile networks where latency can be as high as 200ms, an amount that is noticeable for humans.</p>
    <div>
      <h3>1-RTT mode</h3>
      <a href="#1-rtt-mode">
        
      </a>
    </div>
    <p>TLS 1.3 now has a radically simpler cipher negotiation model and a reduced set of key agreement options (no RSA, no user-defined DH parameters). This means that every connection will use a DH-based key agreement and the parameters supported by the server are likely easy to guess (ECDHE with X25519 or P-256). Because of this limited set of choices, the client can simply choose to send DH key shares in the first message instead of waiting until the server has confirmed which key shares it is willing to support. That way, the server can learn the shared secret and send encrypted data one round trip earlier. Chrome’s implementation of TLS 1.3, for example, sends an X25519 keyshare in the first message to the server.</p>
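<p>TLS 1.3 is now broadly available in mainstream stacks. As one example, Python's standard <code>ssl</code> module (3.7 and later) can pin a connection to TLS 1.3 so that nothing older is ever negotiated:</p>

```python
# Requiring TLS 1.3 for outbound connections with Python's standard ssl
# module (Python 3.7+). The context refuses to negotiate anything older.
import ssl

ctx = ssl.create_default_context()            # cert verification on by default
ctx.minimum_version = ssl.TLSVersion.TLSv1_3

# ctx.wrap_socket(sock, server_hostname="example.com") would now perform a
# one-round-trip TLS 1.3 handshake, or fail if the server can't speak 1.3.
print(ctx.minimum_version == ssl.TLSVersion.TLSv1_3)  # -> True
```
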
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3E3tuAB7cL1jf7HXHLegge/461301a79e282e3034a6acc1bb537e49/image3.png" />
            
            </figure><p></p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5xa8AA1zO4jZ4yjPcIukcM/a29464a13527710055cd6031cae54c92/image15.png" />
            
            </figure><p>In the rare situation that the server does not support one of the key shares sent by the client, the server can send a new message, the HelloRetryRequest, to let the client know which groups it supports. Because the list has been trimmed down so much, this is not expected to be a common occurrence.</p>
    <div>
      <h3>0-RTT resumption</h3>
      <a href="#0-rtt-resumption">
        
      </a>
    </div>
    <p>A further optimization was inspired by the <a href="https://docs.google.com/document/u/1/d/1g5nIXAIkN_Y-7XJW5K45IblHd_L2f5LTaDUDwvZ5L6g/edit">QUIC protocol</a>. It lets clients send encrypted data in their first message to the server, resulting in no additional latency cost compared to unencrypted HTTP. This is a big deal, and once TLS 1.3 is widely deployed, the encrypted web is sure to feel much snappier than before.</p><p>In TLS 1.2, there are two ways to resume a connection, <a href="/tls-session-resumption-full-speed-and-secure/">session ids and session tickets</a>. In TLS 1.3 these are combined to form a new mode called PSK (pre-shared key) resumption. The idea is that after a session is established, the client and server can derive a shared secret called the “resumption main secret”. This can either be stored on the server with an id (session id style) or encrypted by a key known only to the server (session ticket style). This session ticket is sent to the client and redeemed when resuming a connection.</p><p>For resumed connections, both parties share a resumption main secret so key exchange is not necessary except for providing forward secrecy. The next time the client connects to the server, it can take the secret from the previous session and use it to encrypt application data to send to the server, along with the session ticket. Something as amazing as sending encrypted data on the first flight does come with its downsides.</p>
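<p>TLS 1.3 derives all of its secrets, including the resumption secret described above, with HKDF (RFC 5869), an extract-then-expand construction built from HMAC. A minimal HKDF-SHA256 sketch, checked against the inputs of the RFC's first test case:</p>

```python
# Minimal HKDF (RFC 5869): the extract-then-expand key derivation that
# TLS 1.3 uses for all of its secrets, built entirely from HMAC-SHA256.
import hmac, hashlib

def hkdf_extract(salt: bytes, ikm: bytes) -> bytes:
    # An empty salt defaults to HashLen zero bytes, per the RFC.
    return hmac.new(salt or b"\x00" * 32, ikm, hashlib.sha256).digest()

def hkdf_expand(prk: bytes, info: bytes, length: int) -> bytes:
    out, block, counter = b"", b"", 1
    while len(out) < length:
        block = hmac.new(prk, block + info + bytes([counter]),
                         hashlib.sha256).digest()
        out += block
        counter += 1
    return out[:length]

# Inputs from RFC 5869, test case 1 (SHA-256).
ikm = bytes.fromhex("0b" * 22)
salt = bytes.fromhex("000102030405060708090a0b0c")
info = bytes.fromhex("f0f1f2f3f4f5f6f7f8f9")
okm = hkdf_expand(hkdf_extract(salt, ikm), info, 42)
assert okm.hex().startswith("3cb25f25")   # matches the RFC's expected OKM
```
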
    <div>
      <h3>Replayability</h3>
      <a href="#replayability">
        
      </a>
    </div>
    <p>There is no interactivity in 0-RTT data. It’s sent by the client, and consumed by the server without any interaction. This is great for performance, but comes at a cost: replayability. If an attacker captures a 0-RTT packet that was sent to a server, they can replay it and there’s a chance that the server will accept it as valid. This can have interesting negative consequences.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4aI0oRVRPjH8lmqKPfo2Uu/66828a933209d66d8f8ac0db0d77d54d/0-rtt-attack-_2x.png" />
            
            </figure><p>An example of dangerous replayed data is anything that changes state on the server. If you increment a counter, perform a database transaction, or do anything that has a permanent effect, it’s risky to put it in 0-RTT data.</p><p>As a client, you can try to protect against this by only putting “safe” requests into the 0-RTT data. In this context, “safe” means that the request won’t change server state. In HTTP, different methods are supposed to have different semantics. HTTP GET requests are supposed to be safe, so a browser can usually protect HTTPS servers against replay attacks by only sending GET requests in 0-RTT. Since most page loads start with a GET of “/”, this results in faster page load times.</p><p>Problems start to happen when data sent in 0-RTT is used for state-changing requests. To help protect against this failure case, TLS 1.3 also includes the time elapsed value in the session ticket. If the client’s reported value diverges too much from the server’s own measurement, the client is either approaching the speed of light, or the value has been replayed. In either case, it’s prudent for the server to reject the 0-RTT data.</p><p>For more details about <a href="/introducing-0-rtt/">0-RTT, and the improvements to session resumption</a> in TLS 1.3, check out this previous blog post.</p>
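<p>The elapsed-time check described above can be sketched as a simple comparison. The function name and tolerance window below are illustrative, not taken from the specification:</p>

```python
# Ticket-age freshness sketch: the server compares the age the client
# claims for its session ticket against the age computed from the
# server's own clock, and rejects 0-RTT data when the two diverge.

def accept_0rtt(client_ticket_age_ms: int, server_ticket_age_ms: int,
                window_ms: int = 10_000) -> bool:
    """Reject stale (likely replayed) 0-RTT data, tolerating clock skew."""
    return abs(client_ticket_age_ms - server_ticket_age_ms) <= window_ms

# Fresh flight: client and server roughly agree on the ticket's age.
assert accept_0rtt(5_000, 5_200)

# Replayed flight: captured earlier and resent an hour later, so the
# server's view of the ticket age is far larger than what the packet claims.
assert not accept_0rtt(5_000, 3_605_000)
```
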
    <div>
      <h3>Deployability</h3>
      <a href="#deployability">
        
      </a>
    </div>
    <p>TLS 1.3 was a radical departure from TLS 1.2 and earlier, but in order to be deployed widely, it has to be backwards compatible with existing software. One of the reasons TLS 1.3 has taken so long to go from draft to final publication was the fact that some existing software (namely middleboxes) wasn’t playing nicely with the new changes. Even minor changes to the TLS 1.3 protocol that were visible on the wire (such as eliminating the redundant ChangeCipherSpec message, bumping the version from 0x0303 to 0x0304) ended up causing connection issues for some people.</p><p>Despite the fact that future flexibility was built into the TLS spec, some implementations made incorrect assumptions about how to handle future TLS versions. The phenomenon responsible for this brittleness is called <i>ossification</i> and I explore it more fully in the context of TLS in my previous post about <a href="/why-tls-1-3-isnt-in-browsers-yet/">why TLS 1.3 isn’t deployed yet</a>. To accommodate these changes, TLS 1.3 was modified to look a lot like TLS 1.2 session resumption (at least on the wire). This resulted in a much more functional, but less aesthetically pleasing protocol. This is the price you pay for upgrading one of the most widely deployed protocols online.</p>
    <div>
      <h3>Conclusions</h3>
      <a href="#conclusions">
        
      </a>
    </div>
    <p>TLS 1.3 is a modern security protocol built with modern tools like <a href="http://tls13tamarin.github.io/TLS13Tamarin/">formal</a> <a href="https://eprint.iacr.org/2016/081">analysis</a> that retains its backwards compatibility. It has been tested widely and iterated upon using real world deployment data. It’s a cleaner, faster, and more secure protocol ready to become the de facto two-party encryption protocol online. Draft 28 of TLS 1.3 is enabled by default for <a href="/you-get-tls-1-3-you-get-tls-1-3-everyone-gets-tls-1-3/">all Cloudflare customers</a>, and we will be rolling out the final version soon.</p><p>Publishing TLS 1.3 is a huge accomplishment. It is one of the best recent examples of how it is possible to take 20 years of deployed legacy code and change it on the fly, resulting in a better Internet for everyone. TLS 1.3 has been debated and analyzed for the last three years and it’s now ready for prime time. Welcome, RFC 8446.</p> ]]></content:encoded>
            <category><![CDATA[Encryption]]></category>
            <category><![CDATA[TLS]]></category>
            <category><![CDATA[TLS 1.3]]></category>
            <category><![CDATA[HTTPS]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[Research]]></category>
            <category><![CDATA[Cryptography]]></category>
            <guid isPermaLink="false">2sBEBduE1Y7lYRV2e70E5m</guid>
            <dc:creator>Nick Sullivan</dc:creator>
        </item>
        <item>
            <title><![CDATA[Going Proactive on Security: Driving Encryption Adoption Intelligently]]></title>
            <link>https://blog.cloudflare.com/being-proactive/</link>
            <pubDate>Tue, 24 Jul 2018 17:32:43 GMT</pubDate>
            <description><![CDATA[ It's no secret that Cloudflare operates at a huge scale. Cloudflare provides security and performance to over 9 million websites all around the world, from small businesses and WordPress blogs to Fortune 500 companies. That means one in every 10 web requests goes through our network. ]]></description>
            <content:encoded><![CDATA[ <p>It's no secret that Cloudflare operates at a huge scale. Cloudflare provides security and performance to over 9 million websites all around the world, from small businesses and WordPress blogs to Fortune 500 companies. That means one in every 10 web requests goes through our network.</p><p>However, hidden behind the scenes, we offer support in using our platform to all our customers - whether they're on our free plan or on our Enterprise offering. This blog post dives into some of the technology that helps make this possible and how we're using it to drive encryption and build a better web.</p>
    <div>
      <h3>Why Now?</h3>
      <a href="#why-now">
        
      </a>
    </div>
    <p>Recently, web browser vendors have been working on extending encryption on the internet. Traditionally they would use positive indicators to mark encrypted traffic as secure; when traffic was served securely over HTTPS, a green padlock in your browser would indicate that this was the case. In moving to standardise encryption online, Google Chrome have been leading the charge in marking insecure page loads as "Not Secure". Today, this UI change has been pushed out to all Google Chrome users globally for all websites: any website loaded over HTTP will be marked as insecure.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4kSP68dcfVP7ZePmZVcX5m/52364a548f6f0540eb2b39d04edc41ac/chrome68.png" />
            
            </figure><p>That's not all though; all resources loaded by a website need to be loaded over HTTPS and such sites need to be configured properly to avoid mixed-content warnings, not to mention correctly configuring secure cryptography at the web server. Cloudflare helped bring widespread adoption of HTTPS to the internet by offering <a href="https://www.cloudflare.com/application-services/products/ssl/">free-of-charge SSL certificates</a>; in doing so we've become experts at knowing where web developers trip up in configuring HTTPS on their websites. HTTPS is now important for everyone who builds on the web, not just those with an interest in cryptography.</p>
    <div>
      <h3>Meet HelperBot</h3>
      <a href="#meet-helperbot">
        
      </a>
    </div>
    <p>In recent months, we’ve taken this expertise to help our Cloudflare customers avoid common mistakes. One of the things my team and I have been building is intelligent systems which automatically triage support tickets and present relevant debugging information upfront to the agent assigned to the ticket.</p><p>We use a custom-built Natural Language Processing model to determine the issues related to what the customer is discussing, and then we run technical tests in a Chain-of-Responsibility (with the most relevant to the customer running first) to determine what's going wrong. We then automatically triage the ticket and present this information to the support agent in the ticket.</p><p>Here's an example of a piece of the information we present upfront:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5NpVZj6VaEibiI2EjHoWwE/e0d7c8520532f96ff1770fb7a928f58a/Screen-Shot-2018-07-20-at-22.32.15.png" />
            
            </figure><p>Whilst we initially built automated debugging tests by hand, we soon used Search-Based Software Engineering strategies to generate debugging automations from various data points (such as the underlying technologies powering a site, their configuration or their error rates). When we detect anomalies, we are able to present this information upfront to our support agents to reduce the manual debugging they must conduct. In essence, we are able to get the software to write itself from test behaviour, within reason.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7DyM6G9zIBhPfuHQKGLrYV/700d3be7e165f2bb67f111e786aca410/Screen-Shot-2018-07-20-at-22.32.26.png" />
            
            </figure><p>Whilst this data is mostly used internally, we are starting to A/B test new versions of our support ticket submission form which present a subset of this information upfront to users before they write in to us, allowing them to find the answers to their problems quicker.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2Ai05dz42OOzBvTPyZbKCB/f5a3457bf73ef074f847680ab19c31e3/Screen-Shot-2018-07-20-at-22.42.01.png" />
            
            </figure>
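The chain-of-responsibility triage described above can be sketched as follows. This is a minimal illustration only; the handler names, ticket shape, and thresholds are invented for the example and are not HelperBot's actual internals:

```javascript
// Minimal chain-of-responsibility sketch: each handler runs one technical
// test, and the chain is ordered so the checks most relevant to the ticket
// (per the NLP classifier) run first. All names here are illustrative.
class DiagnosticHandler {
  constructor(name, run) {
    this.name = name; // label surfaced to the support agent
    this.run = run;   // returns a finding string, or null if the test passes
    this.next = null;
  }
  setNext(handler) {
    this.next = handler;
    return handler; // allows chaining: a.setNext(b).setNext(c)
  }
  handle(ticket) {
    const finding = this.run(ticket);
    if (finding) return { check: this.name, finding };
    return this.next ? this.next.handle(ticket) : null; // fall through
  }
}

const mixedContent = new DiagnosticHandler('mixed-content', t =>
  t.insecureSubresources > 0
    ? `${t.insecureSubresources} subresources still load over HTTP`
    : null
);
const redirectLoop = new DiagnosticHandler('redirect-loop', t =>
  t.redirectChainLength > 10 ? 'redirect chain never terminates' : null
);
mixedContent.setNext(redirectLoop);

// A ticket with a clean mixed-content scan but a looping redirect:
const triaged = mixedContent.handle({
  insecureSubresources: 0,
  redirectChainLength: 14,
});
// triaged.check === 'redirect-loop'
```

Ordering the chain by the classifier's relevance ranking means the finding the agent sees first is the one most likely to match what the customer actually wrote in about.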
    <div>
      <h3>Being Proactive About Security</h3>
      <a href="#being-proactive-about-security">
        
      </a>
    </div>
    <p>To help drive adoption of a more secure internet - and drive down common misconfigurations of SSL - we have started testing emailing customers proactively about Mixed Content errors and Redirect Loops associated with HTTPS web server misconfigurations.</p><p>By joining forces with our Marketing team, we were able to run an ongoing campaign of testing user behaviour to proactive security advice. Users receive messages similar to the one below.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7iIYIvaExMtNBkBPxwqdyU/b432461153c27b5dead0a9773320d300/Screen-Shot-2018-07-20-at-22.49.18.png" />
            
            </figure><p>With this capability, we decided to expose the functionality to a wider audience, including those not already using Cloudflare.</p>
    <div>
      <h3>SSL Test Tool (Powered by HelperBot-External)</h3>
      <a href="#ssl-test-tool-powered-by-helperbot-external">
        
      </a>
    </div>
    
            <figure>
            <a href="https://www.cloudflare.com/lp/ssl-test?utm_medium=website&amp;utm_source=blog&amp;utm_campaign=chrome68">
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3z3r2uTJBYPDFo2AXJ2Lla/bcdc986185ffaff6959524f32943e3b0/Screen-Shot-2018-07-21-at-00.53.26.png" />
            </a>
            </figure><p>To help website owners make the transition to HTTPS, we've launched <a href="https://www.cloudflare.com/lp/ssl-test?utm_medium=website&amp;utm_source=blog&amp;utm_campaign=chrome68">the SSL Test Tool</a>. We internally codenamed the backend HelperBot-External, after the internal HelperBot service. We decided to take a subset of the SSL tests we use internally and allow anyone to run a basic version of the scan on their own site. This helps users understand what they need to do to move their site to HTTPS by detecting the most common issues. In doing so, we seek to help users who are struggling to get over the line in enabling HTTPS on their sites by providing them with dynamic, plain-English guidance.</p><p>The tool runs 12 tests across three key categories of errors: HTTPS Disabled, Client Errors and Cryptography Errors. Unlike other tools, these tests are based on the questions we see real users ask about their SSL configuration and the tasks they most struggle with. This is a tool designed to support all web developers in enabling HTTPS, not just those with an interest in cryptography. For example, by educating users about mixed content errors, we are able to make the case for them enabling HTTP Strict Transport Security, thereby <a href="https://www.cloudflare.com/learning/security/glossary/website-security-checklist/">improving the security practices</a> they adopt.</p><p>Further, these tests are available to everyone. We believe it’s important that the entire Internet be safer, not only for our customers and their visitors (although, admittedly, Cloudflare’s SSL and crypto features make it very simple to be HTTPS-ready).</p>
    <div>
      <h3>Conclusion: Just the Beginning</h3>
      <a href="#conclusion-just-the-beginning">
        
      </a>
    </div>
    <p>As we grow our intelligence capabilities, we do so to provide better performance and security to our customers. We want to build a better internet and make our users more successful on our platform. Whilst there's still plenty of ground left to cover in building out our intelligent capability for supporting customers, we're developing rapidly and focussed on using those skills to improve the things our customers care about.</p> ]]></content:encoded>
            <category><![CDATA[HTTPS]]></category>
            <category><![CDATA[Chrome]]></category>
            <category><![CDATA[SSL]]></category>
            <category><![CDATA[Support]]></category>
            <category><![CDATA[Security]]></category>
            <guid isPermaLink="false">7ipQzFpDytxabYUE49T7jE</guid>
            <dc:creator>Junade Ali</dc:creator>
        </item>
        <item>
            <title><![CDATA[Today, Chrome Takes Another Step Forward in Addressing the Design Flaw That is an Unencrypted Web]]></title>
            <link>https://blog.cloudflare.com/today-chrome-takes-another-step-forward-in-addressing-the-design-flaw-that-is-an-unencrypted-web/</link>
            <pubDate>Tue, 24 Jul 2018 15:04:00 GMT</pubDate>
            <description><![CDATA[ I still remember my first foray onto the internet as a university student back in the mid 90's. It was a simpler time back then, of course; we weren't doing our personal banking or our tax returns or handling our medical records so encrypting the transport layer wasn't exactly a high priority.  ]]></description>
            <content:encoded><![CDATA[ <p><i>The following is a guest post by Troy Hunt, award-winning </i><a href="https://www.troyhunt.com/about/"><i>security expert</i></a><i>, </i><a href="https://www.troyhunt.com/"><i>blogger</i></a><i>, and Pluralsight author. He’s also the creator of the popular </i><a href="https://haveibeenpwned.com/"><i>Have I been pwned?</i></a><i>, the free aggregation service that helps the owners of over 5 billion accounts impacted by data breaches.</i></p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5KzdaWJzeTAjCK93nkKTlP/47b50fe081c28d84805f57ff82036494/chrome-68-troy-hhunt-quote_0.75x.png" />
            
            </figure><p>I still clearly remember my first foray onto the internet as a university student back in the mid 90's. It was a simpler online time back then, of course; we weren't doing our personal banking or our tax returns or handling our medical records so the whole premise of encrypting the transport layer wasn't exactly a high priority. In time, those services came along and so did the need to have some assurances about the confidentiality of the material we were sending around over other people's networks and computers. SSL as it was at the time was costly, but hey, banks and the like could absorb that given the nature of their businesses. However, at the time, there were all sorts of problems with the premise of serving traffic securely ranging from the cost of certs to the effort involved in obtaining and configuring them through to the performance hit on the infrastructure. We've spent the last couple of decades fixing these shortcomings and subsequently, driving site owners towards a more secure web. Today represents just one more step in that journey: as of today, Chrome is flagging all non-secure connections as... not secure!</p><p>I want to delve into the premise of this a little deeper because certainly there are those who question the need for the browser to be so shouty about a lack of encryption. I particularly see this point of view expressed as it relates to sites without the need for confidentiality, for example a static site that collects no personal data. But let me set the stage for this blog post because we're actually addressing a very fundamental problem here:</p>
    <div>
      <h3>The push for HTTPS is merely addressing a design flaw with the original, unencrypted web.</h3>
      <a href="#the-push-for-https-is-merely-addressing-a-design-flaw-with-the-original-unencrypted-web">
        
      </a>
    </div>
    <p>I mean think about it - we've been plodding along standing up billions of websites and usually having no idea whether requests are successfully reaching the correct destination, whether they've been observed, tampered with, logged or otherwise mishandled somewhere along the way. We'd <i>never</i> sit down and design a network like this today but as with so many aspects of the web, we're still dealing with the legacy of decisions made in a very different time.</p><p>So back to Chrome for a moment and the "Not secure" visual indicator. When I run training on HTTPS, I load up a website in the browser over a secure connection and I ask the question - "How do we know this connection is secure"? It's a question usually met by confused stares as we literally see the word "Secure" sitting up next to the address bar. We know the connection is secure because the browser tells us this explicitly. Now, let's try it with a site loaded over an insecure connection - "How do we know this connection is not secure"? And the penny drops because the answer is always "We know it's not secure because it doesn't tell us that it is secure"! Isn't that an odd inversion? <i>Was</i> an odd inversion because as of today, both secure and non-secure connections get the same visual treatment so finally, we have parity.</p><p>But is parity what we actually want? Think back to the days when Chrome didn't tell you an insecure connection wasn't secure (ah, isn't it nice that's in the past already?!); browsers could get away with this <i>because that was the normal state!</i> Why explicitly say anything when the connection is "normal"? But now we're changing what "normal" means and in the future that means we'll be able to apply the same logic as Chrome used to: visual indicators for the normal state won't be necessary or in other words, we won't need to say "secure" any more. Instead, we can focus on the messaging around deviations from normal, namely connections that aren't secure. 
Google has already flagged that we'll see this behaviour in the future too, it's just a matter of time.</p><p>Let's take a moment to reflect on what that word "normal" means as it relates to secure comms on the internet because it's something that changes over time. A perfect example of that is <a href="https://scotthelme.co.uk/how-widely-used-are-security-based-http-response-headers/">Scott Helme's six-monthly Alexa Top 1M report</a>. A couple of times a year, Scott publishes stats on the adoption of a range of different security constructs by the world's largest websites. One of those security constructs is the use of HTTPS or more specifically, sites that automatically redirect non-secure requests to the secure scheme. In that report above, he found that 6.7% of sites did this in August 2015. Let's have a look at just how quickly that number has changed and for ease of legibility, I'll list them all below followed by the change from the previous scan 6 months earlier:</p><ul><li><p>Aug 2015: 6.7%</p></li><li><p>Feb 2016: 9.4% (+42%)</p></li><li><p>Aug 2016: 13.8% (+46%)</p></li><li><p>Feb 2017: 20.0% (+45%)</p></li><li><p>Aug 2017: 30.8% (+48%)</p></li><li><p>Feb 2018: 38.4% (+32%)</p></li></ul><p>That's an <i>astonishingly</i> high growth rate, pretty much doubling every 12 months. We can't sustain that rate forever, of course, but depending on how you look at it, the numbers are even higher than that. <a href="https://docs.telemetry.mozilla.org/datasets/other/ssl/reference.html">Firefox's telemetry</a> suggests that as of today, 73% of all requests are served over a secure HTTPS connection. That number is much higher than Scott's due to the higher prevalence of the world's largest websites implementing HTTPS more frequently than the smaller ones. In fact, Scott's own figures graphically illustrate this:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5vArpeYnYlrbUxOeOqrOP9/ac69cc49464fb3b115d13f51098a5451/Troy_Hunt_Image.png" />
            
            </figure><p>Each point on the graph is a cluster of 4,000 websites with the largest ones on the left and the smallest on the right. It's clear that well over half of the largest sites are doing HTTPS by default whilst the smallest ones are much closer to one quarter. This can be explained by the fact that larger services tend to be those that we've traditionally expected higher levels of security on; they're <a href="https://www.cloudflare.com/ecommerce/">e-commerce sites</a>, social media platforms, banks and so on. Paradoxically, those sites are also the ones that are less trivial to roll over to HTTPS whilst the ones to the right of the graph are more likely to literally be lunchtime jobs. Last month I produced <a href="https://httpsiseasy.com/">a free 4-part series called "HTTP Is Easy"</a> and part 1 literally went from zero HTTPS to full HTTPS across the entire site in 5 minutes. It took another 5 minutes to get a higher grade than what most banks have for their transport layer encryption. HTTPS really <i>is</i> easy!</p><p>Yet still, there remain those who are unconvinced that secure connections are always necessary. Content integrity, they argue, is really not that important, what can a malicious party actually do with a static site such as a blog anyway? Good question! In no particular order, <a href="https://securitywarrior9.blogspot.com/2018/06/cross-site-request-forgery-intex-router.html">they can inject script to modify the settings of vulnerable routers and hijack DNS</a>, <a href="https://blog.torproject.org/egypt-internet-censorship">inject cryptominers into the browser</a>, <a href="https://citizenlab.ca/2015/04/chinas-great-cannon/">weaponise people's browsers into a DDoS cannon</a> or <a href="https://beefproject.com/">serve malware or phishing pages to unsuspecting victims</a>. 
Just to really drive home the real-world risks, <a href="https://www.troyhunt.com/heres-why-your-static-website-needs-https/">I demo'd all those in a single video a couple of weeks ago</a>. Mind you, the sorts of sites whose owners are questioning the need for HTTPS are precisely the sorts of sites that tend to be 5-minute exercises to put behind Cloudflare, so regardless of debates about how necessary it is, the actual effort involved in doing it is usually negligible. Oh - and it'll give you access to HTTP/2 and Brotli compression which are both great for <a href="https://www.cloudflare.com/solutions/ecommerce/optimization/">performance</a> <i>and</i> only work over HTTPS, plus enable you to access <a href="https://developer.mozilla.org/en-US/docs/Web/Security/Secure_Contexts/features_restricted_to_secure_contexts">a whole range of browser features that are only available in secure contexts</a>.</p><p>Today is just one more correction in a series that's been running for some time now. In Jan last year it was both Chrome and Firefox flagging insecure pages accepting passwords or credit cards as not secure. In October Chrome began showing the same visual indicator when entering data into <i>any</i> non-secure form. In March this year Safari on iOS began showing "Not Secure" when entering text into an insecure login form. We all know what's happened today and as I flagged earlier, the future holds yet more changes as we move towards a more "secure by default" web. (Incidentally, note how it's multiple browser vendors driving this change, it's by no means solely Google's doing.)</p><p>Bit by bit, we're gradually fixing the design flaws of the web.</p><hr /><p><b>A Note from Cloudflare:</b> <i>In June, Troy authored a post entitled “</i><a href="https://www.troyhunt.com/https-is-easy/"><i>HTTPS is Easy!</i></a><i>,” which highlights the simplicity of converting a site to HTTPS with Cloudflare. 
It’s worth noting that, as indicated in his post, we were (pleasantly) surprised to see this series.</i></p><p><i>At Cloudflare, it’s our mission to build a better Internet, and a part of that is democratizing modern web technologies to everyone. This was the motivation for launching Universal SSL in 2014 - a move that made us the first company to offer SSL for free to anyone. With the release of Chrome 68, we want to continue making HTTPS easy, and have launched a free tool to help any website owner troubleshoot common problems with HTTPS configuration.</i></p>
    <div>
      <h4>Are you Chrome 68 ready? Check your website with our <a href="https://cfl.re/ssl-test">free SSL Test</a>.</h4>
      <a href="#are-you-chome-68-ready-check-your-website-with-our">
        
      </a>
    </div>
     ]]></content:encoded>
            <category><![CDATA[HTTPS]]></category>
            <category><![CDATA[Chrome]]></category>
            <category><![CDATA[SSL]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[Guest Post]]></category>
            <guid isPermaLink="false">5l5nzMLUg5cHi67NAHy0wK</guid>
            <dc:creator>Troy Hunt (Guest Author)</dc:creator>
        </item>
        <item>
            <title><![CDATA[T-25 days until Chrome starts flagging HTTP sites as "Not Secure"]]></title>
            <link>https://blog.cloudflare.com/chrome-not-secure-for-http/</link>
            <pubDate>Thu, 28 Jun 2018 13:00:00 GMT</pubDate>
            <description><![CDATA[ Less than one month from today, on July 23, Google will start prominently labeling any site loaded in Chrome without HTTPS as "Not Secure". ]]></description>
            <content:encoded><![CDATA[ <p>Less than one month from today, on July 24*, Google will start prominently labeling any site loaded in Chrome without HTTPS as "<b>Not Secure</b>".</p><p>When we <a href="/https-or-bust-chromes-plan-to-label-sites-as-not-secure/">wrote about</a> Google’s plans back in February, the percent of sites loaded over HTTPS clocked in at 69.7%. Just one year prior to that only 52.5% of sites were loaded using SSL/TLS—the encryption protocol behind HTTPS—so tremendous progress has been made.</p><p>Unfortunately, quite a few popular sites on the web still don’t support HTTPS (or fail to redirect insecure requests) and will soon be flagged by Google. I spent some time scanning the top one million sites, and here’s what I learned about the 946,039 reachable over plaintext (unencrypted) HTTP:</p>
            <figure>
            <a href="http://staging.blog.mrk.cfdata.org/content/images/2018/06/http-infographic.png">
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3zcX4tP4T6W84IPJfm4UUR/836a1bdd934582fb992cd507130e76a4/http-infographic.png" />
            </a>
            </figure><p>If you were to ask the operators of these sites why they don’t protect themselves and their visitors with HTTPS, the responses you’d get could be bucketed into the following three groups: "I don’t need it", "it’s difficult to do", or "it’s slow".</p><p>None of these are legitimate answers, but they’re common misconceptions so let’s take each in turn.</p>
    <div>
      <h3>Myth #1: "HTTPS is difficult to deploy"</h3>
      <a href="#myth-1-https-is-difficult-to-deploy">
        
      </a>
    </div>
    
    <div>
      <h4>For Individual Sites</h4>
      <a href="#for-individual-sites">
        
      </a>
    </div>
    <p>This was true… in the mid-1990s, when I placed my first SSL certificate order. All that has changed.</p><p>Back then, I was a high school student writing software for my friend’s mom’s company. We were getting ready to launch her website and I learned that we needed something called an "<a href="https://www.cloudflare.com/application-services/products/ssl/">SSL certificate</a>" to transact securely online, but I had no idea how to get one.</p><p>After a bit of research conducted over my blazingly fast <a href="https://en.wikipedia.org/wiki/USRobotics#/media/File:Fax_modem_antigo.jpg">US Robotics Sportster 14.4k</a> modem (we had recently upgraded from the relatively slow MultiTech 9600), I found out that we had to mail the company’s "original" Articles of Incorporation, emblazoned with the State Seal of Massachusetts and signed by company officers, along with a hefty check to some far away office. When I asked her what to put for my title, she shrugged and said "CTO" as that sounded more official and likely to get us approved. A week or so later, the CA <i>finally</i> emailed the certificate.</p><p>Thankfully, we’ve come a long way since then. Today, you can protect your site with HTTPS in a matter of seconds, for free, either by signing up for Cloudflare or using a CA such as Let’s Encrypt. If you use Cloudflare we’ll renew your certificate automatically, and store it within milliseconds of your users for optimal performance, using our <a href="https://www.cloudflare.com/network/">150+ data centers</a> around the world. As an added benefit, once we’re handling your SSL/TLS traffic, you can start using technologies like <a href="/tag/workers/">Cloudflare Workers</a> to implement any logic you want at the edge.</p>
    <div>
      <h4>For SaaS Providers</h4>
      <a href="#for-saas-providers">
        
      </a>
    </div>
    <p>While the "it’s difficult" excuse rings hollow for individual site operators, things do get a bit more challenging when you’re dealing with issuing (and regularly renewing) hundreds, thousands, or even millions of certificates. Such is the case for SaaS providers who write and deploy software on another company’s domain, e.g., <a href="https://blog.example.com">https://blog.example.com</a> or <a href="https://mystore.example">https://mystore.example</a>.</p><p>The reason that it can seem difficult to manage SSL certificates on behalf of other companies (putting aside performance and scale for a second), is that Certificate Authorities are only supposed to give out SSL certificates for hostnames where the requestor has "demonstrated control". Fortunately, methods such as HTTP-based validation, i.e., placing a random token on a well-defined path that the CA can access, have reduced the burden on the end-customer to a single step.</p><p>For those that want the benefits of having an edge provider like Cloudflare in front of their servers for acceleration and protection, our <a href="https://www.cloudflare.com/application-services/products/ssl-for-saas-providers/">SSL for SaaS</a> product reduces this process to a single API call. Alternatively, those that wish to handle their own issuance, renewal, certificate hosting, and DDoS protection, the HTTP validation method or <a href="https://github.com/ietf-wg-acme/acme/">ACME protocol</a> can be used.</p>
    <div>
      <h3>Myth #2: "I don’t need HTTPS"</h3>
      <a href="#myth-2-i-dont-need-https">
        
      </a>
    </div>
    <p>This argument is the most puzzling to me, especially <a href="http://this.how/googleAndHttp/">when spouted</a> by people who should know better. Even if you don’t care about performance—see myth #3 below—surely you care about the safety and privacy of those visiting your site.</p><p>Without HTTPS, anyone in the path between your visitor’s browser and your site or API can snoop on (or modify) your content without your consent. This includes <a href="https://en.wikipedia.org/wiki/Internet_censorship_and_surveillance_by_country">governments</a>, employers, and especially <a href="https://arstechnica.com/tech-policy/2013/04/how-a-banner-ad-for-hs-ok/">internet</a> <a href="http://forums.xfinity.com/t5/Customer-Service/Are-you-aware/td-p/3009551">service</a> <a href="https://thehackernews.com/2016/02/china-hacker-malware.html">providers</a>.</p><p>If you care about your users receiving your content unmodified and being safe from maliciously injected advertisements or malware, you care about—and must use—HTTPS.</p><p>Besides safety, there are additional benefits such as <a href="https://webmasters.googleblog.com/2014/08/https-as-ranking-signal.html">SEO</a> and access to new web features: increasingly, the major browser vendors such as Apple, Google, Mozilla, and Microsoft are <a href="https://bugzilla.mozilla.org/show_bug.cgi?id=1072859">restricting functionality</a> to only work over HTTPS. As for mobile apps, Google will soon block <a href="https://android-developers.googleblog.com/2018/03/previewing-android-p.html">unencrypted connections</a> by default, in their upcoming version of Android. Apple also <a href="https://developer.apple.com/videos/play/wwdc2016/706">announced</a> (and will hopefully soon follow through on their requirement) that apps must use HTTPS.</p>
    <div>
      <h3>Myth #3: "HTTPS is slow"</h3>
      <a href="#myth-3-https-is-slow">
        
      </a>
    </div>
    <p>Lastly, the other common myth about HTTPS is that it’s “slow”. This belief is a holdover from an era when SSL/TLS could actually have a negative performance impact on a site, but that's <a href="https://istlsfastyet.com/#cdn-paas">no longer the case</a> today. In fact, HTTPS is required to enable and enjoy the performance benefits of HTTP/2.</p>
            <figure>
            <a href="http://staging.blog.mrk.cfdata.org/content/images/2018/06/is-tls-fast-yet.png">
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4aiEaxmU31tRO3Uqb0F65H/27d5b717d21fc2a76e87981dc8d89b11/is-tls-fast-yet.png" />
            </a>
            </figure><p>Detractors typically think HTTPS is slow for two primary reasons: i) it takes marginally more CPU power to encrypt and decrypt data; and ii) establishing a TLS session takes two network round trips between the browser and the server.</p><p>Even with decade old hardware, SSL/TLS adds less than 1% of CPU load, as Adam Langley <a href="https://www.imperialviolet.org/2010/06/25/overclocking-ssl.html">explained</a> while debunking the HTTPS performance/cost myth. Today’s processors also come with instruction sets such as AES-NI, that help performance. Further, session resumption technologies reduce the TLS 1.2 overhead and TLS 1.3 aims to <a href="/introducing-0-rtt/">eliminate these round-trips</a> entirely.</p><p>When HTTPS content is served from the edge, typically 10–20 milliseconds away from your users in the case of Cloudflare, SSL/TLS enabled sites are incredibly fast and performant. And even when they are not served from an edge provider it bears repeating that SSL/TLS is not a performance burden! There’s really no excuse not to use it.</p>
    <div>
      <h4>Will my site show “Not Secure” on July <s>23</s> 24*?</h4>
      <a href="#will-my-site-show-not-secure-on-july-23-24">
        
      </a>
    </div>
    <p>To help you determine if your site is ready for July <s>23</s> 24*, we’ve built the handy widget shown at the top of the page. Simply type in your domain name (without explicitly specifying "http://" or "https://" to emulate what your visitors typically do) and hit enter.</p><p>Using a Cloudflare Worker, we’ll connect to your site and check to see if it’s redirected to a secure HTTPS link.</p>
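The check such a Worker performs can be sketched roughly as follows. The widget's actual code isn't shown in this post, and the `fetchFn` parameter is an assumption added so the sketch can be tested without network access:

```javascript
// Request the plain-HTTP origin without following redirects, then check
// whether the server immediately sends visitors to an https:// URL.
async function redirectsToHttps(hostname, fetchFn = fetch) {
  const res = await fetchFn(`http://${hostname}/`, { redirect: 'manual' });
  const isRedirect = res.status >= 301 && res.status <= 308;
  const location = res.headers.get('location') || '';
  return isRedirect && location.startsWith('https://');
}
```

A site that answers 200 over plain HTTP, or that redirects to another http:// URL, fails the check and will be flagged "Not Secure" by Chrome.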
    <div>
      <h4>How can I avoid my site showing "Not Secure"?</h4>
      <a href="#how-can-i-avoid-my-site-showing-not-secure">
        
      </a>
    </div>
    <p>If you'd like to avoid your website showing "Not Secure" in Chrome, all you need to do is obtain an SSL certificate and configure your site to redirect all traffic to HTTPS.</p><p>If you're using Cloudflare, we’ll take care of the SSL certificate order and renewal for you; take a look at Troy Hunt's excellent "HTTPS Is Easy!" video series here: <a href="https://httpsiseasy.com/">https://httpsiseasy.com/</a>. You should be sure to use the "Always use HTTPS" toggle to redirect HTTP visitors to HTTPS:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3DWy4aBDklTs1ptQPKr3G8/14e7ef9849af6fb0a5aae8b601a89dae/always-use-https.png" />
            
            </figure><p>Advanced users should also consider <a href="https://support.cloudflare.com/hc/en-us/articles/204183088-Does-Cloudflare-offer-HSTS-HTTP-Strict-Transport-Security-">using HSTS</a> to instruct the browser to always load your content over HTTPS, saving it a round trip (and page load time) on subsequent requests. And by turning on <a href="/fixing-the-mixed-content-problem-with-automatic-https-rewrites/">Automatic HTTPS Rewrites</a>, you can also rewrite any content that would normally be loaded over HTTP to use HTTPS (if available):</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/29wEVaj8lNfDcfAqpSq8Ms/02351641a576ec39139199c89402e345/automatic-https-rewrites.png" />
            
            </figure><p>If you're trying to protect your customers' vanity domains that are pointed to your SaaS application, <a href="https://www.cloudflare.com/ssl-for-saas-providers/">reach out</a>, and we can help you with this process.</p>
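<p>Conceptually, an automatic HTTPS rewrite is just a substitution over the markup before it is served. The toy Python sketch below is purely illustrative; Cloudflare's actual feature is more careful, rewriting only hosts known to support HTTPS:</p>

```python
import re

def rewrite_http_urls(html: str) -> str:
    # Rewrite http:// to https:// inside src/href attributes only,
    # leaving plain-text occurrences of "http://" untouched.
    return re.sub(r'\b(src|href)="http://', r'\1="https://', html)
```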
    <div>
      <h4>Want to help us secure the web with HTTPS?</h4>
      <a href="#want-to-help-us-secure-the-web-with-https">
        
      </a>
    </div>
    <p>The team that manages HTTPS and SSL certificate issuance at Cloudflare is hiring, in both Engineering and Product Management. Check out our open positions here:</p><ul><li><p><a href="https://boards.greenhouse.io/cloudflare/jobs/982340?gh_jid=982340">Product Manager, SSL/TLS and Crypto</a></p></li><li><p><a href="https://boards.greenhouse.io/cloudflare/jobs/589508?gh_jid=589508">Web Services Engineer</a></p></li><li><p><a href="https://boards.greenhouse.io/cloudflare/jobs/589507?gh_jid=589507">Security Software Engineer</a></p></li><li><p><a href="https://boards.greenhouse.io/cloudflare/jobs/936927?gh_jid=936927">Full Stack Engineer</a></p></li></ul><p><i>If you have a worker you'd like to share, or want to check out workers from other Cloudflare users, visit the </i><a href="https://community.cloudflare.com/tags/recipe-exchange"><i>“Recipe Exchange”</i></a><i> in the Workers section of the </i><a href="https://community.cloudflare.com/c/developers/workers"><i>Cloudflare Community Forum</i></a><i>.</i></p><p>* After this post was published Google pushed the release back one day, to the 24th.</p> ]]></content:encoded>
            <category><![CDATA[HTTPS]]></category>
            <category><![CDATA[Chrome]]></category>
            <category><![CDATA[Serverless]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <guid isPermaLink="false">2md7E1MxzPpLm5w37yYZFq</guid>
            <dc:creator>Patrick R. Donahue</dc:creator>
        </item>
        <item>
            <title><![CDATA[BGP leaks and cryptocurrencies]]></title>
            <link>https://blog.cloudflare.com/bgp-leaks-and-crypto-currencies/</link>
            <pubDate>Tue, 24 Apr 2018 22:31:13 GMT</pubDate>
            <description><![CDATA[ Over the last few hours, a dozen news stories have broken about how an attacker attempted (and perhaps managed) to steal cryptocurrencies using a BGP leak. ]]></description>
            <content:encoded><![CDATA[ <p>Over the last few hours, a <a href="https://www.forbes.com/sites/thomasbrewster/2018/04/24/a-160000-ether-theft-just-exploited-a-massive-blind-spot-in-internet-security/">dozen news stories have broken</a> about how an attacker attempted (and <a href="https://twitter.com/killeswagon/status/988795209361252357">perhaps managed</a>) to steal cryptocurrencies using a BGP leak.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4C9febUAV4dp0191tLDXmU/a8d8614a63ad59c6b83ee26c5ba23ae1/6818192898_c132e81824_z.jpg" />
            
            </figure><p><a href="https://creativecommons.org/licenses/by/2.0/">CC BY 2.0</a> <a href="https://www.flickr.com/photos/77519207@N02/6818192898/in/photolist-EGDA1W-Ga795-6yLrTS-22PPou3-gi3qi-8S6vb4-bov2cY-dgBNxk-ba28ar-ebQUDY-acXCjq-zZppue-j8nDM9-78GCT9-zFTmT1-zFT1ME-a8iKNR-6Hbzuk-bmMV3X-6Hbt1t-HkBYhJ-h7mEUc-8kza6J-inYagg-PUtWHj-cMHLr-g1zfvy-emgRCp-262Z5jD-CLumQP-M13Veh-ur2aSQ-68UJQ1">image</a> by <a href="https://www.flickr.com/photos/77519207@N02/">elhombredenegro</a></p>
    <div>
      <h3>What is BGP?</h3>
      <a href="#what-is-bgp">
        
      </a>
    </div>
    <p>The Internet is composed of routes. For our DNS resolver <a href="https://cloudflare-dns.com/"><b>1.1.1.1</b></a>, we tell the world that all the IPs in the range <code>1.1.1.0</code> to <code>1.1.1.255</code> can be accessed at any Cloudflare PoP.</p><p>People who do not have a <a href="/think-global-peer-local-peer-with-cloudflare-at-100-internet-exchange-points/">direct link to our routers</a> receive the route via transit providers, who will deliver packets to those addresses as they are connected to Cloudflare and the rest of the Internet.</p><p>This is the normal way the Internet operates.</p><p>There are authorities in charge of distributing IP addresses so that no two networks use the same address space: <a href="https://www.iana.org/">IANA</a> allocates address space to the five Regional Internet Registries (RIRs), <a href="https://www.ripe.net">RIPE NCC</a>, <a href="https://www.arin.net">ARIN</a>, <a href="https://www.lacnic.net">LACNIC</a>, <a href="https://www.apnic.net">APNIC</a> and <a href="https://www.afrinic.net">AFRINIC</a>, which distribute it onwards.</p>
    <div>
      <h3>What is a BGP leak?</h3>
      <a href="#what-is-a-bgp-leak">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2CRfX9GucawxrG5pGUXig7/2a352c32b739c1ed9c8006e63548f3fb/6385512087_802c680220_z.jpg" />
            
            </figure><p><a href="https://creativecommons.org/licenses/by/2.0/">CC BY 2.0</a> <a href="https://www.flickr.com/photos/magnus_d/6385512087/">image</a> by <a href="https://www.flickr.com/photos/magnus_d/">Magnus D</a></p><p>The broad definition of a BGP leak would be IP space that is announced by somebody not allowed by the owner of the space. When a transit provider picks up Cloudflare's announcement of <code>1.1.1.0/24</code> and announces it to the Internet, we allow them to do so. They also verify, using the RIR information, that only Cloudflare can announce it to them.</p><p>Checking all the announcements can get tricky, though, especially when there are <b>700,000+</b> routes on the Internet and chains of providers exchanging traffic with each other.</p><p>By their nature, route leaks are localized. The more locally connected you are, the smaller the risk of accepting a leaked route. In order to be preferred over a legitimate route, the leaked route has to either:</p><ul><li><p>Be a more specific prefix (<code>10.0.0.1/32</code> = 1 IP vs <code>10.0.0.0/24</code> = 256 IPs)</p></li><li><p>Have better metrics than a prefix of the same length (a shorter path, for instance)</p></li></ul><p>The cause of a BGP leak is usually a <b>configuration mistake</b>: a router suddenly announces the IPs it learned, or smaller prefixes used internally for traffic engineering suddenly become public.</p><p>But sometimes it is done with <b>malicious intent</b>. Traffic to the prefix can be re-routed through the attacker's network in order to passively analyze the data, or somebody can set up a service that replies illegitimately in its place. This has <a href="/why-google-went-offline-today-and-a-bit-about/">happened before</a>.</p>
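<p>The preference for more-specific prefixes is easy to demonstrate. Below is a minimal longest-prefix-match lookup in Python, using a hypothetical table with a legitimate /23 and a leaked, more-specific /24 (the labels are ours, for illustration only):</p>

```python
import ipaddress

# Hypothetical routing table: the leaked /24 sits inside the
# address space of the legitimate /23.
ROUTES = {
    ipaddress.ip_network("205.251.192.0/23"): "legitimate origin",
    ipaddress.ip_network("205.251.193.0/24"): "leaked origin",
}

def best_route(address: str) -> str:
    ip = ipaddress.ip_address(address)
    candidates = [net for net in ROUTES if ip in net]
    # Longest-prefix match: the most specific covering route wins.
    return ROUTES[max(candidates, key=lambda net: net.prefixlen)]
```

<p>Every address inside the leaked /24 is captured, even though the shorter, legitimate /23 still covers it.</p>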
    <div>
      <h3>What happened today?</h3>
      <a href="#what-happened-today">
        
      </a>
    </div>
    <p>Cloudflare maintains a range of BGP collectors gathering BGP information from hundreds of routers around the planet.</p><p>Between approximately <b>11:05:00 UTC and 12:55:00 UTC today</b> we saw the following announcements:</p>
            <pre><code>BGP4MP|04/24/18 11:05:42|A|205.251.199.0/24|10297
BGP4MP|04/24/18 11:05:42|A|205.251.197.0/24|10297
BGP4MP|04/24/18 11:05:42|A|205.251.195.0/24|10297
BGP4MP|04/24/18 11:05:42|A|205.251.193.0/24|10297
BGP4MP|04/24/18 11:05:42|A|205.251.192.0/24|10297
...
BGP4MP|04/24/18 11:05:54|A|205.251.197.0/24|4826,6939,10297</code></pre>
            <p>Those are more-specific announcements of the ranges:</p><ul><li><p><code>205.251.192.0/23</code></p></li><li><p><code>205.251.194.0/23</code></p></li><li><p><code>205.251.196.0/23</code></p></li><li><p><code>205.251.198.0/23</code></p></li></ul><p>This IP space is allocated to <b>Amazon</b> (AS16509). But it was announced by <b>eNet Inc</b> (AS10297) to their peers and forwarded on to <b>Hurricane Electric</b> (AS6939).</p><p>Those IPs are for <a href="https://ip-ranges.amazonaws.com/ip-ranges.json">Route53 Amazon DNS servers</a>. When you query for one of their client zones, those servers will reply.</p><p>During the two-hour leak, the servers on the IP range only responded to queries for <b>myetherwallet.com</b>; for other names, some people noticed <a href="https://puck.nether.net/pipermail/outages/2018-April/011257.html">SERVFAIL</a> responses.</p><p>Any DNS resolver that was asked for names handled by Route53 would ask the authoritative servers that had been taken over via the BGP leak. This poisoned DNS resolvers whose routers had accepted the route.</p><p>This included <a href="https://cloudflare-dns.com/">Cloudflare DNS resolver 1.1.1.1</a>. We were affected in Chicago, Sydney, Melbourne, Perth, Brisbane, Cebu, Bangkok, Auckland, Muscat, Djibouti and Manila. In the rest of the world, 1.1.1.1 worked normally.</p><blockquote><p>BGP hijack this morning affected Amazon DNS. eNet (AS10297) of Columbus, OH announced the following more-specifics of Amazon routes from 11:05 to 13:03 UTC today: 205.251.192.0/24, 205.251.193.0/24, 205.251.195.0/24, 205.251.197.0/24, 205.251.199.0/24</p><p>— InternetIntelligence (@InternetIntel) <a href="https://twitter.com/InternetIntel/status/988792927068610561?ref_src=twsrc%5Etfw">April 24, 2018</a></p></blockquote><blockquote><p>Correction: the BGP hijack this morning was against AWS DNS not Google DNS. 
<a href="https://t.co/gp3VLbImpX">https://t.co/gp3VLbImpX</a></p><p>— InternetIntelligence (@InternetIntel) <a href="https://twitter.com/InternetIntel/status/988841601400270848?ref_src=twsrc%5Etfw">April 24, 2018</a></p></blockquote><p>For instance, the following query will return you legitimate Amazon IPs:</p>
            <pre><code>$ dig +short myetherwallet.com @205.251.195.239
54.192.146.xx</code></pre>
            <p>But during the hijack, it returned IPs associated with a <b>Russian provider</b> (AS48693 and AS41995). You did not need to accept the hijacked route to be a victim of the attack; it was enough to use a DNS resolver that had been poisoned.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4bG1XFBxbEJg1akpE14ZqA/7f7f971aa49bf367eab386b15b86a1cb/Screen-Shot-2018-04-24-at-1.55.12-PM.png" />
            
            </figure><p>If you were using HTTPS, the fake website displayed a TLS certificate signed by an unknown authority (the domain listed in the certificate was correct, but it was self-signed). The only way for this attack to work was for the user to continue and accept the wrong certificate. From that point on, everything sent would be encrypted, but the attacker held the keys.</p><p>If you were already logged in, your browser would send the login information in a cookie. Otherwise, your username and password would be sent when you typed them into the login page.</p><p>Once the attacker got the login information, they used it on the legitimate website to transfer and steal Ethereum.</p>
    <div>
      <h3>Summary in pictures</h3>
      <a href="#summary-in-pictures">
        
      </a>
    </div>
    <p>Normal case</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7iaSnn0dekTUVCYRDPzZfe/61a16e2ffd729258c166b87375a06e3f/Slide1.png" />
            
            </figure><p>After a BGP route leak</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6llrWMviALPMBhNwM4M1pY/8be9b2d2dd921583999c27edc3e8f093/Slide2.png" />
            
            </figure>
    <div>
      <h3>Affected regions</h3>
      <a href="#affected-regions">
        
      </a>
    </div>
    <p>As previously mentioned, it was <b>AS10297</b> that announced this route, yet only some regions were affected. Hurricane Electric has a strong presence in <b>Australia</b>, mostly due to Internet costs there. <b>Chicago</b> was affected because AS10297 has a physical presence there, resulting in direct peering.</p><p>The following graph displays the number of packets received in the affected and unaffected regions (Y axis normalized). The drop is due to the authoritative server no longer responding to our requests (it only responded for the one website; all other Amazon domains were ignored).</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4XMCIDZDvIhrjAknLNsI1L/2b4cbc6934e6a54418304570080cc16f/Screen-Shot-2018-04-24-at-1.38.03-PM.png" />
            
            </figure><p>The other transit providers used by eNet (CenturyLink, Cogent and NTT) did not seem to have accepted this route; a reason could be that they have filters in place and/or Amazon as a customer. eNet provides IP services, so one of their clients could have been announcing the prefixes.</p>
    <div>
      <h3>Is there somebody to blame?</h3>
      <a href="#is-there-somebody-to-blame">
        
      </a>
    </div>
    <p>As there are many actors involved, it is hard to determine fault. The actors involved include:</p><ul><li><p>The ISP that announced a subnet it did not own.</p></li><li><p>The transit providers that did not check the announcement before relaying it.</p></li><li><p>The ISPs that accepted the route.</p></li><li><p>The lack of protection on the DNS resolvers and authorities.</p></li><li><p>The phishing website hosted on providers in Russia.</p></li><li><p>The website that did not enforce legitimate TLS certificates.</p></li><li><p>The user who clicked continue even though the TLS certificate was invalid.</p></li></ul><p>Just like a <b>blockchain</b>, a network change is usually visible and archived. RIPE maintains a <a href="https://ripe.net/ris/">database for this use</a>.</p>
    <div>
      <h3>Could we fix this?</h3>
      <a href="#could-we-fix-this">
        
      </a>
    </div>
    <p>This is a difficult question to answer, but there are proposals for securing BGP. One is adding terms to the RIR databases, so that a list of allowed sources can be generated:</p>
            <pre><code>$ whois -h whois.radb.net ' -M 205.251.192.0/21' | egrep '^route:|^origin:|source:' | paste - - - | sort
route:      205.251.192.0/23	origin:     AS16509	source:     RADB
route:      205.251.192.0/23	origin:     AS16509	source:     REACH</code></pre>
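<p>A validator can turn such route objects into an origin check. Below is a toy Python sketch in the spirit of route-origin validation; the authorized prefix, maximum length and origin ASN are assumptions distilled from the whois output above, not real ROA data:</p>

```python
import ipaddress

# (authorized covering prefix, max accepted prefix length, origin ASN)
ROAS = [
    (ipaddress.ip_network("205.251.192.0/21"), 23, 16509),
]

def origin_state(prefix: str, origin_asn: int) -> str:
    net = ipaddress.ip_network(prefix)
    covered = False
    for auth_net, max_len, asn in ROAS:
        if net.subnet_of(auth_net):
            covered = True
            if origin_asn == asn and net.prefixlen <= max_len:
                return "valid"
    # Covered by a record but failing it -> invalid; otherwise unknown.
    return "invalid" if covered else "unknown"
```

<p>The leaked /24s would fail on both counts: the wrong origin ASN, and a prefix more specific than the registered maximum length.</p>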
            <p>Another is setting up RPKI/ROA records, with the RIR as a source of truth regarding the origin of a route, although not everyone creates those records or validates them. IP and BGP were created a few decades ago, with different requirements in mind regarding integrity and authenticity (and far fewer routes).</p><p>A few things can be done on the upper levels of the network stack.</p><p>On <b>DNS</b>, you can use <a href="https://en.wikipedia.org/wiki/Domain_Name_System_Security_Extensions">DNSSEC</a> to sign your records. The IPs returned by the fake DNS servers would not have been signed, as the attackers do not have the private keys. If you use Cloudflare as your DNS provider, you can enable DNSSEC with <a href="https://cloudflare.com/a/dns">a few clicks in the panel</a>.</p><p>On <b>HTTPS</b>, your browser will check the TLS certificate provided by the website. If <a href="https://en.wikipedia.org/wiki/HTTP_Strict_Transport_Security">HSTS</a> is enabled, the browser will require a valid certificate at all times. The only way to generate a legitimate TLS certificate for a domain would be to poison the cache of a non-DNSSEC-validating DNS resolver used by the Certificate Authority.</p><p><a href="https://en.wikipedia.org/wiki/DNS-based_Authentication_of_Named_Entities">DANE</a> provides a way of pinning certificates to a domain name using DNS.</p><p><a href="https://developers.cloudflare.com/1.1.1.1/dns-over-https/">DNS over HTTPS</a> would also let a client validate that it is talking to the correct resolver, in case the leak targeted the DNS resolvers instead of the DNS authority.</p><p>There is no single, perfect solution, but the more of these protections are in place, the harder it will be for a malicious actor to perform this kind of attack.</p> ]]></content:encoded>
            <category><![CDATA[BGP]]></category>
            <category><![CDATA[TLS]]></category>
            <category><![CDATA[HTTPS]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[Vulnerabilities]]></category>
            <category><![CDATA[Cryptography]]></category>
            <guid isPermaLink="false">5SKcQGZX6y2OJMjWbFOiCN</guid>
            <dc:creator>Louis Poinsignon</dc:creator>
        </item>
        <item>
            <title><![CDATA[A Solution to Compression Oracles on the Web]]></title>
            <link>https://blog.cloudflare.com/a-solution-to-compression-oracles-on-the-web/</link>
            <pubDate>Tue, 27 Mar 2018 12:00:00 GMT</pubDate>
            <description><![CDATA[ Compression is often considered an essential tool when reducing the bandwidth usage of internet services. The impact that the use of such compression schemes can have on security, however, has often been overlooked.  ]]></description>
            <content:encoded><![CDATA[ <p><a href="https://commons.wikimedia.org/wiki/File:Ressort_de_compression.jpg">CC 3.0 by Jean-Jacques MILAN</a></p><p><i>This is a guest post by Blake Loring, a PhD student at Royal Holloway, University of London. Blake worked at Cloudflare as an intern in the summer of 2017.</i></p><p>Compression is often considered an essential tool when reducing the bandwidth usage of internet services. The impact that the use of such compression schemes can have on security, however, has often been overlooked. The recently detailed <a href="https://en.wikipedia.org/wiki/CRIME">CRIME</a>, <a href="http://breachattack.com/">BREACH</a>, <a href="https://www.blackhat.com/eu-13/briefings.html#Beery">TIME</a> and <a href="https://www.blackhat.com/docs/us-16/materials/us-16-VanGoethem-HEIST-HTTP-Encrypted-Information-Can-Be-Stolen-Through-TCP-Windows-wp.pdf">HEIST</a> attacks on TLS have shown that if an attacker can make requests on behalf of a user, then secret information can be extracted from encrypted messages using only the length of the response. Deciding whether an element of a web page should be secret often depends on the content of the page; however, there are some common elements of web pages which should always remain secret, such as <a href="https://en.wikipedia.org/wiki/Cross-site_request_forgery">Cross-Site Request Forgery (CSRF)</a> tokens. Such tokens are used to ensure that malicious webpages cannot forge requests from a user, by enforcing that any request must contain a secret token included in a previous response.</p><p>I worked at Cloudflare last summer to investigate possible solutions to this problem. The result is a project called <a href="https://github.com/cloudflare/cf-nocompress">cf-nocompress</a>. 
The aim of this project was to develop a tool which automatically and transparently mitigates instances of the attack, in particular CSRF extraction, on Cloudflare-hosted services, without significantly impacting the effectiveness of compression. We have published a <a href="https://github.com/cloudflare/cf-nocompress/tree/master/cf-nocompress">proof-of-concept implementation</a> on GitHub, and provide a <a href="https://compression.website">challenge site</a> and a <a href="https://github.com/cloudflare/cf-nocompress/tree/master/example_attack/src/main">tool</a> which demonstrate the attack in action.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/OWY4iAZGfQjKwmBgRkway/2bc96e67c7df62a5316ec450b5f56142/compression.website.jpg" />
            
            </figure>
    <div>
      <h3>The Problem</h3>
      <a href="#the-problem">
        
      </a>
    </div>
    <p>Most web compression schemes reduce the size of data by replacing common sequences with references to a dictionary of terms created during the compression. When using such compression schemes the size of the encrypted response will be reduced if there are repeated strings within the plaintext. This can be exploited through the use of a canary, an element in a request which we know will be added to the response, to test whether a string exists within the original response using the compressed response length. From this we can extract the contents of portions of a webpage incrementally by guessing each subsequent character. This attack creates an opportunity for malicious JavaScript to extract CSRF tokens and other confidential information from a webpage through malicious code served to a browser using either a packet sniffer (a methodology created by Duong and Rizzo as part of the <a href="https://blog.cryptographyengineering.com/2011/09/21/brief-diversion-beast-attack-on-tlsssl/">BEAST attack</a>) or JavaScript APIs which reveal network statistics (described by Vanhoef and Van Goethem in <a href="https://www.blackhat.com/docs/us-16/materials/us-16-VanGoethem-HEIST-HTTP-Encrypted-Information-Can-Be-Stolen-Through-TCP-Windows.pdf">HEIST</a>).</p>
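<p>The oracle is easy to reproduce with a few lines of Python. In the sketch below, the page layout and the secret are invented for the demonstration; the attacker controls the reflected string and observes only the compressed length of the response:</p>

```python
import zlib

SECRET = "csrf=2bd1a7f39c48e05d"  # hypothetical page secret

def response_length(reflected: str) -> int:
    # The attacker never sees the plaintext, only how long the
    # compressed (and then encrypted) response is.
    page = f"<form>{SECRET}</form><!-- search: {reflected} -->"
    return len(zlib.compress(page.encode()))
```

<p>A guess equal to the secret is folded into a back-reference and yields a shorter response than a wrong guess of the same length, which is exactly the signal the incremental, character-by-character attack amplifies.</p>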
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/23iahybYJxsleUyYAbpWoQ/81440ac0fe67f75711e7d1c5608d41ef/Artboard---4.png" />
            
            </figure><p>There are two common mitigation schemes for this attack. The first is to send a unique CSRF token every time a page is loaded. By removing the repeated element from the page, the threat of attack is eliminated. This approach requires the server to keep state on which CSRF tokens are valid and whether they have been used; additionally, it can only be used to protect page tokens, not user-readable data. Another approach is to XOR all secrets in a response with a per-request random number and then transmit the number with the response. Once received, a piece of JavaScript can be used to recover the original secret by XORing the data again. Alternatively, the server can be modified to expect the XORed variant and the random number rather than the original secret. This approach allows all secrets to be protected; however, it requires client-side post-processing. Additionally, both approaches require extensive per-page modification, which makes mitigation incredibly cumbersome in practice. At present the only way to fully mitigate such an attack is to disable compression entirely on vulnerable websites, an impractical solution for most websites and content delivery networks.</p>
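<p>The XOR mitigation described above can be sketched in a few lines of Python; this is a schematic illustration of the scheme, not the interface of any particular framework:</p>

```python
import secrets

def mask_secret(token: bytes) -> tuple[bytes, bytes]:
    # A fresh random pad per request means the literal token bytes
    # never appear in the response, so they never repeat across
    # responses and cannot be matched by a compression oracle.
    pad = secrets.token_bytes(len(token))
    masked = bytes(t ^ p for t, p in zip(token, pad))
    return masked, pad

def unmask_secret(masked: bytes, pad: bytes) -> bytes:
    # Client-side JavaScript performs this XOR to recover the token.
    return bytes(m ^ p for m, p in zip(masked, pad))
```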
    <div>
      <h3>Our Solution</h3>
      <a href="#our-solution">
        
      </a>
    </div>
    <p>We decided to use selective compression, compressing only the non-secret parts of a page, in order to stop the extraction of secret information. We found that in most cases a secret within a webpage can be described in terms of a classical regular expression. These descriptions allow us to identify secrets online as a response is streamed. Once the secrets are identified, they can be flagged so that a modified compression library can ensure they are not added to the dictionary. The primary advantage of this approach is that protection can be offered transparently by the web server, and the application does not need to be modified as long as a regular expression can clearly express which portions of a response are secret. In addition, we do not need to maintain state for each user or require client-side JavaScript to appropriately render the page.</p><p>The proof-of-concept is implemented as a plugin for NGINX and requires a small patch to the gzip module. The plugin uses <a href="https://github.com/openresty/sregex">sregex</a> to identify secrets within a page. The modified gzip functions as normal; however, when a secret is processed, compression is disabled. This ensures secrets do not get added to the compression dictionary, removing any impact of the secret on response size.</p>
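<p>The effect of keeping secrets out of the compression dictionary can be approximated with stock zlib by isolating each regex-matched secret between full flushes, which reset the deflate window. The sketch below is our own approximation of the idea; the actual plugin patches NGINX's gzip module rather than working this way:</p>

```python
import re
import zlib

def compress_protected(data: bytes, secret_re: bytes) -> bytes:
    # Emit each matched secret in its own fully-flushed deflate
    # segment so no other part of the stream can back-reference it.
    comp = zlib.compressobj(9)
    out, pos = [], 0
    for m in re.finditer(secret_re, data):
        out.append(comp.compress(data[pos:m.start()]))
        out.append(comp.flush(zlib.Z_FULL_FLUSH))  # reset the window
        out.append(comp.compress(m.group(0)))
        out.append(comp.flush(zlib.Z_FULL_FLUSH))  # reset it again
        pos = m.end()
    out.append(comp.compress(data[pos:]))
    out.append(comp.flush())
    return b"".join(out)
```

<p>The output is still a single valid zlib stream, so clients decompress it as usual, but a reflected guess can no longer shrink the response by matching the secret.</p>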
    <div>
      <h3>Additional security considerations</h3>
      <a href="#additional-security-considerations">
        
      </a>
    </div>
    <p>The regular expression matching engine we use in this proof-of-concept is not guaranteed to run in constant time. As such, matching a string against some regular expressions could introduce a timing based side-channel attack. This issue is compounded by the complexity of modern regular expressions as matching time can often be non-intuitive. Whilst in many cases the risk such an attack would pose is minimal, a limited matcher with constant runtime and restrictions on unbounded loops should be developed if our mitigation is adopted.</p>
    <div>
      <h3>The Challenge Site</h3>
      <a href="#the-challenge-site">
        
      </a>
    </div>
    <p>We have set up the challenge website <a href="https://compression.website/">compression.website</a> with protection, and a clone of the site at <a href="https://compression.website/unsafe/">compression.website/unsafe</a> without it. The page is a simple form with a per-client CSRF token designed to emulate common CSRF protection. Using the example attack presented with the library, we have shown that we are able to extract the CSRF token from the size of responses on the unprotected variant, but we have not been able to extract it on the protected site. We welcome attempts to extract the CSRF token without access to the unencrypted response.</p> ]]></content:encoded>
            <category><![CDATA[Compression]]></category>
            <category><![CDATA[HTTPS]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[Cryptography]]></category>
            <guid isPermaLink="false">6s2fKoLeMHkDIfUFiJ7gxJ</guid>
            <dc:creator>Guest Author</dc:creator>
        </item>
    </channel>
</rss>