
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/">
    <channel>
        <title><![CDATA[ The Cloudflare Blog ]]></title>
        <description><![CDATA[ Get the latest news on how products at Cloudflare are built, technologies used, and join the teams helping to build a better Internet. ]]></description>
        <link>https://blog.cloudflare.com</link>
        <atom:link href="https://blog.cloudflare.com/" rel="self" type="application/rss+xml"/>
        <language>en-us</language>
        <image>
            <url>https://blog.cloudflare.com/favicon.png</url>
            <title>The Cloudflare Blog</title>
            <link>https://blog.cloudflare.com</link>
        </image>
        <lastBuildDate>Fri, 03 Apr 2026 21:53:58 GMT</lastBuildDate>
        <item>
            <title><![CDATA[Going Keyless Everywhere]]></title>
            <link>https://blog.cloudflare.com/going-keyless-everywhere/</link>
            <pubDate>Fri, 01 Nov 2019 13:01:00 GMT</pubDate>
            <description><![CDATA[ Time flies. The Heartbleed vulnerability was discovered just over five and a half years ago. Heartbleed became a household name not only because it was one of the first bugs with its own web page and logo, but because of what it revealed about the fragility of the Internet as a whole. ]]></description>
            <content:encoded><![CDATA[ <p></p><p>Time flies. The <a href="/tag/heartbleed/">Heartbleed</a> vulnerability was discovered just over five and a half years ago. Heartbleed became a household name not only because it was one of the first bugs with its own <a href="http://heartbleed.com/">web page</a> and <a href="http://heartbleed.com/heartbleed.png">logo</a>, but because of what it revealed about the fragility of the Internet as a whole. With Heartbleed, one tiny bug in a cryptography library exposed the personal data of the users of almost every website online.</p><p>Heartbleed is an example of an underappreciated class of bugs: remote memory disclosure vulnerabilities. High profile examples other than <a href="/tag/heartbleed/">Heartbleed</a> include <a href="/incident-report-on-memory-leak-caused-by-cloudflare-parser-bug/">Cloudbleed</a> and most recently <a href="https://arxiv.org/abs/1807.10535">NetSpectre</a>. These vulnerabilities allow attackers to extract secrets from servers by simply sending them specially-crafted packets. Cloudflare recently completed a multi-year project to make our platform more resilient against this category of bug.</p><p>For the last five years, the industry has been dealing with the consequences of the design that led to Heartbleed being so impactful. In this blog post we’ll dig into memory safety, and how we re-designed Cloudflare’s main product to protect private keys from the next Heartbleed.</p>
    <div>
      <h2>Memory Disclosure</h2>
      <a href="#memory-disclosure">
        
      </a>
    </div>
    <p>Perfect security is not possible for businesses with an online component. History has shown us that no matter how robust a company’s security program, an unexpected exploit can leave it exposed. One of the more famous recent incidents of this sort is Heartbleed, a vulnerability in a commonly used cryptography library called OpenSSL that exposed the inner details of millions of web servers to anyone with a connection to the Internet. Heartbleed made international news, caused millions of dollars of damage, and <a href="https://blog.malwarebytes.com/exploits-and-vulnerabilities/2019/09/everything-you-need-to-know-about-the-heartbleed-vulnerability/">still hasn’t been fully resolved</a>.</p><p>Typical web services only return data via well-defined public-facing interfaces called APIs. Clients don’t typically get to see what’s going on under the hood inside the server; that would be a huge privacy and security risk. Heartbleed broke that paradigm: it enabled anyone on the Internet to take a peek at the operating memory used by web servers, revealing privileged data usually not exposed via the API. Heartbleed could be used to extract data previously sent to the server, including passwords and credit card numbers. It could also reveal the inner workings and cryptographic secrets used inside the server, including TLS <a href="/the-results-of-the-cloudflare-challenge/">certificate private keys</a>.</p><p>Heartbleed let attackers peek behind the curtain, but not too far. Sensitive data could be extracted, but not everything on the server was at risk. For example, Heartbleed did not enable attackers to steal the content of databases held on the server. You may ask: why was some data at risk and not the rest? The reason has to do with how modern operating systems are built.</p>
    <div>
      <h2>A simplified view of process isolation</h2>
      <a href="#a-simplified-view-of-process-isolation">
        
      </a>
    </div>
    <p>Most modern operating systems are split into multiple layers. These layers are analogous to security clearance levels. So-called user-space applications (like your browser) typically live in a low-security layer called user space. They only have access to computing resources (memory, CPU, networking) if the lower, more credentialed layers let them.</p><p>User-space applications need resources to function. For example, they need memory to store their code and working memory to do computations. However, it would be risky to give an application direct access to the physical RAM of the computer it’s running on. Instead, the raw computing elements are restricted to a lower layer called the operating system kernel. The kernel runs only specially designed applications that safely manage these resources and mediate access to them for user-space applications.</p><p>When a new user-space application process is launched, the kernel gives it a virtual memory space. This virtual memory space acts like real memory to the application but is actually a safely guarded translation layer the kernel uses to protect the real memory. Each application’s virtual memory space is like a parallel universe dedicated to that application. This makes it impossible for one process to view or modify another’s memory; the other applications are simply not addressable.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/46WR5JrLwEtc94VDZ7YZJK/8dd78b2efd297c87c430362e7883b4d3/image9-3.png" />
            
            </figure>
    <div>
      <h2>Heartbleed, Cloudbleed and the process boundary</h2>
      <a href="#heartbleed-cloudbleed-and-the-process-boundary">
        
      </a>
    </div>
    <p>Heartbleed was a vulnerability in the OpenSSL library, which was part of many web server applications. These web servers run in user space, like most common applications. This vulnerability caused the web server to return up to 64 kilobytes of its memory in response to a specially-crafted inbound request.</p><p>Cloudbleed was also a memory disclosure bug, albeit one specific to Cloudflare, that got its name because it was so similar to Heartbleed. With Cloudbleed, the vulnerability was not in OpenSSL, but instead in a secondary web server application used for HTML parsing. When this code parsed a certain sequence of HTML, it ended up inserting some process memory into the web page it was serving.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7qlhsxgqsCJwmzNREBXRYx/05a1c85bf7a8109890bf8f621f61bc55/image2.png" />
            
            </figure><p>It’s important to note that both of these bugs occurred in applications running in user space, not kernel space. This means that the memory exposed by the bug was necessarily part of the virtual memory of the application. Even if the bug were to expose megabytes of data, it would only expose data specific to that application, not other applications on the system.</p><p>In order for a web server to serve traffic over the encrypted HTTPS protocol, it needs access to the certificate’s private key, which is typically kept in the application’s memory. These keys were exposed to the Internet by Heartbleed. The Cloudbleed vulnerability affected a different process, the HTML parser, which doesn’t do HTTPS and therefore doesn’t keep the private key in memory. This meant that HTTPS keys were safe, even if other data in the HTML parser’s memory space wasn’t.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6B3QATkOKQfQndbFuDifgd/6b77a1e6fc06fdfa386158113aac5369/image4.png" />
            
            </figure><p>The fact that the HTML parser and the web server were different applications saved us from having to revoke and re-issue our customers’ <a href="https://www.cloudflare.com/application-services/products/ssl/">TLS certificates</a>. However, if another memory disclosure vulnerability is discovered in the web server, these keys are again at risk.</p>
    <div>
      <h2>Moving keys out of Internet-facing processes</h2>
      <a href="#moving-keys-out-of-internet-facing-processes">
        
      </a>
    </div>
    <p>Not all web servers keep private keys in memory. In some deployments, private keys are held in a separate machine called a Hardware Security Module (HSM). HSMs are built to withstand physical intrusion and tampering, and are often certified to meet stringent compliance requirements. They are often bulky and expensive. Web servers designed to take advantage of keys in an HSM connect to them over a physical cable and communicate with a specialized protocol called PKCS#11. This allows the web server to serve encrypted content while being physically separated from the private key.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1GBuW8GyHUgvg2VI7YKZpC/b5ca62effb7f36c3b7f3f8d80cc04f9e/image8-1.png" />
            
            </figure><p>At Cloudflare, we built our own way to separate a web server from a private key: <a href="/keyless-ssl-the-nitty-gritty-technical-details/">Keyless SSL</a>. Rather than keeping the keys in a separate physical machine connected to the server with a cable, the keys are kept in a key server operated by the customer in their own infrastructure (this can also be backed by an HSM).</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/73zinlbc5lRdAJuhY7q2Jl/2ac31d4a9220ed6ca468b553f105a1d2/image10-4.png" />
            
            </figure><p>More recently, we launched <a href="/introducing-cloudflare-geo-key-manager/">Geo Key Manager</a>, a service that allows users to store private keys in only select Cloudflare locations. Connections to locations that do not have access to the private key use Keyless SSL with a key server hosted in a datacenter that does have access.</p><p>In both Keyless SSL and Geo Key Manager, private keys are not only not part of the web server’s memory space, they’re often not even in the same country! This extreme degree of separation is not necessary to protect against the next Heartbleed. All that is needed is for the web server and the key server to not be part of the same application. So that’s what we did. We call this Keyless Everywhere.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/jFMg99U9Aiq8yNh43fx1l/d61671a3bb94f6fc415ad5aef8b4f808/image7-2.png" />
            
            </figure>
    <div>
      <h2>Keyless SSL is coming from inside the house</h2>
      <a href="#keyless-ssl-is-coming-from-inside-the-house">
        
      </a>
    </div>
    <p>Repurposing Keyless SSL for Cloudflare-held private keys was easy to conceptualize, but the path from ideation to live in production wasn't so straightforward. The core functionality of Keyless SSL comes from the open source <a href="https://github.com/cloudflare/gokeyless">gokeyless</a> project, which customers run on their infrastructure, but internally we use it as a library and have replaced the main package with an implementation suited to our requirements (we've creatively dubbed it gokeyless-internal).</p><p>As with all major architecture changes, it’s prudent to start by testing the model with something new and low risk. In our case, the test bed was our experimental <a href="/introducing-tls-1-3/">TLS 1.3</a> implementation. In order to quickly iterate through draft versions of the TLS specification and push releases without affecting the majority of Cloudflare customers, we <a href="/introducing-tls-1-3/">re-wrote our custom nginx web server in Go</a> and deployed it in parallel to our existing infrastructure. This server was designed from the start to never hold private keys and to rely solely on gokeyless-internal. At the time there was only a small amount of TLS 1.3 traffic, all of it coming from beta versions of browsers, which allowed us to work through the initial kinks of gokeyless-internal without exposing the majority of visitors to security risks or outages.</p><p>The first step towards making TLS 1.3 fully keyless was identifying and implementing the new functionality we needed to add to gokeyless-internal. Keyless SSL was designed to run on customer infrastructure, with the expectation of supporting only a handful of private keys. But our edge must simultaneously support millions of private keys, so we implemented the same <a href="/universal-ssl-how-it-scales/">lazy loading</a> logic we use in our web server, nginx. 
Furthermore, a typical customer deployment would put key servers behind a network load balancer so they could be taken out of service for upgrades or other maintenance. Contrast this with our edge, where it’s important to maximize our resources by serving traffic during software upgrades. This problem is solved by the excellent <a href="/graceful-upgrades-in-go/">tableflip package</a> we use elsewhere at Cloudflare.</p><p>The next project to go Keyless was <a href="https://www.cloudflare.com/products/cloudflare-spectrum/">Spectrum</a>, which launched with default support for gokeyless-internal. With these small victories in hand, we had the confidence necessary to attempt the big challenge: porting our existing nginx infrastructure to a fully keyless model. After implementing the new functionality, and being satisfied with our integration tests, all that was left was to turn it on in production and call it a day, right? Anyone with experience with large distributed systems knows how far "working in dev" is from "done," and this story is no different. Thankfully we were anticipating problems, and built a fallback into nginx to complete the handshake itself if any problems were encountered with the gokeyless-internal path. This allowed us to expose gokeyless-internal to production traffic without risking downtime in the event that our reimplementation of the nginx logic was not 100% bug-free.</p>
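<p>The lazy-loading idea mentioned above is simple: rather than loading millions of keys at startup, fetch each key from storage on first use and cache it. A minimal Go sketch of the pattern (all names are hypothetical; a production version would also need request coalescing and eviction):</p>

```go
package main

import (
	"fmt"
	"sync"
)

// keyCache loads private keys lazily: storage is hit only on the first
// request for a given key, and subsequent requests are served from memory.
type keyCache struct {
	mu     sync.Mutex
	keys   map[string][]byte
	loader func(id string) ([]byte, error) // e.g. a fetch from key storage
	loads  int                             // number of storage round trips
}

func (c *keyCache) get(id string) ([]byte, error) {
	c.mu.Lock()
	defer c.mu.Unlock()
	if k, ok := c.keys[id]; ok {
		return k, nil // hot path: already loaded
	}
	k, err := c.loader(id)
	if err != nil {
		return nil, err
	}
	c.loads++
	c.keys[id] = k
	return k, nil
}

func main() {
	c := &keyCache{
		keys:   map[string][]byte{},
		loader: func(id string) ([]byte, error) { return []byte("key-for-" + id), nil },
	}
	c.get("example.com")
	c.get("example.com") // second call is served from the cache
	fmt.Println("storage loads:", c.loads)
}
```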
    <div>
      <h2>When rolling back the code doesn’t roll back the problem</h2>
      <a href="#when-rolling-back-the-code-doesnt-roll-back-the-problem">
        
      </a>
    </div>
    <p>Our deployment plan was to enable Keyless Everywhere, find the most common causes of fallbacks, and then fix them. We could then repeat this process until all sources of fallbacks had been eliminated, after which we could remove access to private keys (and therefore the fallback) from nginx. One of the early causes of fallbacks was gokeyless-internal returning ErrKeyNotFound, indicating that it couldn’t find the requested private key in storage. This should not have been possible, since nginx only makes a request to gokeyless-internal after first finding the certificate and key pair in storage, and we always write the private key and certificate together. It turned out that in addition to returning the error for the intended case of the key truly not found, we were also returning it when transient errors like timeouts were encountered. To resolve this, we updated those transient error conditions to return ErrInternal, and deployed to our <a href="https://en.wikipedia.org/wiki/Sentinel_species">canary datacenters</a>. Strangely, we found that a handful of instances in a single datacenter started encountering high rates of fallbacks, and the logs from nginx indicated it was due to a timeout between nginx and gokeyless-internal. The timeouts didn’t occur right away, but once a system started logging some timeouts it never stopped. Even after we rolled back the release, the fallbacks continued with the old version of the software! Furthermore, while nginx was complaining about timeouts, gokeyless-internal seemed perfectly healthy and was reporting reasonable performance metrics (sub-millisecond median request latency).</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5xlp86WlHoUaRjiXcdifcP/c06dad2ff1f58ff555470d8809d6bba8/image1-1.png" />
            
            </figure><p>To debug the issue, we added detailed logging to both nginx and gokeyless, and followed the chain of events backwards once timeouts were encountered.</p>
            <pre><code>➜ ~ grep 'timed out' nginx.log | grep Keyless | head -5
2018-07-25T05:30:49.000 29m41 2018/07/25 05:30:49 [error] 4525#0: *1015157 Keyless SSL request/response timed out while reading Keyless SSL response, keyserver: 127.0.0.1
2018-07-25T05:30:49.000 29m41 2018/07/25 05:30:49 [error] 4525#0: *1015231 Keyless SSL request/response timed out while waiting for Keyless SSL response, keyserver: 127.0.0.1
2018-07-25T05:30:49.000 29m41 2018/07/25 05:30:49 [error] 4525#0: *1015271 Keyless SSL request/response timed out while waiting for Keyless SSL response, keyserver: 127.0.0.1
2018-07-25T05:30:49.000 29m41 2018/07/25 05:30:49 [error] 4525#0: *1015280 Keyless SSL request/response timed out while waiting for Keyless SSL response, keyserver: 127.0.0.1
2018-07-25T05:30:50.000 29m41 2018/07/25 05:30:50 [error] 4525#0: *1015289 Keyless SSL request/response timed out while waiting for Keyless SSL response, keyserver: 127.0.0.1</code></pre>
            <p>You can see the first request to log a timeout had id 1015157. It’s also interesting that the first log line says "timed out while reading," but all the others say "timed out while waiting," and this latter message is the one that repeats forever. Here is the matching request in the gokeyless log:</p>
            <pre><code>➜ ~ grep 'id=1015157 ' gokeyless.log | head -1
2018-07-25T05:30:39.000 29m41 2018/07/25 05:30:39 [DEBUG] connection 127.0.0.1:30520: worker=ecdsa-29 opcode=OpECDSASignSHA256 id=1015157 sni=announce.php?info_hash=%a8%9e%9dc%cc%3b1%c8%23%e4%93%21r%0f%92mc%0c%15%89&amp;peer_id=-ut353s-%ce%ad%5e%b1%99%06%24e%d5d%9a%08&amp;port=42596&amp;uploaded=65536&amp;downloaded=0&amp;left=0&amp;corrupt=0&amp;key=04a184b7&amp;event=started&amp;numwant=200&amp;compact=1&amp;no_peer_id=1 ip=104.20.33.147</code></pre>
            <p>Aha! That SNI value is clearly invalid (SNIs are like Host headers, i.e. they are domains, not URL paths), and it’s also quite long. Our storage system indexes certificates based on two indices: which SNI they correspond to, and which IP addresses they correspond to (for older clients that don’t support SNI). Our storage interface uses the memcached protocol, and the client library that gokeyless-internal uses rejects requests for keys longer than 250 characters (memcached’s maximum key length), whereas the nginx logic is to simply ignore the invalid SNI and treat the request as if it only had an IP. The change in our new release had shifted this condition from <code>ErrKeyNotFound</code> to <code>ErrInternal</code>, which triggered cascading problems in nginx. The “timeouts” it encountered were actually a result of throwing away all in-flight requests multiplexed on a connection which happened to return <code>ErrInternal</code> for a single request. These requests were retried, but once this condition triggered, nginx became overloaded by the number of retried requests plus the continuous stream of new requests coming in with bad SNI, and was unable to recover. This explains why rolling back gokeyless-internal didn’t fix the problem.</p><p>This discovery finally brought our attention to nginx, which thus far had escaped blame since it had been working reliably with customer key servers for years. 
However, communicating over localhost to a multitenant key server is fundamentally different than reaching out over the public Internet to communicate with a customer’s key server, and we had to make the following changes:</p><ul><li><p>Instead of a long connection timeout and a relatively short response timeout for customer key servers, extremely short connection timeouts and longer request timeouts are appropriate for a localhost key server.</p></li><li><p>Similarly, it’s reasonable to retry (with backoff) if we timeout waiting on a customer key server response, since we can’t trust the network. But over localhost, a timeout would only occur if gokeyless-internal were overloaded and the request were still queued for processing. In this case a retry would only lead to more total work being requested of gokeyless-internal, making the situation worse.</p></li><li><p>Most significantly, nginx must not throw away all requests multiplexed on a connection if any single one of them encounters an error, since a single connection no longer represents a single customer.</p></li></ul>
    <div>
      <h2>Implementations matter</h2>
      <a href="#implementations-matter">
        
      </a>
    </div>
    <p>CPU at the edge is one of our most precious assets, and it’s closely guarded by our performance team (aka CPU police). Soon after turning on Keyless Everywhere in one of our canary datacenters, they noticed gokeyless using ~50% of a core per instance. We were shifting the sign operations from nginx to gokeyless, so of course it would be using more CPU now. But nginx should have seen a commensurate reduction in CPU usage, right?</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5UKCYIeE5MqU3j7jG8GFiy/38fbb7e9842218b75153a512d908ae31/image5.png" />
            
            </figure><p>Wrong. Elliptic curve operations are very fast in Go, but it’s known that <a href="https://github.com/golang/go/issues/21525">RSA operations are much slower than their BoringSSL counterparts</a>.</p><p>Although Go 1.11 includes optimizations for RSA math operations, we needed more speed. Well-tuned assembly code is required to match the performance of BoringSSL, so Armando Faz from our Crypto team helped claw back some of the lost CPU by reimplementing parts of the <a href="https://golang.org/pkg/math/big/">math/big</a> package with platform-dependent assembly in an internal fork of Go. The recent <a href="https://github.com/golang/go/wiki/AssemblyPolicy">assembly policy</a> of Go prefers the use of Go portable code instead of assembly, so these optimizations were not upstreamed. There is still room for more optimizations, and for that reason we’re still evaluating moving to cgo + BoringSSL for sign operations, despite <a href="https://dave.cheney.net/2016/01/18/cgo-is-not-go">cgo’s many downsides</a>.</p>
    <div>
      <h2>Changing our tooling</h2>
      <a href="#changing-our-tooling">
        
      </a>
    </div>
    <p>Process isolation is a powerful tool for protecting secrets in memory. Our move to Keyless Everywhere demonstrates that this is not a simple tool to leverage. Re-architecting an existing system such as nginx to use process isolation to protect secrets was time-consuming and difficult. Another approach to memory safety is to use a memory-safe language such as Rust.</p><p>Rust was originally developed by Mozilla but is starting <a href="https://www.infoq.com/articles/programming-language-trends-2019/">to be used much more widely</a>. The main advantage that Rust has over C/C++ is that it has memory safety features without a garbage collector.</p><p>Re-writing an existing application in a new language such as Rust is a daunting task. That said, many new Cloudflare features, from the powerful <a href="/announcing-firewall-rules/">Firewall Rules</a> feature to our <a href="/announcing-warp-plus/">1.1.1.1 with WARP</a> app, have been written in Rust to take advantage of its powerful memory-safety properties. We’re really happy with Rust so far and plan on using it even more in the future.</p>
    <div>
      <h2>Conclusion</h2>
      <a href="#conclusion">
        
      </a>
    </div>
    <p>The harrowing aftermath of Heartbleed taught the industry a lesson that should have been obvious in retrospect: keeping important secrets in applications that can be accessed remotely via the Internet is a risky security practice. In the following years, with a lot of work, we leveraged process separation and Keyless SSL to ensure that the next Heartbleed wouldn’t put customer keys at risk.</p><p>However, this is not the end of the road. Recently memory disclosure vulnerabilities such as <a href="https://arxiv.org/abs/1807.10535">NetSpectre</a> have been discovered which are able to bypass application process boundaries, so we continue to actively explore new ways to keep keys secure.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/41IkUo52ZjxvkXCjUsoKGE/280a2f7580e8d374abffe61b61615bff/image3.png" />
            
            </figure><p></p> ]]></content:encoded>
            <category><![CDATA[Crypto Week]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[Keyless SSL]]></category>
            <category><![CDATA[Cryptography]]></category>
            <guid isPermaLink="false">4l12BK6lPNLLMUIUI3kNN</guid>
            <dc:creator>Nick Sullivan</dc:creator>
            <dc:creator>Chris Broglie</dc:creator>
        </item>
        <item>
            <title><![CDATA[Delegated Credentials for TLS]]></title>
            <link>https://blog.cloudflare.com/keyless-delegation/</link>
            <pubDate>Fri, 01 Nov 2019 13:00:00 GMT</pubDate>
            <description><![CDATA[ Announcing support for a new cryptographic protocol making it possible to deploy encrypted services while still maintaining performance and control of private keys: Delegated Credentials for TLS.  ]]></description>
            <content:encoded><![CDATA[ <p></p><p>Today we’re happy to announce support for a new cryptographic protocol that helps make it possible to deploy encrypted services in a global network while still maintaining fast performance and tight control of private keys: Delegated Credentials for TLS. We have been working with partners from Facebook, Mozilla, and the broader IETF community to define this emerging standard. We’re excited to share the gory details today in this blog post.</p><p>Also, be sure to check out the blog posts on the topic by our friends at <a href="https://engineering.fb.com/security/delegated-credentials-improving-the-security-of-tls-certificates/">Facebook</a> and <a href="https://blog.mozilla.org/security/2019/11/01/validating-delegated-credentials-for-tls-in-firefox/">Mozilla</a>!</p>
    <div>
      <h2>Deploying TLS globally</h2>
      <a href="#deploying-tls-globally">
        
      </a>
    </div>
    <p>Many of the technical problems we face at Cloudflare are widely shared problems across the Internet industry. As gratifying as it can be to solve a problem for ourselves and our customers, it can be even more gratifying to solve a problem for the entire Internet. For the past three years, we have been working with peers in the industry to solve a specific shared problem in the TLS infrastructure space: How do you terminate TLS connections while storing keys remotely and maintaining performance and availability? Today we’re announcing that Cloudflare now supports Delegated Credentials, the result of this work.</p><p>Cloudflare’s TLS/SSL features are among the top reasons customers use our service. Configuring TLS is hard to do without internal expertise. By automating TLS, web site and web service operators gain the latest TLS features and the most secure configurations by default. It also reduces the risk of outages or bad press due to misconfigured or insecure encryption settings. Customers also gain early access to unique features like <a href="/introducing-tls-1-3/">TLS 1.3</a>, <a href="/towards-post-quantum-cryptography-in-tls/">post-quantum cryptography</a>, and <a href="/high-reliability-ocsp-stapling/">OCSP stapling</a> as they become available.</p><p>Unfortunately, for web services to authorize a service to terminate TLS for them, they have to trust the service with their private keys, which demands a high level of trust. For services with a global footprint, there is an additional level of nuance. 
They may operate multiple data centers located in places with varying levels of physical security, and each of these needs to be trusted to terminate TLS.</p><p>To tackle these problems of trust, Cloudflare has invested in two technologies: <a href="/announcing-keyless-ssl-all-the-benefits-of-cloudflare-without-having-to-turn-over-your-private-ssl-keys/">Keyless SSL</a>, which allows customers to use Cloudflare without sharing their private key with Cloudflare; and <a href="/introducing-cloudflare-geo-key-manager/">Geo Key Manager</a>, which allows customers to choose the geographical locations in which Cloudflare should keep their keys. Both of these technologies can be deployed without any changes to browsers or other clients. They also come with some downsides in the form of availability and performance degradation.</p><p>Keyless SSL introduces extra latency at the start of a connection. In order for a server without access to a private key to establish a connection with a client, that server needs to reach out to a key server, or a remote point of presence, and ask it to do a private key operation. This not only adds additional latency to the connection, causing the content to load slower, but it also introduces some troublesome operational constraints on the customer. Specifically, the server with access to the key needs to be highly available or the connection can fail. Sites often use Cloudflare to improve their site’s availability, so having to run a high-availability key server is an unwelcome requirement.</p>
    <div>
      <h2>Turning a pull into a push</h2>
      <a href="#turning-a-pull-into-a-push">
        
      </a>
    </div>
    <p>The reason services like Keyless SSL that rely on remote keys are so brittle is their architecture: they are pull-based rather than push-based. Every time a client attempts a handshake with a server that doesn’t have the key, the server needs to pull the authorization from the key server. An alternative way to build this sort of system is to periodically push a short-lived authorization key to the server and use that for handshakes. Switching from a pull-based model to a push-based model eliminates the additional latency, but it comes with additional requirements, including the need to change the client.</p><p>Enter the new TLS feature of <a href="https://tools.ietf.org/html/draft-ietf-tls-subcerts-04">Delegated Credentials</a> (DCs). A delegated credential is a short-lived key that the certificate’s owner has delegated for use in TLS. They work like a power of attorney: your server authorizes our server to terminate TLS for a limited time. When a browser that supports this protocol connects to our edge servers, we can show it this “power of attorney” instead of needing to reach back to a customer’s server to get it to authorize the TLS connection. This reduces latency and improves performance and reliability.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1wNKw1iaNaUHBESq06OXIK/fbbba3ca4614c398480a03e7ce00fc1b/pull-diagram-1.jpg" />
            
            </figure><p>The pull model</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/qfW3dQvRSnHowxNEW2lSf/d457a3d0b7c52f9c3523c7f2d73cd94a/push-diagram.jpg" />
            
            </figure><p>The push model</p><p>A fresh delegated credential can be created and pushed out to TLS servers long before the previous credential expires. Momentary blips in availability will not lead to broken handshakes for clients that support delegated credentials. Furthermore, a Delegated Credentials-enabled TLS connection is just as fast as a standard TLS connection: there’s no need to connect to the key server for every handshake. This removes the main drawback of Keyless SSL for DC-enabled clients.</p><p>Delegated credentials are intended to be an Internet Standard RFC that anyone can implement and use, not a replacement for Keyless SSL. Since browsers will need to be updated to support the standard, proprietary mechanisms like Keyless SSL and Geo Key Manager will continue to be useful. Delegated credentials aren’t just useful in our context, which is why we’ve developed the standard openly, with contributions from across industry and academia. Facebook has integrated them into their own TLS implementation, and you can read more about how they view the security benefits <a href="https://engineering.fb.com/security/delegated-credentials/">here</a>. When it comes to improving the security of the Internet, we’re all on the same team.</p><p><i>"We believe delegated credentials provide an effective way to boost security by reducing certificate lifetimes without sacrificing reliability. This will soon become an Internet standard and we hope others in the industry adopt delegated credentials to help make the Internet ecosystem more secure."</i></p><p></p><p>— <b>Subodh Iyengar</b>, software engineer at Facebook</p>
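<p>In Rust, the push model can be sketched as a credential that is refreshed well before it expires. This is an illustrative sketch only: the struct, field names, and refresh margin below are assumptions for exposition, not the wire format from the draft.</p>

```rust
use std::time::{Duration, SystemTime};

/// A short-lived authorization pushed to the edge ahead of time
/// (illustrative; not the encoding from draft-ietf-tls-subcerts).
#[derive(Clone)]
pub struct ShortLivedCredential {
    pub public_key: Vec<u8>,
    pub expires_at: SystemTime,
}

impl ShortLivedCredential {
    /// A credential is usable only until it expires.
    pub fn is_valid(&self, now: SystemTime) -> bool {
        now < self.expires_at
    }
}

/// The push model: generate and distribute a fresh credential well before
/// the old one expires, so a momentary key-server outage does not break
/// handshakes.
pub fn needs_refresh(current: &ShortLivedCredential, now: SystemTime, margin: Duration) -> bool {
    match current.expires_at.duration_since(now) {
        Ok(remaining) => remaining < margin,
        Err(_) => true, // already expired: refresh immediately
    }
}
```

<p>Because the refresh margin is much larger than any expected outage, the key server only needs to be reachable occasionally rather than on every handshake.</p>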
    <div>
      <h2>Extensibility beyond the PKI</h2>
      <a href="#extensibility-beyond-the-pki">
        
      </a>
    </div>
    <p>At Cloudflare, we’re interested in pushing the state of the art forward by experimenting with new algorithms. In TLS, there are three main areas of experimentation: ciphers, key exchange algorithms, and authentication algorithms. Ciphers and key exchange algorithms are only dependent on two parties: the client and the server. This freedom allows us to deploy exciting new choices like <a href="/do-the-chacha-better-mobile-performance-with-cryptography/">ChaCha20-Poly1305</a> or <a href="/towards-post-quantum-cryptography-in-tls/">post-quantum key agreement</a> in lockstep with browsers. On the other hand, the authentication algorithms used in TLS are dependent on certificates, which introduces certificate authorities and the entire public key infrastructure into the mix.</p><p>Unfortunately, the public key infrastructure is very conservative in its choice of algorithms, making it harder to adopt newer cryptography for authentication algorithms in TLS. For instance, <a href="https://en.wikipedia.org/wiki/EdDSA">EdDSA</a>, a highly-regarded signature scheme, is not supported by certificate authorities, and <a href="https://www.mozilla.org/en-US/about/governance/policies/security-group/certs/policy/">root programs limit the certificates that will be signed.</a> With the emergence of quantum computing, experimenting with new algorithms is essential to determine which solutions are deployable and functional on the Internet.</p><p>Since delegated credentials introduce the ability to use new authentication key types without requiring changes to certificates themselves, this opens up a new area of experimentation. Delegated credentials can be used to provide a level of flexibility in the transition to post-quantum cryptography, by enabling new algorithms and modes of operation to coexist with the existing PKI infrastructure. It also enables tiny victories, like the ability to use smaller, faster Ed25519 signatures in TLS.</p>
    <div>
      <h2>Inside DCs</h2>
      <a href="#inside-dcs">
        
      </a>
    </div>
    <p>A delegated credential contains a public key and an expiry time. This bundle is signed with the certificate’s private key, over both the credential and the certificate itself, binding the delegated credential to the certificate for which it acts as “power of attorney”. A supporting client indicates its support for delegated credentials by including an extension in its Client Hello.</p><p>A server that supports delegated credentials composes the TLS Certificate Verify and Certificate messages as usual, but instead of signing with the certificate’s private key, it includes the certificate along with the DC, and signs with the DC’s private key. The certificate’s private key therefore only needs to be used to sign the DC.</p><p>Certificates used for signing delegated credentials require a special X.509 certificate extension (currently only available at <a href="https://docs.digicert.com/manage-certificates/certificate-profile-options/">DigiCert</a>). This requirement exists to avoid breaking assumptions people may have about the impact of temporary access to their keys on security, particularly in cases involving HSMs and the still unfixed <a href="/rfc-8446-aka-tls-1-3/">Bleichenbacher oracles</a> in older TLS versions. Temporary access to a key can be used to sign many delegated credentials that start far in the future, so support was made opt-in. Early versions of QUIC had <a href="https://www.nds.ruhr-uni-bochum.de/media/nds/veroeffentlichungen/2015/08/21/Tls13QuicAttacks.pdf">similar issues</a>, and ended up adopting TLS to fix them. Protocol evolution on the Internet requires working well with already existing protocols and their flaws.</p>
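<p>As a rough sketch of the structure described above: the signature covers both the credential’s fields and the delegating certificate, so the credential cannot be reattached to a different certificate. The byte layout below is an illustrative assumption; the real encoding is defined in the draft.</p>

```rust
/// Sketch of a delegated credential (field names are illustrative).
pub struct DelegatedCredential {
    pub dc_public_key: Vec<u8>, // the short-lived key used in handshakes
    pub valid_until_unix: u64,  // expiry time
    pub signature: Vec<u8>,     // made with the *certificate's* private key
}

/// Build the to-be-signed blob. Including the certificate itself binds the
/// credential to the certificate it acts as "power of attorney" for; the
/// certificate's key is needed only for this one signature.
pub fn to_be_signed(cert_der: &[u8], dc_public_key: &[u8], valid_until_unix: u64) -> Vec<u8> {
    let mut tbs = Vec::new();
    tbs.extend_from_slice(cert_der);
    tbs.extend_from_slice(&valid_until_unix.to_be_bytes());
    tbs.extend_from_slice(dc_public_key);
    tbs
}
```

<p>Handshake signatures are then made with the DC’s key; the certificate’s key never needs to be online at the edge.</p>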
    <div>
      <h2>Delegated Credentials at Cloudflare and Beyond</h2>
      <a href="#delegated-credentials-at-cloudflare-and-beyond">
        
      </a>
    </div>
    <p>Currently we use delegated credentials as a performance optimization for Geo Key Manager and Keyless SSL. Customers can update their certificates to include the special extension for delegated credentials, and we will automatically create delegated credentials and distribute them to the edge through Keyless SSL or Geo Key Manager. For more information, see the <a href="https://developers.cloudflare.com/ssl/keyless-ssl/dc/">documentation.</a> It also enables us to be more conservative about where we keep keys for customers, improving our security posture.</p><p>Delegated Credentials would be useless if they weren’t also supported by browsers and other HTTP clients. Christopher Patton, a former intern at Cloudflare, implemented support in Firefox and its underlying NSS security library. <a href="https://blog.mozilla.org/security/2019/11/01/validating-delegated-credentials-for-tls-in-firefox/">This feature is now in the Nightly versions of Firefox</a>. You can turn it on by activating the configuration option security.tls.enable_delegated_credentials at about:config. Studies are ongoing on how effective this will be in a wider deployment. There is also support for Delegated Credentials in BoringSSL.</p><p><i>"At Mozilla we welcome ideas that help to make the Web PKI more robust. The Delegated Credentials feature can help to provide secure and performant TLS connections for our users, and we're happy to work with Cloudflare to help validate this feature."</i></p><p></p><p>— <b>Thyla van der Merwe</b>, Cryptography Engineering Manager at Mozilla</p><p>One open issue is the question of client clock accuracy. Until we have a wide-scale study we won’t know how many connections using delegated credentials will break because of the 24-hour time limit that is imposed.
Some clients, in particular mobile clients, may have inaccurately set clocks, the root cause of one third of all <a href="https://www.cloudflare.com/learning/ssl/common-errors/">certificate errors</a> in Chrome. Part of the way that we’re aiming to solve this problem is through standardizing and improving <a href="/roughtime/">Roughtime</a>, so web browsers and other services that need to validate certificates can do so independent of the client clock.</p><p>Cloudflare’s global scale means that we see connections from every corner of the world, and from many different kinds of connection and device. That reach enables us to find rare problems with the deployability of protocols. For example, our <a href="/why-tls-1-3-isnt-in-browsers-yet/">early deployment</a> helped inform the development of the TLS 1.3 standard. As we enable developing protocols like delegated credentials, we learn about obstacles that inform and affect their future development.</p>
    <div>
      <h2>Conclusion</h2>
      <a href="#conclusion">
        
      </a>
    </div>
    <p>As new protocols emerge, we'll continue to play a role in their development and bring their benefits to our customers. Today’s announcement of a technology that overcomes some limitations of Keyless SSL is just one example of how Cloudflare takes part in improving the Internet not just for our customers, but for everyone. During the standardization process of turning the draft into an RFC, we’ll continue to maintain our implementation and come up with new ways to apply delegated credentials.</p> ]]></content:encoded>
            <category><![CDATA[Crypto Week]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[Keyless SSL]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[Research]]></category>
            <category><![CDATA[Cryptography]]></category>
            <guid isPermaLink="false">21MHSnISq1AaWWdB5lruxJ</guid>
            <dc:creator>Nick Sullivan</dc:creator>
            <dc:creator>Watson Ladd</dc:creator>
        </item>
        <item>
            <title><![CDATA[Announcing cfnts: Cloudflare's implementation of NTS in Rust]]></title>
            <link>https://blog.cloudflare.com/announcing-cfnts/</link>
            <pubDate>Thu, 31 Oct 2019 13:00:00 GMT</pubDate>
            <description><![CDATA[ Several months ago we announced that we were providing a new public time service. Part of what we were providing was the first major deployment of the new Network Time Security protocol, with a newly written implementation of NTS in Rust.  ]]></description>
            <content:encoded><![CDATA[ <p></p><p>Several months ago we announced that we were providing a <a href="/secure-time/">new public time service.</a> Part of what we were providing was the first major deployment of the new Network Time Security (NTS) protocol, with a newly written implementation of NTS in Rust. In the process, we received helpful advice from the NTP community, especially from the NTPSec and Chrony projects. We’ve also participated in several interoperability events. Now we are returning something to the community: Our implementation, cfnts, is now <a href="https://github.com/cloudflare/cfnts">open source</a> and we welcome your pull requests and issues.</p><p>The journey from a blank source file to a working, deployed service was a lengthy one, and it involved many people across multiple teams.</p><hr /><p><i>"Correct time is a necessity for most security protocols in use on the Internet. Despite this, secure time transfer over the Internet has previously required complicated configuration on a case by case basis. With the introduction of NTS, secure time synchronization will finally be available for everyone. It is a small, but important, step towards increasing security in all systems that depend on accurate time. I am happy that Cloudflare are sharing their NTS implementation. A diversity of software with NTS support is important for quick adoption of the new protocol."</i></p><p></p><p>— <b>Marcus Dansarie</b>, coauthor of the <a href="https://datatracker.ietf.org/doc/draft-ietf-ntp-using-nts-for-ntp/">NTS specification</a></p><hr />
    <div>
      <h2>How NTS works</h2>
      <a href="#how-nts-works">
        
      </a>
    </div>
    <p>NTS is structured as a suite of two sub-protocols as shown in the figure below. The first is the Network Time Security Key Exchange (NTS-KE), which is always conducted over Transport Layer Security (TLS) and handles the creation of key material and parameter negotiation for the second protocol. The second is <a href="https://tools.ietf.org/html/rfc5905">NTPv4</a>, the current version of the NTP protocol, which allows the client to synchronize their time from the remote server.</p><p>In order to maintain the scalability of NTPv4, it was important that the server not maintain per-client state. A very small server can serve millions of NTP clients. Maintaining this property while providing security is achieved with cookies that the server provides to the client that contain the server state.</p><p>In the first stage, the client sends a request to the NTS-KE server and gets a response via TLS. This exchange carries out a number of functions:</p><ul><li><p>Negotiates the <a href="https://en.wikipedia.org/wiki/Authenticated_encryption">AEAD</a> algorithm to be used in the second stage.</p></li><li><p>Negotiates the second protocol. Currently, the standard only defines how NTS works with NTPv4.</p></li><li><p>Negotiates the NTP server IP address and port.</p></li><li><p>Creates cookies for use in the second stage.</p></li><li><p>Creates two symmetric keys (C2S and S2C) from the TLS session via exporters.</p></li></ul>
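<p>As a sketch, the client could hold the outcome of the NTS-KE stage in a structure like the following. The field names and types are illustrative assumptions, not cfnts’s actual API.</p>

```rust
/// What a client holds after NTS-KE, mirroring the list above
/// (illustrative; not cfnts's actual types).
pub struct NtsKeResult {
    pub aead_algorithm: u16,       // negotiated AEAD for the second stage
    pub next_protocol: u16,        // currently only NTPv4 is defined
    pub ntp_server: (String, u16), // negotiated NTP server address and port
    pub cookies: Vec<Vec<u8>>,     // opaque cookies carrying server state
    pub c2s_key: [u8; 32],         // client-to-server key from the TLS exporter
    pub s2c_key: [u8; 32],         // server-to-client key from the TLS exporter
}

impl NtsKeResult {
    /// Each NTP request spends one cookie; each response replenishes it,
    /// so the client can keep synchronizing without redoing NTS-KE.
    pub fn take_cookie(&mut self) -> Option<Vec<u8>> {
        self.cookies.pop()
    }
}
```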
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6ELZCQXF1AwQPqScsl7VRM/3a9b9848f80df1c72126b485ebbba15d/overview-of-NTP-_2x-1.png" />
            
            </figure><p>In the second stage, the client securely synchronizes the clock with the negotiated NTP server. To synchronize securely, the client sends NTPv4 packets with four special extensions:</p><ul><li><p><i>Unique Identifier Extension</i> contains a random nonce used to prevent replay attacks.</p></li><li><p><i>NTS Cookie Extension</i> contains one of the cookies that the client stores. Since currently only the client remembers the two AEAD keys (C2S and S2C), the server needs to use the cookie from this extension to extract the keys. Each cookie contains the keys encrypted under a secret key the server has.</p></li><li><p><i>NTS Cookie Placeholder Extension</i> is a signal from the client to request additional cookies from the server. This extension is needed to make sure that the response is not much longer than the request to prevent amplification attacks.</p></li><li><p><i>NTS Authenticator and Encrypted Extension Fields Extension</i> contains a ciphertext from the AEAD algorithm with C2S as a key and with the NTP header, timestamps, and all the previously mentioned extensions as associated data. Other possible extensions can be included as encrypted data within this field. Without this extension, the timestamp can be spoofed.</p></li></ul><p>After getting a request, the server sends a response back to the client echoing the <i>Unique Identifier Extension</i> to prevent replay attacks, the <i>NTS Cookie Extension</i> to provide the client with more cookies, and the <i>NTS Authenticator and Encrypted Extension Fields Extension</i> with an AEAD ciphertext with S2C as a key. But in the server response, instead of sending the <i>NTS Cookie Extension</i> in plaintext, it needs to be encrypted with the AEAD to provide unlinkability of the NTP requests.</p><p>The second handshake can be repeated many times without going back to the first stage since each request and response gives the client a new cookie. 
The expensive public key operations in TLS are thus amortized over a large number of requests. Furthermore, specialized timekeeping devices like FPGA implementations only need to implement a few symmetric cryptographic functions and can delegate the complex TLS stack to a different device.</p>
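<p>The cookie mechanism is what keeps the server stateless. The toy sketch below illustrates only the shape of the scheme: XOR stands in for the AEAD that a real server must use to encrypt and authenticate its cookies.</p>

```rust
/// Toy illustration of the stateless-cookie idea: the server seals the
/// per-client keys (C2S, S2C) under a secret only it knows, so the cookie
/// itself carries all per-client state. XOR stands in for the real AEAD
/// here; a production server must use an authenticated cipher.
pub fn seal_cookie(server_secret: &[u8; 32], c2s: &[u8; 32], s2c: &[u8; 32]) -> Vec<u8> {
    c2s.iter()
        .chain(s2c.iter())
        .enumerate()
        .map(|(i, b)| b ^ server_secret[i % 32])
        .collect()
}

/// On each request, the server recovers the keys from the cookie alone:
/// no per-client table, so scalability is preserved.
pub fn open_cookie(server_secret: &[u8; 32], cookie: &[u8]) -> Option<([u8; 32], [u8; 32])> {
    if cookie.len() != 64 {
        return None; // malformed cookie
    }
    let mut c2s = [0u8; 32];
    let mut s2c = [0u8; 32];
    for i in 0..32 {
        c2s[i] = cookie[i] ^ server_secret[i];
        s2c[i] = cookie[32 + i] ^ server_secret[i];
    }
    Some((c2s, s2c))
}
```

<p>Because the keys travel inside the cookie, a small server can serve millions of clients without remembering any of them.</p>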
    <div>
      <h2>Why Rust?</h2>
      <a href="#why-rust">
        
      </a>
    </div>
    <p>While many of our services are written in <a href="/tag/go/">Go</a>, and we have considerable experience on the Crypto team with Go, a garbage collection pause in the middle of responding to an NTP packet would negatively impact accuracy. We picked <a href="/tag/rust/">Rust</a> because of its zero-cost abstractions and useful safety features.</p><ul><li><p><b>Memory safety</b> After <a href="/answering-the-critical-question-can-you-get-private-ssl-keys-using-heartbleed/">Heartbleed</a>, <a href="/incident-report-on-memory-leak-caused-by-cloudflare-parser-bug/">Cloudbleed</a>, and the <a href="https://docs.microsoft.com/en-us/security-updates/securitybulletins/2009/ms09-044">steady</a> <a href="https://googleprojectzero.blogspot.com/2019/08/in-wild-ios-exploit-chain-1.html">drip</a> <a href="https://cve.mitre.org/cgi-bin/cvekey.cgi?keyword=%22heap+overflow%22">of</a> <a href="https://cve.mitre.org/cgi-bin/cvekey.cgi?keyword=%22buffer+overflow%22">vulnerabilities</a> caused by C’s lack of memory safety, it’s clear that C is not a good choice for new software dealing with untrusted inputs. The obvious route to memory safety is garbage collection, but garbage collection imposes a substantial runtime overhead, while Rust provides memory safety at much lower cost.</p></li><li><p><b>Non-nullability</b> Null pointers are an edge case that is frequently not handled properly. Rust explicitly marks optionality, so all references in Rust can be safely dereferenced. The type system ensures that option types are properly handled.</p></li><li><p><b>Thread safety</b>  Data-race prevention is another key feature of Rust. Rust’s ownership model ensures that all cross-thread accesses are synchronized by default. While not a panacea, this eliminates a major class of bugs.</p></li><li><p><b>Immutability</b> Separating types into mutable and immutable is very important for reducing bugs.
For example, in Java, when you pass an object into a function as a parameter, after the function is finished, you will never know whether the object has been mutated or not. Rust allows you to pass the object reference into the function and still be assured that the object is not mutated.</p></li><li><p><b>Error handling</b>  Rust result types help with ensuring that operations that can produce errors are identified and a choice made about the error, even if that choice is passing it on.</p></li></ul><p>While Rust provides safety with zero overhead, coding in Rust involves understanding linear types and for us a new language. In this case the importance of security and performance meant we chose Rust over a potentially easier task in Go.</p>
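<p>These properties are easiest to see in miniature. The two functions below are illustrative examples of the points above, not code from cfnts.</p>

```rust
// The listed language properties in miniature (illustrative examples).

/// Non-nullability and error handling: the missing case is an explicit
/// `Option`, and the failure path is part of the signature via `Result`.
pub fn parse_poll_interval(field: Option<&str>) -> Result<u8, String> {
    let s = field.ok_or_else(|| "poll interval missing".to_string())?;
    s.parse::<u8>().map_err(|e| e.to_string())
}

/// Immutability: taking `&[u8]` guarantees to the caller that the packet
/// is not modified, unlike passing a mutable object in Java.
pub fn packet_version(packet: &[u8]) -> Option<u8> {
    // The NTP version number lives in bits 3-5 of the first header byte.
    packet.first().map(|b| (b >> 3) & 0x07)
}
```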
    <div>
      <h2>Dependencies we use</h2>
      <a href="#dependencies-we-use">
        
      </a>
    </div>
    <p>Because of our scale and for DDoS protection we needed a highly scalable server. For UDP protocols without the concept of a connection, the server can respond to one packet at a time easily, but for TCP this is more complex. Originally we thought about using <a href="https://github.com/tokio-rs/tokio">Tokio</a>. However, at the time Tokio suffered from scheduler <a href="https://github.com/tokio-rs/tokio/issues/449">problems that had caused other teams some issues</a>. As a result we decided to use <a href="https://github.com/tokio-rs/mio">Mio</a> directly, basing our work on the examples in <a href="https://github.com/ctz/rustls">Rustls</a>.</p><p>We decided to use Rustls over OpenSSL or BoringSSL because of the crate's consistent error codes and default support for authentication that is difficult to disable accidentally. While there are some features that are not yet supported, it got the job done for our service.</p>
    <div>
      <h2>Other engineering choices</h2>
      <a href="#other-engineering-choices">
        
      </a>
    </div>
    <p>More important than our choice of programming language was our implementation strategy. A working, fully featured NTP implementation is a complicated program involving a <a href="https://en.wikipedia.org/wiki/Phase-locked_loop">phase-locked loop.</a> These have a difficult reputation due to their nonlinear nature, beyond the usual complexities of closed loop control. The response of a phase-locked loop to a disturbance can be estimated if the loop is locked and the disturbance small. However, lock acquisition, large disturbances, and the necessary filtering in NTP are all hard to analyze mathematically since they are not captured in the linear models applied for small scale analysis. While NTP works with the total phase, unlike the phase-locked loops of electrical engineering, there are still nonlinear elements. In NTP, testing changes to this loop requires weeks of operation to determine the performance, as the loop responds very slowly.</p><p>Computer clocks are generally accurate over short periods, while networks are plagued with inconsistent delays. This <a href="https://tf.nist.gov/general/pdf/1551.pdf">demands a slow response</a>. Changes we make to our service have taken hours to have an effect, as the clients slowly adapt to the new conditions. While <a href="https://tools.ietf.org/html/rfc5905">RFC 5905</a> provides lots of details on an algorithm to adjust the clock, later implementations such as <a href="https://chrony.tuxfamily.org/">chrony</a> have improved upon the algorithm through much more sophisticated nonlinear filters.</p><p>Rather than implement these more sophisticated algorithms, we let chrony adjust the clock of our servers, copy the state variables in the header from chrony, and adjust the dispersion and root delay according to the formulas given in the RFC. This strategy let us focus on the new protocols.</p>
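<p>The bookkeeping this strategy requires can be sketched as follows. The formulas are a simplified reading of RFC 5905’s root delay and root dispersion accumulation, not cfnts’s exact code.</p>

```rust
/// PHI: the assumed frequency tolerance of a local clock (15 ppm in RFC 5905).
const PHI: f64 = 15e-6;

/// Root delay accumulates the round-trip delay back to the reference clock.
pub fn root_delay(upstream_root_delay: f64, delay_to_upstream: f64) -> f64 {
    upstream_root_delay + delay_to_upstream
}

/// Root dispersion accumulates the measurement dispersion and grows by PHI
/// for every second elapsed since the last synchronization.
pub fn root_dispersion(
    upstream_root_disp: f64,
    dispersion_to_upstream: f64,
    seconds_since_sync: f64,
) -> f64 {
    upstream_root_disp + dispersion_to_upstream + PHI * seconds_since_sync
}
```

<p>The growing dispersion term is why a server that has not synchronized recently advertises a wider error bound to its own clients.</p>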
    <div>
      <h2>Prague</h2>
      <a href="#prague">
        
      </a>
    </div>
    <p>Part of what the Internet Engineering Task Force (IETF) does is organize events like hackathons where implementers of a new standard can get together and try to make their stuff work with one another. This exposes bugs and infelicities of language in the standard and the implementations. We attended the IETF 104 hackathon to develop our server and make it work with other implementations. The NTP working group members were extremely generous with their time, and during the process we uncovered a few issues relating to the exact way one has to handle ALPN with older OpenSSL versions.</p><p>At the IETF 104 in Prague we had a working client and server for NTS-KE by the end of the hackathon. This was a good amount of progress considering we started with nothing. However, without implementing NTP we didn’t actually know that our server and client were computing the right thing. That would have to wait for later rounds of testing.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1YlPzBw5H4kOUfQWJfgFXq/486254883416f6509d7abd54ec911ff1/Screen-Shot-2019-10-28-at-12.58.07-PM.png" />
            
            </figure><p>Wireshark during some NTS debugging</p>
    <div>
      <h2>Crypto Week</h2>
      <a href="#crypto-week">
        
      </a>
    </div>
    <p>As <a href="/secure-time/">Crypto Week 2019</a> approached we were busily writing code. All of the NTP protocol had to be implemented, together with the connection between the NTP and NTS-KE parts of the server. We also had to deploy processes to synchronize the ticket encrypting keys around the world and work on reconfiguring our own timing infrastructure to support this new service.</p><p>With a few weeks to go we had a working implementation, but we needed servers and clients out there to test with. Because our server only supports TLS 1.3, which had only just landed in OpenSSL, there were some compatibility problems.</p><p>We ended up compiling a chrony branch with NTS support and NTPsec ourselves and testing against time.cloudflare.com. We also tested our client against test servers set up by the chrony and NTPsec projects, in the hope that this would expose bugs and help our implementations work nicely together. After a few lengthy days of debugging, we found out that our nonce length wasn’t exactly in accordance with the spec, which was quickly fixed. The NTPsec project was extremely helpful in this effort. Of course, this was the day that our <a href="https://sfist.com/2019/05/31/power-outage-hits-soma-china-basin/">office had a blackout</a>, so the testing happened outside in Yerba Buena Gardens.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3JaJ1t9QsDElwkgvBo5Uo/94b282d0f621c730e467495d03a486d2/pasted-image-0-3.png" />
            
            </figure><p>Yerba Buena commons. Taken by Wikipedia user Beyond My Ken. CC-BY-SA</p><p>During the deployment of time.cloudflare.com, we had to open up our firewall to incoming NTP packets. Because of NTP reflection attacks, we had kept UDP port 123 closed on our routers since the early days of Cloudflare’s network. Since clients sometimes also send NTP packets from source port 123, it’s impossible for NTP servers to filter reflection attacks without parsing the contents of the NTP packet, which routers have difficulty doing. In order to protect Cloudflare infrastructure we got an entire subnet just for the time service, so it could be aggressively throttled and rerouted in case of massive DDoS attacks. This is an exceptional case: most edge services at Cloudflare run on every available IP.</p>
    <div>
      <h2>Bug fixes</h2>
      <a href="#bug-fixes">
        
      </a>
    </div>
    <p>Shortly after the public launch, we discovered that older Windows versions shipped with NTP version 3, and our server only spoke version 4. This was easy to fix, since the timestamp fields have not moved between NTP versions: we echo the version back, and the remaining NTP version 3 clients understand what we meant.</p><p>Also tricky was the failure of Network Time Foundation ntpd clients to expand the polling interval. It turns out that one has to echo back the client’s polling interval to have the polling interval expand. Chrony does not use the polling interval from the server, and so was not affected by this incompatibility.</p><p>Both of these issues were fixed in ways suggested by other NTP implementers who had run into these problems themselves. We thank Miroslav Lichter tremendously for telling us exactly what the problem was, and the members of the Cloudflare community who posted packet captures demonstrating these issues.</p>
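<p>The version-echo fix is small enough to sketch directly. The first byte of an NTP header packs the leap indicator (2 bits), version (3 bits), and mode (3 bits); the function below is an illustrative rewrite of the idea, not cfnts’s actual code.</p>

```rust
/// Echo the client's NTP version number back instead of always answering
/// with version 4, so older NTPv3 clients keep working (illustrative sketch).
pub fn response_first_byte(request_first_byte: u8, leap_indicator: u8) -> u8 {
    let client_version = (request_first_byte >> 3) & 0x07; // bits 3-5: version
    let mode = 4; // mode 4 = server response
    (leap_indicator << 6) | (client_version << 3) | mode
}
```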
    <div>
      <h2>Continued improvement</h2>
      <a href="#continued-improvement">
        
      </a>
    </div>
    <p>The original production version of cfnts was not particularly object oriented and several contributors were just learning Rust. As a result there was quite a bit of unwrap and unnecessary mutability flying around. Much of the code was in functions even when it could profitably be attached to structures. All of this had to be restructured. Keep in mind that some of the best code running in the real world has been written, rewritten, and sometimes rewritten again; this is a good thing.</p><p>As an internal project we relied on Cloudflare’s internal tooling for building, testing, and deploying code. These were replaced with tools available to everyone, like Docker, to ensure anyone can contribute. Our repository is integrated with Circle CI, ensuring that all contributions are automatically tested. In addition to unit tests we test the entire end-to-end functionality of getting a measurement of the time from a server.</p>
    <div>
      <h2>The Future</h2>
      <a href="#the-future">
        
      </a>
    </div>
    <p>NTPsec has already released support for NTS but we see very little usage. Please try turning on NTS if you use NTPsec and see how it works with time.cloudflare.com.  As the draft advances through the standards process the protocol will undergo an incompatible change when the identifiers are updated and assigned out of the IANA registry instead of being experimental ones, so this is very much an experiment. Note that your daemon will need TLS 1.3 support and so could require manually compiling OpenSSL and then linking against it.</p><p>We’ve also added our time service to the public NTP pool. The NTP pool is a widely used volunteer-maintained service that provides NTP servers geographically spread across the world. Unfortunately, NTS doesn’t currently work well with the pool model, so for the best security, we recommend enabling NTS and using time.cloudflare.com and other NTS supporting servers.</p><p>In the future, we’re hoping that more clients support NTS, and have licensed our code liberally to enable this. We would love to hear if you incorporate it into a product and welcome contributions to make it more useful.</p><p>We’re also encouraged to see that Netnod has a <a href="https://www.netnod.se/time-and-frequency/netnod-launch-one-of-the-first-nts-enabled-time-services-in-the-world">production NTS service</a> at nts.ntp.se. The more time services and clients that adopt NTS, the more secure the Internet will be.</p>
    <div>
      <h2>Acknowledgements</h2>
      <a href="#acknowledgements">
        
      </a>
    </div>
    <p>Tanya Verma and <a href="/author/gabbi/">Gabbi Fisher</a> were major contributors to the code, especially the configuration system and the client code. We’d also like to thank Gary Miller, Miroslav Lichter, and all the people at Cloudflare who set up their laptops and home machines to point to time.cloudflare.com for early feedback.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/0pc2IWY3wfq8UQuTPPMAB/2d7651cc321580194ca41bbbb6b3e7e5/tales-from-the-crypto-team_2x--1--1.png" />
            
            </figure> ]]></content:encoded>
            <category><![CDATA[Crypto Week]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[Research]]></category>
            <category><![CDATA[Cryptography]]></category>
            <guid isPermaLink="false">1T4MGuuPm0kE0qsJuGUTw5</guid>
            <dc:creator>Watson Ladd</dc:creator>
            <dc:creator>Pop Chunhapanya</dc:creator>
        </item>
        <item>
            <title><![CDATA[The TLS Post-Quantum Experiment]]></title>
            <link>https://blog.cloudflare.com/the-tls-post-quantum-experiment/</link>
            <pubDate>Wed, 30 Oct 2019 13:00:00 GMT</pubDate>
            <description><![CDATA[ In a June 2019 experiment with Google, we implemented two post-quantum key exchanges, integrated them into our TLS stack and deployed the implementation on edge servers and in Chrome Canary clients. ]]></description>
            <content:encoded><![CDATA[ <p></p><p>In June, we <a href="/towards-post-quantum-cryptography-in-tls/">announced</a> a wide-scale post-quantum experiment with Google. We implemented two post-quantum (i.e., not yet known to be broken by quantum computers) key exchanges, integrated them into our TLS stack and deployed the implementation on our edge servers and in Chrome Canary clients. The goal of the experiment was to evaluate the performance and feasibility of deployment in TLS of two post-quantum key agreement ciphers.</p><p>In our <a href="/towards-post-quantum-cryptography-in-tls/">previous blog post</a> on <a href="https://www.cloudflare.com/learning/ssl/quantum/what-is-post-quantum-cryptography/">post-quantum cryptography</a>, we described differences between those two ciphers in detail. In case you didn’t have a chance to read it, we include a quick recap here. One characteristic of post-quantum key exchange algorithms is that the public keys are much larger than those used by "classical" algorithms. This will have an impact on the duration of the TLS handshake. For our experiment, we chose two algorithms: <a href="/towards-post-quantum-cryptography-in-tls/#cecpq2b-isogeny-based-post-quantum-tls">isogeny</a>-based SIKE and <a href="https://en.wikipedia.org/wiki/Lattice-based_cryptography">lattice</a>-based HRSS. The former has short key sizes (~330 bytes) but has a high computational cost; the latter has larger key sizes (~1100 bytes), but is a few orders of magnitude faster.</p><p>During NIST’s <i>Second PQC Standardization Conference</i>, Nick Sullivan <a href="https://csrc.nist.gov/CSRC/media/Presentations/measuring-tls-key-exchange-with-post-quantum-kem/images-media/sullivan-session-1-paper-pqc2019.pdf">presented</a> our approach to this experiment and some initial results. Quite accurately, he compared NTRU-HRSS to an ostrich and SIKE to a turkey—one is big and fast and the other is small and slow.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5600IZEbaJcZ2V0k94geH3/ab7f92737bba59c2ca9f53484bcbeba8/Screen-Shot-2019-10-29-at-1.33.21-PM.png" />
            
            </figure>
    <div>
      <h2>Setup &amp; Execution</h2>
      <a href="#setup-execution">
        
      </a>
    </div>
    <p>We based our experiment on TLS 1.3. Cloudflare operated the server-side TLS connections and Google Chrome (Canary and Dev builds) represented the client side of the experiment. We enabled both CECPQ2 (HRSS + X25519) and CECPQ2b (SIKE/p434 + X25519) key-agreement algorithms on all TLS-terminating edge servers. Since the post-quantum algorithms are considered experimental, the X25519 key exchange serves as a fallback to ensure the classical security of the connection.</p><p>Clients participating in the experiment were split into three groups: those that initiated the TLS handshake with post-quantum CECPQ2, CECPQ2b, or non-post-quantum X25519 public keys. Each group represented approximately one third of the Chrome Canary population participating in the experiment.</p><p>In order to distinguish between clients participating in or excluded from the experiment, we added a custom extension to the TLS handshake. It worked as a simple flag sent by clients and echoed back by Cloudflare edge servers. This allowed us to measure the duration of TLS handshakes only for clients participating in the experiment.</p><p>For each connection, we collected telemetry metrics. The most important metric was the server-side TLS handshake duration, defined as the time between receiving the Client Hello and Client Finished messages. The diagram below shows details of what was measured and how post-quantum key exchange was integrated with TLS 1.3.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/k6tBsggyYQ4ZSdjewV5kh/fe555ffe2b4fbf517cd88167416ee9b0/image1.png" />
            
            </figure><p>The experiment ran for 53 days in total, between August and October. During this time we collected millions of data samples, representing 5% of (anonymized) TLS connections that contained the extension signaling that the client was part of the experiment. We carried out the experiment in two phases.</p><p>In the first phase of the experiment, each client was assigned to use one of the three key exchange groups, and each client offered the same key exchange group for every connection. We collected over 10 million records over 40 days.</p><p>In the second phase of the experiment, client behavior was modified so that each client randomly chose which key exchange group to offer for each new connection, allowing us to directly compare the performance of each algorithm on a per-client basis. Data collection for this phase lasted 13 days and we collected 270 thousand records.</p>
    <div>
      <h2>Results</h2>
      <a href="#results">
        
      </a>
    </div>
    <p>We now describe our server-side measurement results. Client-side results are described at <a href="https://www.imperialviolet.org/2019/10/30/pqsivssl.html">https://www.imperialviolet.org/2019/10/30/pqsivssl.html</a>.</p>
    <div>
      <h3>What did we find?</h3>
      <a href="#what-did-we-find">
        
      </a>
    </div>
    <p>The primary metric we collected for each connection was the server-side handshake duration. The below histograms show handshake duration timings for all client measurements gathered in the first phase of the experiment, as well as breakdowns into the top five operating systems. The operating system breakdowns shown are restricted to only desktop/laptop devices except for Android, which consists of only mobile devices.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3DhkWkx8LGaOZpiziA6cWv/850d3d3fbd38bfd8b80c0550158af323/Screen-Shot-2019-10-29-at-2.04.13-PM.png" />
            
            </figure><p>It’s clear from the above plots that for most clients, CECPQ2b performs worse than CECPQ2 and CONTROL. Thus, the small key size of CECPQ2b does not make up for its large computational cost—the ostrich outpaces the turkey.</p>
    <div>
      <h3>Digging a little deeper</h3>
      <a href="#digging-a-little-deeper">
        
      </a>
    </div>
    <p>This means we’re done, right? Not quite. We are interested in determining if there are <i>any</i> populations of TLS clients for which CECPQ2b consistently outperforms CECPQ2. This requires taking a closer look at the long tail of handshake durations. The below plots show <a href="https://en.wikipedia.org/wiki/Cumulative_distribution_function">cumulative distribution functions</a> (CDFs) of handshake timings zoomed in on the 80th percentile (i.e., showing the slowest 20% of handshakes).</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3ZK481qKXDJSN9exzz2DnN/eca75f3df9dadc29c41416632395a0a5/Screen-Shot-2019-10-29-at-2.04.33-PM-4.png" />
            
            </figure><p>Here, we start to see something interesting. For Android, Linux, and Windows devices, there is a <i>crossover</i> point where CECPQ2b actually starts to outperform CECPQ2 (Android: ~94th percentile, Linux: ~92nd percentile, Windows: ~95th percentile). macOS and ChromeOS do not appear to have these crossover points.</p><p>These effects are small but statistically significant in some cases. The below table shows approximate 95% <a href="https://en.wikipedia.org/wiki/Confidence_interval">confidence intervals</a> for the 50th (median), 95th, and 99th percentiles of handshake durations for each key exchange group and device type, calculated using <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.mstats.mquantiles_cimj.html">Maritz-Jarrett estimators</a>. The numbers within square brackets give the lower and upper bounds on our estimates for each percentile of the “true” distribution of handshake durations based on the samples collected in the experiment. For example, with a 95% confidence level we can say that the 99th percentile of handshake durations for CECPQ2 on Android devices lies between 4057ms and 4478ms, while the 99th percentile for CECPQ2b lies between 3276ms and 3646ms. Since the intervals do not overlap, we say that with <i>statistical significance</i>, the experiment indicates that CECPQ2b performs better than CECPQ2 for the slowest 1% of Android connections. Configurations where CECPQ2 or CECPQ2b outperforms the other with statistical significance are marked with green in the table.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/QEGlVhyK9fxu4gFAlhIq8/a8e8271d4965d40ce64b55ed634e06be/Screen-Shot-2019-10-29-at-2.23.52-PM.png" />
            
            </figure>
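The table's interval estimates come from Maritz-Jarrett estimators; for intuition, the same kind of percentile confidence interval can be approximated with a simple bootstrap. This is an illustrative sketch over synthetic handshake timings, not the analysis code used in the experiment:

```python
import random

def percentile(samples, p):
    """Nearest-rank percentile of a list of samples."""
    s = sorted(samples)
    return s[min(len(s) - 1, int(p / 100 * len(s)))]

def bootstrap_ci(samples, p, n_resamples=300, alpha=0.05, seed=7):
    """Approximate a (1 - alpha) confidence interval for the p-th
    percentile by resampling the data with replacement."""
    rng = random.Random(seed)
    estimates = sorted(
        percentile([rng.choice(samples) for _ in samples], p)
        for _ in range(n_resamples)
    )
    return (estimates[int(alpha / 2 * n_resamples)],
            estimates[int((1 - alpha / 2) * n_resamples) - 1])

# Synthetic handshake durations (ms) standing in for the real telemetry.
rng = random.Random(0)
durations = [rng.lognormvariate(4.5, 0.6) for _ in range(1000)]
low, high = bootstrap_ci(durations, 99)
```

As in the table, two configurations can be called significantly different when their intervals do not overlap.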
    <div>
      <h3>Per-client comparison</h3>
      <a href="#per-client-comparison">
        
      </a>
    </div>
    <p>A second phase of the experiment directly examined the performance of each key exchange algorithm for individual clients, where a client is defined to be a unique (anonymized) IP address and user agent pair. Instead of choosing a single key exchange algorithm for the duration of the experiment, clients randomly selected one of the experiment configurations for each new connection. Although the duration and sample size were limited for this phase of the experiment, we collected at least three handshake measurements for each group configuration from 3900 unique clients.</p><p>The plot below shows for each of these clients the difference in latency between CECPQ2 and CECPQ2b, taking the minimum latency sample for each key exchange group as the representative value. The CDF plot shows that for 80% of clients, CECPQ2 outperformed or matched CECPQ2b, and for 99% of clients, the latency gap remained within 70ms. At a high level, this indicates that very few clients performed significantly worse with CECPQ2 over CECPQ2b.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3FdAbGJuJBIDIjT0zExyvF/2a4cbf6befc35e6faa50492ca34d1dc0/TLS-handshake-latency-gap-per-client--All-_logspace_symlog_randomconfig.png" />
            
            </figure>
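The per-client analysis above can be sketched as follows. The record layout and client identifiers are toy stand-ins for the real (anonymized) telemetry:

```python
from collections import defaultdict

# Toy per-connection records: (client_id, key_exchange_group, latency_ms).
# Real client IDs were anonymized IP address and user agent pairs.
records = [
    ("client-a", "CECPQ2", 110.0), ("client-a", "CECPQ2", 104.0),
    ("client-a", "CECPQ2b", 131.0), ("client-a", "CECPQ2b", 126.0),
    ("client-b", "CECPQ2", 95.0), ("client-b", "CECPQ2b", 90.0),
    ("client-c", "CECPQ2", 220.0), ("client-c", "CECPQ2b", 205.0),
]

# Representative value per (client, group): the minimum observed latency.
best = defaultdict(dict)
for client, group, latency in records:
    cur = best[client].get(group)
    best[client][group] = latency if cur is None else min(cur, latency)

# Gap = CECPQ2 minus CECPQ2b; a negative gap means CECPQ2 was faster.
gaps = [g["CECPQ2"] - g["CECPQ2b"] for g in best.values()
        if "CECPQ2" in g and "CECPQ2b" in g]

# Fraction of clients for which CECPQ2 matched or beat CECPQ2b.
frac = sum(1 for d in gaps if d <= 0) / len(gaps)
```

Sorting the gaps and plotting their cumulative fractions yields the CDF shown above.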
    <div>
      <h3>Do other factors impact the latency gap?</h3>
      <a href="#do-other-factors-impact-the-latency-gap">
        
      </a>
    </div>
    <p>We looked at a number of other factors—including session resumption, IP version, and network location—to see if they impacted the latency gap between CECPQ2 and CECPQ2b. These factors impacted the overall handshake latency, but we did not find that any made a significant impact on the latency gap between post-quantum ciphers. We share some interesting observations from this analysis below.</p>
    <div>
      <h4>Session resumption</h4>
      <a href="#session-resumption">
        
      </a>
    </div>
    <p>Approximately 53% of all connections in the experiment were completed with <a href="https://tools.ietf.org/html/rfc8446#section-2.2">TLS handshake resumption</a>. However, the percentage of resumed connections varied significantly based on the device configuration. Connections from mobile devices were only resumed ~25% of the time, while between 40% and 70% of connections from laptop/desktop devices were resumed. Additionally, resumption provided a 30% to 50% speedup across all device types.</p>
    <div>
      <h4>IP version</h4>
      <a href="#ip-version">
        
      </a>
    </div>
    <p>We also examined the impact of IP version on handshake latency. Only 12.5% of the connections in the experiment used IPv6. These connections were 20-40% faster than IPv4 connections for desktop/laptop devices, but ~15% slower for mobile devices. This could be an artifact of IPv6 being generally deployed on newer devices with faster processors. For Android, the experiment was only run on devices with more modern processors, which perhaps eliminated the bias.</p>
    <div>
      <h4>Network location</h4>
      <a href="#network-location">
        
      </a>
    </div>
    <p>The slow connections making up the long tail of handshake durations were not isolated to a few countries, Autonomous Systems (ASes), or subnets, but originated from a globally diverse set of clients. We did not find a correlation between the relative performance of the two post-quantum key exchange algorithms based on these factors.</p>
    <div>
      <h2>Discussion</h2>
      <a href="#discussion">
        
      </a>
    </div>
    <p>We found that CECPQ2 (the ostrich) outperformed CECPQ2b (the turkey), for the majority of connections in the experiment, indicating that fast algorithms with large keys may be more suitable for TLS than slow algorithms with small keys. However, we observed the opposite—that CECPQ2b outperformed CECPQ2—for the slowest connections on some devices, including Windows computers and Android mobile devices. One possible explanation for this is packet fragmentation and packet loss. The maximum size of TCP packets that can be sent across a network is limited by the maximum transmission unit (MTU) of the network path, which is often ~1400 bytes. During the TLS handshake the server responds to the client with its public key and ciphertext, the combined size of which exceeds the MTU, so it is likely that handshake messages must be split across multiple TCP packets. This increases the risk of lost packets and delays due to retransmission. A repeat of this experiment that includes collection of fine-grained TCP telemetry could confirm this hypothesis.</p><p>A somewhat surprising result of this experiment is just how fast HRSS performs for the majority of connections. Recall that the CECPQ2 cipher performs key exchange operations for both X25519 and HRSS, but the additional overhead of HRSS is barely noticeable. Comparing benchmark results, we can see that HRSS will be faster than X25519 on the server side and slower on the client side.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1fH5vILIYldcwbFurzWlt5/05136ac598d8255bf108138854d88c0e/Screen-Shot-2019-10-29-at-2.24.57-PM.png" />
            
            </figure><p>In our design, the client side performs two operations—key generation and KEM decapsulation. Looking at those two operations we can see that the key generation is a bottleneck here.</p>
            <pre><code>Key generation: 	3553.5 [ops/sec]
KEM decapsulation: 	17186.7 [ops/sec]</code></pre>
            <p>In algorithms with quotient-style keys (like NTRU), the key generation algorithm performs an inversion in the quotient ring—an operation that is quite computationally expensive. Alternatively, a TLS implementation could generate ephemeral keys ahead of time in order to speed up key exchange. There are several other lattice-based key exchange candidates that may be worth experimenting with in the context of TLS key exchange, which are based on different underlying principles than the HRSS construction. These candidates have similar key sizes and faster key generation algorithms, but have their own drawbacks. <b>For now, HRSS looks like the more promising algorithm for use in TLS</b>.</p><p>In the case of SIKE, we <a href="https://github.com/post-quantum-cryptography/c-sike/">implemented</a> the most recent version of the algorithm, and instantiated it with the most performance-efficient parameter set for our experiment. The algorithm is computationally expensive, so we were required to use assembly to optimize it. In order to ensure best performance on Intel, most performance-critical operations have two different implementations; the library detects CPU capabilities and uses <a href="https://www.intel.com/content/dam/www/public/us/en/documents/white-papers/ia-large-integer-arithmetic-paper.pdf">faster instructions</a> if available, but otherwise falls back to a slightly slower generic implementation. We developed our own optimizations for 64-bit ARM CPUs. Nevertheless, our results show that SIKE incurred a significant overhead for every connection, especially on devices with weaker processors. It must be noted that high-performance isogeny-based public key cryptography is arguably much less developed than its lattice-based counterparts. Some ideas to develop this are <a href="https://www.youtube.com/watch?v=UfF3_YtYzPA">floating around</a>, and we expect to see performance improvements in the future.</p>
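The ahead-of-time key generation idea mentioned above can be sketched as a small pool that a background thread keeps topped up, so the handshake path never waits on an expensive keygen. The key generation function here is a placeholder, not a real NTRU keygen:

```python
import os
import queue
import threading

def generate_keypair():
    # Placeholder for an expensive quotient-ring key generation; a real
    # implementation would call the TLS library's KEM keygen here.
    secret = os.urandom(32)
    public = bytes(b ^ 0xFF for b in secret)  # toy derivation, not real crypto
    return public, secret

class KeyPool:
    """Buffer of pre-generated ephemeral key pairs, refilled by a
    background thread, so handshakes take a ready-made key in O(1)."""
    def __init__(self, size=8):
        self._pool = queue.Queue(maxsize=size)
        threading.Thread(target=self._refill, daemon=True).start()

    def _refill(self):
        while True:
            self._pool.put(generate_keypair())  # blocks while the pool is full

    def take(self):
        return self._pool.get()  # waits only if the pool ran dry

pool = KeyPool(size=4)
public, secret = pool.take()
```

Since the keys are ephemeral, the pool only shifts when the cost is paid; under sustained load the keygen rate still has to keep up with the connection rate.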
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1nRjTCxKGIX2eghIjIGTXb/9cfaf88594f50d09738177372fefb4d0/tales-from-the-crypto-team_2x-6.png" />
            
            </figure><p></p> ]]></content:encoded>
            <category><![CDATA[Crypto Week]]></category>
            <category><![CDATA[TLS]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[Research]]></category>
            <category><![CDATA[Post-Quantum]]></category>
            <category><![CDATA[Cryptography]]></category>
            <guid isPermaLink="false">3bVzoY16y2yy4PGR8r0cL5</guid>
            <dc:creator>Kris Kwiatkowski</dc:creator>
            <dc:creator>Luke Valenta</dc:creator>
        </item>
        <item>
            <title><![CDATA[DNS Encryption Explained]]></title>
            <link>https://blog.cloudflare.com/dns-encryption-explained/</link>
            <pubDate>Tue, 29 Oct 2019 13:00:00 GMT</pubDate>
            <description><![CDATA[ The Domain Name System (DNS) is the address book of the Internet. When you visit cloudflare.com or any other site, your browser will ask a DNS resolver for the IP address where the website can be found. Unfortunately, these DNS queries and answers are typically unprotected. ]]></description>
            <content:encoded><![CDATA[ <p></p><p>The <a href="https://www.cloudflare.com/learning/dns/what-is-dns/">Domain Name System (DNS)</a> is the address book of the Internet. When you visit cloudflare.com or any other site, your browser will ask a DNS resolver for the IP address where the website can be found. Unfortunately, these DNS queries and answers are typically unprotected. Encrypting DNS would improve user privacy and security. In this post, we will look at two mechanisms for encrypting DNS, known as <a href="https://www.cloudflare.com/learning/dns/dns-over-tls/">DNS over TLS (DoT) and DNS over HTTPS (DoH)</a>, and explain how they work.</p><p>Applications that want to resolve a <a href="https://www.cloudflare.com/learning/dns/glossary/what-is-a-domain-name/">domain name</a> to an IP address typically use DNS. This is usually not done explicitly by the programmer who wrote the application. Instead, the programmer writes something such as <code>fetch("https://example.com/news")</code> and expects a software library to handle the translation of “example.com” to an IP address.</p><p>Behind the scenes, the software library is responsible for discovering and connecting to the external <a href="https://www.cloudflare.com/learning/dns/what-is-recursive-dns/">recursive DNS resolver</a> and speaking the DNS protocol (see the figure below) in order to resolve the name requested by the application. The choice of the external DNS resolver and whether any privacy and security is provided at all is outside the control of the application. It depends on the software library in use, and the policies provided by the operating system of the device that runs the software.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2YN2i6IydAzTY2KPg0mdVA/09f322be351b4a423b5f1edf3b7a7839/DNS-flow-diagram.png" />
            
            </figure><p>Overview of DNS query and response</p>
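The translation step that the software library hides can be observed directly. A minimal Python sketch follows, using "localhost" so it runs without network access; a real hostname would cause the library to send a DNS query to whichever resolver the operating system is configured with:

```python
import socket

# Ask the system's stub resolver to turn a name into addresses. The
# application sees only the result; it has no say in which external
# resolver is used or whether the query is protected in transit.
infos = socket.getaddrinfo("localhost", 443, proto=socket.IPPROTO_TCP)
addresses = sorted({info[4][0] for info in infos})
```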
    <div>
      <h2>The external DNS resolver</h2>
      <a href="#the-external-dns-resolver">
        
      </a>
    </div>
    <p>The operating system usually learns the resolver address from the local network using <a href="https://www.cloudflare.com/learning/dns/glossary/dynamic-dns/">Dynamic Host Configuration Protocol (DHCP)</a>. In home and mobile networks, it typically ends up using the resolver from the Internet Service Provider (ISP). In corporate networks, the selected resolver is typically controlled by the network administrator. If desired, users with control over their devices can override the resolver with a specific address, such as the address of a public resolver like Google’s 8.8.8.8 or <a href="/dns-resolver-1-1-1-1/">Cloudflare’s 1.1.1.1</a>, but most users will likely not bother changing it when connecting to a public Wi-Fi hotspot at a coffee shop or airport.</p><p>The choice of external resolver has a direct impact on the end-user experience. Most users do not change their resolver settings and will likely end up using the DNS resolver from their network provider. The most obvious observable property is the speed and accuracy of name resolution. Features that improve privacy or security might not be immediately visible, but will help to prevent others from profiling or interfering with your browsing activity. This is especially important on public Wi-Fi networks where anyone in physical proximity can capture and decrypt wireless network traffic.</p>
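On Linux, for example, overriding the resolver can be as simple as editing the stub resolver configuration. This is only a sketch; many systems manage this file automatically via DHCP, NetworkManager, or systemd-resolved, so manual edits may be overwritten:

```
# /etc/resolv.conf -- point the stub resolver at public resolvers
# instead of the DHCP-provided one
nameserver 1.1.1.1
nameserver 1.0.0.1
```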
    <div>
      <h2>Unencrypted DNS</h2>
      <a href="#unencrypted-dns">
        
      </a>
    </div>
    <p>Ever since DNS was created in 1987, it has been largely unencrypted. Everyone between your device and the resolver is able to snoop on or even modify your DNS queries and responses. This includes anyone in your local Wi-Fi network, your Internet Service Provider (ISP), and transit providers. This may affect your privacy by revealing the domain names that you are visiting.</p><p>What can they see? Well, consider this network packet capture taken from a laptop connected to a home network:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1d6w6OMSPn4jcLKyam6ArW/6c30de3175501fa4094e8da3c4965f03/dns-at-home-marked.png" />
            
            </figure><p>The following observations can be made:</p><ul><li><p>The UDP source port is 53 which is the standard port number for unencrypted DNS. The UDP payload is therefore likely to be a DNS answer.</p></li><li><p>That suggests that the source IP address 192.168.2.254 is a DNS resolver while the destination IP 192.168.2.14 is the DNS client.</p></li><li><p>The UDP payload could indeed be parsed as a DNS answer, and reveals that the user was trying to visit twitter.com.</p></li><li><p>If there are any future connections to 104.244.42.129 or 104.244.42.1, then it is most likely traffic that is directed at “twitter.com”.</p></li><li><p>If there is some further encrypted HTTPS traffic to this IP, succeeded by more DNS queries, it could indicate that a web browser loaded additional resources from that page. That could potentially reveal the pages that a user was looking at while visiting twitter.com.</p></li></ul><p>Since the DNS messages are unprotected, other attacks are possible:</p><ul><li><p>Queries could be directed to a resolver that performs <a href="https://www.cloudflare.com/learning/security/global-dns-hijacking-threat/">DNS hijacking</a>. For example, in the UK, Virgin Media and <a href="https://bt.custhelp.com/app/answers/detail/a_id/14244/c/402">BT</a> return a fake response for domains that do not exist, redirecting users to a search page. This redirection is possible because the computer/phone blindly trusts the DNS resolver that was advertised using DHCP by the ISP-provided gateway router.</p></li><li><p>Firewalls can easily intercept, block or modify any unencrypted DNS traffic based on the port number alone. It is worth noting that plaintext inspection is not a silver bullet for achieving visibility goals, because the DNS resolver can be bypassed.</p></li></ul>
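Part of why plain DNS is so easy to snoop on is how simple the wire format is. The sketch below serializes a minimal DNS query for twitter.com; sent as a UDP payload to port 53, it (and the matching answer) is readable by every on-path device:

```python
import struct

def build_dns_query(hostname, qtype=1, query_id=0x1234):
    """Serialize a minimal DNS query (RFC 1035) for an A record (qtype=1).
    This is the UDP payload sent to port 53 when DNS is unencrypted."""
    # Header: ID, flags (0x0100 = recursion desired), 1 question, 0 answers.
    header = struct.pack("!HHHHHH", query_id, 0x0100, 1, 0, 0, 0)
    # Question name: each label is length-prefixed, terminated by a zero byte.
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in hostname.split(".")
    ) + b"\x00"
    question = qname + struct.pack("!HH", qtype, 1)  # QTYPE=A, QCLASS=IN
    return header + question

packet = build_dns_query("twitter.com")
```

The queried name ("twitter.com" here) sits in the packet as plaintext, which is exactly what the capture above shows.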
    <div>
      <h2>Encrypting DNS</h2>
      <a href="#encrypting-dns">
        
      </a>
    </div>
    <p>Encrypting DNS makes it much harder for snoopers to look into your DNS messages, or to corrupt them in transit. Just as the web moved from unencrypted HTTP to encrypted HTTPS, there are now upgrades to the DNS protocol that encrypt DNS itself. Encrypting the web has made it possible for private and secure communications and commerce to flourish. Encrypting DNS will further enhance user privacy.</p><p>Two standardized mechanisms exist to secure the DNS transport between you and the resolver: <a href="https://tools.ietf.org/html/rfc7858">DNS over TLS (2016)</a> and <a href="https://tools.ietf.org/html/rfc8484">DNS Queries over HTTPS (2018)</a>. Both are based on <a href="https://www.cloudflare.com/learning/ssl/transport-layer-security-tls/">Transport Layer Security (TLS)</a>, which is also used to secure communication between you and a website using HTTPS. In TLS, the server (be it a web server or DNS resolver) authenticates itself to the client (your device) using a certificate. This ensures that no other party can impersonate the server (the resolver).</p><p>With DNS over TLS (DoT), the original DNS message is directly embedded into the secure TLS channel. From the outside, one can neither learn the name that was being queried nor modify it. Only the intended client application is able to decrypt the TLS payload. A packet capture looks like this:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/YaRbOJm8SaN6SZvy40b7t/6da8977ecb1e13b2695893583c5291aa/dns-over-tls13-marked.png" />
            
            </figure><p>In the packet trace for unencrypted DNS, it was clear that a DNS request can be sent directly by the client, followed by a DNS answer from the resolver. In the encrypted DoT case however, some <a href="https://www.cloudflare.com/learning/ssl/what-happens-in-a-tls-handshake/">TLS handshake</a> messages are exchanged prior to sending encrypted DNS messages:</p><ul><li><p>The client sends a Client Hello, advertising its supported TLS capabilities.</p></li><li><p>The server responds with a Server Hello, agreeing on TLS parameters that will be used to secure the connection. The Certificate message contains the identity of the server while the Certificate Verify message will contain a digital signature which can be verified by the client using the server Certificate. The client typically checks this certificate against its local list of trusted Certificate Authorities, but the DoT specification mentions <a href="https://tools.ietf.org/html/rfc7858#section-3.2">alternative trust mechanisms</a> such as public key pinning.</p></li><li><p>Once the TLS handshake is Finished by both the client and server, they can finally start exchanging encrypted messages.</p></li><li><p>While the above picture contains one DNS query and answer, in practice the secure TLS connection will remain open and will be reused for future DNS queries.</p></li></ul><p>Securing unencrypted protocols by slapping TLS on top of a new port has been done before:</p><ul><li><p>Web traffic: HTTP (tcp/80) -&gt; HTTPS (tcp/443)</p></li><li><p>Sending email: SMTP (tcp/25) -&gt; SMTPS (tcp/465)</p></li><li><p>Receiving email: IMAP (tcp/143) -&gt; IMAPS (tcp/993)</p></li><li><p>Now: DNS (tcp/53 or udp/53) -&gt; DoT (tcp/853)</p></li></ul><p>A problem with introducing a new port is that existing firewalls may block it. 
This may be because they employ an allowlist approach, where new services have to be explicitly enabled, or a blocklist approach, where a network administrator explicitly blocks a service. If the secure option (DoT) is less likely to be available than the insecure option, then users and applications might be tempted to fall back to unencrypted DNS. This could subsequently allow attackers to force users onto the insecure version.</p><p>Such fallback attacks are not theoretical. <a href="/performing-preventing-ssl-stripping-a-plain-english-primer/">SSL stripping</a> has previously been used to downgrade HTTPS websites to HTTP, allowing attackers to steal passwords or hijack accounts.</p><p>Another approach, DNS Queries over HTTPS (DoH), was <a href="https://tools.ietf.org/html/rfc8484#section-1">designed</a> to support two primary use cases:</p><ul><li><p>Prevent the above problem where on-path devices interfere with DNS. This includes the port blocking problem above.</p></li><li><p>Enable web applications to access DNS through existing browser APIs.</p></li></ul><p>DoH is essentially HTTPS, the same encrypted standard the web uses, and reuses the same port number (tcp/443). Web browsers have already <a href="https://blog.mozilla.org/security/2015/04/30/deprecating-non-secure-http/">deprecated non-secure HTTP</a> in favor of HTTPS. That makes HTTPS a great choice for securely transporting DNS messages. An example of such a DoH request can be found <a href="https://tools.ietf.org/html/rfc8484#section-4.1.1">here</a>.</p>
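A DoH GET request of the kind linked above can be sketched in a few lines: the serialized DNS query is base64url-encoded with padding stripped, per RFC 8484, and passed in the dns parameter. The query bytes below are placeholders rather than a real DNS message:

```python
import base64

def doh_get_url(resolver_url, dns_query):
    """Form an RFC 8484 DoH GET request URL from a raw DNS query."""
    encoded = base64.urlsafe_b64encode(dns_query).rstrip(b"=").decode()
    return f"{resolver_url}?dns={encoded}"

# Placeholder bytes standing in for a serialized DNS query message;
# the resolver URL is Cloudflare's public DoH endpoint.
url = doh_get_url("https://cloudflare-dns.com/dns-query", b"\x00\x01" * 8)
```

To a network observer this request is indistinguishable from any other HTTPS traffic on tcp/443.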
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5nHSbgeKy6xjxzhkGzotY8/564e677dd0b05247a47927fa40a6d8f1/DoH-flow-diagram.png" />
            
            </figure><p>DoH: DNS query and response transported over a secure HTTPS stream</p><p>Some users have been concerned that the use of HTTPS could weaken privacy due to the potential use of cookies for tracking purposes. The DoH protocol designers <a href="https://tools.ietf.org/html/rfc8484#section-8">considered</a> various privacy aspects and explicitly discouraged use of HTTP cookies to prevent tracking, a recommendation that is widely respected. TLS session resumption improves TLS 1.2 handshake performance, but can potentially be used to correlate TLS connections. Luckily, use of <a href="https://www.cloudflare.com/learning/ssl/why-use-tls-1.3/">TLS 1.3</a> obviates the need for TLS session resumption by reducing the number of round trips by default, effectively addressing its associated privacy concern.</p><p>Using HTTPS means that HTTP protocol improvements can also benefit DoH. For example, the in-development <a href="/http3-the-past-present-and-future/">HTTP/3 protocol</a>, built on top of <a href="/the-road-to-quic/">QUIC</a>, could offer additional performance improvements in the presence of packet loss due to lack of head-of-line blocking. This means that multiple DNS queries could be sent simultaneously over the secure channel without blocking each other when one packet is lost.</p><p>A <a href="https://tools.ietf.org/html/draft-huitema-quic-dnsoquic">draft</a> for DNS over QUIC (DNS/QUIC) also exists and is similar to DoT, but without the head-of-line blocking problem due to the use of QUIC. Both <a href="https://www.cloudflare.com/learning/performance/what-is-http3/">HTTP/3</a> and DNS/QUIC, however, require a UDP port to be accessible. In theory, both could fall back to DoH over HTTP/2 and DoT respectively.</p>
    <div>
      <h2>Deployment of DoT and DoH</h2>
      <a href="#deployment-of-dot-and-doh">
        
      </a>
    </div>
    <p>As both DoT and DoH are relatively new, they are not universally deployed yet. On the server side, major public resolvers including Cloudflare’s 1.1.1.1 and <a href="https://www.cloudflare.com/cloudflare-vs-google-dns/">Google DNS</a> support it. Many ISP resolvers however still lack support for it. A small list of public resolvers supporting DoH can be found at <a href="https://github.com/DNSCrypt/dnscrypt-proxy/wiki/DNS-server-sources">DNS server sources</a>, another list of public resolvers supporting DoT and DoH can be found on <a href="https://dnsprivacy.org/wiki/display/DP/DNS+Privacy+Public+Resolvers">DNS Privacy Public Resolvers</a>.</p><p>There are two methods to enable DoT or DoH on end-user devices:</p><ul><li><p>Add support to applications, bypassing the resolver service from the operating system.</p></li><li><p>Add support to the operating system, transparently providing support to applications.</p></li></ul><p>There are generally three configuration modes for DoT or DoH on the client side:</p><ul><li><p>Off: DNS will not be encrypted.</p></li><li><p>Opportunistic mode: try to use a secure transport for DNS, but fallback to unencrypted DNS if the former is unavailable. This mode is vulnerable to downgrade attacks where an attacker can force a device to use unencrypted DNS. It aims to offer privacy when there are no on-path active attackers.</p></li><li><p>Strict mode: try to use DNS over a secure transport. If unavailable, fail hard and show an error to the user.</p></li></ul><p>The current state for system-wide configuration of DNS over a secure transport:</p>
<ul>
<li>Android 9: <a href="https://android-developers.googleblog.com/2018/04/dns-over-tls-support-in-android-p.html">supports</a> DoT through its “Private DNS” feature. Modes:
<ul>
<li>Opportunistic mode (“Automatic”) is used by default. The resolver from network settings (typically DHCP) will be used.</li>
<li>Strict mode can be <a href="/enable-private-dns-with-1-1-1-1-on-android-9-pie/">configured</a> by setting an explicit hostname. No IP address is allowed; the hostname is resolved using the default resolver and is also used to validate the certificate. (<a href="https://github.com/aosp-mirror/platform_frameworks_base/commit/a24d459a5d60c706472f9b620d079cd0a40a7279">Relevant source code</a>)</li>
</ul>
</li>
<li>iOS and Android users can also install the <a href="https://1.1.1.1/">1.1.1.1 app</a> to enable either DoH or DoT support in strict mode. Internally it uses the VPN programming interfaces to enable interception of unencrypted DNS traffic before it is forwarded over a secure channel.</li>
<li>
Linux with systemd-resolved from systemd 239: supports DoT through the <a href="https://www.freedesktop.org/software/systemd/man/resolved.conf.html#DNSOverTLS=">DNSOverTLS</a> option.
<ul>
<li>Off is the default.</li>
<li>Opportunistic mode can be configured, but no certificate validation is performed.</li>
<li>Strict mode is available since systemd 243. Any certificate signed by a trusted certificate authority is accepted. However, <a href="https://github.com/systemd/systemd/blob/v243/src/resolve/resolved-dnstls-gnutls.c#L62-L63">there is no hostname validation</a> with the GnuTLS backend while the OpenSSL backend <a href="https://github.com/systemd/systemd/blob/v243/src/resolve/resolved-dnstls-openssl.c#L86-L87">expects</a> an IP address.</li>
<li>In any case, no Server Name Indication (SNI) is sent. The certificate name is <a href="https://github.com/systemd/systemd/issues/9397">not validated</a>, making an on-path attack rather trivial.</li>
</ul>
</li>
<li>
Linux, macOS, and Windows can <a href="https://developers.cloudflare.com/1.1.1.1/dns-over-https/cloudflared-proxy/">use</a> a DoH client in strict mode. The <code>cloudflared proxy-dns</code> command uses the Cloudflare DNS resolver by default, but users can override it through the <code>proxy-dns-upstream</code> option.
</li>
</ul><p>Web browsers support DoH instead of DoT:</p><ul><li><p>Firefox 62 <a href="https://support.mozilla.org/en-US/kb/firefox-dns-over-https">supports</a> DoH and provides several <a href="https://wiki.mozilla.org/Trusted_Recursive_Resolver">Trusted Recursive Resolver (TRR)</a> settings. By default DoH is disabled, but Mozilla is running an <a href="https://blog.mozilla.org/futurereleases/2019/09/06/whats-next-in-making-dns-over-https-the-default/">experiment</a> to enable DoH for some users in the USA. This experiment currently uses Cloudflare's 1.1.1.1 resolver, since we are the only provider that satisfies the <a href="https://wiki.mozilla.org/Security/DOH-resolver-policy">strict resolver policy</a> required by Mozilla. Since many DNS resolvers still do not support an encrypted DNS transport, Mozilla's approach will ensure that more users are protected using DoH.</p><ul><li><p>When enabled through the experiment, or through the “Enable DNS over HTTPS” option at Network Settings, Firefox will use opportunistic mode (network.trr.mode=2 at about:config).</p></li><li><p>Strict mode can be enabled with network.trr.mode=3, but it requires an explicit resolver IP to be specified (for example, network.trr.bootstrapAddress=1.1.1.1).</p></li><li><p>While Firefox ignores the default resolver from the system, it can be configured with alternative resolvers. 
Additionally, enterprise deployments that use a resolver that does not support DoH have the <a href="https://support.mozilla.org/en-US/kb/configuring-networks-disable-dns-over-https">option</a> to disable DoH.</p></li></ul></li><li><p>Chrome 78 <a href="https://blog.chromium.org/2019/09/experimenting-with-same-provider-dns.html">enables</a> opportunistic DoH if the system resolver address matches one of the <a href="https://www.chromium.org/developers/dns-over-https">hard-coded DoH providers</a> (<a href="https://chromium.googlesource.com/chromium/src.git/+/f93a48e3720931c25a3abc7848b08afed43e3be2%5E%21/">source code change</a>). This experiment is enabled for all platforms except Linux and iOS, and excludes enterprise deployments by default.</p></li><li><p>Opera 65 <a href="https://blogs.opera.com/desktop/2019/09/opera-65-0-3430-0-developer-update/">adds</a> an option to enable DoH through Cloudflare's 1.1.1.1 resolver. This feature is off by default. Once enabled, it appears to use opportunistic mode: if 1.1.1.1:443 (without SNI) is reachable, it will be used. Otherwise it falls back to the default resolver, unencrypted.</p></li></ul><p>The <a href="https://github.com/curl/curl/wiki/DNS-over-HTTPS">DNS over HTTPS</a> page from the curl project has a comprehensive list of DoH providers and additional implementations.</p><p>As an alternative to encrypting the full network path between the device and the external DNS resolver, one can take a middle ground: use unencrypted DNS between devices and the gateway of the local network, but <a href="/dns-over-tls-built-in/">encrypt all DNS traffic</a> between the gateway router and the external DNS resolver. Assuming a secure wired or wireless network, this would protect all devices in the local network against a snooping ISP or other adversaries on the Internet. As public Wi-Fi hotspots are not considered secure, this approach would not be safe on open Wi-Fi networks. 
Even if a hotspot is password-protected with WPA2-PSK, other users will still be able to snoop on and modify unencrypted DNS.</p>
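<p>To make the DoH mechanics concrete, the following Python sketch builds an RFC 8484 GET URL for an A-record lookup. The wire format is plain RFC 1035; the resolver endpoint shown is Cloudflare’s documented https://cloudflare-dns.com/dns-query, and actually sending the request over HTTPS is left out:</p>

```python
# Sketch: building an RFC 8484 DoH GET URL for an A-record query.
import base64
import struct

def build_dns_query(name: str, qtype: int = 1) -> bytes:
    """Encode a DNS query message (RFC 1035 wire format) for `name`."""
    # Header: ID=0 (RFC 8484 recommends 0 for cache friendliness),
    # flags=0x0100 (recursion desired), one question, no other records.
    header = struct.pack("!HHHHHH", 0, 0x0100, 1, 0, 0, 0)
    # Question name: length-prefixed labels, terminated by a zero byte.
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in name.rstrip(".").split(".")
    ) + b"\x00"
    question = qname + struct.pack("!HH", qtype, 1)  # QTYPE, QCLASS=IN
    return header + question

def doh_get_url(name: str, resolver: str = "https://cloudflare-dns.com/dns-query") -> str:
    """Base64url-encode the query (unpadded, per RFC 8484) into a GET URL."""
    msg = build_dns_query(name)
    dns = base64.urlsafe_b64encode(msg).rstrip(b"=").decode("ascii")
    return f"{resolver}?dns={dns}"

print(doh_get_url("example.com"))
```

<p>Issuing this GET request with an Accept: application/dns-message header returns the DNS answer in the same wire format, carried over ordinary HTTPS.</p>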
    <div>
      <h2>Other security considerations</h2>
      <a href="#other-security-considerations">
        
      </a>
    </div>
    <p>The previous sections described secure DNS transports, DoH and DoT. These will only ensure that your client receives the untampered answer from the DNS resolver. They do not, however, protect the client against the resolver returning the wrong answer (through <a href="https://www.cloudflare.com/learning/security/global-dns-hijacking-threat/">DNS hijacking</a> or <a href="https://www.cloudflare.com/learning/dns/dns-cache-poisoning/">DNS cache poisoning</a> attacks). The “true” answer is determined by the owner of a domain or zone as reported by the authoritative name server. <a href="https://www.cloudflare.com/learning/dns/dns-security/">DNSSEC</a> allows clients to verify the integrity of the returned DNS answer and catch any unauthorized tampering along the path between the client and authoritative name server.</p><p>However, deployment of DNSSEC is hindered by middleboxes that <a href="https://labs.ripe.net/Members/willem_toorop/sunrise-dns-over-tls-sunset-dnssec">incorrectly</a> forward DNS messages, and even when the information is available, stub resolvers used by applications might not validate the results. A report from 2016 <a href="https://www.internetsociety.org/resources/doc/2016/state-of-dnssec-deployment-2016/">found</a> that only 26% of users use DNSSEC-validating resolvers.</p><p>DoH and DoT protect the transport between the client and the public resolver. The public resolver may have to reach out to additional authoritative name servers in order to resolve a name. Traditionally, the path between any resolver and the authoritative name server uses unencrypted DNS. To protect these DNS messages as well, we did an experiment with Facebook, using DoT between 1.1.1.1 and Facebook’s authoritative name servers. While setting up a secure channel using TLS increases latency, that cost can be amortized over many queries.</p><p>Transport encryption ensures that resolver results and metadata are protected. 
For example, the <a href="https://tools.ietf.org/html/rfc7871">EDNS Client Subnet (ECS)</a> information included with DNS queries could reveal the original client address that started the DNS query. Hiding that information along the path improves privacy. It will also <a href="https://labs.ripe.net/Members/willem_toorop/sunrise-dns-over-tls-sunset-dnssec">prevent</a> broken middleboxes from breaking DNSSEC due to issues in forwarding DNS.</p>
    <div>
      <h2>Operational issues with DNS encryption</h2>
      <a href="#operational-issues-with-dns-encryption">
        
      </a>
    </div>
    <p>DNS encryption may bring challenges to individuals or organizations that rely on monitoring or modifying DNS traffic. Security appliances that rely on passive monitoring watch all incoming and outgoing network traffic on a machine or on the edge of a network. Based on unencrypted DNS queries, they could, for example, identify machines that are infected with malware. If the DNS query is encrypted, then passive monitoring solutions will not be able to monitor domain names.</p><p>Some parties expect DNS resolvers to apply content filtering for purposes such as:</p><ul><li><p>Blocking domains used for malware distribution.</p></li><li><p>Blocking advertisements.</p></li><li><p>Performing parental control filtering, blocking domains associated with adult content.</p></li><li><p>Blocking access to domains serving illegal content according to local regulations.</p></li><li><p>Offering a <a href="https://en.wikipedia.org/wiki/Split-horizon_DNS">split-horizon DNS</a> to provide different answers depending on the source network.</p></li></ul><p>An advantage of blocking access to domains via the DNS resolver is that it can be done centrally, without reimplementing it in every single application. Unfortunately, it is also quite coarse. Suppose that a website hosts content for multiple users at example.com/videos/for-kids/ and example.com/videos/for-adults/. The DNS resolver will only be able to see “example.com” and can either choose to block it or not. In this case, application-specific controls such as browser extensions would be more effective since they can actually look into the URLs and selectively prevent content from being accessible.</p><p>DNS monitoring is not comprehensive. Malware could skip DNS and hardcode IP addresses, or use <a href="https://blog.netlab.360.com/an-analysis-of-godlua-backdoor-en/">alternative methods</a> to query an IP address. 
However, not all malware is that complicated, so DNS monitoring can still serve as a <a href="https://en.wikipedia.org/wiki/Defense_in_depth_%28computing%29">defence-in-depth</a> tool.</p><p>All of these non-passive monitoring or DNS blocking use cases require support from the DNS resolver. Deployments that rely on opportunistic DoH/DoT upgrades of the current resolver will maintain the same feature set that is usually provided over unencrypted DNS. Unfortunately, this is vulnerable to downgrades, as mentioned before. To solve this, system administrators can point endpoints to a DoH/DoT resolver in strict mode. Ideally this is done through secure device management solutions (<a href="https://en.wikipedia.org/wiki/Mobile_device_management">MDM</a>, <a href="https://en.wikipedia.org/wiki/Group_Policy">group policy</a> on Windows, etc.).</p>
    <div>
      <h2>Conclusion</h2>
      <a href="#conclusion">
        
      </a>
    </div>
    <p>One of the cornerstones of the Internet is mapping names to an address using DNS. DNS has traditionally used insecure, unencrypted transports. This has been abused by ISPs in the past for injecting <a href="https://www.icsi.berkeley.edu/pubs/networking/redirectingdnsforads11.pdf">advertisements</a>, but also causes a privacy leak. Nosey visitors in the coffee shop can use unencrypted DNS to follow your activity. All of these issues can be solved by using DNS over TLS (DoT) or DNS over HTTPS (DoH). These techniques to protect the user are relatively new and are seeing increasing adoption.</p><p>From a technical perspective, DoH is very similar to HTTPS and follows the general industry trend to <a href="https://blog.mozilla.org/security/2015/04/30/deprecating-non-secure-http/">deprecate non-secure options</a>. DoT is a simpler transport mode than DoH as the HTTP layer is removed, but that also makes it easier to be blocked, either deliberately or by accident.</p><p>Secondary to enabling a secure transport is the choice of a DNS resolver. Some vendors will use the locally configured DNS resolver, but try to opportunistically upgrade the unencrypted transport to a more secure transport (either DoT or DoH). Unfortunately, the DNS resolver usually defaults to one provided by the ISP which may not support secure transports.</p><p>Mozilla has adopted a different approach. Rather than relying on local resolvers that may not even support DoH, they allow the user to explicitly select a resolver. Resolvers recommended by Mozilla have to satisfy <a href="https://wiki.mozilla.org/Security/DOH-resolver-policy">high standards</a> to protect user privacy. 
To ensure that parental control features based on DNS remain functional, and to support the split-horizon use case, Mozilla has <a href="https://support.mozilla.org/en-US/kb/configuring-networks-disable-dns-over-https">added</a> a mechanism that allows private resolvers to disable DoH.</p><p>The DoT and DoH transport protocols are ready for us to move to a more secure Internet. As can be seen in previous packet traces, these protocols are similar to existing mechanisms to <a href="https://www.cloudflare.com/application-services/solutions/">secure application traffic</a>. Once this security and privacy hole is closed, there will be <a href="https://arxiv.org/pdf/1906.09682.pdf">many</a> <a href="/esni/">more</a> to tackle.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2rE1TtD6u8D6Wk64LriLUB/53d8e662554bed780841a5a40d63434a/tales-from-the-crypto-team_2x-5.png" />
            
            </figure><p></p> ]]></content:encoded>
            <category><![CDATA[Crypto Week]]></category>
            <category><![CDATA[DNS]]></category>
            <category><![CDATA[DoH]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[Research]]></category>
            <category><![CDATA[Cryptography]]></category>
            <guid isPermaLink="false">42cfohkBmNPzLvTAUhd700</guid>
            <dc:creator>Peter Wu</dc:creator>
        </item>
        <item>
            <title><![CDATA[Supporting the latest version of the Privacy Pass Protocol]]></title>
            <link>https://blog.cloudflare.com/supporting-the-latest-version-of-the-privacy-pass-protocol/</link>
            <pubDate>Mon, 28 Oct 2019 13:00:00 GMT</pubDate>
            <description><![CDATA[ At Cloudflare, we are committed to supporting and developing new privacy-preserving technologies that benefit all Internet users. In November 2017, we announced server-side support for the Privacy Pass protocol, a piece of work developed in collaboration with the academic community. ]]></description>
            <content:encoded><![CDATA[ 
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/445gSyr8pFihUh221BqGV5/0e704d8d2ecd834689eea53c04554be5/Privacy-Pass-_2x-2.png" />
            
            </figure><p>At Cloudflare, we are committed to supporting and developing new privacy-preserving technologies that benefit all Internet users. In November 2017, we announced server-side support for the <a href="http://staging.blog.mrk.cfdata.org/cloudflare-supports-privacy-pass/">Privacy Pass protocol</a>, a piece of work developed in <a href="https://petsymposium.org/2018/files/papers/issue3/popets-2018-0026.pdf">collaboration with the academic community</a>. Privacy Pass, in a nutshell, allows clients to provide proof of trust <a href="https://privacypass.github.io/protocol/">without revealing where and when the trust was provided</a>. The aim of the protocol is then to allow anyone to prove they are trusted by a server, without that server being able to track the user via the trust that was assigned.</p><p>On a technical level, Privacy Pass clients receive attestation tokens from a server, that can then be redeemed in the future. These tokens are provided when a server deems the client to be trusted; for example, after they have logged into a service or if they prove certain characteristics. The redeemed tokens are cryptographically unlinkable to the attestation originally provided by the server, and so they do not reveal anything about the client.</p>
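<p>The unlinkability described above comes from blinding: the server evaluates its secret function on a value the client has hidden behind a random factor, so the issued and redeemed tokens cannot be correlated. The following Python sketch illustrates the idea in a toy multiplicative group; the real protocol runs over the P-256 elliptic curve, and the tiny parameters here are purely illustrative:</p>

```python
# Toy OPRF blinding sketch. Group: the order-q subgroup of Z_p* with
# p = 2q + 1. Illustrative parameters only; the real protocol uses P-256.
import secrets

p, q, g = 2039, 1019, 4  # 4 = 2^2 generates the order-q subgroup

server_key = secrets.randbelow(q - 1) + 1      # server's secret k

def blind(token: int):
    """Client: hide the token behind a random blinding factor r."""
    r = secrets.randbelow(q - 1) + 1
    return pow(token, r, p), r

def evaluate(blinded: int) -> int:
    """Server: apply the secret key without ever seeing the token."""
    return pow(blinded, server_key, p)

def unblind(signed: int, r: int) -> int:
    """Client: strip r, leaving token^k -- the redeemable value."""
    r_inv = pow(r, -1, q)                      # inverse of r mod group order
    return pow(signed, r_inv, p)

token = pow(g, secrets.randbelow(q - 1) + 1, p)   # stand-in for a hashed token
blinded, r = blind(token)
final = unblind(evaluate(blinded), r)
assert final == pow(token, server_key, p)  # same as evaluating k directly
```

<p>Because the server only ever sees token^r for a fresh random r, the blinded value at issuance is statistically independent of the value the client later redeems.</p>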
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7LVQnmDxgw0kv43MipnEO5/ed93518b30730567d1780e22fa46e606/imageLikeEmbed--2-.png" />
            
            </figure>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/62mh0kvdwZSIUQhkLHmqLt/ea96c6a856a3b53c7ceb8a3b52c6dd3d/imageLikeEmbed--1-.png" />
            
            </figure><p>To use Privacy Pass, clients can install an <a href="https://github.com/privacypass/challenge-bypass-extension">open-source</a> browser extension available in <a href="https://chrome.google.com/webstore/detail/privacy-pass/ajhmfdgkijocedmfjonnpjfojldioehi?hl=en">Chrome</a> &amp; <a href="https://addons.mozilla.org/en-US/firefox/addon/privacy-pass/">Firefox</a>. There have been over 150,000 individual downloads of Privacy Pass worldwide; approximately 130,000 in Chrome and more than 20,000 in Firefox. The extension is supported by Cloudflare to make websites more accessible for users. This complements previous work, including the launch of <a href="http://staging.blog.mrk.cfdata.org/cloudflare-onion-service/">Cloudflare onion services</a> to help improve accessibility for users of the Tor Browser.</p><p>The initial release was almost two years ago, and it was followed up with a <a href="https://petsymposium.org/2018/files/papers/issue3/popets-2018-0026.pdf">research publication</a> that was presented at the <a href="https://www.youtube.com/watch?v=9DsUi-UF2pM&amp;list=PLWSQygNuIsPd6YJmGV9kn1mP2A6-IBCoU&amp;index=10">Privacy Enhancing Technologies Symposium 2018</a> (winning a Best Student Paper award). Since then, Cloudflare has been working with the wider community to build on the initial design and improve Privacy Pass. 
We’ll be talking about the work that we have done to develop the existing implementations, alongside the protocol itself.</p><h1>What’s new?</h1><p><b>Support for Privacy Pass v2.0 browser extension:</b></p><ul><li><p>Easier configuration of workflow.</p></li><li><p>Integration with new service provider (hCaptcha).</p></li><li><p>Compliance with hash-to-curve draft.</p></li><li><p>Possible to rotate keys outside of extension release.</p></li><li><p>Available in <a href="https://chrome.google.com/webstore/detail/privacy-pass/ajhmfdgkijocedmfjonnpjfojldioehi?hl=en">Chrome</a> and <a href="https://addons.mozilla.org/en-US/firefox/addon/privacy-pass/">Firefox</a> (works best with up-to-date browser versions).</p></li></ul><p><b>Rolling out a new server backend using Cloudflare Workers platform:</b></p><ul><li><p>Cryptographic operations performed using internal V8 engine.</p></li><li><p>Provides public redemption API for Cloudflare Privacy Pass v2.0 tokens.</p></li><li><p>Available by making POST requests to <a href="https://privacypass.cloudflare.com/api/redeem">https://privacypass.cloudflare.com/api/redeem</a>. See the documentation for <a href="https://privacypass.github.io/api-redeem">example usage</a>.</p></li><li><p>Only compatible with extension v2.0 (check that you have updated!).</p></li></ul><p><b>Standardization:</b></p><ul><li><p>Continued development of oblivious pseudorandom functions (OPRFs) <a href="https://datatracker.ietf.org/doc/draft-irtf-cfrg-voprf/">draft</a> in prime-order groups with CFRG@IRTF.</p></li><li><p><a href="https://github.com/alxdavids/draft-privacy-pass">New draft</a> specifying Privacy Pass protocol.</p></li></ul><h1>Extension v2.0</h1><p>In the time since the release, we’ve been working on a number of new features. Today we’re excited to announce support for version 2.0 of the extension, the first update since the original release. 
The extension continues to be available for <a href="https://chrome.google.com/webstore/detail/privacy-pass/ajhmfdgkijocedmfjonnpjfojldioehi?hl=en">Chrome</a> and <a href="https://addons.mozilla.org/en-US/firefox/addon/privacy-pass/">Firefox</a>. You may need to download v2.0 manually from the store if you have auto-updates disabled in your browser.</p><p>The extension remains under active development and we still regard our support as in the beta phase. This will continue to be the case as the draft specification of the protocol continues to be written in collaboration with the wider community.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6WHnz7X5VULjd2rQPvSCYM/3fb47ca24f01788504fd4768813267fc/pasted-image-0-2.png" />
            
            </figure>
    <div>
      <h3>New Integrations</h3>
      <a href="#new-integrations">
        
      </a>
    </div>
    <p>The client implementation uses the <a href="https://developer.chrome.com/extensions/webRequest">WebRequest API</a> to look for certain types of HTTP requests. When these requests are spotted, they are rewritten to include some cryptographic data required for the Privacy Pass protocol. This allows Privacy Pass providers receiving this data to authorize access for the user.</p><p>For example, a user may receive Privacy Pass tokens for completing some server security checks. These tokens are stored by the browser extension, and any future request that needs similar security clearance can be modified to add a stored token as an extra HTTP header. The server can then check the client token and verify that the client has the correct authorization to proceed.</p>
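<p>Conceptually, the rewriting step is simple bookkeeping: pop a stored token and attach it to the outgoing request as an extra header. A minimal Python sketch of that flow follows; the header name and token encoding here are hypothetical stand-ins, not the extension’s actual wire format:</p>

```python
# Sketch of the token store / header-rewrite bookkeeping. The header
# name and token encoding are hypothetical; the real extension defines
# its own format (see the challenge-bypass-extension source).
import base64
import json

class TokenStore:
    def __init__(self):
        self._tokens = []          # tokens previously issued by the provider

    def deposit(self, tokens):
        self._tokens.extend(tokens)

    def spend(self):
        """Pop one token; each token must only ever be redeemed once."""
        return self._tokens.pop() if self._tokens else None

def rewrite_request(headers: dict, store: TokenStore) -> dict:
    """Attach a stored token to a request that needs security clearance."""
    token = store.spend()
    if token is None:
        return headers                 # nothing to redeem; pass through as-is
    blob = base64.b64encode(json.dumps(token).encode()).decode()
    return {**headers, "x-redemption-token": blob}   # hypothetical header

store = TokenStore()
store.deposit([{"data": "token-1"}, {"data": "token-2"}])
out = rewrite_request({"Host": "example.com"}, store)
```
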
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1EOFEhhNfe23pe6Beqsprx/f11bd08106711abf443dd53afd45f013/imageLikeEmbed--4-.png" />
            
            </figure><p>While Cloudflare supports a particular type of request flow, it would be impossible to expect different service providers to all abide by the same exact interaction characteristics. One of the major changes in the v2.0 extension has been a technical rewrite to instead use a central configuration file. The config is specified in the <a href="https://github.com/privacypass/challenge-bypass-extension/blob/master/src/ext/config.js">source code</a> of the extension and allows easier modification of the browsing characteristics that initiate Privacy Pass actions. This makes adding new, completely different request flows possible by simply cloning and adapting the configuration for new providers.</p><p>To demonstrate that such integrations are now possible with other services beyond Cloudflare, a new version of the extension will soon be rolling out that is supported by the CAPTCHA provider <a href="https://www.hcaptcha.com/">hCaptcha</a>. Users that solve ephemeral challenges provided by hCaptcha will receive privacy-preserving tokens that will be redeemable at other hCaptcha customer sites.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/66fIxiUIFsXF1mShWhpp9U/8e1e7cf73b844e2128b8766a8dde95dc/image-8-1.png" />
            
            </figure><p><i>“hCaptcha is focused on user privacy, and supporting Privacy Pass is a natural extension of our work in this area. We look forward to working with Cloudflare and others to make this a common and widely adopted standard, and are currently exploring other applications. Implementing Privacy Pass into our globally distributed service was relatively straightforward, and we have enjoyed working with the Cloudflare team to improve the open source Chrome browser extension in order to deliver the best experience for our users.”</i></p><p></p><p>— <b>Eli-Shaoul Khedouri</b>, founder of hCaptcha</p><p>This hCaptcha integration with the Privacy Pass browser extension acts as a proof-of-concept in establishing support for new services. Any new providers that would like to integrate with the Privacy Pass browser extension can do so simply by making a PR to the <a href="https://github.com/privacypass/challenge-bypass-extension/">open-source repository</a>.</p>
    <div>
      <h2>Improved cryptographic functionality</h2>
      <a href="#improved-cryptographic-functionality">
        
      </a>
    </div>
    <p>After the release of v1.0 of the extension, there were features that were still unimplemented. These included proper zero-knowledge proof validation for checking that the server was always using the same committed key. In v2.0 this functionality has been completed, verifiably preventing a malicious server from attempting to deanonymize users by using a different key for each request.</p><p>The cryptographic operations required for Privacy Pass are performed using <a href="http://staging.blog.mrk.cfdata.org/a-relatively-easy-to-understand-primer-on-elliptic-curve-cryptography/">elliptic curve cryptography</a> (ECC). The extension currently uses the <a href="https://www.secg.org/SEC2-Ver-1.0.pdf">NIST P-256</a> curve, for which we have included some optimisations. Firstly, we added support for storing elliptic curve points in both compressed and uncompressed data formats. This means that browser storage can be reduced by 50%, and that server responses can be made smaller too.</p>
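<p>The 50% saving follows from the SEC 1 encodings: an uncompressed point is 0x04 followed by x and y (65 bytes on P-256), while the compressed form keeps only x plus one byte recording the parity of y (33 bytes), since y can be recomputed from the curve equation. A Python sketch using the standard P-256 constants:</p>

```python
# SEC 1 point compression on P-256, using the standard NIST constants.
# p % 4 == 3, so a modular square root is a single exponentiation.
p = 2**256 - 2**224 + 2**192 + 2**96 - 1
b = 0x5AC635D8AA3A93E7B3EBBD55769886BC651D06B0CC53B0F63BCE3C3E27D2604B
# Standard base point G.
Gx = 0x6B17D1F2E12C4247F8BCE6E563A440F277037D812DEB33A0F4A13945D898C296
Gy = 0x4FE342E2FE1A7F9B8EE7EB4A7C0F9E162BCE33576B315ECECBB6406837BF51F5

def compress(x: int, y: int) -> bytes:
    # 0x02 if y is even, 0x03 if odd, followed by x: 33 bytes total.
    return bytes([2 + (y & 1)]) + x.to_bytes(32, "big")

def uncompressed(x: int, y: int) -> bytes:
    # 0x04 || x || y: 65 bytes total.
    return b"\x04" + x.to_bytes(32, "big") + y.to_bytes(32, "big")

def decompress(data: bytes):
    sign, x = data[0], int.from_bytes(data[1:], "big")
    y2 = (pow(x, 3, p) - 3 * x + b) % p       # y^2 = x^3 - 3x + b on P-256
    y = pow(y2, (p + 1) // 4, p)              # sqrt, valid since p % 4 == 3
    if y & 1 != sign & 1:                     # pick the root with right parity
        y = p - y
    return x, y

assert len(compress(Gx, Gy)) == 33 and len(uncompressed(Gx, Gy)) == 65
assert decompress(compress(Gx, Gy)) == (Gx, Gy)
```
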
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6k9bRq8TswnyzrNdFxl0km/9db49e7d648d03b225077aecb6ee0fa0/imageLikeEmbed--5-.png" />
            
            </figure><p>Secondly, support has been added for hashing to the P-256 curve using the “Simplified Shallue-van de Woestijne-Ulas” (SSWU) method specified in an ongoing draft (<a href="https://tools.ietf.org/html/draft-irtf-cfrg-hash-to-curve-03">https://tools.ietf.org/html/draft-irtf-cfrg-hash-to-curve-03</a>) for standardizing encodings for hashing to elliptic curves. The implementation is compliant with the specification of the “P256-SHA256-SSWU-” ciphersuite in this draft.</p><p>These changes have a dual advantage: firstly, they ensure that our P-256 hash-to-curve implementation is compliant with the draft specification. Secondly, this ciphersuite removes the necessity for using probabilistic methods, such as <a href="https://tools.ietf.org/html/draft-irtf-cfrg-vrf-05#section-5.4.1.1">hash-and-increment</a>. The hash-and-increment method has a non-negligible chance of failure, and its running time is highly dependent on the hidden client input. While it is not clear how to abuse timing attack vectors currently, using the SSWU method should reduce the potential for attacking the implementation and learning the client input.</p>
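<p>For contrast, hash-and-increment can be sketched directly: hash the input to a candidate x-coordinate and keep incrementing a counter until x lands on the curve. Each attempt succeeds with probability about one half, so the number of iterations, and hence the running time, depends on the input. A Python sketch (illustrative, not the extension’s code):</p>

```python
# Hash-and-increment onto P-256: the probabilistic method that the
# SSWU encoding replaces. Illustrative sketch only.
import hashlib

p = 2**256 - 2**224 + 2**192 + 2**96 - 1
b = 0x5AC635D8AA3A93E7B3EBBD55769886BC651D06B0CC53B0F63BCE3C3E27D2604B

def hash_to_curve(msg: bytes):
    """Return a curve point plus the number of attempts it took."""
    for counter in range(256):
        digest = hashlib.sha256(msg + bytes([counter])).digest()
        x = int.from_bytes(digest, "big") % p
        y2 = (pow(x, 3, p) - 3 * x + b) % p     # y^2 = x^3 - 3x + b
        y = pow(y2, (p + 1) // 4, p)            # candidate sqrt (p % 4 == 3)
        if pow(y, 2, p) == y2:                  # x is on the curve
            return (x, y), counter + 1
    raise ValueError("no point found")          # probability about 2^-256

(point, attempts) = hash_to_curve(b"privacy-pass")
x, y = point
assert (y * y - (pow(x, 3, p) - 3 * x + b)) % p == 0
```

<p>Since the attempt count varies with the (secret) input, a naive implementation leaks timing information; SSWU avoids the loop entirely.</p>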
    <div>
      <h2>Key rotation</h2>
      <a href="#key-rotation">
        
      </a>
    </div>
    <p>As we mentioned above, verifying that the server is always using the same key is an important part of ensuring the client’s privacy. This ensures that the server cannot segregate the user base and reduce client privacy by using different secret keys for each client that it interacts with. The server guarantees that it’s always using the same key by publishing a commitment to its public key somewhere that the client can access.</p><p>Every time the server issues Privacy Pass tokens to the client, it also produces a <a href="https://en.wikipedia.org/wiki/Zero-knowledge_proof">zero-knowledge proof</a> that it has produced these tokens using the correct key.</p>
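<p>The proof in question is a discrete-log equality (Chaum-Pedersen DLEQ) proof: it shows that the exponent linking the published commitment to the generator is the same exponent the server applied to the client’s blinded token, without revealing that exponent. A Python sketch in a toy group; the real proofs run over the P-256 curve:</p>

```python
# Chaum-Pedersen DLEQ sketch: prove log_g(h) == log_u(v) without
# revealing the secret key k. Toy subgroup of prime order q = 1019
# inside Z_p* with p = 2q + 1; illustrative parameters only.
import hashlib
import secrets

p, q, g = 2039, 1019, 4

def challenge(*elements: int) -> int:
    # Fiat-Shamir: hash all public group elements into a challenge.
    data = b"".join(e.to_bytes(4, "big") for e in elements)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def prove(k: int, u: int):
    """Server: prove v = u^k uses the same k as the commitment h = g^k."""
    h, v = pow(g, k, p), pow(u, k, p)
    t = secrets.randbelow(q - 1) + 1
    A, B = pow(g, t, p), pow(u, t, p)          # commitments to nonce t
    c = challenge(g, h, u, v, A, B)
    s = (t + c * k) % q                        # response hides k behind t
    return h, v, (c, s)

def verify(h: int, u: int, v: int, proof) -> bool:
    c, s = proof
    A = (pow(g, s, p) * pow(h, -c % q, p)) % p   # recomputes g^t if honest
    B = (pow(u, s, p) * pow(v, -c % q, p)) % p   # recomputes u^t if honest
    return c == challenge(g, h, u, v, A, B)

k = secrets.randbelow(q - 1) + 1               # server key
u = pow(g, secrets.randbelow(q - 1) + 1, p)    # a blinded client token
h, v, proof = prove(k, u)
assert verify(h, u, v, proof)
```

<p>A client that knows the published commitment h can run verify() on each issuance; if the server had used a different key for this client, the check would fail.</p>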
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4SUH19vNZ1MZ3G7hvTNN40/6717cc2c64b8fc3a69efb014d76411c8/imageLikeEmbed--6-.png" />
            
            </figure><p>Before the extension stores any tokens, it first verifies the proof against the commitments it knows. Previously, these commitments were stored directly in the source code of the extension. This meant that if the server wanted to rotate its key, then it required releasing a new version of the extension, which was unnecessarily difficult. The extension has been modified so that the commitments are stored in a <a href="https://github.com/privacypass/ec-commitments">trusted location</a> that the client can access when it needs to verify the server response. Currently this location is a separate Privacy Pass <a href="https://github.com/privacypass/ec-commitments">GitHub repository</a>. For those that are interested, we have provided a more detailed description of the new commitment format in Appendix A at the end of this post.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1ak9ZJ0QWKpWnQBAa4oOpe/59a1907df42318f2e63dbd889c80f839/imageLikeEmbed--7-.png" />
            
            </figure><h1>Implementing server-side support in Workers</h1><p>So far we have focused on client-side updates. As part of supporting v2.0 of the extension, we are rolling out some major changes to the server-side support that Cloudflare uses. For version 1.0, we used a <a href="https://github.com/privacypass/challenge-bypass-server">Go implementation</a> of the server. In v2.0 we are introducing a new server implementation that runs in the <a href="https://www.cloudflare.com/products/cloudflare-workers/">Cloudflare Workers</a> platform. This server implementation is only compatible with v2.0 releases of Privacy Pass, so you may need to update your extension if you have auto-updates turned off in your browser.</p><p>Our server will run at <a href="https://privacypass.cloudflare.com">https://privacypass.cloudflare.com</a>, and all Privacy Pass requests sent to the Cloudflare edge are handled by Worker scripts that run on this domain. Our implementation has been rewritten using Javascript, with cryptographic operations running in the <a href="https://v8.dev/">V8 engine</a> that powers Cloudflare Workers. This means that we are able to run highly efficient and constant-time cryptographic operations. On top of this, we benefit from the enhanced performance provided by running our code in the Workers Platform, as close to the user as possible.</p>
    <div>
      <h2>WebCrypto support</h2>
      <a href="#webcrypto-support">
        
      </a>
    </div>
    <p>Firstly, you may be asking, how do we manage to implement cryptographic operations in Cloudflare Workers? Currently, support for performing cryptographic operations is provided in the Workers platform via the <a href="https://developers.cloudflare.com/workers/reference/apis/web-crypto/">WebCrypto API</a>. This API allows users to compute functionality such as cryptographic hashing, alongside more complicated operations like ECDSA signatures.</p><p>In the Privacy Pass protocol, as we’ll discuss a bit later, the main cryptographic operations are performed by a protocol known as a verifiable oblivious pseudorandom function (VOPRF). Such a protocol allows a client to learn function outputs computed by a server, without revealing to the server what their actual input was. The verifiable aspect means that the server must also prove (in a publicly verifiable way) that the evaluation they pass to the user is correct. Such a function is pseudorandom because the server output is indistinguishable from a random sequence of bytes.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2KJRvlrFMHsy9QlSofzWVs/b208b4b31a1169d2a3b60ffb396049a8/imageLikeEmbed--8-.png" />
            
            </figure><p>The VOPRF functionality requires a server to perform low-level ECC operations that are not currently exposed in the WebCrypto API. We weighed the possible ways of getting around this requirement. First, we trialled using the WebCrypto API in a non-standard manner, using EC Diffie-Hellman key exchange as a method for performing the scalar multiplication that we needed. We also tried to implement all operations using pure JavaScript. Unfortunately, both methods were unsatisfactory: they would either mean integrating with large external cryptographic libraries, or be far too slow to be used in a performant Internet setting.</p><p>In the end, we settled on a solution that adds functions necessary for Privacy Pass to the internal WebCrypto interface in the Cloudflare V8 JavaScript engine. This algorithm mimics the sign/verify interface provided by signature algorithms like ECDSA. In short, we use the <code>sign()</code> function to issue Privacy Pass tokens to the client, while <code>verify()</code> can be used by the server to verify data that is redeemed by the client. These functions are implemented directly in the V8 layer and so they are much more performant and secure (running in constant-time, for example) than pure JS alternatives.</p><p>The Privacy Pass WebCrypto interface is not currently available for public usage. If it turns out there is enough interest in using this additional algorithm in the Workers platform, then we will consider making it public.</p>
    <div>
      <h3>Applications</h3>
      <a href="#applications">
        
      </a>
    </div>
    <p>In recent times, VOPRFs have been shown to be a highly useful primitive in establishing many cryptographic tools. Aside from Privacy Pass, they are also essential for constructing password-authenticated key exchange protocols such as <a href="https://datatracker.ietf.org/doc/draft-krawczyk-cfrg-opaque/">OPAQUE</a>. They have also been used in designs of <a href="https://eprint.iacr.org/2016/799">private set intersection</a>, <a href="https://eprint.iacr.org/2014/650">password-protected secret-sharing</a> protocols, and <a href="https://medium.com/least-authority/the-path-from-s4-to-privatestorage-ae9d4a10b2ae">privacy-preserving access-control</a> for private data storage.</p>
    <div>
      <h2>Public redemption API</h2>
      <a href="#public-redemption-api">
        
      </a>
    </div>
    <p>Writing the server in Cloudflare Workers means that we will be providing server-side support for Privacy Pass on a <a href="https://privacypass.cloudflare.com">public domain</a>! While we only issue tokens to clients after we are sure that we can trust them, anyone will be able to redeem the tokens using our public redemption API at <a href="https://privacypass.cloudflare.com/api/redeem">https://privacypass.cloudflare.com/api/redeem</a>. As we roll out the server-side component worldwide, you will be able to interact with this API and verify Cloudflare Privacy Pass tokens <a href="https://privacypass.github.io/api-redeem">independently of the browser extension</a>.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7zI9w6HIR8cXOr884kCw8m/601fe8c196b434ac50fdb12eaca63927/imageLikeEmbed--9-.png" />
            
            </figure><p>This means that any service can accept Cloudflare-issued Privacy Pass tokens from a client, and then verify them with the Cloudflare redemption API. Using the result provided by the API, external services can check whether Cloudflare has authorized the user in the past.</p><p>We think that this will benefit other service providers because they can use the attestation of authorization from Cloudflare in their own decision-making processes, without sacrificing the privacy of the client at any stage. We hope that this ecosystem can grow further, with potentially more services providing public redemption APIs of their own. With a more diverse set of issuers, these attestations will become more useful.</p><p>By running our server on a public domain, we are effectively a customer of the Cloudflare Workers product. This means that we are also able to make use of <a href="https://developers.cloudflare.com/workers/reference/storage/">Workers KV</a> for protecting against malicious clients. In particular, servers must check that clients are not re-using tokens during the redemption phase. The read performance of Workers KV makes it an obvious choice for providing double-spend protection globally.</p><p>If you would like to use the public redemption API, we provide documentation for using it at <a href="https://privacypass.github.io/api-redeem">https://privacypass.github.io/api-redeem</a>. We also provide some example requests and responses in Appendix B at the end of the post.</p><h1>Standardization &amp; new applications</h1><p>In tandem with the recent engineering work that we have been doing on supporting Privacy Pass, we have been collaborating with the wider community in an attempt to standardize both the <a href="https://datatracker.ietf.org/doc/draft-irtf-cfrg-voprf/">underlying VOPRF functionality</a> and the <a href="https://github.com/alxdavids/draft-privacy-pass">protocol itself</a>.
While the process of standardization for oblivious pseudorandom functions (OPRFs) has been running for over a year, the efforts to standardize the Privacy Pass protocol itself have been driven by applications that have emerged only in the last few months.</p><p>Standardizing protocols and functionality is an important way of providing interoperable, secure, and performant interfaces for running protocols on the Internet. This makes it easier for developers to write their own implementations of this complex functionality. The process also provides helpful peer reviews from experts in the community, which can lead to better surfacing of potential security risks that should be mitigated in any implementation. Other benefits include coming to a consensus on the most reliable, scalable and performant protocol designs for all possible applications.</p>
    <div>
      <h2>Oblivious pseudorandom functions</h2>
      <a href="#oblivious-pseudorandom-functions">
        
      </a>
    </div>
    <p>Oblivious pseudorandom functions (OPRFs) are a generalization of VOPRFs that do not require the server to prove that they have evaluated the functionality properly. Since July 2019, we have been collaborating <a href="https://datatracker.ietf.org/doc/draft-irtf-cfrg-voprf/">on a draft</a> with the <a href="https://irtf.org/cfrg">Crypto Forum Research Group</a> (CFRG) at the Internet Research Task Force (IRTF) to standardize an OPRF protocol that operates in prime-order groups. This is a generalization of the setting that is provided by <a href="/tag/elliptic-curves/">elliptic curves</a>. This is the same VOPRF construction that was <a href="/privacy-pass-the-math/">originally specified</a> by the Privacy Pass protocol and is based heavily on the original protocol design from the <a href="https://eprint.iacr.org/2014/650.pdf">paper of Jarecki, Kiayias and Krawczyk</a>.</p><p>One of the recent changes that we've made in the draft is to increase the size of the key that we consider for performing OPRF operations on the server side. Existing research suggests that it is possible to create specific queries that can lead to small amounts of the key being leaked. For keys that provide only 128 bits of security this can be a problem, as leaking too many bits would reduce security <a href="https://www.keylength.com/en/4/">below currently accepted levels</a>. To counter this, we have effectively increased the minimum key size to 192 bits. This prevents the leakage from becoming an attack vector via any practical method. We discuss these attacks in more detail later on, when we cover our future plans for VOPRF development.</p>
    <div>
      <h2>Recent applications and standardizing the protocol</h2>
      <a href="#recent-applications-and-standardizing-the-protocol">
        
      </a>
    </div>
    <p>The application that we demonstrated when originally supporting Privacy Pass was always intended as a proof-of-concept for the protocol. Over the past few months, a number of new possibilities have arisen in areas that go far beyond what was previously envisaged.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/SceyvCGOJ7quMiDSyPxR5/9785c42a79cf313d05ef5eb5a113d2f2/imageLikeEmbed--10-.png" />
            
            </figure><p>For example, the <a href="https://github.com/WICG/trust-token-api">trust token API</a>, developed by the <a href="https://wicg.io/">Web Incubator Community Group</a>, has been proposed as an interface for using Privacy Pass. This application allows third-party vendors to check that a user has received a trust attestation from a set of central issuers. This allows the vendor to make decisions about the honesty of a client without having to associate a behavior profile with the identity of the user. The objective is to protect against fraudulent activity from users who are not trusted by the central issuer set. Checking trust attestations with central issuers would be possible using similar redemption APIs to the one that <a href="https://privacypass.cloudflare.com">we have introduced</a>.</p><p>A <a href="https://engineering.fb.com/security/partially-blind-signatures/">separate piece of work from Facebook</a> details a similar application for preventing fraudulent behavior that may also be compatible with the Privacy Pass protocol. Finally, other applications have arisen in the areas of providing access to <a href="https://medium.com/least-authority/the-path-from-s4-to-privatestorage-ae9d4a10b2ae">private storage</a> and <a href="https://github.com/brave/brave-browser/wiki/Security-and-privacy-model-for-ad-confirmations">establishing security and privacy models in advertisement confirmations</a>.</p>
    <div>
      <h3>A new draft</h3>
      <a href="#a-new-draft">
        
      </a>
    </div>
    <p>With the applications above in mind, we have recently started collaborative work on a <a href="https://github.com/alxdavids/draft-privacy-pass">new IETF draft</a> that specifically lays out the required functionality provided by the Privacy Pass protocol as a whole. Our aim is to develop, alongside wider industrial partners and the academic community, a functioning specification of the Privacy Pass protocol. We hope that by doing this we will be able to design a base-layer protocol that can then be used as a cryptographic primitive in wider applications that require some form of lightweight authorization. Our plan is to present the first version of this draft at the upcoming <a href="https://www.ietf.org/how/meetings/106/">IETF 106 meeting</a> in Singapore next month.</p><p>The draft is still in the early stages of development and we are actively looking for people who are interested in helping to shape the protocol specification. We would be grateful for any help that contributes to this process. See <a href="https://github.com/alxdavids/draft-privacy-pass">the GitHub repository</a> for the current version of the document.</p><h1>Future avenues</h1><p>Finally, while we are actively working on a number of different pathways in the present, the future directions for the project are still open. We believe that there are many applications out there that we have not considered yet and we are excited to see where the protocol is used in the future. Here are some other ideas we have for novel applications and security properties that we think might be worth pursuing in the future.</p>
    <div>
      <h2>Publicly verifiable tokens</h2>
      <a href="#publicly-verifiable-tokens">
        
      </a>
    </div>
    <p>One of the disadvantages of using a VOPRF is that redemption tokens are only verifiable by the original issuing server. If we used an underlying primitive that allowed public verification of redemption tokens, then anyone could verify that the issuing server had issued the particular token. Such a protocol could be constructed on top of so-called blind signature schemes, such as <a href="https://en.wikipedia.org/wiki/Blind_signature#Blind_RSA_signatures">Blind RSA</a>. Unfortunately, there are performance and security concerns arising from the usage of blind signature schemes in a browser environment. Existing schemes (especially RSA-based variants) require cryptographic computations that are much heavier than the construction used in our VOPRF protocol.</p>
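<p>As a quick illustration of the public-verifiability property, here is a toy “textbook” blind RSA flow in Python. The parameters are far too small for real use, real schemes pad and hash the message first, and all values below are illustrative assumptions rather than anything from an actual deployment.</p>

```python
# Toy "textbook" blind RSA. Parameters are illustrative only.
p_, q_ = 61, 53
n = p_ * q_          # modulus 3233
e, d = 17, 2753      # public/private exponents, e*d ≡ 1 (mod lcm(60, 52))

def blind(m: int, r: int) -> int:
    """User: mask the message with a random factor r coprime to n."""
    return (m * pow(r, e, n)) % n

def sign_blinded(m_blinded: int) -> int:
    """Signer: signs the masked value without ever learning m."""
    return pow(m_blinded, d, n)

def unblind(s_blinded: int, r: int) -> int:
    """User: remove r, yielding an ordinary RSA signature on m."""
    return (s_blinded * pow(r, -1, n)) % n

m, r = 42, 19
sig = unblind(sign_blinded(blind(m, r)), r)

# Public verifiability: anyone holding only (n, e) can check the
# signature, whereas a VOPRF output can only be checked by the issuer.
assert pow(sig, e, n) == m
```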
    <div>
      <h2>Post-quantum VOPRF alternatives</h2>
      <a href="#post-quantum-voprf-alternatives">
        
      </a>
    </div>
    <p>All known constructions of VOPRFs exist in pre-quantum settings, usually based on the hardness of well-known problems in group settings such as the <a href="https://en.wikipedia.org/wiki/Decisional_Diffie%E2%80%93Hellman_assumption">discrete-log assumption</a>. No constructions of VOPRFs are known to provide security against adversaries that can run <a href="/the-quantum-menace/">quantum computational algorithms</a>. This means that the Privacy Pass protocol is only believed to be secure against adversaries running on classical hardware.</p><p>Recent developments suggest that quantum computing may arrive <a href="https://www.nature.com/articles/s41586-019-1666-5">sooner than previously thought</a>. As such, we believe that investigating the possibility of <a href="/introducing-circl/">constructing practical post-quantum alternatives</a> for our current cryptographic toolkit is a task of great importance for ourselves and the wider community. In this case, devising performant post-quantum alternatives for VOPRF constructions would be an important theoretical advancement. Eventually this would lead to a Privacy Pass protocol that still provides privacy-preserving authorization in a post-quantum world.</p>
    <div>
      <h2>VOPRF security and larger ciphersuites</h2>
      <a href="#voprf-security-and-larger-ciphersuites">
        
      </a>
    </div>
    <p>We mentioned previously that VOPRFs (or simply OPRFs) are susceptible to leaking small amounts of the key. Here we will give a brief description of the actual attacks themselves, along with further details on our plans for implementing higher-security ciphersuites to mitigate the leakage.</p><p>Specifically, malicious clients can interact with a VOPRF to create something known as a <a href="https://eprint.iacr.org/2010/215.pdf">q-Strong-Diffie-Hellman</a> (q-sDH) sample. Such samples are created in mathematical groups (usually in the elliptic curve setting). For any group there is a public element <code>g</code> that is central to all <a href="https://en.wikipedia.org/wiki/Diffie%E2%80%93Hellman_key_exchange">Diffie-Hellman</a> type operations, along with the server key <code>K</code>, which is usually just interpreted as a randomly generated number from this group. A q-sDH sample takes the form:</p>
            <pre><code>( g, g^K, g^(K^2), … , g^(K^q) )</code></pre>
            <p>and asks the malicious adversary to create a pair of elements satisfying <code>(g^(1/(s+K)),s)</code>. It is possible for a client in the VOPRF protocol to create a q-sDH sample by just submitting the result of the previous VOPRF evaluation back to the server.</p><p>While this problem is believed to be hard, there are a number of past works that show that the problem is somewhat easier than the size of the group suggests (for example, see <a href="https://eprint.iacr.org/2004/306">here</a> and <a href="https://www.iacr.org/archive/eurocrypt2006/40040001/40040001.pdf">here</a>). Concretely speaking, the bit security implied by the group can be reduced by up to log<sub>2</sub>(q) bits. While this is not immediately fatal, even to groups that should provide 128 bits of security, it can lead to a loss of security that leaves the setting no longer future-proof. As a result, any group providing VOPRF functionality that is instantiated using an elliptic curve such as P-256 or Curve25519 provides weaker security guarantees than advised.</p><p>With this in mind, we recently decided to upgrade the ciphersuites that we recommend for OPRF usage to only those that provide &gt; 128 bits of security, as standard. For example, Curve448 provides 192 bits of security. Launching an attack that reduces security below 128 bits would require making 2^(68) client OPRF queries. This is a significant barrier to entry for any attacker, and so we regard these ciphersuites as safe for instantiating the OPRF functionality.</p><p>In the near future, it will be necessary to upgrade the ciphersuites that are used in our support of the Privacy Pass browser extension to the recommendations made in the current VOPRF draft. In general, with a more iterative release process, we hope that the Privacy Pass implementation will be able to follow the current draft standard more closely as it evolves during the standardization process.</p>
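<p>The effect of the up-to-log<sub>2</sub>(q) loss on ciphersuite choice can be seen with a few lines of arithmetic. Treating the full loss as always applying is a worst-case assumption of this sketch; actual attacks may cost more, as the query counts quoted above suggest.</p>

```python
import math

def residual_security(group_bits: int, queries: int) -> float:
    """Worst-case remaining bit security after `queries` OPRF
    evaluations, assuming the full up-to-log2(q) q-sDH-style loss."""
    return group_bits - math.log2(queries)

# A 128-bit group (e.g. P-256) loses ground immediately, while a
# 192-bit group keeps a 128-bit floor for up to ~2^64 queries.
assert residual_security(128, 2**20) == 108
assert residual_security(192, 2**64) == 128
```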
    <div>
      <h2>Get in touch!</h2>
      <a href="#get-in-touch">
        
      </a>
    </div>
    <p>You can now install v2.0 of the Privacy Pass extension in <a href="https://chrome.google.com/webstore/detail/privacy-pass/ajhmfdgkijocedmfjonnpjfojldioehi?hl=en">Chrome</a> or <a href="https://addons.mozilla.org/en-US/firefox/addon/privacy-pass/">Firefox</a>.</p><p>If you would like to help contribute to the development of this extension, then you can do so on <a href="https://github.com/privacypass/challenge-bypass-extension">GitHub</a>. Are you a service provider that would like to integrate server-side support for the extension? Then we would be very interested in hearing from you!</p><p>We will continue to work with the wider community in developing the standardization of the protocol, taking our motivation from the available applications that have been developed. We are always looking for new applications that can help to expand the Privacy Pass ecosystem beyond its current boundaries.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1eVrPb2t3hl5pVI97EtEu8/86eb63191b0cd299390f24162d51c54a/tales-from-the-crypto-team_2x--1-.png" />
            
            </figure><h1>Appendix</h1><p>Here are some extra details related to the topics that we covered above.</p>
    <div>
      <h2>A. Commitment format for key rotations</h2>
      <a href="#a-commitment-format-for-key-rotations">
        
      </a>
    </div>
    <p>Key commitments are necessary for the server to prove that they’re acting honestly during the Privacy Pass protocol. The commitments that Privacy Pass uses for the v2.0 release have a slightly different format from the previous release.</p>
            <pre><code>"2.00": {
  "H": "BPivZ+bqrAZzBHZtROY72/E4UGVKAanNoHL1Oteg25oTPRUkrYeVcYGfkOr425NzWOTLRfmB8cgnlUfAeN2Ikmg=",
  "expiry": "2020-01-11T10:29:10.658286752Z",
  "sig": "MEUCIQDu9xeF1q89bQuIMtGm0g8KS2srOPv+4hHjMWNVzJ92kAIgYrDKNkg3GRs9Jq5bkE/4mM7/QZInAVvwmIyg6lQZGE0="
}</code></pre>
            <p>First, the version of the server key is <code>2.00</code>. The server must inform the client which version it is using in the issuance response that contains the signed tokens. This is so that the client can always use the correct commitments when verifying the zero-knowledge proof that the server sends.</p><p>The value of the member <code>H</code> is the public key commitment to the secret key used by the server. This is a base64-encoded elliptic curve point of the form <code>H=kG</code>, where <code>G</code> is the fixed generator of the curve and <code>k</code> is the secret key of the server. Since the discrete-log problem is believed to be hard to solve, deriving <code>k</code> from <code>H</code> is believed to be difficult. The value of the member <code>expiry</code> is an expiry date for the commitment that is used. The value of the member <code>sig</code> is an ECDSA signature computed over the values of <code>H</code> and <code>expiry</code> using a long-term signing key associated with the server.</p><p>When a client retrieves the commitment, it checks that it hasn’t expired and that the signature verifies using the corresponding verification key that is embedded into the configuration of the extension. If these checks pass, it retrieves <code>H</code> and verifies the issuance response sent by the server. Previous versions of these commitments did not include signatures, but these signatures will be validated from v2.0 onwards.</p><p>When a server wants to rotate the key, it simply generates a new key <code>k2</code> and appends a new commitment to <code>k2</code> with a new identifier such as <code>2.01</code>. It can then use <code>k2</code> as the secret for the VOPRF operations that it needs to compute.</p>
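<p>The client-side checks described above can be sketched as follows. This is a simplification: the signature check is stubbed out rather than performing real ECDSA verification, the expiry is modelled as a Unix timestamp instead of the RFC 3339 string used on the wire, and the commitment values are truncated placeholders.</p>

```python
import time

def verify_sig(verification_key, message: bytes, sig: str) -> bool:
    # Stand-in for real ECDSA verification against the verification key
    # embedded in the extension; it always succeeds in this sketch.
    return True

def check_commitment(commitments: dict, version: str, verification_key):
    """Return the commitment H for `version` only if it is unexpired
    and its signature over H and expiry verifies."""
    c = commitments.get(version)
    if c is None or c["expiry"] <= time.time():
        return None
    msg = (c["H"] + str(c["expiry"])).encode()
    if not verify_sig(verification_key, msg, c["sig"]):
        return None
    return c["H"]

now = time.time()
commitments = {
    "2.00": {"H": "BPivZ+...", "expiry": now + 3600, "sig": "MEUCIQ..."},
    "1.02": {"H": "BItExp...", "expiry": now - 3600, "sig": "MEQCIB..."},
}
assert check_commitment(commitments, "2.00", "vk") == "BPivZ+..."
assert check_commitment(commitments, "1.02", "vk") is None   # expired
assert check_commitment(commitments, "9.99", "vk") is None   # unknown version
```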
    <div>
      <h2>B. Example Redemption API request</h2>
      <a href="#b-example-redemption-api-request">
        
      </a>
    </div>
    <p>The redemption API is available over HTTPS by sending POST requests to <a href="https://privacypass.cloudflare.com/api/redeem">https://privacypass.cloudflare.com/api/redeem</a>. Requests to this endpoint must specify Privacy Pass data using JSON-RPC 2.0 syntax in the body of the request. Let’s look at an example request:</p>
            <pre><code>{
  "jsonrpc": "2.0",
  "method": "redeem",
  "params": {
    "data": [
      "lB2ZEtHOK/2auhOySKoxqiHWXYaFlAIbuoHQnlFz57A=",
      "EoSetsN0eVt6ztbLcqp4Gt634aV73SDPzezpku6ky5w=",
      "eyJjdXJ2ZSI6InAyNTYiLCJoYXNoIjoic2hhMjU2IiwibWV0aG9kIjoic3d1In0="
    ],
    "bindings": [
      "string1",
      "string2"
    ],
    "compressed":"false"
  },
  "id": 1
}</code></pre>
            <p>In the above: <code>params.data[0]</code> is the client input data used to generate a token in the issuance phase; <code>params.data[1]</code> is the HMAC tag that the server uses to verify a redemption; and <code>params.data[2]</code> is a stringified, base64-encoded JSON object that specifies the hash-to-curve parameters used by the client. For example, the last element in the array corresponds to the object:</p>
            <pre><code>{
    curve: "p256",
    hash: "sha256",
    method: "swu",
}</code></pre>
            <p>This specifies that the client used the curve P-256, with the hash function SHA-256, and the SSWU method for hashing to the curve. This allows the server to verify the transaction with the correct ciphersuite. The client must bind the redemption request to some fixed information, which it stores as multiple strings in the array <code>params.bindings</code>. For example, it could send the Host header of the HTTP request, and the HTTP path that was used (this is what is used in the Privacy Pass browser extension). Finally, <code>params.compressed</code> is an optional boolean value (defaulting to false) that indicates whether the HMAC tag was computed over compressed or uncompressed point encodings.</p><p>Currently, the only supported ciphersuites are the example above and the same suite with <code>method</code> equal to <code>increment</code>, denoting the hash-and-increment method of hashing to a curve. This is the original method used in v1.0 of Privacy Pass, and is supported for backwards-compatibility only. See the <a href="https://privacypass.github.io/api-redeem">provided documentation</a> for more details.</p>
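<p>Putting the binding check together with the double-spend protection discussed earlier, a server-side redemption handler might look roughly like the sketch below. The key derivation, message layout, and in-memory dict (standing in for Workers KV) are all illustrative assumptions, not the actual Privacy Pass wire format.</p>

```python
import hashlib, hmac

spent = {}   # stands in for the global Workers KV double-spend index

def redemption_tag(shared_key: bytes, bindings) -> bytes:
    """HMAC-SHA256 over the binding strings (e.g. Host header and path)."""
    msg = b"".join(s.encode() for s in bindings)
    return hmac.new(shared_key, msg, hashlib.sha256).digest()

def redeem(token_data: bytes, tag: bytes, bindings) -> str:
    # The server re-derives the per-token shared key; in the real
    # protocol this comes from the VOPRF secret, modelled here by a hash.
    shared_key = hashlib.sha256(b"server-secret" + token_data).digest()
    if not hmac.compare_digest(tag, redemption_tag(shared_key, bindings)):
        return "error: invalid tag"
    digest = hashlib.sha256(token_data).hexdigest()
    if digest in spent:
        return "error: token already spent"
    spent[digest] = True
    return "success"

# Client side: in real Privacy Pass the client derives the same shared
# key from the unblinded VOPRF output, without knowing the server secret.
key = hashlib.sha256(b"server-secret" + b"token-1").digest()
tag = redemption_tag(key, ["example.com", "/login"])

first = redeem(b"token-1", tag, ["example.com", "/login"])
second = redeem(b"token-1", tag, ["example.com", "/login"])
rebound = redeem(b"token-1", tag, ["evil.com", "/login"])
```

<p>Here the tag check fails whenever the bindings change, so a token redeemed for one request cannot be replayed against another, and the spent-token index rejects straight re-submission.</p>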
    <div>
      <h3>Example response</h3>
      <a href="#example-response">
        
      </a>
    </div>
    <p>If a request is sent to the redemption API and it is successfully verified, then the following response will be returned.</p>
            <pre><code>{
  "jsonrpc": "2.0",
  "result": "success",
  "id": 1
}</code></pre>
            <p>When an error occurs, something similar to the following will be returned.</p>
            <pre><code>{
  "jsonrpc": "2.0",
  "error": {
    "message": &lt;error-message&gt;,
    "code": &lt;error-code&gt;,
  },
  "id": 1
}</code></pre>
            <p>The error codes that we provide are specified as JSON-RPC 2.0 codes; we document the types of errors in the <a href="https://privacypass.github.io/api-redeem">API documentation</a>.</p> ]]></content:encoded>
            <category><![CDATA[Crypto Week]]></category>
            <category><![CDATA[Privacy Pass]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[Research]]></category>
            <category><![CDATA[Cryptography]]></category>
            <guid isPermaLink="false">3Zpj2uDmshq6ssT51N9alr</guid>
            <dc:creator>Alex Davidson</dc:creator>
        </item>
        <item>
            <title><![CDATA[Tales from the Crypt(o team)]]></title>
            <link>https://blog.cloudflare.com/tales-from-the-crypt-o-team/</link>
            <pubDate>Sun, 27 Oct 2019 23:00:00 GMT</pubDate>
            <description><![CDATA[ Halloween season is upon us. This week we’re sharing a series of blog posts about work being done at Cloudflare involving cryptography, one of the spookiest technologies around. ]]></description>
            <content:encoded><![CDATA[ 

<p>Halloween season is upon us. This week we’re sharing a series of blog posts about work being done at Cloudflare involving cryptography, one of the spookiest technologies around. So subscribe to this blog and come back every day for tricks, treats, and deep technical content.</p>
    <div>
      <h2>A long-term mission</h2>
      <a href="#a-long-term-mission">
        
      </a>
    </div>
    <p>Cryptography is one of the most powerful technological tools we have, and Cloudflare has been at the forefront of using cryptography to help build a better Internet. Of course, we haven’t been alone on this journey. Making meaningful changes to the way the Internet works requires time, effort, experimentation, momentum, and willing partners. Cloudflare has been involved with several multi-year efforts to leverage cryptography to help make the Internet better.</p><p>Here are some highlights to expect this week:</p><ul><li><p>We’re renewing Cloudflare’s commitment to privacy-enhancing technologies by sharing some of the recent work being done on <a href="/cloudflare-supports-privacy-pass/">Privacy Pass</a>: <a href="/supporting-the-latest-version-of-the-privacy-pass-protocol/">Supporting the latest version of the Privacy Pass Protocol</a></p></li><li><p>We’re helping forge a path to a quantum-safe Internet by sharing some of the results of the <a href="/towards-post-quantum-cryptography-in-tls/">Post-quantum Cryptography</a> experiment: <a href="/the-tls-post-quantum-experiment/">The TLS Post-Quantum Experiment</a></p></li><li><p>We’re sharing the rust-based software we use to power <a href="/secure-time/">time.cloudflare.com</a>: <a href="/announcing-cfnts/">Announcing cfnts: Cloudflare's implementation of NTS in Rust</a></p></li><li><p>We’re doing a deep dive into the technical details of <a href="https://developers.cloudflare.com/1.1.1.1/dns-over-https/">Encrypted DNS</a>: <a href="/dns-encryption-explained/">DNS Encryption Explained</a></p></li><li><p>We’re announcing support for a new technique we developed with industry partners to help keep TLS private keys more secure: <a href="/keyless-delegation/">Delegated Credentials for TLS</a>, and how we're keeping keys safe from memory disclosure attacks: <a href="/going-keyless-everywhere/">Going Keyless Everywhere</a></p></li></ul><p>The milestones we’re sharing this week would not be possible without 
partnerships with companies, universities, and individuals working in good faith to help build a better Internet together. Hopefully, this week provides a fun peek into the future of the Internet.</p> ]]></content:encoded>
            <category><![CDATA[Crypto Week]]></category>
            <category><![CDATA[Cryptography]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[Product News]]></category>
            <guid isPermaLink="false">5Bgs7HuC6VeCOjcHUOVF0o</guid>
            <dc:creator>Nick Sullivan</dc:creator>
        </item>
        <item>
            <title><![CDATA[Introducing time.cloudflare.com]]></title>
            <link>https://blog.cloudflare.com/secure-time/</link>
            <pubDate>Fri, 21 Jun 2019 13:00:00 GMT</pubDate>
            <description><![CDATA[ Cloudflare has always been a leader in deploying secure versions of insecure Internet protocols and making them available for free for anyone to use. In 2014, we launched one of the world’s first free, secure HTTPS services (Universal SSL) to go along with our existing free HTTP plan.  ]]></description>
            <content:encoded><![CDATA[ 
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2XxIRAe21RUGtSKyasiJ8G/85495f1052ed287174c46f6687a5451f/time-service_3x-1.png" />
            
            </figure><p>Cloudflare has always been a leader in deploying secure versions of insecure Internet protocols and making them available for free for anyone to use. In 2014, we launched one of the world’s first free, secure HTTPS services (<a href="/introducing-universal-ssl/">Universal SSL</a>) to go along with our existing free HTTP plan. When we launched the <a href="/announcing-1111/">1.1.1.1 DNS resolver</a>, we also supported the new secure versions of DNS (<a href="https://developers.cloudflare.com/1.1.1.1/dns-over-https/">DNS over HTTPS</a> and <a href="https://developers.cloudflare.com/1.1.1.1/dns-over-tls/">DNS over TLS</a>). Today, as part of <a href="/welcome-to-crypto-week-2019/">Crypto Week 2019</a>, we are doing the same thing for the Network Time Protocol (NTP), the dominant protocol for obtaining time over the Internet.</p><p>This announcement is personal for me. I've spent the last four years identifying and fixing vulnerabilities in time protocols. Today I’m proud to help introduce a service that would have made my life from 2015 through 2019 a whole lot harder: <a href="https://cloudflare.com/time">time.cloudflare.com</a>, a free time service that supports both NTP and the emerging Network Time Security (NTS) protocol for securing NTP. Now, anyone can get time securely from all our datacenters in 180 cities around the world.</p><p>You can use time.cloudflare.com as the source of time for all your devices today with <a href="https://cloudflare.com/time">NTP</a>, while NTS clients are still under development. <a href="https://blog.ntpsec.org/2019/01/02/starting-nts.html">NTPsec</a> includes experimental support for NTS. If you’d like to get updates about NTS client development, email us asking to join at time-services@cloudflare.com. To use NTS to secure time synchronization, reach out to your vendors and inquire about NTS support.</p>
    <div>
      <h3>A small tale of “time” first</h3>
      <a href="#a-small-tale-of-time-first">
        
      </a>
    </div>
    <p>Back in 2015, as a fresh graduate student interested in Internet security, I came across this mostly esoteric Internet protocol called the <a href="https://tools.ietf.org/html/rfc5905">Network Time Protocol</a> (NTP). NTP was designed to synchronize time between computer systems communicating over unreliable and variable-latency network paths. I was actually studying Internet routing security, in particular attacks against the Resource Public Key Infrastructure (<a href="/rpki/">RPKI</a>), and kept hitting a dead end because of a cache-flushing issue. As a last-ditch effort I decided to roll back the time on my computer manually, and the attack worked.</p><p>I had discovered the importance of time to computer security. Most cryptography uses timestamps to limit certificate and signature validity periods. When connecting to a website, knowledge of the correct time ensures that the certificate you see is current and is not compromised by an attacker. When looking at logs, time synchronization makes sure that events on different machines can be correlated accurately. Certificates and logging infrastructure can break with minutes, hours or months of time difference. Other applications like caching and Bitcoin are sensitive to even very small differences in time on the order of seconds.</p><p>Two-factor authentication using rolling codes also relies on accurate clocks. All of this creates the need for computer clocks to have access to reasonably accurate time that is delivered securely. NTP is the most commonly used protocol for time synchronization on the Internet. If an attacker can leverage vulnerabilities in NTP to manipulate time on computer clocks, they can undermine the security guarantees provided by these systems.</p><p>Motivated by the severity of the issue, I decided to look deeper into NTP and its security. Since the need for synchronizing time across networks was visible early on, NTP is a very old protocol. 
The first standardized version of NTP dates back to 1985, while the latest NTP version 4 was completed in 2010 (see <a href="https://tools.ietf.org/html/rfc5905">RFC5905</a>).</p><p>In its most common mode, NTP works by having a client send a query packet out to an NTP server that then responds with its clock time. The client then computes an estimate of the difference between its clock and the remote clock, attempting to compensate for network delay in doing so. An NTP client queries multiple servers and implements algorithms to select the best estimate, rejecting clearly wrong answers.</p>
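<p>The offset and delay estimates described above come from four timestamps collected during the exchange. A minimal sketch of the standard calculation:</p>

```python
def ntp_estimate(t0: float, t1: float, t2: float, t3: float):
    """Standard NTP estimate from four timestamps: t0 client send,
    t1 server receive, t2 server send, t3 client receive (t0 and t3
    are read from the client clock; t1 and t2 from the server clock)."""
    offset = ((t1 - t0) + (t2 - t3)) / 2  # clock difference, assuming symmetric paths
    delay = (t3 - t0) - (t2 - t1)         # round trip minus server processing time
    return offset, delay

# Client clock 5 s behind the server, 100 ms each way, 10 ms processing:
offset, delay = ntp_estimate(0.0, 5.1, 5.11, 0.21)
assert round(offset, 3) == 5.0
assert round(delay, 3) == 0.2
```

<p>Running this exchange against several servers and filtering out high-delay or outlier samples is what lets a client converge on a trustworthy estimate.</p>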
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4xjsZtf4GN97mYhOpT69oY/9c7f6886c164edae191ec749ce652e15/VGltZSBoYW5kc2hha2UgLnBuZw--.png" />
            
</figure><p>Surprisingly enough, research on NTP and its security was not very active at the time. Before this, in late 2013 and early 2014, high-profile Distributed Denial of Service (DDoS) attacks were carried out by amplifying traffic from NTP servers; attackers able to spoof a victim’s IP address could funnel copious amounts of traffic to overwhelm the targeted domains. This caught the attention of some researchers. However, these attacks did not exploit flaws in the fundamental protocol design; the attackers simply used NTP as a boring bandwidth multiplier. Cloudflare wrote extensively about these attacks and you can read about them <a href="/understanding-and-mitigating-ntp-based-ddos-attacks/">here</a>, <a href="/technical-details-behind-a-400gbps-ntp-amplification-ddos-attack/">here</a>, and <a href="/good-news-vulnerable-ntp-servers-closing-down/">here</a>.</p><p>I found several flaws in the core NTP protocol design and its implementation that can be exploited by network attackers to launch much more devastating attacks by shifting time or denying service to NTP clients. What was even more concerning is that these attackers <i>do not need to be a Monster-In-The-Middle (MITM)</i>, where an attacker can modify traffic between the client and the server, to mount these attacks. A set of recent <a href="http://www.cs.bu.edu/~goldbe/papers/NTPattack.pdf">papers</a> authored by one of us showed that an off-path attacker present anywhere on the network can shift time or deny service to NTP clients. One of the ways this is done is by abusing IP fragmentation.</p><p>Fragmentation is a feature of the IP layer where a large packet is chopped into several smaller fragments so that they can pass through networks that do not support large packets. 
In principle, any network element on the path between the client and the server can send a special “<a href="https://en.wikipedia.org/wiki/Path_MTU_Discovery">ICMP fragmentation needed</a>” packet to the server telling it to fragment its packets to, say, X bytes. Since the server cannot be expected to know the IP addresses of all the network elements on its path, this packet can be sent from any source IP.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/39sOnNckJNlTnfltzsnQE/d4502e981fcdbed63f2e86db6290e2c4/time-fragmentation-attack-_2x.png" />
            
            </figure><p>Fragmentation attack against NTP</p><p>In our attack, the attacker exploits this feature to make the NTP server fragment its NTP response packet for the victim NTP client. The attacker then spoofs carefully crafted overlapping response fragments from off-path that contain the attacker’s timestamp values. By further exploiting the reassembly policies for overlapping fragments the attacker fools the client into assembling a packet with legitimate fragments and the attacker’s insertions. This evades the authenticity checks that rely on values in the original parts of the packet.</p>
    <div>
      <h3>NTP’s past and future</h3>
      <a href="#ntps-past-and-future">
        
      </a>
    </div>
<p>At the time of NTP’s creation back in 1985, there were two main design goals for the service provided by NTP. First, they wanted it to be robust enough to handle networking errors and equipment failures. So it was designed as a service where a client can gather timing samples from multiple peers over multiple communication paths and then average them to get a more accurate measurement.</p><p>The second goal was load distribution. While every client would like to talk to time servers that are directly attached to high-precision time-keeping devices like atomic clocks or GPS receivers, and thus have more accurate time, those devices can serve only so many clients. So, to reduce protocol load on the network, the service was designed in a hierarchical manner. At the top of the hierarchy are servers connected to non-NTP time sources, which distribute time to other servers, which in turn distribute time to even more servers. Most computers connect to either these second- or third-level servers.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1zpb9Mrl4Fq8eLFHu3s2Uh/3be6e29efad172ee0cb46b5e17e7f49b/time-stratum-_2x.png" />
            
</figure><p>The stratum hierarchy of NTP</p><p>The original specification (<a href="https://tools.ietf.org/html/rfc958">RFC 958</a>) also states the "non-goals" of the protocol, namely peer authentication and data integrity. Security wasn’t considered critical in the relatively small and trusting early Internet, and the protocols and applications that rely on time for security didn’t exist then. Securing NTP came second to improving the protocol and implementation.</p><p>As the Internet has grown, more and more core Internet protocols have been secured through cryptography to protect against abuse: TLS, <a href="https://www.cloudflare.com/learning/dns/dnssec/ecdsa-and-dnssec/">DNSSEC</a>, and RPKI are all steps toward ensuring the security of all communications on the Internet. These protocols use “time” to provide security guarantees. Since the security of the Internet hinges on the security of NTP, it becomes even more important to secure NTP.</p><p>This research clearly showed the need for securing NTP. As a result, more work began at the Internet Engineering Task Force (IETF), the standards body for Internet protocols, toward cryptographically authenticating NTP. At the time, even though NTPv4 supported both symmetric and asymmetric cryptographic authentication, these were rarely used in practice due to limitations of both approaches.</p><p>NTPv4’s symmetric approach to securing synchronization doesn’t scale, as the symmetric key must be pre-shared and configured manually: if every client on Earth needed a separate secret key for each server it wanted to get time from, the organizations that run those servers would have to do a great deal of work managing keys. This makes the solution quite cumbersome for public servers that must accept queries from arbitrary clients. 
For context, NIST operates important public time servers and distributes symmetric keys only to users that register, once per year, via US mail or facsimile; the US Naval Observatory does something similar.</p><p>The first attempt to solve the problem of key distribution was the Autokey protocol, described in <a href="https://tools.ietf.org/html/rfc5906">RFC 5906</a>. Unfortunately, the protocol is badly <a href="https://www.semanticscholar.org/paper/Analysis-of-the-NTP-Autokey-Procedures-R%C3%B6ttger/a1781712cec129d5c7311a915e4d0076117ee33f">broken</a>: any network attacker can trivially retrieve the secret key shared between the client and server, and its authentication mechanisms are non-standard and quite idiosyncratic. Many public NTP servers also do not support Autokey (e.g., the NIST and USNO time servers, and many servers in pool.ntp.org).</p><p>The future of the Internet is a secure Internet, which means an authenticated and encrypted Internet. But until now NTP has remained mostly insecure, despite continuing protocol development, even as more and more services have come to depend on it.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1I67AnK8FPq2Cg7n2HXbke/e6dea09ce609b8b82bbddcb93008bf10/timeline-_2x.png" />
            
            </figure><p>Timeline of NTP development</p>
    <div>
      <h3>Fixing the problem</h3>
      <a href="#fixing-the-problem">
        
      </a>
    </div>
<p>Following the release of our paper, there was a lot more enthusiasm for improving the state of NTP security, both in the NTP community at the Internet Engineering Task Force (IETF), the standards body for Internet protocols, and outside it. As a short-term fix, the ntpd reference implementation software was patched for several vulnerabilities that we found. And for a long-term solution, the community realized the dire need for a secure, authenticated time synchronization protocol based on public-key cryptography, which enables encryption and authentication without requiring key material to be shared beforehand. Today we have a <a href="https://tools.ietf.org/html/draft-ietf-ntp-using-nts-for-ntp-19">Network Time Security</a> (NTS) draft at the IETF, thanks to the work of dozens of dedicated individuals at the NTP working group.</p><p>In a nutshell, the NTS protocol is divided into two phases. The first phase is the NTS key exchange, which establishes the necessary key material between the NTP client and the server. This phase uses the Transport Layer Security (TLS) handshake and relies on the same public key infrastructure as the web. Once the keys are exchanged, the TLS channel is closed and the protocol enters the second phase. In this phase the results of that TLS handshake are used to authenticate NTP time synchronization packets via extension fields. The interested reader can find more information in the Internet <a href="https://datatracker.ietf.org/doc/draft-ietf-ntp-using-nts-for-ntp/">draft</a>.</p>
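<p>Conceptually, the two phases can be sketched like this. This is emphatically not the draft's wire format: the names are ours, Python's built-in HMAC stands in for the AEAD that NTS actually specifies, and the real first phase is a TLS 1.3 handshake that exports keying material and cookies rather than a local function call.</p>

```python
import hashlib
import hmac
import os
import struct
import time

# Phase 1 (sketch): in NTS this is a TLS 1.3 handshake (NTS key exchange);
# here we only model its outcome -- key material shared by client and server.
def nts_key_exchange():
    return os.urandom(32)

# Phase 2 (sketch): NTP packets carry extension fields authenticated with
# the exported key, so a spoofed or modified packet fails verification.
TAG_LEN = 32

def make_packet(key, t_transmit):
    body = struct.pack("!B", 0x23)            # LI/VN/mode byte: NTPv4 client
    body += struct.pack("!d", t_transmit)     # transmit timestamp
    tag = hmac.new(key, body, hashlib.sha256).digest()
    return body + tag

def verify_packet(key, packet):
    body, tag = packet[:-TAG_LEN], packet[-TAG_LEN:]
    expected = hmac.new(key, body, hashlib.sha256).digest()
    return hmac.compare_digest(tag, expected)

key = nts_key_exchange()
packet = make_packet(key, time.time())
ok = verify_packet(key, packet)                     # True
tampered = packet[:-1] + bytes([packet[-1] ^ 1])
bad = verify_packet(key, tampered)                  # False: tampering detected
```

<p>The point of the split is that the expensive public-key operations happen once, in phase one, while each subsequent time query only needs cheap symmetric-key authentication.</p>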
    <div>
      <h3>Cloudflare’s new service</h3>
      <a href="#cloudflares-new-service">
        
      </a>
    </div>
<p>Today, Cloudflare announces its free time service to anyone on the Internet. We intend to address the limitations of existing public time services, in particular by increasing availability, robustness, and security.</p><p>We use our global network to provide an advantage in latency and accuracy. Our 180 locations around the world all use anycast to automatically route your packets to our closest server. All of our servers are synchronized with stratum 1 time service providers, and then offer NTP to the general public, similar to how other public NTP providers function. The biggest source of inaccuracy for time synchronization protocols is network asymmetry: a difference between the travel time from the client to the server and from the server back to the client. However, our servers’ proximity to users means there will be less jitter (a measure of the variance in latency on the network) and less potential asymmetry in packet paths. We also hope that in regions with a dearth of NTP servers our service significantly improves the capacity and quality of the NTP ecosystem.</p><p>Cloudflare servers obtain authenticated time by using a shared symmetric key with our stratum 1 upstream servers. These upstream servers are geographically spread out and ensure that our servers have accurate time in our datacenters. But this approach to securing time doesn’t scale. We had to exchange emails individually with the organizations that run stratum 1 servers, as well as negotiate permission to use them. While this is a solution for us, it isn’t a solution for everyone on the Internet.</p><p>As a secure time service provider, Cloudflare is proud to announce that we are among the first to offer a free and secure public time service based on Network Time Security. We have implemented the latest <a href="https://datatracker.ietf.org/doc/draft-ietf-ntp-using-nts-for-ntp/">NTS IETF draft</a>. 
As this draft progresses through the Internet standards process, we are committed to keeping our service current.</p><p>The maintainers of most NTP implementations are currently working on NTS support, and we expect the next few months to see broader adoption, as well as the advancement of the current draft protocol to an RFC. Currently we have interoperability with NTPsec, which has implemented <a href="https://tools.ietf.org/html/draft-ietf-ntp-using-nts-for-ntp-18">draft 18</a> of NTS. We hope that our service will spur faster adoption of this important improvement to Internet security. Because this is a new service with no backwards-compatibility requirements, we require the use of TLS v1.3 with it to promote adoption of the most secure version of TLS.</p>
    <div>
      <h3>Use it</h3>
      <a href="#use-it">
        
      </a>
    </div>
    <p>If you have an NTS client, point it at time.cloudflare.com:1234. Otherwise point your NTP client at time.cloudflare.com. More details on configuration are available in the <a href="https://developers.cloudflare.com/time-services/">developer docs</a>.</p>
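<p>For example, with an NTS-capable client such as NTPsec, configuration comes down to a single line. The snippet below is a sketch; the exact syntax may vary between clients and versions, so consult your client's documentation and the developer docs above:</p>

```
# /etc/ntp.conf (NTPsec sketch): authenticated time via Network Time Security
server time.cloudflare.com:1234 nts

# Plain, unauthenticated NTP fallback for clients without NTS support:
# server time.cloudflare.com
```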
    <div>
      <h3>Conclusion</h3>
      <a href="#conclusion">
        
      </a>
    </div>
<p>From our <a href="https://developers.cloudflare.com/roughtime/docs/">Roughtime</a> service to <a href="/introducing-universal-ssl/">Universal SSL</a>, Cloudflare has played a role in expanding the availability and use of secure protocols. Now with our free public time service we provide a trustworthy, widely available alternative to another insecure legacy protocol. It’s all part of our mission to help build a faster, more reliable, and more secure Internet for everyone.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4HXFb98c49ePa7hFEf6eTO/9c1447f75f5547c56a1afa49043269bc/crypto-week-2019-header-circle_2x-3.png" />
            
            </figure><p><b>Thanks to the many other engineers who worked on this project, including </b><a href="https://github.com/wbl/"><b>Watson Ladd</b></a><b>, </b><a href="/author/gabbi/"><b>Gabbi Fisher</b></a><b>, and </b><a href="/author/dina/"><b>Dina Kozlov</b></a></p><hr /><p><i>This post is by </i><a href="https://scholar.google.com/citations?user=WVff3wsAAAAJ"><i>Aanchal Malhotra</i></a><i>, a Graduate Research Assistant at Boston University and former Cloudflare intern on the Cryptography team.</i></p> ]]></content:encoded>
            <category><![CDATA[Crypto Week]]></category>
            <category><![CDATA[Cryptography]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[Research]]></category>
            <guid isPermaLink="false">3OjdPo2rSxdmfOcgEzev6c</guid>
            <dc:creator>Aanchal Malhotra</dc:creator>
        </item>
        <item>
            <title><![CDATA[The Quantum Menace]]></title>
            <link>https://blog.cloudflare.com/the-quantum-menace/</link>
            <pubDate>Thu, 20 Jun 2019 13:02:00 GMT</pubDate>
<description><![CDATA[ An introduction to quantum computing, its impact on cryptography, and how Cloudflare conducts research and development towards a Post-Quantum era. ]]></description>
<content:encoded><![CDATA[ <p></p><p>Over the last few decades, the word ‘quantum’ has become increasingly popular. It is common to find articles, reports, and many people interested in quantum mechanics and the new capabilities and improvements it brings to the scientific community. This topic not only concerns physics, since the development of quantum mechanics impacts several other fields such as chemistry, economics, <a href="https://www.cloudflare.com/learning/ai/what-is-artificial-intelligence/">artificial intelligence</a>, operations research, and undoubtedly, cryptography.</p><p>This post begins a trio of blogs describing the impact of quantum computing on cryptography, and how to use stronger algorithms resistant to the power of quantum computing.</p><ul><li><p>This post introduces quantum computing and describes the main aspects of this new computing model and its devastating impact on security standards; it summarizes some approaches to securing information using quantum-resistant algorithms.</p></li><li><p>Due to the relevance of this matter, we present <a href="/towards-post-quantum-cryptography-in-tls">our experiments</a> on a large-scale deployment of quantum-resistant algorithms.</p></li><li><p>Our third <a href="/introducing-circl">post introduces CIRCL</a>, an open-source Go library featuring optimized implementations of quantum-resistant algorithms and elliptic-curve-based primitives.</p></li></ul><p>All of this is part of Cloudflare’s <a href="/welcome-to-crypto-week-2019/">Crypto Week 2019</a>, so fasten your seat belt and get ready to make a quantum leap.</p>
    <div>
      <h3>What is Quantum Computing?</h3>
      <a href="#what-is-quantum-computing">
        
      </a>
    </div>
    <p>Back in 1981, <a href="https://link.springer.com/article/10.1007/BF02650179">Richard Feynman</a> raised the question about what kind of computers can be used to simulate physics. Although some physical systems can be simulated in a classical computer, the amount of resources used by such a computer can grow exponentially. Then, he conjectured the existence of a computer model that behaves under quantum mechanics rules, which opened a field of research now called <i>quantum computing</i>. To understand the basics of quantum computing, it is necessary to recall how classical computers work, and from that shine a spotlight on the differences between these computational models.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6wIvGjwtPsLAbIreCKjZ8X/117e122adf81acd22ad6724fbec528a4/pasted-image-0--2--1.png" />
            
</figure><p>Fellows of the <a href="http://blogs.royalsociety.org/history-of-science/2012/07/24/art-of-finding-inspiration/">Royal Society</a>: John Maynard Smith, Richard Feynman &amp; Alan Turing</p><p>In 1936, Alan Turing and Emil Post independently described models that gave rise to the foundation of the computing model known as the Post-Turing machine, which describes how computers work and made it possible to determine limits on which problems can be solved.</p><p>In this model, the units of information are <i>bits</i>, which store one of two possible values, usually denoted by 0 and 1. A computing machine contains a set of bits and performs operations that modify the values of the bits, also known as the machine’s state. Thus, a machine with <i>N</i> bits can be in one of 2ᴺ possible states. With this in mind, the Post-Turing computing model can be abstractly described as a state machine, in which running a program translates into transitions along the set of states.</p><p>A <a href="https://royalsocietypublishing.org/doi/abs/10.1098/rspa.1985.0070">paper</a> David Deutsch published in 1985 describes a computing model that extends the capabilities of a Turing machine based on the theory of quantum mechanics. This computing model introduces several advantages over the Turing model for processing large volumes of information. It also presents unique properties that deviate from the way we understand classical computing. Most of these properties come from the nature of quantum mechanics. We’re going to dive into these details before approaching the concept of quantum computing.</p>
    <div>
      <h3>Superposition</h3>
      <a href="#superposition">
        
      </a>
    </div>
<p>One of the most exciting properties of quantum computing that provides an advantage over the classical computing model is <i>superposition</i>. In physics, superposition is the ability to produce valid states from the addition or superposition of several other states that are part of a system.</p><p>Applied to computing information, this means that there is a system in which it is possible to generate a machine state that represents a (weighted) sum of the states 0 and 1; in this case, the term <i>weighted</i> means that the state can keep track of “the quantity of” 0 and 1 present in the state. In the classical computation model, one bit can only store either the state of 0 or 1, not both; even using two bits, they cannot represent the weighted sum of these states. Hence, to make a distinction from the basic states, quantum computing uses the concept of a <i>quantum bit</i> (<i>qubit</i>): a unit of information that denotes the superposition of two states. This is a cornerstone concept of quantum computing, as it provides a way of tracking more than a single state per unit of information, making it a powerful tool for processing information.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1I6axLkTlVksOUICoYw8qj/627dd13efb4c253315cabd05493757a9/switch-vs-dimmer_3x-1.png" />
            
            </figure><p>Classical computing – A bit stores only one of two possible states: ON or OFF.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2AUfinBsQ8DGXSaPeeOR7b/3334eb6183daca9b89b3184043210d2c/switch-vs-dimmer_3x-copy.png" />
            
</figure><p>Quantum computing – A qubit stores a combination of two or more states.</p><p>So, a qubit represents the sum of two parts: the 0 and 1 states plus the amount each state contributes to produce the state of the qubit.</p><p>In mathematical notation, a qubit \( | \Psi \rangle \) is written as an explicit sum indicating that it represents the superposition of the states 0 and 1. In the Dirac notation used to describe the value of a qubit, \( | \Psi \rangle =  A | 0 \rangle +B | 1 \rangle \), where A and B are complex numbers known as the <i>amplitudes</i> of the states 0 and 1, respectively. The basic states are themselves represented by the qubits \( | 0 \rangle =  1 | 0 \rangle + 0 | 1 \rangle \)  and \( | 1 \rangle =  0 | 0 \rangle + 1 | 1 \rangle \), respectively, where the left side of each equation is the abbreviated notation for these special states.</p>
    <div>
      <h3>Measurement</h3>
      <a href="#measurement">
        
      </a>
    </div>
<p>In a classical computer, the values 0 and 1 are implemented as digital signals. Measuring the current of the signal automatically reveals the status of a bit. This means that at any moment the value of the bit can be observed or <i>measured</i>.</p><p>The state of a qubit is maintained in a physically closed system, meaning that properties of the system, such as superposition, are preserved only while the system does not interact with the environment; any interaction, like performing a measurement, can disturb the state of a qubit.</p><p>Measuring a qubit is a probabilistic experiment. The result is a bit of information that depends on the state of the qubit. The bit, obtained by measuring \( | \Psi \rangle =  A | 0 \rangle +B | 1 \rangle \), will be equal to 0 with probability \( |A|^2 \), and equal to 1 with probability \( |B|^2 \), where \( |x| \) represents the <a href="https://en.wikipedia.org/wiki/Absolute_value#Complex_numbers">absolute value</a> of \(x\).</p><p>From statistics, we know that the probabilities of all possible events always sum to 1, so it must hold that \( |A|^2 +|B|^2 =1 \). This last equation motivates representing qubits as the points of a circle of radius one, and more generally, as the points on the surface of a sphere of radius one, which is known as the <a href="https://en.wikipedia.org/wiki/Bloch_sphere">Bloch Sphere</a>.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6NmMTlriFkeL9Jb1td5SfO/53735ec341c68f97057099281563e1ee/unit_circl-1.png" />
            
            </figure><p>The qubit state is analogous to a point on a unitary circle.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7qMFBDnHzzZwOl9AFmnReZ/61dc703ac2c850e36363b1972a1a5100/Bloch_sphere.png" />
            
</figure><p>The Bloch Sphere by <a href="https://commons.wikimedia.org/w/index.php?curid=5829358">Smite-Meister</a> - Own work, CC BY-SA 3.0.</p><p>Let’s break it down: if you measure a qubit, you also destroy its superposition. The state collapses into one of the basic states, providing your final result.</p><p>Another way to think about superposition and measurement is through the coin tossing experiment.</p><p>Toss a coin in the air and you give people a random choice between two options: heads or tails. Now, don't focus on the randomness of the experiment; instead, note that while the coin is rotating in the air, participants are uncertain which side will face up when the coin lands. Conversely, once the coin stops with a random side facing up, participants are 100% certain of the status.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1JLa4qxwkVC68ygOK96p4j/e6530dd234ddc2dd80a0578d1701b57d/coin.png" />
            
</figure><p>How does it relate? Qubits are similar to the participants. When a qubit is in a superposition of states, it is tracking the probability of heads or tails, mirroring the participants’ uncertainty while the coin is in the air. However, once you measure the qubit to retrieve its value, the superposition vanishes, and a classical bit value sticks: heads or tails. Measurement is that moment when the coin is static with only one side facing up.</p><p>A fair coin is a coin that is not biased. Each side (assume 0=heads and 1=tails) of a fair coin has the same probability of sticking after a measurement is performed. The qubit \( \tfrac{1}{\sqrt{2}}|0\rangle + \tfrac{1}{\sqrt{2}}|1\rangle \) describes the probabilities of tossing a fair coin. Note that squaring either of the amplitudes results in ½, indicating that there is a 50% chance either heads or tails sticks.</p><p>It would be interesting to be able to bias a fair coin at will while it is in the air. While that is the magic of a professional illusionist, this task can, in fact, be achieved by performing operations over qubits. So, get ready to become the next quantum magician!</p>
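<p>The measurement rule is easy to simulate classically. The sketch below (plain Python, with a helper name of our own) measures the fair-coin qubit \( \tfrac{1}{\sqrt{2}}|0\rangle + \tfrac{1}{\sqrt{2}}|1\rangle \) many times; roughly half of the outcomes land on each side:</p>

```python
import random

def measure(a, b):
    """Measure the qubit A|0> + B|1>, collapsing it to a classical bit."""
    p0 = abs(a) ** 2
    # Amplitudes must be normalized: |A|^2 + |B|^2 = 1.
    assert abs(p0 + abs(b) ** 2 - 1.0) < 1e-9
    return 0 if random.random() < p0 else 1

# The basic states always collapse to themselves...
assert measure(1, 0) == 0 and measure(0, 1) == 1

# ...while the fair-coin qubit lands on 0 or 1 with probability 1/2 each.
a = b = 2 ** -0.5
tails = sum(measure(a, b) for _ in range(100_000)) / 100_000
```

<p>Of course, a classical simulation like this only reproduces the statistics of measurement; it does not capture what makes real qubits powerful.</p>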
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/532YwSvZO7qW1yBi7Up29t/99f327467f63ec00bb0070e67ff535c4/coinMagic.png" />
            
            </figure>
    <div>
      <h3>Quantum Gates</h3>
      <a href="#quantum-gates">
        
      </a>
    </div>
    <p>A logic gate represents a Boolean function operating over a set of inputs (on the left) and producing an output (on the right). A logic circuit is a set of connected logic gates, a convenient way to represent bit operations.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3r5DwkDJpXFX5z9c2TLwQf/26349d4859e4f85ac36e8bc05f4d0a79/notGate-1.png" />
            
</figure><p>The NOT gate is a single-bit operation that flips the value of the input bit.</p><p>Other gates include AND, OR, XOR, and NAND. A set of gates is universal if any other gate can be constructed from it. For example, the NOR and NAND gates are each universal, since any circuit can be constructed using only one of them.</p><p>Quantum computing also admits a description using circuits. Quantum gates operate on qubits, modifying the superposition of the states. For example, there is a quantum gate analogous to the NOT gate: the X gate.</p><p>The X quantum gate interchanges the amplitudes of the states of the input qubit.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2uLztfzrwMTgv3EgzcCh68/67a0aa788e99a27da20dc88b904eb0f9/Xgate-1.png" />
            
</figure><p>The Z quantum gate flips the sign of the amplitude of state 1:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1v01TxJtUeYWdKWfI6JO5g/a6e7b5c411191a8733cbb761999af3bb/Zgate-1.png" />
            
            </figure><p>Another quantum gate is the Hadamard gate, which generates an equiprobable superposition of the basic states.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/41HRIfWHpFQBIFfqrgUMg1/94769cbdae4034d9d8bcf2326c4ac6e2/hadamard-1.png" />
            
            </figure><p>Using our coin tossing analogy, the Hadamard gate has the action of tossing a fair coin to the air. In quantum circuits, a triangle represents measuring a qubit, and the resulting bit is indicated by a double-wire.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7byLmFB5x7RfBSLPFGVu0S/1f2db2197514575f7a5ecad7769effeb/measureGate-1.png" />
            
</figure><p>Other gates, such as the CNOT, Pauli, Toffoli, and Deutsch gates, are slightly more advanced. <a href="https://algassert.com/quirk">Quirk</a>, an open-source playground, is a fun sandbox where you can construct quantum circuits using all of these gates.</p>
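<p>The single-qubit gates above are just 2x2 matrices acting on a qubit's pair of amplitudes, which makes them easy to play with in a few lines of plain Python (the helper names are ours):</p>

```python
s = 2 ** -0.5
X = ((0, 1), (1, 0))    # quantum NOT: swaps the amplitudes of |0> and |1>
Z = ((1, 0), (0, -1))   # flips the sign of |1>'s amplitude
H = ((s, s), (s, -s))   # Hadamard: equiprobable superposition of |0> and |1>

def apply(gate, qubit):
    """Multiply a 2x2 gate matrix by a qubit's (A, B) amplitude pair."""
    a, b = qubit
    return (gate[0][0] * a + gate[0][1] * b,
            gate[1][0] * a + gate[1][1] * b)

zero = (1, 0)                 # |0>
plus = apply(H, zero)         # (1/sqrt(2))(|0> + |1>): the "coin toss" state
back = apply(H, plus)         # H is its own inverse, so this is |0> again
```

<p>Note that the Hadamard gate undoes itself, a small preview of the reversibility discussed next.</p>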
    <div>
      <h3>Reversibility</h3>
      <a href="#reversibility">
        
      </a>
    </div>
    <p>An operation is reversible if there exists another operation that rolls back the output state to the initial state. For instance, a NOT gate is reversible since applying a second NOT gate recovers the initial input.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1P37tXMONI1EmSBEn4cizm/26b63188cb18bc769789e33cc8d6d526/notReversibleGate-1.png" />
            
</figure><p>In contrast, the AND, OR, and NAND gates are not reversible. This means that some classical computations cannot be reversed by a classical circuit that uses only the output bits. However, if you insert additional bits of information, the operation can be reversed.</p><p>Quantum computing mainly focuses on reversible computations, because there’s always a way to construct a reversible circuit to perform an irreversible computation. The reversible version of a circuit could require the use of <i>ancillary</i> qubits as auxiliary (but not temporary) variables.</p><p>Due to the nature of composed systems, it could be possible that these <i>ancillas</i> (extra qubits) correlate to qubits of the main computation. This correlation makes it infeasible to reuse ancillas, since any modification could have side effects on the operation of a reversible circuit. This is like memory assigned to a process by the operating system: the process cannot use memory from other processes or it could cause memory corruption, and processes cannot release their assigned memory to other processes. You could use garbage collection mechanisms for ancillas, but performing reversible computations increases your qubit budget.</p>
    <div>
      <h3>Composed Systems</h3>
      <a href="#composed-systems">
        
      </a>
    </div>
<p>In quantum mechanics, a single qubit can be described as a single closed system: a system that has no interaction with the environment or other qubits. Letting qubits interact with others leads to a <i>composed system</i> where more states are represented. The state of a 2-qubit composite system is denoted as \(A_0|00\rangle+A_1|01\rangle+A_2|10\rangle+A_3|11\rangle \), where the \( A_i \) values correspond to the amplitudes of the four basic states 00, 01, 10, and 11. The state \( \tfrac{1}{2}|00\rangle+\tfrac{1}{2}|01\rangle+\tfrac{1}{2}|10\rangle+\tfrac{1}{2}|11\rangle \) represents the superposition of these basic states, all having the same probability of being obtained after measuring the two qubits.</p><p>In the classical case, the state of N bits represents only <b>one</b> of 2ᴺ possible states, whereas a composed state of N qubits represents <b>all</b> the 2ᴺ states <i>but</i> in superposition. This is one big difference between these computing models, and it carries two important properties: entanglement and quantum parallelism.</p>
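<p>A composed state of independent qubits is built with the tensor (Kronecker) product of their amplitude vectors. A quick sketch in plain Python (our own helper name) reproduces the uniform 2-qubit state from the text:</p>

```python
def kron(u, v):
    """Tensor product of two amplitude vectors."""
    return [a * b for a in u for b in v]

s = 2 ** -0.5
plus = [s, s]                  # (1/sqrt(2))(|0> + |1>)

# Two independent "fair coin" qubits compose into the uniform state
# (1/2)(|00> + |01> + |10> + |11>), in basis order 00, 01, 10, 11.
state = kron(plus, plus)
```

<p>Entangled states are precisely the 2-qubit states that cannot be produced this way from any pair of single-qubit vectors.</p>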
    <div>
      <h3>Entanglement</h3>
      <a href="#entanglement">
        
      </a>
    </div>
<p>According to the theory behind quantum mechanics, some composed states can be described through the descriptions of their constituents. However, there are composed states where no such description is possible, known as <i>entangled states</i>.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6AjGsPw33QpE8Dtpz2vCSU/ccef8278f241951a758a81239ba19793/bellStates-2.png" />
            
            </figure><p><a href="https://en.wikipedia.org/wiki/Bell_state">Bell states</a> are examples of entangled qubits</p><p>The entanglement phenomenon was pointed out by Einstein, Podolsky, and Rosen in the so-called <a href="https://en.wikipedia.org/wiki/EPR_paradox">EPR paradox</a>. Suppose there is a composed system of two entangled qubits, in which performing a measurement on one qubit interferes with the measurement of the second. This interference occurs even when the qubits are separated by a long distance, which would mean that some information transfer happens faster than the speed of light. This is how quantum entanglement appears to conflict with the theory of relativity, in which information cannot travel faster than the speed of light. The EPR paradox motivated further investigation into new interpretations of quantum mechanics that aim to resolve the paradox.</p><p>Quantum entanglement can help to transfer information at a distance by following a communication protocol. The following protocol examples rely on the fact that Alice and Bob each possess one of two entangled qubits:</p><ul><li><p>The superdense coding protocol allows Alice to communicate a 2-bit message \(m_0,m_1\) to Bob using a quantum communication channel, for example, using fiber optics to transmit photons. All Alice has to do is operate on her qubit according to the value of the message and send the resulting qubit to Bob. Once Bob receives the qubit, he measures both qubits, noting that the collapsed 2-bit state corresponds to Alice’s message.</p></li></ul>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3B27r5nXlznlvOEeM8H6pm/8bce22506622f7ccacfdf17981b8a2aa/superdenseCoding-3.png" />
            
            </figure><p>Superdense coding protocol.</p><ul><li><p>The quantum teleportation protocol allows Alice to transmit a qubit to Bob without using a quantum communication channel. Alice jointly measures the qubit she wants to send and her entangled qubit, obtaining two bits. Alice sends these bits to Bob, who operates on his entangled qubit according to the bits received; the resulting state matches the original state of Alice’s qubit.</p></li></ul>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3UFNkskWzfhAdyxS4jrAKH/15d547196c7a13e4d721dc454c0451ae/teleportation-2.png" />
            
            </figure><p>Quantum teleportation protocol.</p>
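<p>For small systems, both protocols can be simulated with explicit state vectors. Below is a Python sketch (ours) of superdense coding on 4 amplitudes, with Alice’s qubit written first; the gate and helper names are our own:</p>

```python
import math

s = 1 / math.sqrt(2)
BELL = {  # the four Bell states, indexed by the 2-bit message encoding them
    (0, 0): [s, 0, 0, s],    # (|00> + |11>)/sqrt(2)
    (0, 1): [0, s, s, 0],    # (|01> + |10>)/sqrt(2)
    (1, 0): [s, 0, 0, -s],   # (|00> - |11>)/sqrt(2)
    (1, 1): [0, s, -s, 0],   # (|01> - |10>)/sqrt(2)
}

def x_on_alice(v):  # Pauli-X on Alice's qubit: swaps |0b> and |1b> amplitudes
    return [v[2], v[3], v[0], v[1]]

def z_on_alice(v):  # Pauli-Z on Alice's qubit: negates |1b> amplitudes
    return [v[0], v[1], -v[2], -v[3]]

def superdense_send(m0, m1):
    """Alice encodes (m0, m1) by acting only on her half of (|00>+|11>)/sqrt(2)."""
    v = BELL[(0, 0)]          # the shared entangled pair
    if m1:
        v = x_on_alice(v)
    if m0:
        v = z_on_alice(v)
    return v

def bob_decode(v):
    """Bob's Bell-basis measurement: the Bell state matching v reveals the bits."""
    for bits, b in BELL.items():
        if abs(sum(x * y for x, y in zip(v, b)) - 1) < 1e-9:
            return bits
```

<p>Two classical bits come out of measuring the pair, yet Alice only ever touched, and transmitted, one qubit.</p>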
    <div>
      <h3>Quantum Parallelism</h3>
      <a href="#quantum-parallelism">
        
      </a>
    </div>
    <p>Composed systems of qubits allow more information to be represented per composed state. Note that operating on a composed state of N qubits is equivalent to operating over a set of 2ᴺ states in superposition. This is <i>quantum parallelism</i>. In this setting, operating over a large volume of information gives the intuition of performing operations in parallel, as in the parallel computing paradigm; one big caveat is that superposition is not equivalent to parallelism.</p><p>Remember that a composed state is a superposition of several states, so a computation that takes a composed state of inputs will result in a composed state of outputs. The main divergence between classical and quantum parallelism is that quantum parallelism can obtain only <b>one</b> of the processed outputs. Observe that a measurement of the output of a composed state causes the qubits to collapse to only <b>one</b> of the outputs, making it unattainable to read off all the computed values.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2sgYPfYBGUWfrFMQOlq3Fw/e1b5ef233c303c2a7d76e62297e3496a/parallel-3.png" />
            
            </figure><p>Although quantum parallelism does not match precisely with the traditional notion of parallel computing, you can still leverage this computational power to get related information.</p><p><i>Deutsch-Jozsa Problem</i>: Assume \(F\) is a function that takes as input N bits, outputs one bit, and is either constant (always outputs the same value for all inputs) or balanced (outputs 0 for half of the inputs and 1 for the other half). The problem is to determine if \(F\) is constant or balanced.</p><p>The quantum algorithm that solves the Deutsch-Jozsa problem uses quantum parallelism. First, N qubits are initialized in a superposition of 2ᴺ states. Then, in a single shot, it evaluates \(F\) for all of these states.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2i53rGbCSX6mPwXVbvij17/26f3b116ccca9d78085a61dafeb47940/deutsch.png" />
            
            </figure><p>(note that some factors were omitted for simplicity)</p><p>The result of applying \(F\) appears in the exponent of the amplitude of the all-zeros state. Note that this amplitude is either +1 or -1 only when \(F\) is constant. If the result of measuring the N qubits is the all-zeros bitstring, then there is 100% certainty that \(F\) is constant. Any other result indicates that \(F\) is balanced.</p><p>A deterministic classical algorithm solves this problem using \( 2^{N-1}+1\) evaluations of \(F\) in the worst case. Meanwhile, the quantum algorithm requires only <b>one</b> evaluation. The Deutsch-Jozsa problem exemplifies the exponential advantage of a quantum algorithm over classical algorithms.</p>
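<p>The role of the all-zeros amplitude can be checked numerically. This Python sketch (ours) evaluates \(F\) on every input, which is exactly the exponential work a quantum computer avoids; it only illustrates where the answer ends up:</p>

```python
# After the final Hadamard layer, the amplitude of the all-zeros state equals
# the average of (-1)^F(x) over all 2^N inputs: +/-1 if F is constant,
# exactly 0 if F is balanced.

def is_constant(f, n):
    amp = sum((-1) ** f(x) for x in range(2 ** n)) / 2 ** n
    return abs(amp) == 1  # all-zeros measured with certainty => constant

constant_f = lambda x: 1
balanced_f = lambda x: x & 1      # lowest bit: 0 for half the inputs

print(is_constant(constant_f, 4))  # True
print(is_constant(balanced_f, 4))  # False
```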
    <div>
      <h2>Quantum Computers</h2>
      <a href="#quantum-computers">
        
      </a>
    </div>
    <p>The theory of quantum computing is supported by investigations in the field of quantum mechanics. However, constructing a quantum machine requires a physical system that allows representing qubits and manipulating states in a reliable and precise way.</p><p>The <a href="https://onlinelibrary.wiley.com/doi/abs/10.1002/1521-3978%28200009%2948%3A9/11%3C771%3A%3AAID-PROP771%3E3.0.CO%3B2-E">DiVincenzo Criteria</a> require that a physical implementation of a quantum computer must:</p><ol><li><p>Be scalable and have well-defined qubits.</p></li><li><p>Be able to initialize qubits to a known state.</p></li><li><p>Have long decoherence times in order to apply quantum error-correcting codes. Decoherence of a qubit happens when the qubit interacts with the environment, for example, when a measurement is performed.</p></li><li><p>Use a universal set of quantum gates.</p></li><li><p>Be able to measure single qubits without modifying others.</p></li></ol><p>Physical implementations of quantum computers face huge engineering obstacles to satisfy these requirements. The most important challenge is to guarantee low error rates during computation and measurement. Lowering these rates requires techniques for error correction, which add a significant number of qubits specialized for this task. For this reason, the number of qubits of a quantum computer should not be interpreted the same way as for classical systems. In a classical computer, all the bits are effective for performing a calculation, whereas the number of qubits of a quantum computer is the sum of the effective qubits (those used to make calculations), the ancillas (used for reversible computations), and the error-correction qubits.</p><p>Current implementations of quantum computers satisfy the DiVincenzo criteria only partially. Quantum adiabatic computers fit in this category, since they do not operate using quantum gates; for this reason, they are not considered universal quantum computers.</p>
    <div>
      <h3>Quantum Adiabatic Computers</h3>
      <a href="#quantum-adiabatic-computers">
        
      </a>
    </div>
    <p>A recurrent problem in optimization is finding the global minimum of an objective function. For example, a route-traffic control system can be modeled as a function whose routing cost must be reduced to a minimum. Simulated annealing is a heuristic procedure that provides good solutions to these types of problems; it finds the solution state by slowly introducing changes (<a href="https://en.wikipedia.org/wiki/Adiabatic_theorem">the adiabatic process</a>) to the variables that govern the system.</p><p><a href="https://doi.org/10.1103/PhysRevA.57.2403">Quantum annealing</a> is the quantum analogue of simulated annealing. A set of qubits is initialized into a superposition of states representing all possible solutions to the problem. The <a href="https://en.wikipedia.org/wiki/Hamiltonian_(quantum_mechanics)">Hamiltonian operator</a>, the sum of the potential and kinetic energies of the system, is used to encode the objective function, describing the evolution of the system over time. Then, if the system is allowed to evolve very slowly, it will eventually land on a final state representing the optimal value of the objective function.</p><p>Adiabatic computers featuring hundreds of qubits, such as the D-Wave systems, already exist on the market; however, their capabilities are limited to problems that can be modeled as optimization problems. The limits of adiabatic computers were studied by <a href="https://people.eecs.berkeley.edu/~vazirani/pubs/adiabatic.pdf">van Dam et al</a>, showing that, despite solving local search problems and even some instances of the <a href="https://doi.org/10.1080/00107514.2018.1450720">max-SAT problem</a>, there exist harder search problems this computing model cannot efficiently solve.</p>
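<p>As a point of comparison, here is a minimal Python sketch (ours) of classical simulated annealing, the heuristic that quantum annealing generalizes; the objective function, cooling schedule, and parameters are illustrative choices:</p>

```python
import math
import random

def simulated_annealing(objective, x0, steps=20000, t0=2.0):
    """Minimize `objective` by accepting worse moves less often as we cool."""
    random.seed(7)                  # deterministic run for this sketch
    x = best = x0
    for i in range(steps):
        t = t0 * (1 - i / steps) + 1e-9            # linear cooling schedule
        candidate = x + random.uniform(-0.5, 0.5)  # small local perturbation
        delta = objective(candidate) - objective(x)
        # Always accept improvements; accept worse states with probability
        # exp(-delta/t), which shrinks toward 0 as the temperature cools.
        if delta < 0 or random.random() < math.exp(-delta / t):
            x = candidate
        if objective(x) < objective(best):
            best = x
    return best

# A bumpy objective whose global minimum sits in the basin near x = 2.
f = lambda x: (x - 2) ** 2 + 0.5 * math.sin(10 * x)
print(simulated_annealing(f, x0=-5.0))  # lands in the low basin near x = 2
```

<p>Quantum annealing replaces the random perturbations with the slow evolution of a Hamiltonian, but the shape of the search is the same.</p>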
    <div>
      <h3>Nuclear Magnetic Resonance</h3>
      <a href="#nuclear-magnetic-resonance">
        
      </a>
    </div>
    <p>Nuclear Magnetic Resonance (NMR) is a physical phenomenon that can be used to represent qubits. The spins of the atomic nuclei of molecules are perturbed by an oscillating magnetic field. A 2001 <a href="https://www.nature.com/articles/414883a">report</a> describes a successful implementation of Shor’s algorithm on a 7-qubit NMR quantum computer: an iconic result, since this computer was able to factor the number 15.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4nhODbPubqocd6duEMvzui/9330f1a27684d959443bfae20fd4eb19/NMR_EPR.gif" />
            
            </figure><p>Nucleus spinning induced by a magnetic field, <a href="https://commons.wikimedia.org/w/index.php?curid=20398517">Darekk2</a> - <a href="https://creativecommons.org/licenses/by-sa/3.0/us/">CC BY-SA 3.0</a></p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5HwOLCsA3LUrJwOid1a5Bd/d134ec31fbdbeec162a1d37bafb70205/pasted-image-0--9-.png" />
            
            </figure><p>NMR Spectrometer by <a href="http://web.physics.ucsb.edu/~msteffen/nmrqc.htm">UCSB</a></p>
    <div>
      <h3>Superconducting Quantum Computers</h3>
      <a href="#superconducting-quantum-computers">
        
      </a>
    </div>
    <p>One way to physically construct qubits is based on superconductors, materials that conduct electric current with zero resistance when exposed to temperatures close to absolute zero.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4EbOKUEsS0c1kzEqJ8ygGs/3dad6346018a3800ac3bdfe9bafdd5e4/thermo.png" />
            
            </figure><p>The Josephson effect, in which current flows across the junction of two superconductors separated by a non-superconducting material, is used to physically implement a superposition of states.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6Nu6SWJ97k1yjuOCnmwjaK/035e2519ed36af9d0721e3048cfd7851/pasted-image-0--11-.png" />
            
            </figure><p>A Josephson junction - <a href="https://commons.wikimedia.org/w/index.php?curid=319467">Public Domain</a></p><p>When a magnetic flux is applied to this junction, the current flows continuously in one direction. But, depending on the quantity of magnetic flux applied, the current can also flow in the opposite direction. There exists a quantum superposition of currents flowing both clockwise and counterclockwise, leading to a physical implementation of a qubit called a <i>flux qubit</i>. The complete device is known as a Superconducting Quantum Interference Device (SQUID) and can be easily coupled to other devices, scaling up the number of qubits. Thus, SQUIDs are like the transistors of a quantum computer.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4GUTS2ji1u31Laa9Ki7qsa/4a040661686bf49e679529362f0fab69/pasted-image-0--12-.png" />
            
            </figure><p>SQUID: Superconducting Quantum Interference Device. Image by <a href="https://www.kurzweilai.net/lockheed-martin-buys-first-d-wave-quantum-computing-system">Kurzweil Network</a> and <a href="https://www.dwavesys.com/tutorials/background-reading-series/introduction-d-wave-quantum-hardware">original</a> source.</p><p>Examples of superconducting computers are:</p><ul><li><p>D-wave’s <a href="https://www.dwavesys.com/d-wave-two-system">adiabatic computers</a>, which perform quantum annealing for solving diverse optimization problems.</p></li><li><p>Google’s <a href="https://ai.googleblog.com/2018/03/a-preview-of-bristlecone-googles-new.html">72-qubit computer</a>, recently announced along with several remaining <a href="https://ai.googleblog.com/2019/02/on-path-to-cryogenic-control-of-quantum.html">engineering issues</a>, such as achieving lower temperatures.</p></li><li><p>IBM’s <a href="https://www.research.ibm.com/ibm-q/technology/devices/">IBM-Q Tokyo</a>, a 20-qubit computer, and IBM Q Experience, a cloud-based system for exploring quantum circuits.</p></li></ul>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2KGpruf4BLxzcZWmIdEygU/dd7fbc74a3ccaf89d9dbb70a99ebf3d2/pasted-image-0--13-.png" />
            
            </figure><p>D-Wave Cooling System by <a href="https://www.dwavesys.com/tutorials/background-reading-series/introduction-d-wave-quantum-hardware">D-Wave Systems Inc.</a></p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2TWZIHj4mFrE5mrF8eod4Y/1fbee6190611e6e85d96c35c646ad5d7/pasted-image-0--14-.png" />
            
            </figure><p><a href="https://www.forbes.com/sites/tiriasresearch/2019/01/17/ibm-lattice-cryptography-is-needed-now-to-defend-against-quantum-computing-future/#38383853c42e">IBM Q System One</a> cryostat at CES.</p>
    <div>
      <h2>The Imminent Threat of Quantum Algorithms</h2>
      <a href="#the-imminent-threat-of-quantum-algorithms">
        
      </a>
    </div>
    <p>The <a href="https://quantumalgorithmzoo.org">quantum zoo website</a> tracks problems that can be solved using quantum algorithms. As of mid-2018, more than 60 problems appear on this list, targeting diverse applications in the areas of number theory, approximation, simulation, and searching. As terrific as it sounds, some of the problems easily solvable by a quantum computer underpin the security of information.</p>
    <div>
      <h3>Grover’s Algorithm</h3>
      <a href="#grovers-algorithm">
        
      </a>
    </div>
    <blockquote><p><b>Tales of a quantum detective (fragment)</b><i>.</i> A couple of detectives have the mission of finding one culprit in a group of suspects who always respond to this question honestly: “are you guilty?”. Detective C follows a classical interrogation method and interviews every person, one at a time, until finding the first one who confesses. Detective Q proceeds in a different way: first, he gathers all the suspects in a completely dark room, and after that, he asks them -- are you guilty? -- A steady sound comes from the room saying “No!”, while at the same time a single voice mixed in the air responds “Yes!”. Since everybody is submerged in darkness, the detective cannot see the culprit. However, detective Q knows that, as the interrogation advances, the culprit will feel desperate and start to speak louder and louder, and so he continues asking the same question. Suddenly, detective Q turns on the lights, enters the room, and captures the culprit. How did he do it?</p></blockquote><p>The task of the detective can be modeled as a search problem: given a Boolean function \( f\) that takes N bits and produces one bit, find the unique input \(x\) such that \( f(x)=1\).</p><p>A classical algorithm (detective C) finds \(x\) using \(2^N-1\) function evaluations in the worst case. However, the quantum algorithm devised by Grover, corresponding to detective Q, searches quadratically faster, using around \(2^{N/2}\) function evaluations.</p><p>The key intuition of Grover’s algorithm is to increase the amplitude of the state that represents the solution while keeping the other states at a lower amplitude. In this way, a system of N qubits, which is a superposition of 2ᴺ possible inputs, can be repeatedly updated until the solution state has an amplitude close to 1. Hence, after updating the qubits many times, there will be a high probability of measuring the solution state.</p><p>Initially, a superposition of 2ᴺ states (horizontal axis) is set, and each state has an amplitude (vertical axis) close to 0. The qubits are updated so that the amplitude of the solution state increases more than the amplitude of the other states. By repeating the update step, the amplitude of the solution state gets closer to 1, which boosts the probability of collapsing to the solution state upon measurement.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6Db09DeqNScBPD5IBrFSwH/45b2bac441bee6ba789779707600a487/grover.gif" />
            
            </figure><p>Image taken from D. Bernstein’s <a href="https://cr.yp.to/talks/2019.02.05/slides-djb-20190205-walks-4x3.pdf">slides</a>.</p><p><b>Grover’s Algorithm</b> (pseudo-code):</p><ol><li><p>Prepare an N-qubit register \(|x\rangle \) as a uniform superposition of 2ᴺ states.</p></li><li><p>Update the qubits by performing this core operation. $$ |x\rangle \mapsto (-1)^{f(x)} |x\rangle $$ The result of \( f(x) \) flips the amplitude of the searched state only.</p></li><li><p>Invert the amplitudes of the N-qubit register about their average.</p></li><li><p>Repeat Steps 2 and 3 \( (\tfrac{\pi}{4})  2^{ N/2} \) times.</p></li><li><p>Measure the qubits and return the bits obtained.</p></li></ol><p>Alternatively, the second step can be better understood as a conditional statement:</p>
            <pre><code>IF f(x) = 1 THEN
     Negate the amplitude of the solution state.
ELSE
     /* nothing */
ENDIF</code></pre>
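<p>For small N, the amplitude dynamics of these steps can be simulated classically. The sketch below (ours) stores all 2ᴺ amplitudes explicitly, something a real quantum computer never does, and shows the solution’s probability approaching 1:</p>

```python
import math

def grover(n, solution):
    """Simulate Grover amplitude dynamics for an n-qubit search."""
    size = 2 ** n
    amp = [1 / math.sqrt(size)] * size        # step 1: uniform superposition
    rounds = int(math.pi / 4 * math.sqrt(size))
    for _ in range(rounds):
        amp[solution] = -amp[solution]        # step 2: (-1)^f(x) phase flip
        mean = sum(amp) / size
        amp = [2 * mean - a for a in amp]     # step 3: invert about the mean
    return amp

amp = grover(8, solution=42)
print(amp[42] ** 2)  # probability of measuring the solution: close to 1
```

<p>After roughly \( (\tfrac{\pi}{4}) 2^{N/2} \) iterations the solution dominates; iterating further would start to rotate the amplitude away again.</p>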
            <p>Grover’s algorithm considers function \(f\) a black box, so with slight modifications, the algorithm can also be used to find collisions of the function. This implies that Grover’s algorithm can find a collision using asymptotically fewer operations than a brute-force search.</p><p>The power of Grover’s algorithm can be turned against cryptographic hash functions. For instance, a quantum computer running Grover’s algorithm could find a preimage of SHA256 performing only 2¹²⁸ evaluations of a reversible circuit of SHA256. The natural protection for hash functions is to double the output size. More generally, most symmetric-key encryption algorithms will survive the power of Grover’s algorithm if the size of their keys is doubled.</p><p>The scenario for public-key algorithms, however, is devastating in the face of Peter Shor’s algorithm.</p>
    <div>
      <h3>Shor’s Algorithm</h3>
      <a href="#shors-algorithm">
        
      </a>
    </div>
    <p>Multiplying integers is an easy task to accomplish; however, finding the factors that compose an integer is difficult. The <i>integer factorization</i> problem is to decompose a given integer number into its prime factors. For example, 42 has three prime factors, 2, 3, and 7, since \( 2\times 3\times 7 = 42\). As the numbers get bigger, integer factorization becomes more difficult to solve, and the hardest instances of integer factorization are those whose factors are two different large primes. Thus, given an integer number \(N\), finding primes \(p\) and \(q\) such that \( N = p \times q\) is known as <i>integer splitting</i>.</p><p>Factoring integers is like cutting wood, and the specific task of splitting integers is analogous to using an axe for splitting the log in two parts. There exist many different tools (algorithms) for accomplishing each task.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3xu3bHkkIKDkHCIGmFpMHW/933aebe8dc06d560b6046b2a717ed22c/split-logs-_2x-1.png" />
            
            </figure><p>For integer factorization, trial division, Pollard's rho method, and the elliptic curve method are common algorithms. Fermat's method and the quadratic and rational sieves lead to the (general) number field sieve (NFS) algorithm for integer splitting. The latter relies on finding a congruence of squares, that is, writing \(N\) as a difference of squares such that $$ N = x^2 - y^2 = (x+y)\times(x-y) $$ The complexity of NFS is mainly tied to the number of pairs \((x, y)\) that must be examined before finding a pair that factors \(N\). The NFS algorithm has subexponential complexity in the size of \(N\), meaning that the time required for splitting an integer increases significantly as the size of \(N\) grows. For large integers, the problem becomes intractable for classical computers.</p>
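<p>Fermat’s method is the simplest member of this congruence-of-squares family and fits in a few lines of Python (a toy sketch of ours; real sieves find the squares far more cleverly):</p>

```python
import math

def fermat_split(n):
    """Split odd composite n by searching for x with x^2 - n a perfect
    square y^2, so that n = x^2 - y^2 = (x + y)(x - y)."""
    x = math.isqrt(n)
    if x * x < n:
        x += 1
    while True:
        y2 = x * x - n
        y = math.isqrt(y2)
        if y * y == y2:               # found a congruence of squares
            return x + y, x - y
        x += 1

print(fermat_split(5959))  # (101, 59), found after only a few values of x
```

<p>The method is fast only when the two prime factors are close together; the sieves generalize the same identity to arbitrary factors.</p>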
    <div>
      <h3>The Axe of Thor Shor</h3>
      <a href="#the-axe-of-thor-shor">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2GxTLE4NstVADFSD1GrMYJ/7ad0d018f58e6e19ee9d227085c16a63/pasted-image-0--5-.png" />
            
            </figure><p>Olaf Tryggvason - <a href="https://be.m.wikipedia.org/wiki/%D0%A4%D0%B0%D0%B9%D0%BB:Olaf_Trygvason_struck_the_god_Thor_down_from_his_seat.gif">Public Domain</a></p><p>The many different guesses of the NFS algorithm are analogous to hitting the log with a dulled axe; after subexponentially many tries, the log is cut in half. However, using a sharper axe allows you to split the log faster. This sharpened axe is the quantum algorithm proposed by Shor in 1994.</p><p>Let \(x\) be an integer less than \(N\) and of order \(k\) modulo \(N\). Then, if \(k\) is even, there exists an integer \(q\) so that \(qN\) can be factored as follows.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7t1jfVt9FExCaJg1ECBEPe/b780f2d2beac0c0166e03b3cd8f7b15d/GDU9PfKYGcI6l4uvKFH_wk_yZjX55rFpFFR0cLHr1ZCQmtkHlU8EO2K9VQWerbwwaIEdej8CLlJh9M3P1pCJWJ6SwGtRSIWoqNqll5enRkvvIv_nhj6uTWojWy0b.png" />
            
            </figure><p>This approach has some issues. For example, the factorization could correspond to \(q\), not \(N\), and the order of \(x\) is unknown; here is where Shor’s algorithm enters the picture, finding the order of \(x\).</p><p>The internals of Shor’s algorithm rely on encoding the order \(k\) into a periodic function, so that its period can be obtained using the quantum version of the Fourier transform (QFT). The order of \(x\) can be found using a polynomial number of quantum evaluations of Shor’s algorithm. Therefore, splitting integers using this quantum approach has polynomial complexity in the size of \(N\).</p><p>Shor’s algorithm has strong implications for the security of the RSA encryption scheme, whose security relies on integer factorization. A large-enough quantum computer can efficiently break RSA for current instances.</p><p>Alternatively, one may resort to elliptic curves, used in cryptographic protocols like <a href="/ecdsa-the-digital-signature-algorithm-of-a-better-internet/">ECDSA</a> or ECDH. Moreover, all <a href="/staying-on-top-of-tls-attacks/">TLS ciphersuites</a> use a combination of elliptic curve groups, large prime groups, and RSA and DSA signatures. Unfortunately, these algorithms all succumb to Shor’s algorithm: it only takes a few modifications for Shor’s algorithm to solve the discrete logarithm problem in finite groups. This sounds like a catastrophic story in which all of our encrypted data and privacy are no longer secure with the advent of a quantum computer, and in some sense this is true.</p><p>On one hand, it is a fact that the quantum computers constructed as of 2019 are not large enough to run, for instance, Shor’s algorithm for the RSA key sizes used in standard protocols.
For example, a 2018 <a href="https://doi.org/10.1038/s41598-018-36058-z">report</a> shows experiments on the factorization of a 19-bit number using 94 qubits; the authors also estimate that 147,456 qubits would be needed to factor a 768-bit number. Hence, these numbers indicate that we are still far from breaking RSA.</p><p>What if we increase RSA key sizes to resist quantum algorithms, just as for symmetric algorithms?</p><p><a href="https://cr.yp.to/papers/pqrsa-20170419.pdf">Bernstein et al</a>. estimated that RSA public keys would have to be as large as 1 terabyte to keep RSA secure even in the presence of quantum factoring algorithms. So, for public-key algorithms, increasing the size of keys does not help.</p><p>A recent investigation by <a href="https://arxiv.org/abs/1905.09749">Gidney and Ekerå</a> shows improvements that accelerate quantum factorization. In their report, factoring a 2048-bit integer is estimated to take a few hours on a quantum machine of 20 million qubits, which is far from any current development. Something worth noting is that this number of qubits is two orders of magnitude smaller than the estimates given in previous works this decade. Under these estimates, current encryption algorithms will remain secure for several more years; however, consider the following not-so-unrealistic situation.</p><p>Information currently encrypted with, for example, RSA can easily be decrypted with a quantum computer in the future. Now, suppose that someone records encrypted information and stores it until a quantum computer is able to decrypt the ciphertexts. Although this could be as far as 20 years from now, the forward-secrecy principle is violated. A 20-year gap into the future is sometimes difficult to imagine, so let’s think backwards: what would happen if everything you did on the Internet at the end of the 1990s could be revealed 20 years later -- today?
How does this impact the security of your personal information? What if the ciphertexts were company secrets or business deals? In 1999, most of us were concerned about the effects of the <a href="https://en.wikipedia.org/wiki/Year_2000_problem">Y2K problem</a>; now we’re facing Y2Q (<i>years to quantum</i>): the advent of quantum computers.</p>
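<p>To close the loop on the reduction described earlier, here is a classical Python sketch (ours) of the shell around Shor’s quantum step. The order is found here by brute force; replacing that loop with the QFT-based period finding is precisely the quantum speedup:</p>

```python
import math

def order(x, n):
    """Multiplicative order of x modulo n (assumes gcd(x, n) == 1)."""
    k, y = 1, x % n
    while y != 1:
        y = (y * x) % n
        k += 1
    return k

def split_from_order(x, n):
    """Classical reduction: an even order k of x yields factors of n via gcds."""
    k = order(x, n)                   # the step Shor's algorithm accelerates
    if k % 2 != 0:
        return None                   # unlucky choice of x: pick another one
    half = pow(x, k // 2, n)
    if half == n - 1:                 # x^(k/2) = -1 (mod n): also retry
        return None
    return math.gcd(half - 1, n), math.gcd(half + 1, n)

print(split_from_order(7, 15))  # order of 7 mod 15 is 4, giving (3, 5)
```

<p>For a random \(x\), the even-order case occurs with good probability, so only a few retries are expected before \(N\) splits.</p>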
    <div>
      <h3>Post-Quantum Cryptography</h3>
      <a href="#post-quantum-cryptography">
        
      </a>
    </div>
    <p>Although the current capacity of physical implementations of quantum computers is far from being a real threat to secure communications, a transition to stronger problems for protecting information has already started. This wave emerged as <a href="https://www.cloudflare.com/learning/ssl/quantum/what-is-post-quantum-cryptography/">post-quantum cryptography</a> (PQC). The core idea of PQC is finding problems hard enough that no quantum (or classical) algorithm can solve them.</p><p>A recurrent question is: what does a problem that even a quantum computer cannot solve look like?</p><p>These so-called quantum-resistant algorithms rely on different hard mathematical assumptions, some of them as old as RSA, others more recently proposed. For example, the McEliece cryptosystem, formulated in the late 70s, relies on the hardness of decoding a linear code (in the sense of coding theory). The practical use of this cryptosystem never became widespread, since, with the passing of time, other cryptosystems surpassed it in efficiency. Fortunately, the McEliece cryptosystem <a href="https://www.iacr.org/archive/crypto2011/68410758/68410758.pdf">remains immune</a> to Shor’s algorithm, giving it renewed relevance in the post-quantum era.</p><p>Post-quantum cryptography presents several alternatives:</p><ol><li><p><a href="https://en.wikipedia.org/wiki/Lattice-based_cryptography">Lattice-based Cryptography</a></p></li><li><p><a href="https://en.wikipedia.org/wiki/Hash-based_cryptography">Hash-based Cryptography</a></p></li><li><p><a href="https://en.wikipedia.org/wiki/Supersingular_isogeny_key_exchange">Isogeny-based Cryptography</a></p></li><li><p><a href="https://en.wikipedia.org/wiki/Linear_code">Code-based Cryptography</a></p></li><li><p><a href="https://en.wikipedia.org/wiki/Multivariate_cryptography">Multivariate-based Cryptography</a></p></li></ol>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4psB57ri3b4jm3c9wLRyEo/25da1355cbeb382b1e8ac6653d4e7007/postq.png" />
            
            </figure><p>In 2017, <a href="https://csrc.nist.gov/projects/post-quantum-cryptography/post-quantum-cryptography-standardization">NIST</a> started an evaluation process that tracks possible alternatives for next-generation secure algorithms. From a practical perspective, all candidates present different trade-offs in implementation and usage. The time and space requirements are diverse; at this moment, it’s too early to tell which will succeed RSA and elliptic curves. An initial round collected 70 algorithms for deploying key encapsulation mechanisms and digital signatures. As of early 2019, 28 of these survive and are currently in the analysis, investigation, and experimentation phase.</p><p>Cloudflare's mission is to help build a better Internet. As a proactive measure, our cryptography team is preparing experiments on the deployment of post-quantum algorithms at Cloudflare scale. Watch our <a href="/towards-post-quantum-cryptography-in-TLS">blog post</a> for more details.</p> ]]></content:encoded>
            <category><![CDATA[Crypto Week]]></category>
            <category><![CDATA[Research]]></category>
            <category><![CDATA[Cryptography]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[Deep Dive]]></category>
            <category><![CDATA[Post-Quantum]]></category>
            <guid isPermaLink="false">kpmRInAtlxRgOqKXv2WPp</guid>
            <dc:creator>Armando Faz-Hernández</dc:creator>
        </item>
        <item>
            <title><![CDATA[Towards Post-Quantum Cryptography in TLS]]></title>
            <link>https://blog.cloudflare.com/towards-post-quantum-cryptography-in-tls/</link>
            <pubDate>Thu, 20 Jun 2019 13:01:00 GMT</pubDate>
            <description><![CDATA[ In anticipation of wide-spread quantum computing, the transition from classical public-key cryptography primitives to post-quantum (PQ) alternatives has started. ]]></description>
            <content:encoded><![CDATA[ <p></p><p>We live in a completely connected society. A society connected by a variety of devices: laptops, mobile phones, wearables, self-driving or self-flying <i>things</i>. We have standards for a common language that allows these devices to communicate with each other. This is critical for wide-scale deployment – especially in cryptography where the smallest detail has great importance.</p><p>One of the most important standards-setting organizations is the National Institute of Standards and Technology (NIST), which is hugely influential in determining which standardized cryptographic systems see worldwide adoption. At the end of 2016, NIST announced it would hold a multi-year open project with the goal of standardizing new <a href="https://www.cloudflare.com/learning/ssl/quantum/what-is-post-quantum-cryptography/">post-quantum (PQ) cryptographic algorithms</a> secure against both quantum and classical computers.</p><p>Many of our devices have very different requirements and capabilities, so it may not be possible to select a “one-size-fits-all” algorithm during the process. NIST mathematician Dustin Moody indicated that the institute will likely select more than one algorithm:</p><blockquote><p><i>“There are several systems in use that could be broken by a quantum computer - public-key encryption and digital signatures, to take two examples - and we will need different solutions for each of those systems.”</i></p></blockquote><p>Initially, NIST selected 82 candidates for further consideration from all submitted algorithms. At the beginning of 2019, this process entered its second stage. Today, there are 26 algorithms still in contention.</p>
    <div>
      <h3>Post-quantum cryptography: what is it really and why do I need it?</h3>
      <a href="#post-quantum-cryptography-what-is-it-really-and-why-do-i-need-it">
        
      </a>
    </div>
    <p>In 1994, Peter Shor made a significant <a href="https://arxiv.org/abs/quant-ph/9508027">discovery</a> in quantum computation. He found an algorithm for integer factorization and computing discrete logarithms, both believed to be hard to solve in classical settings. Since then it has become clear that the 'hard problems' on which cryptosystems like RSA and elliptic curve cryptography (ECC) rely – integer factoring and computing discrete logarithms, respectively – are efficiently solvable with quantum computing.</p><p>A quantum computer can help to solve some of the problems that are intractable on a classical computer. In theory, they could efficiently solve some <a href="https://www.quantamagazine.org/finally-a-problem-that-only-quantum-computers-will-ever-be-able-to-solve-20180621/">fundamental problems</a> in mathematics. This amazing computing power would be highly beneficial, which is why companies are actually trying to build quantum computers. At first, Shor’s algorithm was merely a theoretical result – quantum computers powerful enough to execute it did not exist – but this is quickly changing. In March 2018, Google announced a 72-qubit universal quantum computer. While this is not enough to break, say, RSA-2048 (still <a href="https://arxiv.org/abs/1905.09749">more is needed</a>), many fundamental problems have already been solved.</p><p>In anticipation of wide-spread quantum computing, we must start the transition from classical public-key cryptography primitives to post-quantum (PQ) alternatives. It may be that consumers will never get to hold a quantum computer, but a few powerful attackers who will get one can still pose a serious threat. Moreover, under the assumption that current TLS handshakes and ciphertexts are being captured and stored, a future attacker could crack the stored session keys and use them to decrypt the corresponding ciphertexts.
Even strong security guarantees, like <a href="/staying-on-top-of-tls-attacks/">forward secrecy</a>, do not help much there.</p><p>In 2006, the academic research community launched a conference series dedicated to finding alternatives to RSA and ECC. This so-called <i>post-quantum cryptography</i> should run efficiently on a classical computer, but it should also be secure against attacks performed by a quantum computer. As a research field, it has grown substantially in popularity.</p><p>Several companies, including Google, Microsoft, Digicert and Thales, are already testing the impact of deploying PQ cryptography. Cloudflare is involved in some of this, but we want to be a company that leads in this direction. The first thing we need to do is understand the real costs of deploying PQ cryptography, and that’s not obvious at all.</p>
    <div>
      <h3>What options do we have?</h3>
      <a href="#what-options-do-we-have">
        
      </a>
    </div>
    <p>Many submissions to the NIST project are still under study. Some are very new and little understood; others are more mature and already standardized as RFCs. Some have been broken or withdrawn from the process; others are more conservative or <a href="https://eprint.iacr.org/2017/351.pdf">illustrate</a> how far classical cryptography would need to be pushed so that a quantum computer could not crack it within a reasonable cost. Some are very slow and big; others are not. But most cryptographic schemes can be categorized into these families: <a href="https://web.eecs.umich.edu/~cpeikert/pubs/lattice-survey.pdf">lattice-based</a>, <a href="http://www.cryptosystem.net/hfe.pdf">multivariate</a>, <a href="https://link.springer.com/content/pdf/10.1007%2F0-387-34805-0_21.pdf">hash-based</a> (signatures only), <a href="https://ipnpr.jpl.nasa.gov/progress_report2/42-44/44N.PDF">code-based</a> and <a href="https://eprint.iacr.org/2011/506.pdf">isogeny-based</a>.</p><p>Nevertheless, for some algorithms there is a fear that they may be too inconvenient to use on today’s Internet. We must also be able to integrate new cryptographic schemes with existing protocols, such as <a href="https://www.cloudflare.com/learning/access-management/what-is-ssh/">SSH</a> or TLS. To do that, designers of PQ cryptosystems must consider these characteristics:</p><ul><li><p>Latency caused by encryption and decryption on both ends of the communication channel, assuming a variety of devices from big and fast servers to slow and memory-constrained IoT (Internet of Things) devices</p></li><li><p>Small public keys and signatures to minimize bandwidth</p></li><li><p>A clear design that allows cryptanalysis and the identification of exploitable weaknesses</p></li><li><p>Use of existing hardware for fast implementation</p></li></ul><p>The work on post-quantum public key cryptosystems must be done in full view of organizations, governments, cryptographers, and the public.
Emerging ideas must be properly vetted by this community to ensure widespread support.</p>
    <div>
      <h3>Helping Build a Better Internet</h3>
      <a href="#helping-build-a-better-internet">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3oHEFoSgnIAeuq111tpnEw/18189d2954df7b8c991258f1439a2e2f/pasted-image-0-5.png" />
            
            </figure><p>To better understand the post-quantum world, Cloudflare began experimenting with these algorithms and used them to provide confidentiality in TLS connections.</p><p>With Google, we are proposing a wide-scale experiment that combines client- and server-side data collection to evaluate the performance of key-exchange algorithms on actual users’ devices. We hope that this experiment helps choose an algorithm with the best characteristics for the future of the Internet. With Cloudflare’s highly distributed network of access points and Google’s Chrome browser, both companies are in a very good position to perform this experiment.</p><p>Our goal is to understand how these algorithms act when used by real clients over real networks, particularly candidate algorithms with significant differences in public-key or ciphertext sizes. Our focus is on how different key sizes affect handshake time in the context of Transport Layer Security (TLS) as used on the web over HTTPS.</p><p>Our primary candidates are an NTRU-based construction called HRSS-SXY (by <b>H</b>ülsing - <b>R</b>ijneveld - <b>S</b>chanck - <b>S</b>chwabe, and Tsunekazu <b>S</b>aito - Keita <b>X</b>agawa - Takashi <b>Y</b>amakawa) and an isogeny-based Supersingular Isogeny Key Encapsulation (SIKE). Both algorithms are described in more detail below, in the section "Dive into post-quantum cryptography". This table shows a few characteristics of both algorithms. Performance timings were obtained by running the BoringSSL speed test on an Intel Skylake CPU.</p><table> <tr> 
    <th>KEM</th> 
    <th>Public Key size (bytes)</th> 
    <th>Ciphertext (bytes)</th>
    <th>Secret size (bytes)</th> <th>KeyGen (op/sec)</th> <th>Encaps (op/sec)</th> <th>Decaps (op/sec)</th> <th>NIST level</th> </tr> 
<tr><td>HRSS-SXY</td> <td>1138</td> <td>1138</td> <td>32</td> <td>3952.3</td> <td>76034.7</td> <td>21905.8</td> <td>1</td> </tr> 
    
<tr> <td>SIKE/p434</td> <td>330</td> <td>346</td> <td>16</td> <td>367.1</td> <td>228.0</td> <td>209.3</td> <td>1</td> </tr> </table><p>Currently, the most commonly used key exchange algorithm (according to Cloudflare’s data) is the non-quantum X25519. Its public keys are 32 bytes; BoringSSL can generate 49301.2 key pairs and perform 19628.6 key agreements per second on the same Skylake CPU.</p><p>Note that HRSS-SXY shows a significant speed advantage, while SIKE has a size advantage. In our experiment, we will deploy these two algorithms on both the server side using Cloudflare’s infrastructure, and the client side using Chrome Canary; both sides will collect telemetry information about TLS handshakes using these two PQ algorithms to see how they perform in practice.</p>
    <div>
      <h3>What do we expect to find?</h3>
      <a href="#what-do-we-expect-to-find">
        
      </a>
    </div>
    <p>In 2018, Adam Langley conducted an <a href="https://www.imperialviolet.org/2018/04/11/pqconftls.html">experiment</a> with the goal of evaluating the likely latency impact of a post-quantum key exchange in TLS. Chrome was augmented with the ability to include a dummy, arbitrarily-sized extension in the TLS ClientHello (a fixed number of bytes of random noise). After taking into account the performance and key size offered by different types of key-exchange schemes, he concluded that constructs based on structured lattices may be most suitable for future use in TLS.</p><p>However, Langley also observed a peculiar phenomenon: client connections measured at the 95th percentile had much higher latency than the median. This means that in those cases, isogeny-based systems may be a better choice. In the "Dive into post-quantum cryptography" section, we describe the difference between the isogeny-based SIKE and lattice-based NTRU cryptosystems.</p><p>In our experiment, we want to more thoroughly evaluate and ascribe root causes to these unexpected latency increases. We would particularly like to learn more about the characteristics of those networks: What causes increased latency? How does the performance cost of isogeny-based algorithms impact the TLS handshake? We want to answer key questions, like:</p><ul><li><p>What is a good ratio for speed-to-key size (or how much faster could SIKE get to achieve the client-perceived performance of HRSS)?</p></li><li><p>How do network middleboxes behave when clients use new PQ algorithms, and which networks have problematic middleboxes?</p></li><li><p>How do the different properties of client networks affect TLS performance with different PQ key exchanges? Can we identify specific autonomous systems, device configurations, or network configurations that favor one algorithm over another? How is performance affected in the long tail?</p></li></ul>
    <div>
      <h3>Experiment Design</h3>
      <a href="#experiment-design">
        
      </a>
    </div>
    <p>Our experiment will involve both server- and client-side performance statistics collection from real users around the world (all the data is anonymized). Cloudflare is operating the server-side TLS connections. We will enable the <a href="https://www.imperialviolet.org/2018/12/12/cecpq2.html">CECPQ2</a> (HRSS + X25519) and <a href="https://tools.ietf.org/html/draft-kiefer-tls-ecdhe-sidh-00">CECPQ2b</a> (SIKE + X25519) key-agreement algorithms on all TLS-terminating edge servers.</p><p>In this experiment, the ClientHello will contain a CECPQ2 or CECPQ2b public key (but never both). Additionally, Chrome will always include X25519 for servers that do not support post-quantum key exchange. The post-quantum key exchange will only be negotiated in TLS version 1.3 when both sides support it.</p><p>Since Cloudflare only measures the server side of the connection, it is impossible to determine the time it takes for a ClientHello sent from Chrome to reach Cloudflare’s edge servers; however, we can measure the time it takes for the TLS ServerHello message containing the post-quantum key exchange to reach the client and for the client to respond.</p><p>On the client side, Chrome Canary will operate the TLS connection. Google will enable either CECPQ2 or CECPQ2b in Chrome for the following mix of architectures and OSes:</p><ul><li><p>x86-64: Windows, Linux, macOS, ChromeOS</p></li><li><p>aarch64: Android</p></li></ul><p>Our high-level expectation is to get similar results as Langley’s original experiment in 2018 — slightly increased latency for the 50th percentile and higher latency for the 95th. Unfortunately, data collected purely from real users’ connections may not suffice for diagnosing the root causes of why some clients experience excessive slowdown.
To this end, we will perform follow-up experiments based on per-client information we collect server-side.</p><p>Our primary hypothesis is that excessive slowdowns, like those Langley observed, are largely due to in-network events, such as middleboxes or bloated/lossy links. As a first-pass analysis, we will investigate whether the slowed-down clients share common network features, like common ASes, common transit networks, common link types, and so on. To determine this, we will run a traceroute from vantage points close to our servers back toward the clients (not overloading any particular links or hosts) and study whether some client locations are subject to slowdowns for all destinations or just for some.</p>
    <div>
      <h3>Dive into post-quantum cryptography</h3>
      <a href="#dive-into-post-quantum-cryptography">
        
      </a>
    </div>
    <p>Be warned: the details of PQ cryptography may be quite complicated. In some cases it builds on classical cryptography, and in other cases it is completely different math. Describing all the details in a single blog post would be rather hard, so instead we aim to give you an intuition for post-quantum cryptography rather than deep, academic-level descriptions. We’re skipping a lot of details for the sake of brevity. Nevertheless, settle in for a bit of an epic journey because we have a lot to cover.</p>
    <div>
      <h3>Key encapsulation mechanism</h3>
      <a href="#key-encapsulation-mechanism">
        
      </a>
    </div>
    <p>NIST requires that all key-agreement algorithms have a form of key-encapsulation mechanism (KEM). A KEM is a simplified form of public key encryption (PKE). Like PKE, it allows agreement on a secret, but in a slightly different way: the session key is an output of the encryption algorithm, whereas in public key encryption schemes the session key is an input to the algorithm. In a KEM, Alice generates a random key and uses the pre-generated public key from Bob to encrypt (encapsulate) it. This results in a ciphertext sent to Bob. Bob uses his private key to decrypt (decapsulate) the ciphertext and retrieve the random key. The idea was initially introduced by <a href="https://eprint.iacr.org/2001/108">Cramer and Shoup</a>. Experience shows that such constructs are easier to design, analyze, and implement, as the scheme is limited to communicating a fixed-size session key. Leonardo da Vinci said, “Simplicity is the ultimate sophistication,” which is very true in cryptography.</p><p>The key exchange (KEX) protocol, like <a href="https://en.wikipedia.org/wiki/Diffie%E2%80%93Hellman_key_exchange">Diffie-Hellman</a>, is yet a different construct: it allows two parties to agree on a shared secret that can be used as a symmetric encryption key. For example, Alice generates a key pair and sends a public key to Bob. Bob does the same and uses his own key pair with Alice’s public key to generate the shared secret. He then sends his public key to Alice, who can now generate the same shared secret. What’s worth noticing is that both Alice and Bob perform exactly the same operations.</p><p>A KEM construction can be converted into a KEX. Alice performs key generation and sends the public key to Bob. Bob uses it to encapsulate a symmetric session key and sends it back to Alice. Alice decapsulates the ciphertext received from Bob and gets the symmetric key. 
This is actually what we do in our experiment to make integration with the TLS protocol less complicated.</p>
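<p>To make this conversion concrete, here is a minimal sketch of the KEM-to-KEX flow. The toy KEM below is an ElGamal-style construction over a small prime field; it is purely illustrative (not post-quantum and not secure), and all parameter choices are made up:</p>

```python
import hashlib
import secrets

# Hypothetical toy parameters: a small Mersenne prime and a generator.
P = 2**61 - 1
G = 3

def keygen():
    """Alice: generate a key pair (private exponent, public value)."""
    sk = secrets.randbelow(P - 2) + 1
    return sk, pow(G, sk, P)

def encapsulate(pk):
    """Bob: derive a random session key and encapsulate it for Alice.
    Note the session key is an *output* of the algorithm, not an input."""
    r = secrets.randbelow(P - 2) + 1
    ct = pow(G, r, P)  # ciphertext sent back to Alice
    ss = hashlib.sha256(pow(pk, r, P).to_bytes(8, "big")).digest()
    return ct, ss

def decapsulate(sk, ct):
    """Alice: recover the same session key from the ciphertext."""
    return hashlib.sha256(pow(ct, sk, P).to_bytes(8, "big")).digest()

# KEX built from the KEM: Alice sends pk, Bob answers with ct,
# and both ends now hold the same 32-byte symmetric key.
sk, pk = keygen()
ct, bob_key = encapsulate(pk)
alice_key = decapsulate(sk, ct)
assert alice_key == bob_key
```

<p>The two message flights (public key out, ciphertext back) are exactly the shape that fits a TLS key share exchange.</p>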
    <div>
      <h3>NTRU Lattice-based Encryption  </h3>
      <a href="#ntru-lattice-based-encryption">
        
      </a>
    </div>
    <p>We will enable CECPQ2, implemented by Adam Langley from Google, on our servers. He described this implementation in detail <a href="https://www.imperialviolet.org/2018/12/12/cecpq2.html">here</a>. This key exchange uses the HRSS algorithm, which is based on the NTRU (<b>N</b>-th Degree <b>TRU</b>ncated Polynomial Ring) algorithm. Forgoing too much detail, I am going to explain how NTRU works, give simplified examples, and finally compare it to HRSS.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3mw4chMg1bmziLfBeRywAo/f8353de8b49dee0704f5a813c4609230/polynomial-wheel_3x-1.png" />
            
            </figure><p>NTRU is a cryptosystem based on a polynomial ring. This means that we do not operate on numbers modulo a prime (like in RSA), but on polynomials of degree less than \( N \), where the <i>degree</i> of a polynomial is the highest exponent of its variable. For example, \( x^7 + 6x^3 + 11x^2 \) has degree 7.</p><p>One can add polynomials in the ring in the usual way, by simply adding their coefficients modulo some integer. In NTRU this integer is called \( q \). Polynomials can also be multiplied, but remember, you are operating in the ring, therefore the result of a multiplication is always a polynomial of degree less than \(N\). This basically means that the exponents of the resulting polynomial wrap around modulo \(N\).</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4fjc31dekJkPOBmLSlnJPS/de486d978d6740055c65deadd39ed3b2/1feQoHE5ePfzDfdzw01q3eQDZz6gCs6znHlzj24ThULscNMsvS6kdmoN0diX9cGbcpcB6T7WbUk5v1vdfLB1vSWizK3LFpW3b_4lNY6i-BwIEAtVdp1wCz-D85gC.png" />
            
            </figure><p>In other words, polynomial ring arithmetic is very similar to <a href="https://betterexplained.com/articles/fun-with-modular-arithmetic/">modular arithmetic</a>, but instead of working with a set of numbers less than <i>N</i>, you are working with a set of polynomials with a degree less than <i>N</i>.</p><p>To instantiate the NTRU cryptosystem, three domain parameters must be chosen:</p><ul><li><p>\(N\) - degree of the polynomial ring; in NTRU the principal objects are polynomials of degree \(N-1\).</p></li><li><p>\(p\) - small modulus used during key generation and decryption for reducing message coefficients.</p></li><li><p>\(q\) - large modulus used during algorithm execution for reducing coefficients of the polynomials.</p></li></ul><p>First, we generate a pair of public and private keys. To do that, two polynomials \(f\) and \(g\) are chosen from the ring in a way that their randomly generated coefficients are much smaller than \(q\). Then key generation computes two inverses of the polynomial: $$ f_p= f^{-1} \bmod{p}   \\  f_q= f^{-1} \bmod{q} $$</p><p>The last step is to compute $$ pk = p\cdot f_q\cdot g \bmod q $$, which we will use as the public key <i>pk</i>. The private key consists of \(f\) and \(f_p\). The polynomial \(f_q\) is not part of any key; however, it must remain secret.</p><p>It might be the case that after choosing \(f\), the inverses modulo \(p\) and \( q \) do not exist. In this case, the algorithm has to start from the beginning and generate another \(f\). That’s unfortunate because calculating the inverse of a polynomial is a costly operation. HRSS brings an improvement here, since it ensures that those inverses always exist, making key generation faster than in the original NTRU.</p><p>The encryption of a message \(m\) proceeds as follows. First, the message \(m\) is converted to a ring element \(pt\) (there exists an algorithm for performing this conversion in both directions). 
During encryption, NTRU randomly chooses one polynomial \(b\) called a <i>blinder</i>. The goal of the blinder is to generate different ciphertexts per encryption. Thus, the ciphertext \(ct\) is obtained as $$ ct = (b\cdot pk + pt ) \bmod q $$ Decryption looks a bit more complicated, but it can also be easily understood. Decryption uses both secret values \(f\) and \(f_p\) to recover the plaintext as $$ v =  f \cdot ct \bmod q \\ pt = v \cdot f_p \bmod p $$</p><p>This diagram demonstrates why and how decryption works.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3m4bNrzq0NQFYpTObXuHYj/631873550a519f0bb4552f3343db00af/image-24.png" />
            
            </figure><p>Step-by-step correctness of the decryption procedure.</p><p>After obtaining \(pt\), the message \(m\) is recovered by inverting the conversion function.</p><p>The underlying hard assumption is that, given two polynomials \(f\) and \(g\) whose coefficients are short compared to the modulus \(q\), it is difficult to distinguish \(pk = \frac{f}{g} \) from a random element in the ring. This means that it is hard to find \(f\) and \(g\) given only the public key <i>pk</i>.</p>
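<p>The ring arithmetic described above can be sketched in a few lines. The parameters \(N\) and \(q\) below are illustrative toy values, far smaller than real NTRU sizes:</p>

```python
# Toy arithmetic in an NTRU-style ring: polynomials of degree < N,
# exponents wrap around modulo N, coefficients are reduced modulo q.
N, q = 7, 41

def ring_add(a, b):
    """Add two polynomials coefficient-wise modulo q."""
    return [(x + y) % q for x, y in zip(a, b)]

def ring_mul(a, b):
    """Cyclic-convolution multiplication: the exponent (i + j) wraps modulo N."""
    c = [0] * N
    for i in range(N):
        for j in range(N):
            c[(i + j) % N] = (c[(i + j) % N] + a[i] * b[j]) % q
    return c

# Polynomials as coefficient lists, lowest degree first.
f = [1, 0, 2, 0, 0, 0, 0]   # 1 + 2x^2
g = [0, 3, 0, 0, 0, 0, 1]   # 3x + x^6
print(ring_mul(f, g))       # → [0, 5, 0, 6, 0, 0, 1], i.e. 5x + 6x^3 + x^6
```

<p>Note how the \(2x^2 \cdot x^6 = 2x^8\) term wraps around to \(2x\), which is exactly the "exponents modulo \(N\)" behavior described earlier.</p>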
    <div>
      <h3>Lattices</h3>
      <a href="#lattices">
        
      </a>
    </div>
    <p>The NTRU cryptosystem is the grandfather of lattice-based encryption schemes. The idea of using difficult lattice problems for cryptographic purposes was due to <a href="http://citeseerx.ist.psu.edu/viewdoc/download;jsessionid=144BBB9F0E87EF0D471151F0EACC7DB8?doi=10.1.1.40.2489&amp;rep=rep1&amp;type=pdf">Ajtai</a>. His work evolved into a whole area of research with the goal of creating more practical, lattice-based cryptosystems.</p>
    <div>
      <h3>What is a lattice and why it can be used for post-quantum crypto?</h3>
      <a href="#what-is-a-lattice-and-why-it-can-be-used-for-post-quantum-crypto">
        
      </a>
    </div>
    <p>The picture below visualizes a lattice as points in a two-dimensional space. A lattice is defined by the origin \(O\) and base vectors \( \{ b_1 , b_2\} \). Every point on the lattice is represented as a linear combination of the base vectors, for example \(V = -2b_1+b_2\).</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/40Fg69tL7QYjTIzHk5s0z2/d6807ffa9ba6a103c4d8d1ac845c2f14/pasted-image-0--1--3.png" />
            
            </figure><p>There are two classical NP-hard problems in lattice-based cryptography:</p><ol><li><p><b>Shortest Vector Problem</b> (SVP): Given a lattice, find the shortest non-zero vector in it. In the graph, the vector \(s\) is the shortest one. The SVP problem is NP-hard only under some assumptions.</p></li><li><p><b>Closest Vector Problem</b> (CVP): Given a lattice and a vector \(V\) (not necessarily in the lattice), find the lattice vector closest to \(V\). For example, the closest vector to \(t\) is \(z\).</p></li></ol><p>In the graph above, it is easy for us to solve SVP and CVP by simple inspection. However, the lattices used in cryptography have much higher dimensions, say above 1000, as well as highly non-orthogonal basis vectors. On such instances, the problems become extremely hard to solve, and they are believed to be hard even for future quantum computers.</p>
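<p>For intuition, the two-dimensional case can be attacked by brute force. The basis below is a made-up example; in the high-dimensional lattices used in cryptography this kind of search becomes infeasible:</p>

```python
from itertools import product
from math import hypot

# A deliberately non-orthogonal (hypothetical) basis.
b1, b2 = (201, 37), (1648, 297)

# Enumerate small integer combinations k1*b1 + k2*b2 and keep the
# shortest non-zero vector found -- a brute-force stab at SVP.
best, best_len = None, float("inf")
for k1, k2 in product(range(-50, 51), repeat=2):
    if (k1, k2) == (0, 0):
        continue
    v = (k1 * b1[0] + k2 * b2[0], k1 * b1[1] + k2 * b2[1])
    n = hypot(*v)
    if n < best_len:
        best, best_len = v, n

print(best)   # a far shorter vector than either basis vector suggests
```

<p>With this basis, the search finds a vector of length about 32, even though both basis vectors are hundreds of units long; with a thousand dimensions, no efficient way to find such vectors is known, even on a quantum computer.</p>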
    <div>
      <h3>NTRU vs HRSS</h3>
      <a href="#ntru-vs-hrss">
        
      </a>
    </div>
    <p>HRSS, which we use in our experiment, is based on NTRU but is a slightly better instantiation. The main improvements are:</p><ul><li><p>A faster key generation algorithm.</p></li><li><p>NTRU encryption can produce ciphertexts that are impossible to decrypt (true for many lattice-based schemes); HRSS fixes this problem.</p></li><li><p>HRSS is a key encapsulation mechanism.</p></li></ul>
    <div>
      <h3>CECPQ2b - Isogeny-based Post-Quantum TLS</h3>
      <a href="#cecpq2b-isogeny-based-post-quantum-tls">
        
      </a>
    </div>
    <p>Following CECPQ2, we have integrated into BoringSSL another hybrid key exchange mechanism relying on SIKE. It is called CECPQ2b, and we will use it in our experimentation with TLS 1.3. <a href="https://sike.org">SIKE</a> is a key encapsulation mechanism based on Supersingular Isogeny Diffie-Hellman (SIDH). Read more about <a href="/sidh-go/">SIDH</a> in our previous post. The math behind SIDH is related to elliptic curves, and a comparison between SIDH and the classical Elliptic Curve Diffie-Hellman (ECDH) is given below.</p><p>An elliptic curve is a set of points that satisfy a specific mathematical equation. The equation of an elliptic curve may have multiple forms; the standard form is called the <i>Weierstrass</i> equation $$ y^2 = x^3 +ax +b  $$ and its shape can look like the red curve.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/26PHEagKtiaocOpyT6e85j/4e0b5a3e2f40bdf6fa77b4a14b44b88d/pasted-image-0--2--3.png" />
            
            </figure><p>An interesting fact about elliptic curves is that they have a group structure. That is, the set of points on the curve has an associated binary operation called <i>point addition</i>. The set of points on the elliptic curve is closed under addition. Thus, adding two points results in another point that is also on the elliptic curve.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/65C2RgCyQ7TG84QUa4kYkd/14197ecc324cb64112614ec33fe06ba1/ecc.gif" />
            
            </figure><p>If we can add two different points on a curve, then we can also add one point to itself. And if we do it multiple times, then the resulting operation is known as <i>scalar multiplication</i> and denoted as <i>\(Q = k\cdot P = P+P+\dots+P\)</i> for an integer \(k\).</p><p>Multiplication of scalars is <i>commutative</i>. This means that two scalar multiplications can be evaluated in any order \( \color{darkred}{k_a}\cdot\color{darkgreen}{k_b} =   \color{darkgreen}{k_b}\cdot\color{darkred}{k_a} \); this is an important property that makes ECDH possible.</p><p>It turns out that if the elliptic curve is chosen carefully, scalar multiplication is easy to compute but extremely hard to reverse. Meaning, given two points \(Q\) and \(P\) such that \(Q=k\cdot P\), finding the integer \(k\) is a difficult task known as the Elliptic Curve Discrete Logarithm Problem (ECDLP). This problem is suitable for cryptographic purposes.</p><p>Alice and Bob agree on a secret key as follows. Alice generates a private key \( k_a\). Then, she uses some publicly known point \(P\) and calculates her public key as \( Q_a = k_a\cdot P\). Bob proceeds in a similar fashion and gets \(k_b\) and \(Q_b = k_b\cdot P\). To agree on a shared secret, each party multiplies their private key with the public key of the other party. The result of this is the shared secret. Key agreement as described above works thanks to the fact that scalars commute:$$  \color{darkgreen}{k_a} \cdot Q_b = \color{darkgreen}{k_a} \cdot  \color{darkred}{k_b} \cdot P \iff \color{darkred}{k_b} \cdot \color{darkgreen}{k_a} \cdot P = \color{darkred}{k_b} \cdot Q_a $$</p><p>There is a vast theory behind elliptic curves. An introduction to <a href="/a-relatively-easy-to-understand-primer-on-elliptic-curve-cryptography/">elliptic curve cryptography</a> was posted before, and more details can be found in this <a href="https://doi.org/10.1007/b97644">book</a>. 
Now, let’s describe SIDH and compare it with ECDH.</p>
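<p>The commutativity that ECDH relies on can be checked with a toy curve. The curve, base point, and scalars below are made up and far too small to be secure; this is a sketch, not a real implementation:</p>

```python
# Toy ECDH over y^2 = x^3 + 2x + 3 (mod 97) -- illustrative only.
P_MOD, A = 97, 2

def ec_add(p, q):
    """Point addition; None represents the point at infinity."""
    if p is None:
        return q
    if q is None:
        return p
    (x1, y1), (x2, y2) = p, q
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return None                              # p + (-p) = infinity
    if p == q:                                   # tangent slope for doubling
        s = (3 * x1 * x1 + A) * pow(2 * y1, -1, P_MOD) % P_MOD
    else:                                        # chord slope for addition
        s = (y2 - y1) * pow(x2 - x1, -1, P_MOD) % P_MOD
    x3 = (s * s - x1 - x2) % P_MOD
    return (x3, (s * (x1 - x3) - y1) % P_MOD)

def scalar_mul(k, p):
    """Double-and-add evaluation of k*P = P + P + ... + P."""
    acc = None
    while k:
        if k & 1:
            acc = ec_add(acc, p)
        p = ec_add(p, p)
        k >>= 1
    return acc

G = (3, 6)        # a point on the curve: 6^2 = 3^3 + 2*3 + 3 = 36 (mod 97)
ka, kb = 7, 13    # Alice's and Bob's private scalars
Qa, Qb = scalar_mul(ka, G), scalar_mul(kb, G)   # exchanged public keys
# Commutativity of scalar multiplication gives both sides the same secret:
assert scalar_mul(ka, Qb) == scalar_mul(kb, Qa)
```
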
    <div>
      <h3>Isogenies on Elliptic Curves</h3>
      <a href="#isogenies-on-elliptic-curves">
        
      </a>
    </div>
    <p>Before explaining the details of the SIDH key exchange, I’ll explain the three most important concepts, namely: the <b><i>j-invariant</i></b>, the <b><i>isogeny</i></b> and its <b><i>kernel</i></b>.</p><p>Each curve has a number that can be associated with it. Let’s call this number a <b><i>j-invariant</i></b>. This number is not unique per curve, meaning many curves have the same value of <b><i>j-invariant</i></b>, but it can be viewed as a way to group multiple elliptic curves into disjoint sets. We say that two curves are <b><i>isomorphic</i></b> if they are in the same set, called the <i>isomorphism class</i>. The j-invariant is a simple criterion to determine whether two curves are isomorphic. The j-invariant of a curve \(E\) in Weierstrass form \( y^2 = x^3 + ax + b\) is given as $$ j(E) = 1728\frac{4a^3}{4a^3 +27b^2} $$</p><p>When it comes to an <b><i>isogeny</i></b>, think of it as a map between two curves. Each point on some curve \( E \) is mapped by the isogeny to a point on the isogenous curve \( E' \). We denote the mapping from curve \( E \) to \( E' \) by an isogeny \( \phi \) as:</p><p>$$\phi: E \rightarrow E' $$</p><p>Whether those two curves are isomorphic or not depends on the map. An isogeny can be visualized as:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4RSE4DdJxSwP5IspNUpc8D/689054d8939ceb6af436869a3dc352fb/pasted-image-0--3--2.png" />
            
            </figure><p>There may exist many of those mappings; each curve used in SIDH has a small number of isogenies to other curves. A natural question is how to compute such an isogeny. This is where the <b><i>kernel</i></b> of an isogeny comes in. The <b><i>kernel</i></b> uniquely determines an isogeny (up to the <i>isomorphism class</i>). Formulas for calculating an isogeny from its kernel were initially <a href="https://www.researchgate.net/publication/246557704_Isogenies_entre_courbes_elliptiques">given by J. Vélu</a>, and the idea of calculating them efficiently was later <a href="https://eprint.iacr.org/2017/504.pdf">extended</a>.</p><p>To finish, I will summarize what was said above with a picture.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/240s5FQy6PHpVMswhLAnAg/8af4f4386706c1abf312a3c58f429892/pasted-image-0--4--2.png" />
            
            </figure><p>There are two <b>isomorphism classes</b> in the picture above. Both curves \(E_1\) and \(E_2\) are <b>isomorphic</b> and have j-invariant = 6. As curves \(E_3\) and \(E_4\) have j-invariant = 13, they are in a different isomorphism class. There exists an <b>isogeny</b> \(\phi_2\) between curves \(E_3\) and \(E_2\), so they are <b>isogenous</b>. Curves \( E_1 \) and \( E_2 \) are isomorphic, and there is an isogeny \( \phi_1 \) between them. Curves \( E_1\) and \(E_4\) are not isomorphic.</p><p>For brevity I’m skipping many important details, like details of the <i>finite field</i>, the fact that isogenies must be <i>separable</i>, and that the kernel is <i>finite</i>. But curious readers can find a number of academic research papers available on the Internet.</p>
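<p>The j-invariant formula above is easy to evaluate directly. The sketch below works over the rationals for readability; the curves used in SIDH live over a finite field, where the division becomes a modular inversion:</p>

```python
from fractions import Fraction

def j_invariant(a, b):
    """j-invariant of the Weierstrass curve y^2 = x^3 + a*x + b."""
    a, b = Fraction(a), Fraction(b)
    return 1728 * (4 * a**3) / (4 * a**3 + 27 * b**2)

print(j_invariant(1, 0))   # → 1728, for the curve y^2 = x^3 + x
print(j_invariant(0, 1))   # → 0, for the curve y^2 = x^3 + 1
```

<p>Two curves with equal j-invariants belong to the same isomorphism class.</p>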
    <div>
      <h3>Big picture: similarities with ECDH</h3>
      <a href="#big-picture-similarities-with-ecdh">
        
      </a>
    </div>
    <p>Let’s generalize the ECDH algorithm described above, so that we can swap some elements and try to use Supersingular Isogeny Diffie-Hellman.</p><p>Note that what actually happens during an ECDH key exchange is:</p><ul><li><p>We have a set of points on an elliptic curve, set <i>S</i></p></li><li><p>We have another group of integers used for point multiplication, <i>G</i></p></li><li><p>We use an element from <i>G</i> to act on an element from <i>S</i> to get another element from <i>S</i>:</p></li></ul><p>$$ G \cdot S \rightarrow S $$</p><p>Now the question is: what are our <i>G</i> and <i>S</i> in the SIDH setting? For SIDH to work, we need a big set of elements and something secret that will act on the elements from that set. This “group action” must also be resistant to attacks performed by quantum computers.</p><p>In the SIDH setting, those two sets are defined as:</p><ul><li><p>Set <i>S</i> is a set (graph) of j-invariants, such that all the curves are supersingular: \( S = [j(E_1), j(E_2), j(E_3), .... , j(E_n)]\)</p></li><li><p>Set <i>G</i> is a set of isogenies acting on elliptic curves and transforming, for example, the elliptic curve \(E_1\) into \(E_n\):</p></li></ul>
    <div>
      <h3>Random walk on supersingular graph</h3>
      <a href="#random-walk-on-supersingular-graph">
        
      </a>
    </div>
    <p>When we talk about <i>Isogeny Based Cryptography</i>, as a topic distinct from <i>Elliptic Curve Cryptography</i>, we usually mean algorithms and protocols that rely fundamentally on the structure of isogeny graphs. An example of such a (small) graph is pictured below.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3VatBy9An0gtSVOCUZBdjG/33747b7af909ae94861f7c30e981b875/isogeny-based-crypt0-1-.gif" />
            
            </figure><p>Animation based on Chloe Martindale’s <a href="https://2017.pqcrypto.org/school/slides/Isogeny_based_crypto.pdf">slide deck</a>.</p><p>Each vertex of the graph represents a different j-invariant of a set of supersingular curves. The edges between vertices represent isogenies converting one elliptic curve to another. As you can see, the graph is strongly connected, meaning every vertex can be reached from every other vertex. In the context of isogeny-based crypto, we call such a graph a <a href="https://en.wikipedia.org/wiki/Supersingular_isogeny_graph"><i>supersingular isogeny graph</i></a>. I’ll skip some technical details about the construction of this graph (look for those <a href="https://eprint.iacr.org/2011/506.pdf">here</a> or <a href="http://iml.univ-mrs.fr/~kohel/pub/thesis.pdf">here</a>), and instead describe ideas about how it can be used.</p><p>As the graph is strongly connected, it is possible to <i>walk</i> the whole graph by starting from any vertex, randomly choosing an edge, following it to the next vertex, and then starting the process again from the new vertex. Such a way of visiting the edges of this graph is called a <i>random walk.</i></p><p>The random walk is a key concept that makes isogeny-based crypto feasible. When you look closely at the graph, you can notice that each vertex has only a small number of edges incident to it; this is why we can compute the isogenies efficiently. But it also means that from any vertex there is only a limited number of isogenies to choose from, which doesn’t look like a good basis for a cryptographic scheme. So where exactly does the security of the scheme come from? To get it, it is necessary to visit a couple hundred vertices: in practice, the secret isogeny (of <i>large degree</i>) is constructed as a composition of multiple isogenies (of <i>small, prime degree</i>). In other words, the secret isogeny is:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7pTiHpzM9xu84YbuXZozv8/40cb2052e3850cbc622f8569ac4d5ab2/CtdsWhxZs8vr7nj6LJCQhh4Mu3mP5VF5U2ayMvqZ_LVXIRBsVuduM2QwTsuy8V1izo53I5JcAFc0z6eA1LYP4FsviY4g_fqiDJc2-s8vZE_eHQGp6aQQrlhsdDDu.png" />
            
            </figure><p>This property, together with the properties of the isogeny graph, is <b>what makes</b> some of us believe that the <b>scheme</b> has a good chance to be <b>secure</b>. More specifically, there is no known efficient way of finding a path that connects \( E_0 \) with \( E_n \), even with a quantum computer at hand. The security level of the system depends on the value <i>n</i> - the number of steps taken during the walk.</p><p>The random walk is a core process used both when generating public keys and when computing shared secrets. It starts with a party generating a random value <i>m</i> (see more below), a starting curve \(E_0\), and points P and Q on this curve. Those values are used to compute the kernel generator \( R_1 \) of an isogeny in the following way:</p><p>$$ R_1 = P + m \cdot Q $$</p><p>Thanks to formulas given by <a href="https://www.researchgate.net/publication/246557704_Isogenies_entre_courbes_elliptiques">Vélu</a> we can now use the point \( R_1 \) to compute the isogeny the party uses to move from one vertex to another. After the isogeny \( \phi_{R_1} \) is calculated, it is applied to \( E_0 \), which results in a new curve \( E_1 \):</p><p>$$ \phi_{R_1}: E_0 \rightarrow E_1 $$</p><p>The isogeny is also applied to the points P and Q. Once on \( E_1 \), the process is repeated. This process is applied <i>n</i> times, and at the end the party ends up on some curve \( E_n \), which defines an isomorphism class and hence a j-invariant.</p>
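<p>Real implementations compute each small-degree step with Vélu’s formulas; stripping all the elliptic-curve machinery away, the walk itself can be sketched as a purely structural toy. The graph below is an arbitrary 4-vertex stand-in (not a real isogeny graph), and the “isogeny choice” at each step is just a digit of the secret <i>m</i>:</p>

```go
package main

import "fmt"

// walk performs a deterministic "random walk": at step i the secret m
// selects one of the edges leaving the current vertex. In real SIDH each
// step is a small-prime-degree isogeny computed with Vélu's formulas;
// here the graph is just an arbitrary 3-regular toy.
func walk(edges map[int][]int, start int, m uint64, steps int) int {
	v := start
	for i := 0; i < steps; i++ {
		deg := uint64(len(edges[v]))
		choice := int(m % deg) // low "digit" of the secret picks the edge
		m /= deg
		v = edges[v][choice]
	}
	return v
}

func main() {
	// A tiny 3-regular graph on 4 vertices (every vertex connects to the others).
	edges := map[int][]int{
		0: {1, 2, 3}, 1: {0, 2, 3},
		2: {0, 1, 3}, 3: {0, 1, 2},
	}
	// The composed secret corresponds to the whole path; only its endpoint
	// (here a vertex, in SIDH the curve E_n) is ever published.
	fmt.Println(walk(edges, 0, 27, 6))
}
```

<p>The point of the toy is the asymmetry: taking any single step is cheap, but recovering the full path from only its endpoint is the hard problem the scheme leans on (with <i>n</i> in the hundreds rather than 6).</p>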
    <div>
      <h3>Supersingular Isogeny Diffie-Hellman</h3>
      <a href="#supersingular-isogeny-diffie-hellman">
        
      </a>
    </div>
    <p>The core idea in SIDH is to compose two random walks on an isogeny graph of elliptic curves in such a way that the end node of both compositions is the same.</p><p>To do this, the scheme fixes public parameters - a starting curve \( E_0 \) and 2 pairs of base points on this curve, <i>\( (PA,QA) \)</i> and <i>\( (PB,QB) \)</i>. Alice generates her random secret key <i>m</i> and calculates a secret isogeny \( \phi_a \) by performing a <i>random walk</i> as described above. The walk finishes with 3 values: the elliptic curve \( E_a \) she has ended up on, and the pair of points \( \phi_a(PB) \) and \( \phi_a(QB) \) - the images of Bob’s base points under her secret isogeny. Bob proceeds analogously, which results in the triple \( {E_b, \phi_b(PA), \phi_b(QA)} \). Each triple forms a public key, and the two are exchanged between the parties.</p><p>The picture below visualizes the operation. The black dots represent curves grouped into <i>isomorphism classes</i>, represented by the light blue circles. Alice takes the orange path, ending up on a curve \( E_a \) in a different isomorphism class than Bob, who takes the dark blue path ending on \( E_b \). SIDH is parametrized in such a way that Alice and Bob will always end up in different isomorphism classes.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4coOPO3i8W2RxZpT8aS4Th/8a7f4006dec8c675ef7c81ffb2702143/pasted-image-0--5--2.png" />
            
            </figure><p>Upon receipt of the triple \( { E_a, \phi_a(PB), \phi_a(QB) } \) from Alice, Bob will use his secret value <i>m</i> to calculate a new kernel - but instead of using the points \(PB\) and \(QB\) to calculate an isogeny kernel, he will now use the images \( \phi_a(PB) \) and \( \phi_a(QB) \) received from Alice:</p><p>$$ R’_1 = \phi_a(PB) + m \cdot \phi_a(QB) $$</p><p>Afterwards, he uses \( R’_1 \) to start the walk again, resulting in the isogeny \( \phi’_b: E_a \rightarrow E_{ab} \). Alice proceeds analogously, resulting in the isogeny \(\phi’_a: E_b \rightarrow E_{ba} \). With isogenies calculated this way, both Alice and Bob will converge in the same isomorphism class. The math may seem complicated; hopefully the picture below makes it easier to understand.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2o7VmACLF5SYhrydaRyBRo/49b090ed60c4abdccbd6df456f833147/pasted-image-0--6--2.png" />
            
            </figure><p>Bob computes a new isogeny and starts his random walk from \( E_a \) received from Alice. He ends up on some curve \(E_{ba}\). Similarly, Alice calculates a new isogeny, applies it to \( E_b \) received from Bob, and her random walk ends on some curve \(E_{ab}\). Curves \(E_{ab}\) and \(E_{ba}\) are not likely to be the same, but the construction guarantees that they are <i>isomorphic</i>. As mentioned earlier, isomorphic curves have the same value of j-invariant, hence the shared secret is the value of the j-invariant \(j(E_{ab})\).</p><p>Coming back to the differences between SIDH and ECDH - we can split them into four categories: the elements of the group we are operating on, the cornerstone computation required to agree on a shared secret, the elements representing secret values, and the difficult problem on which the security relies.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4JOy8niIRMJP7yIY0hAcoE/a842ac7d3b49ff90dea78b98940f3cde/pasted-image-0--7--2.png" />
            
            </figure><p>Comparison based on Craig Costello’s <a href="http://www.craigcostello.com.au/wp-content/uploads/Craig-isogenies-tutorial.pdf">slide deck</a>.</p><p>In ECDH the secret key is an integer scalar; in SIDH it is a secret isogeny, which is also generated from an integer scalar. In ECDH one multiplies a point on a curve by a scalar; in SIDH one performs a random walk in an isogeny graph. In ECDH the public key is a point on a curve; in SIDH the public part is a curve itself, together with the images of some points under the isogeny. The shared secret in ECDH is a point on a curve; in SIDH it is a j-invariant.</p>
    <div>
      <h3>SIKE: Supersingular Isogeny Key Encapsulation</h3>
      <a href="#sike-supersingular-isogeny-key-encapsulation">
        
      </a>
    </div>
    <p>SIDH could potentially be used as a drop-in replacement for the ECDH protocol. We have actually implemented a proof-of-concept, added it to our implementation of TLS 1.3 in the <a href="http://github.com/cloudflare/tls-tris">tls-tris</a> library, and described the implementation details (together with Mozilla) in this <a href="https://tools.ietf.org/html/draft-kiefer-tls-ecdhe-sidh-00">draft</a>. Nevertheless, there is a problem with SIDH - the keys can be used only once. In 2016, a few researchers came up with an active <a href="https://eprint.iacr.org/2016/859">attack</a> on SIDH which works only when public keys are reused. In the context of TLS, this is not a big problem, because a fresh key pair is generated for each session (ephemeral keys), but it may not be true for other applications.</p><p>SIKE is an isogeny-based key encapsulation mechanism which solves this problem. Bob can generate SIKE keys, upload the public part somewhere on the Internet, and then anybody can use it whenever they want to communicate with Bob securely. SIKE reuses SIDH - internally, both sides of the connection always perform SIDH key generation and SIDH key agreement, and apply some other cryptographic primitives in order to convert SIDH into a KEM. SIKE is implemented in a few variants - each variant corresponds to a security level, using 128-, 192- or 256-bit secret keys. A higher security level means a longer running time. More details about SIKE can be found <a href="https://sike.org/">here</a>.</p><p>SIKE is also one of the candidates in the NIST post-quantum "<a href="https://csrc.nist.gov/Projects/Post-Quantum-Cryptography">competition</a>".</p><p>I’ve skipped many important details to give a brief description of how isogeny-based crypto works. 
If you’re curious and hungry for details, look at either of these Cloudflare <a href="https://www.youtube.com/watch?v=ctP24WKusX0">meetups</a>, where Deirdre Connolly talked about isogeny-based cryptography or this <a href="https://videos.2017.pqcrypto.org/school/#martindale1">talk</a> by Chloe Martindale during PQ Crypto School 2017. And if you would like to know more about quantum attacks on this scheme, I highly recommend <a href="https://eprint.iacr.org/2019/103.pdf">this</a> work.</p>
    <div>
      <h2>Conclusion</h2>
      <a href="#conclusion">
        
      </a>
    </div>
    <p>Quantum computers that can break meaningful cryptographic parameter settings do not exist yet, and won't be built for at least the next few years. Nevertheless, they have already changed the way we look at current cryptographic deployments. There are at least two reasons it’s worth investing in PQ cryptography:</p><ul><li><p>It takes a lot of time to build secure cryptography, and we don’t actually know when today’s classical cryptography will be broken. There is a need for a good mathematical base: an initial idea of what may be secure against something that doesn't exist yet. If you have an idea, you also need a good implementation: constant time, resistance to things like <a href="https://www.paulkocher.com/doc/TimingAttacks.pdf">timing</a> and <a href="https://www.youtube.com/watch?v=fLEjSU1a748">cache</a> <a href="https://en.wikipedia.org/wiki/Side-channel_attack">side-channels</a>, <a href="https://link.springer.com/content/pdf/10.1007%2FBFb0052259.pdf">DFA</a>, <a href="https://link.springer.com/content/pdf/10.1007%2Fs13389-011-0006-y.pdf">DPA</a>, <a href="https://en.wikipedia.org/wiki/Electromagnetic_attack">EM</a>, and a bunch of other abbreviations indicating <a href="https://csrc.nist.gov/csrc/media/events/physical-security-testing-workshop/documents/papers/physecpaper19.pdf">side-channel</a> resistance. There is also deployment to consider: algorithms based on elliptic curves, for example, were introduced in 1985, but only started to be widely used in production during the last decade, some 20 years later. Obviously, the implementation must be blazingly fast! Last, but not least, integration: we need time to develop standards to allow integration of PQ cryptography with protocols like TLS.</p></li><li><p>Even though efficient quantum computers probably won't exist for another few years, the threat is real. 
Data encrypted with current cryptographic algorithms can be recorded now with hopes of being broken in the future.</p></li></ul><p>Cloudflare is motivated to help build the Internet of tomorrow with the tools at hand today. Our interest is in cryptographic techniques that can be integrated into existing protocols and widely deployed on the Internet as seamlessly as possible. PQ cryptography, like the rest of cryptography, includes many cryptosystems that can be used for communications in today’s Internet; Alice and Bob need to perform some computation, but they do not need to buy new hardware to do that.</p><p>Cloudflare sees great potential in those algorithms and believes that some of them can be used as a safe replacement for classical public-key cryptosystems. Time will tell if we’re justified in this belief!</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6eUX7xZqv0SKXlOXdRzbVM/a72e8969c4e1f1e5c6e8a3fc665af342/crypto-week-2019-header-circle_2x-2.png" />
            
            </figure><p></p> ]]></content:encoded>
            <category><![CDATA[Crypto Week]]></category>
            <category><![CDATA[Cryptography]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[TLS]]></category>
            <category><![CDATA[Deep Dive]]></category>
            <guid isPermaLink="false">3mpA8KK37HTvhRuZrsBvUy</guid>
            <dc:creator>Kris Kwiatkowski</dc:creator>
        </item>
        <item>
            <title><![CDATA[Introducing CIRCL: An Advanced Cryptographic Library]]></title>
            <link>https://blog.cloudflare.com/introducing-circl/</link>
            <pubDate>Thu, 20 Jun 2019 13:00:00 GMT</pubDate>
            <description><![CDATA[ Today we are proud to release the source code of a cryptographic library we’ve been working on:  a collection of cryptographic primitives written in Go, called CIRCL.  ]]></description>
            <content:encoded><![CDATA[ 
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6b0YmeXVekCfEaADp0Kcp3/b0cbfa18736d7ffce6e5e2b6d04126da/circl-header_2x-1.png" />
            
            </figure><p>As part of <a href="/welcome-to-crypto-week-2019/">Crypto Week 2019</a>, today we are proud to release the source code of a cryptographic library we’ve been working on: a collection of cryptographic primitives written in Go, called <a href="http://github.com/cloudflare/circl">CIRCL</a>. This library includes a set of packages that target algorithms for post-quantum (PQ) cryptography, elliptic curve cryptography, and hash functions for prime groups. Our hope is that it’s useful for a broad audience. Get ready to discover how we made CIRCL unique.</p>
    <div>
      <h3>Cryptography in Go</h3>
      <a href="#cryptography-in-go">
        
      </a>
    </div>
    <p>We use Go a lot at Cloudflare. It offers a good balance between ease of use and performance; the learning curve is very light, and after a short time, any programmer can get good at writing fast, lightweight backend services. And thanks to the possibility of implementing performance-critical parts in <a href="https://golang.org/doc/asm">Go assembly</a>, we can try to ‘squeeze the machine’ and get every bit of performance.</p><p>Cloudflare’s cryptography team designs and maintains security-critical projects. It's not a secret that security is hard. That's why we are introducing the Cloudflare Interoperable Reusable Cryptographic Library - CIRCL. There are multiple goals behind CIRCL. First, we want to concentrate our efforts to implement cryptographic primitives in a single place. This makes it easier to ensure that proper engineering processes are followed. Second, Cloudflare is an active member of the Internet community - we are trying to improve and propose standards to help make the Internet a better place.</p><p>Cloudflare's mission is to help build a better Internet. For this reason, we want CIRCL to help the cryptographic community create proofs of concept, like the <a href="/towards-post-quantum-cryptography-in-TLS">post-quantum TLS experiments</a> we are doing. Over the years, lots of ideas have been put on the table by cryptographers (for example, homomorphic encryption, multi-party computation, and privacy-preserving constructions). Recently, we’ve seen those concepts picked up and exercised in a variety of contexts. CIRCL’s implementations of cryptographic primitives create a powerful toolbox for developers wishing to use them.</p><p>The Go language provides native packages for several well-known cryptographic algorithms, such as key agreement algorithms, hash functions, and digital signatures. 
There are also packages maintained by the community under <a href="http://golang.org/x/crypto"><i>golang.org/x/crypto</i></a> that provide a diverse set of algorithms for supporting <a href="https://en.wikipedia.org/wiki/Authenticated_encryption">authenticated encryption</a>, <a href="https://en.wikipedia.org/wiki/Stream_cipher">stream ciphers</a>, <a href="https://en.wikipedia.org/wiki/Key_derivation_function">key derivation functions</a>, and <a href="https://en.wikipedia.org/wiki/Pairing-based_cryptography">bilinear pairings</a>. CIRCL doesn’t try to compete with <a href="http://golang.org/x/crypto"><i>golang.org/x/crypto</i></a> in any sense. Our goal is to provide a complementary set of implementations that are more aggressively optimized, or may be less commonly used but have a good chance at being very useful in the future.</p>
    <div>
      <h3>Unboxing CIRCL</h3>
      <a href="#unboxing-circl">
        
      </a>
    </div>
    <p>Our cryptography team worked on a fresh proposal to augment the capabilities of Go users with a new set of packages. You can get them by typing:</p><p><code>$ go get github.com/cloudflare/circl</code></p><p>The contents of CIRCL are split across different categories, summarized in this table:</p><table>
  <tr>
    <th>Category</th>
    <th>Algorithms</th> 
    <th>Description</th> 
    <th>Applications</th>
  </tr>
  <tr>
    <td rowspan="2">Post-Quantum Cryptography</td>
    <td>SIDH</td> 
    <td>Isogeny-based cryptography. </td>
    <td>SIDH provides key exchange mechanisms using ephemeral keys. </td>
  </tr>
  <tr>
    <td>SIKE</td> 
    <td>SIKE is a key encapsulation mechanism (KEM).</td> 
    <td>Key agreement protocols.</td>
  </tr>
  <tr>
    <td rowspan="2">Key Exchange</td>
    <td>X25519, X448</td> 
    <td><a href="https://tools.ietf.org/html/rfc7748">RFC-7748</a> provides new key exchange mechanisms based on Montgomery elliptic curves.</td> 
    <td><a href="http://staging.blog.mrk.cfdata.org/introducing-tls-1-3/">TLS 1.3.</a> Secure Shell.</td>
  </tr>
  <tr>
    <td>FourQ</td> 
    <td>One of the fastest elliptic curves at 128-bit security level.</td> 
    <td>Experimental for <a href="https://tools.ietf.org/id/draft-ladd-cfrg-4q-01.html">key agreement</a> and <a href="https://www.microsoft.com/en-us/research/wp-content/uploads/2016/07/SchnorrQ.pdf"> digital signatures</a>.</td>
  </tr>
  <tr>
    <td>Digital Signatures</td>
    <td>Ed25519</td> 
    <td><a href="https://tools.ietf.org/html/rfc8032">RFC-8032</a> provides new digital signature algorithms based on twisted Edwards curves.</td> 
    <td><a href="https://tools.ietf.org/html/rfc8410">Digital certificates</a> and authentication methods.</td>
  </tr>
  <tr>
    <td>Hash to Elliptic Curve Groups</td>
    <td>Several algorithms: Elligator2, Ristretto, SWU, Icart.</td> 
    <td>Protocols based on elliptic curves require hash functions that map bit strings to points on an elliptic curve. </td> 
    <td>Useful in protocols such as <a href="http://staging.blog.mrk.cfdata.org/privacy-pass-the-math/">Privacy Pass.</a> <a href="https://eprint.iacr.org/2018/163">OPAQUE.</a>
PAKE.
<a href="https://datatracker.ietf.org/doc/draft-irtf-cfrg-vrf/">Verifiable random functions.</a></td>
  </tr>
  <tr>
    <td>Optimization</td>
    <td>Curve P-384</td> 
    <td>Our optimizations reduce the burden when moving from P-256 to P-384.</td> 
    <td>ECDSA and ECDH using Suite B at top secret level.</td>
  </tr>
</table>
    <div>
      <h3>SIKE, a Post-Quantum Key Encapsulation Mechanism</h3>
      <a href="#sike-a-post-quantum-key-encapsulation-mechanism">
        
      </a>
    </div>
    <p>To better understand the post-quantum world, we started experimenting with post-quantum key exchange schemes and using them for key agreement in TLS 1.3. CIRCL contains the sidh <a href="https://github.com/cloudflare/circl/tree/master/dh/sidh">package</a>, an implementation of Supersingular Isogeny-based Diffie-Hellman (SIDH), as well as <a href="https://en.wikipedia.org/wiki/Ciphertext_indistinguishability">CCA2-secure</a> Supersingular Isogeny-based Key Encapsulation (SIKE), which is based on SIDH.</p><p>CIRCL makes playing with PQ key agreement very easy. Below is an example of the SIKE interface that can be used to establish a shared secret between two parties for use in symmetric encryption. The example uses a key encapsulation mechanism (KEM). For our example in this scheme, Alice generates a random secret key, and then uses Bob’s pre-generated public key to encrypt (encapsulate) it. The resulting ciphertext is sent to Bob. Then, Bob uses his private key to decrypt (decapsulate) the ciphertext and retrieve the secret key. See more details about SIKE in this Cloudflare <a href="/towards-post-quantum-cryptography-in-TLS">blog</a>.</p><p>Let's see how to do this with CIRCL:</p>
            <pre><code>// Bob's key pair
prvB := NewPrivateKey(Fp503, KeyVariantSike)
pubB := NewPublicKey(Fp503, KeyVariantSike)

// Generate private key
prvB.Generate(rand.Reader)
// Generate public key
prvB.GeneratePublicKey(pubB)

var publicKeyBytes = make([]byte, pubB.Size())
var privateKeyBytes = make([]byte, prvB.Size())

pubB.Export(publicKeyBytes)
prvB.Export(privateKeyBytes)

// Encode public key to JSON
// Save privateKeyBytes on disk</code></pre>
            <p>Bob uploads the public key to a location accessible by anybody. When Alice wants to establish a shared secret with Bob, she performs encapsulation that results in two parts: a shared secret and the result of the encapsulation, the ciphertext.</p>
            <pre><code>// Read JSON to bytes

// Bob's public key, imported by Alice
pubB := NewPublicKey(Fp503, KeyVariantSike)
pubB.Import(publicKeyBytes)

kem := sike.NewSike503(rand.Reader)
kem.Encapsulate(ciphertext, sharedSecret, pubB)

// send ciphertext to Bob</code></pre>
            <p>Bob now receives ciphertext from Alice and decapsulates the shared secret:</p>
            <pre><code>kem := sike.NewSike503(rand.Reader)
kem.Decapsulate(sharedSecret, prvB, pubB, ciphertext)</code></pre>
    <p>At this point, both Alice and Bob can derive a symmetric encryption key from the generated secret.</p><p>The SIKE implementation contains:</p><ul><li><p>Two different field sizes: Fp503 and Fp751. The choice of field is a trade-off between performance and security.</p></li><li><p>Code optimized for AMD64 and ARM64 architectures, as well as generic Go code. For AMD64, we detect the micro-architecture and if it’s recent enough (e.g., it supports the ADOX/ADCX and BMI2 instruction sets), we use different multiplication techniques to make execution even faster.</p></li><li><p>Code implemented in constant time; that is, the execution time doesn’t depend on secret values.</p></li></ul><p>We also took care to keep a low heap-memory footprint, so that the implementation uses a minimal amount of dynamically allocated memory. In the future, we plan to provide multiple implementations of post-quantum schemes. Currently, our focus is on algorithms useful for <a href="/towards-post-quantum-cryptography-in-TLS">key exchange in TLS</a>.</p><p>SIDH/SIKE are interesting because the key sizes produced by those algorithms are relatively small (compared with other PQ schemes). Nevertheless, performance is not all that great yet, so we’ll continue looking. We plan to add lattice-based algorithms, such as <a href="https://ntru-hrss.org/">NTRU-HRSS</a> and <a href="https://pq-crystals.org/kyber/">Kyber</a>, to CIRCL. We will also add another, more experimental algorithm called cSIDH, which we would like to try in other applications. CIRCL doesn’t currently contain any post-quantum signature algorithms, which is also on our to-do list. After our experiment with TLS key exchange completes, we’re going to look at post-quantum PKI. 
But that’s a topic for a future blog post, so stay tuned.</p><p>Last, we must admit that our code is largely based on the implementation from the <a href="https://csrc.nist.gov/CSRC/media/Projects/Post-Quantum-Cryptography/documents/round-1/submissions/SIKE.zip">NIST submission</a> along with the work of former intern <a href="/sidh-go/">Henry De Valence</a>, and we would like to thank both Henry and the SIKE team for their great work.</p>
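<p>As a hypothetical illustration of that last derivation step (this is not part of CIRCL’s API; the helpers <code>sealWithSecret</code> and <code>openWithSecret</code> are our own, and a real deployment would use a proper KDF such as HKDF rather than a bare hash), the shared secret can be hashed into an AES-256 key and used with AES-GCM:</p>

```go
package main

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"crypto/sha256"
	"fmt"
)

// sealWithSecret derives an AES-256 key from a KEM shared secret and
// encrypts a message with AES-GCM. Hashing stands in for a real KDF
// purely to keep the sketch self-contained.
func sealWithSecret(sharedSecret, plaintext []byte) (nonce, ct []byte, err error) {
	key := sha256.Sum256(sharedSecret)
	block, err := aes.NewCipher(key[:])
	if err != nil {
		return nil, nil, err
	}
	aead, err := cipher.NewGCM(block)
	if err != nil {
		return nil, nil, err
	}
	nonce = make([]byte, aead.NonceSize())
	if _, err = rand.Read(nonce); err != nil {
		return nil, nil, err
	}
	return nonce, aead.Seal(nil, nonce, plaintext, nil), nil
}

// openWithSecret reverses sealWithSecret, authenticating the ciphertext.
func openWithSecret(sharedSecret, nonce, ct []byte) ([]byte, error) {
	key := sha256.Sum256(sharedSecret)
	block, err := aes.NewCipher(key[:])
	if err != nil {
		return nil, err
	}
	aead, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	return aead.Open(nil, nonce, ct, nil)
}

func main() {
	secret := []byte("placeholder for the decapsulated SIKE secret")
	nonce, ct, _ := sealWithSecret(secret, []byte("hello Bob"))
	pt, _ := openWithSecret(secret, nonce, ct)
	fmt.Printf("%s\n", pt) // prints hello Bob
}
```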
    <div>
      <h3>Elliptic Curve Cryptography</h3>
      <a href="#elliptic-curve-cryptography">
        
      </a>
    </div>
    <p>Elliptic curve cryptography brings short key sizes and faster evaluation of operations when compared to algorithms based on RSA. <a href="/a-relatively-easy-to-understand-primer-on-elliptic-curve-cryptography/">Elliptic curves</a> were standardized during the early 2000s, and have recently gained popularity as they are a more efficient way of securing communications.</p><p>Elliptic curves are used in almost every project at Cloudflare, not only for establishing TLS connections, but also for certificate validation, certificate revocation (OCSP), <a href="/privacy-pass-the-math/">Privacy Pass</a>, <a href="/introducing-certificate-transparency-and-nimbus/">certificate transparency</a>, and <a href="/real-urls-for-amp-cached-content-using-cloudflare-workers/">AMP Real URL</a>.</p><p>The Go language provides native support for the NIST-standardized curves, the most popular of which is <a href="https://nvlpubs.nist.gov/nistpubs/FIPS/NIST.FIPS.186-4.pdf">P-256</a>. In a previous post, <a href="/go-crypto-bridging-the-performance-gap/">Vlad Krasnov</a> described the relevance of optimizing several cryptographic algorithms, including the P-256 curve. When working at Cloudflare scale, small performance issues are significantly magnified. This is one reason why Cloudflare pushes the boundaries of efficiency.</p><p>A similar thing happened with the chained <a href="/universal-ssl-encryption-all-the-way-to-the-origin-for-free/">validation</a> of certificates. For some certificates, we observed performance issues when validating a chain of certificates. Our team successfully diagnosed this issue: certificates which had signatures from the <a href="https://nvlpubs.nist.gov/nistpubs/FIPS/NIST.FIPS.186-4.pdf">P-384</a> curve, which is the curve that corresponds to the 192-bit security level, were taking up 99% of CPU time! 
It is common for certificates closer to the root of the chain of trust to rely on stronger security assumptions, for example, using larger elliptic curves. Our first-aid reaction came in the form of an optimized implementation written by <a href="https://github.com/bren2010/p384">Brendan McMillion</a> that reduced the time of performing elliptic curve operations by a factor of 10. The code for P-384 is also available in CIRCL.</p><p>The latest developments in elliptic curve cryptography have caused a shift towards elliptic curve models with faster arithmetic operations. The best example is undoubtedly <a href="https://cr.yp.to/ecdh.html">Curve25519</a>; other examples are the Goldilocks and FourQ curves. CIRCL supports all of these curves, allowing instantiation of Diffie-Hellman exchanges and Edwards digital signatures. Although this slightly overlaps with the Go native libraries, CIRCL has architecture-dependent optimizations.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/01zfosDYyaqzR6Lqu0Tshi/f5d0b4947d44ac2457a45fb5002e2268/imageLikeEmbed--3-.png" />
            
            </figure>
    <div>
      <h3>Hashing to Groups</h3>
      <a href="#hashing-to-groups">
        
      </a>
    </div>
    <p>Many cryptographic protocols rely on the hardness of solving the Discrete Logarithm Problem (DLP) in special groups, one of which is the integers reduced modulo a large integer. To guarantee that the DLP is hard to solve, the modulus must be a large prime number. Increasing its size boosts security, but also makes operations more expensive. A better approach is using elliptic curve groups, since they provide faster operations.</p><p>In some cryptographic protocols, it is common to use a function with the <a href="https://en.wikipedia.org/wiki/Cryptographic_hash_function">properties</a> of a cryptographic hash function that maps bit strings into elements of the group. This is easy to accomplish when, for example, the group is the set of integers modulo a large prime. However, it is not so clear how to perform this function using elliptic curves. In the cryptographic literature, several methods have been proposed, using the terms <i>hashing to curves</i> or <i>hashing to points</i> interchangeably.</p><p>The main issue is that there is no general method for deterministically finding points on an arbitrary elliptic curve; the closest available are methods that target special curves and parameters. This is a problem for implementers of cryptographic algorithms, who have a hard time figuring out a suitable method for hashing to points of an elliptic curve. Compounding that, the chances of doing this wrong are high. There are many different methods, elliptic curves, and security considerations to analyze. For example, a <a href="https://wpa3.mathyvanhoef.com/">vulnerability</a> in the WPA3 handshake protocol exploited a non-constant-time hashing method, resulting in recovery of keys. 
Currently, an <a href="https://datatracker.ietf.org/doc/draft-irtf-cfrg-hash-to-curve/">IETF draft</a> tracks work in progress that provides hashing methods, unifying the requirements with curves and their parameters.</p><p>To address this problem, CIRCL will include implementations of hashing methods for elliptic curves. Our development accompanies the evolution of the IETF draft. Users of CIRCL will therefore get this added value: the methods implement ready-to-go functionality, covering the needs of several cryptographic protocols.</p>
    <div>
      <h3>Update on Bilinear Pairings</h3>
      <a href="#update-on-bilinear-pairings">
        
      </a>
    </div>
    <p>Bilinear pairings are sometimes regarded as a tool for cryptanalysis; however, pairings can also be used in a constructive way, by allowing instantiation of advanced public-key algorithms - for example, identity-based encryption, attribute-based encryption, blind digital signatures, and three-party key agreement, among others.</p><p>An efficient way to instantiate a bilinear pairing is to use elliptic curves. Note that only a special class of curves can be used; these so-called <i>pairing-friendly</i> curves have specific properties that enable the efficient evaluation of a pairing.</p><p>Some families of pairing-friendly curves were introduced by Barreto-Naehrig (<a href="https://doi.org/10.1007/11693383_22">BN</a>), Kachisa-Schaefer-Scott (<a href="https://doi.org/10.1007/978-3-540-85538-5_9">KSS</a>), and Barreto-Lynn-Scott (<a href="https://doi.org/10.1007/3-540-36413-7_19">BLS</a>). BN256 is a BN curve using a 256-bit prime and is one of the fastest options for implementing a bilinear pairing. The Go native library supports this curve in the package <a href="https://godoc.org/golang.org/x/crypto/bn256">golang.org/x/crypto/bn256</a>. In fact, the BN256 curve is used by Cloudflare’s <a href="/geo-key-manager-how-it-works/">Geo Key Manager</a>, which allows distributing encrypted keys around the world. At Cloudflare, high performance is a must, and with this motivation, in 2017, we released an optimized implementation of the BN256 package that is <a href="https://github.com/cloudflare/bn256">8x faster</a> than Go’s native package. 
These optimizations have benefited several other projects, such as the <a href="https://github.com/ethereum/go-ethereum/blob/master/core/vm/contracts.go">Ethereum protocol</a> and the <a href="/league-of-entropy/">Randomness</a> <a href="/inside-the-entropy/">Beacon</a> project.</p><p>Recent <a href="https://eprint.iacr.org/2015/1027">improvements</a> in solving the DLP over extension fields, GF(pᵐ) for p prime and m&gt;1, impacted the security of pairings, forcing a recalculation of the parameters used for pairing-friendly curves.</p><p>Before these discoveries, the BN256 curve provided a 128-bit security level, but now larger primes are needed to target the same security level. That does not mean the BN256 curve has been broken: BN256 still gives a security level of <a href="https://eprint.iacr.org/2017/334">100 bits</a>, meaning that approximately 2¹⁰⁰ operations would be required to pose a real threat, which remains infeasible with current computing power.</p><p>Alongside the CIRCL announcement, we want to share our plans for research and development to obtain efficient curve(s) that can serve as a stronger successor to BN256. According to the estimation by <a href="http://doi.org/10.1007/s00145-018-9280-5">Barbulescu-Duquesne</a>, a BN curve must use primes of at least 456 bits to match a 128-bit security level. However, the recalculation of parameters brings BLS and KSS curves back into the picture as efficient alternatives. To this end, a <a href="https://datatracker.ietf.org/doc/draft-yonezawa-pairing-friendly-curves/">standardization effort</a> at the IETF is in progress, with the aim of defining parameters and pairing-friendly curves that match different security levels.</p><p>Note that regardless of the curve(s) chosen, there is an unavoidable performance downgrade when moving from BN256 to a stronger curve. 
Actual timings were presented by <a href="https://ecc2017.cs.ru.nl/slides/ecc2017-aranha.pdf">Aranha</a>, who described the evolution of the race for high-performance pairing implementations. The purpose of our continuous development of CIRCL is to minimize this impact through fast implementations.</p>
    <div>
      <h3>Optimizations</h3>
      <a href="#optimizations">
        
      </a>
    </div>
    <p>Go itself is very easy to learn and use for systems programming, and yet it makes it possible to use assembly so that you can stay close “to the metal”. We have blogged about improving performance in Go a few times in the past (see these posts about <a href="/how-expensive-is-crypto-anyway/">encryption</a>, <a href="/go-crypto-bridging-the-performance-gap/">ciphersuites</a>, and <a href="/neon-is-the-new-black/">image encoding</a>).</p><p>When developing CIRCL, we crafted the code to get the best possible performance from the machine. We leverage the capabilities provided by the architecture and the architecture-specific instructions. This means that in some cases we need to get our hands dirty and rewrite parts of the software in Go assembly, which is not easy, but definitely worth the effort when it comes to performance. We focused on x86-64, as this is our main target, but we also think it’s <a href="/arm-takes-wing/">worth looking at the ARM architecture</a>, and in some cases (like SIDH or P-384), CIRCL has optimized code for this platform.</p><p>We also try to ensure that code uses memory efficiently, crafting it so that fast allocations on the stack are preferred over expensive heap allocations. In cases where heap allocation is needed, we tried to design the APIs so that they allow pre-allocating memory ahead of time and reusing it across multiple operations.</p>
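<p>As a small sketch of the pre-allocation idea (hypothetical types and names, not CIRCL’s actual API), an operation can write its result into a caller-supplied, fixed-size value instead of returning a freshly allocated one, so a hot loop performs no per-iteration heap allocations:</p>

```go
package main

import "fmt"

// FieldElement is a fixed-size value; as a local variable it lives
// on the stack. The byte-wise "addition" below is a stand-in for
// real field arithmetic, purely for illustration.
type FieldElement [32]byte

// Add writes a+b into dst rather than returning a new slice,
// letting the caller reuse one destination across many calls.
func Add(dst, a, b *FieldElement) {
	for i := range dst {
		dst[i] = a[i] + b[i]
	}
}

func main() {
	var a, b, out FieldElement // single allocation site, reused below
	a[0], b[0] = 1, 2
	for i := 0; i < 1000; i++ { // hot loop: no per-iteration allocations
		Add(&out, &a, &b)
	}
	fmt.Println(out[0]) // 3
}
```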
    <div>
      <h3>Security</h3>
      <a href="#security">
        
      </a>
    </div>
    <p>The CIRCL library is offered as-is and without guarantee. Changes to the code, repository, and API should therefore be expected. We recommend taking caution before using this library in a production application, since part of its content is experimental.</p><p>As new attacks and vulnerabilities arise over time, the security of software should be treated as a continuous process. In particular, the assessment of cryptographic software is critical: it requires expertise from several fields, not only computer science. Cryptography engineers must be aware of the latest vulnerabilities and methods of attack in order to defend against them.</p><p>The development of CIRCL follows best practices for secure development. For example, if the execution time of the code depends on secret data, an attacker could leverage those irregularities to recover secret keys. In our code, we take care to write constant-time code and hence prevent timing-based attacks.</p><p>Developers of cryptographic software must also be aware of optimizations performed by the compiler and/or the <a href="https://meltdownattack.com/">processor</a>, since these optimizations can lead to insecure binaries in some cases. All of these issues could be exploited in real attacks aimed at compromising systems and keys. Therefore, software changes must be tracked through thorough code reviews. Static analyzers and automated testing tools also play an important role in the security of the software.</p>
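<p>As a concrete illustration of constant-time coding (our own example using Go’s standard library, not code taken from CIRCL): comparing a secret MAC tag with a loop that exits at the first mismatch leaks, through timing, how many leading bytes matched, whereas <code>crypto/subtle</code> inspects every byte before answering:</p>

```go
package main

import (
	"crypto/subtle"
	"fmt"
)

// checkTag verifies a received MAC tag against the expected one.
// ConstantTimeCompare takes the same amount of time whether the
// tags differ in the first byte or the last, so an attacker cannot
// recover the tag byte by byte from response timing.
func checkTag(got, want []byte) bool {
	return subtle.ConstantTimeCompare(got, want) == 1
}

func main() {
	want := []byte{0xde, 0xad, 0xbe, 0xef}
	fmt.Println(checkTag([]byte{0xde, 0xad, 0xbe, 0xef}, want)) // true
	fmt.Println(checkTag([]byte{0xde, 0xad, 0xbe, 0x00}, want)) // false
}
```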
    <div>
      <h3>Summary</h3>
      <a href="#summary">
        
      </a>
    </div>
    <p>CIRCL is envisioned as an effective tool for experimenting with modern cryptographic algorithms while providing high-performance implementations. Today marks the starting point of a continuous cycle of innovation and contribution back to the community in the form of a cryptographic library. There are still several other applications, such as homomorphic encryption, multi-party computation, and privacy-preserving protocols, that we would like to explore.</p><p>We are a team of cryptography, security, and software engineers working to improve and augment Cloudflare products. Our team keeps the communication channels open for receiving comments, including improvements, and merging contributions. We welcome opinions and contributions! If you would like to get in contact, check out our GitHub repository for CIRCL: <a href="https://github.com/cloudflare/circl">github.com/cloudflare/circl</a>. We want to share our work and hope it makes someone else’s job easier as well.</p><p>Finally, special thanks to all the contributors who have either directly or indirectly helped to implement the library - Ko Stoffelen, Brendan McMillion, Henry de Valence, Michael McLoughlin, and all the people who invested their time in reviewing our code.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/58PegOxcmXcRhZLgq36KqR/14e8ebfa42a7b425cb055afd9a0ca8f0/crypto-week-2019-header-circle_2x-1.png" />
            
            </figure> ]]></content:encoded>
            <category><![CDATA[Crypto Week]]></category>
            <category><![CDATA[Cryptography]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[Elliptic Curves]]></category>
            <category><![CDATA[TLS]]></category>
            <category><![CDATA[Research]]></category>
            <guid isPermaLink="false">3JsbElNXCgx49YgvgUTvsL</guid>
            <dc:creator>Kris Kwiatkowski</dc:creator>
            <dc:creator>Armando Faz-Hernández</dc:creator>
        </item>
        <item>
            <title><![CDATA[Cloudflare's Ethereum Gateway]]></title>
            <link>https://blog.cloudflare.com/cloudflare-ethereum-gateway/</link>
            <pubDate>Wed, 19 Jun 2019 13:01:00 GMT</pubDate>
            <description><![CDATA[ Today, we are excited to announce Cloudflare's Ethereum Gateway, where you can interact with the Ethereum network without installing any software on your computer. ]]></description>
            <content:encoded><![CDATA[ <p></p><p>Today, as part of <a href="/welcome-to-crypto-week-2019/">Crypto Week 2019</a>, we are excited to announce Cloudflare's Ethereum Gateway, where you can interact with the Ethereum network without installing any additional software on your computer.</p><p>This is another tool in Cloudflare’s Distributed Web Gateway tool set. Currently, Cloudflare lets you host content on the InterPlanetary File System (IPFS) and access it through your own custom domain. Similarly, the new Ethereum Gateway allows access to the Ethereum network, which you can provision through your custom hostname.</p><p>This setup makes it possible to add interactive elements to sites powered by <a href="https://blockgeeks.com/guides/smart-contracts/">Ethereum smart contracts</a>, a decentralized computing platform. And, in conjunction with the IPFS gateway, this allows hosting websites and resources in a decentralized manner, and has the extra bonus of the added speed, security, and reliability provided by the Cloudflare edge network. You can access our Ethereum gateway directly at <a href="https://cloudflare-eth.com">https://cloudflare-eth.com</a>.</p><p>This brief primer on how Ethereum and smart contracts work has examples of the many possibilities of using the Cloudflare Distributed Web Gateway.</p>
    <div>
      <h3><b>Primer on Ethereum</b></h3>
      <a href="#primer-on-ethereum">
        
      </a>
    </div>
    <p>You may have heard of Ethereum as a cryptocurrency. What you may not know is that Ethereum is so much more. Ethereum is a distributed virtual computing network that stores and enforces smart contracts.</p><p>So, what is a smart contract?</p><p>Good question. Ethereum smart contracts are simply a piece of code stored on the Ethereum blockchain. When the contract is triggered, it runs on the Ethereum Virtual Machine (EVM). The EVM is a distributed virtual machine that runs smart contract code and produces cryptographically verified changes to the state of the Ethereum blockchain as its result.</p><p>To illustrate the power of smart contracts, let's consider a little example.</p><p>Anna wants to start a VPN provider, but she lacks the capital. To raise funds for her venture she decides to hold an Initial Coin Offering (ICO). Rather than design an ICO contract from scratch, Anna bases her contract on <a href="https://ethereum.org/en/developers/docs/standards/tokens/erc-20/">ERC-20</a>. ERC-20 is a template for issuing fungible tokens, perfect for ICOs. Anna sends her ERC-20 compliant contract to the Ethereum network, and starts to sell stock in her new company, VPN Co.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/125yD5Cd5Q1meFIupmLr5M/77b5e343b8b0d2ac74eb701b9da71349/ico_2x.png" />
            
            </figure><p>Once she's sorted out funds, Anna sits down and starts to write a smart contract. Anna’s contract asks customers to send her their public key, along with some Ether (the coin product of Ethereum). She then authorizes the public key to access her VPN service. All without having to hold any secret information. Huzzah!</p><p>Next, rather than set up the infrastructure to run a VPN herself, Anna decides to use the blockchain again, but this time as a customer. Cloud Co. sells managed cloud infrastructure using their own smart contract. Anna programs her contract to send the appropriate amount of Ether to Cloud Co.'s contract. Cloud Co. then provisions the servers she needs to host her VPN. By automatically purchasing more infrastructure every time she has a new customer, her VPN company can scale totally autonomously.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2Ir87Td7jTHxaC3kL7oLOH/74246966422e55ddf4cd4e4b3922eeb5/VPN-co-_2x.png" />
            
            </figure><p>Finally, Anna pays dividends to her investors out of the profits, keeping a little for herself.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6tosVrjURmYmMFxqzRAn4t/5e0bc26c6915e1e67cdd6f963afee1a9/slice-of-pie-_2x.png" />
            
            </figure><p>And there you have it.</p><p>A decentralised, autonomous, smart VPN provider.</p><p>A smart contract stored on the blockchain has an associated account for storing funds, and the contract is triggered when someone sends Ether to that account. So for our VPN example, the provisioning contract triggers when someone transfers money into the account associated with Anna’s contract.</p><p>What distinguishes smart contracts from ordinary code?</p><p>The "smart" part of a smart contract is that it runs autonomously. The "contract" part is the guarantee that the code runs as written.</p><p>Because this contract is enforced cryptographically, maintained in the tamper-resistant medium of the blockchain, and verified by the consensus of the network, these contracts are more reliable than regular contracts, which can provoke disputes.</p>
    <div>
      <h3><b>Ethereum Smart Contracts vs. Traditional Contracts</b></h3>
      <a href="#ethereum-smart-contracts-vs-traditional-contracts">
        
      </a>
    </div>
    <p>A regular contract is enforced by the court system, litigated by lawyers. The outcome is uncertain; different courts rule differently and hiring more or better lawyers can swing the odds in your favor.</p><p>Smart contract outcomes are predetermined and are nearly incorruptible. However, here be dragons: though the outcome can be predetermined and incorruptible, a poorly written contract might not have the intended behavior, and because contracts are immutable, this is difficult to fix.</p>
    <div>
      <h3><b>How are smart contracts written?</b></h3>
      <a href="#how-are-smart-contracts-written">
        
      </a>
    </div>
    <p>You can write smart contracts in a number of languages, some of which are Turing complete, e.g. <a href="https://solidity.readthedocs.io">Solidity</a>. A Turing complete language lets you write code that can evaluate any computable function. This puts Solidity in the same class of languages as Python and Java. The compiled bytecode is then run on the EVM.</p><p>The EVM differs from a standard VM in a number of ways:</p>
    <div>
      <h5>The EVM is distributed</h5>
      <a href="#the-evm-is-distributed">
        
      </a>
    </div>
    <p>Each piece of code is run by numerous nodes. Nodes verify the computation before accepting a block, and therefore ensure that miners who want their blocks accepted must always run the EVM honestly. A block is only considered accepted when more than half of the network accepts it. This is the consensus part of Ethereum.</p><h6>The EVM is entirely deterministic</h6><p>This means that the same inputs to a function always produce the same outputs. Because regular VMs have access to file storage and the network, the results of a function call can be non-deterministic. Every EVM has the same start state, thus a given set of inputs always gives the same outputs. This makes the EVM more reliable than a standard VM.</p><p>There are two big gotchas that come with this determinism:</p><ul><li><p>EVM bytecode is Turing complete and therefore discerning the outputs without running the computation is not always possible.</p></li><li><p>Ethereum smart contracts can store state on the blockchain. This means that the output of the function can vary as the blockchain changes. Although technically this is deterministic, in that the blockchain is an input to the function, it may still be impossible to derive the output in advance.</p></li></ul><p>This means, however, that smart contracts suffer from the same problems as any piece of software: bugs. Unlike normal code, where the authors can issue a patch, code stored on the blockchain is immutable. More problematically, even if the author provides a new smart contract, the old one always remains available on the blockchain.</p><p>This means that when writing contracts, authors must be especially careful to write secure code, and include a kill switch to ensure that if bugs do reside in the code, they can be squashed. If there is no kill switch and there are vulnerabilities in the smart contract that can be exploited, it can potentially lead to the theft of resources from the smart contract or from other individuals. 
EVM bytecode includes a special <code>SELFDESTRUCT</code> opcode that deletes a contract and sends all funds to the specified address, for just this purpose.</p><p>The need to include a kill switch was brought into sharp focus during the <a href="https://en.wikipedia.org/wiki/The_DAO_(organization)">infamous DAO incident</a>. The DAO smart contract acted as a complex decentralized venture capital (VC) fund and, at its peak, held Ether worth 250 million dollars collected from a group of investors. Hackers exploited vulnerabilities in the smart contract and stole Ether worth 50 million dollars.</p><p>Because there is no way to undo transactions in Ethereum, there was a highly controversial “hard fork,” where the majority of the community agreed to accept a block with an “irregular state change” that essentially drained all DAO funds into a special “WithdrawDAO” recovery contract. By convincing enough miners to accept this irregular block as valid, the DAO could return funds.</p><p>Not everyone agreed with the change. Those who disagreed rejected the irregular block and formed the Ethereum Classic network, with both branches of the fork growing independently.</p><p>Kill switches, however, can cause their own problems. For example, when a contract used as a library flips its kill switch, all contracts relying on this contract can no longer operate as intended, even though the underlying library code is immutable. This caused over 500,000 ETH to become <a href="https://www.parity.io/security-alert-2/">stuck in multi-signature wallets</a> when an attacker triggered the kill switch of an underlying library.</p><p>Users of the multi-signature library assumed the immutability of the code meant that the library would always operate as anticipated. 
But the smart contracts that interact with the blockchain are only deterministic when accounting for the state of the blockchain.</p><p>In the wake of the DAO, various tools were created that check smart contracts for bugs or enable bug bounties, for example <a href="https://securify.chainsecurity.com/">Securify</a> and <a href="https://thehydra.io/">The Hydra</a>.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/62DZdNA9leTphizaCEyR5k/6073b3bcfb18e38fc049cd06fdab7a57/bug_3x.png" />
            
            </figure><p>Come here, you ...</p><p>Another way smart contracts avoid bugs is using standardized patterns. For example, ERC-20 defines a standardized interface for producing tokens such as those used in ICOs, and ERC-721 defines a standardized interface for implementing non-fungible tokens. Non-fungible tokens can be used for trading-card games like <a href="https://www.cryptokitties.co/">CryptoKitties</a>. CryptoKitties is a trading-card style game built on the Ethereum blockchain. Players can buy, sell, and breed cats, with each cat being unique.</p><p>CryptoKitties is built on a collection of smart contracts that provides an <a href="https://github.com/cryptocopycats/awesome-cryptokitties/tree/master/contracts">open-source Application Binary Interface (ABI</a>) for interacting with the KittyVerse -- the virtual world of the CryptoKitties application. An ABI simply allows you to call functions in a contract and receive any returned data. The <code>KittyBase</code> code may look like this:</p>
            <pre><code>contract KittyBase is KittyAccessControl {
    event Birth(address owner, uint256 kittyId, uint256 matronId, uint256 sireId, uint256 genes);
    event Transfer(address from, address to, uint256 tokenId);
    struct Kitty {
        uint256 genes;
        uint64 birthTime;
        uint64 cooldownEndBlock;
        uint32 matronId;
        uint32 sireId;
        uint32 siringWithId;
        uint16 cooldownIndex;
        uint16 generation;
    }
    [...]
    function _transfer(address _from, address _to, uint256 _tokenId) internal {
    ...
    }
    function _createKitty(uint256 _matronId, uint256 _sireId, uint256 _generation, uint256 _genes, address _owner) internal returns (uint) {
    ...
    }
    [...]
}</code></pre>
            <p>Besides defining what a Kitty is, this contract defines two basic functions for transferring and creating kitties. Both are internal and can only be called by contracts that implement <code>KittyBase</code>. The <code>KittyOwnership</code> contract implements both ERC-721 and <code>KittyBase</code>, and implements an external <code>transfer</code> function that calls the internal <code>_transfer</code> function. This code is compiled into bytecode written to the blockchain.</p><p>By implementing a standardised interface like ERC-721, smart contracts that aren’t specifically aware of CryptoKitties can still interact with the KittyVerse. The CryptoKitties ABI functions allow users to create distributed apps (dApps), of their own design on top of the KittyVerse, and allow other users to use their dApps. This extensibility helps demonstrate the potential of smart contracts.</p>
    <div>
      <h3><b>How is this so different?</b></h3>
      <a href="#how-is-this-so-different">
        
      </a>
    </div>
    <p>Smart contracts are, by definition, public. Everyone can see the terms and understand where the money goes. This is a radically different approach to providing transparency and accountability. Because all contracts and transactions are public and verified by consensus, trust is distributed between the people, rather than centralized in a few big institutions.</p><p>The trust given to institutions is historic in that we trust them because they have previously demonstrated trustworthiness.</p><p>The trust placed in consensus-based algorithms is based on the assumption that most people are honest, or more accurately, that no sufficiently large subset of people can collude to produce a malicious outcome. This is the democratisation of trust.</p><p>In the case of the DAO attack, a majority of nodes <i>agreed</i> to accept an “irregular” state transition. This effectively undid the damage of the attack and demonstrates how, at least in the world of blockchain, perception is reality. Because most people “believed” (accepted) this irregular block, it became a “real,” valid block. Most people think of the blockchain as immutable and trust the power of consensus to ensure correctness; however, if enough people agree to do something irregular, the rules don't have to hold.</p>
    <div>
      <h3><b>So where does Cloudflare fit in?</b></h3>
      <a href="#so-where-does-cloudflare-fit-in">
        
      </a>
    </div>
    <p>Accessing the Ethereum network and its attendant benefits directly requires running complex software, including downloading and cryptographically verifying hundreds of gigabytes of data, which apart from producing technical barriers to entry for users, can also exclude people with low-power devices.</p><p>To help those users and devices access the Ethereum network, the Cloudflare Ethereum gateway allows any device capable of accessing the web to interact with the Ethereum network in a safe, reliable way.</p><p>Through our gateway, not only can you explore the blockchain, but if you give our gateway a signed transaction, we’ll push it to the network to allow miners to add it to their blockchain. This means that you can send Ether and even put new contracts on the blockchain without having to run a node.</p><p>"But Jonathan," I hear you say, "by providing a gateway aren't you just making Cloudflare a centralizing institution?"</p><p>That’s a fair question. Thankfully, Cloudflare won’t be alone in offering these gateways. We’re joining alongside organizations, such as <a href="https://infura.io">Infura</a>, to expand the constellation of gateways that already exist. We hope that, by providing a fast, reliable service, we can enable people who never previously used smart-contracts to do so, and in so doing bring the benefits they offer to billions of regular Internet users.</p><blockquote><p>"We're excited that Cloudflare is bringing their infrastructure expertise to the Ethereum ecosystem. Infura has always believed in the importance of standardized, open APIs and compatibility between gateway providers, so we look forward to collaborating with their team to build a better distributed web." - E.G. 
Galano, <a href="https://infura.io/">Infura</a> co-founder.</p></blockquote><p>By providing a gateway to the Ethereum network, we help users make the jump from general web-user to cryptocurrency native, and eventually make the distributed web a fundamental part of the Internet.</p>
    <div>
      <h3><b>What can you do with Cloudflare's Gateway?</b></h3>
      <a href="#what-can-you-do-with-cloudflares-gateway">
        
      </a>
    </div>
    <p>Visit <a href="https://cloudflare-eth.com">cloudflare-eth.com</a> to interact with our example app. But to really explore the Ethereum world, access the RPC API, where you can do anything that can be done on the Ethereum network itself, from examining contracts to transferring funds.</p><p>Our Gateway accepts <code>POST</code> requests containing JSON. For a complete list of calls, visit the <a href="https://github.com/ethereum/wiki/wiki/JSON-RPC">Ethereum github page</a>. So, to get the block number of the most recent block, you could run:</p>
            <pre><code>curl https://cloudflare-eth.com -H "Content-Type: application/json" --data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}'</code></pre>
            <p>and you would get a response something like this:</p>
            <pre><code>{
  "jsonrpc": "2.0",
  "id": 1,
  "result": "0x780f17"
}</code></pre>
            <p>We also invite developers to build dApps based on our Ethereum gateway using our API. Our API allows developers to build websites powered by the Ethereum blockchain. Check out <a href="https://developers.cloudflare.com/distributed-web/ethereum-gateway/">developer docs</a> to get started. If you want to read more about how Ethereum works check out this <a href="https://medium.com/@preethikasireddy/how-does-ethereum-work-anyway-22d1df506369">deep dive</a>.</p>
    <div>
      <h3><b>The architecture</b></h3>
      <a href="#the-architecture">
        
      </a>
    </div>
    <p>Cloudflare is uniquely positioned to host an Ethereum gateway, and we have the utmost faith in the products we offer to customers. This is why the Cloudflare Ethereum gateway runs as a Cloudflare customer and we <a href="https://en.wikipedia.org/wiki/Eating_your_own_dog_food">dogfood</a> our own products to provide a fast and reliable gateway. The domain we run the gateway on (<a href="https://cloudflare-eth.com">https://cloudflare-eth.com</a>) uses <a href="https://www.cloudflare.com/products/cloudflare-workers/">Cloudflare Workers</a> to cache responses for popular queries made to the gateway. Responses for these queries are answered directly from the Cloudflare edge, which can result in a ~6x speed-up.</p><p>We also use <a href="/introducing-load-balancing-intelligent-failover-with-cloudflare/">Load balancing</a> and <a href="/argo-tunnel/">Argo Tunnel</a> for fast, redundant, and secure content delivery. With Argo Smart Routing enabled, requests and responses to our Ethereum gateway are tunnelled directly from our Ethereum node to the Cloudflare edge using the best possible routing.</p>
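<p>A hypothetical sketch of the caching idea (the method list and policy below are illustrative, not the gateway’s actual configuration): responses to queries whose answers never change, such as fetching a mined block by hash, can be served from an edge cache keyed on the request body, while queries whose answers change as the chain grows must reach the Ethereum node:</p>

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// cacheForever marks which JSON-RPC methods return immutable answers
// and are therefore safe to cache indefinitely. Illustrative policy,
// not the gateway's real rules.
var cacheForever = map[string]bool{
	"eth_getBlockByHash": true,  // a mined block never changes
	"eth_blockNumber":    false, // changes with every new block
	"eth_call":           false, // depends on current chain state
}

// cacheKey derives a stable edge-cache key from the raw request body,
// so identical queries hit the same cached response.
func cacheKey(body []byte) string {
	sum := sha256.Sum256(body)
	return hex.EncodeToString(sum[:])
}

func main() {
	body := []byte(`{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}`)
	fmt.Println(cacheForever["eth_getBlockByHash"], len(cacheKey(body)))
}
```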
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4JQmJaXMtzmPq4OuuP2897/580a60e6443392ff0da26a94f7842dc2/imageLikeEmbed--1-.png" />
            
            </figure><p>Similar to our <a href="https://cloudflare.com/distributed-web-gateway">IPFS gateway</a>, <a href="https://cloudflare-eth.com">cloudflare-eth.com</a> is an <a href="https://www.cloudflare.com/ssl-for-saas-providers/">SSL for SaaS</a> provider. This means that anyone can set up the Cloudflare Ethereum gateway as a backend for access to the Ethereum network through their own registered domains. For more details on how to set up your own domain with this functionality, see the Ethereum tab on <a href="https://cloudflare.com/distributed-web-gateway">cloudflare.com/distributed-web-gateway</a>.</p><p>With these features, you can use Cloudflare’s Distributed Web Gateway to create a fully decentralized website with an interactive backend that allows interaction with the IPFS and Ethereum networks. For example, you can host your content on IPFS (using something like <a href="https://pinata.cloud">Pinata</a> to pin the files), and then host the website backend as a smart contract on Ethereum. This architecture does not require a centralized server for hosting files or the actual website. Added to the power, speed, and security provided by Cloudflare’s edge network, your website is delivered to users around the world with unparalleled efficiency.</p>
    <div>
      <h3>Embracing a distributed future</h3>
      <a href="#embracing-a-distributed-future">
        
      </a>
    </div>
    <p>At Cloudflare, we support technologies that help distribute trust. By providing a gateway to the Ethereum network, we hope to facilitate the growth of a decentralized future.</p><p>We thank the Ethereum Foundation for their support of a new gateway in expanding the distributed web:</p><blockquote><p>“Cloudflare's Ethereum Gateway increases the options for thin-client applications as well as decentralization of the Ethereum ecosystem, and I can't think of a better person to do this work than Cloudflare. Allowing access through a user's custom hostname is a particularly nice touch. Bravo.” - Dr. Virgil Griffith, Head of Special Projects, Ethereum Foundation.</p></blockquote><p>We hope that by allowing anyone to use the gateway as the backend for their domain, we make the Ethereum network more accessible for everyone; with the added speed and security brought by serving this content directly from Cloudflare’s global edge network.</p><p>So, go forth and build our vision – the distributed crypto-future!</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5fM3vAj1tUUkBVxyauQZQq/093ceaa41694d5b6493ac02f173ca5a0/crypto-week-2019-header-circle_2x.png" />
            
            </figure> ]]></content:encoded>
            <category><![CDATA[Crypto Week]]></category>
            <category><![CDATA[Cryptography]]></category>
            <category><![CDATA[IPFS]]></category>
            <category><![CDATA[Ethereum]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[Research]]></category>
            <guid isPermaLink="false">4oOx6ebFXMI1k3UBQBDy6j</guid>
            <dc:creator>Jonathan Hoyland</dc:creator>
        </item>
        <item>
            <title><![CDATA[Continuing to Improve our IPFS Gateway]]></title>
            <link>https://blog.cloudflare.com/continuing-to-improve-our-ipfs-gateway/</link>
            <pubDate>Wed, 19 Jun 2019 13:00:00 GMT</pubDate>
            <description><![CDATA[ When we launched our InterPlanetary File System (IPFS) gateway last year we were blown away by the positive reception.  ]]></description>
            <content:encoded><![CDATA[ <p></p><p>When we launched our InterPlanetary File System (IPFS) gateway <a href="/distributed-web-gateway/">last year</a> we were blown away by the positive reception. Countless people gave us valuable suggestions for improvement and made open-source contributions to make serving content through our gateway easy (many captured in our <a href="https://developers.cloudflare.com/distributed-web/">developer docs</a>). Since then, our gateway has grown to regularly handle over a thousand requests per second, and has become the primary access point for several IPFS websites.</p><p>We’re committed to helping grow IPFS and have taken what we have learned since our initial release to improve our gateway. So far, we’ve done the following:</p>
    <div>
      <h3>Automatic Cache Purge</h3>
      <a href="#automatic-cache-purge">
        
      </a>
    </div>
    <p>One of the ways we tried to improve the performance of our gateway when we initially set it up was by setting really high cache TTLs. After all, content on IPFS is largely meant to be static. The complaint we heard, though, was that site owners were frustrated at wait times upwards of several hours for changes to their website to propagate.</p><p>The way an IPFS gateway knows what content to serve when it receives a request for a given domain is by looking up the value of a TXT record associated with the domain – the DNSLink record. The value of this TXT record is the hash of the <b>entire</b> site, which changes if any one bit of the website changes. So we wrote a <a href="https://www.cloudflare.com/products/cloudflare-workers/">Worker</a> script that makes a DNS-over-HTTPS query to 1.1.1.1 and bypasses the cache if it sees that the DNSLink record of a domain is different from when the content was originally cached.</p><p>Checking DNS gives the illusion of a much lower cache TTL and usually adds less than 5ms to a request, whereas revalidating the cache with a request to the origin could take anywhere from 30ms to 300ms. And as an additional usability bonus, the 1.1.1.1 cache automatically purges when Cloudflare customers change their DNS records. Customers who don’t manage their DNS records with us can purge their cache using <a href="https://1.1.1.1/purge-cache/">this tool</a>.</p>
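<p>The decision the Worker makes can be sketched as follows. This is an illustrative Python version, not Cloudflare’s actual Worker (which runs as JavaScript in the Workers runtime); the function names and the DoH JSON shape are assumptions modeled on the 1.1.1.1 JSON API.</p>

```python
import json
from typing import Optional

# Hypothetical sketch of the gateway's cache-bypass decision. The real
# implementation is a Cloudflare Worker; this only illustrates the logic.

def extract_dnslink(doh_json: str) -> Optional[str]:
    """Pull the dnslink=/ipfs/... value out of a DNS-over-HTTPS JSON answer."""
    for record in json.loads(doh_json).get("Answer", []):
        value = record.get("data", "").strip('"')
        if value.startswith("dnslink="):
            return value[len("dnslink="):]
    return None

def should_bypass_cache(cached_root: str, doh_json: str) -> bool:
    """Bypass the cache when the domain's current DNSLink root hash differs
    from the root hash recorded when the content was cached."""
    current_root = extract_dnslink(doh_json)
    return current_root is not None and current_root != cached_root
```

<p>Because the whole site hashes to a single root, a one-bit change anywhere flips the DNSLink value, and the next request falls through to the origin.</p>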
    <div>
      <h3>Beta Testing for Orange-to-Orange</h3>
      <a href="#beta-testing-for-orange-to-orange">
        
      </a>
    </div>
    <p>Our gateway was originally based on a feature called <a href="https://support.cloudflare.com/hc/en-us/articles/217371987-Managing-Custom-Hostnames-SSL-for-SaaS-">SSL for SaaS</a>. This tweaks the way our edge works to allow anyone, Cloudflare customers or not, to CNAME their own domain to a target domain on our network, and have us send traffic we see for their domain to the target domain’s origin. SSL for SaaS keeps valid certificates for these domains in the Cloudflare database (hence the name), and applies the target domain’s configuration to these requests (for example, enforcing Page Rules) before they reach the origin.</p><p>The great thing about SSL for SaaS is that it doesn’t require being on the Cloudflare network. New people can start serving their websites through our gateway with their existing DNS provider, instead of migrating everything over. All Cloudflare settings are inherited from the target domain. This is a huge convenience, but also means that the source domain can’t customize their settings even if they do migrate.</p><p>This can be improved by an experimental feature called Orange-to-Orange (O2O) from the Cloudflare Edge team. O2O allows one zone on Cloudflare to CNAME to another zone, and apply the settings of both zones in layers. For example, cloudflare-ipfs.com has <b>Always Use HTTPS</b> turned off for various reasons, which means that every site served through our gateway also does. O2O allows site owners to override this setting by enabling <b>Always Use HTTPS</b> just for their website, if they know it’s okay, as well as adding custom Page Rules and Worker scripts to embed all sorts of complicated logic.</p><p>If you are on an Enterprise plan and would like to try this out on your domain, please reach out to your account team with this request and we'll enable it for you in the coming weeks.</p>
    <div>
      <h3>Subdomain-based Gateway</h3>
      <a href="#subdomain-based-gateway">
        
      </a>
    </div>
    <p>To host an application on IPFS it’s pretty much essential to have a custom domain for your app. We discussed all the reasons for this in our post, <a href="/e2e-integrity/">End-to-End Integrity with IPFS</a> – essentially saying that because browsers only sandbox websites at the domain-level, serving an app directly from a gateway’s URL is not secure because another (malicious) app could steal its data.</p><p>Having a custom domain gives apps a secure place to keep user data, but also makes it possible for whoever controls the DNS for the domain to change a website’s content without warning. To provide both a secure context to apps as well as eternal immutability, Cloudflare set up a subdomain-based gateway at cf-ipfs.com.</p><p>cf-ipfs.com doesn’t respond to requests to the root domain, only at subdomains, where it interprets the subdomain as the hash of the content to serve. This means a request to https://&lt;hash&gt;.cf-ipfs.com is the equivalent of going to <a href="https://cloudflare-ipfs.com/ipfs/">https://cloudflare-ipfs.com/ipfs/&lt;hash&gt;</a>. The only technicality is that because domain names are case-insensitive, the hash must be re-encoded from Base58 to Base32. Luckily, the standard IPFS client provides a utility for this!</p><p>As an example, we’ll take the classic Wikipedia mirror on IPFS: <a href="https://cloudflare-ipfs.com/ipfs/QmXoypizjW3WknFiJnKLwHCnL72vedxjQkDDP1mXWo6uco/wiki/">https://cloudflare-ipfs.com/ipfs/QmXoypizjW3WknFiJnKLwHCnL72vedxjQkDDP1mXWo6uco/wiki/</a></p><p>First, we convert the hash, <code>QmXoyp...6uco</code>, to base32:</p>
            <pre><code>$ ipfs cid base32 QmXoypizjW3WknFiJnKLwHCnL72vedxjQkDDP1mXWo6uco
bafybeiemxf5abjwjbikoz4mc3a3dla6ual3jsgpdr4cjr3oz3evfyavhwq</code></pre>
            <p>which tells us we can go here instead:</p><p><a href="https://bafybeiemxf5abjwjbikoz4mc3a3dla6ual3jsgpdr4cjr3oz3evfyavhwq.cf-ipfs.com/wiki/">https://bafybeiemxf5abjwjbikoz4mc3a3dla6ual3jsgpdr4cjr3oz3evfyavhwq.cf-ipfs.com/wiki/</a></p><p>The main downside of the subdomain approach is that for clients without <a href="/encrypted-sni/">Encrypted SNI</a> support, the hash is leaked to the network as part of the TLS handshake. This can be bad for privacy and enable <a href="https://www.bleepingcomputer.com/news/security/south-korea-is-censoring-the-internet-by-snooping-on-sni-traffic/">network-level censorship</a>.</p>
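<p>For reference, the conversion that <code>ipfs cid base32</code> performs can be reproduced in a few lines. This is a sketch under the assumptions that the input is a sha2-256 CIDv0 (the common <code>Qm...</code> form, a base58-encoded multihash) and that the resulting CIDv1 keeps the dag-pb codec with the "b" multibase prefix; real applications should use the IPFS CLI or a CID library.</p>

```python
import base64

# Bitcoin-style base58 alphabet used by CIDv0 (no 0, O, I, or l).
B58_ALPHABET = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def cidv0_to_base32(cid_v0: str) -> str:
    """Convert a sha2-256 CIDv0 ("Qm...") to the lowercase base32 CIDv1
    form that case-insensitive subdomains require. Sketch only."""
    # Base58-decode into the raw 34-byte multihash: 0x12 (sha2-256),
    # 0x20 (32-byte digest length), then the digest itself.
    n = 0
    for ch in cid_v0:
        n = n * 58 + B58_ALPHABET.index(ch)
    multihash = n.to_bytes(34, "big")
    assert multihash[:2] == b"\x12\x20", "expected a sha2-256 CIDv0"
    # CIDv1 bytes: <version 0x01><dag-pb codec 0x70><multihash>, encoded
    # as unpadded lowercase base32 with the multibase prefix "b".
    cid_v1 = b"\x01\x70" + multihash
    return "b" + base64.b32encode(cid_v1).decode("ascii").lower().rstrip("=")
```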
    <div>
      <h3>Enabling Session Affinity</h3>
      <a href="#enabling-session-affinity">
        
      </a>
    </div>
    <p>Loading a website usually requires fetching more than one asset from a backend server, and more often than not, “more than one” is more like “more than a dozen.” When that website is being loaded over IPFS, it dramatically improves performance when the IPFS node can make one connection and re-use it for all assets.</p><p>Behind the curtain, we run several IPFS nodes to reduce the likelihood of an outage and improve throughput. Unfortunately, with the way it was originally set up, each request for a different asset on a website would likely go to a different IPFS node and all those connections would have to be made again.</p><p>We fixed this by replacing the original backend load balancer with our own <a href="https://www.cloudflare.com/load-balancing/">Load Balancing</a> product that supports Session Affinity and automatically directs requests from the same user to the same IPFS node, minimizing redundant network requests.</p>
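<p>Conceptually, session affinity is just a deterministic mapping from client to backend. A minimal IP-hash sketch (illustrative only; Cloudflare’s Load Balancer implements affinity with its own cookie- and IP-based modes):</p>

```python
import hashlib

def pick_node(client_ip: str, nodes: list) -> str:
    """Deterministically map a client to one backend IPFS node so repeated
    requests from the same user reuse that node's open IPFS connections."""
    digest = hashlib.sha256(client_ip.encode("ascii")).digest()
    return nodes[int.from_bytes(digest[:8], "big") % len(nodes)]
```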
    <div>
      <h3>Connecting with Pinata</h3>
      <a href="#connecting-with-pinata">
        
      </a>
    </div>
    <p>And finally, we’ve configured our IPFS nodes to maintain a persistent connection to the nodes run by <a href="https://pinata.cloud/">Pinata</a>, a company that helps people pin content to the IPFS network. Having a persistent connection significantly improves the performance and reliability of requests to our gateway for content on Pinata’s network. Pinata has written their own blog post, which you can find <a href="https://medium.com/pinata/how-to-easily-host-a-website-on-ipfs-9d842b5d6a01">here</a>, describing how to upload a website to IPFS and keep it online with a combination of Cloudflare and Pinata.</p><p>As always, we look forward to seeing what the community builds on top of our work, and hearing about how else Cloudflare can improve the Internet.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7jxOs3MCZyCfTxalFujmdZ/90734a85f553cb3903e3b6338758811f/image2-5.png" />
            
            </figure> ]]></content:encoded>
            <category><![CDATA[Crypto Week]]></category>
            <category><![CDATA[Cryptography]]></category>
            <category><![CDATA[IPFS]]></category>
            <category><![CDATA[1.1.1.1]]></category>
            <category><![CDATA[Product News]]></category>
            <guid isPermaLink="false">2v3Nrp3CmcVKRsqyifM9un</guid>
            <dc:creator>Brendan McMillion</dc:creator>
        </item>
        <item>
            <title><![CDATA[Securing Certificate Issuance using Multipath Domain Control Validation]]></title>
            <link>https://blog.cloudflare.com/secure-certificate-issuance/</link>
            <pubDate>Tue, 18 Jun 2019 13:00:00 GMT</pubDate>
            <description><![CDATA[ Trust on the Internet is underpinned by the Public Key Infrastructure (PKI). PKI grants servers the ability to securely serve websites by issuing digital certificates, providing the foundation for encrypted and authentic communication.  ]]></description>
            <content:encoded><![CDATA[ <p></p><p>This blog post is part of <a href="/welcome-to-crypto-week-2019/">Crypto Week 2019</a>.</p><p>Trust on the Internet is underpinned by the Public Key Infrastructure (PKI). PKI grants servers the ability to securely serve websites by issuing digital certificates, providing the foundation for encrypted and authentic communication.</p><p>Certificates make HTTPS encryption possible by using the public key in the certificate to verify server identity. HTTPS is especially important for websites that transmit sensitive data, such as banking credentials or private messages. Thankfully, modern browsers, such as Google Chrome, flag websites not secured using HTTPS by marking them “Not secure,” allowing users to be more security conscious of the websites they visit.</p><p>This blog post introduces a new, free tool Cloudflare offers to CAs so they can further secure certificate issuance. But before we dive in too deep, let’s talk about where certificates come from.</p>
    <div>
      <h3>Certificate Authorities</h3>
      <a href="#certificate-authorities">
        
      </a>
    </div>
    <p>Certificate Authorities (CAs) are the institutions responsible for issuing certificates.</p><p>When issuing a certificate for any given domain, they use Domain Control Validation (DCV) to verify that the entity requesting a certificate for the domain is the legitimate owner of the domain. With DCV the domain owner:</p><ol><li><p>creates a DNS resource record for a domain;</p></li><li><p>uploads a document to the web server located at that domain; OR</p></li><li><p>proves ownership of the domain’s administrative email account.</p></li></ol><p>The DCV process prevents adversaries from obtaining private-key and certificate pairs for domains not owned by the requestor.  </p><p>Preventing adversaries from acquiring this pair is critical: if an incorrectly issued certificate and private-key pair wind up in an adversary’s hands, they could pose as the victim’s domain and serve sensitive HTTPS traffic. This violates our existing trust of the Internet, and compromises private data on a potentially massive scale.</p><p>For example, an adversary that tricks a CA into mis-issuing a certificate for gmail.com could then perform TLS handshakes while pretending to be Google, and exfiltrate cookies and login information to gain access to the victim’s Gmail account. The risks of certificate mis-issuance are clearly severe.</p>
    <div>
      <h3>Domain Control Validation</h3>
      <a href="#domain-control-validation">
        
      </a>
    </div>
    <p>To prevent attacks like this, CAs only issue a certificate after performing DCV. One way of validating domain ownership is through HTTP validation, done by uploading a text file to a specific HTTP endpoint on the webserver they want to secure.  Another DCV method is done using email verification, where an email with a validation code link is sent to the administrative contact for the domain.</p>
    <div>
      <h3>HTTP Validation</h3>
      <a href="#http-validation">
        
      </a>
    </div>
    <p>Suppose Alice <a href="https://www.cloudflare.com/learning/dns/how-to-buy-a-domain-name/">buys</a> the domain name aliceswonderland.com and wants to get a dedicated certificate for this domain. Alice chooses to use Let’s Encrypt as her certificate authority. First, Alice must generate her own private key and create a certificate signing request (CSR). She sends the CSR to Let’s Encrypt, but the CA won’t issue a certificate for that CSR and private key until they know Alice owns aliceswonderland.com. Alice can then choose to prove that she owns this domain through HTTP validation.</p><p>When Let’s Encrypt performs DCV over HTTP, they require Alice to place a randomly named file in the <code>/.well-known/acme-challenge</code> path for her website. The CA must retrieve the text file by sending an HTTP <code>GET</code> request to <code>http://aliceswonderland.com/.well-known/acme-challenge/&lt;random_filename&gt;</code>. An expected value must be present on this endpoint for DCV to succeed.</p><p>For HTTP validation, Alice would upload a file to <code>http://aliceswonderland.com/.well-known/acme-challenge/YnV0dHNz</code></p><p>where the body contains:</p>
            <pre><code>curl http://aliceswonderland.com/.well-known/acme-challenge/YnV0dHNz

GET /.well-known/acme-challenge/YnV0dHNz
Host: aliceswonderland.com

HTTP/1.1 200 OK
Content-Type: application/octet-stream

YnV0dHNz.TEST_CLIENT_KEY</code></pre>
            <p>The CA instructs Alice to use the Base64 token <code>YnV0dHNz</code>. <code>TEST_CLIENT_KEY</code> is an account-linked key that only the certificate requestor and the CA know. The CA uses this field combination to verify that the certificate requestor actually owns the domain. Afterwards, Alice can get her certificate for her website!</p>
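<p>In the ACME protocol that Let’s Encrypt implements (RFC 8555), this <code>token.key</code> body is called a key authorization: the token joined to an RFC 7638 thumbprint of the account’s public key. A hedged sketch, using a toy JWK rather than a real account key, and assuming the dict already contains only the key’s required members:</p>

```python
import base64
import hashlib
import json

def jwk_thumbprint(jwk: dict) -> str:
    """RFC 7638 thumbprint: SHA-256 over the JWK's required members,
    serialized with keys in lexicographic order and no whitespace,
    then base64url-encoded without padding. Assumes `jwk` holds only
    the required members already."""
    canonical = json.dumps(jwk, sort_keys=True, separators=(",", ":"))
    digest = hashlib.sha256(canonical.encode("ascii")).digest()
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")

def key_authorization(token: str, account_jwk: dict) -> str:
    """The body the CA expects at /.well-known/acme-challenge/<token>."""
    return f"{token}.{jwk_thumbprint(account_jwk)}"
```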
    <div>
      <h3>DNS Validation</h3>
      <a href="#dns-validation">
        
      </a>
    </div>
    <p>Another way users can validate domain ownership is to add a DNS TXT record containing a verification string or <i>token</i> from the CA to their domain’s resource records. For example, here’s a domain for an enterprise validating itself towards Google:</p>
            <pre><code>$ dig TXT aliceswonderland.com
aliceswonderland.com.	 28 IN TXT "google-site-verification=COanvvo4CIfihirYW6C0jGMUt2zogbE_lC6YBsfvV-U"</code></pre>
            <p>Here, Alice chooses to create a TXT DNS resource record with a specific token value. A Google CA can verify the presence of this token to validate that Alice actually owns her website.</p>
    <div>
      <h3>Types of BGP Hijacking Attacks</h3>
      <a href="#types-of-bgp-hijacking-attacks">
        
      </a>
    </div>
    <p>Certificate issuance is required for servers to securely communicate with clients. This is why it’s so important that the process responsible for issuing certificates is also secure. Unfortunately, this is not always the case.</p><p>Researchers at Princeton University recently discovered that common DCV methods are vulnerable to attacks executed by network-level adversaries. If Border Gateway Protocol (BGP) is the “postal service” of the Internet responsible for delivering data through the most efficient routes, then Autonomous Systems (AS) are individual post office branches that represent an Internet network run by a single organization. Sometimes network-level adversaries advertise false routes over BGP to steal traffic, especially if that traffic contains something important, like a domain’s certificate.</p><p><a href="https://www.princeton.edu/~pmittal/publications/bgp-tls-usenix18.pdf"><i>Bamboozling Certificate Authorities with BGP</i></a> highlights five types of attacks that can be orchestrated during the DCV process to obtain a certificate for a domain the adversary does not own. After implementing these attacks, the authors were able to (ethically) obtain certificates for domains they did not own from the top five CAs: Let’s Encrypt, GoDaddy, Comodo, Symantec, and GlobalSign. But how did they do it?</p>
    <div>
      <h3>Attacking the Domain Control Validation Process</h3>
      <a href="#attacking-the-domain-control-validation-process">
        
      </a>
    </div>
    <p>There are two main approaches to attacking the DCV process with BGP hijacking:</p><ol><li><p>Sub-Prefix Attack</p></li><li><p>Equally-Specific-Prefix Attack</p></li></ol><p>These attacks create a vulnerability when an adversary sends a certificate signing request for a victim’s domain to a CA. When the CA verifies the network resources using an <code>HTTP GET</code>  request (as discussed earlier), the adversary then uses BGP attacks to hijack traffic to the victim’s domain in a way that the CA’s request is rerouted to the adversary and not the domain owner. To understand how these attacks are conducted, we first need to do a little bit of math.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/55IOsCmOIvDukAEqazUFq9/2a480249d3d70e5a1b271367f9eeb719/Domain-Control-Validation-Process_1.5x.png" />
            
            </figure><p>Every device on the Internet uses an IP (Internet Protocol) address as a numerical identifier. IPv6 addresses contain 128 bits and follow a slash notation to indicate the size of the prefix. So, in the network address <b>2001:DB8:1000::/48</b>, “<b>/48</b>” refers to how many bits identify the network. This means that there are 80 bits left that contain the host addresses, for a total of 2<sup>80</sup> host addresses. The smaller the prefix number, the more host addresses remain in the network. With this knowledge, let’s jump into the attacks!</p>
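<p>The arithmetic is simply two raised to the number of host bits:</p>

```python
def host_addresses(prefix_len: int, total_bits: int = 128) -> int:
    """Addresses contained in a block: 2^(address bits - prefix bits).
    Defaults to IPv6's 128-bit addresses; pass total_bits=32 for IPv4."""
    return 2 ** (total_bits - prefix_len)

# An IPv6 /48 leaves 80 host bits; an IPv4 /24 leaves 8.
```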
    <div>
      <h4>Attack one: Sub-Prefix Attack</h4>
      <a href="#attack-one-sub-prefix-attack">
        
      </a>
    </div>
    <p>When BGP announces a route, the router always prefers to follow the more specific route. So if <b>2001:DB8::/32</b> and <b>2001:DB8:1000::/48</b> are advertised, the router will use the latter as it is the more specific prefix. This becomes a problem when an adversary announces a more specific prefix covering the victim’s IP address. Let’s say the IP address for our victim, leagueofentropy.com, is <b>2001:DB8:1000::1</b> and announced as <b>2001:DB8::/32</b>. If an adversary announces the prefix <b>2001:DB8:1000::/48</b>, then they will capture the victim’s traffic, launching a <i>sub-prefix hijack attack</i>.</p><p>In an IPv4 attack, such as the <a href="/bgp-leaks-and-crypto-currencies/">attack</a> during April 2018, this involved /24 and /23 announcements, with the more specific /24 being announced by the nefarious entity. In IPv6, this could be a /48 and /47 announcement. In both scenarios, /24s and /48s are the smallest blocks allowed to be routed globally. In the diagram below, <b>/47</b> is Texas and <b>/48</b> is the more specific Austin, Texas. The new (but nefarious) routes overrode the existing routes for portions of the Internet. The attacker then ran a nefarious DNS server on the normal IP addresses with DNS records pointing at some new nefarious web server instead of the existing server. This attracted the traffic destined for the victim’s domain within the area the nefarious routes were being propagated. The attack succeeded because receiving routers always prefer the more specific prefix.</p>
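<p>The "most specific wins" rule can be demonstrated with Python's standard <code>ipaddress</code> module, using the prefixes from the example above:</p>

```python
import ipaddress

def best_route(destination: str, advertised: list):
    """BGP-style longest-prefix match: among advertised prefixes that
    contain the destination, the most specific (longest) prefix wins."""
    dest = ipaddress.ip_address(destination)
    matching = [n for n in map(ipaddress.ip_network, advertised) if dest in n]
    return max(matching, key=lambda n: n.prefixlen)
```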
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1u8kdog1lBPYcQsjARPOkV/03c3da0661363f6fe10e589673c9aeab/1-Traditional-Equally-specific-Sub-Prefix-Attack-KC-0A-_3x.png" />
            
            </figure>
    <div>
      <h4>Attack two: Equally-Specific-Prefix Attack</h4>
      <a href="#attack-two-equally-specific-prefix-attack">
        
      </a>
    </div>
    <p>In the last attack, the adversary was able to hijack traffic by offering a more specific announcement, but what if the victim’s prefix is <b>/48</b> and a sub-prefix attack is not viable? In this case, an attacker would launch an <b>equally-specific-prefix hijack</b>, where the attacker announces the same prefix as the victim. This means that the AS chooses the preferred route between the victim and the adversary’s announcements based on properties like path length. This attack only ever intercepts a portion of the traffic.</p>
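<p>With equally specific prefixes, each receiving AS falls back to tie-breakers such as AS-path length; routers topologically closer to the adversary see a shorter path through it, which is why only part of the traffic is captured. A heavily simplified sketch of that tie-breaker (real BGP best-path selection has many more steps, such as local preference and MED):</p>

```python
def preferred_announcement(announcements: list) -> dict:
    """Pick the announcement with the shortest AS path. Real routers apply
    local preference, origin type, MED, etc. before this tie-breaker."""
    return min(announcements, key=lambda a: len(a["as_path"]))
```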
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3BKdC4cWYQO25VBryOZey1/828259a83952227bffd27fc4b655bc94/2-Traditional-EquallyPrefix-Attack-0A-KC_3x.png" />
            
            </figure><p>There are more advanced attacks that are covered in more depth in the paper. They are fundamentally similar attacks but are more stealthy.</p><p>Once an attacker has successfully obtained a bogus certificate for a domain that they do not own, they can perform a convincing attack where they pose as the victim’s domain and are able to decrypt and intercept the victim’s TLS traffic. The ability to decrypt the TLS traffic allows the adversary to completely Monster-in-the-Middle (MITM) encrypted TLS traffic and reroute Internet traffic destined for the victim’s domain to the adversary. To increase the stealthiness of the attack, the adversary will continue to forward traffic through the victim’s domain to perform the attack in an undetected manner.</p>
    <div>
      <h3>DNS Spoofing</h3>
      <a href="#dns-spoofing">
        
      </a>
    </div>
    <p>Another way an adversary can gain control of a domain is by spoofing DNS traffic by using a source IP address that belongs to a DNS nameserver. Because anyone can modify their packets’ outbound IP addresses, an adversary can fake the IP address of any DNS nameserver involved in resolving the victim’s domain, and impersonate a nameserver when responding to a CA.</p><p>This attack is more sophisticated than simply spamming a CA with falsified DNS responses. Because each <a href="https://www.cloudflare.com/learning/dns/what-is-dns/">DNS query</a> has its own randomized query identifiers and source port, a fake DNS response must match the DNS query’s identifiers to be convincing. Because these query identifiers are random, making a spoofed response with the correct identifiers is extremely difficult.</p><p>Adversaries can fragment User Datagram Protocol (UDP) DNS packets so that identifying DNS response information (like the random DNS query identifier) is delivered in one packet, while the actual answer section follows in another packet. This way, the adversary spoofs the DNS response to a legitimate DNS query.</p><p>Say an adversary wants to get a mis-issued certificate for victim.com by forcing packet fragmentation and spoofing DNS validation. The adversary sends a DNS nameserver for victim.com an ICMP "fragmentation needed" packet with a small Maximum Transmission Unit, or maximum byte size. This causes the nameserver to start fragmenting DNS responses. When the CA sends a DNS query to a nameserver for victim.com asking for victim.com’s TXT records, the nameserver will fragment the response into the two packets described above: the first contains the query ID and source port, which the adversary cannot spoof, and the second one contains the answer section, which the adversary can spoof.
The adversary can continually send a spoofed answer to the CA throughout the DNS validation process, in the hopes of sliding their spoofed answer in before the CA receives the real answer from the nameserver.</p><p>In doing so, the answer section of a DNS response (the important part!) can be falsified, and an adversary can trick a CA into mis-issuing a certificate.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7LqZ3us0OcigAJXbwPyIbf/656eba59808bab008b5694eb195525c2/DNS-Spoofing_3x.png" />
            
            </figure>
    <div>
      <h3>Solution</h3>
      <a href="#solution">
        
      </a>
    </div>
    <p>At first glance, one might think a Certificate Transparency log could expose a mis-issued certificate and allow a CA to quickly revoke it. CT logs, however, can take up to 24 hours to include newly issued certificates, and certificate revocation can be inconsistently followed among different browsers. We need a solution that allows CAs to proactively prevent these attacks, not retroactively address them.</p><p>We’re excited to announce that Cloudflare provides CAs a free API to leverage our global network to perform DCV from multiple vantage points around the world. This API bolsters the DCV process against BGP hijacking and off-path DNS attacks.</p><p>Given that Cloudflare runs 175+ datacenters around the world, we are in a unique position to perform DCV from multiple vantage points. Each datacenter has a unique path to DNS nameservers or HTTP endpoints, which means that successful hijacking of a BGP route can only affect a subset of DCV requests, further hampering BGP hijacks. And since we use RPKI, we actually sign and verify BGP routes.</p><p>This DCV checker additionally protects CAs against off-path DNS spoofing attacks. An additional feature that we built into the service that helps protect against off-path attackers is DNS query source IP randomization. By making the source IP unpredictable to the attacker, it becomes more challenging to spoof the second fragment of the forged DNS response to the DCV validation agent.</p><p>By comparing multiple DCV results collected over multiple paths, our DCV API makes it virtually impossible for an adversary to mislead a CA into thinking they own a domain when they actually don’t.
CAs can use our tool to ensure that they only issue certificates to rightful domain owners.</p><p>Our multipath DCV checker consists of two services:</p><ol><li><p>DCV agents responsible for performing DCV out of a specific datacenter, and</p></li><li><p>a DCV orchestrator that handles multipath DCV requests from CAs and dispatches them to a subset of DCV agents.</p></li></ol><p>When a CA wants to ensure that DCV occurred without being intercepted, it can send a request to our API specifying the type of DCV to perform and its parameters.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6yKDxJFYuvSllzdqYBDWfx/8c7e27f099e2bf94b54df5bf1810e9f6/Mulitpath-DCV_3x.png" />
            
            </figure><p>The DCV orchestrator then forwards each request to a random subset of over 20 DCV agents in different datacenters. Each DCV agent performs the DCV request and forwards the result to the DCV orchestrator, which aggregates what each agent observed and returns it to the CA.</p><p>This approach can also be generalized to performing multipath queries over DNS records, like Certificate Authority Authorization (CAA) records. CAA records authorize CAs to issue certificates for a domain, so spoofing them to trick unauthorized CAs into issuing certificates is another attack vector that multipath observation prevents.</p><p>As we were developing our multipath checker, we were in contact with the Princeton research group that introduced the proof-of-concept (PoC) of certificate mis-issuance through BGP hijacking attacks. Prateek Mittal, coauthor of the <i>Bamboozling Certificate Authorities with BGP</i> paper, wrote:</p><blockquote><p>“Our analysis shows that domain validation from multiple vantage points significantly mitigates the impact of localized BGP attacks. We recommend that all certificate authorities adopt this approach to enhance web security. A particularly attractive feature of Cloudflare’s implementation of this defense is that Cloudflare has access to a vast number of vantage points on the Internet, which significantly enhances the robustness of domain control validation.”</p></blockquote><p>Our DCV checker follows our belief that trust on the Internet must be distributed, and vetted through third-party analysis (like that provided by Cloudflare) to ensure consistency and security. This tool joins our <a href="/introducing-certificate-transparency-and-nimbus/">pre-existing Certificate Transparency monitor</a> as a set of services CAs are welcome to use in improving the accountability of certificate issuance.</p>
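<p>The orchestrator's aggregation step might look like the following sketch. The result shape and the quorum rule are illustrative assumptions, not Cloudflare's actual API:</p>

```python
from collections import Counter

def aggregate_dcv(results: list, quorum: int) -> dict:
    """Treat validation as safe only if at least `quorum` vantage points
    observed the same answer; a hijack that fools a few agents fails."""
    answers = Counter(r["answer"] for r in results)
    best_answer, count = answers.most_common(1)[0]
    return {"valid": count >= quorum, "answer": best_answer,
            "agents": len(results)}
```

<p>A localized BGP hijack reroutes only some paths, so the spoofed answer cannot reach agreement across the full set of agents.</p>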
    <div>
      <h3>An Opportunity to Dogfood</h3>
      <a href="#an-opportunity-to-dogfood">
        
      </a>
    </div>
    <p>Building our multipath DCV checker also allowed us to <a href="https://en.wikipedia.org/wiki/Eating_your_own_dog_food"><i>dogfood</i></a> multiple Cloudflare products.</p><p>The DCV orchestrator as a simple fetcher and aggregator was a fantastic candidate for <a href="https://developers.cloudflare.com/workers/">Cloudflare Workers</a>. We <a href="/generating-documentation-for-typescript-projects/">implemented the orchestrator in TypeScript</a> using this post as a guide, and created a typed, reliable orchestrator service that was easy to deploy and iterate on. Hooray that we don’t have to maintain our own <code>dcv-orchestrator</code>  server!</p><p>We use <a href="https://developers.cloudflare.com/argo-tunnel/">Argo Tunnel</a> to allow Cloudflare Workers to contact DCV agents. Argo Tunnel allows us to easily and securely expose our DCV agents to the Workers environment. Since Cloudflare has approximately 175 datacenters running DCV agents, we expose many services through Argo Tunnel, and have had the opportunity to load test Argo Tunnel as a power user with a wide variety of origins. Argo Tunnel readily handled this influx of new origins!</p>
    <div>
      <h3>Getting Access to the Multipath DCV Checker</h3>
      <a href="#getting-access-to-the-multipath-dcv-checker">
        
      </a>
    </div>
    <p>If you and/or your organization are interested in trying our DCV checker, email <a>dcv@cloudflare.com</a> and let us know! We’d love to hear more about how multipath querying and validation bolsters the security of your certificate issuance.</p><p>As a new class of BGP and IP spoofing attacks threaten to undermine PKI fundamentals, it’s important that website owners advocate for multipath validation when they are issued certificates. We encourage all CAs to use multipath validation, whether it is Cloudflare’s or their own. Jacob Hoffman-Andrews, Tech Lead, Let’s Encrypt, wrote:</p><blockquote><p>“BGP hijacking is one of the big challenges the web PKI still needs to solve, and we think multipath validation can be part of the solution. We’re testing out our own implementation and we encourage other CAs to pursue multipath as well”</p></blockquote><p>Hopefully in the future, website owners will look at multipath validation support when selecting a CA.</p> ]]></content:encoded>
            <category><![CDATA[Crypto Week]]></category>
            <category><![CDATA[Cryptography]]></category>
            <category><![CDATA[DNS]]></category>
            <category><![CDATA[BGP]]></category>
            <category><![CDATA[TLS]]></category>
            <category><![CDATA[CFSSL]]></category>
            <category><![CDATA[HTTPS]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[Speed & Reliability]]></category>
            <guid isPermaLink="false">142PyPkCaDGbaxHJsIruoK</guid>
            <dc:creator>Dina Kozlov</dc:creator>
            <dc:creator>Gabbi Fisher</dc:creator>
        </item>
        <item>
            <title><![CDATA[League of Entropy: Not All Heroes Wear Capes]]></title>
            <link>https://blog.cloudflare.com/league-of-entropy/</link>
            <pubDate>Mon, 17 Jun 2019 13:01:00 GMT</pubDate>
            <description><![CDATA[ Everything from cryptography to big money lottery to quantum mechanics requires some form of randomness. But what exactly does it mean for a number to be randomly generated and where does the randomness come from? ]]></description>
            <content:encoded><![CDATA[ <p></p><p>To kick off <a href="/welcome-to-crypto-week-2019/">Crypto Week 2019</a>, we are really excited to announce a new solution to a long-standing problem in cryptography. To get a better understanding of the technical side behind this problem, please refer to the next <a href="/inside-the-entropy">post</a> for a deeper dive.</p><p>Everything from cryptography to big money lotteries to <a href="https://arxiv.org/pdf/1611.02176.pdf">quantum mechanics</a> requires some form of randomness. But what exactly does it mean for a number to be randomly generated and where does the randomness come from?</p><p>Generating randomness dates back three thousand years, when the ancients rolled “the bones” to determine their fate. Think of lotteries-- seems simple, right? Everyone buys their tickets, chooses six numbers, and waits for an official to draw them randomly from a basket. Sounds like a foolproof solution. And then in 1980, the host of the Pennsylvania lottery drawing was <a href="https://www.nytimes.com/1981/05/21/us/2-guilty-of-bid-to-rig-pennsylvania-lottery.html">busted for using weighted balls</a> to choose the winning number. This lesson, along with the need of other complex systems for generating random numbers, spurred the creation of random number generators.</p><p>Just like a lottery game selects random numbers unpredictably, a random number generator is a device or software responsible for generating sequences of numbers in an unpredictable manner. As the need for randomness has increased, so has the need for constant generation of substantially large, unpredictable numbers. 
This is why organizations developed <a href="https://csrc.nist.gov/Projects/Interoperable-Randomness-Beacons">publicly available randomness beacons</a> -- servers generating completely unpredictable 512-bit strings (about 155-digit numbers) at regular intervals.</p><p>Now, you might think using a randomness beacon for random generation processes, such as those needed for lottery selection, would make the process resilient against adversarial manipulation, but that’s not the case. Single-source randomness has been exploited to generate biased results.</p><p>Today, randomness beacons generate numbers for lotteries and election audits -- both affect the lives and fortunes of millions of people. Unfortunately, exploitation of the single point of origin of these beacons has created dishonest results that benefited one <a href="https://www.nytimes.com/interactive/2018/05/03/magazine/money-issue-iowa-lottery-fraud-mystery.html">corrupt insider</a>. To thwart exploitation efforts, Cloudflare and other randomness-beacon providers have joined forces to bring users a <b>quorum of decentralized randomness beacons</b>. After all, eight independent globally distributed beacons can be much more trustworthy than one!</p><p>We’re happy to introduce you to ....</p><p>THE LEAGUE …. OF …. ENTROPY !!!!!!</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3u6pMN9g1KQu3ZvwMTwkmb/d90c484cfb789a728cd8fdcc0f002324/image8-1.png" />
            
            </figure>
    <div>
      <h3>What is a randomness beacon?</h3>
      <a href="#what-is-a-randomness-beacon">
        
      </a>
    </div>
    <p>A randomness beacon is a public service that provides unpredictable random numbers at regular intervals.</p><p><a href="https://github.com/dedis/drand">drand</a> (pronounced dee-rand) is a <i>distributed randomness</i> beacon developed by <a href="https://twitter.com/nikkolasg1">Nicolas Gailly</a>; with the help of <a href="https://twitter.com/daeinar">Philipp Jovanovic</a>, and <a href="https://github.com/PizzaWhisperer">Mathilde Raynal</a>. The drand project originated from the research paper <a href="https://www.ieee-security.org/TC/SP2017/papers/413.pdf">Scalable Bias-Resistant Distributed Randomness</a> published at the <a href="https://ieeexplore.ieee.org/abstract/document/7958592">2017 IEEE Symposium on Security and Privacy</a> by <a href="http://ewa.syta.us/">Ewa Syta</a>, <a href="https://philipp.jovanovic.io">Philipp Jovanovic</a>, <a href="https://lefteriskk.github.io/">Eleftherios Kokoris Kogias</a>, <a href="https://github.com/nikkolasg/">Nicolas Gailly</a>, <a href="https://people.epfl.ch/linus.gasser">Linus Gasser</a>, <a href="https://ismailkhoffi.com/">Ismail Khoffi</a>, <a href="http://www.cs.yale.edu/homes/fischer/">Michael J. 
Fischer</a>, <a href="https://bford.info/">Bryan Ford</a>, from the <a href="https://dedis.epfl.ch/">Decentralized/Distributed Systems (DEDIS) lab</a> at <a href="https://www.epfl.ch/">EPFL</a>, <a href="https://www.yale.edu/">Yale University</a>, and <a href="https://www.trincoll.edu/">Trinity College Hartford</a>, with support from <a href="https://researchinstitute.io/">Research Institute</a>.</p><p>For every randomness generation round, drand provides the following properties, as specified in the <a href="https://www.ieee-security.org/TC/SP2017/papers/413.pdf">research paper</a>:</p><ul><li><p><b>Availability</b> - The distributed randomness generation completes successfully with high probability.</p></li><li><p><b>Unpredictability</b> - No party learns anything about the random output of the current round, except with negligible probability, until a sufficient number of drand nodes reveals their contributions in the randomness generation protocol.</p></li><li><p><b>Unbiasability</b> - The random output represents an unbiased, uniformly random value, except with negligible probability.</p></li><li><p><b>Verifiability</b> - The random output is third-party verifiable against the collective public key computed during drand's setup. This serves as the unforgeable attestation that the documented set of drand nodes ran the protocol to produce the one-and-only random output, except with negligible probability.</p></li></ul><p><i>Entropy</i> measures the unpredictable nature of a number. For randomness, the more entropy the better, so naturally it’s where we got our name, the League of Entropy.</p><p>Our founding members are contributing their individual high-entropy sources to provide a more random and unpredictable beacon to generate publicly verifiable random values every sixty seconds. 
The fact that the drand beacon is decentralized and built using appropriate, provably-secure cryptographic primitives increases our confidence that it possesses all the aforementioned properties.</p><p>This global network of servers generating randomness ensures that even if a few servers are offline, the beacon continues to produce new numbers by using the remaining online servers. Even if one or two of the servers or their entropy sources were to be compromised, the rest will still ensure that the jointly-produced entropy is fully unpredictable and unbiasable.</p><p>Who exactly is running this beacon? Currently, The League of Entropy is a consortium of global organizations and individual contributors, including: Cloudflare, Protocol Labs researcher Nicolas Gailly, University of Chile, École polytechnique fédérale de Lausanne (EPFL), Kudelski Security, and EPFL researchers, Philipp Jovanovic and Ludovic Barman.</p>
    <div>
      <h3>Meet the League of Entropy</h3>
      <a href="#meet-the-league-of-entropy">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/BWt5ZDGMydRLWX7xttadU/47daf5fed2017f4359b6354938faca5d/image4-3.png" />
            
            </figure><p>Cloudflare’s <b>LavaRand</b>: <a href="/lavarand-in-production-the-nitty-gritty-technical-details/">LavaRand</a> sources her high entropy from Cloudflare’s wall of lava lamps at our San Francisco Headquarters. The unpredictable flow of “lava” inside the lamps is used as an input to a camera feed into a CSPRNG (Cryptographically Secure PseudoRandom Number Generator) that generates the random value.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1rPgrKMIvEhDQcA9mb96Xg/69c52f8b49ab3d5f4a75097db174491c/3R-uEem_pwlr_yfXEVJJiub0eiVTPEX01rBof0SB1xopcgbzgfOFsH4BLRXKfdnwqpAkXlJbBFG6PUQRBK-UJBqTEGIFKQxQaRaq-5FoZG8ny6WhkahwAMjSTD9X.png" />
            
            </figure><p><a href="https://www.epfl.ch">EPFL</a>’s <b>URand</b>: URand’s power comes from the local randomness generator present on every computer at /dev/urandom. The randomness input is collected from inputs such as keyboard presses, mouse clicks, network traffic, etc. URand bundles these random inputs to produce a continuous stream of randomness.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2MG8iT5EJUNbvBMRnIs0jL/5e874cbd8522463cd841550a3ce6a735/image5-2.png" />
            
            </figure><p><a href="https://random.uchile.cl/en/">UChile</a>’s <b>Seismic Girl</b>: Seismic Girl extracts super verifiable randomness from five sources queried every minute. These sources include: seismic measurements of shakes and earthquakes in Chile; a stream from a local radio station; a selection of Twitter posts; data from the Ethereum blockchain; and their own off-the-shelf RNG card.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3yqHoYjeD9Bc2rqw64Z0AI/2b2913dc2ffbcf95bca4c91e7b43db40/Sg4AYv1L_bjeQyXri7ksWM9w13hQrghD-seBI2ErHx_k4XL5Cm0f2xXZsvDRNEu3ZQCX0klZevk8Y6U3BdS_XU7AaC1VWYeL34ZSjSa1fZXKYg1I7AP6IaxkmAOH.png" />
            
            </figure><p><a href="https://www.kudelskisecurity.com/">Kudelski Security</a>’s <b>ChaChaRand</b>: ChaChaRand uses a CRNG (Cryptographic Random Number Generator) based on the <a href="https://tools.ietf.org/html/rfc7539">ChaCha20</a> stream cipher.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6wjCZoEtdvwmp3n89CMvgM/7aaf481355bad193d251013c1b8466fc/image3-3.png" />
            
            </figure><p><a href="https://protocol.ai/">Protocol Labs</a>’ <b>InterplanetaryRand</b>: InterplanetaryRand uses the power of entropy to ensure protocol safety across space and time by using environmental noise and the Linux PRNG, supplemented by CPU-sourced randomness (<a href="/ensuring-randomness-with-linuxs-random-number-generator/">RdRand</a>).</p><p>Together, our heroes are committed to #savetheinternet by combining their randomness to form a globally distributed and cryptographically verifiable randomness beacon.  </p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/oq8NHvjmXXlKoLJrhmDq9/63750368c5a8c280e05365de5789bef2/pTiFdLtxsabqbVpczCBKVT4dzbP9e9WxeU4lOPbiUIecOR5ER9QlRZG196PfYvJvF_Sbm-DTlUphCBWBMIOtqiTTKmTmRDumoCUlges8uEmNxJh75unUdnP2xobX.png" />
            
            </figure>
    <div>
      <h3>Public versus Private Randomness</h3>
      <a href="#public-versus-private-randomness">
        
      </a>
    </div>
    <p>Different types of randomness are needed for different types of applications.</p><p>The trick to generating secure cryptographic keys is to use large, privately-generated random numbers that no one else can predict. With randomness beacons publicly generating and announcing random numbers, <b>users should NOT be using the output of a randomness beacon for their secret keys</b>, as these numbers are accessible by anyone. If an attacker can guess the random number that a user’s private cryptographic key was derived from, they can crack their system and decrypt confidential information. This simply means that random numbers generated by a public beacon are not safe to use for encryption keys: not because there’s anything wrong with the randomness, but simply because the randomness is public.</p><p>Clients using the drand beacon can request private randomness from some or all of the drand nodes if they would like to generate a random value that will not be publicly announced. For more information on how to do this, check the <a href="https://developers.cloudflare.com/randomness-beacon/">developer docs</a>.</p><p>On the other hand, public randomness is often employed by users requiring a randomness value that is not supposed to be secret but whose generation must be transparent, fair, and unbiased. This is perfect for many purposes such as games, lotteries, and election auditing, where the auditor and the public require transparency into when, how, and how fairly the random value was generated. The League of Entropy provides <b>public randomness</b> that any user can retrieve from <a href="https://leagueofentropy.com">leagueofentropy.com</a>. Users will be able to view the 512-bit string value that is generated every 60 seconds. Why 60 seconds? No particular reason. Theoretically, the randomness generation can go as fast as the hardware allows, but it’s not necessary for most use cases. 
Values generated every 60 seconds give users 1440 random values in one 24-hour period.</p><p>*FRIENDLY REMINDER: THIS RANDOMNESS IS PUBLIC. DO NOT USE IT FOR PRIVATE CRYPTOGRAPHIC KEYS*</p>
    <div>
      <h3>Why does public randomness matter?</h3>
      <a href="#why-does-public-randomness-matter">
        
      </a>
    </div>
    
    <div>
      <h4>Election auditing</h4>
      <a href="#election-auditing">
        
      </a>
    </div>
    <p>In the US, most elections are followed by an audit to verify they were unbiased and conducted fairly. Robust auditing systems increase voter confidence by improving election officials’ ability to respond effectively to allegations of fraud, and to detect bugs in the system.</p><p>Currently, most election ballots and precincts are randomly chosen by election officials. This approach is potentially vulnerable to bias by a corrupt insider who might select certain precincts to present a preferred outcome. Even in a situation where every voter district was tampered with, by using a robust, distributed, and most importantly, <i>unpredictable and unbiasable</i> beacon, election auditors can trust that a small sample of districts is enough to audit, as long as an attacker cannot predict district selection.</p><p>In Chile, election poll workers are randomly selected from a pool of eligible voters. The University of Chile’s Random UChile <a href="https://random.uchile.cl/en/">project</a> has been working on a prototype that uses their randomness beacon for this process. Alejandro Hevia, leader of Random UChile, believes that for election auditing, public randomness is important for transparency and distributed randomness gives people the ability to trust the unlikeliness that <i>multiple</i> contributors to the beacon colluded, as opposed to trusting a single entity.</p>
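<p>As a sketch of how a public beacon output could drive such an audit, the snippet below seeds a deterministic sampler with a published value so that anyone can re-run the selection and get the same precincts. The beacon value and precinct names here are made up purely for illustration; this is not code from any real auditing system.</p>

```python
import random

# Made-up stand-in for a published 512-bit beacon value (hex).
beacon_value = "9c1185a5c5e9fc54612808977ee8f548b2258d31f8a9e1c9a2b54c3c71c2f001"

# Hypothetical list of precinct identifiers eligible for auditing.
precincts = [f"precinct-{i:03d}" for i in range(1, 101)]

# Seeding a PRNG with the public value makes the sample reproducible:
# any observer can re-derive exactly which precincts must be audited.
rng = random.Random(int(beacon_value, 16))
audit_sample = rng.sample(precincts, 5)
print(audit_sample)
```

<p>Because the seed is public and unpredictable in advance, a corrupt insider cannot steer the selection toward favourable precincts without changing the published beacon value, which anyone could detect.</p>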
    <div>
      <h4>Lotteries</h4>
      <a href="#lotteries">
        
      </a>
    </div>
    <p>From 2005 to 2014, the information security director for the Multi-State Lottery Association, Eddie Tipton, <a href="https://www.nytimes.com/interactive/2018/05/03/magazine/money-issue-iowa-lottery-fraud-mystery.html">rigged a random number generator</a> and won the lottery <b>six</b> times!</p><p>Tipton could predict the winning numbers by skipping the standard random seeding process. He was able to insert code into the random number generator that checked the date, day of the week, and time. If these three variables did not align, the random number generator used radioactive material and a Geiger counter to generate a random seed. If the variables aligned as surreptitiously programmed, which usually only happened once a year, then it would generate the seed using a 7-variable formula fed into a <a href="https://en.wikipedia.org/wiki/Mersenne_Twister">Mersenne Twister</a>, a <i>pseudo</i> random-number generator.</p><p>Tipton knew these 7 variables. He knew the small pool of numbers that might be the seed. This knowledge allowed him to predict the results of the Mersenne Twister. This is a scam that a <b>distributed randomness beacon</b> can make substantially more difficult, if not impossible.</p><p>Rob Sand, the former Iowa Assistant Attorney General and current Iowa State Auditor who prosecuted the Tipton cases, is also an advocate for improved controls. He said:</p><p><i>“There is no excuse for an industry that rakes in $80 billion in annual revenue not to use the most sophisticated, truly random means available to ensure integrity.”</i></p>
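<p>Python's built-in <code>random</code> module happens to use the Mersenne Twister as well, so a tiny sketch (with a made-up seed, nothing to do with Tipton's actual formula) shows why knowing the seed is game over: the entire "draw" becomes reproducible.</p>

```python
import random

SEED = 271828  # made-up: suppose an insider knows this seed

# The "official" draw: six numbers sampled from a Mersenne Twister.
official = random.Random(SEED)
winning_numbers = sorted(official.sample(range(1, 60), 6))

# Knowing the seed, the "insider" reproduces the draw exactly.
insider = random.Random(SEED)
assert sorted(insider.sample(range(1, 60), 6)) == winning_numbers
```

<p>A pseudorandom generator is entirely determined by its seed, which is precisely why unpredictable, verifiable entropy for seeding matters so much.</p>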
    <div>
      <h4>Distributed ledger platforms</h4>
      <a href="#distributed-ledger-platforms">
        
      </a>
    </div>
    <p>In many cryptocurrencies and blockchain-based distributed computing platforms, such as <a href="https://en.wikipedia.org/wiki/Ethereum">Ethereum</a>, there is often a need for random selection at the application layer. One solution to prevent bias for such a random selection is to use a distributed randomness beacon like drand to generate the random value. Justin Drake, researcher at the <a href="https://www.ethereum.org/">Ethereum Foundation</a>, believes "randomness from a drand-type federation could be a particularly good match for real-time decentralized applications on Ethereum such as live gaming and gambling". This is because such a federation could deliver ultra-low-latency randomness for a broad range of applications where public randomness is required.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4hGB9Ryb87DWFPSKjFCiQQ/5bcdf5446f0344d2c0a123226dba867c/IzxDdb91VENIgVK05HqMqA6u2mwL-pMx3GLBdxJQjCeaPSD1bGTjE3mnOyInEOgq0lyovRBghGCcDbPpWSPO4ToT2tR4aftwcLdQNVwCKn3Lghh2FkK8UbEkC63J.png" />
            
            </figure>
    <div>
      <h3>Let’s get you on drand!</h3>
      <a href="#lets-get-you-on-drand">
        
      </a>
    </div>
    <p>To learn more about the League of Entropy and how to use the distributed randomness beacon, visit <a href="https://leagueofentropy.com">https://leagueofentropy.com</a>. The website periodically displays the randomness generated by the network, and you can even see previously generated values. Go ahead, try it out!</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/45QOeg172UTQopTpSz4urP/4dadd599bf2c2c1b3a447de62174eca1/30p6dmRd2omn-YRzrY7oPmYEVUwRFOo7izqymD6xvYQ9txl6hzOwachb_AkrHilkBKEr2kZu_lCGSXhJQrubJ_q1gPbf-NYu4V3o98cvIfa_bq2Ug1MLk6PrSysL.png" />
            
            </figure>
    <div>
      <h3>How to join the league:</h3>
      <a href="#how-to-join-the-league">
        
      </a>
    </div>
    <p>Want to join the league?? We’re not exclusive!</p><p>If you are an organization or an individual who is interested in contributing to the drand beacon, check out <a href="https://developers.cloudflare.com/randomness-beacon/">the developer docs</a> for more information regarding the requirements for setting up a server and joining the existing group. drand is currently in its beta release phase and an approval request must be sent to <a href="mailto:leagueofentropy@googlegroups.com">leagueofentropy@googlegroups.com</a> in order to be approved as a contributing server.</p><p><b>UPDATE:</b> As of <a href="https://drand.love/blog/2020/08/10/drand-launches-v1-0/">August 2020</a>, drand is a production-grade network supporting critical use cases including Filecoin. For the most up-to-date information on the drand project and the League of Entropy, see <a href="https://drand.love/">drand.love</a>.</p>
    <div>
      <h3>Looking into the future</h3>
      <a href="#looking-into-the-future">
        
      </a>
    </div>
    <p>It only makes sense that the Internet of the future will demand unpredictable randomness beacons. The League of Entropy is out there now, creating the basis for future systems to leverage trustworthy public randomness. Our goal is to increase user trust and provide a one-stop shop for all your public entropy needs. Come, join us!</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5u1TlHjajsOzmkNeIqsGoH/b881384074d8b079c0234550377f87aa/image2-4.png" />
            
            </figure> ]]></content:encoded>
            <category><![CDATA[Crypto Week]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[Cryptography]]></category>
            <category><![CDATA[Entropy]]></category>
            <category><![CDATA[LavaRand]]></category>
            <guid isPermaLink="false">66MECAsvLf18mL0k238aK8</guid>
            <dc:creator>Dina Kozlov</dc:creator>
        </item>
        <item>
            <title><![CDATA[Inside the Entropy]]></title>
            <link>https://blog.cloudflare.com/inside-the-entropy/</link>
            <pubDate>Mon, 17 Jun 2019 13:00:00 GMT</pubDate>
            <description><![CDATA[ Generating random outcomes is an essential part of everyday life; from lottery drawings and constructing competitions, to performing deep cryptographic computations.  ]]></description>
            <content:encoded><![CDATA[ <p></p><blockquote><p>Randomness, randomness everywhere; Nor any verifiable entropy.</p></blockquote><p>Generating random outcomes is an essential part of everyday life, from lottery drawings and constructing competitions, to performing deep cryptographic computations. To use randomness, we must have some way to 'sample' it. This requires interpreting some natural phenomenon (such as a fair dice roll) as an event that generates some random output. From a computing perspective, we interpret random outputs as bytes that we can then use in algorithms (such as drawing a lottery) to achieve the functionality that we want.</p><p>The sampling of randomness securely and efficiently is a critical component of all modern computing systems. For example, nearly all public-key cryptography relies on the fact that algorithms can be seeded with bytes generated from genuinely random outcomes.</p><p>In scientific experiments, a random sampling of results is necessary to ensure that data collection measurements are not skewed. Until now, generating random outputs in a way that we can verify that they are indeed random has been very difficult, typically involving taking a variety of statistical measurements.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/CEsiU1gZNLRG4VZ8pKenE/4fe5ae39fa2c63683bf7db2540ad266a/image9-2.png" />
            
            </figure><p>During Crypto week, Cloudflare is releasing a new <a href="/league-of-entropy">public randomness beacon</a> as part of the launch of the <a href="https://leagueofentropy.com">League of Entropy</a>. The League of Entropy is a network of beacons that produces <i>distributed</i>, <i>publicly verifiable</i> random outputs for use in applications where the nature of the randomness must be publicly audited. The underlying cryptographic architecture is based on the <a href="https://github.com/dedis/drand">drand project</a>.</p><p>Verifiable randomness is essential for ensuring trust in various institutional decision-making processes such as <a href="/league-of-entropy">elections and lotteries</a>. There are also cryptographic applications that require verifiable randomness. In the land of decentralized consensus mechanisms, the <a href="https://dfinity.org/static/dfinity-consensus-0325c35128c72b42df7dd30c22c41208.pdf">DFINITY approach</a> uses random seeds to decide the outcome of leadership elections. In this setting, it is essential that the randomness is publicly verifiable so that the outcome of the leadership election is trustworthy. Such a situation arises more generally in <a href="https://en.wikipedia.org/wiki/Sortition">Sortitions</a>: an election where leaders are selected as a random individual (or subset of individuals) from a larger set.</p><p>In this blog post, we will give a technical overview behind the cryptography used in the distributed randomness beacon, and how it can be used to generate publicly verifiable randomness. We believe that distributed randomness beacons have a huge amount of utility in realizing the <a href="/welcome-to-crypto-week-2019/">Internet of the Future</a>; where we will be able to rely on distributed, decentralized solutions to problems of a global-scale.</p>
    <div>
      <h2>Randomness &amp; entropy</h2>
      <a href="#randomness-entropy">
        
      </a>
    </div>
    <p>A source of randomness is measured in terms of the amount of <i>entropy</i> it provides. Think about the entropy provided by a random output as a score to indicate how “random” the output actually is. The notion of information entropy was concretised by the famous scientist Claude Shannon in his paper <a href="https://en.wikipedia.org/wiki/A_Mathematical_Theory_of_Communication">A Mathematical Theory of Communication</a>, and is sometimes known as <a href="https://en.wikipedia.org/wiki/Entropy_(information_theory)"><i>Shannon Entropy</i></a>.</p><p>A common way to think about random outputs is: a sequence of bits derived from some random outcome. For the sake of argument, consider a fair 8-sided dice roll with sides marked 0-7. The outputs of the dice can be written as the bit-strings <code>000,001,010,...,111</code>. Since the dice is fair, any of these outputs is equally likely. This means that each of the bits is equally likely to be <code>0</code> or <code>1</code>. Consequently, interpreting the output of the dice roll as a random output yields randomness with <code>3</code> bits of entropy.</p><p>More generally, if a perfect source of randomness guarantees strings with <code>n</code> bits of entropy, then it generates bit-strings where each bit is equally likely to be <code>0</code> or <code>1</code>. This allows us to predict the value of any bit with maximum probability <code>1/2</code>. If the outputs are sampled from such a perfect source, we consider them <i>uniformly distributed</i>. If we sample the outputs from a source where one bit is predictable with higher probability, then the string has <code>n-1</code> bits of entropy. 
To go back to the dice analogy, rolling a 6-sided dice provides less than <code>3</code> bits of entropy because the possible outputs are <code>000,001,010,011,100,101</code> and so the 1st and 2nd bits are more likely to be set to <code>0</code> than to <code>1</code>.</p><p>It is possible to mix entropy sources using specifically designed mixing functions to retrieve something with even greater entropy. The maximum resulting entropy is the sum of the entropies of the individual sources used as input.</p>
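<p>The two dice examples can be checked directly against Shannon's formula. The snippet below is an illustrative sketch, not part of drand: it computes the entropy of a uniform choice over a set of outcomes.</p>

```python
from collections import Counter
from math import log2

def shannon_entropy(outcomes):
    """Entropy in bits of a uniform draw over the given outcomes."""
    counts = Counter(outcomes)
    total = sum(counts.values())
    return -sum((c / total) * log2(c / total) for c in counts.values())

# A fair 8-sided die: every 3-bit string is equally likely.
print(shannon_entropy(range(8)))  # 3.0

# A fair 6-sided die: only 6 of the 8 strings occur, so log2(6) ≈ 2.585 bits.
print(shannon_entropy(range(6)))
```

<p>The 6-sided die falls short of 3 bits exactly because some bit patterns never occur, matching the bias described above.</p>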
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4LcFWOh6MsFEqmYrGB6PXa/f87eb05da963db50a5f8a0a24d260d76/combined-entropy-_2x.png" />
            
            </figure>
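<p>One simple mixing function hashes a length-prefixed concatenation of samples from each source. This is only an illustrative sketch (real mixers, such as the one in the Linux kernel, are more involved, and this is not how any particular beacon mixes its inputs):</p>

```python
import hashlib
import os

def mix(*sources: bytes) -> bytes:
    """Mix samples from independent entropy sources into one 256-bit value.

    The digest is unpredictable provided at least one input is."""
    h = hashlib.sha256()
    for s in sources:
        # Length-prefix each sample so different input splits can't collide.
        h.update(len(s).to_bytes(8, "big"))
        h.update(s)
    return h.digest()

# Hypothetical sources: OS randomness plus a stand-in "lava lamp" sample.
mixed = mix(os.urandom(32), b"lava-lamp-frame-0001")
print(mixed.hex())
```

<p>Hashing cannot create entropy, but it lets the output inherit the unpredictability of whichever inputs were genuinely random.</p>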
    <div>
      <h4>Sampling randomness</h4>
      <a href="#sampling-randomness">
        
      </a>
    </div>
    <p>To sample randomness, let’s first identify the appropriate sources. There are many natural phenomena that one can use:</p><ul><li><p>atmospheric noise;</p></li><li><p>radioactive decay;</p></li><li><p>turbulent motion, like that generated in Cloudflare’s wall of <a href="/lavarand-in-production-the-nitty-gritty-technical-details/">lava lamps(!)</a>.</p></li></ul>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2nJtxum8JhhrkxOyplTTka/e96eaea45650c205a68cda95c4ee9b8a/pasted-image-0--1-.png" />
            
            </figure><p>Unfortunately, these phenomena require very specific measuring tools, which are prohibitively expensive to install in mainstream consumer electronics. As such, most personal computing devices usually use external usage characteristics for seeding specific generator functions that output randomness as and when the system requires it. These characteristics include keyboard typing patterns and speed, and mouse movement – since such usage patterns are based on the human user, it is assumed they provide sufficient entropy as a randomness source. An example of a random number generator that takes entropy from these characteristics is the Linux <code>/dev/urandom</code> device; modern CPUs can supplement this with hardware randomness via the <a href="https://en.wikipedia.org/wiki/RdRand">RdRand</a> instruction.</p><p>Naturally, it is difficult to tell whether a system is <i>actually</i> returning random outputs by only inspecting the outputs. There are statistical tests that detect whether a series of outputs is not uniformly distributed, but these tests cannot ensure that they are unpredictable. This means that it is hard to detect if a given system has had its randomness generation compromised.</p>
    <div>
      <h2>Distributed randomness</h2>
      <a href="#distributed-randomness">
        
      </a>
    </div>
    <p>It’s clear we need alternative methods for sampling randomness so that we can provide guarantees that trusted mechanisms, such as elections and lotteries, take place in secure tamper-resistant environments. The <a href="https://github.com/dedis/drand/">drand</a> project was started by researchers at <a href="https://www.epfl.ch/about/">EPFL</a> to address this problem. The drand charter is to provide an easily configurable randomness beacon running at geographically distributed locations around the world. The intention is for each of these beacons to generate portions of randomness that can be combined into a single random string that is publicly verifiable.</p><p>This functionality is achieved using <i>threshold cryptography</i>. Threshold cryptography seeks to derive solutions for standard cryptographic problems by combining information from multiple distributed entities. The notion of the threshold means that if there are <code>n</code> entities, then any <code>t</code> of the entities can combine to construct some cryptographic object (like a ciphertext, or a digital signature). These threshold systems are characterised by a setup phase, where each entity learns a <i>share</i> of data. They will later use this share of data to create a combined cryptographic object with a subset of the other entities.</p>
    <div>
      <h3>Threshold randomness</h3>
      <a href="#threshold-randomness">
        
      </a>
    </div>
    <p>In the case of a distributed randomness protocol, there are <code>n</code> <i>randomness beacons</i> that broadcast random values sampled from their initial data share, and the current state of the system. This data share is created during a trusted setup phase, and also takes in some internal random value that is generated by the beacon itself.</p><p>When a user needs randomness, they send requests to some number <code>t</code> of beacons, where <code>t &lt; n</code>, and combine these values using a specific procedure. The result is a random value that can be verified and used for public auditing mechanisms.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7BeauvWICHvAH9uX1tCqaY/47a57124c026459500347cf14757f818/pasted-image-0--2-.png" />
            
            </figure><p>Consider what happens if some proportion <code>c/n</code> of the randomness beacons are <i>corrupted</i> at any one time. The nature of a threshold cryptographic system is that, as long as <code>c &lt; t</code>, the end result still remains random.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5NM7oB6KqjsC5ys55cVEGP/9165d5ad0c235240b6e455cf08e62f1e/pasted-image-0--3-.png" />
            
            </figure><p>If <code>c</code> exceeds <code>t</code>, then the random values produced by the system become predictable and the notion of randomness is lost. In summary, the distributed randomness procedure provides verifiably random outputs with sufficient entropy only when <code>c &lt; t</code>.</p><p>By distributing the beacons independently of each other and in geographically disparate locations, the probability that <code>t</code> locations can be corrupted at any one time is extremely low. The minimum choice of <code>t</code> is equal to <code>n/2</code>.</p>
    <div>
      <h2>How does it actually work?</h2>
      <a href="#how-does-it-actually-work">
        
      </a>
    </div>
    <p>What we described above sounds a bit like magic<sup>tm</sup>. Even if <code>c = t-1</code>, we can still ensure that the output is indeed random and unpredictable! To make it clearer how this works, let’s dive a bit deeper into the underlying cryptography.</p><p>Two core components of drand are: a <i>distributed key generation</i> (DKG) procedure, and a <i>threshold signature scheme</i>. These core components are used in the setup and randomness generation procedures, respectively. In just a bit, we’ll outline how drand uses these components (without navigating too deeply into the onerous mathematics).</p>
    <div>
      <h3>Distributed key generation</h3>
      <a href="#distributed-key-generation">
        
      </a>
    </div>
    <p>At a high-level, the DKG procedure creates a distributed secret key that is formed of <code>n</code> different key pairs <code>(vk_i, sk_i)</code>, each one being held by the entity <code>i</code> in the system. These key pairs will eventually be used to instantiate a <code>(t,n)</code>-threshold signature scheme (we will discuss this more later). In essence, <code>t</code> of the entities will be able to combine to construct a valid signature on any message.</p><p>To think about how this might work, consider a distributed key generation scheme that creates <code>n</code> distributed keys that are going to be represented by pizzas. Each pizza is split into <code>n</code> slices and one slice from each is secretly passed to one of the participants. Each entity receives one slice from each of the different pizzas (<code>n</code> in total) and combines these slices to form their own pizza. Each combined pizza is unique and secret for each entity, representing their own key pair.</p>
    <div>
      <h4>Mathematical intuition</h4>
      <a href="#mathematical-intuition">
        
      </a>
    </div>
    <p>Mathematically speaking, and rather than thinking about pizzas, we can describe the underlying phenomenon by reconstructing lines or curves on a graph. We can take two coordinates on an <code>(x,y)</code> plane and immediately (and uniquely) define a line with the equation <code>y = ax+b</code>. For example, the points <code>(2,3)</code> and <code>(4,7)</code> immediately define a line with gradient <code>(7-3)/(4-2) = 2</code>, so <code>a=2</code>. You can then derive the coefficient <code>b = -1</code> by substituting either of the coordinates into the equation <code>y = 2x + b</code>. By <i>uniquely</i>, we mean that only the line <code>y = 2x - 1</code> satisfies the two coordinates that are chosen; no other choice of <code>a</code> or <code>b</code> fits.</p>
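To make this concrete, here is a small Python sketch (purely illustrative) that recovers the gradient and intercept from the two example points in the text:

```python
# Recover the unique line y = a*x + b through two points,
# using the example coordinates (2,3) and (4,7) from the text.
def line_through(p1, p2):
    (x1, y1), (x2, y2) = p1, p2
    a = (y2 - y1) / (x2 - x1)  # gradient
    b = y1 - a * x1            # intercept, from y1 = a*x1 + b
    return a, b

a, b = line_through((2, 3), (4, 7))
# a == 2.0 and b == -1.0, i.e. the line y = 2x - 1
```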
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2htl50xQeT4euGS3VR6rxf/de0f4b27b223bd88f4fc90ebd204367e/line2-2-.png" />
            
            </figure><p>The curve <code>ax+b</code> has degree <code>1</code>, where the degree of the equation refers to the highest order multiplication of unknown variables in the equation. That might seem like mathematical jargon, but the equation above contains only one term <code>ax</code>, which depends on the unknown variable <code>x</code>. In this term, the <i>exponent</i> (or <i>power</i>) of <code>x</code> is <code>1</code>, and so the degree of the entire equation is also <code>1</code>.</p><p>Likewise, by taking three coordinate pairs in the same plane, we uniquely define a quadratic curve with an equation of the form <code>y = ax^2 + bx + c</code>, with the coefficients <code>a,b,c</code> uniquely defined by the chosen coordinates. The process is a bit more involved than in the linear case, but it essentially starts in the same way, using three coordinate pairs <code>(x_1, y_1)</code>, <code>(x_2, y_2)</code> and <code>(x_3, y_3)</code>.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/24FGvRaMCx5lUA2umjk6wi/4e76792bba003072606d6196c0febdc2/line3.png" />
            
            </figure><p>By a quadratic curve, we mean a curve of degree <code>2</code>. We can see that this curve has degree <code>2</code> because it contains two terms <code>ax^2</code> and <code>bx</code> that depend on <code>x</code>. The highest order term is the <code>ax^2</code> term with an exponent of <code>2</code>, so this curve has degree <code>2</code> (ignore the term <code>bx</code>, which has a smaller power).</p><p>What we are ultimately trying to show is that this approach scales for curves of degree <code>n</code> (of the form <code>y = a_n x^n + … + a_1 x + a_0</code>). So, if we take <code>n+1</code> coordinates on the <code>(x,y)</code> plane, then we can uniquely reconstruct the curve of this form entirely. Such degree <code>n</code> equations are also known as <i>polynomials</i> of degree <code>n</code>.</p><p>To generalise the approach to arbitrary degrees, we need a formula that takes <code>n+1</code> pairs of coordinates and returns a polynomial of degree <code>n</code>. Fortunately, such a formula already exists: it is known as the <a href="https://en.wikipedia.org/wiki/Lagrange_polynomial#Definition"><i>Lagrange interpolation polynomial</i></a>. Using this formula, we can reconstruct any <code>n</code> degree polynomial using <code>n+1</code> unique pairs of coordinates.</p><p>Going back to pizzas temporarily, it will become clear in the next section how this Lagrange interpolation procedure essentially describes the dissemination of one slice (corresponding to <code>(x,y)</code> coordinates) taken from a single pizza (the entire <code>n-1</code> degree polynomial) among <code>n</code> participants. Running this procedure <code>n</code> times in parallel allows each entity to construct their entire pizza (or the eventual key pair).</p>
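As a sketch of how interpolation works in practice, the following Python snippet evaluates the Lagrange interpolation polynomial through a handful of points. The example quadratic <code>y = x^2 + 1</code> is our own illustration, not anything drand-specific:

```python
from fractions import Fraction

def lagrange_eval(points, x):
    # Evaluate, at x, the unique polynomial of degree len(points)-1
    # passing through all the given (x_i, y_i) coordinate pairs,
    # using the Lagrange interpolation formula.
    total = Fraction(0)
    for i, (xi, yi) in enumerate(points):
        term = Fraction(yi)
        for j, (xj, _) in enumerate(points):
            if i != j:
                term *= Fraction(x - xj, xi - xj)
        total += term
    return total

# Three points taken from the quadratic y = x^2 + 1 are enough to
# recover any other value on that curve:
pts = [(0, 1), (1, 2), (3, 10)]
assert lagrange_eval(pts, 2) == 5  # 2^2 + 1
```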
    <div>
      <h4>Back to key generation</h4>
      <a href="#back-to-key-generation">
        
      </a>
    </div>
    <p>Intuitively, in the DKG procedure we want to distribute <code>n</code> key pairs among <code>n</code> participants. This effectively means running <code>n</code> parallel instances of a <code>t</code>-out-of-<code>n</code> <a href="https://en.wikipedia.org/wiki/Shamir's_Secret_Sharing">Shamir Secret Sharing</a> scheme. This secret sharing scheme is built entirely upon the polynomial interpolation technique that we described above.</p><p>In a single instance, we take the secret key to be the first coefficient of a polynomial of degree <code>t-1</code> and the public key is a published value that depends on this secret key, but does not reveal the actual coefficient. Think of RSA, where we have a number <code>N = pq</code> for secret large prime numbers <code>p,q</code>, where <code>N</code> is public but does not reveal the actual factorisation. Notice that if the polynomial is reconstructed using the interpolation technique above, then we immediately learn the secret key, because the first coefficient will be made explicit.</p><p>Each secret sharing scheme publishes shares, where each share is a different evaluation of the polynomial (dependent on the entity <code>i</code> receiving the key share). These evaluations are essentially coordinates on the <code>(x,y)</code> plane.</p><p>By running <code>n</code> parallel instances of the secret sharing scheme, each entity receives <code>n</code> shares and then combines all of these to form their overall key pair <code>(vk_i, sk_i)</code>.</p><p>The DKG procedure uses <code>n</code> parallel secret sharing procedures along with <a href="https://link.springer.com/chapter/10.1007/3-540-46766-1_9">Pedersen commitments</a> to distribute the key pairs. 
We explain in the next section how this procedure forms part of provisioning the randomness beacons.</p><p>In summary, it is important to remember that <b>each party</b> in the DKG protocol generates a random secret key from the <code>n</code> shares that they receive, and computes the corresponding public key from it. We will now explain how each entity uses this key pair to perform the cryptographic procedure that is used by the drand protocol.</p>
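To illustrate the secret sharing building block, here is a toy Python sketch of a <code>t</code>-out-of-<code>n</code> Shamir Secret Sharing scheme over a small prime field. The modulus and parameters are our own illustrative choices, not those used by drand, and this omits the Pedersen commitments that a real DKG needs:

```python
import random

P = 2**61 - 1  # a prime modulus; a toy parameter, not drand's actual field

def share(secret, t, n):
    # The secret is the constant coefficient of a random degree-(t-1)
    # polynomial; the n shares are evaluations of that polynomial.
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0 recovers the constant coefficient.
    # pow(den, P-2, P) is the modular inverse (Fermat's little theorem).
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = share(12345, t=3, n=5)
assert reconstruct(shares[:3]) == 12345  # any 3 of the 5 shares suffice
```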
    <div>
      <h3>Threshold signature scheme</h3>
      <a href="#threshold-signature-scheme">
        
      </a>
    </div>
    <p>Remember: a standard signature scheme considers a key-pair <code>(vk,sk)</code>, where <code>vk</code> is a public verification key and <code>sk</code> is a private signing key. So, messages <code>m</code> signed with <code>sk</code> can be verified with <code>vk</code>. The security of the scheme ensures that it is difficult for anybody who does not hold <code>sk</code> to compute a valid signature for any message <code>m</code>.</p><p>A <i>threshold signature scheme</i> allows a set of users holding distributed key-pairs <code>(vk_i,sk_i)</code> to compute intermediate signatures <code>u_i</code> on a given message <code>m</code>.</p><p>Given knowledge of some number <code>t</code> of intermediate signatures <code>u_i</code>, a valid signature <code>u</code> on the message <code>m</code> can be reconstructed under the combined secret key <code>sk</code>. The public key <code>vk</code> can also be inferred using knowledge of the public keys <code>vk_i</code>, and then this public key can be used to verify <code>u</code>.</p><p>Again, think back to reconstructing the degree <code>t-1</code> curves on graphs with <code>t</code> known coordinates. In this case, the coordinates correspond to the intermediate signatures <code>u_i</code>, and the signature <code>u</code> corresponds to the entire curve. For the actual signature schemes, the mathematics are much more involved than in the DKG procedure, but the principle is the same.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2SpuFg5fE6Trs3KhVfGeQq/829504847821500097e2b7474a83037d/threshold-sig-3-.png" />
            
            </figure>
    <div>
      <h3>drand protocol</h3>
      <a href="#drand-protocol">
        
      </a>
    </div>
    <p>The <code>n</code> beacons that will take part in the drand project are identified. In the trusted setup phase, the DKG protocol from above is run, and each beacon effectively creates a key pair <code>(vk_i, sk_i)</code> for a threshold signature scheme. In other words, this key pair will be able to generate intermediate signatures that can be combined to create an entire signature for the system.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4ZkOUnhX8QiVnhPou3nC9/c7657ac1ec50d7156fee239f69e57e28/DKG-6-.png" />
            
            </figure><p>For each round (occurring once a minute, for example), the beacons agree on a signature <code>u</code> evaluated over a message containing the previous round’s signature and the current round’s number. This signature <code>u</code> is the result of combining the intermediate signatures <code>u_i</code> over the same message. Each intermediate signature <code>u_i</code> is created by a beacon using its secret <code>sk_i</code>.</p><p>Once this aggregation completes, each beacon displays the signature for the current round, along with the previous signature and round number. This allows any client to publicly verify the signature over this data and confirm that the beacons aggregated honestly. This provides a chain of verifiable signatures, extending back to the first round of output. In addition, there are threshold signature schemes that output signatures that are indistinguishable from random sequences of bytes. Therefore, these signatures can be used directly as verifiable randomness for the applications we discussed previously.</p>
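The chaining of rounds can be sketched as follows. This toy Python example uses a SHA-256 hash as a stand-in for the aggregated threshold BLS signature, purely to show how each round commits to its predecessor; real drand outputs are pairing-based BLS signatures, and the exact message encoding may differ:

```python
import hashlib

def round_message(prev_sig: bytes, round_no: int) -> bytes:
    # The data signed each round: the previous round's signature plus the
    # current round number. (Illustrative serialization, not drand's exact one.)
    return prev_sig + round_no.to_bytes(8, "big")

def beacon_output(prev_sig: bytes, round_no: int) -> bytes:
    # Stand-in for the aggregated threshold signature u over the round
    # message. Real drand outputs are BLS signatures, not hashes.
    return hashlib.sha256(round_message(prev_sig, round_no)).digest()

def verify_chain(rounds) -> bool:
    # Check that each round's output commits to its predecessor's output,
    # establishing a chain back to the first round.
    return all(
        cur["sig"] == beacon_output(prev["sig"], cur["round"])
        for prev, cur in zip(rounds, rounds[1:])
    )

# Build a short chain of rounds from a fixed genesis value.
chain = [{"round": 1, "sig": hashlib.sha256(b"genesis").digest()}]
for r in range(2, 5):
    chain.append({"round": r, "sig": beacon_output(chain[-1]["sig"], r)})
```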
    <div>
      <h3>What does drand use?</h3>
      <a href="#what-does-drand-use">
        
      </a>
    </div>
    <p>To instantiate the required threshold signature scheme, drand uses the <code>(t,n)</code>-<a href="https://www.iacr.org/archive/pkc2003/25670031/25670031.pdf">BLS signature scheme</a> of Boneh, Lynn and Shacham. In particular, we can instantiate this scheme in the elliptic curve setting using <a href="https://github.com/dfinity/bn">Barreto-Naehrig</a> curves. Moreover, the BLS signature scheme outputs sufficiently large signatures that are randomly distributed, giving them enough entropy to be sources of randomness. Specifically, the signatures are randomly distributed over 64 bytes.</p><p>BLS signatures use a specific form of mathematical operation known as a <i>cryptographic pairing</i>. Pairings can be computed in certain elliptic curves, including the Barreto-Naehrig curve configurations. A detailed description of pairing operations is beyond the scope of this blog post, though it is important to remember that these operations are integral to how BLS signatures work.</p><p>Concretely speaking, all drand cryptographic operations are carried out using a library built on top of Cloudflare's implementation of the <a href="https://github.com/cloudflare/bn256/tree/lattices">bn256 curve</a>. The Pedersen DKG protocol follows the design of <a href="https://link.springer.com/article/10.1007/s00145-006-0347-3">Gennaro et al.</a>.</p>
    <div>
      <h3>How does it work?</h3>
      <a href="#how-does-it-work">
        
      </a>
    </div>
    <p>The randomness beacons are synchronised in rounds. At each round, a beacon produces a new signature <code>u_i</code>, using its private key <code>sk_i</code>, over the previous signature generated and the round ID. These signatures are usually broadcast on the URL <code>drand.&lt;host&gt;.com/api/public</code>. They can be verified using the keys <code>vk_i</code> over the same data that was signed. Signing the previous signature and the current round identifier establishes a chain of trust for the randomness beacon that can be traced back to the original signature value.</p><p>The randomness can be retrieved by combining the signatures from each of the beacons using the threshold property of the scheme. This reconstruction of the signature <code>u</code> from each intermediate signature <code>u_i</code> is done internally by the League of Entropy nodes. Each beacon broadcasts the entire signature <code>u</code>, which can be accessed over the HTTP endpoint above.</p>
    <div>
      <h2>The drand beacon</h2>
      <a href="#the-drand-beacon">
        
      </a>
    </div>
    <p>As we mentioned at the start of this blog post, Cloudflare has launched our <a href="/league-of-entropy">distributed randomness beacon</a>. This beacon is part of a network of beacons from different institutions around the globe that form the <a href="https://leagueofentropy.com">League of  Entropy</a>.</p><p>The Cloudflare beacon uses <a href="/lavarand-in-production-the-nitty-gritty-technical-details/">LavaRand</a> as its internal source of randomness for the DKG. Other League of Entropy drand beacons have their own sources of randomness.</p>
    <div>
      <h3>Give me randomness!</h3>
      <a href="#give-me-randomness">
        
      </a>
    </div>
    <blockquote><p>The below API endpoints are obsolete. Please see <a href="https://drand.love">https://drand.love</a> for the most up-to-date documentation.</p></blockquote><p>The drand beacon allows you to retrieve the latest random value from the League of Entropy using a simple HTTP request:</p>
            <pre><code>curl https://drand.cloudflare.com/api/public</code></pre>
            <p>The response is a JSON blob of the form:</p>
            <pre><code>{
    "round": 7,
    "previous": &lt;hex-encoded-previous-signature&gt;,
    "randomness": {
        "gid": 21,
        "point": &lt;hex-encoded-new-signature&gt;
    }
}</code></pre>
            <p>where <code>randomness.point</code> is the signature <code>u</code> aggregated among the entire set of beacons.</p><p>The signature is computed over a message comprising the previous round’s signature, <code>previous</code>, and the current round number, <code>round</code>, using the aggregated secret key of the system. This signature can be verified using the entire public key <code>vk</code> of the Cloudflare beacon, learned using another HTTP request:</p>
            <pre><code>curl https://drand.cloudflare.com/api/public</code></pre>
            <p>There are eight collaborators in the League of Entropy. You can learn the current round of randomness (or the system’s public key) by querying these beacons on the HTTP endpoints listed above.</p><ul><li><p><a href="https://drand.cloudflare.com:443">https://drand.cloudflare.com:443</a></p></li><li><p><a href="https://random.uchile.cl:8080">https://random.uchile.cl:8080</a></p></li><li><p><a href="https://drand.cothority.net:7003">https://drand.cothority.net:7003</a></p></li><li><p><a href="https://drand.kudelskisecurity.com:443">https://drand.kudelskisecurity.com:443</a></p></li><li><p><a href="https://drand.lbarman.ch:443">https://drand.lbarman.ch:443</a></p></li><li><p><a href="https://drand.nikkolasg.xyz:8888">https://drand.nikkolasg.xyz:8888</a></p></li><li><p><a href="https://drand.protocol.ai:8080">https://drand.protocol.ai:8080</a></p></li><li><p><a href="https://drand.zerobyte.io:8888">https://drand.zerobyte.io:8888</a></p></li></ul>
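A client consuming the legacy response format shown above might extract the randomness like this. This is a Python sketch against that format only; the hex strings are placeholders, not real beacon output:

```python
import json

# Parse a response in the (now-obsolete) format shown above.
# The hex values below are placeholders, not real drand output.
sample = json.loads('''
{
    "round": 7,
    "previous": "deadbeef",
    "randomness": {"gid": 21, "point": "cafef00d"}
}
''')

round_no = sample["round"]
# randomness.point is the hex-encoded aggregated signature u; decode it
# to raw bytes to use as the round's public random value (after verifying
# it against the group public key vk).
randomness = bytes.fromhex(sample["randomness"]["point"])
```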
    <div>
      <h2>Randomness &amp; the future</h2>
      <a href="#randomness-the-future">
        
      </a>
    </div>
    <p>Cloudflare will continue to take an active role in the drand project, both as a contributor and by running a randomness beacon with the League of Entropy. The League of Entropy is a worldwide joint effort of individuals and academic institutions. We at Cloudflare believe it can help us realize the mission of helping Build a Better Internet. For more information on Cloudflare's participation in the League of Entropy, visit <a href="https://leagueofentropy.com">https://leagueofentropy.com</a> or read <a href="/league-of-entropy">Dina's blog post</a>.</p><p>Cloudflare would like to thank all of its collaborators in the League of Entropy: EPFL, UChile, Kudelski Security, and Protocol Labs. This work would not have been possible without those who contributed to the <a href="https://github.com/dedis/drand">open-source drand project</a>. We would also like to thank Gabbi Fisher, Brendan McMillion, and Mahrud Sayrafi for their work in launching the Cloudflare randomness beacon.</p> ]]></content:encoded>
            <category><![CDATA[Crypto Week]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[Cryptography]]></category>
            <category><![CDATA[Entropy]]></category>
            <category><![CDATA[Programming]]></category>
            <guid isPermaLink="false">6yDv8PkyP9X3dUvYYh3MHZ</guid>
            <dc:creator>Alex Davidson</dc:creator>
        </item>
        <item>
            <title><![CDATA[Welcome to Crypto Week 2019]]></title>
            <link>https://blog.cloudflare.com/welcome-to-crypto-week-2019/</link>
            <pubDate>Sun, 16 Jun 2019 17:07:57 GMT</pubDate>
            <description><![CDATA[ The Internet is an extraordinarily complex and evolving ecosystem. Its constituent protocols range from the ancient and archaic (hello FTP) to the modern and sleek (meet WireGuard), with a fair bit of everything in between.  ]]></description>
            <content:encoded><![CDATA[ 
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/15NFCZdp2cpRHNfys7tpAq/15487537909975789f358f467d314eb0/image21.png" />
            
            </figure><p>The Internet is an extraordinarily complex and evolving ecosystem. Its constituent protocols range from the ancient and archaic (hello <a href="https://developers.cloudflare.com/spectrum/getting-started/ftp/">FTP</a>) to the modern and sleek (meet <a href="/boringtun-userspace-wireguard-rust/">WireGuard</a>), with a fair bit of everything in between. This evolution is ongoing, and as one of the <a href="https://bgp.he.net/report/exchanges#_participants">most connected</a> networks on the Internet, Cloudflare has a duty to be a good steward of this ecosystem. We take this responsibility to heart: Cloudflare’s mission is to help build a better Internet. In this spirit, we are very proud to announce Crypto Week 2019.</p><p>Every day this week we’ll announce a new project or service that uses modern cryptography to build a more secure, trustworthy Internet. Everything we release this week will be free and immediately useful. This blog is a fun exploration of the themes of the week.</p><ul><li><p>Monday: <a href="/league-of-entropy/"><b>The League of Entropy</b></a><b>, </b><a href="/inside-the-entropy/"><b>Inside the Entropy</b></a></p></li><li><p>Tuesday: <a href="/secure-certificate-issuance/"><b>Securing Certificate Issuance using Multipath Domain Control Validation</b></a></p></li><li><p>Wednesday: <a href="/cloudflare-ethereum-gateway/"><b>Cloudflare's Ethereum Gateway</b></a><b>, </b><a href="/continuing-to-improve-our-ipfs-gateway/"><b>Continuing to Improve our IPFS Gateway</b></a></p></li><li><p>Thursday: <a href="/the-quantum-menace/"><b>The Quantum Menace</b></a>, <a href="/towards-post-quantum-cryptography-in-tls/"><b>Towards Post-Quantum Cryptography in TLS</b></a>, <a href="/introducing-circl/"><b>Introducing CIRCL: An Advanced Cryptographic Library</b></a></p></li><li><p>Friday: <a href="/secure-time/"><b>Introducing time.cloudflare.com</b></a></p></li></ul>
    <div>
      <h3>The Internet of the Future</h3>
      <a href="#the-internet-of-the-future">
        
      </a>
    </div>
    <p>Many pieces of the Internet in use today were designed in a different era with different assumptions. The Internet’s success is based on strong foundations that support constant reassessment and improvement. Sometimes these improvements require deploying new protocols.</p><p>Performing an upgrade on a system as large and decentralized as the Internet can’t be done by decree:</p><ul><li><p>There are too many economic, cultural, political, and technological factors at play.</p></li><li><p>Changes must be compatible with existing systems and protocols to even be considered for adoption.</p></li><li><p>To gain traction, new protocols must provide tangible improvements for users. Nobody wants to install an update that doesn’t improve their experience!</p></li></ul><p><b>The last time the Internet had a complete reboot and upgrade</b> was during <a href="https://www.internetsociety.org/blog/2016/09/final-report-on-tcpip-migration-in-1983/">TCP/IP flag day</a> <b>in 1983</b>. Back then, the Internet (called ARPANET) had fewer than ten thousand hosts! To have an Internet-wide flag day today to switch over to a core new protocol is inconceivable; the scale and diversity of the components involved are far too great. Too much would break. It’s challenging enough to deprecate <a href="https://dnsflagday.net/2019/">outmoded functionality</a>. In some ways, the open Internet is a victim of its own success. The bigger a system grows and the longer it stays the same, the <a href="/why-tls-1-3-isnt-in-browsers-yet/">harder it is to change</a>. The Internet is like a massive barge: it takes forever to steer in a different direction and it’s carrying a lot of garbage.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5qyclW7jXghND7AAOoygOv/483e2764aa861229b7487235355208b0/image16.jpg" />
            
            </figure><p>ARPANET, 1983 (<a href="https://www.computerhistory.org/internethistory/1980s/">Computer History Museum</a>)</p><p>As you would expect, many of the warts of the early Internet still remain. Both academic security researchers and real-life adversaries are still finding and exploiting vulnerabilities in the system. Many vulnerabilities are due to the fact that most of the protocols in use on the Internet have a weak notion of trust inherited from the early days. With 50 hosts online, it’s relatively easy to trust everyone, but in a world-scale system, that trust breaks down in fascinating ways. The primary tool to scale trust is cryptography, which helps provide some measure of accountability, though it has its own complexities.</p><p>In an ideal world, the Internet would provide a trustworthy substrate for human communication and commerce. Some people naïvely assume that this is the natural direction the evolution of the Internet will follow. However, constant improvement is not a given. 
<b>It’s possible that the Internet of the future will actually be</b> <b><i>worse</i></b> <b>than the Internet today: less open, less secure, less private, less</b> <b><i>trustworthy</i></b><b>.</b> There are strong incentives to weaken the Internet on a fundamental level by <a href="https://www.ispreview.co.uk/index.php/2019/04/google-uk-isps-and-gov-battle-over-encrypted-dns-and-censorship.html">Governments</a>, by businesses <a href="https://www.theatlantic.com/technology/archive/2017/03/encryption-wont-stop-your-internet-provider-from-spying-on-you/521208/">such as ISPs</a>, and even by the <a href="https://www.cyberscoop.com/tls-1-3-weakness-financial-industry-ietf/">financial institutions</a> entrusted with our personal data.</p><p>In a system with as many stakeholders as the Internet, <b>real change requires principled commitment from all invested parties.</b> At Cloudflare, we believe everyone is entitled to an Internet built on a solid foundation of trust. <b>Crypto Week</b> is our way of helping nudge the Internet’s evolution in a more trust-oriented direction. Each announcement this week helps bring the Internet of the future to the present in a tangible way.</p>
    <div>
      <h3>Ongoing Internet Upgrades</h3>
      <a href="#ongoing-internet-upgrades">
        
      </a>
    </div>
    <p>Before we explore the Internet of the future, let’s explore some of the previous and ongoing attempts to upgrade the Internet’s fundamental protocols.</p>
    <div>
      <h4>Routing Security</h4>
      <a href="#routing-security">
        
      </a>
    </div>
    <p>As we highlighted in <a href="/crypto-week-2018/">last year’s Crypto Week</a>, <b>one of the weak links on the Internet is routing</b>. Not all networks are directly connected.</p><p>To send data from one place to another, <b>you might have to rely on intermediary networks to pass your data along.</b> A packet sent from one host to another <b>may have to be passed through up to a dozen of these intermediary networks.</b> <i>No single network knows the full path the data will have to take to get to its destination, it only knows which network to pass it to next.</i>  <b>The protocol that determines how packets are routed is called the</b> <a href="https://www.cloudflare.com/learning/security/glossary/what-is-bgp/"><b>Border Gateway Protocol (BGP).</b></a> Generally speaking, networks use BGP to announce to each other which addresses they know how to route packets for and (dependent on a set of complex rules) these networks share what they learn with their neighbors.</p><p>Unfortunately, <b>BGP is completely insecure:</b></p><ul><li><p><b>Any network can announce any set of addresses to any other network,</b> even addresses they don’t control. This leads to a phenomenon called <i>BGP hijacking</i>, where networks are tricked into sending data to the wrong network.</p></li><li><p><b>A BGP hijack</b> is most often caused by accidental misconfiguration, but <b>can also be the result of malice on the network operator’s part</b>.</p></li><li><p><b>During a BGP hijack, a network inappropriately announces a set of addresses to other networks</b>, which results in packets destined for the announced addresses to be routed through the illegitimate network.</p></li></ul>
    <div>
      <h4>Understanding the risk</h4>
      <a href="#understanding-the-risk">
        
      </a>
    </div>
    <p>If the packets represent unencrypted data, this can be a big problem as it <b>allows the hijacker to read or even change the data:</b></p><ul><li><p>In 2018, a rogue network <a href="/bgp-leaks-and-crypto-currencies/">hijacked the addresses of a service called MyEtherWallet</a>; financial transactions were routed through the attacker’s network, which modified them, <b>resulting in the theft of over a hundred thousand dollars of cryptocurrency.</b></p></li></ul>
    <div>
      <h4>Mitigating the risk</h4>
      <a href="#mitigating-the-risk">
        
      </a>
    </div>
    <p>The <a href="/tag/rpki/">Resource Public Key Infrastructure (RPKI)</a> system helps bring some trust to BGP by <b>enabling networks to utilize cryptography to digitally sign network routes with certificates, making BGP hijacking much more difficult.</b></p><ul><li><p>This enables participants of the network to gain assurances about the authenticity of route advertisements. <a href="/introducing-certificate-transparency-and-nimbus/">Certificate Transparency</a> (CT) is a tool that enables additional trust for certificate-based systems. Cloudflare operates the <a href="https://ct.cloudflare.com/logs/cirrus">Cirrus CT log</a> to support RPKI.</p></li></ul><p>Since we announced our support of RPKI last year, routing security has made big strides. More routes are signed, more networks validate RPKI, and the <a href="https://github.com/cloudflare/cfrpki">software ecosystem has matured</a>, but this work is not complete. Most networks are still vulnerable to BGP hijacking. For example, <a href="https://www.cnet.com/news/how-pakistan-knocked-youtube-offline-and-how-to-make-sure-it-never-happens-again/">Pakistan knocked YouTube offline with a BGP hijack</a> back in 2008, and could likely do the same today. Adoption here is driven less by providing a benefit to users than by reducing systemic risk, which is not the strongest motivating factor for adopting a complex new technology. Full routing security on the Internet could take decades.</p>
    <div>
      <h3>DNS Security</h3>
      <a href="#dns-security">
        
      </a>
    </div>
    <p>The <a href="https://www.cloudflare.com/learning/dns/what-is-dns/">Domain Name System (DNS)</a> is the phone book of the Internet. Or, for anyone under 25 who doesn’t remember phone books, it’s the system that takes hostnames (like cloudflare.com or facebook.com) and returns the Internet address where that host can be found. For example, as of this publication, <a href="http://www.cloudflare.com">www.cloudflare.com</a> is 104.17.209.9 and 104.17.210.9 (IPv4) and 2606:4700::c629:d7a2, 2606:4700::c629:d6a2 (IPv6). Like BGP, <b>DNS is completely insecure. Queries and responses sent unencrypted over the Internet are modifiable by anyone on the path.</b></p><p>There are many ongoing attempts to add security to DNS, such as:</p><ul><li><p><a href="https://www.cloudflare.com/learning/dns/dnssec/ecdsa-and-dnssec/">DNSSEC</a> that <b>adds a chain of digital signatures to DNS responses</b></p></li><li><p>DoT/DoH that <b>wraps DNS queries in the TLS encryption protocol</b> (more on that later)</p></li></ul><p>Both technologies are slowly gaining adoption, but have a long way to go.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1dE9CBbFXOXM7eXg22tDnr/d5aac38c166f0b58eccaddaf77cf0c8d/DNSSEC-adoption-over-time-1.png" />
            
            </figure><p>DNSSEC-signed responses served by Cloudflare</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/186wxeg2yS6VN7fRqMeBS6/cd8261cbee791770fdbdde5b8e856974/DoT_DoH.png" />
            
            </figure><p>Cloudflare’s 1.1.1.1 resolver queries are already over 5% DoT/DoH</p><p>Just like RPKI, <b>securing DNS comes with a performance cost,</b> making it less attractive to users. However,</p><ul><li><p><b>Services like 1.1.1.1 provide</b> <a href="https://www.dnsperf.com/dns-provider/cloudflare"><b>extremely fast DNS</b></a>, which means that for many users, <a href="https://blog.mozilla.org/futurereleases/2019/04/02/dns-over-https-doh-update-recent-testing-results-and-next-steps/">encrypted DNS is faster than the unencrypted DNS</a> from their ISP.</p></li><li><p>This <b>performance improvement makes it appealing for customers</b> of privacy-conscious applications, like Firefox and Cloudflare’s 1.1.1.1 app, to adopt secure DNS.</p></li></ul>
    <div>
      <h3>The Web</h3>
      <a href="#the-web">
        
      </a>
    </div>
    <p><b>Transport Layer Security (TLS)</b> is a cryptographic protocol that gives two parties the ability to communicate over an encrypted and authenticated channel. <b>TLS protects communications from eavesdroppers even in the event of a BGP hijack.</b> TLS is what puts the “S” in <b>HTTPS</b>. TLS protects web browsing against multiple types of network adversaries.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6SY5pZINknbDszMPvOf77c/84fb41510e25717243e5ff06d631eeb3/past-connection-1.png" />
            
            </figure><p>Requests hop from network to network over the Internet</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2SrnXAdMqSONKHaYJVarm7/5b3a59fd2159f3d19704cf267dc30cc0/MITM-past-copy-2.png" />
            
            </figure><p>For unauthenticated protocols, an attacker on the path can impersonate the server</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/H8ticQEIC5ebrX7WXsmPW/23e71675bd585233362bd62d9f6840e4/BGP-hijack-past-1.png" />
            
            </figure><p>Attackers can use BGP hijacking to change the path so that communication can be intercepted</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4gg6sYN8sqXSKeTPETI4b0/347ddfea0a9de130aa69726f1dd870da/PKI-validated-connectio-1.png" />
            
            </figure><p>Authenticated protocols are protected from interception attacks</p><p>The adoption of TLS on the web is partially driven by the fact that:</p><ul><li><p><b>It’s easy and free for websites to get an authentication certificate</b> (via <a href="https://letsencrypt.org/">Let’s Encrypt</a>, <a href="/introducing-universal-ssl/">Universal SSL</a>, etc.)</p></li><li><p>Browsers make TLS adoption appealing to website operators by <b>only supporting new web features such as HTTP/2 over HTTPS.</b></p></li></ul><p>This has led to the rapid adoption of HTTPS over the last five years.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/78UVeYpDvOgQp0CQIyb20R/97d29db95d234771da008b5f83dfc7dc/image12.jpg" />
            
            </figure><p>HTTPS adoption curve (<a href="https://transparencyreport.google.com/https/overview">from Google Chrome</a>)‌‌</p><p>To further that adoption, TLS recently got an upgrade in TLS 1.3, <b>making it faster </b><i><b>and</b></i><b> more secure (a combination we love)</b>. It’s taking over the Internet!</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1NFORFUn1FFn10RTJ5HUCS/a21ff9ea0eaa41916107fa9522b13f38/tls.13-adoption-1.png" />
            
            </figure><p>TLS 1.3 adoption over the last 12 months (from Cloudflare's perspective)</p><p>Despite this fantastic progress in the adoption of security for routing, DNS, and the web, <b>there are still gaps in the trust model of the Internet.</b> There are other things needed to help build the Internet of the future. To find and identify these gaps, we lean on research experts.</p>
    <div>
      <h3>Research Farm to Table</h3>
      <a href="#research-farm-to-table">
        
      </a>
    </div>
    <p>Cryptographic security on the Internet is a hot topic and there have been many flaws and issues recently pointed out in academic journals. Researchers often <b>study the vulnerabilities of the past and ask:</b></p><ul><li><p>What other critical components of the Internet have the same flaws?</p></li><li><p>What underlying assumptions can subvert trust in these existing systems?</p></li></ul><p>The answers to these questions help us decide what to tackle next. Some recent research topics we’ve learned about include:</p><ul><li><p>Quantum Computing</p></li><li><p>Attacks on Time Synchronization</p></li><li><p>DNS attacks affecting Certificate issuance</p></li><li><p>Scaling distributed trust</p></li></ul><p>Cloudflare keeps abreast of these developments and we do what we can to bring these new ideas to the Internet at large. In this respect, we’re truly standing on the shoulders of giants.</p>
    <div>
      <h3>Future-proofing Internet Cryptography</h3>
      <a href="#future-proofing-internet-cryptography">
        
      </a>
    </div>
    <p>The new protocols we are currently deploying (RPKI, DNSSEC, DoT/DoH, TLS 1.3) use relatively modern cryptographic algorithms published in the 1970s and 1980s.</p><ul><li><p>The security of these algorithms is based on hard mathematical problems in the field of number theory, such as <a href="https://en.wikipedia.org/wiki/RSA_(cryptosystem)">factoring</a> and the <a href="/a-relatively-easy-to-understand-primer-on-elliptic-curve-cryptography/">elliptic curve discrete logarithm</a> problem.</p></li><li><p>If you can solve the hard problem, you can crack the code. <b>Using a bigger key makes the problem harder</b> and the code more difficult to break, <b>but also slows performance.</b></p></li></ul><p>Modern Internet protocols typically pick keys large enough to make it infeasible to break with <a href="https://whatis.techtarget.com/definition/classical-computing">classical computers</a>, but no larger. <b>The sweet spot is around 128 bits of security, meaning a computer has to do approximately 2¹²⁸ operations to break it.</b></p><p><a href="https://eprint.iacr.org/2013/635.pdf">Arjen Lenstra and others</a> created a useful measure of security levels by <b>comparing the amount of energy it takes to break a key to the amount of water you can boil</b> using that much energy. You can think of this as the electric bill you’d get if you run a computer long enough to crack the key.</p><ul><li><p><b>35-bit security is “Teaspoon Security”</b> – It takes about the same amount of energy to break a 35-bit key as it does to boil a teaspoon of water (pretty easy).</p></li></ul>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2iNGrNB4yH2iwZY14nQyWB/7bdf9a8e1874fe0444f532823e22fcd3/image20.png" />
            
            </figure><ul><li><p><b>65 bits gets you up to “Pool Security” –</b> The energy needed to boil the average amount of water in a swimming pool.</p></li></ul>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1xzgnzDf3dwHcNWdTfAyeC/de62077e9a936e937a78c2dbc4874ece/image8.png" />
            
            </figure><ul><li><p><b>105 bits is “Sea Security”</b> – The energy needed to boil the Mediterranean Sea.</p></li></ul>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5CYdmr4aGJ67Wq0j9t3iXR/2df460cc061c0c747ab66be9a866beed/8reIkbszxaKMxOsDDEzOB4ljqnVtQdJBQsYEz-uL-AZnNL0jUKSd4CbSAz-yS9tvpi_ki1JoYZ_-ZktMSbqRtDSVFMjHvsyBtgmc2rPuiDr9b-Fj6DvEJvLF7tWP.png" />
            
            </figure><ul><li><p><b>114 bits is “Global Security” –</b> The energy needed to boil all water on Earth.</p></li></ul>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/58XtfEmY4rpTkx7Y71AnDW/b1b31eaf34b318cc83bdcad3c20d85ff/image14.png" />
            
            </figure><ul><li><p><b>128-bit security is safely beyond that</b> <b>of Global Security</b> – Anything larger is excessive.</p></li><li><p><b>256-bit security corresponds to “Universal Security”</b> – The estimated mass-energy of the observable universe. So, if you ever hear someone suggest 256-bit AES, you know they mean business.</p></li></ul>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5oWF3OLLr5WVWIga7mMGf8/d21426fe5df41b9c85bc6b8eca2e4331/image18.png" />
            
            </figure>
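<p>The scale above is easy to reproduce: since each extra bit of security doubles the attacker’s work, the jump between any two levels is just a power of two. A quick back-of-envelope sketch:</p>

```python
# Back-of-envelope: each additional bit of security doubles the attacker's work.
def work_ratio(bits_a, bits_b):
    """How many times more work breaking a bits_b key costs than a bits_a key."""
    return 2 ** (bits_b - bits_a)

# Teaspoon (35 bits) -> Pool (65 bits): about a billion times more energy.
print(work_ratio(35, 65))    # 1073741824
# Global Security (114 bits) -> the 128-bit sweet spot: 16384x all water on Earth.
print(work_ratio(114, 128))  # 16384
# 128 -> 256 bits: another factor of 2^128, far beyond "Universal Security".
print(work_ratio(128, 256))
```

<p>The exponential growth is the whole point: a modest increase in key size buys an astronomical increase in the attacker’s energy bill.</p>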
    <div>
      <h3>Post-Quantum of Solace</h3>
      <a href="#post-quantum-of-solace">
        
      </a>
    </div>
    <p>As far as we know, <b>the algorithms we use for cryptography are functionally uncrackable</b> using any known algorithm that classical computers can run. <b>Quantum computers change this calculus.</b> Instead of transistors and bits, a quantum computer uses the effects of <a href="https://en.wikipedia.org/wiki/Quantum_entanglement">quantum mechanics</a> to perform calculations that just aren’t possible with classical computers. As you can imagine, quantum computers are very difficult to build. However, despite large-scale quantum computers not existing quite yet, computer scientists have already developed algorithms that can only run efficiently on quantum computers. Surprisingly, it turns out that <b>with a sufficiently powerful quantum computer, most of the hard mathematical problems we rely on for Internet security become easy!</b></p><p>Although there are still <a href="https://www.quantamagazine.org/gil-kalais-argument-against-quantum-computers-20180207/">quantum-skeptics</a> out there, <a href="http://fortune.com/2018/12/15/quantum-computer-security-encryption/">some experts</a> <b>estimate that large quantum computers will exist within 15-30 years, posing a risk to every security protocol online.</b> Progress is moving quickly; every few months a more powerful quantum computer <a href="https://en.wikipedia.org/wiki/Timeline_of_quantum_computing">is announced</a>.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/281v5SkqTn89jPkEnFjkPE/587423b66380be1051c67f2f17263e69/image1-2.png" />
            
            </figure><p>Luckily, there are cryptography algorithms that rely on different hard math problems that seem to be resistant to attack from quantum computers. These math problems form the basis of so-called <i>quantum-resistant</i> (or <i>post-quantum</i>) cryptography algorithms that can run on classical computers. These algorithms can be used as substitutes for most of our current quantum-vulnerable algorithms.</p><ul><li><p>Some quantum-resistant algorithms (such as <a href="https://en.wikipedia.org/wiki/McEliece_cryptosystem">McEliece</a> and <a href="https://en.wikipedia.org/wiki/Lamport_signature">Lamport Signatures</a>) were invented decades ago, but there’s a reason they aren’t in common use: they <b>lack some of the nice properties of the algorithms we’re currently using, such as key size and efficiency.</b></p></li><li><p>Some quantum-resistant algorithms <b>require much larger keys to provide 128-bit security</b></p></li><li><p>Some are very <b>CPU intensive</b>,</p></li><li><p>And some just <b>haven’t been studied enough to know if they’re secure.</b></p></li></ul><p>It is possible to swap our current set of quantum-vulnerable algorithms with new quantum-resistant algorithms, but it’s a daunting engineering task. With widely deployed <a href="https://en.wikipedia.org/wiki/IPsec">protocols</a>, it is hard to make the transition from something fast and small to something slower, bigger or more complicated without providing concrete user benefits. <b>When exploring new quantum-resistant algorithms, minimizing user impact is of utmost importance</b> to encourage adoption. 
This is a big deal, because almost all the protocols we use to protect the Internet are vulnerable to quantum computers.</p><p>Cryptography-breaking quantum computing is still in the distant future, but we must start the transition now to ensure that today’s secure communications are safe from tomorrow’s quantum-powered onlookers. That said, quantum computers are not the most <i>timely</i> problem facing the Internet. We haven’t addressed that one... yet.</p>
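<p>To see why these older schemes pay a price in key size, consider a sketch of a Lamport one-time signature, mentioned above among the decades-old quantum-resistant designs. Its security rests only on a hash function, but the secret key needs two random preimages per digest bit, and signing reveals half of them, so each key can safely sign only one message. This is a minimal illustration, not a production implementation:</p>

```python
import hashlib
import secrets

HASH = hashlib.sha256
N = 256  # bits in the message digest

def keygen():
    # Secret key: two random 32-byte preimages per digest bit -> 16 KiB of key.
    sk = [[secrets.token_bytes(32) for _ in range(N)] for _ in range(2)]
    # Public key: the hash of every preimage.
    pk = [[HASH(x).digest() for x in row] for row in sk]
    return sk, pk

def bits(msg):
    digest = int.from_bytes(HASH(msg).digest(), "big")
    return [(digest >> i) & 1 for i in range(N)]

def sign(sk, msg):
    # Reveal one preimage per digest bit. Signing a second message would
    # reveal more preimages -- which is why the key is strictly one-time.
    return [sk[b][i] for i, b in enumerate(bits(msg))]

def verify(pk, msg, sig):
    return all(HASH(s).digest() == pk[b][i]
               for (i, b), s in zip(enumerate(bits(msg)), sig))

sk, pk = keygen()
sig = sign(sk, b"hello")
print(verify(pk, b"hello", sig))     # True
print(verify(pk, b"tampered", sig))  # False
```

<p>Compare the 16 KiB secret key and 8 KiB signature to the 32-byte keys of the elliptic curve schemes in use today, and the deployment challenge becomes obvious.</p>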
    <div>
      <h3>Attacking Time</h3>
      <a href="#attacking-time">
        
      </a>
    </div>
    <p>Just like DNS, BGP, and HTTP, <b>the Network Time Protocol (NTP) is fundamental to how the Internet works</b>. And like these other protocols, it is <b>completely insecure</b>.</p><ul><li><p>Last year, <b>Cloudflare introduced</b> <a href="/roughtime/"><b>Roughtime</b></a> support as a mechanism for computers to access the current time from a trusted server in an authenticated way.</p></li><li><p>Roughtime is powerful because it <b>provides a way to distribute trust among multiple time servers</b> so that if one server attempts to lie about the time, it will be caught.</p></li></ul><p>However, Roughtime is not exactly a secure drop-in replacement for NTP.</p><ul><li><p><b>Roughtime lacks the complex mechanisms of NTP</b> that allow it to compensate for network latency and yet maintain precise time, especially if the time servers are remote. This leads to <b>imprecise time</b>.</p></li><li><p>Roughtime also <b>involves expensive cryptography that can further reduce precision</b>. This lack of precision makes Roughtime useful for browsers and other systems that need coarse time to validate certificates (most certificates are valid for 3 months or more), but some systems (such as those used for financial trading) require precision to the millisecond or below.</p></li></ul><p>With Roughtime we supported the time protocol of the future, but there are things we can do to help improve the health of security online <i>today</i>.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/76dqfHQzRcGyA09D55f46W/35be1f73c91f7d4e128b9582638b80ab/image2-3.png" />
            
            </figure><p>Some academic researchers, including Aanchal Malhotra of Boston University, have <a href="https://www.cs.bu.edu/~goldbe/NTPattack.html">demonstrated</a> a variety of attacks against NTP, including <b>BGP hijacking and off-path User Datagram Protocol (UDP) attacks.</b></p><ul><li><p>Some of these attacks can be avoided by connecting to an NTP server that is close to you on the Internet.</p></li><li><p>However, to bring cryptographic trust to time while maintaining precision, we need something in between NTP and Roughtime.</p></li><li><p>To solve this, it’s natural to turn to the same system of trust that enabled us to patch HTTP and DNS: Web PKI.</p></li></ul>
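<p>One simple way to apply the quorum idea to time is to query several independent servers and accept the median of their reports, so a single lying server cannot drag the result far from the honest answer. A hedged sketch (server reports here are made-up numbers, not a real protocol exchange):</p>

```python
import statistics

def quorum_time(reported_times):
    """Median of several servers' reported times (seconds since the epoch).
    With 2f+1 reports, up to f dishonest servers cannot move the median
    outside the range spanned by the honest servers."""
    if not reported_times:
        raise ValueError("need at least one report")
    return statistics.median(reported_times)

honest = [1700000000.1, 1700000000.3, 1700000000.2]
print(quorum_time(honest))  # 1700000000.2

# A malicious server reporting a time far in the past is simply out-voted:
print(quorum_time(honest + [1600000000.0, 1700000000.25]))
```

<p>Roughtime goes further by chaining requests so a client can also <i>prove</i> which server lied, but the median already shows why more independent vantage points make the system more trustworthy.</p>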
    <div>
      <h3>Attacking the Web PKI</h3>
      <a href="#attacking-the-web-pki">
        
      </a>
    </div>
    <p>The Web PKI is similar to the RPKI, but is more widely visible since it relates to websites rather than routing tables.</p><ul><li><p>If you’ve ever clicked the lock icon on your browser’s address bar, you’ve interacted with it.</p></li><li><p>The PKI relies on a set of trusted organizations called Certificate Authorities (CAs) to issue certificates to websites and web services.</p></li><li><p>Websites use these certificates to authenticate themselves to clients as <a href="https://www.cloudflare.com/learning/ssl/transport-layer-security-tls/">part of the TLS protocol</a> in HTTPS.</p></li></ul>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/wzqfOvaCcj0TCr3ezbOY3/f87fc64402bee2de14b4c4ba5d0b93bb/pki-validated.png" />
            
            </figure><p>TLS provides encryption and integrity from the client to the server with the help of a digital certificate </p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1bOHTpPBXfi0VeYoJikyfD/cbc3e67f2294e322436a5b542bb3644e/attack-against-PKI-validated-connectio-2.png" />
            
            </figure><p>TLS connections are safe against MITM, because the client doesn’t trust the attacker’s certificate</p><p>While we were all patting ourselves on the back for moving the web to HTTPS, <a href="https://dl.acm.org/citation.cfm?id=3243790">some</a> <a href="https://www.princeton.edu/~pmittal/publications/bgp-tls-usenix18.pdf">researchers</a> managed to find and exploit <b>a weakness in the system: the process for getting HTTPS certificates.</b></p><p>Certificate Authorities (CAs) use a process called <b>domain control validation (DCV) to ensure that they only issue certificates to website owners who legitimately request them.</b></p><ul><li><p>Some CAs do this validation manually, which is secure, but <b>can’t scale to the total number of websites deployed today.</b></p></li><li><p>More progressive CAs have <b>automated this validation process, but rely on insecure methods</b> (HTTP and DNS) to validate domain ownership.</p></li></ul><p>Without ubiquitous cryptography in place (DNSSEC may never reach 100% deployment), there is <b>no completely secure way to bootstrap this system</b>. So, let’s look at how to distribute trust using other methods.</p><p><b>One tool at our disposal is the distributed nature of the Cloudflare network.</b></p><p>Cloudflare is global. We have locations all over the world connected to dozens of networks. That means we have different <i>vantage points</i>, resulting in different ways to traverse networks. This diversity can prove an advantage when dealing with BGP hijacking, since <b>an attacker would have to hijack multiple routes from multiple locations to affect all the traffic between Cloudflare and other distributed parts of the Internet.</b> The natural diversity of the network raises the cost of the attacks.</p><p>Maintaining a distributed set of connections to the Internet and using them as a quorum is a powerful way to distribute trust, with or without cryptography.</p>
    <div>
      <h3>Distributed Trust</h3>
      <a href="#distributed-trust">
        
      </a>
    </div>
    <p>This idea of distributing the source of trust is powerful. Last year we announced the <b>Distributed Web Gateway</b> that</p><ul><li><p>Enables users to access content on the InterPlanetary File System (IPFS), a network structured to <b>reduce the trust placed in any single party.</b></p></li><li><p>Even if a participant of the network is compromised, <b>it can’t be used to distribute compromised content</b> because the network is content-addressed.</p></li><li><p>However, using content-based addressing is <b>not the only way to distribute trust between multiple independent parties.</b></p></li></ul><p>Another way to distribute trust is to literally <b>split authority between multiple independent parties</b>. <a href="/red-october-cloudflares-open-source-implementation-of-the-two-man-rule/">We’ve explored this topic before.</a> In the context of Internet services, this means ensuring that no single server can authenticate itself to a client on its own. For example,</p><ul><li><p>In HTTPS the server’s private key is the lynchpin of its security. Compromising the owner of the private key (by <a href="https://www.theguardian.com/world/2013/oct/03/lavabit-ladar-levison-fbi-encryption-keys-snowden">hook</a> or by <a href="https://www.symantec.com/connect/blogs/how-attackers-steal-private-keys-digital-certificates">crook</a>) <b>gives an attacker the ability to impersonate (spoof) that service</b>. This single point of failure <b>puts services at risk.</b> You can mitigate this risk by distributing the authority to authenticate the service between multiple independently-operated services.</p></li></ul>
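<p>The simplest way to split authority between two parties, in the spirit of the two-man rule linked above, is XOR secret sharing: the key is split so that either share alone is statistically independent of the key, and only both parties together can reconstruct it. A minimal 2-of-2 sketch (the key material is illustrative):</p>

```python
import secrets

def split_secret(secret):
    """2-of-2 XOR split: share_a is uniformly random, so either share by
    itself reveals nothing at all about the secret."""
    share_a = secrets.token_bytes(len(secret))
    share_b = bytes(a ^ s for a, s in zip(share_a, secret))
    return share_a, share_b

def combine(share_a, share_b):
    # XORing the shares back together recovers the original secret.
    return bytes(a ^ b for a, b in zip(share_a, share_b))

key = b"very-secret-signing-key"
a, b = split_secret(key)
assert combine(a, b) == key
print(len(a), len(b))  # each share is as long as the key itself
```

<p>An attacker who compromises the server holding one share learns nothing; only by compromising both independently-operated parties can the key be reconstructed. Threshold schemes generalize this so that, say, any 3 of 5 parties suffice.</p>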
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3wuX22HpjVAbiwcwEDojxB/df21a1462febcf64f1a613ab075a104a/TLS-server-compromise-1.png" />
            
            </figure><p>TLS doesn’t protect against server compromise</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/34nZukSMSjU2QDwNr5zkhf/dd5a6024a01c4dc0c6c6153e0205d91c/future-distributed-trust-copy-2-1.png" />
            
            </figure><p>With distributed trust, multiple parties combine to protect the connection</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/29B6lr8FV35nqA8Bw9TgtL/34fb894d8d74dc083424398f1e65e80f/future-distributed-trust.png" />
            
            </figure><p>An attacker that has compromised one of the servers cannot break the security of the system‌‌</p><p>The Internet barge is old and slow, and we’ve only been able to improve it through the meticulous process of patching it piece by piece. Another option is to build new secure systems on top of this insecure foundation. IPFS is doing this, and IPFS is not alone in its design. <b>There has been more research into secure systems with decentralized trust in the last ten years than ever before</b>.</p><p>The result is radical new protocols and designs that use exotic new algorithms. These protocols do not supplant those at the core of the Internet (like TCP/IP), but instead, they sit on top of the existing Internet infrastructure, enabling new applications, much like HTTP did for the web.</p>
    <div>
      <h3>Gaining Traction</h3>
      <a href="#gaining-traction">
        
      </a>
    </div>
    <p>Some of the most innovative technical projects were considered failures because <b>they couldn’t attract users.</b> New technology has to bring tangible benefits to users to sustain it: useful functionality, content, and a decent user experience. Distributed projects, such as IPFS and others, are gaining popularity, but have not found mass adoption. This is a chicken-and-egg problem. New protocols have a high barrier to entry—<b>users have to install new software</b>—and because of the small audience, there is less incentive to create compelling content. <b>Decentralization and distributed trust are nice security features to have, but they are not products</b>. Users still need to get some benefit out of using the platform.</p><p>An example of a system to break this cycle is the web. In 1992 the web was hardly a cornucopia of awesomeness. <b>What helped drive the dominance of the web was its users.</b></p><ul><li><p>The growth of the user base meant <b>more incentive for people to build services</b>, and the availability of more services attracted more users. It was a virtuous cycle.</p></li><li><p>It’s hard for a platform to gain momentum, but once the cycle starts, a flywheel effect kicks in to help the platform grow.</p></li></ul><p>The <a href="https://www.cloudflare.com/distributed-web-gateway/">Distributed Web Gateway</a> project Cloudflare launched last year in Crypto Week is our way of exploring what happens if we try to kickstart that flywheel. By providing a secure, reliable, and fast interface from the classic web with its two billion users to the content on the distributed web, we give the fledgling ecosystem an audience.</p><ul><li><p><b>If the advantages provided by building on the distributed web are appealing to users, then the larger audience will help these services grow in popularity</b>.</p></li><li><p>This is somewhat reminiscent of how IPv6 gained adoption. 
It started as a niche technology only accessible using IPv4-to-IPv6 translation services.</p></li><li><p>IPv6 adoption has now <a href="https://www.internetsociety.org/resources/2018/state-of-ipv6-deployment-2018/">grown so much</a> that it is becoming a requirement for new services. For example, <b>Apple is</b> <a href="https://developer.apple.com/support/ipv6/"><b>requiring</b></a> <b>that all apps work in IPv6-only contexts.</b></p></li></ul><p>Eventually, as user-side implementations of distributed web technologies improve, people may move to using the distributed web natively rather than through an HTTP gateway. Or they may not! By leveraging Cloudflare’s global network to <b>give users access to new technologies based on distributed trust, we give these technologies a better chance at gaining adoption.</b></p>
    <div>
      <h3>Happy Crypto Week</h3>
      <a href="#happy-crypto-week">
        
      </a>
    </div>
    <p>At Cloudflare, we always support new technologies that help make the Internet better. Part of helping make a better Internet is scaling the systems of trust that underpin web browsing and protect them from attack. We provide the tools to create better systems of assurance with fewer points of vulnerability. We work with academic security researchers to get a vision of the future and engineer away vulnerabilities before they can become widespread. It’s a constant journey.</p><p>Cloudflare knows that none of this is possible without the work of researchers. From award-winning researchers publishing papers in top journals to the blog posts of clever hobbyists, dedicated and curious people are moving the state of knowledge of the world forward. However, the push to publish new and novel research sometimes holds researchers back from committing enough time and resources to fully realize their ideas. Great research can be powerful on its own, but it can have an even broader impact when combined with practical applications. We relish the opportunity to stand on the shoulders of these giants and use our engineering know-how and global reach to expand on their work to help build a better Internet.</p><p>So, to all of you dedicated researchers, <b>thank you for your work!</b> Crypto Week is yours as much as ours. If you’re working on something interesting and you want help to bring the results of your research to the broader Internet, please contact us at <a>ask-research@cloudflare.com</a>. We want to help you realize your dream of making the Internet safe and trustworthy.</p><p>If you're a research-oriented <a href="https://boards.greenhouse.io/cloudflare/jobs/1346216">engineering manager</a> or student, we're also hiring in <a href="https://boards.greenhouse.io/cloudflare/jobs/1025810">London</a> and <a href="https://boards.greenhouse.io/cloudflare/jobs/608495">San Francisco</a>!</p> ]]></content:encoded>
            <category><![CDATA[Crypto Week]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[BGP]]></category>
            <category><![CDATA[Cryptography]]></category>
            <category><![CDATA[RPKI]]></category>
            <category><![CDATA[DNS]]></category>
            <category><![CDATA[HTTPS]]></category>
            <guid isPermaLink="false">2Cs84t1yRSnIXcoIIszCGj</guid>
            <dc:creator>Nick Sullivan</dc:creator>
        </item>
        <item>
            <title><![CDATA[Roughtime: Securing Time with Digital Signatures]]></title>
            <link>https://blog.cloudflare.com/roughtime/</link>
            <pubDate>Fri, 21 Sep 2018 12:00:00 GMT</pubDate>
            <description><![CDATA[ When you visit a secure website, it offers you a TLS certificate that asserts its identity. Every certificate has an expiration date, and when it’s past due, it is no longer valid. ]]></description>
            <content:encoded><![CDATA[ <p></p><p>When you visit a secure website, it offers you a <a href="https://www.cloudflare.com/application-services/products/ssl/">TLS certificate</a> that asserts its identity. Every certificate has an expiration date, and when it’s past due, it is no longer valid. The idea is almost as old as the web itself: limiting the lifetime of certificates is meant to reduce the risk in case a TLS server’s secret key is compromised.</p><p>Certificates aren’t the only cryptographic artifacts that expire. When you visit a site protected by Cloudflare, we also tell you whether its certificate has been revoked (see our blog post on <a href="/high-reliability-ocsp-stapling/">OCSP stapling</a>) — for example, due to the secret key being compromised — and this value (a so-called OCSP staple) has an expiration date, too.</p><p>Thus, to determine if a certificate is valid and hasn’t been revoked, your system needs to know the current time. Indeed, time is crucial for the security of TLS and myriad other protocols. To help keep clocks in sync, we are announcing a free, high-availability, and low-latency authenticated time service called <a href="https://roughtime.googlesource.com/roughtime">Roughtime</a>, available at <a href="https://roughtime.cloudflare.com">roughtime.cloudflare.com</a> on port 2002.</p>
    <div>
      <h2>Time is tricky</h2>
      <a href="#time-is-tricky">
        
      </a>
    </div>
    <p>It may surprise you to learn that, in practice, clients’ clocks are heavily skewed. A <a href="https://acmccs.github.io/papers/p1407-acerA.pdf">recent study of Chrome users</a> showed that a significant fraction of reported <a href="https://www.cloudflare.com/learning/ssl/common-errors/">TLS-certificate errors</a> are caused by client-clock skew. During the period in which error reports were collected, 6.7% of client-reported times were behind by more than 24 hours. (0.05% were ahead by more than 24 hours.) This skew was a causal factor for at least 33.5% of the sampled reports from Windows users, 8.71% from Mac OS, 8.46% from Android, and 1.72% from Chrome OS. These errors are usually presented to users as warnings that the user can click through to get to where they’re going. However, showing too many warnings makes users grow accustomed to clicking through them; <a href="https://en.wikipedia.org/wiki/Alarm_fatigue">this is risky</a>, since these warnings are meant to keep users away from malicious websites.</p><p>Clock skew also holds us back from improving the security of certificates themselves. We’d like to issue certificates with shorter lifetimes because the less time the certificate is valid, the lower the risk of the secret key being exposed. (This is why Let’s Encrypt issues certificates valid for just <a href="https://letsencrypt.org/2015/11/09/why-90-days.html">90 days by default</a>.) But the long tail of skewed clocks limits the effective lifetime of certificates; shortening the lifetime too much would only lead to more warnings.</p><p>Endpoints on the Internet often synchronize their clocks using a protocol like the <a href="https://en.wikipedia.org/wiki/Network_Time_Protocol">Network Time Protocol</a> (NTP). NTP aims for precise synchronization, and even accounts for network latency. 
However, it is usually deployed without security features, as the added overhead on high-load servers <a href="https://www.usenix.org/conference/usenixsecurity16/technical-sessions/presentation/dowling">degrades precision significantly</a>. As a result, a man-in-the-middle attacker between the client and server can easily influence the client’s clock. By moving the client back in time, the attacker can force it to accept expired (and possibly compromised) certificates; by moving forward in time, it can force the client to accept a certificate that is <i>not yet</i> valid.</p><p>Fortunately, for settings in which both security and precision are paramount, workable solutions are <a href="https://www.usenix.org/conference/usenixsecurity16/technical-sessions/presentation/dowling">on the horizon</a>. But for many applications, precise network time isn’t essential; it suffices to be <i>accurate</i>, say, within 10 seconds of real time. This observation is the primary motivation of Google’s <a href="https://roughtime.googlesource.com/roughtime">Roughtime</a> protocol, a simple protocol by which clients can synchronize their clocks with one or more authenticated servers. Roughtime lacks the precision of NTP, but aims to be accurate enough for cryptographic applications, and since the responses are authenticated, man-in-the-middle attacks aren’t possible.</p><p>The protocol is designed to be simple and flexible. A client can get Roughtime from just one server it trusts, or it may contact many servers to make its calculation more robust. But its most distinctive feature is that it adds <i>accountability</i> to time servers. If a server misbehaves by providing the wrong time, then the protocol allows clients to produce publicly verifiable, cryptographic proof of this misbehavior. 
Making servers auditable in this manner makes them accountable to provide accurate time.</p><p>We are deploying a Roughtime service for two reasons.</p><p>First, the clock we use for this service is the same as the clock we use to determine whether our customers’ certificates are valid and haven’t been revoked; as a result, exposing this service makes us accountable for the validity of TLS artifacts we serve to clients on behalf of our customers.</p><p>Second, Roughtime is a great idea whose time has come. But it is only useful if several independent organizations participate; the more Roughtime servers there are, the more robust the ecosystem becomes. Our hope is that putting our weight behind it will help the Roughtime ecosystem grow.</p>
    <div>
      <h2>The Roughtime protocol</h2>
      <a href="#the-roughtime-protocol">
        
      </a>
    </div>
    <p>At its most basic level, Roughtime is a one-round protocol in which the client requests the current time and the server sends a signed response. The response is comprised of a timestamp (the number of microseconds since the Unix epoch) and a <i>radius</i> (in microseconds) used to indicate the server’s certainty about the reported time. For example, a radius of 1,000,000μs means the server is reasonably sure that the true time is within one second of the reported time.</p><p>The server proves freshness of its response as follows. The request consists of a short, random string commonly called a <i>nonce</i> (pronounced /<a href="https://www.merriam-webster.com/dictionary/nonce">nän(t)s</a>/, or sometimes /ˈen wən(t)s/). The server incorporates the nonce into its signed response so that it’s needed to verify the signature. If the nonce is sufficiently long (say, 16 bytes), then the number of possible nonces is so large that it’s extremely unlikely the server has encountered (or will ever encounter) a request with the same nonce. Thus, a valid signature serves as cryptographic proof that the response is fresh.</p><p>The client uses the server’s <i>root public key</i> to verify the signature. (The key is obtained out-of-band; you can get our key <a href="https://developers.cloudflare.com/time-services/roughtime/usage/">here</a>.) When the server starts, it generates an online public/secret key pair; the root secret key is used to create a delegation for the online public key, and the online secret key is used to sign the response. The delegation serves the same function as a traditional <a href="https://en.wikipedia.org/wiki/X.509">X.509</a> certificate on the web: as illustrated in the figure below, the client first uses the root public key to verify the delegation, then uses the online public key to verify the response. This allows for operational separation of the delegator and the server and limits exposure of the root secret key.</p><hr />
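<p>The timestamp-and-radius semantics above can be sketched in a few lines. This is an illustrative sketch in Python; the helper names are ours, not from any Roughtime implementation:</p>

```python
# Hypothetical helpers (names are ours, not from any Roughtime library).
# A Roughtime response carries a midpoint timestamp and a radius, both in
# microseconds; a client that trusts the response can bound its clock skew.

def clock_skew_us(local_us: int, midpoint_us: int) -> int:
    """Signed difference between the local clock and the server's midpoint."""
    return local_us - midpoint_us

def clock_within_radius(local_us: int, midpoint_us: int, radius_us: int) -> bool:
    """True if the local clock could be correct, given the server's uncertainty."""
    return abs(clock_skew_us(local_us, midpoint_us)) <= radius_us

# Example: the server reports a midpoint with a one-second (1,000,000 us) radius.
midpoint = 1_700_000_000_000_000
assert clock_within_radius(midpoint + 900_000, midpoint, 1_000_000)        # 0.9s off: plausible
assert not clock_within_radius(midpoint + 2_000_000, midpoint, 1_000_000)  # 2s off: skewed
```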
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/52glM0EDJ0cO6EzChb6oz4/611386837f9e99dfa650d562a850d7ed/Cloudflare-Roughtime-1.png" />
            
            </figure><p>Simplified Roughtime (without delegation)</p><hr />
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3G51im5eHgZt2KlZ3sSaRm/a29f0ede0d24251022a85f719f0c6ff5/Cloudflare-Roughtime-2.png" />
            
</figure><p>Roughtime with delegation</p><hr /><p>Roughtime offers two features designed to make it scalable. First, when the volume of requests is high, the server may batch-sign a number of clients’ requests by constructing a <a href="https://en.wikipedia.org/wiki/Merkle_tree">Merkle tree</a> from the nonces. The server signs the root of the tree and sends in its response the information needed to prove to the client that its request is in the tree. (The data structure is a binary tree, so the amount of information is proportional to the base-2 logarithm of the number of requests in the batch; see the figure below.) Second, the protocol is executed over UDP. In order to prevent the Roughtime server from being an amplifier for <a href="https://www.cloudflare.com/learning/ddos/what-is-a-ddos-attack/">DDoS attacks</a>, the request is padded to 1KB; if the UDP packet is too short, then it’s dropped without further processing. Check out <a href="https://int08h.com/post/to-catch-a-lying-timeserver/">this blog post</a> for a more in-depth discussion.</p><hr />
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7qbWjYHoaWRrXuc0as57rf/0cc40418c06f19279f23d3cb68d8a5f9/Cloudflare-Roughtime-3.png" />
            
            </figure><p>Roughtime with batching</p><hr />
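<p>The batching step can be illustrated with a toy Merkle tree. This is a simplified sketch in Python; real Roughtime specifies tagged SHA-512 hashing and an exact wire format, so treat the helper names and the power-of-two assumption here as illustrative only:</p>

```python
import hashlib

# Toy illustration of batch signing: hash each request nonce into a leaf,
# combine leaves pairwise up to a Merkle root, sign only the root, and hand
# each client the sibling hashes ("path") it needs to recompute that root.

def leaf(nonce: bytes) -> bytes:
    return hashlib.sha512(b"\x00" + nonce).digest()

def node(left: bytes, right: bytes) -> bytes:
    return hashlib.sha512(b"\x01" + left + right).digest()

def build_root_and_paths(nonces):
    """Return (root, paths); paths[i] lets client i recompute the root.
    Assumes the number of nonces is a power of two."""
    level = [leaf(n) for n in nonces]
    paths = [[] for _ in nonces]
    idx = list(range(len(nonces)))      # each client's position at this level
    while len(level) > 1:
        for i, pos in enumerate(idx):
            paths[i].append(level[pos ^ 1])   # sibling hash at this level
            idx[i] = pos // 2
        level = [node(level[j], level[j + 1]) for j in range(0, len(level), 2)]
    return level[0], paths

def verify(nonce, index, path, root):
    """Client side: recompute the root from its nonce and sibling path."""
    cur = leaf(nonce)
    for sib in path:
        cur = node(cur, sib) if index % 2 == 0 else node(sib, cur)
        index //= 2
    return cur == root

nonces = [bytes([i]) * 16 for i in range(4)]
root, paths = build_root_and_paths(nonces)
assert all(verify(nonces[i], i, paths[i], root) for i in range(4))
```

The path length grows with the base-2 logarithm of the batch size, which is why signing one root scales to large bursts of requests.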
    <div>
      <h3>Using Roughtime</h3>
      <a href="#using-roughtime">
        
      </a>
    </div>
    <p>The protocol is flexible enough to support a variety of use cases. A web browser could use a Roughtime server to proactively synchronize its clock when validating TLS certificates. It could also be used retroactively to avoid showing the user too many warnings: when a certificate validation error occurs — in particular, when the browser believes it’s expired or not yet valid — Roughtime could be used to determine if the clock skew was the root cause. Instead of telling the user the certificate is invalid, it could tell the user that their clock is incorrect.</p><p>Using just one server is sufficient if that server is trustworthy, but a security-conscious user could make requests to many servers; the delta might be computed by eliminating outliers and averaging the responses, or by some <a href="https://roughtime.googlesource.com/roughtime/+/master/go/client/">more sophisticated method</a>. This makes the calculation robust to one or more of the servers misbehaving.</p>
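<p>One simple way to combine several servers' answers, in the spirit of the outlier-elimination approach mentioned above, is to take the median of the implied clock offsets; the median tolerates a minority of skewed or lying servers. A minimal sketch (function name is ours):</p>

```python
import statistics

def robust_offset_us(local_us, reported_times_us):
    """Median of (server time - local time) across several servers,
    all in microseconds; robust to a minority of bad responses."""
    return statistics.median(t - local_us for t in reported_times_us)

local = 1_000_000_000
# Three servers close to the local clock, one server two hours ahead.
reports = [local + 100, local - 200, local + 50, local + 7_200_000_000]
assert abs(robust_offset_us(local, reports)) < 1_000_000  # outlier ignored
```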
    <div>
      <h3>Making servers accountable</h3>
      <a href="#making-servers-accountable">
        
      </a>
    </div>
    <p>The real power of Roughtime is that it’s auditable. Consider the following mode of operation. The client has a list of servers it will query in a particular order. The client generates a random string — called a blind in the parlance of Roughtime — hashes it, and uses the output as the nonce for its request to the server. For subsequent requests, it computes the nonce as follows: generate a blind, compute the hash of this string and the response from the previous server (including the timestamp and signature), and use this hash as the nonce for the next request.</p><hr />
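<p>The chaining rule just described can be sketched directly (the function names and the placeholder response bytes below are ours, for illustration):</p>

```python
import hashlib
import os

# The first nonce is the hash of a random blind; each later nonce hashes a
# fresh blind together with the previous server's full signed response,
# binding the sequence of timestamps into a verifiable order.

def first_nonce(blind: bytes) -> bytes:
    return hashlib.sha512(blind).digest()[:32]

def next_nonce(blind: bytes, prev_response: bytes) -> bytes:
    return hashlib.sha512(blind + prev_response).digest()[:32]

blind1, blind2 = os.urandom(32), os.urandom(32)
n1 = first_nonce(blind1)
resp1 = b"signed-response-from-server-1"   # placeholder for a real response
n2 = next_nonce(blind2, resp1)

# An auditor given (blind2, resp1) can recompute n2 and confirm the request
# to server 2 could only have been formed after resp1 existed.
assert n2 == hashlib.sha512(blind2 + resp1).digest()[:32]
```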
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4Tud1Kf1Y3Epq0Cpqd6Zvc/8effcbc470799ce34d0f976e8caeb2ec/Cloudflare-Roughtime-4.png" />
            
</figure><p>Chaining multiple Roughtime servers</p><hr /><p>Creating a chain of timestamps in this way binds each response to the response that precedes it. Thus, the sequence of blinds and signatures constitutes a publicly verifiable, cryptographic proof that the timestamps were requested in order (a “clockchain”, if you will). If the servers are roughly synchronized, then we expect the sequence to be monotonically increasing, at least roughly. If one of the servers were consistently behind or ahead of the others, then this would be evident in the sequence. Suppose you get the following sequence of timestamps, each from a different server:</p>
<table>
    <tbody>
        <tr>
            <th>Server</th>
            <th>Timestamp</th>
        </tr>
        <tr>
            <td>ServerA-Roughtime</td>
            <td>2018-08-29 14:51:50 -0700 PDT</td>
        </tr>
        <tr>
            <td>ServerB-Roughtime</td>
            <td>2018-08-29 14:51:51 -0700 PDT +0:00:01</td>
        </tr>
        <tr>
            <td>Cloudflare-Roughtime</td>
            <td>2018-08-29 12:51:52 -0700 PDT -1:59:59</td>
        </tr>
        <tr>
            <td>ServerC-Roughtime</td>
            <td>2018-08-29 14:51:53 -0700 PDT +2:00:01</td>
        </tr>

    </tbody>
</table>
<p>Servers B and C corroborate the time given by server A, but — oh no! Cloudflare is two hours behind! Unless servers A, B, and C are in cahoots, it’s likely that the time offered by Cloudflare is incorrect. Moreover, you have verifiable, cryptographic proof. In this way, the Roughtime protocol holds our server (and every Roughtime server) accountable for providing accurate time, or, at least, for staying in sync with the others.</p>
    <div>
      <h2>The Roughtime ecosystem</h2>
      <a href="#the-roughtime-ecosystem">
        
      </a>
    </div>
    <p>The infrastructure for monitoring and auditing the <a href="https://roughtime.googlesource.com/roughtime/+/HEAD/ECOSYSTEM.md">Roughtime ecosystem</a> hasn’t been built yet. Right now there’s only a handful of servers: in addition to Cloudflare’s and <a href="https://roughtime.googlesource.com/roughtime/+/master/roughtime-servers.json">Google’s</a>, there’s also a really nice <a href="https://github.com/int08h/roughenough">Rust implementation</a>. The more diversity there is, the healthier the ecosystem becomes. We hope to see more organizations adopt this protocol.</p>
    <div>
      <h3>Cloudflare’s Roughtime service</h3>
      <a href="#cloudflares-roughtime-service">
        
      </a>
    </div>
    <p>For the initial deployment of this service, our primary goals are to ensure high availability and minimal maintenance overhead. Each machine at each Cloudflare location executes an instance of the service and responds to queries using its system clock. The server signs each request individually rather than batch-signing them as described above; we rely on our load balancer to ensure no machine is overwhelmed. There are three ways in which we envision this service could be used:</p><ol><li><p><i>TLS authentication</i>. When a TLS application (a web browser for example) starts, it could make a request to roughtime.cloudflare.com and compute the difference between the reported time and its system time. Whenever it authenticates a TLS server, it would add this difference to the system time to get the current time.</p></li><li><p><i>Roughtime daemon</i>. One could implement an OS daemon that periodically requests the current time. If the reported time differs from the system time by more than a second, it might issue an alert.</p></li><li><p><i>Server auditing</i>. As the <a href="https://roughtime.googlesource.com/roughtime/+/HEAD/ECOSYSTEM.md">Roughtime ecosystem</a> grows, it will be important to ensure that all of the servers are in sync. Individuals or organizations may take it upon themselves to monitor the ecosystem and ensure that the servers are in sync with one another.</p></li></ol><p>The service is reachable wherever you are via our anycast network. This is important for a service like Roughtime, because minimizing network latency helps improve accuracy. For information about how to configure a client to use Cloudflare-Roughtime, check out the <a href="https://developers.cloudflare.com/time-services/roughtime/">developer documentation</a>. Note that our initial release is somewhat experimental. As such, our root public key may change in the future. 
See the developer docs for information on obtaining the current public key.</p><p><a href="/subscribe/"><i>Subscribe to the blog</i></a><i> for daily updates on our announcements.</i></p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3kjD9adVSlGmVPxJvtg0VJ/ba8c654b3f2cfeef827719c8dcd79ca4/Crypto-Week-1-1-3.png" />
            
            </figure><p></p> ]]></content:encoded>
            <category><![CDATA[Crypto Week]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[TLS]]></category>
            <category><![CDATA[OCSP]]></category>
            <category><![CDATA[Research]]></category>
            <category><![CDATA[Cryptography]]></category>
            <guid isPermaLink="false">7kCyQ6aExp7N5vYW6kc3gY</guid>
            <dc:creator>Christopher Patton</dc:creator>
        </item>
        <item>
            <title><![CDATA[Introducing the Cloudflare Onion Service]]></title>
            <link>https://blog.cloudflare.com/cloudflare-onion-service/</link>
            <pubDate>Thu, 20 Sep 2018 12:00:00 GMT</pubDate>
            <description><![CDATA[ Two years ago this week Cloudflare introduced Opportunistic Encryption, a feature that provided additional security and performance benefits to websites that had not yet moved to HTTPS. ]]></description>
            <content:encoded><![CDATA[ <p></p><ul><li><p><b>When</b>: a cold San Francisco summer afternoon</p></li><li><p><b>Where</b>: Room <a href="https://httpstat.us/305">305</a>, Cloudflare</p></li><li><p><b>Who</b>: 2 from Cloudflare + 9 from the Tor Project</p></li></ul><p>What could go wrong?</p>
    <div>
      <h3>Bit of Background</h3>
      <a href="#bit-of-background">
        
      </a>
    </div>
<p>Two years ago this week Cloudflare introduced <a href="/opportunistic-encryption-bringing-http-2-to-the-unencrypted-web/">Opportunistic Encryption</a>, a feature that provided additional security and performance benefits to websites that had not yet moved to HTTPS. Indeed, back in the old days some websites only used HTTP --- weird, right? “Opportunistic” here meant that the server advertised support for HTTP/2 via an <a href="https://tools.ietf.org/html/rfc7838">HTTP Alternative Service</a> header in the hopes that any browser that recognized the protocol could take advantage of those benefits in subsequent requests to that domain.</p><p>Around the same time, CEO Matthew Prince <a href="/the-trouble-with-tor/">wrote</a> about the importance and challenges of privacy on the Internet and tasked us to find a solution that provides <b>convenience</b>, <b>security</b>, and <b>anonymity</b>.</p><p>From neutralizing fingerprinting vectors and everyday browser trackers that <a href="https://www.eff.org/privacybadger">Privacy Badger</a> feeds on, all the way to mitigating correlation attacks that only big actors are capable of, guaranteeing privacy is a complicated challenge. Fortunately, the <a href="https://www.torproject.org/">Tor Project</a> addresses this extensive <a href="https://www.torproject.org/projects/torbrowser/design/#adversary">adversary model</a> in Tor Browser.</p><p>However, the Internet is full of bad actors, and distinguishing legitimate traffic from malicious traffic, which is one of Cloudflare’s core features, becomes much more difficult when the traffic is anonymous. In particular, many features that make Tor a great tool for privacy also make it a tool for hiding the source of malicious traffic. That is why many resort to using CAPTCHA challenges to make it more expensive to be a bot on the Tor network. There is, however, collateral damage associated with using CAPTCHA challenges to stop bots: human eyes also have to deal with them.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/59vmBdRen9zTJnzUEwUOOL/54910c287d16f022e66afc2d8ff68d0e/Captcha-Example.png" />
            
            </figure><p>One way to minimize this is using privacy-preserving cryptographic signatures, aka blinded tokens, such as those that power <a href="/privacy-pass-the-math/">Privacy Pass</a>.</p><p>The other way is to use onions.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3CENi6YjjPGrHKOsS7hfgE/3f07dbad9c56b377cb5bcfa7d8f40c36/Onion-Cloudflare.png" />
            
            </figure>
    <div>
      <h3>Here Come the Onions</h3>
      <a href="#here-come-the-onions">
        
      </a>
    </div>
    <p>Today’s edition of the Crypto Week introduces an “opportunistic” solution to this problem, so that under suitable conditions, anyone using <a href="https://blog.torproject.org/new-release-tor-browser-80">Tor Browser 8.0</a> will benefit from improved security and performance when visiting Cloudflare websites without having to face a CAPTCHA. At the same time, this feature enables more fine-grained rate-limiting to prevent malicious traffic, and since the mechanics of the idea described here are not specific to Cloudflare, anyone can <a href="https://github.com/mahrud/caddy-altonions">reuse this method</a> on their own website.</p><p>Before we continue, if you need a refresher on what Tor is or why we are talking about onions, check out the <a href="https://www.torproject.org/about/overview.html.en">Tor Project</a> website or our own blog post on the <a href="/welcome-hidden-resolver/">DNS resolver onion</a> from June.</p><p>As Matthew mentioned in his blog post, one way to sift through Tor traffic is to use the <a href="https://www.torproject.org/docs/onion-services.html.en">onion service</a> protocol. Onion services are Tor nodes that advertise their public key, encoded as an address with .onion <a href="https://www.cloudflare.com/learning/dns/top-level-domain/">TLD</a>, and use “rendezvous points” to establish connections entirely within the Tor network:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4YwoSQd9m6fwJ4INk44fsz/7aa81a4f71f35f9323ba4173e328d356/Tor-network-example-1.png" />
            
            </figure><p>While onion services are designed to provide anonymity for content providers, <a href="https://securedrop.org/directory/">media organizations</a> use them to allow whistleblowers to communicate securely with them and <a href="https://www.facebook.com/notes/protect-the-graph/making-connections-to-facebook-more-secure/1526085754298237">Facebook</a> uses one to tell Tor users from bots.</p><p>The technical reason why this works is that from an onion service’s point of view each individual Tor connection, or circuit, has a unique but ephemeral number associated to it, while from a normal server’s point of view all Tor requests made via one exit node share the same IP address. Using this circuit number, onion services can distinguish individual circuits and terminate those that seem to behave maliciously. To clarify, this does not mean that onion services can identify or track Tor users.</p><p>While bad actors can still establish a fresh circuit by repeating the rendezvous protocol, doing so involves a cryptographic key exchange that costs time and computation. Think of this like a cryptographic <a href="https://en.wikipedia.org/wiki/File:Dial_up_modem_noises.ogg">dial-up</a> sequence. Spammers can dial our onion service over and over, but every time they have to repeat the key exchange.</p><p>Alternatively, finishing the rendezvous protocol can be thought of as a small proof of work required in order to use the Cloudflare Onion Service. This increases the cost of using our onion service for performing denial of service attacks.</p>
    <div>
      <h3>Problem solved, right?</h3>
      <a href="#problem-solved-right">
        
      </a>
    </div>
<p>Not quite. As discussed when we introduced the <a href="/welcome-hidden-resolver/">hidden resolver</a>, the problem of ensuring that a seemingly random .onion address is correct is a barrier to usable security. In that case, our solution was to purchase an <a href="https://www.digicert.com/extended-validation-ssl.htm">Extended Validation</a> (EV) certificate, which costs considerably more than a standard certificate. Needless to say, this limits who can buy an HTTPS certificate for their onion service to a <a href="https://crt.sh/?Identity=%25.onion">privileged few</a>.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1n6CTaL7LjM6pQGorlXIQ6/4ffd43906dcaa7fac54a098379d12171/Address-Bar.png" />
            
            </figure><p>Some people <a href="https://cabforum.org/pipermail/public/2017-November/012451.html">disagree</a>. In particular, the <a href="https://blog.torproject.org/tors-fall-harvest-next-generation-onion-services">new generation</a> of onion services resolves the weakness that Matthew pointed to as a possible reason why the CA/B Forum <a href="https://cabforum.org/2015/02/18/ballot-144-validation-rules-dot-onion-names/">only permits</a> EV certificates for onion services. This could mean that getting Domain Validation (DV) certificates for onion services could be possible soon. We certainly hope that’s the case.</p><p>Still, DV certificates lack the organization name (e.g. “Cloudflare, Inc.”) that appears in the address bar, and cryptographically relevant numbers are nearly impossible to remember or distinguish for humans. This brings us back to the problem of usable security, so we came up with a different idea.</p>
    <div>
      <h3>Looking at onion addresses differently</h3>
      <a href="#looking-at-onion-addresses-differently">
        
      </a>
    </div>
    <p>Forget for a moment that we’re discussing anonymity. When you type “cloudflare.com” in a browser and press enter, your device first resolves that domain name into an IP address, then your browser asks the server for a certificate valid for “cloudflare.com” and attempts to establish an encrypted connection with the host. As long as the certificate is trusted by a certificate authority, there’s no reason to mind the IP address.</p><p>Roughly speaking, the idea here is to simply switch the IP address in the scenario above with an .onion address. As long as the certificate is valid, the .onion address itself need not be manually entered by a user or even be memorable. Indeed, the fact that the certificate was valid indicates that the .onion address was correct.</p><p>In particular, in the same way that a single IP address can serve millions of domains, a single .onion address should be able to serve any number of domains.</p><p>Except, <a href="https://www.cloudflare.com/learning/dns/what-is-dns/">DNS</a> doesn’t work this way.</p>
    <div>
      <h3>How does it work then?</h3>
      <a href="#how-does-it-work-then">
        
      </a>
    </div>
    <p>Just as with Opportunistic Encryption, we can point users to the Cloudflare Onion Service using <a href="https://tools.ietf.org/html/rfc7838">HTTP Alternative Services</a>, a mechanism that allows servers to tell clients that the service they are accessing is available at another network location or over another protocol. For instance, when Tor Browser makes a request to “cloudflare.com,” Cloudflare adds an Alternative Service header to indicate that the site is available to access over HTTP/2 via our onion services.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2G5bKEOaIJdnsqhNTT2Ozo/e1466f89156e68b539b2cefc8d506d2a/tor-resquest_2x.png" />
            
            </figure><p>In the same sense that Cloudflare owns the IP addresses that serve our customers’ websites, we run 10 .onion addresses. Think of them as 10 Cloudflare points of presence (or PoPs) within the Tor network. The exact header looks something like this, except with all 10 .onion addresses included, each starting with the prefix “cflare”:</p>
            <pre><code>alt-svc: h2="cflare2nge4h4yqr3574crrd7k66lil3torzbisz6uciyuzqc2h2ykyd.onion:443"; ma=86400; persist=1</code></pre>
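<p>A client has to pull the protocol, host, port, and parameters out of a header like the one above. The minimal parser below is purely illustrative; a real client should follow the full Alt-Svc grammar in RFC 7838:</p>

```python
# Illustrative Alt-Svc parser: splits comma-separated alternatives, then
# semicolon-separated parameters, and unpacks the quoted host:port value.
def parse_alt_svc(value: str):
    services = []
    for entry in value.split(","):
        parts = [p.strip() for p in entry.strip().split(";")]
        proto, hostport = parts[0].split("=", 1)
        host, _, port = hostport.strip('"').rpartition(":")
        params = dict(p.split("=", 1) for p in parts[1:] if "=" in p)
        services.append({"protocol": proto, "host": host,
                         "port": int(port), **params})
    return services

hdr = ('h2="cflare2nge4h4yqr3574crrd7k66lil3torzbisz6uciyuzqc2h2ykyd'
       '.onion:443"; ma=86400; persist=1')
svc = parse_alt_svc(hdr)[0]
assert svc["protocol"] == "h2"
assert svc["host"].endswith(".onion")
assert svc["port"] == 443 and svc["ma"] == "86400"
```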
<p>This simply indicates that “cloudflare.com” can be authoritatively accessed using HTTP/2 (“h2”) via the onion service “cflare2n[...].onion”, over virtual port 443. The field “ma” (max-age) indicates how long in seconds the client should remember the existence of the alternative service, and “persist” indicates whether the alternative service cache should be cleared when the network is interrupted.</p><p>Once the browser receives this header, it attempts to make a new Tor circuit to the onion service advertised in the alt-svc header and confirm that the server listening on virtual port 443 can present a valid certificate for “cloudflare.com” — that is, the original hostname, not the .onion address.</p><p>The onion service then relays the Client Hello packet to a local server which can serve a certificate for “cloudflare.com.” This way the Tor daemon itself can be very minimal. Here is a sample configuration file:</p>
            <pre><code>SocksPort 0
HiddenServiceNonAnonymousMode 1
HiddenServiceSingleHopMode 1
HiddenServiceVersion 3
HiddenServicePort 443
SafeLogging 1
Log notice stdout</code></pre>
            <p>Be careful with using the configuration above, as it enables a non-anonymous setting for onion services that do not require anonymity for themselves. To clarify, this does not sacrifice privacy or anonymity of Tor users, just the server. Plus, it improves latency of the circuits.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5NVzWfvM9FVH73gjp0AG3X/8a3dbf2a8440bb0f32e05626e30bb695/Tor-Onion-Service-Cloudflare.png" />
            
            </figure><p>If the certificate is signed by a trusted certificate authority, for any subsequent requests to “cloudflare.com” the browser will connect using HTTP/2 via the onion service, sidestepping the need for going through an exit node.</p><p>Here are the steps summarized one more time:</p><ol><li><p>A new Tor circuit is established;</p></li><li><p>The browser sends a Client Hello to the onion service with SNI=cloudflare.com;</p></li><li><p>The onion service relays the packet to a local server;</p></li><li><p>The server replies with Server Hello for SNI=cloudflare.com;</p></li><li><p>The onion service relays the packet to the browser;</p></li><li><p>The browser verifies that the certificate is valid.</p></li></ol><p>To reiterate, the certificate presented by the onion service only needs to be valid for the original hostname, meaning that the onion address need not be mentioned anywhere on the certificate. This is a huge benefit, because it allows you to, for instance, present a free <a href="https://letsencrypt.org">Let’s Encrypt</a> certificate for your .org domain rather than an expensive EV certificate.</p><p>Convenience, ✓</p>
    <div>
      <h3>Distinguishing the Circuits</h3>
      <a href="#distinguishing-the-circuits">
        
      </a>
    </div>
<p>Remember that while one exit node can serve many different clients, from Cloudflare’s point of view all of that traffic comes from one IP address. This pooling helps cover the malicious traffic among legitimate traffic, but isn’t essential to the security or privacy of Tor. In fact, it can potentially hurt users by exposing their traffic to <a href="https://trac.torproject.org/projects/tor/wiki/doc/ReportingBadRelays">bad exit nodes</a>.</p><p>Recall also that Tor circuits to onion services carry a circuit number which we can use to rate-limit the circuit. Now, the question is how to inform a server such as nginx of this number with minimal effort. As it turns out, with only a <a href="https://github.com/torproject/tor/pull/343/">small tweak</a> in the Tor binary, we can insert a <a href="https://www.haproxy.org/download/1.8/doc/proxy-protocol.txt">Proxy Protocol</a> header in the beginning of each packet that is forwarded to the server. This protocol is designed to help TCP proxies pass on parameters that can be lost in translation, such as source and destination IP addresses, and is already supported by nginx, Apache, Caddy, etc.</p><p>Luckily for us, the IPv6 space is so vast that we can encode the Tor circuit number as an IP address in an unused range and use the Proxy Protocol to send it to the server. Here is an example of the header that our Tor daemon would insert in the connection:</p>
<pre><code>PROXY TCP6 2405:8100:8000::6366:1234:ABCD ::1 43981 443\r\n</code></pre>
            <p>In this case, 0x1234ABCD encodes the circuit number in the last 32 bits of the source IP address. The local Cloudflare server can then transparently use that IP to assign reputation, show CAPTCHAs, or block requests when needed.</p><p>Note that even though requests relayed by an onion service don’t carry an IP address, you will see an IP address like the one above with country code “T1” in your logs. This IP only specifies the circuit number seen by the onion service, not the actual user IP address. In fact, 2405:8100:8000::/48 is an unused subnet allocated to Cloudflare that we are not routing globally for this purpose.</p><p>This enables customers to continue detecting bots using IP reputation while sparing humans the trouble of clicking on CAPTCHA street signs over and over again.</p><p>Security, ✓</p>
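<p>The encoding and decoding can be sketched with Python’s standard <code>ipaddress</code> module. This is an illustrative sketch of the idea (packing a 32-bit circuit number into the low bits of an address inside the unrouted 2405:8100:8000::/48 subnet mentioned above), not Cloudflare’s actual implementation:</p>

```python
import ipaddress

SUBNET = ipaddress.IPv6Network("2405:8100:8000::/48")

def circuit_to_ip(circuit: int) -> ipaddress.IPv6Address:
    """Embed a 32-bit circuit number in the low bits of the subnet."""
    assert 0 <= circuit < 2**32
    return SUBNET.network_address + circuit

def ip_to_circuit(ip: ipaddress.IPv6Address) -> int:
    """Server side: recover the circuit number from the source address."""
    assert ip in SUBNET
    return int(ip) - int(SUBNET.network_address)

ip = circuit_to_ip(0x1234ABCD)
assert str(ip) == "2405:8100:8000::1234:abcd"
assert ip_to_circuit(ip) == 0x1234ABCD
```

The server never needs to know this is happening at the network layer; it just sees a source IP it can use for per-circuit reputation and rate limiting.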
    <div>
      <h3>Why should I trust Cloudflare?</h3>
      <a href="#why-should-i-trust-cloudflare">
        
      </a>
    </div>
    <p>You don’t need to. The Cloudflare Onion Service presents the exact same certificate that we would have used for direct requests to our servers, so you could audit this service using Certificate Transparency (which includes <a href="/introducing-certificate-transparency-and-nimbus/">Nimbus</a>, our certificate transparency log), to reveal any potential cheating.</p><p>Additionally, since Tor Browser 8.0 makes a new circuit for each hostname when connecting via an .onion alternative service, the circuit number cannot be used to link connections to two different sites together.</p><p>Note that all of this works without running any entry, relay, or exit nodes. Therefore the only requests that we see as a result of this feature are the requests that were headed for us anyway. In particular, since no new traffic is introduced, Cloudflare does not gain any more information about what people do on the internet.</p><p>Anonymity, ✓</p>
    <div>
      <h3>Is it faster?</h3>
      <a href="#is-it-faster">
        
      </a>
    </div>
<p>Tor isn’t known for being fast. One reason for that is the physical cost of having packets bounce around in a decentralized network. Connections made through the Cloudflare Onion Service don’t add to this cost because the number of hops is no more than usual.</p><p>Another reason is the bandwidth cost borne by exit node operators. This is an area where we hope this service can offer some relief, since it shifts traffic from exit nodes to our own servers, reducing exit node operating costs along with it.</p><p>BONUS: Performance, ✓</p>
    <div>
      <h3>How do I enable it?</h3>
      <a href="#how-do-i-enable-it">
        
      </a>
    </div>
    <p>Onion Routing is now available to all Cloudflare customers, enabled by default for Free and <a href="https://www.cloudflare.com/plans/pro/">Pro plans</a>. The option is available in the Crypto tab of the Cloudflare dashboard.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5hA2RUo2mh5WZDM5xSkwow/d407c5fb030c3df65cf64fdbad2fffcd/Screen-Shot-2018-09-20-at-7.36.11-AM.jpg" />
            
            </figure>
    <div>
      <h3>Browser support</h3>
      <a href="#browser-support">
        
      </a>
    </div>
    <p>We recommend using <a href="https://blog.torproject.org/new-release-tor-browser-80">Tor Browser 8.0</a>, which is the first stable release based on Firefox 60 ESR, and supports .onion Alt-Svc headers as well as HTTP/2. The new Tor Browser for Android (alpha) also supports this feature. You can check whether your connection is routed through an onion service or not in the Developer Tools window under the Network tab. If you're using the Tor Browser and you don't see the Alt-Svc in the response headers, that means you're already using the .onion route. In future versions of Tor Browser you'll be able to see this <a href="https://trac.torproject.org/projects/tor/ticket/27590">in the UI</a>.</p><blockquote><p>We've got BIG NEWS. We gave Tor Browser a UX overhaul.</p><p>Tor Browser 8.0 has a new user onboarding experience, an updated landing page, additional language support, and new behaviors for bridge fetching, displaying a circuit, and visiting .onion sites.<a href="https://t.co/fpCpSTXT2L">https://t.co/fpCpSTXT2L</a> <a href="https://t.co/xbj9lKTApP">pic.twitter.com/xbj9lKTApP</a></p><p>— The Tor Project (@torproject) <a href="https://twitter.com/torproject/status/1037397236257366017?ref_src=twsrc%5Etfw">September 5, 2018</a></p></blockquote><p>There is also interest from other privacy-conscious browser vendors. Tom Lowenthal, Product Manager for Privacy &amp; Security at <a href="https://brave.com/">Brave</a> said:</p><blockquote><p>Automatic upgrades to `.onion` sites will provide another layer of safety to Brave’s Private Browsing with Tor. We’re excited to implement this emerging standard.</p></blockquote>
    <div>
      <h3>Any last words?</h3>
      <a href="#any-last-words">
        
      </a>
    </div>
    <p>Similar to Opportunistic Encryption, Opportunistic Onions do not fully protect against attackers who can simply remove the alternative service header. Therefore, it is important to use <a href="https://www.eff.org/https-everywhere">HTTPS Everywhere</a> to secure the first request. Once a Tor circuit is established, subsequent requests should stay in the Tor network from source to destination.</p><p>As we maintain and <a href="https://trac.torproject.org/projects/tor/ticket/27502">improve</a> this service, we will share what we learn. In the meantime, feel free to try out this idea on <a href="https://github.com/mahrud/caddy-altonions">Caddy</a> and reach out to us with any comments or suggestions you might have.</p>
    <div>
      <h3>Acknowledgments</h3>
      <a href="#acknowledgments">
        
      </a>
    </div>
    <p>Patrick McManus of Mozilla for enabling support for .onion alternative services in Firefox; Arthur Edelstein of the Tor Project for reviewing and enabling HTTP/2 and HTTP Alternative Services in Tor Browser 8.0; Alexander Færøy and George Kadianakis of the Tor Project for adding support for Proxy Protocol in onion services; the entire Tor Project team for their invaluable assistance and discussions; and last, but not least, many folks at Cloudflare who helped with this project.</p>
    <div>
      <h4>Addresses used by the Cloudflare Onion Service</h4>
      <a href="#addresses-used-by-the-cloudflare-onion-service">
        
      </a>
    </div>
    
            <pre><code>cflarexljc3rw355ysrkrzwapozws6nre6xsy3n4yrj7taye3uiby3ad.onion
cflarenuttlfuyn7imozr4atzvfbiw3ezgbdjdldmdx7srterayaozid.onion
cflares35lvdlczhy3r6qbza5jjxbcplzvdveabhf7bsp7y4nzmn67yd.onion
cflareusni3s7vwhq2f7gc4opsik7aa4t2ajedhzr42ez6uajaywh3qd.onion
cflareki4v3lh674hq55k3n7xd4ibkwx3pnw67rr3gkpsonjmxbktxyd.onion
cflarejlah424meosswvaeqzb54rtdetr4xva6mq2bm2hfcx5isaglid.onion
cflaresuje2rb7w2u3w43pn4luxdi6o7oatv6r2zrfb5xvsugj35d2qd.onion
cflareer7qekzp3zeyqvcfktxfrmncse4ilc7trbf6bp6yzdabxuload.onion
cflareub6dtu7nvs3kqmoigcjdwap2azrkx5zohb2yk7gqjkwoyotwqd.onion
cflare2nge4h4yqr3574crrd7k66lil3torzbisz6uciyuzqc2h2ykyd.onion</code></pre>
            <p><a href="/subscribe/"><i>Subscribe to the blog</i></a><i> for daily updates on our announcements.</i></p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/15NmOPYhQ1eUrnNvavD3TX/f3878ea7031dee5fa0b8fcfffb5e6563/Crypto-Week.png" />
            
            </figure><p></p> ]]></content:encoded>
            <category><![CDATA[Crypto Week]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[Tor]]></category>
            <category><![CDATA[Privacy]]></category>
            <category><![CDATA[Privacy Pass]]></category>
            <category><![CDATA[Cryptography]]></category>
            <guid isPermaLink="false">7mmYqDqVbCUWqpT2wyf2OU</guid>
            <dc:creator>Mahrud Sayrafi</dc:creator>
        </item>
        <item>
            <title><![CDATA[RPKI and BGP: our path to securing Internet Routing]]></title>
            <link>https://blog.cloudflare.com/rpki-details/</link>
            <pubDate>Wed, 19 Sep 2018 12:01:00 GMT</pubDate>
            <description><![CDATA[ This article will talk about our approach to network security using technologies like RPKI to sign Internet routes and protect our users and customers from route hijacks and misconfigurations. ]]></description>
            <content:encoded><![CDATA[ <p></p><p>This article describes our approach to <a href="https://www.cloudflare.com/network-security/">network security</a>, using technologies like RPKI to sign Internet routes and protect our users and customers from route hijacks and misconfigurations. We are proud to announce that we have started signing our routes and deploying active filtering, using RPKI for routing decisions.</p><p>Back in April, our blog post on <a href="/bgp-leaks-and-crypto-currencies/">BGP and route-leaks</a> and other articles in the news highlighted how IP addresses can be redirected maliciously or by mistake. Despite its enormous scale, the underlying routing infrastructure, the bedrock of the Internet, has remained mostly unsecured.</p><p>At Cloudflare, we decided to secure our part of the Internet by protecting our customers and everyone using our services, including our recursive resolver <a href="https://www.cloudflare.com/learning/dns/what-is-1.1.1.1/">1.1.1.1</a>.</p>
    <div>
      <h3>From BGP to RPKI, how do we Internet?</h3>
      <a href="#from-bgp-to-rpki-how-do-we-internet">
        
      </a>
    </div>
    <p>A prefix is a range of IP addresses, for instance, <code>10.0.0.0/24</code>, whose first address is <code>10.0.0.0</code> and last address is <code>10.0.0.255</code>. A computer or a server usually has one. A router creates a list of reachable prefixes called a routing table and uses it to transport packets from a source to a destination.</p><p>On the Internet, network devices exchange routes via a protocol called <a href="https://www.cloudflare.com/learning/security/glossary/what-is-bgp/">BGP</a> (Border Gateway Protocol). BGP builds a map of the interconnections on the Internet so that packets can be sent across different networks to reach their final destination. BGP binds the separate networks together into the Internet.</p><p>This dynamic protocol is also what makes the Internet so resilient, by providing multiple paths in case a router along the way fails. A BGP announcement is usually composed of a <i>prefix</i>, the <i>destination</i> through which it can be reached, and the <i>Autonomous System Number</i> (ASN) that originated it.</p><p>IP addresses and Autonomous System Numbers are allocated by five Regional Internet Registries (RIRs): <a href="https://afrinic.net/">Afrinic</a> for Africa, <a href="https://www.apnic.net/">APNIC</a> for Asia-Pacific, <a href="https://www.arin.net">ARIN</a> for North America, <a href="https://www.lacnic.net">LACNIC</a> for Central and South America, and <a href="https://www.ripe.net">RIPE</a> for Europe, the Middle East, and Russia. Each one operates independently.</p>
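<p>The range arithmetic above is easy to verify with Python's standard <code>ipaddress</code> module; a quick sketch using the same example prefix:</p>

```python
import ipaddress

# A /24 prefix covers 256 addresses.
prefix = ipaddress.ip_network("10.0.0.0/24")

print(prefix.network_address)    # first address in the range: 10.0.0.0
print(prefix.broadcast_address)  # last address in the range: 10.0.0.255
print(prefix.num_addresses)      # 256
```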
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3fnGjueGh1e3tInixUT2zc/49f63762c184aacac85079da9e50b744/rirs-01.png" />
            
            </figure><p>With more than 700,000 IPv4 routes and 40,000 IPv6 routes announced by all Internet actors, it is difficult to know who owns which resource.</p><p>There is no simple relationship between the entity that has a prefix assigned, the one that announces it with an ASN, and the ones that receive or send packets with these IP addresses. An entity owning <code>10.0.0.0/8</code> may delegate a subset <code>10.1.0.0/24</code> of that space to another operator, while the prefix is announced through the AS of yet another entity.</p><p>A route leak or a route hijack is the illegitimate advertisement of an IP space. The term <i>route hijack</i> implies a malicious purpose, while a route leak usually happens because of a misconfiguration.</p><p>A change in route causes traffic to be redirected via other networks. Unencrypted traffic can be read and modified. HTTP web pages and DNS without DNSSEC are vulnerable to these exploits.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/wdvXA5fV7m7z6e5Xdtqx2/adce2d2dc0b79010b7c4a3e353fec50a/bgp-hijacking-technical-flow-1.png" />
            
            </figure><p>You can learn more about BGP Hijacking in our <a href="https://www.cloudflare.com/learning/security/glossary/bgp-hijacking/">Learning Center</a>.</p><p>When a peer detects an illegitimate announcement, they usually notify the origin and reconfigure their network to reject the invalid route. Unfortunately, detecting a leak and acting on it can take anywhere from a few minutes to a few days, more than enough time to steal cryptocurrencies, <a href="https://en.wikipedia.org/wiki/DNS_spoofing">poison a DNS</a> cache or make a website unavailable.</p><p>A few systems exist to document and prevent illegitimate BGP announcements.</p><p><b>The Internet Routing Registries (IRR)</b> are semi-public databases used by network operators to register their assigned Internet resources. Some database maintainers do not check whether the entry was actually made by the owner, nor whether the prefix has been transferred to somebody else. This makes the IRRs prone to error and not completely reliable.</p><p><b>Resource Public Key Infrastructure (RPKI)</b> is similar to the IRR “route” objects, but with cryptographic authentication added.</p><p>Here’s how it works: each RIR has a root certificate. From it, the RIR issues a signed certificate to a Local Internet Registry (LIR, a.k.a. a network operator) covering all the resources it has been assigned (IPs and ASNs). The LIR then signs a record binding a prefix to the origin AS it intends to use: this creates a ROA (Route Origin Authorization). ROAs are simple signed objects in an X.509-based PKI.</p><p>If you are used to the <a href="https://www.cloudflare.com/application-services/products/ssl/">SSL/TLS certificates</a> browsers rely on to authenticate the holder of a website, ROAs are the equivalent in the Internet routing world.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/790gd4F1LviExZpxP0HAFk/f1430645a7c769788129fb47c3b9dca6/roas_3x-1.png" />
            
            </figure>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4tQYkBINEbYbCSl7cM5Jvw/568c706d6256d0f44740aecb28297cd6/routing-rpki-2-01.png" />
            
            </figure>
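<p>Conceptually, checking an announcement against a set of ROAs is straightforward. The sketch below is a simplified illustration of origin validation in the spirit of RFC 6811, not Cloudflare's implementation; the <code>validate</code> helper, prefixes, and AS numbers are made up for the example:</p>

```python
import ipaddress

def validate(route_prefix, origin_asn, roas):
    """Classify a BGP route against a list of (prefix, maxLength, asn) ROAs:
    'valid' if a covering ROA matches the origin ASN and the announced
    prefix is no longer than maxLength, 'invalid' if covering ROAs exist
    but none matches, and 'not-found' if no ROA covers the prefix."""
    route = ipaddress.ip_network(route_prefix)
    covered = False
    for roa_prefix, max_length, roa_asn in roas:
        roa = ipaddress.ip_network(roa_prefix)
        if route.version == roa.version and route.subnet_of(roa):
            covered = True  # at least one ROA covers this announcement
            if origin_asn == roa_asn and route.prefixlen <= max_length:
                return "valid"
    return "invalid" if covered else "not-found"

# Illustrative ROA: 10.0.0.0/16 may be originated by AS64496, up to /20.
roas = [("10.0.0.0/16", 20, 64496)]

print(validate("10.0.0.0/20", 64496, roas))   # valid
print(validate("10.0.0.0/24", 64496, roas))   # invalid: longer than maxLength
print(validate("10.0.0.0/20", 64511, roas))   # invalid: wrong origin AS
print(validate("192.0.2.0/24", 64496, roas))  # not-found: no covering ROA
```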
    <div>
      <h3>Signing prefixes</h3>
      <a href="#signing-prefixes">
        
      </a>
    </div>
    <p>Each network operator owning and managing Internet resources (IP addresses, Autonomous System Numbers) has access to their Regional Internet Registry portal. Signing their prefixes through the portal or the API of their RIR is the easiest way to get started with RPKI.</p><p>Because of our global presence, Cloudflare has resources in all five RIR regions. With more than 800 prefix announcements across different ASNs, the first step was to ensure that the prefixes we were going to sign were correctly announced.</p><p>We started by signing our least-used prefixes, checked that traffic levels remained the same, and then signed more prefixes. Today about 25% of Cloudflare prefixes are signed, including our critical DNS servers and our <a href="https://one.one.one.one">public 1.1.1.1 resolver</a>.</p>
    <div>
      <h3>Enforcing validated prefixes</h3>
      <a href="#enforcing-validated-prefixes">
        
      </a>
    </div>
    <p>Signing the prefixes is one thing. Ensuring that the prefixes we receive from our peers match their certificates is another.</p><p>The first part is validating the certificate chain. This is done by synchronizing the RIR databases of ROAs through rsync (although there are newer proposals for <a href="https://tools.ietf.org/html/rfc8182">distribution over HTTPS</a>), then checking the signature of every ROA against the RIR’s certificate public key. Once the valid records are known, this information is sent to the routers.</p><p>Major vendors support a protocol called the <a href="https://tools.ietf.org/html/rfc6810">RPKI to Router Protocol</a> (abbreviated RTR). This is a simple protocol for passing a list of valid prefixes along with their origin ASN and expected mask length. However, while the RFC defines four different secure transport methods, vendors have only implemented the insecure one: routes sent in clear text over TCP can be tampered with.</p>
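<p>To give a sense of how lightweight RTR is, here is a sketch of the IPv4 Prefix PDU layout from RFC 6810 (version 0 of the protocol), serialized with Python's <code>struct</code> module; this is purely illustrative, with a made-up prefix and AS number:</p>

```python
import socket
import struct

def ipv4_prefix_pdu(prefix, prefix_len, max_len, asn, announce=True):
    """Serialize an RTR IPv4 Prefix PDU (RFC 6810, section 5.6): a fixed
    20-byte record carrying a prefix, its length bounds, and origin ASN."""
    return struct.pack(
        "!BBHIBBBB4sI",
        0,                     # protocol version 0 (RFC 6810)
        4,                     # PDU type 4: IPv4 Prefix
        0,                     # reserved, must be zero
        20,                    # total PDU length in bytes
        1 if announce else 0,  # flags: bit 0 set = announcement
        prefix_len,            # prefix length of the ROA
        max_len,               # maxLength of the ROA
        0,                     # reserved, must be zero
        socket.inet_aton(prefix),
        asn,
    )

pdu = ipv4_prefix_pdu("10.0.0.0", 16, 20, 64496)
print(len(pdu))  # 20 bytes per prefix record
```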
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7ahyNsczbXfzaYtuvvbJ6g/2bf47a4a24b03da433a29a456725cce3/RPKI-diagram-_3x-2.png" />
            
            </figure><p>With more than 150 routers across the globe, it would be unsafe to rely on these cleartext TCP sessions to our validator over the insecure and lossy Internet. We needed local distribution over a link we know to be secure and reliable.</p><p>One option we considered was to install an RPKI RTR server and a validator in each of our 150+ datacenters, but doing so would significantly increase operational cost and reduce our debugging capabilities.</p>
    <div>
      <h4>Introducing GoRTR</h4>
      <a href="#introducing-gortr">
        
      </a>
    </div>
    <p>We needed an easier way of passing around an RPKI cache securely. After some system design sessions, we settled on validating routes on a central validator, distributing the resulting list of valid routes via our own <a href="https://www.cloudflare.com/learning/cdn/what-is-a-cdn/">Content Delivery Network</a>, and running a lightweight local RTR server in each location. This server fetches the cache file over HTTPS and serves the routes over RTR.</p><p>Rolling out this system on all our PoPs using automation was straightforward, and we are progressively moving towards enforcing strict validation of RPKI-signed routes everywhere.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/ztRweRuwqRgBnTauHI0P9/215d2f4b6ee60a54a26956d55e3f3839/gortr-2-01.png" />
            
            </figure><p>To encourage adoption of Route Origin Validation on the Internet, we also want to provide this service to everyone, for free. You can already download our <a href="https://github.com/cloudflare/gortr">RTR server</a> and point it at the prefix cache we serve behind Cloudflare. Just configure your <a href="https://www.juniper.net/documentation/en_US/junos/topics/topic-map/bgp-origin-as-validation.html">Juniper</a> or <a href="https://www.cisco.com/c/en/us/td/docs/routers/asr9000/software/asr9k_r6-1/routing/configuration/guide/b-routing-cg-asr9k-61x/b-routing-cg-asr9k-61x_chapter_010.html#concept_A84818AD41744DFFBD094DA7FCD7FE8B">Cisco</a> router. If you do not want to use our prefix file, the server is also compatible with the RIPE RPKI Validator export format.</p><p>We are also working on providing a public RTR server using our own <a href="https://www.cloudflare.com/products/cloudflare-spectrum/">Spectrum service</a> so that you will not have to install anything; just make sure you peer with us! Cloudflare is present on many Internet Exchange Points, so we are one hop away from most routers.</p>
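<p>As a sketch, pointing a Juniper router at a local RTR cache looks roughly like the following. The session address <code>192.0.2.10</code> and port <code>8282</code> are placeholders for your own GoRTR instance, and the exact syntax can vary by platform and release, so check the vendor documentation linked above:</p>

```
routing-options {
    validation {
        group rpki-validator {
            session 192.0.2.10 {
                port 8282;
            }
        }
    }
}
```

<p>An import policy can then match on the resulting validation state (valid, invalid, or unknown) to reject or deprioritize invalid routes.</p>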
    <div>
      <h3>Certificate transparency</h3>
      <a href="#certificate-transparency">
        
      </a>
    </div>
    <p>A few months ago, <a href="/author/nick-sullivan/">Nick Sullivan</a> introduced our new <a href="/introducing-certificate-transparency-and-nimbus/">Nimbus Certificate Transparency Log</a>.</p><p>To track the certificates issued within the RPKI, our Crypto team created a new Certificate Transparency log called <a href="https://ct.cloudflare.com/logs/cirrus">Cirrus</a>, which includes the five RIRs’ root certificates as trust anchors. Certificate Transparency is a great tool for detecting bad behavior in the RPKI because it keeps a permanent record of every valid certificate submitted to it, in an append-only database that cannot be modified without detection. It also enables users to download the entire set of certificates via an HTTP API.</p>
    <div>
      <h3>Being aware of route leaks</h3>
      <a href="#being-aware-of-route-leaks">
        
      </a>
    </div>
    <p>We make extensive use of services like <a href="https://www.bgpmon.net">BGPmon</a> and other public observation services to ensure quick action if any of our prefixes are leaked. We also have internal BGP and BMP collectors, aggregating more than 60 million routes and processing live updates.</p><p>Our filters use this live feed to alert us when a suspicious route appears.</p>
    <div>
      <h3>The future</h3>
      <a href="#the-future">
        
      </a>
    </div>
    <p>The <a href="https://blog.benjojo.co.uk/post/are-bgps-security-features-working-yet-rpki">latest statistics</a> suggest that around 8.7% of IPv4 Internet routes are signed with RPKI, but only 0.5% of all networks apply strict RPKI validation. Even with RPKI validation enforced, a BGP actor could still impersonate your origin AS and advertise your BGP route through a malicious router configuration.</p><p>However, denser interconnection partially mitigates this, and Cloudflare already has an extensive network of private and public interconnections. To be fully effective, RPKI must be deployed by multiple major network operators.</p><p>As <a href="http://instituut.net/~job/">Job Snijders</a> of NTT Communications, who has been at the forefront of the effort to secure Internet routing, puts it:</p><blockquote><p>If the Internet's major content providers use RPKI and validate routes, the impact of BGP attacks is greatly reduced because protected paths are formed back and forth. It'll only take a small specific group of densely connected organizations to positively impact the Internet experience for billions of end users.</p></blockquote><p>RPKI is not a bullet-proof solution for securing all routing on the Internet, but it represents the first milestone in moving from trust-based to authentication-based routing. Our intention is to demonstrate that it can be done simply and cost-effectively. We invite operators of critical Internet infrastructure to follow us in a large-scale deployment.</p><p><a href="/subscribe/"><i>Subscribe to the blog</i></a><i> for daily updates on our announcements.</i></p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1H1JDnSWx28sdAjdF3sd0Z/b5425cf515b60c2c3a9be6b2420d8a3b/CRYPTO-WEEK-banner_2x.png" />
            
            </figure><p></p> ]]></content:encoded>
            <category><![CDATA[Crypto Week]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[RPKI]]></category>
            <category><![CDATA[BGP]]></category>
            <category><![CDATA[Cryptography]]></category>
            <guid isPermaLink="false">2D6tCrWBtiucUXYsoEFWJZ</guid>
            <dc:creator>Jérôme Fleury</dc:creator>
            <dc:creator>Louis Poinsignon</dc:creator>
        </item>
    </channel>
</rss>