
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/">
    <channel>
        <title><![CDATA[ The Cloudflare Blog ]]></title>
        <description><![CDATA[ Get the latest news on how products at Cloudflare are built, technologies used, and join the teams helping to build a better Internet. ]]></description>
        <link>https://blog.cloudflare.com</link>
        <atom:link href="https://blog.cloudflare.com/" rel="self" type="application/rss+xml"/>
        <language>en-us</language>
        <image>
            <url>https://blog.cloudflare.com/favicon.png</url>
            <title>The Cloudflare Blog</title>
            <link>https://blog.cloudflare.com</link>
        </image>
        <lastBuildDate>Sat, 04 Apr 2026 10:44:23 GMT</lastBuildDate>
        <item>
            <title><![CDATA[Supporting the latest version of the Privacy Pass Protocol]]></title>
            <link>https://blog.cloudflare.com/supporting-the-latest-version-of-the-privacy-pass-protocol/</link>
            <pubDate>Mon, 28 Oct 2019 13:00:00 GMT</pubDate>
            <description><![CDATA[ At Cloudflare, we are committed to supporting and developing new privacy-preserving technologies that benefit all Internet users. In November 2017, we announced server-side support for the Privacy Pass protocol, a piece of work developed in collaboration with the academic community. ]]></description>
            <content:encoded><![CDATA[ 
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/445gSyr8pFihUh221BqGV5/0e704d8d2ecd834689eea53c04554be5/Privacy-Pass-_2x-2.png" />
            
</figure><p>At Cloudflare, we are committed to supporting and developing new privacy-preserving technologies that benefit all Internet users. In November 2017, we announced server-side support for the <a href="https://blog.cloudflare.com/cloudflare-supports-privacy-pass/">Privacy Pass protocol</a>, a piece of work developed in <a href="https://petsymposium.org/2018/files/papers/issue3/popets-2018-0026.pdf">collaboration with the academic community</a>. Privacy Pass, in a nutshell, allows clients to provide proof of trust <a href="https://privacypass.github.io/protocol/">without revealing where and when the trust was provided</a>. The aim of the protocol is to allow anyone to prove they are trusted by a server, without that server being able to track the user via the trust that was assigned.</p><p>On a technical level, Privacy Pass clients receive attestation tokens from a server, which can then be redeemed in the future. These tokens are provided when a server deems the client to be trusted; for example, after they have logged into a service or if they prove certain characteristics. The redeemed tokens are cryptographically unlinkable to the attestation originally provided by the server, and so they do not reveal anything about the client.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7LVQnmDxgw0kv43MipnEO5/ed93518b30730567d1780e22fa46e606/imageLikeEmbed--2-.png" />
            
            </figure>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/62mh0kvdwZSIUQhkLHmqLt/ea96c6a856a3b53c7ceb8a3b52c6dd3d/imageLikeEmbed--1-.png" />
            
</figure><p>To use Privacy Pass, clients can install an <a href="https://github.com/privacypass/challenge-bypass-extension">open-source</a> browser extension available in <a href="https://chrome.google.com/webstore/detail/privacy-pass/ajhmfdgkijocedmfjonnpjfojldioehi?hl=en">Chrome</a> &amp; <a href="https://addons.mozilla.org/en-US/firefox/addon/privacy-pass/">Firefox</a>. There have been over 150,000 individual downloads of Privacy Pass worldwide; approximately 130,000 in Chrome and more than 20,000 in Firefox. The extension is supported by Cloudflare to make websites more accessible for users. This complements previous work, including the launch of <a href="https://blog.cloudflare.com/cloudflare-onion-service/">Cloudflare onion services</a> to help improve accessibility for users of the Tor Browser.</p><p>The initial release was almost two years ago, and it was followed up with a <a href="https://petsymposium.org/2018/files/papers/issue3/popets-2018-0026.pdf">research publication</a> that was presented at the <a href="https://www.youtube.com/watch?v=9DsUi-UF2pM&amp;list=PLWSQygNuIsPd6YJmGV9kn1mP2A6-IBCoU&amp;index=10">Privacy Enhancing Technologies Symposium 2018</a> (winning a Best Student Paper award). Since then, Cloudflare has been working with the wider community to build on the initial design and improve Privacy Pass. 
We’ll be talking about the work that we have done to develop the existing implementations, alongside the protocol itself.</p><h1>What’s new?</h1><p><b>Support for Privacy Pass v2.0 browser extension:</b></p><ul><li><p>Easier configuration of the request workflow.</p></li><li><p>Integration with a new service provider (hCaptcha).</p></li><li><p>Compliance with the hash-to-curve draft.</p></li><li><p>Keys can now be rotated without releasing a new version of the extension.</p></li><li><p>Available in <a href="https://chrome.google.com/webstore/detail/privacy-pass/ajhmfdgkijocedmfjonnpjfojldioehi?hl=en">Chrome</a> and <a href="https://addons.mozilla.org/en-US/firefox/addon/privacy-pass/">Firefox</a> (works best with up-to-date browser versions).</p></li></ul><p><b>Rolling out a new server backend using the Cloudflare Workers platform:</b></p><ul><li><p>Cryptographic operations performed using the internal V8 engine.</p></li><li><p>Provides a public redemption API for Cloudflare Privacy Pass v2.0 tokens.</p></li><li><p>Available by making POST requests to <a href="https://privacypass.cloudflare.com/api/redeem">https://privacypass.cloudflare.com/api/redeem</a>. See the documentation for <a href="https://privacypass.github.io/api-redeem">example usage</a>.</p></li><li><p>Only compatible with extension v2.0 (check that you have updated!).</p></li></ul><p><b>Standardization:</b></p><ul><li><p>Continued development of the oblivious pseudorandom function (OPRF) <a href="https://datatracker.ietf.org/doc/draft-irtf-cfrg-voprf/">draft</a> in prime-order groups with CFRG@IRTF.</p></li><li><p><a href="https://github.com/alxdavids/draft-privacy-pass">New draft</a> specifying the Privacy Pass protocol.</p></li></ul><h1>Extension v2.0</h1><p>Since the original release, we’ve been working on a number of new features. Today we’re excited to announce support for version 2.0 of the extension, its first major update. 
The extension continues to be available for <a href="https://chrome.google.com/webstore/detail/privacy-pass/ajhmfdgkijocedmfjonnpjfojldioehi?hl=en">Chrome</a> and <a href="https://addons.mozilla.org/en-US/firefox/addon/privacy-pass/">Firefox</a>. You may need to download v2.0 manually from the store if you have auto-updates disabled in your browser.</p><p>The extension remains under active development and we still regard our support as being in beta. This will continue to be the case while the draft specification of the protocol is being written in collaboration with the wider community.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6WHnz7X5VULjd2rQPvSCYM/3fb47ca24f01788504fd4768813267fc/pasted-image-0-2.png" />
            
            </figure>
    <h2 id="new-integrations">New Integrations</h2>
    <p>The client implementation uses the <a href="https://developer.chrome.com/extensions/webRequest">WebRequest API</a> to look for certain types of HTTP requests. When these requests are spotted, they are rewritten to include some cryptographic data required for the Privacy Pass protocol. This allows Privacy Pass providers receiving this data to authorize access for the user.</p><p>For example, a user may receive Privacy Pass tokens for completing some server security checks. These tokens are stored by the browser extension, and any future request that needs similar security clearance can be modified to add a stored token as an extra HTTP header. The server can then check the client token and verify that the client has the correct authorization to proceed.</p>
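<p>As a rough sketch of the mechanism described above, the snippet below shows how an extension might attach a stored single-use token as an extra HTTP header. The token store and header-handling logic here are simplified illustrations, not the extension's actual code (which lives in the privacypass/challenge-bypass-extension repository); the <code>challenge-bypass-token</code> header name is an assumption for exposition.</p>

```javascript
// Simplified token store: redeemable tokens issued earlier by a provider.
const tokenStore = [];

// Pop one single-use token and append it as an extra request header.
// Returns the headers unchanged when no tokens are available.
function attachToken(requestHeaders) {
  const token = tokenStore.shift();
  if (token === undefined) return requestHeaders;
  return [...requestHeaders, { name: "challenge-bypass-token", value: token }];
}

// In a real extension this rewrite would be registered via the WebRequest API:
// chrome.webRequest.onBeforeSendHeaders.addListener(
//   (details) => ({ requestHeaders: attachToken(details.requestHeaders) }),
//   { urls: ["<all_urls>"] },
//   ["blocking", "requestHeaders"]
// );
```

<p>Because the token is removed from the store when it is spent, each token can authorize at most one request.</p>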
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1EOFEhhNfe23pe6Beqsprx/f11bd08106711abf443dd53afd45f013/imageLikeEmbed--4-.png" />
            
</figure><p>While Cloudflare supports a particular type of request flow, we cannot expect every service provider to follow exactly the same interaction characteristics. One of the major changes in the v2.0 extension is a technical rewrite that instead uses a central configuration file. The config is specified in the <a href="https://github.com/privacypass/challenge-bypass-extension/blob/master/src/ext/config.js">source code</a> of the extension and allows easier modification of the browsing characteristics that initiate Privacy Pass actions. This makes it possible to add new, completely different request flows by simply cloning and adapting the configuration for new providers.</p><p>To demonstrate that such integrations are now possible with services beyond Cloudflare, a new version of the extension will soon be rolling out that is supported by the CAPTCHA provider <a href="https://www.hcaptcha.com/">hCaptcha</a>. Users that solve ephemeral challenges provided by hCaptcha will receive privacy-preserving tokens that are redeemable at other hCaptcha customer sites.</p>
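<p>To illustrate the idea of a per-provider configuration, here is a sketch of what one entry might look like. All field names below are invented for exposition; the actual schema is defined in <code>src/ext/config.js</code> of the challenge-bypass-extension repository.</p>

```javascript
// Hypothetical provider entry: which domains issue tokens, where they may be
// redeemed, and where the trusted key commitments live.
const exampleProviderConfig = {
  id: 2,                                // unique provider identifier
  name: "example-provider",
  issueDomains: ["issue.example.com"],  // requests here trigger issuance
  redeemDomains: ["*.example.com"],     // requests here may spend tokens
  tokensPerRequest: 30,                 // tokens requested per issuance
  commitmentsUrl: "https://github.com/privacypass/ec-commitments",
};

// Minimal wildcard match: "*.example.com" matches any subdomain.
function matchesRedeemDomain(config, host) {
  return config.redeemDomains.some((d) =>
    d.startsWith("*.") ? host.endsWith(d.slice(1)) : host === d
  );
}
```

<p>Adding a new provider then amounts to appending another such entry, rather than changing the extension's request-handling logic.</p>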
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/66fIxiUIFsXF1mShWhpp9U/8e1e7cf73b844e2128b8766a8dde95dc/image-8-1.png" />
            
</figure><p><i>“hCaptcha is focused on user privacy, and supporting Privacy Pass is a natural extension of our work in this area. We look forward to working with Cloudflare and others to make this a common and widely adopted standard, and are currently exploring other applications. Implementing Privacy Pass into our globally distributed service was relatively straightforward, and we have enjoyed working with the Cloudflare team to improve the open source Chrome browser extension in order to deliver the best experience for our users.”</i></p><p>— <b>Eli-Shaoul Khedouri</b>, founder of hCaptcha</p><p>This hCaptcha integration with the Privacy Pass browser extension acts as a proof-of-concept in establishing support for new services. Any new providers that would like to integrate with the Privacy Pass browser extension can do so simply by making a PR to the <a href="https://github.com/privacypass/challenge-bypass-extension/">open-source repository</a>.</p>
    <h2 id="improved-cryptographic-functionality">Improved cryptographic functionality</h2>
    <p>After the release of v1.0 of the extension, there were features that were still unimplemented. These included proper zero-knowledge proof validation for checking that the server was always using the same committed key. In v2.0 this functionality has been completed, verifiably preventing a malicious server from attempting to deanonymize users by using a different key for each request.</p><p>The cryptographic operations required for Privacy Pass are performed using <a href="https://blog.cloudflare.com/a-relatively-easy-to-understand-primer-on-elliptic-curve-cryptography/">elliptic curve cryptography</a> (ECC). The extension currently uses the <a href="https://www.secg.org/SEC2-Ver-1.0.pdf">NIST P-256</a> curve, for which we have included some optimizations. First, it is now possible to store elliptic curve points in both compressed and uncompressed data formats. This means that browser storage can be reduced by 50%, and that server responses can be made smaller too.</p>
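<p>The 50% saving follows directly from the encodings. A minimal sketch, using the standard SEC1 conventions for P-256: an uncompressed point is <code>0x04 || x || y</code> (65 bytes), while a compressed point keeps only <code>x</code> plus a one-byte prefix recording the parity of <code>y</code> (33 bytes), since <code>y</code> can later be recovered from the curve equation.</p>

```javascript
const FIELD_BYTES = 32; // P-256 coordinates are 256-bit integers

// Encoded size of a point: 1 prefix byte, then one or two coordinates.
function encodedLength(compressed) {
  return compressed ? 1 + FIELD_BYTES : 1 + 2 * FIELD_BYTES;
}

// Compress a point: prefix 0x02/0x03 records the parity of y, which is
// enough to recover y from the curve equation during decompression.
function compress(x, yIsOdd) {
  const out = new Uint8Array(1 + FIELD_BYTES);
  out[0] = yIsOdd ? 0x03 : 0x02;
  out.set(x, 1); // x is a 32-byte big-endian Uint8Array
  return out;
}
```
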
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6k9bRq8TswnyzrNdFxl0km/9db49e7d648d03b225077aecb6ee0fa0/imageLikeEmbed--5-.png" />
            
</figure><p>Secondly, support has been added for hashing to the P-256 curve using the “Simplified Shallue-van de Woestijne-Ulas” (SSWU) method specified in an ongoing draft (<a href="https://tools.ietf.org/html/draft-irtf-cfrg-hash-to-curve-03">https://tools.ietf.org/html/draft-irtf-cfrg-hash-to-curve-03</a>) for standardizing encodings for hashing to elliptic curves. The implementation is compliant with the specification of the “P256-SHA256-SSWU-” ciphersuite in this draft.</p><p>These changes have a dual advantage: first, the P-256 hash-to-curve implementation is now compliant with the draft specification; second, this ciphersuite removes the need for probabilistic methods such as <a href="https://tools.ietf.org/html/draft-irtf-cfrg-vrf-05#section-5.4.1.1">hash-and-increment</a>. The hash-and-increment method has a non-negligible chance of failure, and its running time is highly dependent on the hidden client input. While it is not clear how to abuse these timing attack vectors currently, using the SSWU method should reduce the potential for attacking the implementation and learning the client input.</p>
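<p>For intuition, here is a toy version of the hash-and-increment idea over a small invented curve (not P-256): starting from a hash-derived <code>x</code>, increment until <code>x^3 + ax + b</code> is a square mod <code>p</code>. The number of loop iterations depends on the (secret) input, which is exactly the timing variability that the constant-step SSWU map avoids.</p>

```javascript
// Toy curve y^2 = x^3 + 2x + 3 over F_10007 (parameters invented for demo).
const p = 10007n, a = 2n, b = 3n;

// Square-and-multiply modular exponentiation over BigInt.
function modPow(base, exp, mod) {
  let result = 1n;
  base %= mod;
  while (exp > 0n) {
    if (exp & 1n) result = (result * base) % mod;
    base = (base * base) % mod;
    exp >>= 1n;
  }
  return result;
}

// Euler's criterion: z is a square mod p iff z^((p-1)/2) = 1 (or z = 0).
const isSquare = (z) => z === 0n || modPow(z, (p - 1n) / 2n, p) === 1n;

// Variable-time mapping: the step count leaks information about x0.
function hashAndIncrement(x0) {
  let x = x0 % p, steps = 0;
  for (;;) {
    steps++;
    const rhs = (x * x * x + a * x + b) % p;
    if (isSquare(rhs)) return { x, steps }; // x-coordinate of a curve point
    x = (x + 1n) % p;
  }
}
```

<p>Roughly half of all residues are squares, so the loop terminates quickly on average, but the exact count varies per input.</p>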
    <h2 id="key-rotation">Key rotation</h2>
    <p>As we mentioned above, verifying that the server is always using the same key is an important part of ensuring the client’s privacy. This ensures that the server cannot segregate the user base and reduce client privacy by using different secret keys for each client that it interacts with. The server guarantees that it’s always using the same key by publishing a commitment to its public key somewhere that the client can access.</p><p>Every time the server issues Privacy Pass tokens to the client, it also produces a <a href="https://en.wikipedia.org/wiki/Zero-knowledge_proof">zero-knowledge proof</a> that it has produced these tokens using the correct key.</p>
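<p>The proof in question is a discrete-log equality (Chaum-Pedersen style) proof. Below is a minimal sketch over a toy group, the order-11 subgroup of Z<sub>23</sub><sup>*</sup>; the fixed nonce and the toy challenge function are illustrative only, and a real deployment works over P-256 with a proper hash-based Fiat-Shamir challenge and fresh randomness.</p>

```javascript
const P = 23n, Q = 11n, G = 2n; // G generates a subgroup of prime order Q

// Square-and-multiply modular exponentiation over BigInt.
function modPow(base, exp, mod) {
  let result = 1n;
  base %= mod;
  while (exp > 0n) {
    if (exp & 1n) result = (result * base) % mod;
    base = (base * base) % mod;
    exp >>= 1n;
  }
  return result;
}

// Toy Fiat-Shamir challenge; a real proof hashes all inputs with SHA-256.
const challenge = (...xs) => xs.reduce((acc, x) => (acc * 31n + x) % Q, 7n);

// Prover knows k with h = G^k and v = u^k, and proves both discrete logs
// are equal without revealing k.
function proveDLEQ(k, u) {
  const h = modPow(G, k, P), v = modPow(u, k, P);
  const t = 6n; // nonce; must be fresh and uniformly random in practice
  const A = modPow(G, t, P), B = modPow(u, t, P);
  const c = challenge(G, h, u, v, A, B);
  const s = ((t - c * k) % Q + Q) % Q;
  return { h, v, c, s };
}

// Verifier recomputes A = G^s * h^c and B = u^s * v^c, then rechecks c.
function verifyDLEQ(u, { h, v, c, s }) {
  const A = (modPow(G, s, P) * modPow(h, c, P)) % P;
  const B = (modPow(u, s, P) * modPow(v, c, P)) % P;
  return challenge(G, h, u, v, A, B) === c;
}
```

<p>Because the same exponent <code>k</code> ties <code>h</code> (the committed public key) to <code>v</code> (the evaluated token), a server that switched keys per user could no longer produce a proof that verifies against the published commitment.</p>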
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4SUH19vNZ1MZ3G7hvTNN40/6717cc2c64b8fc3a69efb014d76411c8/imageLikeEmbed--6-.png" />
            
</figure><p>Before the extension stores any tokens, it first verifies the proof against the commitments it knows. Previously, these commitments were stored directly in the source code of the extension. This meant that if the server wanted to rotate its key, a new version of the extension had to be released, which was unnecessarily difficult. The extension has been modified so that the commitments are stored in a <a href="https://github.com/privacypass/ec-commitments">trusted location</a> that the client can access when it needs to verify the server response. Currently this location is a separate Privacy Pass <a href="https://github.com/privacypass/ec-commitments">GitHub repository</a>. For those that are interested, we have provided a more detailed description of the new commitment format in Appendix A at the end of this post.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1ak9ZJ0QWKpWnQBAa4oOpe/59a1907df42318f2e63dbd889c80f839/imageLikeEmbed--7-.png" />
            
</figure><h1>Implementing server-side support in Workers</h1><p>So far we have focused on client-side updates. As part of supporting v2.0 of the extension, we are rolling out some major changes to the server-side support that Cloudflare uses. For version 1.0, we used a <a href="https://github.com/privacypass/challenge-bypass-server">Go implementation</a> of the server. In v2.0 we are introducing a new server implementation that runs on the <a href="https://www.cloudflare.com/products/cloudflare-workers/">Cloudflare Workers</a> platform. This server implementation is only compatible with v2.0 releases of Privacy Pass, so you may need to update your extension if you have auto-updates turned off in your browser.</p><p>Our server will run at <a href="https://privacypass.cloudflare.com">https://privacypass.cloudflare.com</a>, and all Privacy Pass requests sent to the Cloudflare edge are handled by Worker scripts that run on this domain. Our implementation has been rewritten in JavaScript, with cryptographic operations running in the <a href="https://v8.dev/">V8 engine</a> that powers Cloudflare Workers. This means that we are able to run highly efficient and constant-time cryptographic operations. On top of this, we benefit from the enhanced performance provided by running our code in the Workers platform, as close to the user as possible.</p>
    <h2 id="webcrypto-support">WebCrypto support</h2>
    <p>First, you may be asking: how do we implement cryptographic operations in Cloudflare Workers? Currently, support for performing cryptographic operations is provided in the Workers platform via the <a href="https://developers.cloudflare.com/workers/reference/apis/web-crypto/">WebCrypto API</a>. This API allows users to perform operations such as cryptographic hashing, alongside more complicated functionality like ECDSA signatures.</p><p>In the Privacy Pass protocol, as we’ll discuss a bit later, the main cryptographic operations are performed by a primitive known as a verifiable oblivious pseudorandom function (VOPRF). Such a protocol allows a client to learn function outputs computed by a server, without revealing to the server what their actual input was. The verifiable aspect means that the server must also prove (in a publicly verifiable way) that the evaluation it passes to the user is correct. Such a function is pseudorandom because the server output is indistinguishable from a random sequence of bytes.</p>
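<p>To make the VOPRF flow concrete, here is a toy blind/evaluate/unblind round over the order-11 subgroup of Z<sub>23</sub><sup>*</sup>. The names and parameters are illustrative; a real deployment works over P-256, hashes the client input to a curve point first, and attaches the DLEQ proof for verifiability.</p>

```javascript
const P = 23n, Q = 11n; // toy group of prime order Q inside Z_P^*
const K = 4n;           // server's secret OPRF key

// Square-and-multiply modular exponentiation over BigInt.
function modPow(base, exp, mod) {
  let result = 1n;
  base %= mod;
  while (exp > 0n) {
    if (exp & 1n) result = (result * base) % mod;
    base = (base * base) % mod;
    exp >>= 1n;
  }
  return result;
}

// Client blinds its element M with a random scalar r: the server sees M^r,
// which reveals nothing about M.
function blind(M, r) { return modPow(M, r, P); }

// Server evaluates its PRF on the blinded element (it never sees M).
function evaluate(blinded) { return modPow(blinded, K, P); }

// Client strips the blinding: (M^(rK))^(r^-1) = M^K. Exponents live mod the
// group order Q, and r^-1 comes from Fermat's little theorem (Q is prime).
function unblind(evaluated, r) {
  const rInv = modPow(r, Q - 2n, Q);
  return modPow(evaluated, rInv, P);
}
```

<p>The client ends up with <code>M^K</code>, a value only the key holder could have produced, while the server has only ever seen the blinded element.</p>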
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2KJRvlrFMHsy9QlSofzWVs/b208b4b31a1169d2a3b60ffb396049a8/imageLikeEmbed--8-.png" />
            
</figure><p>The VOPRF functionality requires a server to perform low-level ECC operations that are not currently exposed in the WebCrypto API. We weighed up the possible ways of working around this requirement. First, we tried using the WebCrypto API in a non-standard manner, using EC Diffie-Hellman key exchange as a method for performing the scalar multiplication that we needed. We also tried to implement all operations in pure JavaScript. Unfortunately, both methods were unsatisfactory: they would either mean integrating with large external cryptographic libraries, or they would be far too slow to be used in a performant Internet setting.</p><p>In the end, we settled on a solution that adds the functions necessary for Privacy Pass to the internal WebCrypto interface in the Cloudflare V8 JavaScript engine. This algorithm mimics the sign/verify interface provided by signature algorithms like ECDSA. In short, we use the <code>sign()</code> function to issue Privacy Pass tokens to the client, while <code>verify()</code> can be used by the server to verify data that is redeemed by the client. These functions are implemented directly in the V8 layer and so they are much more performant and secure (running in constant time, for example) than pure JS alternatives.</p><p>The Privacy Pass WebCrypto interface is not currently available for public usage. If it turns out there is enough interest in using this additional algorithm in the Workers platform, then we will consider making it public.</p>
    <h3 id="applications">Applications</h3>
    <p>In recent times, VOPRFs have been shown to be a highly useful primitive in establishing many cryptographic tools. Aside from Privacy Pass, they are also essential for constructing password-authenticated key exchange protocols such as <a href="https://datatracker.ietf.org/doc/draft-krawczyk-cfrg-opaque/">OPAQUE</a>. They have also been used in designs of <a href="https://eprint.iacr.org/2016/799">private set intersection</a>, <a href="https://eprint.iacr.org/2014/650">password-protected secret-sharing</a> protocols, and <a href="https://medium.com/least-authority/the-path-from-s4-to-privatestorage-ae9d4a10b2ae">privacy-preserving access-control</a> for private data storage.</p>
    <h2 id="public-redemption-api">Public redemption API</h2>
    <p>Writing the server in Cloudflare Workers means that we will be providing server-side support for Privacy Pass on a <a href="https://privacypass.cloudflare.com">public domain</a>! While we only issue tokens to clients after we are sure that we can trust them, anyone will be able to redeem the tokens using our public redemption API at <a href="https://privacypass.cloudflare.com/api/redeem">https://privacypass.cloudflare.com/api/redeem</a>. As we roll out the server-side component worldwide, you will be able to interact with this API and verify Cloudflare Privacy Pass tokens <a href="https://privacypass.github.io/api-redeem">independently of the browser extension</a>.</p>
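<p>A call to a redemption endpoint might look something like the sketch below. The JSON field names here are placeholders invented for illustration; consult the documentation at https://privacypass.github.io/api-redeem for the actual request and response formats.</p>

```javascript
// Build the fetch options for a redemption POST. The body schema
// ({ token, host, path }) is a hypothetical placeholder, not the real API.
function buildRedeemRequest(token, host, path) {
  return {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ token, host, path }),
  };
}

// Usage (browser or Workers fetch):
// const res = await fetch("https://privacypass.cloudflare.com/api/redeem",
//                         buildRedeemRequest(storedToken, "example.com", "/"));
// const verdict = await res.json();
```
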
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7zI9w6HIR8cXOr884kCw8m/601fe8c196b434ac50fdb12eaca63927/imageLikeEmbed--9-.png" />
            
</figure><p>This means that any service can accept Privacy Pass tokens from a client that were issued by Cloudflare, and then verify them with the Cloudflare redemption API. Using the result provided by the API, external services can check whether Cloudflare has authorized the user in the past.</p><p>We think that this will benefit other service providers because they can use the attestation of authorization from Cloudflare in their own decision-making processes, without sacrificing the privacy of the client at any stage. We hope that this ecosystem can grow further, with potentially more services providing public redemption APIs of their own. With a more diverse set of issuers, these attestations will become more useful.</p><p>By running our server on a public domain, we are effectively a customer of the Cloudflare Workers product. This means that we are also able to make use of <a href="https://developers.cloudflare.com/workers/reference/storage/">Workers KV</a> for protecting against malicious clients. In particular, servers must check that clients are not re-using tokens during the redemption phase. The read performance of Workers KV makes it an obvious choice for providing double-spend protection globally.</p><p>If you would like to use the public redemption API, we provide documentation for using it at <a href="https://privacypass.github.io/api-redeem">https://privacypass.github.io/api-redeem</a>. We also provide some example requests and responses in Appendix B at the end of the post.</p><h1>Standardization &amp; new applications</h1><p>In tandem with the recent engineering work that we have been doing on supporting Privacy Pass, we have been collaborating with the wider community in an attempt to standardize both the <a href="https://datatracker.ietf.org/doc/draft-irtf-cfrg-voprf/">underlying VOPRF functionality</a>, and the <a href="https://github.com/alxdavids/draft-privacy-pass">protocol itself</a>. 
While the process of standardization for oblivious pseudorandom functions (OPRFs) has been running for over a year, the recent efforts to standardize the Privacy Pass protocol have been driven by applications that have emerged in the last few months.</p><p>Standardizing protocols and functionality is an important way of providing interoperable, secure, and performant interfaces for running protocols on the Internet. This makes it easier for developers to write their own implementations of this complex functionality. The process also provides helpful peer reviews from experts in the community, which can lead to better surfacing of potential security risks that should be mitigated in any implementation. Other benefits include coming to a consensus on the most reliable, scalable and performant protocol designs for all possible applications.</p>
    <h2 id="oblivious-pseudorandom-functions">Oblivious pseudorandom functions</h2>
    <p>Oblivious pseudorandom functions (OPRFs) are a generalization of VOPRFs that do not require the server to prove that they have evaluated the functionality properly. Since July 2019, we have been collaborating <a href="https://datatracker.ietf.org/doc/draft-irtf-cfrg-voprf/">on a draft</a> with the <a href="https://irtf.org/cfrg">Crypto Forum Research Group</a> (CFRG) at the Internet Research Task Force (IRTF) to standardize an OPRF protocol that operates in prime-order groups. This is a generalization of the setting that is provided by <a href="https://blog.cloudflare.com/tag/elliptic-curves/">elliptic curves</a>. This is the same VOPRF construction that was <a href="https://blog.cloudflare.com/privacy-pass-the-math/">originally specified</a> by the Privacy Pass protocol and is based heavily on the original protocol design from the <a href="https://eprint.iacr.org/2014/650.pdf">paper of Jarecki, Kiayias and Krawczyk</a>.</p><p>One of the recent changes that we've made in the draft is to increase the size of the key that we consider for performing OPRF operations on the server-side. Existing research suggests that it is possible to create specific queries that can lead to small amounts of the key being leaked. For keys that provide only 128 bits of security this can be a problem, as leaking too many bits would reduce security below currently accepted levels. To counter this, we have effectively increased the minimum key size to 196 bits. This prevents this leakage from becoming an attack vector using any practical methods. We discuss these attacks in more detail later on when discussing our future plans for VOPRF development.</p>
    <h2 id="recent-applications-and-standardizing-the-protocol">Recent applications and standardizing the protocol</h2>
    <p>The application that we demonstrated when originally supporting Privacy Pass was always intended as a proof-of-concept for the protocol. Over the past few months, a number of new possibilities have arisen in areas that go far beyond what was previously envisaged.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/SceyvCGOJ7quMiDSyPxR5/9785c42a79cf313d05ef5eb5a113d2f2/imageLikeEmbed--10-.png" />
            
</figure><p>For example, the <a href="https://github.com/WICG/trust-token-api">trust token API</a>, developed by the <a href="https://wicg.io/">Web Incubator Community Group</a>, has been proposed as an interface for using Privacy Pass. This application allows third-party vendors to check that a user has received a trust attestation from a set of central issuers. This allows the vendor to make decisions about the honesty of a client without having to associate a behavior profile with the identity of the user. The objective is to protect against fraudulent activity from users who are not trusted by the central issuer set. Checking trust attestations with central issuers would be possible using similar redemption APIs to the one that <a href="https://privacypass.cloudflare.com">we have introduced</a>.</p><p>A <a href="https://engineering.fb.com/security/partially-blind-signatures/">separate piece of work from Facebook</a> details a similar application for preventing fraudulent behavior that may also be compatible with the Privacy Pass protocol. Finally, other applications have arisen in the areas of providing access to <a href="https://medium.com/least-authority/the-path-from-s4-to-privatestorage-ae9d4a10b2ae">private storage</a> and <a href="https://github.com/brave/brave-browser/wiki/Security-and-privacy-model-for-ad-confirmations">establishing security and privacy models in advertisement confirmations</a>.</p>
    <h3 id="a-new-draft">A new draft</h3>
    <p>With the applications above in mind, we have recently started collaborative work on a <a href="https://github.com/alxdavids/draft-privacy-pass">new IETF draft</a> that specifically lays out the required functionality provided by the Privacy Pass protocol as a whole. Our aim is to develop, alongside wider industrial partners and the academic community, a functioning specification of the Privacy Pass protocol. We hope that by doing this we will be able to design a base-layer protocol that can then be used as a cryptographic primitive in wider applications that require some form of lightweight authorization. Our plan is to present the first version of this draft at the upcoming <a href="https://www.ietf.org/how/meetings/106/">IETF 106 meeting</a> in Singapore next month.</p><p>The draft is still in the early stages of development and we are actively looking for people who are interested in helping to shape the protocol specification. We would be grateful for any help that contributes to this process. See <a href="https://github.com/alxdavids/draft-privacy-pass">the GitHub repository</a> for the current version of the document.</p><h1>Future avenues</h1><p>Finally, while we are actively working on a number of different pathways in the present, the future directions for the project are still open. We believe that there are many applications out there that we have not considered yet and we are excited to see where the protocol is used in the future. Here are some other ideas we have for novel applications and security properties that we think might be worth pursuing in the future.</p>
    <h2 id="publicly-verifiable-tokens">Publicly verifiable tokens</h2>
    <p>One of the disadvantages of using a VOPRF is that redemption tokens are only verifiable by the original issuing server. If we used an underlying primitive that allowed public verification of redemption tokens, then anyone could verify that the issuing server had issued the particular token. Such a protocol could be constructed on top of so-called blind signature schemes, such as <a href="https://en.wikipedia.org/wiki/Blind_signature#Blind_RSA_signatures">Blind RSA</a>. Unfortunately, there are performance and security concerns arising from the usage of blind signature schemes in a browser environment. Existing schemes (especially RSA-based variants) require cryptographic computations that are much heavier than the construction used in our VOPRF protocol.</p>
    <h2 id="post-quantum-voprf-alternatives">Post-quantum VOPRF alternatives</h2>
    <p>All known constructions of VOPRFs exist in pre-quantum settings, usually based on the hardness of well-known problems in group settings such as the <a href="https://en.wikipedia.org/wiki/Decisional_Diffie%E2%80%93Hellman_assumption">discrete-log assumption</a>. No constructions of VOPRFs are known to provide security against adversaries that can run <a href="https://blog.cloudflare.com/the-quantum-menace/">quantum computational algorithms</a>. This means that the Privacy Pass protocol is only believed to be secure against adversaries running on classical hardware.</p><p>Recent developments suggest that quantum computing may arrive <a href="https://www.nature.com/articles/s41586-019-1666-5">sooner than previously thought</a>. As such, we believe that investigating the possibility of <a href="https://blog.cloudflare.com/introducing-circl/">constructing practical post-quantum alternatives</a> for our current cryptographic toolkit is a task of great importance for ourselves and the wider community. In this case, devising performant post-quantum alternatives for VOPRF constructions would be an important theoretical advancement. Eventually this would lead to a Privacy Pass protocol that still provides privacy-preserving authorization in a post-quantum world.</p>
    <div>
      <h2>VOPRF security and larger ciphersuites</h2>
      <a href="#voprf-security-and-larger-ciphersuites">
        
      </a>
    </div>
    <p>We mentioned previously that VOPRFs (or simply OPRFs) are susceptible to small amounts of key leakage. Here we will give a brief description of the attacks themselves, along with further details on our plans for implementing higher-security ciphersuites to mitigate the leakage.</p><p>Specifically, malicious clients can interact with a VOPRF to create something known as a <a href="https://eprint.iacr.org/2010/215.pdf">q-Strong-Diffie-Hellman</a> (q-sDH) sample. Such samples are created in mathematical groups (usually in the elliptic curve setting). For any group there is a public element <code>g</code> that is central to all <a href="https://en.wikipedia.org/wiki/Diffie%E2%80%93Hellman_key_exchange">Diffie-Hellman</a> type operations, along with the server key <code>K</code>, which is usually just interpreted as a randomly generated number from this group. A q-sDH sample takes the form:</p>
            <pre><code>( g, g^K, g^(K^2), … , g^(K^q) )</code></pre>
            <p>and challenges the malicious adversary to produce a pair <code>(g^(1/(s+K)), s)</code> for some value <code>s</code> of its choosing. It is possible for a client in the VOPRF protocol to create a q-sDH sample by simply submitting the result of a previous VOPRF evaluation back to the server.</p><p>While this problem is believed to be hard, a number of past works show that it is somewhat easier than the size of the group suggests (for example, see <a href="https://eprint.iacr.org/2004/306">here</a> and <a href="https://www.iacr.org/archive/eurocrypt2006/40040001/40040001.pdf">here</a>). Concretely speaking, the bit security implied by the group can be reduced by up to log<sub>2</sub>(q) bits. While this is not immediately fatal, even for groups that should provide 128 bits of security, it can lead to a loss of security that means the setting is no longer future-proof. As a result, any group providing VOPRF functionality that is instantiated using an elliptic curve such as P-256 or Curve25519 provides weaker-than-advised security guarantees.</p><p>With this in mind, we have recently taken the decision to upgrade the ciphersuites that we recommend for OPRF usage to only those that provide &gt; 128 bits of security as standard. For example, Curve448 provides 192 bits of security. To launch an attack that reduced security below 128 bits would require making 2^(68) client OPRF queries. This is a significant barrier to entry for any attacker, and so we regard these ciphersuites as safe for instantiating the OPRF functionality.</p><p>In the near future, it will be necessary to upgrade the ciphersuites used in our support of the Privacy Pass browser extension to the recommendations made in the current VOPRF draft. In general, with a more iterative release process, we hope that the Privacy Pass implementation will be able to follow the draft standard more closely as it evolves during the standardization process.</p>
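<p>As a rough, back-of-the-envelope illustration of the log<sub>2</sub>(q) loss described above (this helper is purely illustrative and not part of any Privacy Pass implementation):</p>

```python
from math import log2

def remaining_security_bits(group_bits: int, queries: int) -> float:
    """Bit security left after `queries` client evaluations, assuming
    the worst-case log2(q)-bit loss described in the text."""
    return group_bits - log2(queries)

# Curve448 starts from ~192 bits, so even 2^64 queries leave ~128 bits.
print(remaining_security_bits(192, 2**64))  # -> 128.0
# P-256 starts from ~128 bits, so the same attack effort is far more damaging.
print(remaining_security_bits(128, 2**64))  # -> 64.0
```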
    <div>
      <h2>Get in touch!</h2>
      <a href="#get-in-touch">
        
      </a>
    </div>
    <p>You can now install v2.0 of the Privacy Pass extension in <a href="https://chrome.google.com/webstore/detail/privacy-pass/ajhmfdgkijocedmfjonnpjfojldioehi?hl=en">Chrome</a> or <a href="https://addons.mozilla.org/en-US/firefox/addon/privacy-pass/">Firefox</a>.</p><p>If you would like to help contribute to the development of this extension then you can do so on <a href="https://github.com/privacypass/challenge-bypass-extension">GitHub</a>. Are you a service provider that would like to integrate server-side support for the extension? Then we would be very interested in <a>hearing from you!</a></p><p>We will continue to work with the wider community in developing the standardization of the protocol; taking our motivation from the available applications that have been developed. We are always looking for new applications that can help to expand the Privacy Pass ecosystem beyond its current boundaries.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1eVrPb2t3hl5pVI97EtEu8/86eb63191b0cd299390f24162d51c54a/tales-from-the-crypto-team_2x--1-.png" />
            
            </figure><h1>Appendix</h1><p>Here are some extra details related to the topics that we covered above.</p>
    <div>
      <h2>A. Commitment format for key rotations</h2>
      <a href="#a-commitment-format-for-key-rotations">
        
      </a>
    </div>
    <p>Key commitments are necessary for the server to prove that it is acting honestly during the Privacy Pass protocol. The commitments that Privacy Pass uses for the v2.0 release have a slightly different format from those in the previous release.</p>
            <pre><code>"2.00": {
  "H": "BPivZ+bqrAZzBHZtROY72/E4UGVKAanNoHL1Oteg25oTPRUkrYeVcYGfkOr425NzWOTLRfmB8cgnlUfAeN2Ikmg=",
  "expiry": "2020-01-11T10:29:10.658286752Z",
  "sig": "MEUCIQDu9xeF1q89bQuIMtGm0g8KS2srOPv+4hHjMWNVzJ92kAIgYrDKNkg3GRs9Jq5bkE/4mM7/QZInAVvwmIyg6lQZGE0="
}</code></pre>
            <p>First, the version of the server key is <code>2.00</code>; the server must inform the client which version it intends to use in the response containing issued tokens. This is so that the client can always use the correct commitments when verifying the zero-knowledge proof that the server sends.</p><p>The value of the member <code>H</code> is the public key commitment to the secret key used by the server. This is a base64-encoded elliptic curve point of the form <code>H=kG</code>, where <code>G</code> is the fixed generator of the curve and <code>k</code> is the secret key of the server. Since the discrete-log problem is believed to be hard to solve, deriving <code>k</code> from <code>H</code> is believed to be difficult. The value of the member <code>expiry</code> is an expiry date for the commitment. The value of the member <code>sig</code> is an ECDSA signature evaluated using a long-term signing key associated with the server, over the values of <code>H</code> and <code>expiry</code>.</p><p>When a client retrieves the commitment, it checks that it hasn’t expired and that the signature verifies using the corresponding verification key that is embedded into the configuration of the extension. If these checks pass, it retrieves <code>H</code> and verifies the issuance response sent by the server. Previous versions of these commitments did not include signatures, but these signatures will be validated from v2.0 onwards.</p><p>When a server wants to rotate the key, it simply generates a new key <code>k2</code> and appends a new commitment to <code>k2</code> with a new identifier such as <code>2.01</code>. It can then use <code>k2</code> as the secret for the VOPRF operations that it needs to compute.</p>
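<p>The client-side checks described above can be sketched as follows. This is a hypothetical helper using only the Python standard library; the real extension also verifies the ECDSA signature <code>sig</code> against its embedded verification key, which is elided here:</p>

```python
import base64
import json
from datetime import datetime, timezone

def check_commitment(config_json: str, version: str, now: datetime) -> bytes:
    """Parse a key commitment, reject it if expired, and return the
    decoded point H. Signature verification is intentionally omitted."""
    commitment = json.loads(config_json)[version]
    # Drop the sub-second part so stdlib parsing handles the timestamp.
    expiry = datetime.fromisoformat(
        commitment["expiry"].split(".")[0]).replace(tzinfo=timezone.utc)
    if now >= expiry:
        raise ValueError("commitment has expired")
    # H is a base64-encoded elliptic curve point of the form H = kG.
    return base64.b64decode(commitment["H"])

config = json.dumps({"2.00": {
    "H": "aGVsbG8=",  # placeholder value, not a real curve point
    "expiry": "2020-01-11T10:29:10.658286752Z",
    "sig": "...",
}})
H = check_commitment(config, "2.00", datetime(2019, 10, 28, tzinfo=timezone.utc))
```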
    <div>
      <h2>B. Example Redemption API request</h2>
      <a href="#b-example-redemption-api-request">
        
      </a>
    </div>
    <p>The redemption API is available over HTTPS by sending POST requests to <a href="https://privacypass.cloudflare.com/api/redeem">https://privacypass.cloudflare.com/api/redeem</a>. Requests to this endpoint must specify Privacy Pass data using JSON-RPC 2.0 syntax in the body of the request. Let’s look at an example request:</p>
            <pre><code>{
  "jsonrpc": "2.0",
  "method": "redeem",
  "params": {
    "data": [
      "lB2ZEtHOK/2auhOySKoxqiHWXYaFlAIbuoHQnlFz57A=",
      "EoSetsN0eVt6ztbLcqp4Gt634aV73SDPzezpku6ky5w=",
      "eyJjdXJ2ZSI6InAyNTYiLCJoYXNoIjoic2hhMjU2IiwibWV0aG9kIjoic3d1In0="
    ],
    "bindings": [
      "string1",
      "string2"
    ],
    "compressed":"false"
  },
  "id": 1
}</code></pre>
            <p>In the above: <code>params.data[0]</code> is the client input data used to generate a token in the issuance phase; <code>params.data[1]</code> is the HMAC tag that the server uses to verify a redemption; and <code>params.data[2]</code> is a stringified, base64-encoded JSON object that specifies the hash-to-curve parameters used by the client. For example, the last element in the array corresponds to the object:</p>
            <pre><code>{
    curve: "p256",
    hash: "sha256",
    method: "swu",
}</code></pre>
            <p>This specifies that the client has used the curve P-256, the hash function SHA-256, and the SSWU method of hashing to the curve. This allows the server to verify the transaction with the correct ciphersuite. The client must bind the redemption request to some fixed information, which it stores as multiple strings in the array <code>params.bindings</code>. For example, it could send the Host header of the HTTP request, and the HTTP path that was used (this is what the Privacy Pass browser extension does). Finally, <code>params.compressed</code> is an optional boolean value (defaulting to false) that indicates whether the HMAC tag was computed over compressed or uncompressed point encodings.</p><p>Currently, the only supported ciphersuites are the one in the example above, and the same suite with <code>method</code> equal to <code>increment</code> for the hash-and-increment method of hashing to a curve. This is the original method used in v1.0 of Privacy Pass, and is supported for backwards compatibility only. See the <a href="https://privacypass.github.io/api-redeem">provided documentation</a> for more details.</p>
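<p>Putting the pieces together, a request body like the one above can be assembled programmatically. The sketch below is a hypothetical client-side helper: the token and tag values are placeholders, and the real extension derives them from the VOPRF issuance phase:</p>

```python
import base64
import json

def redemption_request(token: bytes, tag: bytes, host: str, path: str,
                       request_id: int = 1) -> str:
    """Build a JSON-RPC 2.0 'redeem' body in the shape described above."""
    h2c_params = {"curve": "p256", "hash": "sha256", "method": "swu"}
    return json.dumps({
        "jsonrpc": "2.0",
        "method": "redeem",
        "params": {
            "data": [
                base64.b64encode(token).decode(),
                base64.b64encode(tag).decode(),
                # stringified, base64-encoded hash-to-curve parameters
                base64.b64encode(json.dumps(h2c_params).encode()).decode(),
            ],
            # bind the redemption to the request it authorises
            "bindings": [host, path],
        },
        "id": request_id,
    })

body = redemption_request(b"token-data", b"hmac-tag", "example.com", "/index.html")
```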
    <div>
      <h3>Example response</h3>
      <a href="#example-response">
        
      </a>
    </div>
    <p>If a request is sent to the redemption API and it is successfully verified, then the following response will be returned.</p>
            <pre><code>{
  "jsonrpc": "2.0",
  "result": "success",
  "id": 1
}</code></pre>
            <p>When an error occurs, a response similar to the following is returned.</p>
            <pre><code>{
  "jsonrpc": "2.0",
  "error": {
    "message": &lt;error-message&gt;,
    "code": &lt;error-code&gt;,
  },
  "id": 1
}</code></pre>
            <p>The error codes that we return are specified as JSON-RPC 2.0 codes; we document the types of errors in the <a href="https://privacypass.github.io/api-redeem">API documentation</a>.</p> ]]></content:encoded>
            <category><![CDATA[Crypto Week]]></category>
            <category><![CDATA[Privacy Pass]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[Research]]></category>
            <category><![CDATA[Cryptography]]></category>
            <guid isPermaLink="false">3Zpj2uDmshq6ssT51N9alr</guid>
            <dc:creator>Alex Davidson</dc:creator>
        </item>
        <item>
            <title><![CDATA[Inside the Entropy]]></title>
            <link>https://blog.cloudflare.com/inside-the-entropy/</link>
            <pubDate>Mon, 17 Jun 2019 13:00:00 GMT</pubDate>
            <description><![CDATA[ Generating random outcomes is an essential part of everyday life; from lottery drawings and constructing competitions, to performing deep cryptographic computations.  ]]></description>
            <content:encoded><![CDATA[ <blockquote><p>Randomness, randomness everywhere;<br>Nor any verifiable entropy.</p></blockquote><p>Generating random outcomes is an essential part of everyday life; from lottery drawings and constructing competitions, to performing deep cryptographic computations. To use randomness, we must have some way to 'sample' it. This requires interpreting some natural phenomenon (such as a fair dice roll) as an event that generates some random output. From a computing perspective, we interpret random outputs as bytes that we can then use in algorithms (such as drawing a lottery) to achieve the functionality that we want.</p><p>Sampling randomness securely and efficiently is a critical component of all modern computing systems. For example, nearly all public-key cryptography relies on the fact that algorithms can be seeded with bytes generated from genuinely random outcomes.</p><p>In scientific experiments, a random sampling of results is necessary to ensure that data collection measurements are not skewed. Until now, generating random outputs in a way that we can verify that they are indeed random has been very difficult, typically involving a variety of statistical measurements.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/CEsiU1gZNLRG4VZ8pKenE/4fe5ae39fa2c63683bf7db2540ad266a/image9-2.png" />
            
            </figure><p>During Crypto week, Cloudflare is releasing a new <a href="/league-of-entropy">public randomness beacon</a> as part of the launch of the <a href="https://leagueofentropy.com">League of Entropy</a>. The League of Entropy is a network of beacons that produces <i>distributed</i>, <i>publicly verifiable</i> random outputs for use in applications where the nature of the randomness must be publicly audited. The underlying cryptographic architecture is based on the <a href="https://github.com/dedis/drand">drand project</a>.</p><p>Verifiable randomness is essential for ensuring trust in various institutional decision-making processes such as <a href="/league-of-entropy">elections and lotteries</a>. There are also cryptographic applications that require verifiable randomness. In the land of decentralized consensus mechanisms, the <a href="https://dfinity.org/static/dfinity-consensus-0325c35128c72b42df7dd30c22c41208.pdf">DFINITY approach</a> uses random seeds to decide the outcome of leadership elections. In this setting, it is essential that the randomness is publicly verifiable so that the outcome of the leadership election is trustworthy. Such a situation arises more generally in a <a href="https://en.wikipedia.org/wiki/Sortition">sortition</a>: a selection process in which leaders are chosen as a random individual (or subset of individuals) from a larger set.</p><p>In this blog post, we will give a technical overview of the cryptography used in the distributed randomness beacon, and how it can be used to generate publicly verifiable randomness. We believe that distributed randomness beacons have a huge amount of utility in realizing the <a href="/welcome-to-crypto-week-2019/">Internet of the Future</a>, where we will be able to rely on distributed, decentralized solutions to problems of a global scale.</p>
    <div>
      <h2>Randomness &amp; entropy</h2>
      <a href="#randomness-entropy">
        
      </a>
    </div>
    <p>A source of randomness is measured in terms of the amount of <i>entropy</i> it provides. Think of the entropy provided by a random output as a score indicating how “random” the output actually is. The notion of information entropy was concretised by the famous scientist Claude Shannon in his paper <a href="https://en.wikipedia.org/wiki/A_Mathematical_Theory_of_Communication">A Mathematical Theory of Communication</a>, and is sometimes known as <a href="https://en.wikipedia.org/wiki/Entropy_(information_theory)"><i>Shannon Entropy</i></a>.</p><p>A common way to think about random outputs is as a sequence of bits derived from some random outcome. For the sake of argument, consider a fair 8-sided dice roll with sides marked 0-7. The outputs of the dice can be written as the bit-strings <code>000,001,010,...,111</code>. Since the dice is fair, each of these outputs is equally likely. This means that each of the bits is equally likely to be <code>0</code> or <code>1</code>. Consequently, interpreting the outcome of the dice roll as a bit-string yields randomness with <code>3</code> bits of entropy.</p><p>More generally, if a perfect source of randomness guarantees strings with <code>n</code> bits of entropy, then it generates bit-strings where each bit is equally likely to be <code>0</code> or <code>1</code>. This allows us to predict the value of any bit with maximum probability <code>1/2</code>. If the outputs are sampled from such a perfect source, we consider them <i>uniformly distributed</i>. If we sample the outputs from a source where some bit is predictable with higher probability, then the string has fewer than <code>n</code> bits of entropy. To go back to the dice analogy, rolling a 6-sided dice provides less than <code>3</code> bits of entropy because the possible outputs are <code>000,001,010,011,100,101</code>, and so the first two bits are more likely to be set to <code>0</code> than to <code>1</code>.</p><p>It is possible to mix entropy sources using specifically designed mixing functions to obtain something with even greater entropy. The maximum resulting entropy is the sum of the entropy provided by each of the input sources.</p>
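<p>Shannon’s formula is simple enough to check the dice numbers above directly. A minimal sketch (the probability lists are the only inputs; nothing here is specific to any particular randomness source):</p>

```python
from math import log2

def shannon_entropy(probabilities):
    """H = -sum(p * log2(p)) over the outcomes of a random source."""
    return -sum(p * log2(p) for p in probabilities if p > 0)

# A fair 8-sided dice: eight equally likely 3-bit outputs -> 3 bits.
print(shannon_entropy([1/8] * 8))  # 3.0
# A fair 6-sided dice: only ~2.585 bits, as the text explains.
print(shannon_entropy([1/6] * 6))
```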
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4LcFWOh6MsFEqmYrGB6PXa/f87eb05da963db50a5f8a0a24d260d76/combined-entropy-_2x.png" />
            
            </figure>
    <div>
      <h4>Sampling randomness</h4>
      <a href="#sampling-randomness">
        
      </a>
    </div>
    <p>To sample randomness, let’s first identify the appropriate sources. There are many natural phenomena that one can use:</p><ul><li><p>atmospheric noise;</p></li><li><p>radioactive decay;</p></li><li><p>turbulent motion; like that generated in Cloudflare’s wall of <a href="/lavarand-in-production-the-nitty-gritty-technical-details/">lava lamps(!)</a>.</p></li></ul>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2nJtxum8JhhrkxOyplTTka/e96eaea45650c205a68cda95c4ee9b8a/pasted-image-0--1-.png" />
            
    </figure><p>Unfortunately, these phenomena require very specific measuring tools, which are prohibitively expensive to install in mainstream consumer electronics. As such, most personal computing devices use external usage characteristics to seed generator functions that output randomness as and when the system requires it. These characteristics include keyboard typing patterns, typing speed, and mouse movements – since such usage patterns are driven by the human user, it is assumed they provide sufficient entropy as a randomness source. On Linux, for example, the kernel pools entropy from events like these to feed its random number generator; modern CPUs additionally provide dedicated hardware generators, such as Intel's <a href="https://en.wikipedia.org/wiki/RdRand">RDRAND</a> instruction.</p><p>Naturally, it is difficult to tell whether a system is <i>actually</i> returning random outputs by only inspecting the outputs. There are statistical tests that detect whether a series of outputs is not uniformly distributed, but these tests cannot ensure that they are unpredictable. This means that it is hard to detect if a given system has had its randomness generation compromised.</p>
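<p>To illustrate just how limited such statistical tests are, consider a toy monobit frequency test (a real battery, such as the NIST SP 800-22 suite, runs many far more sophisticated checks, and still cannot prove unpredictability):</p>

```python
def monobit_test(bits: str, tolerance: float = 0.01) -> bool:
    """Return True if the fraction of 1s is within `tolerance` of 1/2.

    Passing this test is necessary but nowhere near sufficient: a
    completely predictable sequence like '0101...' passes it easily.
    """
    ones = bits.count("1")
    return abs(ones / len(bits) - 0.5) <= tolerance

print(monobit_test("01" * 5000))  # True, yet completely predictable
print(monobit_test("1" * 10000))  # False: heavily biased
```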
    <div>
      <h2>Distributed randomness</h2>
      <a href="#distributed-randomness">
        
      </a>
    </div>
    <p>It’s clear we need alternative methods for sampling randomness so that we can provide guarantees that trusted mechanisms, such as elections and lotteries, take place in secure tamper-resistant environments. The <a href="https://github.com/dedis/drand/">drand</a> project was started by researchers at <a href="https://www.epfl.ch/about/">EPFL</a> to address this problem. The drand charter is to provide an easily configurable randomness beacon running at geographically distributed locations around the world. The intention is for each of these beacons to generate portions of randomness that can be combined into a single random string that is publicly verifiable.</p><p>This functionality is achieved using <i>threshold cryptography</i>. Threshold cryptography seeks to derive solutions for standard cryptographic problems by combining information from multiple distributed entities. The notion of the threshold means that if there are <code>n</code> entities, then any <code>t</code> of the entities can combine to construct some cryptographic object (like a ciphertext, or a digital signature). These threshold systems are characterised by a setup phase, where each entity learns a <i>share</i> of data. They will later use this share of data to create a combined cryptographic object with a subset of the other entities.</p>
    <div>
      <h3>Threshold randomness</h3>
      <a href="#threshold-randomness">
        
      </a>
    </div>
    <p>In the case of a distributed randomness protocol, there are <code>n</code> <i>randomness beacons</i> that broadcast random values sampled from their initial data share, and the current state of the system. This data share is created during a trusted setup phase, and also takes in some internal random value that is generated by the beacon itself.</p><p>When a user needs randomness, they send requests to some number <code>t</code> of beacons, where <code>t &lt; n</code>, and combine these values using a specific procedure. The result is a random value that can be verified and used for public auditing mechanisms.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7BeauvWICHvAH9uX1tCqaY/47a57124c026459500347cf14757f818/pasted-image-0--2-.png" />
            
            </figure><p>Consider what happens if some proportion <code>c/n</code> of the randomness beacons are <i>corrupted</i> at any one time. The nature of a threshold cryptographic system is that, as long as <code>c &lt; t</code>, then the end result still remains random.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5NM7oB6KqjsC5ys55cVEGP/9165d5ad0c235240b6e455cf08e62f1e/pasted-image-0--3-.png" />
            
            </figure><p>If <code>c</code> reaches <code>t</code>, then the random values produced by the system become predictable and the notion of randomness is lost. In summary, the distributed randomness procedure provides verifiably random outputs with sufficient entropy only when <code>c &lt; t</code>.</p><p>By distributing the beacons independently of each other and in geographically disparate locations, the probability that <code>t</code> locations can be corrupted at any one time is extremely low. The minimum choice of <code>t</code> is equal to <code>n/2</code>.</p>
    <div>
      <h2>How does it actually work?</h2>
      <a href="#how-does-it-actually-work">
        
      </a>
    </div>
    <p>What we described above sounds a bit like magic<sup>tm</sup>. Even if <code>c = t-1</code>, we can still ensure that the output is indeed random and unpredictable! To make it clearer how this works, let’s dive a bit deeper into the underlying cryptography.</p><p>Two core components of drand are: a <i>distributed key generation</i> (DKG) procedure, and a <i>threshold signature scheme</i>. These core components are used in the setup and randomness generation procedures, respectively. In just a bit, we’ll outline how drand uses these components (without navigating too deeply into the onerous mathematics).</p>
    <div>
      <h3>Distributed key generation</h3>
      <a href="#distributed-key-generation">
        
      </a>
    </div>
    <p>At a high-level, the DKG procedure creates a distributed secret key that is formed of <code>n</code> different key pairs <code>(vk_i, sk_i)</code>, each one being held by the entity <code>i</code> in the system. These key pairs will eventually be used to instantiate a <code>(t,n)</code>-threshold signature scheme (we will discuss this more later). In essence, <code>t</code> of the entities will be able to combine to construct a valid signature on any message.</p><p>To think about how this might work, consider a distributed key generation scheme that creates <code>n</code> distributed keys that are going to be represented by pizzas. Each pizza is split into <code>n</code> slices and one slice from each is secretly passed to one of the participants. Each entity receives one slice from each of the different pizzas (<code>n</code> in total) and combines these slices to form their own pizza. Each combined pizza is unique and secret for each entity, representing their own key pair.</p>
    <div>
      <h4>Mathematical intuition</h4>
      <a href="#mathematical-intuition">
        
      </a>
    </div>
    <p>Mathematically speaking, and rather than thinking about pizzas, we can describe the underlying phenomenon by reconstructing lines or curves on a graph. We can take two coordinates on a <code>(x,y)</code> plane and immediately (and uniquely) define a line with the equation <code>y = ax+b</code>. For example, the points <code>(2,3)</code> and <code>(4,7)</code> immediately define a line with gradient <code>(7-3)/(4-2) = 2</code>, so <code>a=2</code>. You can then derive the <code>b</code> coefficient as <code>-1</code> by evaluating either of the coordinates in the equation <code>y = 2x + b</code>. By <i>uniquely</i>, we mean that only the line <code>y = 2x - 1</code> satisfies the two coordinates that are chosen; no other choice of <code>a</code> or <code>b</code> fits.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2htl50xQeT4euGS3VR6rxf/de0f4b27b223bd88f4fc90ebd204367e/line2-2-.png" />
            
            </figure><p>The curve <code>ax+b</code> has degree <code>1</code>, where the degree of the equation refers to the highest order multiplication of unknown variables in the equation. That might seem like mathematical jargon, but the equation above contains only one term <code>ax</code> that depends on the unknown variable <code>x</code>. In this specific term, the <i>exponent</i> (or <i>power</i>) of <code>x</code> is <code>1</code>, and so the degree of the entire equation is also <code>1</code>.</p><p>Likewise, by taking three coordinate pairs in the same plane, we uniquely define a quadratic curve with an equation of the form <code>y = ax^2 + bx + c</code>, with the coefficients <code>a,b,c</code> uniquely defined by the chosen coordinates. The process is a bit more involved than the above case, but it essentially starts in the same way using three coordinate pairs <code>(x_1, y_1)</code>, <code>(x_2, y_2)</code> and <code>(x_3, y_3)</code>.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/24FGvRaMCx5lUA2umjk6wi/4e76792bba003072606d6196c0febdc2/line3.png" />
            
            </figure><p>By a quadratic curve, we mean a curve of degree <code>2</code>. We can see that this curve has degree <code>2</code> because it contains two terms <code>ax^2</code> and <code>bx</code> that depend on <code>x</code>. The highest order term is the <code>ax^2</code> term with an exponent of <code>2</code>, so this curve has degree <code>2</code> (ignore the term <code>bx</code> which has a smaller power).</p><p>What we are ultimately trying to show is that this approach scales for curves of degree <code>n</code> (of the form <code>y = a_n x^n + … + a_1 x + a_0</code>). So, if we take <code>n+1</code> coordinates on the <code>(x,y)</code> plane, then we can uniquely reconstruct the curve of this form entirely. Such degree <code>n</code> equations are also known as <i>polynomials</i> of degree <code>n</code>.</p><p>In order to generalise the approach to arbitrary degrees, we need some kind of formula. This formula should take <code>n+1</code> pairs of coordinates and return a polynomial of degree <code>n</code>. Fortunately, such a formula already exists: it is known as the <a href="https://en.wikipedia.org/wiki/Lagrange_polynomial#Definition"><i>Lagrange interpolation polynomial</i></a>. Using the formula in the link, we can reconstruct any degree-<code>n</code> polynomial using <code>n+1</code> unique pairs of coordinates.</p><p>Going back to pizzas temporarily, it will become clear in the next section how this Lagrange interpolation procedure essentially describes the dissemination of one slice (corresponding to <code>(x,y)</code> coordinates) taken from a single pizza (the entire degree-<code>n-1</code> polynomial) among <code>n</code> participants. Running this procedure <code>n</code> times in parallel allows each entity to construct their entire pizza (or the eventual key pair).</p>
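<p>The Lagrange formula can be written out in a few lines. The sketch below is a generic implementation over the rationals (the real cryptographic arithmetic happens in a finite field), and reconstructs the line <code>y = 2x - 1</code> from the two points used earlier:</p>

```python
from fractions import Fraction

def lagrange_interpolate(points, x):
    """Evaluate the unique degree-(n-1) polynomial through `points` at `x`."""
    total = Fraction(0)
    for i, (x_i, y_i) in enumerate(points):
        term = Fraction(y_i)
        for j, (x_j, _) in enumerate(points):
            if i != j:
                term *= Fraction(x - x_j, x_i - x_j)
        total += term
    return total

# The two points from the text define y = 2x - 1 ...
print(lagrange_interpolate([(2, 3), (4, 7)], 10))         # 19
# ... and three points define a unique quadratic, e.g. y = x^2:
print(lagrange_interpolate([(1, 1), (2, 4), (3, 9)], 5))  # 25
```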
    <div>
      <h4>Back to key generation</h4>
      <a href="#back-to-key-generation">
        
      </a>
    </div>
    <p>Intuitively, in the DKG procedure we want to distribute <code>n</code> key pairs among <code>n</code> participants. This effectively means running <code>n</code> parallel instances of a <code>t</code>-out-of-<code>n</code> <a href="https://en.wikipedia.org/wiki/Shamir's_Secret_Sharing">Shamir Secret Sharing</a> scheme. This secret sharing scheme is built entirely upon the polynomial interpolation technique that we described above.</p><p>In a single instance, we take the secret key to be the first coefficient of a polynomial of degree <code>t-1</code>, and the public key is a published value that depends on this secret key but does not reveal the actual coefficient. Think of RSA, where we have a number <code>N = pq</code> for secret large prime numbers <code>p,q</code>, where <code>N</code> is public but does not reveal the actual factorisation. Notice that if the polynomial is reconstructed using the interpolation technique above, then we immediately learn the secret key, because the first coefficient will be made explicit.</p><p>Each secret sharing scheme publishes shares, where each share is a different evaluation of the polynomial (dependent on the entity <code>i</code> receiving the key share). These evaluations are essentially coordinates on the <code>(x,y)</code> plane.</p><p>By running <code>n</code> parallel instances of the secret sharing scheme, each entity receives <code>n</code> shares and then combines all of these to form their overall key pair <code>(vk_i, sk_i)</code>.</p><p>The DKG procedure uses <code>n</code> parallel secret sharing procedures along with <a href="https://link.springer.com/chapter/10.1007/3-540-46766-1_9">Pedersen commitments</a> to distribute the key pairs. We explain in the next section how this procedure is used to provision the randomness beacons.</p><p>In summary, it is important to remember that <b>each party</b> in the DKG protocol generates a random secret key from the <code>n</code> shares that they receive, and computes the corresponding public key from this. We will now explain how each entity uses this key pair to perform the cryptographic procedure that is used by the drand protocol.</p>
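<p>A single instance of the secret sharing scheme described above can be sketched in miniature. This is a toy <code>t</code>-out-of-<code>n</code> Shamir split over a small prime field; the real DKG runs <code>n</code> of these in parallel, over a much larger field, with Pedersen commitments to keep the dealers honest:</p>

```python
import random

P = 2**61 - 1  # a Mersenne prime standing in for the real group order

def split_secret(secret, t, n):
    """Hide `secret` as the constant term of a random degree-(t-1)
    polynomial and hand out n evaluations of it as shares."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the constant term."""
    secret = 0
    for i, (x_i, y_i) in enumerate(shares):
        num, den = 1, 1
        for j, (x_j, _) in enumerate(shares):
            if i != j:
                num = num * (-x_j) % P
                den = den * (x_i - x_j) % P
        secret = (secret + y_i * num * pow(den, -1, P)) % P
    return secret

shares = split_secret(123456789, t=3, n=5)
assert reconstruct(shares[:3]) == 123456789  # any 3 shares suffice
assert reconstruct(shares[2:]) == 123456789
```

Fewer than <code>t</code> shares reveal nothing about the constant term, which is exactly the property the beacons rely on.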
    <div>
      <h3>Threshold signature scheme</h3>
      <a href="#threshold-signature-scheme">
        
      </a>
    </div>
    <p>Remember: a standard signature scheme considers a key-pair <code>(vk,sk)</code>, where <code>vk</code> is a public verification key and <code>sk</code> is a private signing key. So, messages <code>m</code> signed with <code>sk</code> can be verified with <code>vk</code>. The security of the scheme ensures that it is difficult for anybody who does not hold <code>sk</code> to compute a valid signature for any message <code>m</code>.</p><p>A <i>threshold signature scheme</i> allows a set of users holding distributed key-pairs <code>(vk_i,sk_i)</code> to compute intermediate signatures <code>u_i</code> on a given message <code>m</code>.</p><p>Given knowledge of some number <code>t</code> of intermediate signatures <code>u_i</code>, a valid signature <code>u</code> on the message <code>m</code> can be reconstructed under the combined secret key <code>sk</code>. The public key <code>vk</code> can also be inferred using knowledge of the public keys <code>vk_i</code>, and then this public key can be used to verify <code>u</code>.</p><p>Again, think back to reconstructing the degree-<code>t-1</code> curves on graphs with <code>t</code> known coordinates. In this case, the coordinates correspond to the intermediate signatures <code>u_i</code>, and the signature <code>u</code> corresponds to the entire curve. For the actual signature schemes, the mathematics are much more involved than in the DKG procedure, but the principle is the same.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2SpuFg5fE6Trs3KhVfGeQq/829504847821500097e2b7474a83037d/threshold-sig-3-.png" />
            
            </figure>
    <div>
      <h3>drand protocol</h3>
      <a href="#drand-protocol">
        
      </a>
    </div>
    <p>The <code>n</code> beacons that will take part in the drand project are identified. In the trusted setup phase, the DKG protocol from above is run, and each beacon effectively creates a key pair <code>(vk_i, sk_i)</code> for a threshold signature scheme. In other words, this key pair will be able to generate intermediate signatures that can be combined to create an entire signature for the system.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4ZkOUnhX8QiVnhPou3nC9/c7657ac1ec50d7156fee239f69e57e28/DKG-6-.png" />
            
            </figure><p>For each round (occurring once a minute, for example), the beacons agree on a signature <code>u</code> evaluated over a message containing the previous round’s signature and the current round’s number. This signature <code>u</code> is the result of combining the intermediate signatures <code>u_i</code> over the same message. Each intermediate signature <code>u_i</code> is created by each of the beacons using their secret <code>sk_i</code>.</p><p>Once this aggregation completes, each beacon displays the signature for the current round, along with the previous signature and round number. This allows any client to publicly verify the signature over this data and confirm that the beacons aggregated honestly. This provides a chain of verifiable signatures, extending back to the first round of output. In addition, there are threshold signature schemes that output signatures that are indistinguishable from random sequences of bytes. Therefore, these signatures can be used directly as verifiable randomness for the applications we discussed previously.</p>
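<p>The chaining of rounds can be sketched as follows. One loud caveat: for brevity we use a keyed hash as the stand-in signature, which a verifier checks by re-computing with the same key; drand instead uses threshold BLS signatures, which anyone can verify with the group's public key.</p>

```python
import hashlib
import hmac

KEY = b"group-secret"  # stand-in for the threshold signing key (illustrative)

def sign(prev_sig, round_no):
    """'Sign' a message made of the previous signature and the round number."""
    msg = prev_sig + round_no.to_bytes(8, "big")
    return hmac.new(KEY, msg, hashlib.sha256).digest()

def build_chain(rounds):
    sigs = [b"\x00" * 32]  # fixed genesis value for round 0
    for r in range(1, rounds + 1):
        sigs.append(sign(sigs[-1], r))
    return sigs

def verify_chain(sigs):
    """Walk the chain back to the first round, checking every link."""
    return all(hmac.compare_digest(sign(sigs[r - 1], r), sigs[r])
               for r in range(1, len(sigs)))

chain = build_chain(5)
assert verify_chain(chain)

chain[3] = b"\xff" * 32  # tampering with any round breaks the chain
assert not verify_chain(chain)
```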
    <div>
      <h3>What does drand use?</h3>
      <a href="#what-does-drand-use">
        
      </a>
    </div>
    <p>To instantiate the required threshold signature scheme, drand uses the <code>(t,n)</code>-<a href="https://www.iacr.org/archive/pkc2003/25670031/25670031.pdf">BLS signature scheme</a> of Boneh, Lynn and Shacham. In particular, we can instantiate this scheme in the elliptic curve setting using <a href="https://github.com/dfinity/bn">Barreto-Naehrig</a> curves. Moreover, the BLS signature scheme outputs sufficiently large signatures that are randomly distributed, giving them enough entropy to be sources of randomness. Specifically, the signatures are randomly distributed over 64 bytes.</p><p>BLS signatures use a specific form of mathematical operation known as a <i>cryptographic pairing</i>. Pairings can be computed in certain elliptic curves, including the Barreto-Naehrig curve configurations. A detailed description of pairing operations is beyond the scope of this blog post, though it is important to remember that these operations are integral to how BLS signatures work.</p><p>Concretely speaking, all drand cryptographic operations are carried out using a library built on top of Cloudflare's implementation of the <a href="https://github.com/cloudflare/bn256/tree/lattices">bn256 curve</a>. The Pedersen DKG protocol follows the design of <a href="https://link.springer.com/article/10.1007/s00145-006-0347-3">Gennaro et al.</a>.</p>
    <div>
      <h3>How does it work?</h3>
      <a href="#how-does-it-work">
        
      </a>
    </div>
    <p>The randomness beacons are synchronised in rounds. At each round, a beacon produces a new signature <code>u_i</code> using its private key <code>sk_i</code> on the previous signature generated and the round ID. These signatures are usually broadcast on the URL <code>drand.&lt;host&gt;.com/api/public</code>. These signatures can be verified using the keys <code>vk_i</code> over the same data that was signed. Signing the previous signature and the current round identifier establishes a chain of trust for the randomness beacon that can be traced back to the original signature value.</p><p>The randomness can be retrieved by combining the signatures from each of the beacons using the threshold property of the scheme. This reconstruction of the signature <code>u</code> from each intermediate signature <code>u_i</code> is done internally by the League of Entropy nodes. Each beacon broadcasts the entire signature <code>u</code>, which can be accessed over the HTTP endpoint above.</p>
    <div>
      <h2>The drand beacon</h2>
      <a href="#the-drand-beacon">
        
      </a>
    </div>
    <p>As we mentioned at the start of this blog post, Cloudflare has launched our <a href="/league-of-entropy">distributed randomness beacon</a>. This beacon is part of a network of beacons from different institutions around the globe that form the <a href="https://leagueofentropy.com">League of  Entropy</a>.</p><p>The Cloudflare beacon uses <a href="/lavarand-in-production-the-nitty-gritty-technical-details/">LavaRand</a> as its internal source of randomness for the DKG. Other League of Entropy drand beacons have their own sources of randomness.</p>
    <div>
      <h3>Give me randomness!</h3>
      <a href="#give-me-randomness">
        
      </a>
    </div>
    <blockquote><p>The below API endpoints are obsolete. Please see <a href="https://drand.love">https://drand.love</a> for the most up-to-date documentation.</p></blockquote><p>The drand beacon allows you to retrieve the latest random value from the League of Entropy using a simple HTTP request:</p>
            <pre><code>curl https://drand.cloudflare.com/api/public</code></pre>
            <p>The response is a JSON blob of the form:</p>
            <pre><code>{
    "round": 7,
    "previous": &lt;hex-encoded-previous-signature&gt;,
    "randomness": {
        "gid": 21,
        "point": &lt;hex-encoded-new-signature&gt;
    }
}</code></pre>
            <p>where <code>randomness.point</code> is the signature <code>u</code> aggregated among the entire set of beacons.</p><p>The signature is computed over a message comprising the signature of the previous round, <code>previous</code>, and the current round number, <code>round</code>, using the aggregated secret key of the system. This signature can be verified using the entire public key <code>vk</code> of the Cloudflare beacon, learned using another HTTP request:</p>
            <pre><code>curl https://drand.cloudflare.com/api/public</code></pre>
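<p>Returning to the randomness response shown above, a minimal consumer of the JSON blob might look like this (the hex strings below are made-up placeholders, not real signatures):</p>

```python
import json

# Shape of a response from the (now obsolete) /api/public endpoint:
blob = '''{
    "round": 7,
    "previous": "a1b2c3",
    "randomness": {"gid": 21, "point": "d4e5f6"}
}'''

resp = json.loads(blob)
randomness = bytes.fromhex(resp["randomness"]["point"])
assert resp["round"] == 7
assert randomness == b"\xd4\xe5\xf6"
```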
            <p>There are eight collaborators in the League of Entropy. You can learn the current round of randomness (or the system’s public key) by querying these beacons on the HTTP endpoints listed above.</p><ul><li><p><a href="https://drand.cloudflare.com:443">https://drand.cloudflare.com:443</a></p></li><li><p><a href="https://random.uchile.cl:8080">https://random.uchile.cl:8080</a></p></li><li><p><a href="https://drand.cothority.net:7003">https://drand.cothority.net:7003</a></p></li><li><p><a href="https://drand.kudelskisecurity.com:443">https://drand.kudelskisecurity.com:443</a></p></li><li><p><a href="https://drand.lbarman.ch:443">https://drand.lbarman.ch:443</a></p></li><li><p><a href="https://drand.nikkolasg.xyz:8888">https://drand.nikkolasg.xyz:8888</a></p></li><li><p><a href="https://drand.protocol.ai:8080">https://drand.protocol.ai:8080</a></p></li><li><p><a href="https://drand.zerobyte.io:8888">https://drand.zerobyte.io:8888</a></p></li></ul>
    <div>
      <h2>Randomness &amp; the future</h2>
      <a href="#randomness-the-future">
        
      </a>
    </div>
    <p>Cloudflare will continue to take an active role in the drand project, both as a contributor and by running a randomness beacon with the League of Entropy. The League of Entropy is a worldwide joint effort of individuals and academic institutions. We at Cloudflare believe it can help us realize the mission of helping Build a Better Internet. For more information on Cloudflare's participation in the League of Entropy, visit <a href="https://leagueofentropy.com">https://leagueofentropy.com</a> or read <a href="/league-of-entropy">Dina's blog post</a>.</p><p>Cloudflare would like to thank all of their collaborators in the League of Entropy; from EPFL, UChile, Kudelski Security and Protocol Labs. This work would not have been possible without the work of those who contributed to the <a href="https://github.com/dedis/drand">open-source drand project</a>. We would also like to thank and appreciate the work of Gabbi Fisher, Brendan McMillion, and Mahrud Sayrafi in launching the Cloudflare randomness beacon.</p> ]]></content:encoded>
            <category><![CDATA[Crypto Week]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[Cryptography]]></category>
            <category><![CDATA[Entropy]]></category>
            <category><![CDATA[Programming]]></category>
            <guid isPermaLink="false">6yDv8PkyP9X3dUvYYh3MHZ</guid>
            <dc:creator>Alex Davidson</dc:creator>
        </item>
        <item>
            <title><![CDATA[Preventing Request Loops Using CDN-Loop]]></title>
            <link>https://blog.cloudflare.com/preventing-request-loops-using-cdn-loop/</link>
            <pubDate>Wed, 20 Mar 2019 13:00:00 GMT</pubDate>
            <description><![CDATA[ HTTP requests originate with a client and end at a web server that processes the request and returns a response. Such requests pass through multiple proxies before arriving at the requested resource.  ]]></description>
            <content:encoded><![CDATA[ <p>HTTP requests typically originate with a client, and end at a web server that processes the request and returns some response. Such requests may pass through multiple proxies before they arrive at the requested resource. If one of these proxies is configured badly (for instance, forwarding requests back to a proxy that has already processed them), then the request may be caught in a loop.</p><p>Request loops, accidental or malicious, can consume resources and degrade users' Internet performance. Such loops can even be <a href="http://www.icir.org/vern/papers/cdn-loops.NDSS16.pdf">observed at the CDN-level</a>, and a wide-scale attack of this kind would affect all customers of that CDN. It's been over <a href="/preventing-malicious-request-loops/">three years</a> since Cloudflare acknowledged the power of such non-compliant or malicious request loops. The proposed solution in that blog post was quickly found to be flawed and loop protection has since been implemented in an <i>ad-hoc</i> manner that is specific to each individual provider. This lack of cohesion and co-operation has led to a fragmented set of protection mechanisms.</p><p>We are finally happy to report that a recent collaboration between multiple CDN providers (including Cloudflare) has led to a new <a href="https://datatracker.ietf.org/doc/draft-ietf-httpbis-cdn-loop/">mechanism for loop protection</a>. This now runs at the Cloudflare edge and is compatible with other CDNs, allowing us to provide protection against loops. The loop protection mechanism is currently a draft item being worked on by the HTTPbis working group and will be published as an RFC on the Standards Track in the near future.</p>
    <div>
      <h3>The original problem</h3>
      <a href="#the-original-problem">
        
      </a>
    </div>
    <p>The original problem was summarised really nicely in the <a href="/preventing-malicious-request-loops/">previous blog post</a>, but I will summarise it again here (with some diagrams that are suspiciously similar to the original post, sorry Nick!).</p><p>As you may well know, Cloudflare is a reverse proxy. When requests are made for Cloudflare websites, the Cloudflare edge returns origin content via a cached response or by making requests to the origin web server. Some Cloudflare customers choose to use different CDN providers for different facets of functionality. This means that requests go through multiple proxy services before content is finally received from the origin.</p><p>This is where things can sometimes get messy, either through misconfiguration or deliberately. It's possible to configure multiple proxy services for a given origin in a <i>loop</i>. For example, an origin website could configure proxy <b>A</b> so that proxy <b>B</b> is the origin, and <b>B</b> such that <b>A</b> is the origin.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/59Hrtvc3DZHWSzpQUqeKvd/cd385c46d8d1bde51b162c561dd9585f/Loop1.png" />
            
            </figure><p>Then any request sent to the origin would get caught in a loop between the two Proxies (see above). If such a loop goes undetected, then this can quickly eat the computing resources of the two proxies, especially if the request requires a lot of processing at the edge. In these cases, it is conceivable that such an attack could lead to a DoS on one or both of the proxy services. Indeed, a <a href="http://www.icir.org/vern/papers/cdn-loops.NDSS16.pdf">research paper from NDSS 2016</a> showed that such an attack was practical when leveraging multiple CDN providers (including Cloudflare) in the manner shown above.</p>
    <div>
      <h4>The original solution</h4>
      <a href="#the-original-solution">
        
      </a>
    </div>
    <p>The previous blog post advocated using the Via header on HTTP requests to log the proxy services that had previously processed any previous request. This header is specified in <a href="https://tools.ietf.org/html/rfc7230#section-5.7.1">RFC7230</a> and is purpose-built for providing request loop detection. The idea was that CDN providers would log each time a request came through their edge architecture in the Via header. Any request that arrived at the edge would be checked to see if it previously passed through the same CDN using the value of the Via header. If the header indicated that it had passed through before, then the request could be dropped before any serious processing had taken place.</p><p>Nick’s previous <a href="/preventing-malicious-request-loops/">post</a> finished with a call-to-arms for all services proxying requests to be compliant with the standard.</p>
    <div>
      <h3>The problem with Via</h3>
      <a href="#the-problem-with-via">
        
      </a>
    </div>
    <p>In theory, the Via header would solve the loop protection problem. In practice, it was quickly discovered there were issues with the implementation of Via that meant that using the header was infeasible. Adding the header to outbound requests from the Cloudflare edge had grave performance consequences for a large number of Cloudflare customers.</p><p>Such issues arose from legacy usage of the Via header that conflicts with using it for tracking loop detection. For instance, around 8% of Cloudflare enterprise customers experienced issues where gzip failed to compress requests containing the Via header. This meant that transported requests were much larger and led to wide-scale problems for their web servers. Such performance degradation is even <i>expected</i> in some server implementations. For example, NGINX actively chooses not to compress proxied requests:</p><blockquote><p>By default, NGINX does not compress responses to proxied requests (requests that come from the proxy server). The fact that a request comes from a proxy server is determined by the presence of the Via header field in the request.</p></blockquote><p>While Cloudflare takes security very seriously, such performance issues were unacceptable. The difficult decision was taken to switch off loop protection based on the contents of the Via header shortly after it was implemented.</p><p>Since then, Cloudflare has implemented loop protection based on the CF-Connecting-IP and X-Forwarded-For headers. In essence, when a request is processed by the edge these headers are added to the request before it is sent to the origin. Then, any request that is processed by the edge including either of these headers is dropped. While this is enough to avoid malicious loop attacks, there are some disadvantages with this approach.</p><p>Firstly, this approach naturally means that there is no unified way of approaching loop protection across the different CDN providers. 
Without a standardised method, the possibility of mistakes in implementations that could <a href="https://www.cloudflare.com/learning/cdn/common-cdn-issues/">cause problems</a> in the future rises.</p><p>Secondly, there are some valid reasons that Cloudflare customers may require requests to loop through the edge more than once. While such reasons are usually quite esoteric, customers with such a need had to manually modify such requests so that they did not fall foul of the loop protection mechanism. For example, workflows that include usage of Cloudflare Workers can send requests through the edge more than once via subrequests for returning custom content to clients. The headers that are currently used mean that requests are dropped as soon as a request loops once. This can add noticeable friction to using CDN services and it would be preferable to have a more granular solution to loop detection.</p>
    <div>
      <h3>A new solution</h3>
      <a href="#a-new-solution">
        
      </a>
    </div>
    <p>Collaborators at Cloudflare, Fastly and Akamai set about defining a unified solution to the loop protection problem for CDNs.</p><p>The output was <a href="https://tools.ietf.org/html/draft-ietf-httpbis-cdn-loop-02">this draft</a>, which has recently been adopted by the HTTPbis working group on the Standards Track. Once the document has been approved by the IESG, it will join the RFC series.</p><p>The CDN-Loop header sets out a syntax that allows individual CDNs to mark requests as having been processed by their edge. This header should be added to any request that passes through the CDN architecture towards some separate endpoint. The <a href="https://tools.ietf.org/html/draft-ietf-httpbis-cdn-loop-02">current draft</a> defines the syntax of the header to be the following:</p>
            <pre><code>CDN-Loop  = #cdn-info
cdn-info  = cdn-id *( OWS ";" OWS parameter )
cdn-id    = ( uri-host [ ":" port ] ) / pseudonym
pseudonym = token</code></pre>
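<p>As a rough illustration of this grammar, a simplified parser (our own sketch; a production parser must follow the full RFC 7230 list syntax, including quoted strings) could be:</p>

```python
def parse_cdn_loop(value):
    """Split a CDN-Loop header value into (cdn-id, [parameters]) pairs.
    Simplified: assumes no quoted strings or escaped characters."""
    entries = []
    for item in value.split(","):
        parts = [p.strip() for p in item.split(";")]
        entries.append((parts[0], parts[1:]))
    return entries

assert parse_cdn_loop("cdn1; param1, cdn2; param2") == [
    ("cdn1", ["param1"]),
    ("cdn2", ["param2"]),
]
assert parse_cdn_loop("cloudflare") == [("cloudflare", [])]
```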
            <p>This initially seems a lot to unpack. Essentially, <code>cdn-id</code> is a URI host ID for the destination resource, or a <code>pseudonym</code> related to the CDN that has processed the request. In the Cloudflare case, we might choose <code>pseudonym = cloudflare</code>, or use the URI host ID for the origin website that has been requested.</p><p>Then, <code>cdn-info</code> contains the <code>cdn-id</code> in addition to some optional parameters. This is denoted by <code>*( OWS ";" OWS parameter )</code>, where <code>OWS</code> represents optional whitespace, and <code>parameter</code> represents any CDN-specific information that may be informative for the specific request. If different CDN-specific <code>cdn-info</code> parameters are included in the same header, then these are comma-separated. For example, we may have <code>cdn-info = cdn1; param1, cdn2; param2</code> for two different CDNs that have interacted with the request.</p><p>Concretely, we give some examples to describe how the CDN-Loop header may be used by a CDN to mark requests as being processed.</p><p>If a request arrives at CDN <code>A</code> with no current CDN-Loop header, then <code>A</code> processes the request and adds:</p>
            <pre><code>CDN-Loop: cdn-info(A)</code></pre>
            <p>to the request headers.</p><p>If a request arrives at <code>A</code> with the following header:</p>
            <pre><code>CDN-Loop: cdn-info(B)</code></pre>
            <p>for some different CDN <code>B</code>, then <code>A</code> either modifies the header to be:</p>
            <pre><code>CDN-Loop: cdn-info(B), cdn-info(A)</code></pre>
            <p>or adds a separate header:</p>
            <pre><code>CDN-Loop: cdn-info(B)
CDN-Loop: cdn-info(A)</code></pre>
            <p>If a request arrives at <code>A</code> with:</p>
            <pre><code>CDN-Loop: cdn-info(A)</code></pre>
            <p>this indicates that the request has already been processed. At this point <code>A</code> detects a loop and may implement loop protection in accordance with its own policies. This is an implementation decision that is not defined in the specification. Options may include dropping the request or simply re-marking it, for example:</p>
            <pre><code>CDN-Loop: cdn-info(A); cdn-info(A)</code></pre>
            <p>A CDN could also utilise the optional parameters to indicate that a request had been processed:</p>
            <pre><code>CDN-Loop: cdn-info(A); processed=1</code></pre>
            <p>The ability to use different parameters in the header allows for much more granular loop detection and protection. For example, a CDN could drop requests that had previously looped N&gt;1 times, rather than just once. In addition, the advantage of using the CDN-Loop header is that it does not come with legacy baggage.  As we experienced previously, loop detection based on the Via header can conflict with existing usage of the header in web server implementations that eventually lead to compression issues and performance degradation. This makes CDN-Loop a viable and effective solution for detecting loop-protection attacks and applying preventions where needed.</p>
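<p>A sketch of such a granular policy (our own illustration, not Cloudflare's actual implementation) might count occurrences of the CDN's own identifier across the comma-separated entries and only drop beyond a configured threshold:</p>

```python
def loop_count(header_values, own_id):
    """Count how many comma-separated CDN-Loop entries carry our cdn-id.
    Simplified parsing: parameters after ';' are ignored."""
    count = 0
    for value in header_values:
        for item in value.split(","):
            if item.split(";")[0].strip() == own_id:
                count += 1
    return count

def should_drop(header_values, own_id, max_loops=1):
    """Drop once the request has already looped max_loops times."""
    return loop_count(header_values, own_id) >= max_loops

assert not should_drop(["cdn-info(B)"], "cdn-info(A)")
assert should_drop(["cdn-info(B), cdn-info(A)"], "cdn-info(A)")
assert not should_drop(["cdn-info(A)"], "cdn-info(A)", max_loops=2)
```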
    <div>
      <h4>Implementing CDN-Loop at Cloudflare</h4>
      <a href="#implementing-cdn-loop-at-cloudflare">
        
      </a>
    </div>
    <p>The IETF standardisation process welcomes running code and implementation experience in the real world. Cloudflare recently added support for the CDN-Loop header to requests that pass through the Cloudflare edge. This replaces the <code>CF-Connecting-IP</code> and <code>X-Forwarded-For</code> headers as the primary means for establishing loop protection. The structure that Cloudflare uses is similar to the examples above, where <code>cdn-info = cloudflare</code>. Extra parameters can be added to the header to determine how many times a request has been processed and in what manner.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2pUAuvQrFWUQ0KIcMZW97p/81878c177780d661c3a9277473595c88/loop2.png" />
            
            </figure><p>The Cloudflare edge drops any requests that have been processed multiple times to prevent malicious loop attacks. In the diagram above, requests that have looped more times than is allowed by a given CDN (red arrows) are dropped and an error is returned to the client. The edge can decide to allow requests to loop more than once in certain situations, rather than dropping immediately after the first loop.</p>
    <div>
      <h3>A (second) call-to-arms</h3>
      <a href="#a-second-call-to-arms">
        
      </a>
    </div>
    <p>Cloudflare previously made a call-to-arms to make use of the Via header across the industry for preventing malicious usage of proxies for request looping. This did not turn out as we hoped for the reasons mentioned above. Using CDN-Loop, we believe that there is finally a way of allowing CDNs to block loop attacks in a standardised and generic manner that fits with other existing implementations.</p><p>CDN-Loop is actively supported by Cloudflare and there have been none of the performance issues that came with the usage of Via. Recently, another CDN, Fastly, <a href="https://www.fastly.com/blog/creating-standards-for-cdns">introduced usage of the CDN-Loop header</a> for their own edge-based loop protection. We believe that this could be the start of a wider movement and that it would be advantageous for all reverse proxies and CDN-like providers to implement compliant usage of the CDN-Loop header.</p><p>While the original solution three years ago was very different, what Nick said at the time is still salient for all CDNs globally: <b>Let’s work together to avoid request loops.</b></p><p><i>Special thanks to Stephen Ludin, Mark Nottingham and Nick Sullivan for their work in drafting and improving the CDN-Loop specification. We would also like to extend thanks to the HTTPbis working group for their advice during the standardisation process.</i></p> ]]></content:encoded>
            <category><![CDATA[Attacks]]></category>
            <category><![CDATA[WAF]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[Speed & Reliability]]></category>
            <guid isPermaLink="false">4Ql99Ek6GNpANFcj3rG4JJ</guid>
            <dc:creator>Alex Davidson</dc:creator>
        </item>
        <item>
            <title><![CDATA[Privacy Pass - “The Math”]]></title>
            <link>https://blog.cloudflare.com/privacy-pass-the-math/</link>
            <pubDate>Thu, 09 Nov 2017 16:05:00 GMT</pubDate>
            <description><![CDATA[ During a recent internship at Cloudflare, I had the chance to help integrate support for improving the accessibility of websites that are protected by the Cloudflare edge network.  ]]></description>
            <content:encoded><![CDATA[ <p><i>This is a guest post by Alex Davidson, a PhD student in Cryptography at Royal Holloway, University of London, who is part of the team that developed </i><a href="https://privacypass.github.io"><i>Privacy Pass</i></a><i>. Alex worked at Cloudflare for the summer on deploying Privacy Pass on the Cloudflare network</i>.</p><p>During a recent internship at Cloudflare, I had the chance to help integrate support for improving the accessibility of websites that are protected by the Cloudflare edge network. Specifically, I helped develop an open-source browser extension named ‘Privacy Pass’ and added support for the Privacy Pass protocol within Cloudflare infrastructure. Currently, Privacy Pass works with the Cloudflare edge to help honest users to reduce the number of Cloudflare CAPTCHA pages that they see when browsing the web. However, the operation of Privacy Pass is not limited to the Cloudflare use-case and we envisage that it is applicable over a wider and more diverse range of use-cases as support grows.</p><p>In summary, this browser extension allows a user to generate cryptographically ‘blinded’ tokens that can then be signed by supporting servers following some receipt of authenticity (e.g. a CAPTCHA solution). The browser extension can then use these tokens to ‘prove’ honesty in future communications with the server, without having to solve more authenticity challenges.</p><p>The ‘blind’ aspect of the protocol means that it is infeasible for a server to link tokens that it signs to tokens that are redeemed in the future. This means that a client using the browser extension should not compromise their own privacy with respect to the server they are communicating with.</p><p>In this blog post we hope to give more of an insight into how we have developed the protocol and the security considerations that we have taken into account.
We have made use of some interesting and modern cryptographic techniques that we believe could have a future impact on a wide array of problems.</p>
    <div>
      <h3>Previously…</h3>
      <a href="#previously">
        
      </a>
    </div>
    <p>The research team released a specification last year for a “blind signing” protocol (very similar to the original proposal of <a href="#Cha82">Chaum</a>) using a variant of RSA known as ‘blind RSA’. Blind RSA simply uses the homomorphic properties of the textbook RSA signature scheme to allow the user to have messages signed <i>obliviously</i>. Since then, George Tankersley and Filippo Valsorda gave a talk at <a href="https://youtu.be/GqY7YUv8b5Y">Real World Crypto 2017</a> explaining the idea in more detail and how the protocol could be implemented. The intuition behind a blind signing protocol is also given in <a href="/cloudflare-supports-privacy-pass">Nick’s blog post</a>.</p><p>A blind signing protocol between a server A and a client B roughly takes the following form:</p><ul><li><p>B generates some value <code>t</code> that they require a signature from A for.</p></li><li><p>B calculates a ‘blinded’ version of <code>t</code> that we will call <code>bt</code></p></li><li><p>B sends <code>bt</code> to A</p></li><li><p>A signs <code>bt</code> with their secret signing key and returns a signature <code>bz</code> to B</p></li><li><p>B receives <code>bz</code> and ‘unblinds’ to receive a signature <code>z</code> for value <code>t</code>.</p></li></ul><p>Due to limitations arising from the usage of RSA (e.g. large signature sizes, slower operations), there were efficiency concerns surrounding the extra bandwidth and computation time on the client browser. Fortunately, we received a lot of feedback from many notable individuals (full acknowledgments below). In short, this helped us to come up with a protocol with much lower overheads in storage, bandwidth and computation time using elliptic curve cryptography as the foundation instead.</p>
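<p>The exchange above can be sketched with textbook RSA and toy parameters (illustrative only: a real deployment needs large moduli and proper message encoding, and none of the constants below come from the actual protocol):</p>

```python
import math
import random

# Toy RSA key (the server holds d; the client only knows N and e).
p, q = 1009, 1013
N = p * q
e = 17
d = pow(e, -1, (p - 1) * (q - 1))

def blind(t):
    """B blinds t with a random factor r, producing bt."""
    while True:
        r = random.randrange(2, N)
        if math.gcd(r, N) == 1:
            break
    return t * pow(r, e, N) % N, r

def sign_blinded(bt):
    """A signs bt with its secret key, never learning t."""
    return pow(bt, d, N)

def unblind(bz, r):
    """B strips the blinding factor, recovering z = t^d mod N."""
    return bz * pow(r, -1, N) % N

t = 4242
bt, r = blind(t)
z = unblind(sign_blinded(bt), r)
assert pow(z, e, N) == t  # z is a valid RSA signature on t
```

<p>The unblinding works because <code>bz = (t * r^e)^d = t^d * r mod N</code>, so multiplying by <code>r^-1</code> leaves exactly <code>t^d</code>.</p>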
    <div>
      <h3>Elliptic curves (a very short introduction)</h3>
      <a href="#elliptic-curves-a-very-short-introduction">
        
      </a>
    </div>
    <p>An elliptic curve is defined over a finite field modulo some prime <code>p</code>. Briefly, an <code>(x,y)</code> coordinate is said to lie on the curve if it satisfies the following equation:</p><p><code>y^2 = x^3 + a*x + b (modulo p)</code></p><p>Nick Sullivan wrote an introductory <a href="/a-relatively-easy-to-understand-primer-on-elliptic-curve-cryptography/">blog post</a> on the use of elliptic curves in cryptography a while back, so this may be a good place to start if you’re new to the area.</p><p>Elliptic curves have been studied for use in cryptography since the independent works of Koblitz and Miller (1984-85). However, EC-based ciphers and signature algorithms have rapidly started replacing older primitives in the Internet-space due to large improvements in the choice of security parameters available. What this translates to is that encryption/signing keys can be much smaller in EC cryptography when compared to more traditional methods such as RSA. This comes with huge efficiency benefits when computing encryption and signing operations, thus making EC cipher suites perfect for use on an Internet-wide scale.</p><p>Importantly, there are many different elliptic curve configurations that are defined by the choice of <code>p</code>, <code>a</code> and <code>b</code> for the equation above. These present different security and efficiency trade-offs; some have been standardized by NIST. In this work, we will be using the NIST specified <a href="https://csrc.nist.gov/publications/detail/fips/186/4/final">P256 curve</a>; however, this choice is largely agnostic to the protocol that we have designed.</p>
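<p>Checking whether a coordinate lies on such a curve is a direct translation of the equation (toy parameters of our own choosing, not a curve anyone should actually use):</p>

```python
def on_curve(x, y, a, b, p):
    """Does (x, y) satisfy y^2 = x^3 + a*x + b (mod p)?"""
    return (y * y - (x ** 3 + a * x + b)) % p == 0

# Toy curve y^2 = x^3 + 2x + 3 over the field of integers mod 97:
assert on_curve(3, 6, a=2, b=3, p=97)      # 36 = 27 + 6 + 3
assert not on_curve(3, 7, a=2, b=3, p=97)
```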
    <div>
      <h4>Blind signing via elliptic curves</h4>
      <a href="#blind-signing-via-elliptic-curves">
        
      </a>
    </div>
    <p>Translating our blind signing protocol from RSA to elliptic curves required deriving a whole new protocol. Some of the suggestions pointed out cryptographic constructions known as “oblivious pseudorandom functions”. A pseudorandom function or PRF is a mainstay of the traditional cryptographic arsenal and essentially takes a key and some string as input and outputs some pseudorandom value.</p><p>Let F be our PRF, then the security requirement on such a function is that evaluating:</p><p><code>y = F(K,x)</code></p><p>is indistinguishable from evaluating:</p><p><code>y’ = f(x)</code></p><p>where f is a randomly chosen function with outputs defined in the same range as <code>F(K,-)</code>. Choosing a function f at random undoubtedly leads to random outputs, however for <code>F</code>, randomness is derived from the choice of key <code>K</code>. In practice, we would instantiate a PRF using something like HMAC-SHA256.</p>
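<p>As the post notes, HMAC-SHA256 gives a practical PRF instantiation:</p>

```python
import hashlib
import hmac
import os

def prf(key, x):
    """F(K, x) instantiated with HMAC-SHA256."""
    return hmac.new(key, x, hashlib.sha256).digest()

K = os.urandom(32)
y = prf(K, b"some input")
assert len(y) == 32                    # 256-bit output
assert y == prf(K, b"some input")      # deterministic for a fixed key
assert y != prf(K, b"another input")   # distinct inputs give distinct outputs
```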
    <div>
      <h4>Oblivious PRFs</h4>
      <a href="#oblivious-prfs">
        
      </a>
    </div>
    <p>An oblivious PRF (OPRF) is actually a protocol between a server S and a client C. In the protocol, S holds a key <code>K</code> for some PRF <code>F</code> and C holds an input <code>x</code>. The security goal is that C receives the output <code>y = F(K,x)</code> without learning the key <code>K</code> and S does not learn the value <code>x</code>.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5LG1M3dg4OwiUYd1TFWIWJ/8e26d23ae4dd905c599cece4cf9c1cbd/image3-1.png" />
            
            </figure><p>It may seem difficult to construct such a functionality without revealing the input x or the key K. However, there are numerous (and very efficient) constructions of OPRFs with applications to many different cryptographic problems such as <a href="https://eprint.iacr.org/2016/799">private set intersection</a>, <a href="https://eprint.iacr.org/2016/144">password-protected secret-sharing</a> and <a href="http://webee.technion.ac.il/~hugo/sphinx.pdf">cryptographic password storage</a> to name a few.</p>
    <div>
      <h4>OPRFs from elliptic curves</h4>
      <a href="#oprfs-from-elliptic-curves">
        
      </a>
    </div>
    <p>A simple instantiation of an OPRF from elliptic curves was given by Jarecki et al. <a href="#jkk14">JKK14</a>; we use this as the foundation for our blind signing protocol.</p><ul><li><p>Let <code><b>G</b></code> be a cyclic group of prime-order</p></li><li><p>Let <code>H</code> be a collision-resistant hash function hashing into <code><b>G</b></code></p></li><li><p>Let <code>k</code> be a private key held by S</p></li><li><p>Let <code>x</code> be a private input held by C</p></li></ul><p>The protocol now proceeds as:</p><ul><li><p>C sends <code>H(x)</code> to S</p></li><li><p>S returns <code>kH(x)</code> to C</p></li></ul><p>This is an exceptionally simple protocol, and security is established since:</p><ul><li><p>The collision-resistant hash function prevents S from reversing <code>H(x)</code> to learn <code>x</code></p></li><li><p>The hardness of the discrete log problem (DLP) prevents C from learning <code>k</code> from <code>kH(x)</code></p></li><li><p>The output <code>kH(x)</code> is pseudorandom since <code><b>G</b></code> is a prime-order group and <code>k</code> is chosen at random.</p></li></ul>
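            <p>The two-message exchange above can be sketched in Python. As a toy stand-in for the elliptic curve group, we use the order-<code>q</code> subgroup of squares modulo a safe prime <code>p = 2q + 1</code> (the real protocol uses P-256; the primes, key, and hash-to-group construction here are purely illustrative, and in this multiplicative notation the scalar multiplication <code>kH(x)</code> becomes the exponentiation <code>H(x)^k mod p</code>).</p>
            <pre><code>```python
import hashlib

# Toy stand-in for the prime-order group G: the order-q subgroup of
# squares modulo the safe prime p = 2q + 1. Illustration only --
# the real protocol uses the P-256 elliptic curve group.
p, q = 2039, 1019

def H(x: bytes) -> int:
    """Toy hash into G: square a digest mod p (so the result is a QR)."""
    h = int.from_bytes(hashlib.sha256(x).digest(), "big") % p
    return (h * h) % p

k = 777  # S: the server's PRF key, an element of ZZ_q

# C sends H(x) to S; S returns kH(x); C's OPRF output is y = F(k, x).
hx = H(b"the client's private input")
y = pow(hx, k, p)
```</code></pre>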
    <div>
      <h4>Blind signing via an OPRF</h4>
      <a href="#blind-signing-via-an-oprf">
        
      </a>
    </div>
    <p>Using the OPRF design above as the foundation, the research team wrote a variation that we can use for a blind signing protocol; we detail this construction below. In our ‘blind signing’ protocol we require that:</p><ul><li><p>The client/user can have random values signed obliviously by the edge server</p></li><li><p>The client can ‘unblind’ these values and present them in the future for verification</p></li><li><p>The edge can commit to the secret key publicly and prove that it is used for signing all tokens globally</p></li></ul><p>The blind signing protocol is split into two phases.</p><p>Firstly, there is a <b>blind signing phase</b> that is carried out between the user and the edge after the user has successfully solved a challenge. The result is that the user receives a number of <code>signed</code> tokens (default 30) that are unblinded and stored for future use. Intuitively, this mirrors the execution of the OPRF protocol above.</p><p>Secondly, there is a <b>redemption phase</b> where an unblinded token is used for bypassing a future iteration of the challenge.</p><p>Let <code><b>G</b></code> be a cyclic group of prime-order <code>q</code>. Let <code>H_1</code>,<code>H_2</code> be a pair of collision-resistant hash functions; <code>H_1</code> hashes into the group <code><b>G</b></code> as before, <code>H_2</code> hashes into a binary string of length <code>n</code>.</p><p>In the following, we will use slightly different notation to make it consistent with existing literature. Let <code>x</code> be a private key held by the server S. Let <code>t</code> be the input held by the user/client C. Let <code>ZZ_q</code> be the ring of integers modulo <code>q</code>. We write all operations in their scalar multiplication form to be consistent with EC notation. Let <code>MAC_K()</code> be a <a href="https://en.wikipedia.org/wiki/Message_authentication_code">message-authentication code</a> algorithm keyed by a key <code>K</code>.</p>
    <div>
      <h4>Signing phase</h4>
      <a href="#signing-phase">
        
      </a>
    </div>
    <ul><li><p>C samples a random ‘blind’ <code>r ← ZZ_q</code></p></li><li><p>C computes <code>T = H_1(t)</code> and then blinds it by computing <code>rT</code></p></li><li><p>C sends <code>M = rT</code> to S</p></li><li><p>S computes <code>Z = xM</code> and returns <code>Z</code> to C</p></li><li><p>C computes <code>(1/r)*Z = xT = N</code> and stores the pair <code>(t,N)</code> for some point in the future</p></li></ul><p>We think of <code>T = H_1(t)</code> as a token; these objects form the backbone of the protocol that we use to bypass challenges. Notice that the only difference between this protocol and the OPRF above is the blinding factor <code>r</code>.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3LJYvqKAwDw1Rh6oPlGeZy/ef4c5d38cf87ce48480c6e7680d17444/image2.png" />
            
            </figure>
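            <p>The signing phase above can be sketched under the same toy-group assumptions as before (an order-<code>q</code> subgroup of squares modulo a safe prime standing in for P-256; all parameter values are illustrative). Unblinding works because <code>r</code> has an inverse modulo the group order <code>q</code>.</p>
            <pre><code>```python
import hashlib
import secrets

# Toy group: the order-q subgroup of squares mod the safe prime
# p = 2q + 1 (a stand-in for P-256). In multiplicative notation the
# blog post's scalar multiplication rT becomes T^r mod p.
p, q = 2039, 1019

def H1(t: bytes) -> int:
    """Toy hash-to-group: square a digest mod p."""
    h = int.from_bytes(hashlib.sha256(t).digest(), "big") % p
    return (h * h) % p

x = 587                            # S: the edge's private signing key
t = secrets.token_bytes(16)        # C: a fresh random token seed

# --- Signing phase ---
T = H1(t)                          # C: the token T = H_1(t)
r = secrets.randbelow(q - 1) + 1   # C: random blind r in ZZ_q
M = pow(T, r, p)                   # C -> S: blinded token M = rT
Z = pow(M, x, p)                   # S -> C: signed blinded token Z = xM
r_inv = pow(r, -1, q)              # C: 1/r mod q (the group has order q)
N = pow(Z, r_inv, p)               # C: unblind, giving N = xT

# The unblinded value matches a direct signing of T, so C stores (t, N).
assert N == pow(T, x, p)
```</code></pre>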
    <div>
      <h4>Redemption phase</h4>
      <a href="#redemption-phase">
        
      </a>
    </div>
    <ul><li><p>C calculates request binding data <code>req</code> and chooses an unspent token <code>(t,N)</code></p></li><li><p>C calculates a shared key <code>sk = H_2(t,N)</code> and sends <code>(t, MAC_sk(req))</code> to S</p></li><li><p>S recalculates <code>req'</code> based on the request data that it witnesses</p></li><li><p>S checks that <code>t</code> has not been spent already and calculates <code>T = H_1(t)</code>, <code>N = xT</code>, and <code>sk = H_2(t,N)</code></p></li><li><p>Finally S checks that <code>MAC_sk(req') =?= MAC_sk(req)</code>, and stores <code>t</code> to check against future redemptions</p></li></ul><p>If all the steps above pass, then the server validates that the user has a validly signed token. When we refer to ‘passes’ we mean the pair <code>(t, MAC_sk(req))</code> and if verification is successful the edge server grants the user access to the requested resource.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5UrTr7lpAY9Fin8rctVoLa/61fbb098340ac56a5012b6f03a13acc0/image1-1.png" />
            
            </figure>
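            <p>The redemption phase can be sketched in the same toy setting. The helper names (<code>redeem</code>, <code>H2</code>) and the derivation of the shared key are illustrative choices, not the extension's actual implementation; the structure follows the steps above, including the double-spend check.</p>
            <pre><code>```python
import hashlib
import hmac

p, q = 2039, 1019   # toy group as before (stand-in for P-256)
x = 587             # S: the edge's private key

def H1(t: bytes) -> int:
    """Toy hash-to-group: square a digest mod p."""
    h = int.from_bytes(hashlib.sha256(t).digest(), "big") % p
    return (h * h) % p

def H2(t: bytes, N: int) -> bytes:
    """Derive the shared MAC key sk = H_2(t, N) (toy derivation)."""
    return hashlib.sha256(t + N.to_bytes(2, "big")).digest()

spent = set()  # S: list of already-redeemed tokens

def redeem(t: bytes, tag: bytes, req: bytes) -> bool:
    """S: verify a redemption (t, MAC_sk(req)) against request data req."""
    if t in spent:
        return False                  # double-spend check
    N = pow(H1(t), x, p)              # S recomputes N = xT itself
    sk = H2(t, N)
    expected = hmac.new(sk, req, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        return False
    spent.add(t)                      # store t against future redemptions
    return True

# C: redeem an unspent pair (t, N) obtained during the signing phase.
t = b"token-seed"
N = pow(H1(t), x, p)                  # in practice C stored this unblinded
sk = H2(t, N)
req = b"GET /protected HTTP/1.1"     # request binding data
tag = hmac.new(sk, req, hashlib.sha256).digest()

assert redeem(t, tag, req)            # first redemption succeeds
assert not redeem(t, tag, req)        # a second attempt is rejected
```</code></pre>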
    <div>
      <h3>Cryptographic security of protocol</h3>
      <a href="#cryptographic-security-of-protocol">
        
      </a>
    </div>
    <p>There are many different ways in which we need to ensure that the protocol remains “secure”. Clearly, one of the main requirements is that the user remains anonymous in the transaction. Furthermore, we need to show that the client is unable to leverage the protocol to learn the private key of the edge, or to gain an arbitrarily large number of valid tokens. We give two security arguments for our protocol that we can easily reduce to cryptographic assumptions on the hardness of widely-used problems. There are a number of other security goals for the protocol, but we consider the two arguments below to be the fundamental security requirements.</p>
    <div>
      <h4>Unlinkability in the presence of an adversarial edge</h4>
      <a href="#unlinkability-in-the-presence-of-an-adversarial-edge">
        
      </a>
    </div>
    <p>Similarly to the RSA blind signing protocol, the blinding factor <code>r</code> prevents the edge from learning the value of <code>T</code>. Since <code>r</code> is not used in the redemption phase of the protocol, there is no way that the server can link a blinded token <code>rT</code> from the signing phase to any token in a given redemption phase. Since S recalculates <code>T</code> during redemption, it may be tempting to think that S could recover <code>r</code> from <code>rT</code>. However, the hardness of the discrete log problem prevents S from launching this attack, and so the server gains no knowledge of <code>r</code>.</p><p>As mentioned, and similarly to the <a href="#jkk14">JKK14</a> OPRF protocol above, we rely on the hardness of standard cryptographic assumptions such as the discrete log problem (DLP) and the existence of collision-resistant hash functions. Under these assumptions it is possible to write a proof of security in the presence of a dishonest server, showing that such a server is unable to link an execution of the signing phase with any execution of the redemption phase with probability higher than random guessing.</p><p>Intuitively, in the signing phase, C sends randomly distributed data due to the blinding mechanism, and so S cannot learn anything from this data alone. In the redemption phase, C unveils their token, but the transcript of the signing phase witnessed by S is essentially random and so cannot be used to learn anything from the redemption phase.</p><p>This is not a full proof of security, but it gives an idea of how we can derive cryptographic hardness for the underlying protocol. We hope to publish a more detailed cryptographic proof in the near future to accompany our protocol design.</p>
    <div>
      <h3>Key privacy for the edge</h3>
      <a href="#key-privacy-for-the-edge">
        
      </a>
    </div>
    <p>It is also crucial to prove that the exchange does not reveal the secret key <code>x</code> to the user. If this were to happen, then the user would be able to arbitrarily sign their own tokens, giving them an effectively infinite supply.</p><p>Notice that the only time the client is exposed to the key is when they receive <code>Z = xM</code>. In elliptic-curve terminology, the client receives their blinded token scalar multiplied with <code>x</code>. Notice that this is also identical to the interaction that an adversary witnesses in the discrete log problem. In fact, if the client were able to compute <code>x</code> from <code>Z</code>, then the client would also be able to solve the DLP — which is thought to be very hard for established key sizes. In this way, we have a sufficient guarantee that an adversarial client would not be able to learn the key from the signing interaction.</p>
    <div>
      <h4>Preventing further deanonymization attacks using “Verifiable” OPRFs</h4>
      <a href="#preventing-further-deanonymization-attacks-using-verifiable-oprfs">
        
      </a>
    </div>
    <p>While the proof of security above gives some assurances about the cryptographic design of the protocol, it does not cover the possibility of out-of-band deanonymization. For instance, the edge server could sign tokens with a new secret key each time. Ignoring the cost that this would incur, the server would be able to link token signing and redemption phases by simply checking the validation for each private key in use.</p><p>There is a solution known as a ‘discrete log equivalence proof’ (DLEQ proof). Using this, a server commits to a secret key <code>x</code> by publicly posting a pair <code>(G, xG)</code> for a generator <code>G</code> of the prime-order group <code><b>G</b></code>. A DLEQ proof intuitively allows the server to prove to the user that the signed tokens <code>Z = xrT</code> and the commitment <code>xG</code> both have the same discrete log relation <code>x</code>. Since the commitment is posted publicly (similarly to a <a href="https://www.certificate-transparency.org/">Certificate Transparency log</a>), it is verifiable by all users, and so the deanonymization attack above would not be possible.</p>
    <div>
      <h4>DLEQ proofs</h4>
      <a href="#dleq-proofs">
        
      </a>
    </div>
    <p>The DLEQ proof objects take the form of a Chaum-Pedersen <a href="#cp93">CP93</a> non-interactive zero-knowledge (NIZK) proof. Similar proofs were used in <a href="#jkk14">JKK14</a> to show that their OPRF protocol produced “verifiable” randomness; they defined their construction as a VOPRF. In the following, we describe how these proofs can be incorporated into the signing phase above.</p><p><i>The DLEQ proof verification in the extension is still in development and is not completely consistent with the protocol below. We hope to complete the verification functionality in the near future.</i></p><p>Let <code>M = rT</code> be the blinded token that C sends to S, let <code>(G,Y) = (G,xG)</code> be the commitment from above, and let <code>H_3</code> be a new hash function (modelled as a random oracle for security purposes). In the protocol below, we can think of S playing the role of the 'prover' and C the 'verifier' in a traditional NIZK proof system.</p><ul><li><p>S computes <code>Z = xM</code>, as before.</p></li><li><p>S also samples a random nonce <code>k ← ZZ_q</code> and commits to the nonce by calculating <code>A = kG</code> and <code>B = kM</code></p></li><li><p>S constructs a challenge <code>c ← H_3(G,Y,M,Z,A,B)</code> and computes <code>s = k-cx (mod q)</code></p></li><li><p>S sends <code>(c,s)</code> to the user C</p></li><li><p>C recalculates <code>A' = sG + cY</code> and <code>B' = sM + cZ</code> and hashes <code>c' = H_3(G,Y,M,Z,A',B')</code>.</p></li><li><p>C verifies that <code>c' =?= c</code>.</p></li></ul><p>Note that correctness follows since</p>
            <pre><code>A' = sG + cY = (k-cx)G + cxG = kG and B' = sM + cZ = r(k-cx)T + crxT = krT = kM </code></pre>
            <p>We write <code>DLEQ(Z/M == Y/G)</code> to denote the proof that is created by S and validated by C. In summary, if both parties have a consistent view of <code>(G,Y)</code> for the same epoch, then the proof should verify correctly. As long as the discrete log problem remains hard to solve, this proof remains zero-knowledge (in the random oracle model). For our use-case, the proof verifies that the same key <code>x</code> is used for each invocation of the protocol, as long as <code>(G,Y)</code> does not change.</p>
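            <p>The prove/verify steps above can be sketched in the same toy group used earlier (the order-<code>q</code> subgroup of squares modulo a safe prime, with generator <code>G = 4</code>, standing in for P-256; function names and parameter values are illustrative). In multiplicative notation, <code>A' = sG + cY</code> becomes <code>G^s * Y^c mod p</code>.</p>
            <pre><code>```python
import hashlib
import secrets

p, q = 2039, 1019      # toy group (stand-in for P-256)
G = 4                  # generates the order-q subgroup of squares mod p
x = 587                # S: the edge's private key
Y = pow(G, x, p)       # the public commitment (G, Y) = (G, xG)

def H3(*elems: int) -> int:
    """Challenge hash c = H_3(G, Y, M, Z, A, B), with output in ZZ_q."""
    data = b"".join(e.to_bytes(2, "big") for e in elems)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def dleq_prove(M: int) -> tuple:
    """S: sign M and prove that Z and Y share the discrete log x."""
    Z = pow(M, x, p)
    k = secrets.randbelow(q - 1) + 1        # random nonce k in ZZ_q
    A, B = pow(G, k, p), pow(M, k, p)       # commitments A = kG, B = kM
    c = H3(G, Y, M, Z, A, B)
    s = (k - c * x) % q
    return Z, c, s

def dleq_verify(M: int, Z: int, c: int, s: int) -> bool:
    """C: recompute A' = sG + cY, B' = sM + cZ and check the challenge."""
    A2 = (pow(G, s, p) * pow(Y, c, p)) % p
    B2 = (pow(M, s, p) * pow(Z, c, p)) % p
    return c == H3(G, Y, M, Z, A2, B2)

M = pow(G, 42, p)                 # some blinded token in the group
Z, c, s = dleq_prove(M)
assert dleq_verify(M, Z, c, s)    # an honest proof verifies
```</code></pre>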
    <div>
      <h4>Batching the proofs</h4>
      <a href="#batching-the-proofs">
        
      </a>
    </div>
    <p>Unfortunately, a drawback of the proof above is that it has to be instantiated for each individual token sent in the protocol. Since we send 30 tokens by default, this would require the server to also send 30 DLEQ proofs (with two EC elements each) and the client to verify each proof individually.</p><p>Interestingly, Henry showed that it was possible to batch the above NIZK proofs into one object with only one verification required <a href="#hen14">Hen14</a>. Using this batching technique substantially reduces the communication and computation cost of including the proof.</p><p>Let <code>n</code> be the number of tokens to be signed in the interaction, so we have <code>M_i = r_i*T_i</code> for the set of blinded tokens corresponding to inputs <code>t_i</code>.</p><ul><li><p>S generates corresponding <code>Z_i = x*M_i</code></p></li><li><p>S also computes a seed <code>z = H_3(G,Y,M_1,...,M_n,Z_1,...,Z_n)</code></p></li><li><p>S then initializes a pseudorandom number generator PRNG with the seed <code>z</code> and outputs <code>c_1, ... , c_n ← PRNG(z)</code> where the output domain of PRNG is <code>ZZ_q</code></p></li><li><p>S generates composite group elements:</p></li></ul>
            <pre><code>M = (c_1*M_1) + ... + (c_n*M_n), Z = (c_1*Z_1) + ... + (c_n*Z_n)</code></pre>
            <ul><li><p>S calculates <code>(c,s) ← DLEQ(Z/M == Y/G)</code> over the composite elements and sends <code>(c,s)</code> to C, where <code>DLEQ(Z/M == Y/G)</code> refers to the proof protocol used in the non-batching case.</p></li><li><p>C computes <code>c'_1, ... , c'_n ← PRNG(z)</code>, re-computes the composite elements <code>M'</code>, <code>Z'</code> and checks that <code>c' =?= c</code></p></li></ul><p>To see why this works, consider the reduced case where n = 2:</p>
            <pre><code>Z_1 = x(M_1),
Z_2 = x(M_2),
(c_1*Z_1) = c_1(x*M_1) = x(c_1*M_1),
(c_2*Z_2) = c_2(x*M_2) = x(c_2*M_2),
(c_1*Z_1) + (c_2*Z_2) = x[(c_1*M_1) + (c_2*M_2)]
</code></pre>
            <p>Therefore, all the elliptic curve points share the same discrete log relation, which is equal to the secret key committed to by the edge.</p>
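            <p>The construction of the composite elements can be sketched in the same toy group (generator <code>G = 4</code> standing in for P-256). Here Python's seeded <code>random.Random</code> stands in for the PRNG; a real implementation derives the <code>c_i</code> from a cryptographically secure seeded generator. All values are illustrative.</p>
            <pre><code>```python
import hashlib
import random

p, q = 2039, 1019          # toy group (stand-in for P-256)
G, x = 4, 587              # generator and the edge's private key
Y = pow(G, x, p)           # the public key commitment

# n blinded tokens M_i and their signatures Z_i = x*M_i.
Ms = [pow(G, e, p) for e in (12, 34, 56)]
Zs = [pow(M, x, p) for M in Ms]

# Seed z from the public transcript; both parties can derive it.
z = hashlib.sha256(repr((G, Y, Ms, Zs)).encode()).digest()
prng = random.Random(z)                 # stand-in for the seeded PRNG
cs = [prng.randrange(1, q) for _ in Ms]

# Composite elements M = (c_1*M_1) + ... + (c_n*M_n) and likewise for Z
# (products of powers in this multiplicative notation).
M, Z = 1, 1
for Mi, Zi, ci in zip(Ms, Zs, cs):
    M = (M * pow(Mi, ci, p)) % p
    Z = (Z * pow(Zi, ci, p)) % p

# A single DLEQ proof over (M, Z) now covers the whole batch, because
# the composite pair has the same discrete log relation x as every
# individual (M_i, Z_i) pair.
assert Z == pow(M, x, p)
```</code></pre>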
    <div>
      <h4>Benefits of V-OPRF vs blind RSA</h4>
      <a href="#benefits-of-v-oprf-vs-blind-rsa">
        
      </a>
    </div>
    <p>While the blind RSA specification that we released fulfilled our needs, the new protocol brings the following concrete gains:</p><ul><li><p>Simpler, faster primitives</p></li><li><p>10x savings in pass size (~256 bits using P-256 instead of ~2048 bits with RSA)</p></li><li><p>The only thing the edge has to manage is a private scalar. No certificates.</p></li><li><p>No need for public-key encryption at all, since the derived shared key used to calculate each MAC is never transmitted and cannot be found from passive observation without knowledge of the edge key or the user's blinding factor.</p></li><li><p>Exponentiations are more efficient due to the use of elliptic curves.</p></li><li><p>Easier key rotation. Instead of managing certificates pinned in TBB and submitted to CT, we can use the DLEQ proofs to allow users to positively verify that they're in the same anonymity set, with regard to the edge secret key, as everyone else.</p></li></ul>
    <div>
      <h4>Download</h4>
      <a href="#download">
        
      </a>
    </div>
    <p>Privacy Pass v1.0 is available as a browser extension for <a href="https://chrome.google.com/webstore/detail/privacy-pass/ajhmfdgkijocedmfjonnpjfojldioehi">Chrome</a> and <a href="https://addons.mozilla.org/en-US/firefox/addon/privacy-pass/">Firefox</a>. If you find any issues while using then <a href="https://privacypass.github.io">let us know</a>.</p>
    <div>
      <h4>Source code</h4>
      <a href="#source-code">
        
      </a>
    </div>
    <p>The code for the browser extension and server has been open-sourced and can be found at <a href="https://github.com/privacypass/challenge-bypass-extension">https://github.com/privacypass/challenge-bypass-extension</a> and <a href="https://github.com/privacypass/challenge-bypass-server">https://github.com/privacypass/challenge-bypass-server</a> respectively. We are welcoming contributions if you happen to notice any improvements that can be made to either component. If you would like to get in contact with the Privacy Pass team then find us at our <a href="https://privacypass.github.io">website</a>.</p>
    <div>
      <h4>Protocol details</h4>
      <a href="#protocol-details">
        
      </a>
    </div>
    <p>More information about the protocol can be found <a href="https://privacypass.github.io/protocol">here</a>.</p>
    <div>
      <h4>Acknowledgements</h4>
      <a href="#acknowledgements">
        
      </a>
    </div>
    <p>The creation of Privacy Pass has been a joint effort by the team made up of George Tankersley, Ian Goldberg, Nick Sullivan, Filippo Valsorda and myself.</p><p>I'd also like to thank Eric Tsai for creating the logo and extension design, Dan Boneh for helping us develop key parts of the protocol, as well as Peter Wu and Blake Loring for their helpful code reviews. We would also like to acknowledge Sharon Goldberg, Christopher Wood, Peter Eckersley, Brian Warner, Zaki Manian, Tony Arcieri, Prateek Mittal, Zhuotao Liu, Isis Lovecruft, Henry de Valence, Mike Perry, Trevor Perrin, Zi Lin, Justin Paine, Marek Majkowski, Eoin Brady, Aaran McGuire, and many others who were involved in one way or another and whose efforts are appreciated.</p>
    <div>
      <h4>References</h4>
      <a href="#references">
        
      </a>
    </div>
    <p>Cha82: Chaum. <a href="https://dl.acm.org/citation.cfm?doid=4372.4373">Blind signatures for untraceable payments. CRYPTO'82.</a></p><p>CP93: Chaum, Pedersen. <a href="http://chaum.com/publications/Wallet_Databases.pdf">Wallet Databases with Observers. CRYPTO'92.</a></p><p>Hen14: Ryan Henry. <a href="https://uwspace.uwaterloo.ca/bitstream/handle/10012/8621/Henry_Ryan.pdf">Efficient Zero-Knowledge Proofs and Applications, August 2014.</a></p><p>JKK14: Jarecki, Kiayias, Krawczyk. <a href="https://eprint.iacr.org/2014/650.pdf">Round-Optimal Password-Protected Secret Sharing and T-PAKE in the Password-Only Model.</a></p><p>JKKX16: Jarecki, Kiayias, Krawczyk, Xu. <a href="https://eprint.iacr.org/2016/144.pdf">Highly-Efficient and Composable Password-Protected Secret Sharing.</a></p> ]]></content:encoded>
            <category><![CDATA[Privacy Pass]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[CAPTCHA]]></category>
            <category><![CDATA[Chrome]]></category>
            <category><![CDATA[Firefox]]></category>
            <category><![CDATA[Research]]></category>
            <category><![CDATA[Cryptography]]></category>
            <guid isPermaLink="false">41Lr8xZtaEnIidX8Q0fvEX</guid>
            <dc:creator>Alex Davidson</dc:creator>
        </item>
    </channel>
</rss>