
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/">
    <channel>
        <title><![CDATA[ The Cloudflare Blog ]]></title>
        <description><![CDATA[ Get the latest news on how products at Cloudflare are built, technologies used, and join the teams helping to build a better Internet. ]]></description>
        <link>https://blog.cloudflare.com</link>
        <atom:link href="https://blog.cloudflare.com/" rel="self" type="application/rss+xml"/>
        <language>en-us</language>
        <image>
            <url>https://blog.cloudflare.com/favicon.png</url>
            <title>The Cloudflare Blog</title>
            <link>https://blog.cloudflare.com</link>
        </image>
        <lastBuildDate>Sat, 04 Apr 2026 12:27:23 GMT</lastBuildDate>
        <item>
            <title><![CDATA[International Women’s Day 2022]]></title>
            <link>https://blog.cloudflare.com/international-womens-day-2022/</link>
            <pubDate>Tue, 08 Mar 2022 13:55:53 GMT</pubDate>
            <description><![CDATA[ Welcome to International Women’s Day 2022! Here at Cloudflare, we are happy to celebrate it with you! Our celebration is not only this blog post, but many events prepared for the month of March: our way of honoring Women’s History Month by showcasing women’s empowerment ]]></description>
            <content:encoded><![CDATA[ <p></p><blockquote><p>“I would venture to guess that Anon,who wrote so many poems without signing them,was often a <i>woman.</i>” - <b>Virginia Woolf</b></p></blockquote><p><b>Welcome to International Women’s Day 2022!</b> Here at Cloudflare, we are happy to celebrate it with you! Our celebration is not only this blog post, but many events prepared for the month of March: our way of honoring <a href="https://en.wikipedia.org/wiki/Women%27s_History_Month">Women’s History Month</a> by showcasing women’s empowerment. We want to celebrate the achievements, ideas, passion and work that women bring to the world. We want to advocate for equality and to <a href="https://en.wikipedia.org/wiki/Gender_parity">achieve gender parity</a>. And we want to highlight the brilliant work that our women colleagues do every day. Welcome!</p><p>This is a time of celebration but also one to reflect on the current state. The global gender gap is not expected to close for <a href="https://www.weforum.org/agenda/2021/04/136-years-is-the-estimated-journey-time-to-gender-equality/#:~:text=COVID%2D19%20has%20set%20back%20progress%20for%20women's%20rights.&amp;text=The%20global%20gender%20gap%20is,Forum's%20Global%20Gender%20Gap%20report.">another 136 years</a>. This gap has also worsened due to the <a href="https://www.unwomen.org/sites/default/files/Headquarters/Attachments/Sections/Library/Publications/2020/Policy-brief-The-impact-of-COVID-19-on-women-en.pdf">COVID-19 pandemic</a>, which has negatively impacted the lives of women and girls by deepening pre-existing inequalities. Improving this state is a collective effort—we all need to get involved!</p>
    <div>
      <h2>Who are we? Womenflare!</h2>
      <a href="#who-are-we-womenflare">
        
      </a>
    </div>
    <p>First, let’s introduce ourselves. We are <b>Womenflare</b>—Cloudflare’s Employee Resource Group (ERG) for all who identify as and advocate for women. We’re an employee-led group that is here to empower, represent, and support.</p><p>Our purpose is not only to celebrate women’s achievements but also to shed light on inequalities. That is why for International Women’s Day 2022, we’re joining in focusing on the theme of <a href="https://www.internationalwomensday.com/">#BreakTheBias</a> throughout our month of events and activities:</p><p>We can break the bias in our communities.<br />We can break the bias in our workplaces.<br />We can break the bias in our schools, colleges, and universities.<br />Together, we can all break the bias,<br />on International Women’s Day (IWD) and beyond.</p>
<p></p>
    <div>
      <h2>What are some of our internal activities for this month?</h2>
      <a href="#what-are-some-of-our-internal-activities-for-this-month">
        
      </a>
    </div>
    
    <div>
      <h3>Celebrating International Women’s Day</h3>
      <a href="#celebrating-international-womens-day">
        
      </a>
    </div>
    <p>Internally, we are kicking off our celebration on March 8. We will be joined by several women from the <a href="https://www.northcoastnyc.com/">North Coast hip hop improv comedy group</a>. We hope this fun and freestyle event will encourage participants to think about unconscious biases, how to break them down, and how they can get more involved in empowering the women around them.</p>
    <div>
      <h3>Intersectionality and Allyship at Cloudflare</h3>
      <a href="#intersectionality-and-allyship-at-cloudflare">
        
      </a>
    </div>
    <p>Following our kick-off celebrations, we’re hosting open discussions about intersectionality and allyship alongside some of our fellow Employee Resource Groups including Afroflare, Asianflare, Flarability, and Nativeflare. It’s important to us to include other ERGs in these conversations since the goal of empowerment, representation, and support is shared among us and can’t be achieved alone. And we want to pay closer attention to the layers that form a person’s social identity, which can create compounding experiences of discrimination. “All inequality is not created equal,” <a href="https://www.unwomen.org/en/news/stories/2020/6/explainer-intersectional-feminism-what-it-means-and-why-it-matters">says</a> Kimberlé Crenshaw, the law professor who coined the term “intersectional feminism” in 1989. Understanding the way different inequalities play a role in a person’s life means understanding their history, systemic discrimination, and their non-uniformity.</p>
    <div>
      <h3>Internal Leadership Panel</h3>
      <a href="#internal-leadership-panel">
        
      </a>
    </div>
    <p>Last year, we brought together an internal panel of women leaders at Cloudflare to share their journeys and lessons learned. It was extremely well received, so we decided to build upon its success by inviting another group of internal women leaders to discuss their experiences and insights with us. Some important takeaways from these panel discussions have been the realization that most backgrounds and journeys are vastly different, paths to success are often rocky but rewarding, and perseverance, tenacity, and an open mind often rule the day. What better way to learn from others and encourage more women to lead!</p>
    <div>
      <h2>What can we all do?</h2>
      <a href="#what-can-we-all-do">
        
      </a>
    </div>
    <p>Allyship is integral to systemic change. An ally is someone who recognizes unearned privileges in their lives and takes responsibility to end patterns of injustice. At Cloudflare, we’re working hard to build more diverse and equitable teams, as well as create and maintain an environment that is inclusive and welcoming. There are many actions you can take as an ally; some include:</p><ul><li><p><b>Educating yourself:</b> listen to the experiences of your women colleagues and work with them to understand their perspectives.</p></li><li><p><b>Amplifying women’s opinions and advocating for them:</b> speak up for others and champion them when they need support and encouragement.</p></li><li><p><b>Taking action in the workplace:</b> if you see inequality or discrimination happening, reach out to discuss further and understand what can be done.</p></li><li><p><b>Advocating for diversity:</b> talk with your peers and leaders about the ways you can get involved with improving diversity, equity, and inclusion.</p></li></ul><p>Celebrate International Women’s Day and Women’s Empowerment Month in your own creative ways! And all throughout the year, remember to empower women and to recognize them in such a way that their work is no longer anonymous. Join the #IWD2022 movement — #BreakTheBias this month and beyond!</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1HGqCEx50S6QzJynGjY5HR/27527631d69af50dc2c49230f5c1dfa3/image2-4.png" />
            
            </figure> ]]></content:encoded>
            <category><![CDATA[IWD]]></category>
            <category><![CDATA[Diversity]]></category>
            <category><![CDATA[Womenflare]]></category>
            <category><![CDATA[Life at Cloudflare]]></category>
            <category><![CDATA[Employee Resource Groups]]></category>
            <guid isPermaLink="false">277RJpSHdnfKpgu6sDlfHc</guid>
            <dc:creator>Sofía Celi</dc:creator>
            <dc:creator>Angela Huang</dc:creator>
        </item>
        <item>
            <title><![CDATA[The post-quantum future: challenges and opportunities]]></title>
            <link>https://blog.cloudflare.com/post-quantum-future/</link>
            <pubDate>Fri, 25 Feb 2022 16:03:25 GMT</pubDate>
            <description><![CDATA[ The story and path of post-quantum cryptography is clear. But, what are the future challenges? In this blog post, we explore them ]]></description>
            <content:encoded><![CDATA[ <p></p><blockquote><p><i>“People ask me to predict the future, when all I want to do is prevent it. Better yet, build it. Predicting the future is much too easy, anyway. You look at the people around you, the street you stand on, the visible air you breathe, and predict more of the same. To hell with more. I want better.”— </i><b><i>Ray Bradbury</i></b><i>, from Beyond 1984: The People Machines</i></p></blockquote><p>The <a href="/post-quantum-taxonomy/">story and the path are clear</a>: quantum computers are coming that will have the ability to break the cryptographic mechanisms we rely on to secure modern communications, but there is hope! The cryptographic community has designed new mechanisms to safeguard against this disruption. There are challenges: will the new safeguards be practical? How will the fast-evolving Internet migrate to this new reality? In <a href="/making-protocols-post-quantum/">other</a> <a href="/post-quantum-key-encapsulation/">blog</a> <a href="/post-quantum-key-encapsulation/">posts</a> in this series, we have outlined some potential solutions to these questions: there are new algorithms for maintaining confidentiality and authentication (in a “post-quantum” manner) in the protocols we use. But will they be fast enough to deploy at scale? Will they provide the required properties and work in all protocols? Are they easy to use?</p><p>Adding <a href="https://www.cloudflare.com/learning/ssl/quantum/what-is-post-quantum-cryptography/">post-quantum cryptography</a> into architectures and networks is not only about being novel and looking at interesting research problems or exciting engineering challenges. It is primarily about protecting people’s communications and data, because a quantum adversary will be able to decrypt not only future traffic but also any past traffic they have recorded. 
Quantum adversaries could also be capable of other attacks (by using quantum algorithms, for example) that we may be unaware of now, so protecting against them is, in a way, the challenge of facing the unknown. We can’t fully predict everything that will happen with the advent of quantum computers<sup>1</sup>, but we can prepare and build greater protections than the ones that currently exist. We do not see the future as apocalyptic, but as an opportunity to reflect, discover and build better.</p><p>What are the challenges, then? And related to this question: what have we learned from the past that enables us to <i>build better</i> in a post-quantum world?</p>
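<p>The “record now, decrypt later” threat above can be made concrete with a toy sketch. This is deliberately not real cryptography: a brute-forceable 16-bit key stands in for any algorithm that a future quantum computer could break.</p>

```python
import hashlib

def toy_encrypt(key: int, msg: bytes) -> bytes:
    """XOR the message with a keystream derived from a tiny 16-bit key.
    (Toy only: the point is that the keyspace is searchable later.)"""
    stream = hashlib.sha256(key.to_bytes(2, "big")).digest()
    return bytes(m ^ s for m, s in zip(msg, stream))

# Today: traffic is recorded in transit, still encrypted.
recorded = toy_encrypt(0x1234, b"secret sent in 2022")

# Years later: the adversary can search the whole keyspace, so the
# old ciphertext yields its plaintext with no help from the victim.
for k in range(2**16):
    if toy_encrypt(k, recorded).startswith(b"secret"):
        break
assert toy_encrypt(k, recorded) == b"secret sent in 2022"
```

<p>The ciphertext recorded today needs no further cooperation from the sender: once the algorithm falls, every stored byte falls with it. That is why confidentiality must be upgraded before quantum computers arrive, not after.</p>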
    <div>
      <h3>Beyond a post-quantum TLS</h3>
      <a href="#beyond-a-post-quantum-tls">
        
      </a>
    </div>
    <p>As we have shown in <a href="/post-quantum-taxonomy/">other</a> <a href="/making-protocols-post-quantum/">blog posts</a>, the most important security and privacy properties to protect in the face of a quantum computer are confidentiality and authentication. The <a href="https://en.wikipedia.org/wiki/Threat_model">threat model</a> of confidentiality is clear: quantum computers will not only be able to decrypt on-going traffic, but also any traffic that was recorded and stored prior to their arrival. The threat model for authentication is a little more complex: a quantum computer could be used to impersonate a party (by successfully mounting a <a href="https://en.wikipedia.org/wiki/Man-in-the-middle_attack">monster-in-the-middle attack</a>, for example) in a connection or conversation, and it could also be used to retroactively modify elements of the past message, like the identity of a sender (by, for example, changing the authorship of a past message to a different party). Both threat models are important to consider and pose a problem not only for future traffic but also for any traffic sent now.</p><p>In the case of using a quantum computer to impersonate a party: how can this be done? Suppose an attacker is able to use a quantum computer to compromise a user’s TLS certificate private key (which has been used to sign lots of past connections). The attacker can then forge connections and pretend that they come from the honest user (by signing with the user’s key) to another user, let’s say Bob. Bob will think the connections are coming from the honest user (as they all did in the past), when, in reality, they are now coming from the attacker.</p><p>We have algorithms that protect confidentiality and authentication in the face of quantum threats. We know how to integrate them into TLS, as we have seen in <a href="/making-protocols-post-quantum/">this blog post</a>, so is that it? Will our connections then be safe? 
We argue that we will not yet be done, and these are the future challenges we see:</p><ul><li><p>Changing the key exchange of the TLS handshake is simple; changing the authentication of TLS, in practice, is hard.</p></li><li><p><a href="/monsters-in-the-middleboxes/">Middleboxes and middleware</a> in the network, such as antivirus software and corporate proxies, can be slow to upgrade, <a href="/why-tls-1-3-isnt-in-browsers-yet/">hindering the rollout</a> of new protocols.</p></li><li><p>TLS is not the only foundational protocol of the Internet; there are other protocols to take into account: some of them are very similar to TLS and easy to fix; others, such as DNSSEC or QUIC, are more challenging.</p></li></ul><p><a href="https://datatracker.ietf.org/doc/html/rfc8446">TLS</a> (we will be focusing on its current version, which is 1.3) is a protocol that aims to achieve three primary security properties:</p><ul><li><p><a href="https://www.cloudflare.com/learning/ssl/what-happens-in-a-tls-handshake/"><b>Confidentiality: communication can be read only by the intended recipient,</b></a></p></li><li><p><a href="https://www.cloudflare.com/learning/ssl/what-happens-in-a-tls-handshake/"><b>Integrity: communication cannot be changed in transit, and</b></a></p></li><li><p><a href="https://www.cloudflare.com/learning/ssl/what-happens-in-a-tls-handshake/"><b>Authentication: we are assured communication comes from the peer we are talking to</b></a>.</p></li></ul><p>The first two properties are easy to maintain in a quantum-computer world: confidentiality is maintained by swapping the existing non-quantum-safe algorithm for a post-quantum one; integrity is maintained because breaking the underlying algorithms remains intractable even for a quantum computer. What about the last property, authentication? 
There are three ways to achieve authentication in a TLS handshake, depending on whether server-only or mutual authentication is required:</p><ol><li><p>By using a ‘pre-shared’ key (PSK) generated from a previous run of the TLS connection that can be used to establish a new connection (this is often called "session resumption" or "resuming" with a PSK),</p></li><li><p>By using a <a href="/opaque-oblivious-passwords/">Password-Authenticated Key Exchange (PAKE)</a> for handshake authentication or post-handshake authentication (with the usage of <a href="https://datatracker.ietf.org/doc/draft-ietf-tls-exported-authenticator/">exported authenticators</a>, for example). This can be done by using the <a href="https://github.com/grittygrease/draft-sullivan-tls-opaque/blob/master/draft-sullivan-tls-opaque.md">OPAQUE</a> or the <a href="http://www.watersprings.org/pub/id/draft-barnes-tls-pake-00.html">SPAKE</a> protocols,</p></li><li><p>By using public certificates that advertise parameters that are employed to assure (i.e., create a proof of) the identity of the party you are talking to (this is by far the most common method).</p></li></ol><p>Securing the first authentication mechanism is easily achieved in the post-quantum sphere as a unique key is derived from the initial quantum-protected handshake. The third authentication mechanism (certificate-based) does not pose a theoretical challenge (as public and private parameters are replaced with post-quantum counterparts) but rather faces practical limitations: certificate-based authentication involves multiple actors and it is difficult to properly synchronize this change with them, as we will see next. It is not only one public parameter and one public certificate: certificate-based authentication is achieved through the use of a certificate chain with multiple entities.</p><p>A certificate chain is a chain of trust by which different entities attest to the validity of public parameters to provide verifiability and confidence. 
Typically, for one party to authenticate another (for example, for a client to authenticate a server), a chain of certificates starting from a root’s Certificate Authority (CA) certificate is used, followed by at least one intermediate CA certificate, and finally by the leaf (or end-entity) certificate of the actual party. This is what you usually find in real-world connections. It is worth noting that the order of this chain (for TLS 1.3) does not require each certificate to certify the one immediately preceding it. Servers sometimes send both a current and deprecated intermediate certificate for transitional purposes, or are configured incorrectly.</p><p>What are these certificates? Why do multiple actors need to validate them? A (digital) certificate certifies the ownership of a public key by the named party of the certificate: it attests that the party owns the private counterpart of the public parameter through the use of <a href="https://en.wikipedia.org/wiki/Digital_signature">digital signatures</a>. A CA is the entity that issues these certificates. Browsers, operating systems or mobile devices operate CA “membership” programs where a CA must meet certain criteria to be incorporated into the trusted set. Devices accept their CA root certificates as they come “pre-installed” in a root store. Root certificates, in turn, are used to generate a number of intermediate certificates which will be, in turn, used to generate leaf certificates. This certificate chain process of generation, validation and revocation is not only a procedure that happens at a software level but rather an amalgamation of policies, rules, roles, and hardware<sup>2</sup> and software needs. This is what is often called the <a href="https://en.wikipedia.org/wiki/Public_key_infrastructure">Public Key Infrastructure (PKI)</a>.</p><p>All of the above goes to show that while we can change all of these parameters to post-quantum ones, it is not as simple as just modifying the TLS handshake. 
Certificate-based authentication involves many actors and processes, and it involves not just one algorithm (as in the key exchange phase) but typically at least six signatures: one handshake signature; two in the certificate chain; one <a href="/high-reliability-ocsp-stapling/">OCSP staple</a>; and <a href="/introducing-certificate-transparency-and-nimbus/">two SCTs</a>. The last five signatures together are used to prove that the server you’re talking to is the right server for the website you’re visiting, for example. Of these five, the last three are essentially patches: the OCSP staple is used to deal with revoked certificates and the SCTs are to detect rogue CAs. Starting with a clean slate, could we improve on the <i>status quo</i> with an efficient solution?</p><p>More pointedly, we can ask if indeed we still need to use this system of public attestation. The migration to post-quantum cryptography is also an opportunity to modify this system. The PKI as it exists is difficult to maintain, update, revoke, <a href="https://hal.inria.fr/hal-01625766/file/CSF2017-PKI%281%29.pdf">model, or compose.</a> We have an opportunity, perhaps, to rethink this system.</p><p>Even without considering making fundamental changes to public attestation, updating the existing complex system presents both technical and management/coordination challenges:</p><ul><li><p>On the technical side: are the post-quantum signatures, which have larger sizes and higher computation costs, usable in our handshakes? We explore <a href="/sizing-up-post-quantum-signatures/">this idea in this experiment</a>, but we need more information. One potential solution is to cache intermediate certificates or to use other forms of authentication beyond digital signatures (like <a href="/kemtls-post-quantum-tls-without-signatures/">KEMTLS</a>).</p></li><li><p>On the management/coordination side: how are we going to coordinate the migration of this complex system? 
Will there be some kind of ceremony to update algorithms? How will we deal with the situation where some systems have updated but others have not? How will we revoke past certificates?</p></li></ul><p>This challenge brings to light that the migration to post-quantum cryptography is not only about the technical changes but is dependent on how the Internet works as the interconnected community that it is. Changing systems involves coordination and the collective willingness to do so.</p><p>On the other hand, post-quantum password-based authentication for TLS is still an open discussion. Most PAKE systems nowadays use <a href="https://en.wikipedia.org/wiki/Decisional_Diffie%E2%80%93Hellman_assumption">Diffie-Hellman assumptions</a>, which can be broken by a quantum computer. There are <a href="https://eprint.iacr.org/2020/1532">some</a> <a href="https://eprint.iacr.org/2019/1271.pdf">ideas</a> on how to transition their underlying algorithms to the post-quantum world, but these seem to be so inefficient as to render their deployment infeasible. It seems, though, that password authentication has an interesting property called “<a href="https://eprint.iacr.org/2021/696.pdf">quantum annoyance</a>”. A quantum computer can compromise the algorithm, but only one instance of the problem at a time for each guess of a password: “Essentially, the adversary must guess a password, solve a discrete logarithm based on their guess, and then check to see if they were correct”, as <a href="https://eprint.iacr.org/2021/696.pdf">stated in the paper</a>. Early quantum computers might take quite a long time to solve each guess, which means that a quantum-annoying PAKE combined with a large password space could delay quantum adversaries considerably in their goal of recovering a large number of passwords. Password-based authentication for TLS, therefore, could be safe for a longer time. 
This does not mean, however, that it is not threatened by quantum adversaries.</p><p>The world of security protocols, though, does not end with TLS. There are many other security protocols (such as <a href="https://www.cloudflare.com/learning/dns/dnssec/ecdsa-and-dnssec/">DNSSEC</a>, <a href="https://www.wireguard.com/">WireGuard</a>, SSH, <a href="https://en.wikipedia.org/wiki/QUIC">QUIC</a>, and more) that will need to transition to post-quantum cryptography. For DNSSEC, the challenge is complicated by the protocol seeming unable to <a href="https://github.com/claucece/PQNet-Workshop/blob/main/slides/PQC%20and%20DNSSEC%20(with%20animations).pdf">handle large signatures or high verification costs</a>. According to <a href="https://www.sidnlabs.nl/downloads/7qGFW0DiOkov0vWyDK9qaK/de709198ac34477797b381f146639e27/Retrofitting_Post-Quantum_Cryptography_in_Internet_Protocols.pdf">research</a> from SIDN Labs, it seems like only Falcon-512 and Rainbow-I-CZ can be used in DNSSEC (note, though, that there is a <a href="https://eprint.iacr.org/2022/214">recent attack</a> on Rainbow).</p>
<table>
<colgroup>
<col></col>
<col></col>
<col></col>
<col></col>
</colgroup>
<thead>
  <tr>
    <th>Scheme</th>
    <th>Public key size (bytes)</th>
    <th>Signature size (bytes)</th>
    <th>Speed of operations</th>
  </tr>
</thead>
<tbody>
  <tr>
    <td colspan="4"><span>Finalists</span></td>
  </tr>
  <tr>
    <td>Dilithium2</td>
    <td>1,312</td>
    <td>2,420</td>
    <td>Very fast</td>
  </tr>
  <tr>
    <td>Falcon-512</td>
    <td>897</td>
    <td>690</td>
    <td>Fast, if you have the right hardware</td>
  </tr>
  <tr>
    <td>Rainbow-I-CZ</td>
    <td>103,648</td>
    <td>66</td>
    <td>Fast</td>
  </tr>
  <tr>
    <td colspan="4"><span>Alternate Candidates</span></td>
  </tr>
  <tr>
    <td><span>SPHINCS</span><sup>+</sup>-128f</td>
    <td>32</td>
    <td>17,088</td>
    <td>Slow</td>
  </tr>
  <tr>
    <td><span>SPHINCS</span><sup>+</sup><span>-128s</span></td>
    <td>32</td>
    <td>7,856</td>
    <td>Very slow</td>
  </tr>
  <tr>
    <td>GeMSS-128</td>
    <td>352,188</td>
    <td>33</td>
    <td>Very slow</td>
  </tr>
  <tr>
    <td>Picnic3</td>
    <td>35</td>
    <td>14,612</td>
    <td>Very slow</td>
  </tr>
</tbody>
</table><p>Table 1: Post-quantum signature algorithms (key and signature sizes in bytes). Falcon-512 and Rainbow-I-CZ are the algorithms suitable for DNSSEC.</p><p>What are the alternatives for a post-quantum DNSSEC? Perhaps the isogeny-based signature scheme <a href="https://eprint.iacr.org/2020/1240">SQISign</a> might be a solution if its verification time can be improved: the <a href="https://eprint.iacr.org/2020/1240.pdf">original paper</a> reports 42 ms for verification (averaged over 250 runs on a 3.40GHz Intel Core i7-6700 CPU), still slower than <a href="https://en.wikipedia.org/wiki/P-384">P-384</a>, though it <a href="https://eprint.iacr.org/2022/234.pdf">has recently improved to 25 ms</a>. Another solution might be <a href="https://eprint.iacr.org/2021/1144.pdf">MAYO</a>: on an Intel i5-8400H CPU at 2.5GHz, a signing operation takes around 2.50 million cycles and a verification operation around 1.3 million cycles. There is a lot of research that needs to be done to make isogeny-based cryptography fast enough to fit the protocol’s needs (research in this area is currently ongoing; see, for example, the <a href="https://isogenyschool2020.co.uk/">Isogeny School</a>) and to provide assurance of its security properties. Another alternative could be using other forms of authentication for this protocol, like <a href="https://blog.verisign.com/security/securing-the-dns-in-a-post-quantum-world-hash-based-signatures-and-synthesized-zone-signing-keys/">hash-based signatures</a>.</p><p>DNSSEC is just one example of a protocol where post-quantum cryptography has a long road ahead, as we need hands-on experimentation to go along with technical updates. For the other protocols, there is timely research: there are, for example, proposals for a <a href="https://eprint.iacr.org/2020/379.pdf">post-quantum WireGuard</a> and for a <a href="https://openquantumsafe.org/applications/ssh.html">post-quantum SSH</a>. 
More research, though, needs to be done on the practical implications of these changes over real connections.</p><p>One important thing to note here is that there will likely be an intermediate period in which security protocols provide a “hybrid” set of algorithms for transitional purposes, compliance, and security. “Hybrid” means that both a pre-quantum (or classical) algorithm and a post-quantum one are used to generate the secret used to encrypt or provide authentication. The security reason for using this hybrid mode is to safeguard against the possibility that the new post-quantum algorithms are broken. There are still many unknowns here (single code point, multiple code points, contingency plans) that we need to consider.</p>
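<p>The hybrid idea can be sketched in a few lines: both shared secrets feed a single key-derivation step, so the derived key stays safe as long as at least one of the two inputs remains unbroken. The combiner below is illustrative only (an HMAC-SHA-256 extract step with a made-up label), not the exact construction any protocol standardizes:</p>

```python
import hashlib
import hmac

def hybrid_combine(classical_ss: bytes, pq_ss: bytes) -> bytes:
    """Derive one session secret from a classical shared secret (e.g. from
    X25519) and a post-quantum one (e.g. from a KEM). Illustrative sketch."""
    # Concatenate both secrets and run an HKDF-style extract step: an
    # attacker must recover *both* inputs to reconstruct the output.
    return hmac.new(b"hybrid-demo", classical_ss + pq_ss, hashlib.sha256).digest()

session_key = hybrid_combine(b"\x01" * 32, b"\x02" * 32)
```

<p>If the classical input is later broken by a quantum computer, the post-quantum input still protects the output, and vice versa if the newer algorithm turns out to be flawed.</p>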
    <div>
      <h3>The failures of cryptography in practice</h3>
      <a href="#the-failures-of-cryptography-in-practice">
        
      </a>
    </div>
    <p>The Achilles heel for cryptography is often introducing it into the real world. Designing, implementing, and deploying cryptography is notoriously <a href="https://www.usenix.org/system/files/conference/usenixsecurity16/sec16_paper_garman.pdf">hard to get right</a> due to flaws in security proofs, implementation bugs, and vulnerabilities<sup>3</sup> of software and hardware. We often deploy cryptography that is found to be flawed and the cost of fixing it is immense (it involves resources, coordination, and more). As a community, though, we have some previous lessons to draw from. For TLS 1.3, we pushed for <a href="https://ieeexplore.ieee.org/document/7958593">verifying implementations</a> of the standard, and for using tools to <a href="https://bblanche.gitlabpages.inria.fr/publications/BlanchetETAPS12.pdf">analyze the symbolic and computational models</a>, as seen in <a href="/post-quantum-formal-analysis/">other</a> <a href="/post-quantum-easycrypt-jasmin/">blog posts</a>. Every time we design a new algorithm, we should aim for this same level of confidence, especially for the big migration to post-quantum cryptography.</p><p>In other blog posts, we have discussed our formal verification <a href="/post-quantum-easycrypt-jasmin/">efforts</a>, so we will not repeat these here. Rather, let’s focus on what remains to be done on the formal verification front. 
Verification, analysis, and implementation are not yet complete, and we still need to:</p><ul><li><p>Create easy-to-understand guides to what formal analysis is and how it can be used (as formal languages are unfamiliar to developers).</p></li><li><p>Develop user-tested APIs.</p></li><li><p>Integrate post-quantum algorithms’ APIs flawlessly into protocols’ APIs.</p></li><li><p>Test and analyze the boundaries between verified and unverified code.</p></li><li><p>Verify specifications at the standards level by, for example, integrating <a href="https://hal.inria.fr/hal-03176482/document">hacspec</a> into IETF drafts.</p></li></ul><p>Only in doing so can we prevent some of the security issues we have had in the past. Post-quantum cryptography will be a big migration and we can, if we are not careful, repeat the same issues of the past. We want a future that is better. We want to mitigate bugs and provide high assurance of the security of connections users have.</p>
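<p>To make one of these bug classes concrete, here is the classic constant-time failure (see footnote 3) that formal verification and careful API design aim to rule out, sketched in Python:</p>

```python
import hmac

def naive_equal(a: bytes, b: bytes) -> bool:
    # BAD for secrets: returns at the first mismatching byte, so the
    # running time reveals how many leading bytes of a guess are correct.
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

def ct_equal(a: bytes, b: bytes) -> bool:
    # Constant-time: examines every byte regardless of where mismatches are.
    return hmac.compare_digest(a, b)
```

<p>Both functions return the same answers, which is exactly why the bug is so easy to ship: only the timing behavior differs, and only tools (or verified implementations) that reason about execution, not just input/output behavior, will catch it.</p>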
    <div>
      <h3>A post-quantum tunnel is born</h3>
      <a href="#a-post-quantum-tunnel-is-born">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5txFSxUXLbi6kiuyOjyTH9/47f7fe733085d94d038e1067862eaa67/image3-27.png" />
            
            </figure><p>We’ve described the challenges of post-quantum cryptography from a theoretical and practical perspective. These are problems we are working on and issues that we are analyzing. They will take time to solve. But what can you expect in the short term? What new ideas do we have? Let’s look at where else we can put post-quantum cryptography.</p><p><a href="/tunnel-for-everyone/">Cloudflare Tunnel</a> is a service that creates a secure, outbound-only connection between your services and Cloudflare by deploying a lightweight connector in your environment. This is the server-side endpoint. At the client side, we have <a href="/1111-warp-better-vpn/">WARP</a>, a ‘VPN’ for client devices that can secure and accelerate all HTTPS traffic. So, what if we add post-quantum cryptography to all our <a href="/post-quantumify-cloudflare">internal infrastructure</a>, and also add it to the server and client endpoints? We would then have a post-quantum server-to-client connection in which any request from the WARP client to a private network (one that uses Tunnel) is secure against a quantum adversary (the setup will be similar to what is <a href="https://developers.cloudflare.com/cloudflare-one/tutorials/warp-to-tunnel">detailed here</a>). Why would we want to do this? First, because it is great to have a connection that is fully protected against quantum computers. Second, because we can better measure the impacts of post-quantum cryptography in this environment (and even measure them in a mobile environment). This also means that we can provide guidelines to clients and servers on how to migrate to post-quantum cryptography. It would also be the first available service to do so at this scale. How will all of us experience this transition? Only time will tell, but we are excited to work towards this vision.</p><p>Furthermore, as Tunnel uses the <a href="https://en.wikipedia.org/wiki/QUIC">QUIC protocol</a> <a href="/getting-cloudflare-tunnels-to-connect-to-the-cloudflare-network-with-quic/">in some cases</a> and WARP uses the WireGuard protocol, we can experiment with post-quantum cryptography in protocols that are novel and have not seen much experimentation in the past.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6eSXBxWZXlVqSd1tnMTbqp/5b9bca705afc12678a743deb1a741a03/image2-25.png" />
            
            </figure><p>So, what is the future in the post-quantum cryptography era? The future is better and not just the same. The future is fast and more secure. Deploying cryptography can be challenging and we have had problems with it in the past; but post-quantum cryptography is the opportunity to dream of better security, it is the path to explore, and it is the reality to make it happen.</p><p>Thank you for reading our post-quantum blog post series and expect more post-quantum content and updates from us!</p><hr /><p><i>If you are a student enrolled in a PhD or equivalent research program and looking for an internship for 2022, see </i><a href="https://research.cloudflare.com/outreach/academic-programs/interns/"><b><i>open opportunities</i></b></a><b><i>.</i></b></p><p><i>If you’re interested in contributing to projects helping Cloudflare, </i><a href="https://www.cloudflare.com/en-gb/careers/jobs/?department=Engineering&amp;location=default"><i>our engineering teams are hiring</i></a><i>.</i></p><p><i>You can reach us with questions, comments, and research ideas at </i><a><i>ask-research@cloudflare.com</i></a><i>.</i></p><p>.....</p><p><sup>1 </sup>And when we do predict, we often predict more of the same attacks we are accustomed to: adversaries breaking into connections, security being tampered with. Is this all that they will be capable of?</p><p><sup>2</sup>The private part of a public key advertised in a certificate is often the target of attacks. An attacker who steals a certificate authority's private keys is able to forge certificates, for example. Private keys are almost always stored on a hardware security module (HSM), which prevents key extraction. This is a small example of how hardware is involved in the process.</p><p><sup>3</sup>Like constant-time failures, side-channel, and timing attacks.</p> ]]></content:encoded>
            <category><![CDATA[Research]]></category>
            <category><![CDATA[Cryptography]]></category>
            <category><![CDATA[Post-Quantum]]></category>
            <guid isPermaLink="false">5qmZiK0vIXo8sNj0Aj25ZB</guid>
            <dc:creator>Sofía Celi</dc:creator>
            <dc:creator>Nick Sullivan</dc:creator>
        </item>
        <item>
            <title><![CDATA[Post-quantumify internal services: Logfwrdr, Tunnel, and gokeyless]]></title>
            <link>https://blog.cloudflare.com/post-quantumify-cloudflare/</link>
            <pubDate>Fri, 25 Feb 2022 16:03:12 GMT</pubDate>
            <description><![CDATA[ A big challenge is coming: to change all internal connections at Cloudflare to use post-quantum cryptography. Read how we are tackling this challenge! ]]></description>
            <content:encoded><![CDATA[ <p></p><p>Theoretically, there is no impediment to adding <a href="/post-quantum-taxonomy">post-quantum cryptography</a> to any system. But the reality is harder. In the middle of last year, we posed ourselves a big challenge: <b>to change all</b> <b><i>internal connections</i></b> <b>at Cloudflare to use post-quantum cryptography</b>. We call this, in a cheeky way, “post-quantum-ifying” our services. Theoretically, this should be simple: swap algorithms for post-quantum ones and move along. But with dozens of different services in various programming languages (as we have at Cloudflare), it is not so simple. The challenge is big, but we are here and up for the task! In this blog post, we will look at what our plan was, where we are now, and what we have learned so far. Welcome to the first announcement of a post-quantum future at Cloudflare: our connections are going to be quantum-secure!</p>
    <div>
      <h3>What are we doing?</h3>
      <a href="#what-are-we-doing">
        
      </a>
    </div>
    <p>The life of most requests at Cloudflare <a href="/more-data-more-data/">begins and ends at the edge</a> of our global network. Not all requests are equal; along their path, they are carried by several protocols. Some of those protocols provide security properties, whilst others do not. For context, among the protocols that do, Cloudflare uses: <a href="/rfc-8446-aka-tls-1-3/">TLS</a>, <a href="/tag/quic/">QUIC</a>, <a href="/1111-warp-better-vpn/">WireGuard</a>, <a href="/tag/dnssec/">DNSSEC</a>, <a href="/anycast-ipsec/">IPsec</a>, <a href="/privacy-pass-v3/">Privacy Pass</a>, and more. Migrating all of these protocols and connections to use post-quantum cryptography is a formidable task. It is also a task that we do not treat lightly because:</p><ul><li><p>We have to be assured that the security properties provided by the protocols are not diminished.</p></li><li><p>We have to be assured that performance is not negatively affected.</p></li><li><p>We have to be wary of other requirements of our ever-changing ecosystem (like, for example, keeping in mind our <a href="https://www.cloudflare.com/en-gb/press-releases/2021/cloudflare-hits-milestone-in-fedramp-approval/">FedRAMP certification efforts</a>).</p></li></ul><p>Given these requirements, we had to decide on the following:</p><ul><li><p>How are we going to introduce post-quantum cryptography into the protocols?</p></li><li><p>Which protocols will we be migrating to post-quantum cryptography?</p></li><li><p>Which Cloudflare services will be targeted for this migration?</p></li></ul><p>Let’s now explore what we chose: welcome to our path!</p>
    <div>
      <h3>TLS and post-quantum in the real world</h3>
      <a href="#tls-and-post-quantum-in-the-real-world">
        
      </a>
    </div>
    <p>One of the most used security protocols is Transport Layer Security (TLS). It is the vital protocol that protects most of the data that flows over the Internet today. Many of Cloudflare’s internal services also rely on TLS for security. It seemed natural that, for our migration to post-quantum cryptography, we would start with this protocol.</p><p>The protocol provides three security properties: integrity, authentication, and confidentiality. The algorithms used to provide the first property, integrity, do not seem to be <a href="https://research.kudelskisecurity.com/2017/02/01/defeating-quantum-algorithms-with-hash-functions/">quantum-threatened</a> (there is <a href="https://eprint.iacr.org/2012/606.pdf">some research</a> on the matter). The second property, authentication, is under quantum threat, but we will not focus on it for reasons detailed later. The third property, confidentiality, is the one we are most interested in protecting, as it is urgent to do so now.</p><p>Confidentiality assures that no one other than the intended receiver and sender of a message can read the transmitted message. Confidentiality is especially threatened by quantum computers, as an attacker can record traffic now and decrypt it in the future (when they get access to a quantum computer): this means that all past and current traffic, not just future traffic, is vulnerable to being read by anyone who obtains a quantum computer (and has stored the encrypted traffic captured today).</p><p>At Cloudflare, to protect many of our connections, we use TLS. We mainly use the latest version of the protocol, TLS 1.3, but we sometimes still use TLS 1.2 (as seen in the image below, which only shows connections between websites and our network). As a company that pushes for innovation, we intend to use this migration to post-quantum cryptography as an opportunity to also update TLS handshakes to 1.3 and ensure that we are using TLS in the right way (for example, by not using deprecated features of TLS).</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5toLKjOnOcTdY4jkqTodFb/80452cac0010e2311e76e31c0a138820/image2-24.png" />
            
            </figure><p>Cloudflare’s TLS and QUIC usage <a href="https://radar.cloudflare.com/#anchor-tls-versions-vs">taken on</a> 17 February 2022, showing the last seven days.</p><p>Changing TLS 1.3 to provide quantum-resistant confidentiality means changing the ‘key exchange’ phase of the TLS handshake. Let’s briefly look at how the TLS 1.3 handshake works.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5qFzOkZw6F47b7q1FeoJrf/c781aa2f7621d61664b7f7efafe3927e/image3-26.png" />
            
            </figure><p>The TLS 1.3 handshake</p><p>In <a href="https://datatracker.ietf.org/doc/html/rfc8446">TLS 1.3</a>, there are always two parties: a client and a server. A client is a party that wants the server to “serve” them something, which could be a website, emails, chat messages, voice messages, and more. The handshake is the process by which the server and client attempt to agree on a shared secret, which will be used to encrypt the subsequent exchange of data (this shared secret is called the “master secret”). The client selects their favorite key exchange algorithms and submits one or more “key shares” to the server (they send both the name of the key share and its public key parameter). The server picks one of the key exchange algorithms (assuming that it supports one of them), and replies with their own key share. Both the server and the client then combine the key shares to compute a shared secret (the “master secret”), which is used to protect the remainder of the connection. If the client chooses only algorithms that the server does not support, the server instead replies with the algorithms that it does support and asks the client to try again. During this initial conversation, the client and server also agree on authentication methods and the parameters for encryption, but we can leave that aside in this blog post. This description is also simplified to focus only on the “key exchange” phase.</p><p>There is a mechanism to add post-quantum cryptography to this procedure: you advertise post-quantum algorithms in the list of key shares, so the final derived shared key (the “master secret”) is quantum secure. But there are requirements we had to take into account when doing this with our connections: the security of much post-quantum cryptography is still under debate, and we need to respect our <a href="https://www.cloudflare.com/en-gb/trust-hub/compliance-resources/">compliance efforts</a>. 
The solution to these requirements is to use a <a href="https://datatracker.ietf.org/doc/draft-ietf-tls-hybrid-design/">“hybrid” mechanism</a>.</p><p>A “hybrid” mechanism means using both a “pre-quantum” or “classical” algorithm and a “post-quantum” algorithm, and mixing both generated shared secrets into the derivation of the “master secret”. The combination of both shared secrets is of the form <i>Z′ = Z || T</i> (for TLS, with fixed-size shared secrets, simple concatenation is secure; in other cases, you have to be more careful). This procedure is a concatenation consisting of:</p><ul><li><p>A “classical” shared secret <i>Z</i>, derived following the guidelines of <a href="https://www.nist.gov/standardsgov/compliance-faqs-federal-information-processing-standards-fips#:~:text=are%20FIPS%20developed%3F-,What%20are%20Federal%20Information%20Processing%20Standards%20(FIPS)%3F,by%20the%20Secretary%20of%20Commerce.">Federal Information Processing Standards</a> (FIPS 140-2) <a href="https://en.wikipedia.org/wiki/FIPS_140-2">approved mechanisms</a> (as recommended <a href="https://csrc.nist.gov/publications/detail/sp/800-56a/rev-3/final">over</a> <a href="https://csrc.nist.gov/publications/detail/sp/800-56b/rev-2/final">here</a>), like, for example, the <a href="https://csrc.nist.gov/csrc/media/events/workshop-on-elliptic-curve-cryptography-standards/documents/papers/session6-adalier-mehmet.pdf">P-256 elliptic curve</a>.</p></li><li><p>An auxiliary shared secret <i>T</i>, derived with some other method: in this case, in a quantum-secure way.</p></li></ul><p>Using a “hybrid” approach allows us to safeguard our connections in case the security of the post-quantum algorithm fails. It also results in a suitable-for-FIPS secret, as it is approved in the <a href="https://csrc.nist.gov/publications/detail/sp/800-56c/rev-2/final">“Recommendation for Key-Derivation Methods in Key-Establishment Schemes”</a> (SP 800-56C Rev.
2), which is listed in <a href="https://csrc.nist.gov/csrc/media/publications/fips/140/2/final/documents/fips1402annexd.pdf">Annex D</a> as an approved key-establishment technique for FIPS 140-2.</p><p>At Cloudflare, we are using different TLS libraries. We decided to add post-quantum cryptography to those, specifically, to the <a href="https://csrc.nist.gov/projects/cryptographic-module-validation-program/Certificate/3318"><i>BoringCrypto</i></a> library or the compiled version of <a href="https://github.com/golang/go/tree/dev.boringcrypto.go1.8">Golang with <i>BoringCrypto</i></a>. We added <a href="https://github.com/cloudflare/circl/tree/master/kem/kyber">our implementation</a> of the <a href="https://pq-crystals.org/kyber/resources.shtml">Kyber-512</a> algorithm (this algorithm can eventually be swapped for another one; we are not committing to it here, and are using it only for our testing phase) to those libraries and implemented the “hybrid” mechanism as part of the TLS handshake. For the “classical” algorithm we used <a href="https://en.wikipedia.org/wiki/Elliptic-curve_cryptography">curve P-256</a>. We then compiled certain services with these new TLS libraries.</p><table><tr><td><p><b>Name of algorithm</b></p></td><td><p><b>Number of times loop executed</b></p></td><td><p><b>Average runtime per operation</b></p></td><td><p><b>Number of bytes required per operation</b></p></td><td><p><b>Number of allocations</b></p></td></tr><tr><td><p>Curve P-256</p></td><td><p>23,056</p></td><td><p>52,204 ns/op</p></td><td><p>256 B/op</p></td><td><p>5 allocs/op</p></td></tr><tr><td><p>Kyber-512</p></td><td><p>100,977</p></td><td><p>11,793 ns/op</p></td><td><p>832 B/op</p></td><td><p>3 allocs/op</p></td></tr></table><p>Table 1: Benchmarks of the “key share” operation of Curve P-256 and Kyber-512: scalar multiplication and encapsulation, respectively. Benchmarks were run on Darwin, amd64, Intel(R) Core(TM) i7-9750H CPU @ 2.60 GHz.</p><p>Note that as TLS supports the described “negotiation” mechanism for the key exchange, the client and server have a way of mutually deciding which algorithms they want to use. This means that a client and a server are not required to support, or even prefer, exactly the same algorithms: they just need to share support for a single algorithm for the handshake to succeed. As a result, even if we advertise post-quantum cryptography and a server/client does not support it, the handshake will not fail; the parties will agree on some other algorithm they share.</p><p>A note on a matter we left on hold above: why are we not migrating the authentication phase of TLS to post-quantum? Certificate-based authentication in TLS, which is the one we commonly use at Cloudflare, also depends on systems on the wider Internet. Thus, changes to authentication require a coordinated and much wider effort. Certificates are attested as proofs of identity by outside parties: migrating authentication means coordinating a ceremony of migration with these outside parties. Note though that at Cloudflare we use a <a href="/how-to-build-your-own-public-key-infrastructure/">PKI with internally-hosted Certificate Authorities (CAs)</a>, which means that we can more easily change our algorithms. This will still need careful planning. We will not do this today, but we will in the near future.</p>
    <div>
      <h3>Cloudflare services</h3>
      <a href="#cloudflare-services">
        
      </a>
    </div>
    <p>The first step of our post-quantum migration is done. We have TLS libraries with post-quantum cryptography using a hybrid mechanism. The second step is to test this new mechanism in specific Cloudflare connections and services. We will look at three systems from Cloudflare that we have started migrating to post-quantum cryptography. The services in question are: Logfwrdr, Cloudflare Tunnel, and GoKeyless.</p>
    <div>
      <h3>A post-quantum Logfwdr</h3>
      <a href="#a-post-quantum-logfwdr">
        
      </a>
    </div>
    <p><a href="/more-data-more-data/">Logfwdr</a> is an internal service, written in Golang, that handles structured logs and sends them from our servers for processing to a subservice called ‘Logreceiver’, which writes them to Kafka. The connection between Logfwdr and Logreceiver is protected by TLS. The same goes for the connection between Logreceiver and <a href="https://kafka.apache.org/">Kafka</a> in core. Logfwdr pushes its logs through “streams” for processing.</p><p>This service seemed an ideal candidate for migrating to post-quantum cryptography as its architecture is simple, it has long-lived connections, and it handles a lot of traffic. In order to first test the viability of using post-quantum cryptography, we created our own instance of Logreceiver and deployed it. We also created our own stream (the “pq-stream”), which is basically a copy of an HTTP stream (and was remarkably easy to add). We then compiled these services with the modified TLS library, and we got a post-quantum protected Logfwdr.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2YlRJqczYHpPshoEZzpu9b/160d794b37b407bb09fd345b889101c7/image7-5.png" />
            
            </figure><p>Figure 1: TLS latency of Logfwdr for selected metals. Notice how post-quantum cryptography is faster than the non-post-quantum one (post-quantum streams are labeled “PQ”).</p><p>What we found was that using post-quantum cryptography is faster than using “classical” cryptography! This was expected, though, as we are using a lattice-based post-quantum algorithm (Kyber512). The TLS latency of both post-quantum handshakes and “classical” ones can be seen in Figure 1. The figure shows more handshakes than usual because these servers are frequently restarted.</p><p>Note though that we are not using “only” post-quantum cryptography but rather the “hybrid” mechanism described above. This could have increased handshake times: in this case, the increase was minimal, and the post-quantum handshakes remained faster than the classical ones. Perhaps what makes the TLS handshakes faster in the post-quantum case is the usage of TLS 1.3, as the “classical” Logfwdr is using TLS 1.2. Logfwdr, though, maintains long-lived connections, so in aggregate TLS 1.2 is not “slower”, but it does have a slower start time.</p><p>As shown in Figure 2, the average batch duration of the post-quantum stream is lower than when not using post-quantum cryptography. This may be in part because we are not sending the quantum-protected data all the way to Kafka (as the non-post-quantum stream is doing). We have not yet made the connection to Kafka post-quantum.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7s0OprfksLusP0685ydiYl/69af1192a703c1224f746673d83c44d2/image8-3.png" />
            
            </figure><p>Figure 2: Average batch send duration: post-quantum (orange) and non-post-quantum streams (green).</p><p>We did not encounter any failures during this testing, which ran for several weeks. This gave us good evidence that putting post-quantum cryptography into our internal network with real data is possible. It also gave us confidence to begin migrating codebases to modified TLS libraries, which we will maintain.</p><p>What are the next steps for Logfwdr? Now that we have confirmed it is possible, we will start migrating stream by stream to this hybrid mechanism until we reach full post-quantum migration.</p>
    <div>
      <h3>A post-quantum gokeyless</h3>
      <a href="#a-post-quantum-gokeyless">
        
      </a>
    </div>
    <p><a href="/going-keyless-everywhere/"><i>gokeyless</i></a> is our own way to separate servers from TLS long-term private keys. With it, private keys are kept on a specialized key server operated by customers on their own architecture or, if using <a href="/introducing-cloudflare-geo-key-manager/">Geo Key Manager</a>, in selected Cloudflare locations. We also use it for Cloudflare-held private keys with a service creatively known as <i>gokeyless-internal</i>. The final piece of this architecture is another service called <a href="/scaling-geo-key-manager/"><i>Keynotto</i></a>. Keynotto is a service written in Rust that mints only RSA and <a href="https://www.cloudflare.com/learning/dns/dnssec/ecdsa-and-dnssec/">ECDSA</a> signatures (performed with the stored private key).</p><p>How does the overall architecture of gokeyless work? Let’s start with a request. The request arrives at the Cloudflare network, and we perform TLS termination. Any signing request is forwarded to Keynotto. A small portion of requests (specifically from GeoKDL or external gokeyless) cannot be handled by Keynotto directly, and are instead forwarded to gokeyless-internal. gokeyless-internal also acts as a key server proxy, as it redirects connections to the customer’s keyservers (external gokeyless). <a href="https://github.com/cloudflare/gokeyless">External gokeyless</a> is both the server that a customer runs and the client that will be used to contact it. The architecture can be seen in Figure 3.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5c2XtFKTmHLgy9XNmAfJsh/496e49511e6aea34309abbae92b846f5/image5-10.png" />
            
            </figure><p>Figure 3: The life of a gokeyless request.</p><p>Migrating the transport that this architecture uses to post-quantum cryptography is a bigger challenge, as it involves migrating a service that lives on the customer side. So, for our testing phase, we decided to go for the simpler path that we are able to change ourselves: the TLS handshake between Keynotto and gokeyless-internal. This small test-bed means two things: first, that we needed to change another TLS library (as Keynotto is written in Rust) and, second, that we needed to change gokeyless-internal in such a way that it used post-quantum cryptography only for the handshakes with Keynotto and for nothing else. Note that we did not migrate the signing operations that gokeyless or Keynotto executes with the stored private key; we just migrated the transport connections.</p><p>Adding post-quantum cryptography to the <a href="https://github.com/rustls/rustls">rustls codebase</a> was a straightforward exercise and we exposed an easy-to-use API call to signal the usage of post-quantum cryptography (as seen in Figure 4 and Figure 5). One thing that we noted when reviewing the TLS usage in several Cloudflare services is that giving the option to choose the algorithms for a ciphersuite, key share, and authentication in the TLS handshake confuses users. It seemed more straightforward to define the algorithm at the library level, and have a boolean or API call signal the need for this post-quantum algorithm.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4i2qzprdHeQ5F7Y9CLn1S2/354947695ac81ead9fd7e414a562bc37/image9-3.png" />
            
            </figure><p>Figure 4: Post-quantum API for rustls.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5rk2okKUdjZmpisim3f5bk/bd2b2bec41309221236591c53b071d6d/image4-16.png" />
            
            </figure><p>Figure 5: usage of the post-quantum API for rustls.</p><p>We ran a small test between Keynotto and gokeyless-internal with much success. Our next steps are to integrate this test into the real connection between Keynotto and gokeyless-internal, and to devise a plan for a customer post-quantum protected gokeyless external. This is the first instance in which our migration to post-quantum will not be ending at our edge but rather at the customer’s connection point.</p>
    <div>
      <h3>A post-quantum Cloudflare Tunnel</h3>
      <a href="#a-post-quantum-cloudflare-tunnel">
        
      </a>
    </div>
    <p><a href="https://www.cloudflare.com/en-gb/products/tunnel/">Cloudflare Tunnel</a> is a reverse proxy that allows customers to quickly connect their private services and networks to the Cloudflare network without having to expose their public IPs or ports through their firewall. It is mainly managed at the customer level through the <a href="https://github.com/cloudflare/cloudflared">use of <i>cloudflared</i></a>, a lightweight server-side daemon, in their infrastructure. <i>cloudflared</i> opens several long-lived TCP connections (although <i>cloudflared</i> is <a href="/getting-cloudflare-tunnels-to-connect-to-the-cloudflare-network-with-quic/">increasingly using the QUIC</a> protocol) to servers on <a href="https://www.cloudflare.com/en-gb/learning/serverless/glossary/what-is-edge-computing/">Cloudflare’s global network</a>. When a request for a hostname arrives, it is proxied through these connections to the origin service behind <i>cloudflared</i>.</p><p>The easiest part of the service to make post-quantum secure appears to be the connection between our network (with a service part of Tunnel called <i>origintunneld</i> located there) and <i>cloudflared</i>, which we have started migrating. While exploring this path and looking at the whole life of a Tunnel connection, though, we found something more interesting. When the Tunnel connections eventually reach core, they end up going to a service called <i>Tunnelstore</i>. <i>Tunnelstore</i> runs as a stateless application in a Kubernetes deployment, and to provide TLS termination (alongside load balancing and more) it uses a <a href="https://kubernetes.io/">Kubernetes</a> <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/">ingress</a>.</p><p>The <a href="/cloudflare-ingress-controller/">Kubernetes ingress</a> we use at Cloudflare is made of <a href="https://www.envoyproxy.io/">Envoy</a> and <a href="https://projectcontour.io/">Contour</a>. The latter configures the former based on Kubernetes resources. Envoy uses the <a href="https://www.envoyproxy.io/docs/envoy/latest/faq/build/boringssl">BoringSSL library</a> for TLS. Switching TLS libraries in Envoy seemed difficult: there are <a href="https://github.com/envoyproxy/envoy-openssl">thoughts</a> on how to integrate OpenSSL into it (and <a href="https://github.com/open-quantum-safe/oqs-demos/issues/79">even some thoughts</a> on adding post-quantum cryptography) and <a href="https://github.com/envoyproxy/envoy/pull/7377">ways to switch TLS libraries</a>. Adding post-quantum cryptography to a modified version of BoringSSL, and then specifying <a href="https://github.com/google/boringssl/blob/master/INCORPORATING.md">that dependency</a> in the <a href="https://github.com/envoyproxy/envoy/blob/main/bazel/repository_locations.bzl#L74">Bazel file of Envoy</a>, seems to be the way to go, as our internal test has confirmed (as seen in Figure 6). As for Contour, for many years Cloudflare has been running its own patched version of it: we will have to again patch this version with our Golang library to provide post-quantum cryptography. We will make these libraries (and the TLS ones) available for use.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/itO6WZ2vhxBBR8Jy8iWRp/5c435ec9ccac61b74fe0f9499b171011/image1-27.png" />
            
            </figure><p>Figure 6: Option to allow post-quantum cryptography in Envoy.</p><p>Changing the Kubernetes ingress at Cloudflare not only makes Tunnel completely quantum-safe (beyond the connection between our global network and <i>cloudflared</i>), but it also makes any other services using ingress safe. Our first tests on migrating Envoy and Contour to TLS libraries that contain post-quantum protections have been successful, and now we have to test how it behaves in the whole ingress ecosystem.</p>
    <div>
      <h3>What is next?</h3>
      <a href="#what-is-next">
        
      </a>
    </div>
    <p>The main tests are now done. We now have TLS libraries (in Go, Rust, and C) that give us post-quantum cryptography. We have two systems ready to deploy post-quantum cryptography, and a shared service (Kubernetes ingress) that we can change. At the beginning of the blog post, we said that “the life of most requests at Cloudflare begins and ends at the edge of our global network”: our aim is that post-quantum cryptography does not end there, but rather reaches all the way to where customers connect as well. Let’s explore the future challenges and this customer post-quantum path in this <a href="/post-quantum-future">other blog post</a>!</p> ]]></content:encoded>
            <category><![CDATA[Research]]></category>
            <category><![CDATA[Cryptography]]></category>
            <category><![CDATA[Post-Quantum]]></category>
            <guid isPermaLink="false">6Idg7A13yPnfsxtQpwQXs1</guid>
            <dc:creator>Sofía Celi</dc:creator>
            <dc:creator>Goutam Tamvada</dc:creator>
            <dc:creator>Thom Wiggers</dc:creator>
        </item>
        <item>
            <title><![CDATA[Using EasyCrypt and Jasmin for post-quantum verification]]></title>
            <link>https://blog.cloudflare.com/post-quantum-easycrypt-jasmin/</link>
            <pubDate>Thu, 24 Feb 2022 16:23:46 GMT</pubDate>
            <description><![CDATA[ This blogpost will touch upon how to practically use Jasmin and EasyCrypt to achieve better security guarantees when verifying KEMs ]]></description>
            <content:encoded><![CDATA[ <p></p><p>Cryptographic code is everywhere: it gets run when we connect to the bank, when we send messages to our friends, or when we <a href="https://www.sciencedirect.com/science/article/abs/pii/S0747563215004343">watch cat videos</a>. But, it is not at all easy to take a cryptographic specification written in a natural language and produce running code from it, and it is even harder to validate both the theoretical assumptions and the correctness of the implementation itself. Mathematical <a href="https://en.wikipedia.org/wiki/Mathematical_proof">proofs</a>, as we talked about in <a href="/post-quantum-formal-analysis">our previous blog post</a>, and <a href="https://en.wikipedia.org/wiki/Code_review">code inspection</a> are simply not enough. <a href="https://en.wikipedia.org/wiki/Software_testing">Testing</a> and <a href="/a-gentle-introduction-to-linux-kernel-fuzzing/">fuzzing</a> can catch common or well-known bugs or mistakes, but might miss rare ones that can, nevertheless, be triggered by an attacker. <a href="https://en.wikipedia.org/wiki/Static_program_analysis">Static analysis</a> can detect mistakes in the code, but cannot check whether the code behaves as described by the specification in natural-language (for functional correctness). This gap between implementation and validation can have <a href="https://www.mitls.org/pages/attacks">grave consequences</a> in terms of security in the real world, and we need to bridge this chasm.</p><p>In this blog post, we will be talking about ways to make this gap smaller by making the code we deploy better through analyzing its security properties and its implementation. This blog post continues our work on high assurance cryptography, for example, on using Tamarin to <a href="/post-quantum-formal-analysis">analyze entire protocol specifications</a>. In this one, we want to look more on the side of verifying implementations. 
Our desire for high assurance cryptography isn’t specific to <a href="https://www.cloudflare.com/learning/ssl/quantum/what-is-post-quantum-cryptography/">post-quantum cryptography</a>, but because quantum-safe algorithms and protocols are so new, we want extra reassurance that we’re doing the best we can. The post-quantum era also gives us a great opportunity to try to apply all the lessons we’ve learned while deploying classical cryptography, which will hopefully prevent us from making the same mistakes all over again.</p><p>This blog post will discuss <a href="https://eprint.iacr.org/2019/1393.pdf">formal verification</a>. Formal verification is a technique we can use to prove that a piece of code correctly implements a specification. Formal verification, and <a href="https://en.wikipedia.org/wiki/Formal_methods">formal methods</a> in general, have been around for a long time, appearing as early as the 1950s. Today, they are being applied in a variety of ways: from <a href="https://csrc.nist.gov/CSRC/media/Events/third-pqc-standardization-conference/documents/accepted-papers/meijers-formal-verification-pqc2021.pdf">automating the checking of security proofs</a> to automating checks for functional correctness and the absence of side-channel attacks. Code verified in this way has been deployed in popular products like <a href="https://blog.mozilla.org/security/2020/07/06/performance-improvements-via-formally-verified-cryptography-in-firefox/">Mozilla Firefox</a> and <a href="https://boringssl.googlesource.com/boringssl/+/refs/heads/master/third_party/fiat/">Google Chrome</a>.</p><p><i>Formal verification</i>, as opposed to <i>formal analysis</i>, the topic of <a href="/post-quantum-formal-analysis">other blog posts</a>, deals with verifying code and checking that it correctly implements a specification. 
<i>Formal analysis</i>, on the other hand, deals with establishing that a specification has the desired properties, for example, having a specific security guarantee.</p><p>Let’s explore what it means for an algorithm to have a proof that it achieves a certain security goal and what it means to have an implementation we can prove correctly implements that algorithm.</p>
    <div>
      <h3>Goals of a formal analysis and verification process</h3>
      <a href="#goals-of-a-formal-analysis-and-verification-process">
        
      </a>
    </div>
    <p>Our goal, given a description of an algorithm in a natural language, is to produce two proofs: first, one that shows that the algorithm has the security properties we want and, second, one that shows that we have a correct implementation of it. We can go about this in four steps:</p><ol><li><p>Turn the algorithm and its security goals into a formal specification. This is us defining the problem.</p></li><li><p>Use formal analysis to prove, in our case using computer-aided proof tooling, that the algorithm attains the specified properties.</p></li><li><p>Use formal verification to prove that the implementation correctly implements the algorithm.</p></li><li><p>Use formal verification to prove that our implementation has additional properties, like memory safety, running in constant time, efficiency, etc.</p></li></ol><p>Interestingly, we can do step 2 in parallel with steps 3 and 4, because the two proofs are actually independent. As long as they both build from the same specification established in step 1, the properties we establish in the formal analysis should flow down to the implementation.</p><p>Suppose, more concretely, we’re looking at an implementation and specification of a Key Encapsulation Mechanism (a KEM, such as <a href="https://frodokem.org/files/FrodoKEM-specification-20210604.pdf"><i>FrodoKEM</i></a>). FrodoKEM is designed to achieve <a href="https://en.wikipedia.org/wiki/Ciphertext_indistinguishability">IND-CCA security</a>, so we want to prove that it does, and that we have an efficient, side-channel-resistant and correct implementation of it.</p><p>As you might imagine, achieving even one of these goals is no small feat. Achieving all, especially given the way they conflict (efficiency clashes with side-channel resistance, for example), is a Herculean task. 
<a href="https://eprint.iacr.org/2019/1393.pdf">Decades of research have gone into this space</a>, and it is huge; so let’s carve out a small subsection to examine: we’ll look at two tools, <a href="https://github.com/EasyCrypt/easycrypt">EasyCrypt</a> and <a href="https://acmccs.github.io/papers/p1807-almeidaA.pdf">Jasmin</a>.</p><p>Before we jump into the tools, let’s take a brief aside to discuss why we’re not using Tamarin, which we’ve talked about in our <a href="/post-quantum-formal-analysis">other blog posts</a>. Like EasyCrypt, <a href="https://tamarin-prover.github.io/">Tamarin</a> is also a tool used for formal analysis, but beyond that, the two tools are quite different. Formal analysis broadly splits into two camps: symbolic analysis and computational analysis. Tamarin, as <a href="/post-quantum-formal-analysis">we saw</a>, uses symbolic analysis, which treats all functions effectively as black boxes, whereas EasyCrypt uses computational analysis. Computational analysis is much closer to how we program, and functions are given specific implementations. This gives computational analysis a much higher “resolution”: we can study properties in much greater detail and, perhaps, with greater ease. This detail, of course, comes at a cost. As functions grow into full protocols, with multiple modes, branching paths, and, in the case of the Transport Layer Security (TLS) protocol, sometimes even session resumption, computational models become unwieldy and difficult to work with, even with computer-assisted tooling. We therefore have to pick the correct tool for the job. When we need maximum assurance, sometimes both computational and symbolic proofs are constructed, with each playing to its strengths and compensating for the other’s drawbacks.</p>
    <div>
      <h3>EasyCrypt</h3>
      <a href="#easycrypt">
        
      </a>
    </div>
    <p>EasyCrypt is a <i>proof assistant</i> for cryptographic algorithms and <a href="https://en.wikipedia.org/wiki/Imperative_programming">imperative programs</a>. A proof is basically a formal demonstration that some statement is true. EasyCrypt is called a proof assistant because it “assists” you with creating a proof; it does not create a proof for you, but rather, helps you come to it and gives you the power to have a machine check that each step logically follows from the last. It provides a language to write definitions, programs, and theorems along with an environment to develop machine-checked proofs.</p><p>A proof starts from a set of assumptions, and by taking a series of logical steps demonstrates that some statement is true. Let’s imagine for a moment that we are <a href="https://en.wikipedia.org/wiki/Perseus">the hero Perseus</a> on a quest to kill a mythological being, the terrifying <a href="https://en.wikipedia.org/wiki/Medusa">Medusa</a>. How can we prove to everyone that we’ve succeeded? No one is going to want to enter Medusa's cave to check that she is dead because they’ll be turned to stone. And we cannot just state, “I killed the Medusa.” Who will believe us without proof? After all, is this not a <a href="https://en.wikipedia.org/wiki/Leap_of_faith">leap of faith</a>?</p><p>What we can do is bring the head of the Medusa as proof. Providing the head as a demonstration is our proof because no mortal <a href="https://en.wikipedia.org/wiki/Gorgon">Gorgon</a> can live without a head. Legend has it that Perseus completed the proof by demonstrating that the head was indeed that of the Medusa: Perseus used the head’s powers to turn <a href="https://en.wikipedia.org/wiki/Polydectes">Polydectes</a> to stone (the latter was about to force Perseus’ mother to marry him, so let’s just say it wasn’t totally unprovoked). One can say that this proof was done “by hand” in that it was done without any mechanical help. 
For computer security proofs, the statements we want to prove are sometimes so large and cumbersome that we need a machine to help us.</p><p>How does EasyCrypt achieve this? How does it help you? As we are dealing with cryptography here, let’s start by defining how one can reason about cryptography, the security it provides, and the proofs one uses to corroborate them.</p><p>When we encrypt something, we do this to hide whatever we want to send. In a perfect world, it would be <i>indistinguishable</i> from noise. Unfortunately, only the <a href="https://en.wikipedia.org/wiki/One-time_pad">one-time pad</a> truly offers this property, so most of the time we make do with “close enough”: it should be infeasible to differentiate a true encrypted value from a random one.</p><p>When we want to show that a certain cryptographic protocol or algorithm has this property, we write it down as an “indistinguishability game.” The idea of the game is as follows:</p><blockquote><p>Imagine a gnome is sitting in a box. The gnome takes a message as input to the box and produces a ciphertext. The gnome records each message and the ciphertext they see generated. A troll outside the box chooses two messages (m1 and m2) of the same length and sends them to the box. The gnome records the box operations and flips a coin. If the coin lands heads, then the gnome sends the ciphertext (c1) corresponding to m1. Otherwise, they send c2 corresponding to m2. In order to win, the troll, knowing the messages and the ciphertext, has to guess which message was encrypted.</p></blockquote><p>In this example, we can see two things: first, choices are random, as the ciphertext sent is chosen by flipping a coin; second, the goal of the adversary is to win a game.</p><p>EasyCrypt takes this approach. Security goals are modeled as probabilistic programs (basically, as games) played by an adversary. 
Tools from program verification and programming language theory are used to justify the cryptographic reasoning. EasyCrypt relies on a <a href="https://arxiv.org/abs/cs/0603118">“goal-directed proof” approach</a>, in which two important mechanisms occur: lemmas and tactics. Let’s see how this approach works (following this <a href="https://arxiv.org/abs/cs/0603118">amazing paper</a>):</p><ol><li><p>The prover (in this case, you) enters a small <i>statement</i> to prove. For this, one uses the command <i>lemma</i> (meaning this is a <a href="https://en.wikipedia.org/wiki/Lemma_(mathematics)">minor statement</a> that needs to be proven)<i>.</i></p></li><li><p>EasyCrypt will display the formula as a statement to be proved (i.e., <i>the goal</i>) and will also display all the known hypotheses (unproven lemmas) at any given point.</p></li><li><p>The prover enters a command (a <i>tactic</i>) to either decompose the statement into simpler ones, apply a hypothesis, or make progress in the proof in some other way.</p></li><li><p>EasyCrypt displays a new set of hypotheses and the parts that still need to be proved.</p></li><li><p>Back to step 3.</p></li></ol><p>Let’s say you want to prove something small, like the statement “if <i>p</i> <i>in conjunction</i> with <i>q</i>, then <i>q</i> <i>in conjunction with p.</i>” In <a href="https://www.site.uottawa.ca/~lucia/courses/2101-10/lecturenotes/03PredicateLogic.pdf">predicate logic terms</a>, this will be written as <code><i>(p ∧ q) → (q ∧ p)</i></code>. If we translate this into English statements, as Alice might say in <a href="https://en.wikipedia.org/wiki/Alice%27s_Adventures_in_Wonderland">Alice in Wonderland</a>, it could be:</p><p> <i>p</i>: I have a world of my own. <i>q:</i> Everything is nonsense. <i>p∧q:</i> I have a world of my own and everything is nonsense. 
<i>(p ∧ q) → (q ∧ p):</i> If I have a world of my own and everything is nonsense, then, everything is nonsense, and I have a world of my own.</p><p>We will walk through such a <a href="https://github.com/alleystoughton/EasyTeach/blob/master/SimpLogic.ec">statement and its proof</a> in EasyCrypt. For more of these examples, see these <a href="https://github.com/alleystoughton/EasyTeach">given by</a> the marvelous Alley Stoughton.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4WeNK8JivTymXgaqSVJwf2/a8e168b85e1c03381099770f71638919/image9-2.png" />
            
            </figure><p>Our lemma and proof in EasyCrypt.</p>
            <pre><code>lemma implies_and :
(* This line introduces the stated lemma and creatively names it
   "implies_and". It takes no parameters. *)

    forall (p q : bool), p /\ q =&gt; q /\ p.
(* This is the statement we want to prove. We use the variables p and q of
   type bool (booleans), and we state that if p and q, then q and p. *)</code></pre>
            <p>Up until now we have just declared our statement to prove to EasyCrypt. Let’s see how we write the proof:</p>
            <pre><code>proof.
(* This line marks the start of the proof for EasyCrypt. *)

move =&gt; p q H.
(* We move the universally quantified variables and the hypothesis into the
   proof "context": p and q are booleans, and H is the hypothesis p /\ q. *)

elim H.
(* We eliminate H (the conjunctive hypothesis), leaving the goal
   "p =&gt; q =&gt; q /\ p". *)

trivial.
(* The remaining goal is now trivial. *)

qed.</code></pre>
            <p><i>Quod erat demonstrandum</i> (QED) denotes the end of the proof (if both are true, then the conjunction holds). Whew! For such a simple statement, this took quite a bit of work, because EasyCrypt leaves no stone unturned. If you get to this point, you can be sure your proof is absolutely correct, unless there is a bug in EasyCrypt itself (or unless we are proving something that we weren't supposed to).</p><p>As you see, EasyCrypt helped us by guiding us in decomposing the statement into simpler terms, and stating what still needed to be proven. And by strictly following logical principles, we managed to realize a proof. If we are doing something wrong, and our proof is incorrect, EasyCrypt will let us know, saying something like:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1kHg8ZYbefNDYuO5ys4IeA/f6af0a3ef1a3b566134fc49dc4f666ca/image2-23.png" />
            
            </figure><p>Screenshot of EasyCrypt showing us that we did something wrong.</p><p>What we have achieved is a computer-checked proof of the statement, giving us far greater confidence in the proof than if we had to scan over one written with pen and paper. But what makes EasyCrypt particularly attractive in addition to this is its tight integration with the <a href="https://github.com/jasmin-lang">Jasmin programming language</a> as we will see later.</p><p>EasyCrypt will also interactively guide us to the proof, as it easily works with <a href="https://proofgeneral.github.io/">ProofGeneral</a> in Emacs. In the image below we see, for example, that EasyCrypt is guiding us by showing the variables we have declared (p, q, and H) and what is missing to be proven (after the dashed line).</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3ZxKXX1WhxdT4ooPSWct8b/1e8bc66f731517bfec8e8c2d74392f30/image8-2.png" />
            
            </figure><p>EasyCrypt interactively showing us where we are in the proof: the cyan section shows how far we have progressed.</p><p>If one is more comfortable with the <a href="https://cs-people.bu.edu/gaboardi/teaching/S21-CS591/labs/week3/hoare.pdf">Coq</a> proof assistant (you can find <a href="https://www.youtube.com/watch?v=z861PoZPGqk&amp;list=PLDD40A96C2ED54E99&amp;index=5&amp;ab_channel=AndrejBauer">very good tutorials</a> on it), a similar proof can be given:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4UKDlNoMQOtzkKB9qTxF3W/dc414f9f886dbd86df346a031e59180e/image3-25.png" />
            
            </figure><p>Our lemma and proof in Coq.</p><p>EasyCrypt allows us to prove statements in a faster and more assured manner than if we did the proofs by hand. Proving the truth of the statement we just showed would be easy using truth tables, for example. But it is only easy to find such truth tables or proofs when the statement is small. If one is given a complex cryptographic algorithm or protocol, the situation is much harder.</p>
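<p>As a quick sanity check outside any proof assistant, we can brute-force such a truth table ourselves. The following Python sketch (ours, purely illustrative, and obviously not how EasyCrypt works) enumerates every assignment of <i>p</i> and <i>q</i> and checks that the implication holds in each row:</p>

```python
from itertools import product

def implies(a, b):
    # Material implication: a -> b is false only when a is true and b is false.
    return (not a) or b

# Enumerate all four rows of the truth table for (p /\ q) -> (q /\ p).
rows = [(p, q, implies(p and q, q and p))
        for p, q in product([False, True], repeat=2)]

# The statement holds in every row, so it is a tautology.
assert all(result for _, _, result in rows)
```

<p>This exhaustive approach stops scaling almost immediately, which is exactly why machine-checked, goal-directed proofs are valuable for larger statements.</p>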
    <div>
      <h3>Jasmin</h3>
      <a href="#jasmin">
        
      </a>
    </div>
    <p><a href="https://acmccs.github.io/papers/p1807-almeidaA.pdf">Jasmin</a> is an assembly-like programming language that offers some high-level syntactic conveniences, such as loops, procedure and function calls, and functional arrays, while using assembly-level instructions. The Jasmin compiler predictably transforms source code into assembly code for a chosen platform (currently only <a href="https://www.intel.com/content/dam/develop/external/us/en/documents/introduction-to-x64-assembly-181178.pdf">x64</a> is supported). This transformation is verified: the correctness of some compiler passes (like function inlining or loop unrolling) is proven and verified in the Coq proof assistant. Other passes are programmed in a conventional programming language, and the results are validated in Coq. The compiler also comes with a built-in checker for memory safety and constant-time safety.</p><p>This assembly-like syntax, combined with the stated assurances of the compiler, means that we have deep control over the output, and we can optimize it however we like without compromising safety. Because low-level cryptographic code tends to be concise and non-branching, Jasmin doesn’t need full support for general-purpose language features or to provide lots of libraries. It only needs to support a set of basic features to give us everything we need.</p><p>One reason Jasmin is so powerful is that it provides a way to formally verify low-level code. The other reason is that Jasmin code can be automatically converted by the compiler into equivalent EasyCrypt code, which lets us reason about its security. In general terms, whatever guarantees apply to the EasyCrypt code also flow into the Jasmin code, and subsequently into the assembly code.</p><p>Let’s use the example of a very simple Jasmin function that performs multiplication to see Jasmin in action:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6qKbsZ1rUwceP0AXOM23ms/c3032102605dd62282bbe1f4548f72bc/image1-23.png" />
            
            </figure><p>A multiplication function written in Jasmin.</p><p>What the function (“fn”) “mul” does, in this case, is multiply by whatever number is provided as an argument to the function (the variable <code><i>a</i></code>). The syntax of this small function should feel very familiar to anyone who has worked with the <a href="https://en.wikipedia.org/wiki/List_of_C-family_programming_languages">C family of programming languages</a>. The only big difference is the use of the words <code><i>reg</i></code> and <code><i>u64</i></code>. What they state is that the variable <code><i>a</i></code>, for example, is <a href="https://en.wikipedia.org/wiki/Register_allocation">allocated in registers</a> (hence the use of <i>reg</i>: this defines the storage of the variable) and that it is a <a href="https://en.wikipedia.org/wiki/Word_(computer_architecture)">64-bit machine word</a> (hence the use of <i>u64</i>). We can now convert this to “pure” x64 assembly:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2TFayfomwD2qaL1cR1Dx3b/22f7cb3762b6db4970e8cfc7080f49a0/image7-4.png" />
            
            </figure><p>A multiplication function written in Jasmin and transformed to assembly.</p><p>The first lines of the assembly code just set everything up. They are followed by the <a href="https://cs.brown.edu/courses/cs033/docs/guides/x64_cheatsheet.pdf">“imulq” instruction</a>, which multiplies the variable by the constant (which in this case is labeled “param”). While this small function might not show the full power of safely translating to assembly, that power becomes apparent with more complex functions: functions that use while loops, arrays, and calls to other functions are also accepted by Jasmin and safely translated to assembly.</p><p>Assembly language has a <a href="http://flint.cs.yale.edu/cs421/papers/art-of-asm/pdf/FORWARD.PDF">little bit of a bad reputation</a> because it is thought to be hard to learn, hard to read, and hard to maintain. Having a tool that helps you with the translation is very useful, and it also lets you manually or automatically check what the assembly code looks like.</p><p>We can further check the code for its safety:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/70mzl8XIVNieFBNw1FShm7/8e3919506941636d7956eb62b78842e6/image6-9.png" />
            
            </figure><p>A multiplication function written and checked in Jasmin.</p><p>In this check, there are many things to understand. First, it checks that the inputs are allocated in a memory region of at least 0 bytes. Second, the “Rel” entry checks the allocated memory region safety pre-condition: for example, <i>n</i> must point to an allocated memory region of sufficient length.</p><p>You can then <a href="https://github.com/jasmin-lang/jasmin/wiki/Extraction-to-EasyCrypt">extract this functionality to EasyCrypt</a> (and even configure EasyCrypt to verify Jasmin programs). Here is the corresponding EasyCrypt code, automatically produced by the Jasmin compiler:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3wYndYnFsKrBcIPhDIDizI/84d6571ef41d3f4ea060199564f2b0c3/image4-15.png" />
            
            </figure><p>A multiplication function written in Jasmin and extracted to EasyCrypt.</p><p>Here’s a slightly more involved example, that of a FrodoKEM utility function written in Jasmin.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2kXt7r2F4RguoMalUYiUWl/3698b3ef4e117d181d6fcfe2f09aadf6/image5-9.png" />
            
            </figure><p>A utility function for addition for FrodoKEM.</p><p>With a C-like syntax, this function adds two arrays (<code><i>a</i></code> and <code><i>b</i></code>), and returns the result (in <i>out</i>). The value <i>NBAR</i> is just a parameter you can specify in a C-like manner. You can then take this function and <a href="https://github.com/jasmin-lang/jasmin/wiki/Compilation-to-assembly">compile it to assembly</a>. You can also use the Jasmin compiler to analyze <a href="https://github.com/jasmin-lang/jasmin/wiki/Safety-checker">the safety of the code</a> (for example, that array accesses are in bounds, that memory accesses are valid, and that arithmetic operations are applied to valid arguments) and <a href="https://github.com/jasmin-lang/jasmin/wiki/Constant-time-verification">verify that the code runs in constant time</a>.</p><p>The addition function as used by FrodoKEM can also be extracted to EasyCrypt:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/9IZw1XeDt61yAapX1P1za/b92bb7634516c873a3328d611a43e52c/image10-2.png" />
            
            </figure><p>The addition function as extracted to EasyCrypt.</p><p>A theorem expressing the correctness (meaning that addition is correct) is expressed in EasyCrypt as so:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5SDmCcTq5mHAJV73C8dzdd/02dee858a0fe99c404b5d2676ed41738/image12-2.png" />
            
            </figure><p>The theorem of addition function as extracted to EasyCrypt.</p><p>Note that EasyCrypt uses <a href="https://cs-people.bu.edu/gaboardi/teaching/S21-CS591/labs/week3/hoare.pdf">While Language and Hoare Logic</a>. The corresponding proof that states that addition is correct:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3JiYmQNHCVA7MHZv4xOPB4/385433d548dc3f1be1d216c4c54e4abb/image11-2.png" />
            
            </figure><p>The proof of the addition function as extracted to EasyCrypt.</p>
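<p>To make concrete what the correctness theorem is claiming, here is a small reference model, written by us in Python, of what the addition utility computes: element-wise addition of two matrices of machine words, reduced modulo the scheme’s modulus. We assume <code>NBAR = 8</code> and a modulus of 2^16 in this sketch; the FrodoKEM parameter sets use moduli of 2^15 or 2^16, and this model is not extracted from the verified code:</p>

```python
# Reference model of the FrodoKEM addition utility: element-wise addition
# of two NBAR x NBAR matrices of machine words, reduced modulo Q.
# Assumptions: NBAR = 8 and Q = 2^16 (FrodoKEM variants use 2^15 or 2^16).
NBAR = 8          # the matrix dimension n-bar from the specification
Q = 1 << 16       # modulus assumed here for illustration

def frodo_add(a, b):
    """Sum of two NBAR x NBAR matrices, each a flat list of NBAR*NBAR words."""
    assert len(a) == len(b) == NBAR * NBAR
    return [(x + y) % Q for x, y in zip(a, b)]

# Addition wraps around the modulus, matching unsigned 16-bit arithmetic:
# (1 + (Q - 1)) mod Q == 0 in every entry.
assert frodo_add([1] * (NBAR * NBAR), [Q - 1] * (NBAR * NBAR)) == [0] * (NBAR * NBAR)
```

<p>The formal correctness theorem says, in essence, that the extracted Jasmin code and a mathematical statement of this element-wise sum always agree.</p>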
    <div>
      <h3>Why formal verification for post-quantum cryptography?</h3>
      <a href="#why-formal-verification-for-post-quantum-cryptography">
        
      </a>
    </div>
    <p>As we have previously stated, cryptographic implementations are very hard to get right, and even if they are right, the security properties they claim to provide are sometimes wrong for their intended application. This matters so much because post-quantum cryptography is the cryptography we will be using in the future, due to the arrival of quantum computers. Deploying post-quantum cryptographic algorithms with bugs or flaws in their security properties would be a disaster, because connections and the data that travels over them could be decrypted or attacked. We are trying to prevent that.</p><p>Cryptography is difficult to get right, not only for people new to it, but for anyone, even the experts. The designs and code we write are error-prone because we, as humans, are prone to errors. Here are some examples of designs that got it wrong (luckily, these examples were not deployed, so they did not have the usual disastrous consequences):</p><ul><li><p><a href="https://falcon-sign.info/">Falcon</a> (a post-quantum algorithm currently part of the <a href="https://csrc.nist.gov/projects/post-quantum-cryptography">NIST procedure</a>) produced valid signatures “but leaked information on the private key,” according to an <a href="https://csrc.nist.gov/CSRC/media/Projects/post-quantum-cryptography/documents/round-2/official-comments/FALCON-round2-official-comment.pdf">official comment</a> posted to the NIST post-quantum process on the algorithm. The comment also noted that “the fact that these bugs existed in the first place shows that the traditional development methodology (i.e. “being super careful”) has failed.”</p></li><li><p>“The De Feo–Jao–Plût identification scheme (the basis for <a href="https://en.wikipedia.org/wiki/Supersingular_isogeny_key_exchange">SIDH signatures</a>) contains an invalid assumption and provide[s] a counterexample for this assumption: thus showing the proof of soundness is invalid,” according to a <a href="https://eprint.iacr.org/2021/1023.pdf">finding</a> that one proof of a post-quantum algorithm was not valid. This is an example of an incorrect proof, whose flaws were discovered and eliminated prior to any deployment.</p></li></ul><p>Perhaps these two examples will convince the reader that formal analysis and formal verification of implementations are needed. While they help us avoid some human errors, they are not perfect. As for us, we are convinced of these methods. We are working towards a <a href="https://github.com/xvzcf/VeriFrodo/">formally verified implementation of FrodoKEM</a> (we have a first implementation of it in our <a href="https://github.com/cloudflare/circl/pull/311">cryptographic library, CIRCL</a>), and we are collaborating to create a <a href="https://github.com/jasmin-lang/cryptolib">formally verified and implemented library</a> we can run in real-world connections. If you are interested in learning more about EasyCrypt and Jasmin, visit the <a href="https://github.com/claucece/formal-tutorials">resources we have put together</a>, try to <a href="https://github.com/xvzcf/VeriFrodo/blob/main/install.md">install them following our guidelines</a>, or follow <a href="https://cryptojedi.org/programming/jasmin.shtml">some tutorials</a>.</p><p>See you on other adventures in post-quantum (and some <a href="https://www.youtube.com/c/utahactor">cat videos for you</a>)!</p>
    <div>
      <h3>References:</h3>
      <a href="#references">
        
      </a>
    </div>
    <ul><li><p>“SoK: Computer-Aided Cryptography” by Manuel Barbosa, Gilles Barthe, Karthik Bhargavan, Bruno Blanchet, Cas Cremers, Kevin Liao and Bryan Parno: <a href="https://eprint.iacr.org/2019/1393.pdf">https://eprint.iacr.org/2019/1393.pdf</a></p></li><li><p>“EasyPQC: Verifying Post-Quantum Cryptography” by Manuel Barbosa, Gilles Barthe, Xiong Fan, Benjamin Grégoire, Shih-Han Hung, Jonathan Katz, Pierre-Yves Strub, Xiaodi Wu and Li Zhou: <a href="https://eprint.iacr.org/2021/1253">https://eprint.iacr.org/2021/1253</a></p></li><li><p>“Jasmin: High-Assurance and High-Speed Cryptography” by José Bacelar Almeida, Manuel Barbosa, Gilles Barthe, Arthur Blot, Benjamin Grégoire, Vincent Laporte, Tiago Oliveira, Hugo Pacheco, Benedikt Schmidt and Pierre-Yves Strub: <a href="https://dl.acm.org/doi/pdf/10.1145/3133956.3134078">https://dl.acm.org/doi/pdf/10.1145/3133956.3134078</a></p></li><li><p>“The Last Mile: High-Assurance and High-Speed Cryptographic Implementations” by José Bacelar Almeida, Manuel Barbosa, Gilles Barthe, Benjamin Grégoire, Adrien Koutsos, Vincent Laporte,Tiago Oliveira and Pierre-Yves Strub: <a href="https://arxiv.org/pdf/1904.04606.pdf">https://arxiv.org/pdf/1904.04606.pdf</a></p></li></ul><p></p> ]]></content:encoded>
            <category><![CDATA[Research]]></category>
            <category><![CDATA[Cryptography]]></category>
            <category><![CDATA[Post-Quantum]]></category>
            <guid isPermaLink="false">6LJZAgw1hZFYKF1ogqsnAr</guid>
            <dc:creator>Sofía Celi</dc:creator>
            <dc:creator>Goutam Tamvada</dc:creator>
        </item>
        <item>
            <title><![CDATA[Deep dive into a post-quantum key encapsulation algorithm]]></title>
            <link>https://blog.cloudflare.com/post-quantum-key-encapsulation/</link>
            <pubDate>Tue, 22 Feb 2022 13:59:26 GMT</pubDate>
            <description><![CDATA[ In this blog post, we will look at what Key Encapsulation Mechanisms are and why they matter in a post-quantum world ]]></description>
            <content:encoded><![CDATA[ <p></p><p>The Internet is accustomed to the fact that any two parties can exchange information securely without ever having to meet in advance. This magic is made possible by key exchange algorithms, which are core to certain protocols, such as the Transport Layer Security (TLS) protocol, that are used widely across the Internet.</p><p>Key exchange algorithms are an elegant solution to a vexing, seemingly impossible problem. Imagine a scenario where keys are transmitted in person: if <a href="https://en.wikipedia.org/wiki/Persephone">Persephone</a> wishes to send her mother <a href="https://en.wikipedia.org/wiki/Demeter">Demeter</a> a secret message, she can first generate a key, write it on a piece of paper and hand that paper to her mother, Demeter. Later, she can scramble the message with the key, and send the scrambled result to her mother, knowing that her mother will be able to unscramble the message since she is also in possession of the same key.</p><p>But what if Persephone is kidnapped (as <a href="https://www.perseus.tufts.edu/hopper/text?doc=HH+2+4">the story goes</a>) and cannot deliver this key in person? What if she can no longer write it on a piece of paper because someone (by chance <a href="https://en.wikipedia.org/wiki/Hades">Hades</a>, the kidnapper) might read that paper and use the key to decrypt any messages between them? Key exchange algorithms come to the rescue: Persephone can run a key exchange algorithm with Demeter, giving both Persephone and Demeter a <i>secret value</i> that is known only to them (no one else knows it) <i>even if</i> Hades is eavesdropping. 
This secret value can be used to encrypt messages that Hades cannot read.</p><p>The most widely used key exchange algorithms today are based on hard mathematical problems, such as <a href="https://en.wikipedia.org/wiki/Integer_factorization">integer factorization</a> and the <a href="https://crypto.stanford.edu/pbc/notes/crypto/factoring.html">discrete logarithm problem</a>. But these problems can be efficiently solved by a quantum computer, as we have <a href="/quantum-solace-and-spectre">previously learned</a>, breaking the secrecy of the communication.</p><p>There are other mathematical problems that are hard even for quantum computers to solve, such as those based on lattices or isogenies. These problems can be used to build key exchange algorithms that are secure even in the face of quantum computers. Before we dive into this matter, we have to first look at one algorithm that can be used for Key Exchange: Key Encapsulation Mechanisms (KEMs).</p><p>Two people could agree on a <i>secret value</i> if one of them could send the secret in an encrypted form to the other one, such that only the other one could decrypt and use it. This is what a KEM makes possible, through a collection of three algorithms:</p><ul><li><p>A key generation algorithm, <i>Generate</i>, which generates a public key and a private key (a keypair).</p></li><li><p>An encapsulation algorithm, <i>Encapsulate,</i> which takes as input a public key, and outputs a shared secret value and an “encapsulation” (a ciphertext) of this secret value.</p></li><li><p>A decapsulation algorithm, <i>Decapsulate</i>, which takes as input the encapsulation and the private key, and outputs the shared secret value.</p></li></ul><p>A KEM can be seen as similar to a Public Key Encryption (PKE) scheme, since both use a combination of public and private keys. In a PKE, one encrypts a message using the public key and decrypts using the private key. 
In a KEM, one uses the public key to create an “encapsulation” (giving a randomly chosen shared key) and one decrypts this “encapsulation” with the private key. The reason why KEMs exist is that PKE schemes are usually less efficient than <a href="https://en.wikipedia.org/wiki/Symmetric-key_algorithm">symmetric encryption schemes</a>; one can use a KEM to only transmit the shared/symmetric key, and later use it in a symmetric algorithm to efficiently encrypt data.</p><p>Nowadays, in most of our connections, we do not use KEMs or PKEs per se. We either use Key Exchanges (KEXs) or Authenticated Key Exchanges (AKEs). The reason for this is that a KEX allows us to use public keys (solving the <i>key exchange problem</i> of how to securely transmit keys) in order to generate a shared/symmetric key which, in turn, will be used in a symmetric encryption algorithm to encrypt data efficiently. A famous KEX algorithm is <a href="https://en.wikipedia.org/wiki/Diffie%E2%80%93Hellman_key_exchange">Diffie-Hellman</a>, but classical Diffie-Hellman-based mechanisms do not provide security against a quantum adversary; post-quantum KEMs do.</p><p>When using a KEM, Persephone would run <i>Generate</i> and publish the public key. Demeter takes this public key, runs <i>Encapsulate</i>, keeps the generated secret to herself, and sends the encapsulation (the ciphertext) to Persephone. Persephone then runs <i>Decapsulate</i> on this encapsulation and, with it, arrives at the same shared secret that Demeter holds. Hades will not be able to guess even a bit of this secret value, even if he sees the ciphertext.</p>
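To make the three-algorithm interface concrete, here is a minimal sketch in Python. It is built from classical Diffie-Hellman with toy parameters, so it is neither post-quantum nor secure; it only illustrates the shape of <i>Generate</i>, <i>Encapsulate</i> and <i>Decapsulate</i> (all names and parameters here are our own illustrative choices, not from any standard):

```python
# Toy KEM illustrating the Generate/Encapsulate/Decapsulate interface.
# Built from classical Diffie-Hellman with tiny, INSECURE parameters purely
# to show the API shape; it is NOT post-quantum and NOT FrodoKEM.
import hashlib
import secrets

P = (1 << 127) - 1   # a Mersenne prime; fine for a toy, insecure in practice
G = 3

def generate():
    """Return a (public key, private key) pair."""
    sk = secrets.randbelow(P - 2) + 1
    return pow(G, sk, P), sk

def encapsulate(pk):
    """Return (encapsulation, shared secret) under the given public key."""
    y = secrets.randbelow(P - 2) + 1
    enc = pow(G, y, P)
    ss = hashlib.sha256(pow(pk, y, P).to_bytes(16, "big")).digest()
    return enc, ss

def decapsulate(enc, sk):
    """Recover the shared secret from the encapsulation and private key."""
    return hashlib.sha256(pow(enc, sk, P).to_bytes(16, "big")).digest()

pk, sk = generate()
enc, ss = encapsulate(pk)
assert decapsulate(enc, sk) == ss   # both sides now hold the same secret
```

In the story above, Persephone would run `generate()` and publish `pk`; Demeter would run `encapsulate(pk)`, keep `ss` to herself, and send `enc`; Persephone then recovers the same `ss` with `decapsulate(enc, sk)`.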
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7bbaBxcIILEzrNhIFZNb6p/11e6723fc4c28fdfa43009db7892e9a3/image3-21.png" />
            
            </figure><p>In this post, we go over the construction of one particular post-quantum KEM, called <i>FrodoKEM</i>. Its design is simple, which makes it a good choice to illustrate how a KEM can be constructed. We will look at it from two perspectives:</p><ul><li><p>The underlying mathematics: a cryptographic algorithm is built as a Matryoshka doll. The first doll is, most of the time, the mathematical base, whose hardness should be strong so that security is maintained. In the post-quantum world, this is usually the hardness of some lattice problems (more on this in the next section).</p></li><li><p>The algorithmic construction: these are all the subsequent dolls that take the mathematical base and construct an algorithm out of it. In the case of a KEM, first you construct a Public Key Encryption (PKE) scheme and then transform it (putting another doll on top) to make a KEM, so that better security properties are attained, as we will see.</p></li></ul>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6EAd5Cm1yP22B42GNMq19P/39d0e79590bb89c001e1d792df8130ac/image7-3.png" />
            
            </figure><p>The core of <i>FrodoKEM</i> is a public-key encryption scheme called <i>FrodoPKE</i>, whose security is based on the hardness of the “Learning with Errors” (LWE) problem over lattices. Let us look now at the first doll of a KEM.</p><p><b>Note to the reader:</b> Some mathematics is coming in the next sections, but do not worry, we will guide you through it.</p>
    <div>
      <h3>The Learning With Errors Problem</h3>
      <a href="#the-learning-with-errors-problem">
        
      </a>
    </div>
    <p>The security (and mathematical foundation) of <i>FrodoKEM</i> relies on the hardness of the Learning With Errors (LWE) problem, a generalization of the classic Learning Parity with Noise <a href="https://cims.nyu.edu/~regev/papers/qcrypto.pdf">problem, first defined by Regev</a>.</p><p>In cryptography, specifically in the mathematics underlying it, we often use sets to define our operations. A set is a collection of elements; in this case, we will refer to collections of numbers. In cryptography textbooks and articles, one can often read:</p><p>Let $Z_q$ denote the set of integers $\{0, …, q-1\}$ where $q &gt; 2$,</p><p>which means that we have a collection of integers from 0 to <i>q - 1</i> (it is assumed that <i>q</i>, in a cryptographic application, is a prime; in the main theorem, it is an arbitrary integer).</p><p>Let $Z_q^n$ denote the set of vectors $(v_1, v_2, …, v_n)$ of <i>n</i> elements, each of which belongs to $Z_q$.</p><p>The LWE problem asks to recover a secret vector $s = (s_1, s_2, …, s_n)$ in $Z_q^n$ given a sequence of random, “approximate” linear equations on <i>s</i>. For instance, if $q = 23$, the equations might be:</p><p>s<sub>1</sub> + s<sub>2</sub> + s<sub>3</sub> + s<sub>4</sub> ≈ 30 (mod 23)</p><p>2s<sub>1</sub> + s<sub>3</sub> + s<sub>5</sub> + … + s<sub>n</sub> ≈ 40 (mod 23)</p><p>10s<sub>2</sub> + 13s<sub>3</sub> + s<sub>4</sub> ≈ 50 (mod 23)</p><p>…</p><p>We see that the left-hand sides of the equations above are not exactly equal to the right-hand sides (the “≈” sign, approximately equal to, is used rather than the equality sign); each is off by a slight, deliberately introduced “error”, which we will denote by the variable <i>e</i>. If the error were a known, public value, recovering <i>s</i> (the hidden variable) would be easy: after about <i>n</i> equations, we could recover <i>s</i> in a reasonable time using <a href="https://en.wikipedia.org/wiki/Gaussian_elimination">Gaussian elimination</a>. 
Introducing this unknown error makes the problem difficult to solve (it is difficult to find <i>s</i> accurately), even for quantum computers.</p><p>An equivalent formulation of the LWE problem is:</p><ol><li><p>There exists a vector <i>s</i> in $Z_q^n$, called the secret (the hidden variable).</p></li><li><p>There exist vectors <i>a</i>, chosen uniformly at random.</p></li><li><p>χ is an error distribution; <i>e</i> is an integer error drawn from χ.</p></li><li><p>You are given samples of the form (a, ⟨a, s⟩ + e), where ⟨a, s⟩ is the inner product modulo <i>q</i> of <i>a</i> and <i>s</i>.</p></li><li><p>Writing b = ⟨a, s⟩ + e, the input to the problem is <i>a</i> and <i>b</i>; the goal is to output a guess for <i>s</i>, which is very hard to achieve with accuracy.</p></li></ol><p>Blum, Kalai and Wasserman <a href="https://arxiv.org/abs/cs/0010022">provided the first subexponential algorithm</a> for solving this problem. It requires 2<sup>O(n /log n)</sup> equations/time.</p><p>There are two main kinds of computational LWE problems that are difficult to solve for quantum computers (given certain choices of both <i>q</i> and χ):</p><ol><li><p>Search, which is to recover the secret/hidden variable <i>s</i> given a certain number of samples of the form (a, ⟨a, s⟩ + e).</p></li><li><p>Decision, which is to distinguish a certain number of samples of the form (a, ⟨a, s⟩ + e) from uniformly random samples.</p></li></ol>
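As a toy illustration of the formulation above, the following Python snippet (our own, with deliberately tiny parameters) generates LWE samples (a, ⟨a, s⟩ + e mod q). Anyone holding <i>s</i> can read off the small error; recovering <i>s</i> from the samples alone is the hard problem:

```python
# Toy generator of LWE samples (a, b = <a, s> + e mod q), matching the
# search/decision formulation above. Parameters are illustrative, not secure.
import random

q, n = 23, 4   # tiny modulus and dimension, purely for demonstration

def lwe_samples(s, num, max_err=2):
    """Return `num` samples (a, b) for the secret vector s."""
    samples = []
    for _ in range(num):
        a = [random.randrange(q) for _ in range(n)]
        e = random.randint(-max_err, max_err)          # small error from chi
        b = (sum(ai * si for ai, si in zip(a, s)) + e) % q
        samples.append((a, b))
    return samples

secret = [random.randrange(q) for _ in range(n)]
for a, b in lwe_samples(secret, 3):
    # Knowing s, the error can be read off; without s, finding it (or s)
    # from many samples is the hard problem.
    e = (b - sum(ai * si for ai, si in zip(a, secret))) % q
    assert e <= 2 or e >= q - 2   # the error is small (mod q)
```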
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3FTWefsA5VM4m5OetAKHJQ/533bee8f2eb83b992ac6a5cbfd117e14/image5-7.png" />
            
            </figure><p>The LWE problem: search and decision.</p><p>LWE is just noisy linear algebra, and yet it seems to be a very hard problem to solve. In fact, there are many reasons to believe that the LWE problem is hard: the best algorithms for solving it run in exponential time. It is also closely related to the <a href="https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.724.4038&amp;rep=rep1&amp;type=pdf">Learning Parity with Noise (LPN)</a> problem, which is extensively studied in learning theory, and it is believed to be hard to solve (any progress in breaking LPN will potentially lead to a breakthrough in coding theory). How does it relate to building cryptography? LWE can be used to build public-key cryptographic schemes. In this case, the secret vector <i>s</i> becomes the private key, and the sample pairs (a<sub>i</sub>, b<sub>i</sub>) form the public key (the errors e<sub>i</sub> remain hidden).</p><p>So, why is this problem related to lattices? In <a href="/quantum-solace-and-spectre">other blog posts</a>, we have seen that certain algorithms of <a href="https://www.cloudflare.com/learning/ssl/quantum/what-is-post-quantum-cryptography/">post-quantum cryptography</a> are based on lattices. So, how does LWE relate to them? One can view LWE as the problem of decoding from random linear codes, or reduce it to lattice problems, in particular to the Shortest Vector Problem (SVP) or the Shortest Independent Vectors Problem (SIVP): an efficient solution to LWE implies a quantum algorithm for SVP and SIVP. In <a href="/post-quantum-signatures">other blog posts</a>, we talk about SVP, so, in this one, we will focus on the random bounded distance decoding problem on lattices.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6xOnjfYoHaqK06BOZuobgw/caaacc6a318bffe35af6f5c63b9dbd07/image1-18.png" />
            
            </figure><p>Lattices (as seen in the image), regular and periodic arrangements of points in space, have emerged as a foundation of cryptography in the face of quantum adversaries; one modern problem on which such schemes rely is the Bounded Distance Decoding (BDD) problem. In the BDD problem, you are given a lattice with an arbitrary basis (a basis is a list of vectors that generate all the other points in a lattice; in the case of the image, it is the pair of vectors b<sub>1</sub> and b<sub>2</sub>). Take a lattice point b<sub>3</sub> and perturb it by adding some noise (or error), which gives a point <i>x</i>. Given <i>x</i>, the goal is to find the nearest lattice point (in this case, b<sub>3</sub>), as seen in the image. LWE is an average-case form of BDD (Regev also gave a worst-case to average-case reduction from BDD to LWE: the security of a cryptographic system is related to the worst-case complexity of BDD).</p>
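To see why the quality of the basis matters, here is a toy two-dimensional sketch in Python (our own example): with short, nearly orthogonal basis vectors, simple rounding (Babai's rounding technique) recovers the nearest lattice point. In high dimensions, with a long and skewed basis, this trick fails, which is what cryptography exploits:

```python
# Bounded Distance Decoding in a toy 2-dimensional lattice. With a "good"
# (short, nearly orthogonal) basis, rounding suffices; real cryptographic
# instances use hundreds of dimensions and a bad basis, where this fails.

b1, b2 = (3, 1), (1, 4)          # basis vectors of the lattice
c1, c2 = 5, -2                   # hidden integer coefficients
point = (c1*b1[0] + c2*b2[0], c1*b1[1] + c2*b2[1])   # a lattice point
x = (point[0] + 0.2, point[1] - 0.3)                 # perturbed by small noise

# Solve the 2x2 system B * c = x (B has columns b1, b2), then round the
# coefficients to the nearest integers.
det = b1[0]*b2[1] - b2[0]*b1[1]
g1 = ( b2[1]*x[0] - b2[0]*x[1]) / det
g2 = (-b1[1]*x[0] + b1[0]*x[1]) / det
recovered = (round(g1), round(g2))
assert recovered == (c1, c2)     # the nearest lattice point is found
```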
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5q6amGgRvbV2qDx82I0HbF/4a19e3224d839860af5a0107f0484f26/image2-19.png" />
            
            </figure><p>The first doll is built. Now, how do we build encryption from this mathematical base? From LWE, we can build a public key encryption algorithm (PKE), as we will see next with <i>FrodoPKE</i> as an example.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/51MCMCJSxznDdeKsLjSs5j/a8f7e494c928c05ae76a0faf62f0efac/image6-7.png" />
            
            </figure>
    <div>
      <h3>Public Key Encryption: FrodoPKE</h3>
      <a href="#public-key-encryption-frodopke">
        
      </a>
    </div>
    <p>The second doll of the Matryoshka uses the mathematical base to build a Public Key Encryption algorithm from it. Let’s look at <i>FrodoPKE</i>. <i>FrodoPKE</i> is a public-key encryption scheme which is the building block for <i>FrodoKEM</i>. It is made up of three components: <a href="https://www.esat.kuleuven.be/cosic/blog/introduction-to-lattice-based-cryptography-part-2-lwe-encryption/">key generation, encryption, and decryption</a>. Let’s say again that Persephone wants to communicate with Demeter. They will run the following operations:</p><ol><li><p><i>Generation</i>: Generate a key pair by taking an LWE sample (like <i>(A, B = As + e mod q)</i>). The public key is <i>(A, B)</i> and the private key is <i>s</i>. Persephone sends this public key to Demeter.</p></li><li><p><i>Encryption</i>: Demeter receives this public key and wants to send a private message with it, something like “come back”. She generates a secret vector <i>s1</i> and two error vectors <i>e1</i> and <i>e2</i>. She then:</p><ol><li><p>Makes the sample <i>(b1 = As1 + e1 mod q)</i>.</p></li><li><p>Makes the sample <i>(v1 = Bs1 + e2 mod q)</i>.</p></li><li><p>Adds the message <i>m</i> to the most significant bits of <i>v1</i>.</p></li><li><p>Sends <i>b1</i> and <i>v1</i> to Persephone (this is the ciphertext).</p></li></ol></li><li><p><i>Decryption</i>: Persephone receives the ciphertext, calculates <i>v1 - b1 * s</i> to recover the message <i>m</i>, and proceeds to leave to meet her mother.</p></li></ol><p>Notice that computing <i>v = v1 - b1 * s</i> gives us the message <i>m</i> plus a small error term (a combination of the error vectors sampled during key generation and encryption). The decryption process performs rounding, which will output the original message <i>m</i> if the errors are carefully chosen. If not, notice that there is the potential of decryption failure.</p><p>What kind of security does this algorithm give? 
In cryptography, we design algorithms with security notions in mind: notions they have to attain. This algorithm, <i>FrodoPKE</i> (as with other PKEs), satisfies only <i>IND-CPA</i> (<i>Indistinguishability under chosen-plaintext attack</i>) security. Intuitively, this notion means that a passive eavesdropper listening in can get no information about a message from a ciphertext. Even if the eavesdropper knows that a ciphertext is an encryption of just one of two messages of their choice, looking at the ciphertext should not tell the adversary which one was encrypted. We can also think of it as a game:</p><blockquote><p>Imagine a gnome sitting inside a box. This box takes a message and produces a ciphertext. All the gnome has to do is record each message and the ciphertext they see generated. An outside-of-the-box adversary, like a troll, wants to beat this game and know what the gnome knows: what ciphertext is produced if a certain message is given. The troll chooses two messages (m1 and m2) of the same length and sends them to the box. The gnome records the box operations and flips a coin. If the coin lands heads, then they send the ciphertext (c1) corresponding to m1. Otherwise, they send c2 corresponding to m2. The troll, knowing the messages and the ciphertext, has to guess which message was encrypted.</p></blockquote><p>IND-CPA security is not enough for all secure communication on the Internet. Adversaries can not only passively eavesdrop, but also mount <i>chosen-ciphertext attacks</i> (<i>CCA</i>): they can actively modify messages in transit and trick the communicating parties into decrypting these modified messages, thereby obtaining a <i>decryption oracle</i>. They can use this decryption oracle to gain information about a desired ciphertext, and so compromise confidentiality. 
Such attacks are practical: all an attacker has to do is, for example, send several million test ciphertexts to a decryption oracle; see <a href="https://medium.com/@c0D3M/bleichenbacher-attack-explained-bc630f88ff25">Bleichenbacher’s attack</a> and the <a href="https://www.robotattack.org/">ROBOT attack</a>.</p><p>In the case of Demeter and Persephone, the lack of CCA security means that Hades could generate and send several million test ciphertexts to the decryption oracle and eventually reveal the content of a valid ciphertext that he did not generate. Demeter and Persephone, then, might not want to use this scheme.</p>
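The key generation, encryption and decryption flow described above can be sketched with a toy Regev-style scheme. This is a simplification we wrote for illustration: one bit per ciphertext, toy parameters, and not FrodoPKE's actual matrices, compression, or error distributions:

```python
# Toy Regev-style LWE encryption of a single bit, in the spirit of the
# FrodoPKE flow above (simplified; parameters are illustrative, not secure).
import random

q, n, E = 4093, 16, 4   # modulus, dimension, error bound (all toy-sized)

def keygen():
    s = [random.randrange(q) for _ in range(n)]              # private key
    A = [[random.randrange(q) for _ in range(n)] for _ in range(n)]
    e = [random.randint(-E, E) for _ in range(n)]
    b = [(sum(A[i][j]*s[j] for j in range(n)) + e[i]) % q for i in range(n)]
    return (A, b), s                                         # public, private

def encrypt(pk, m):                                          # m is 0 or 1
    A, b = pk
    r = [random.randint(0, 1) for _ in range(n)]             # random subset
    c1 = [sum(A[i][j]*r[i] for i in range(n)) % q for j in range(n)]
    c2 = (sum(b[i]*r[i] for i in range(n)) + m*(q//2)) % q   # m in the top bit
    return c1, c2

def decrypt(sk, ct):
    c1, c2 = ct
    d = (c2 - sum(c*si for c, si in zip(c1, sk))) % q        # = error + m*q/2
    return 0 if d < q//4 or d > 3*q//4 else 1                # rounding step

pk, sk = keygen()
for m in (0, 1):
    assert decrypt(sk, encrypt(pk, m)) == m
```

The `decrypt` function shows the rounding discussed above: the leftover value `d` is the accumulated small error plus `m*(q//2)`, and since the total error is at most `n*E = 64`, well below `q//4`, decryption never fails at these parameters.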
    <div>
      <h3>Key Encapsulation Mechanisms: FrodoKEM</h3>
      <a href="#key-encapsulation-mechanisms-frodokem">
        
      </a>
    </div>
    <p>The final doll of the Matryoshka takes a secure-against-CPA scheme and makes it secure against CCA. A secure-against-CCA scheme must not leak information about its private key, even when decrypting arbitrarily chosen ciphertexts. It must also be the case that an adversary cannot craft valid ciphertexts without knowing the plaintext message; suppose, again, that the adversary knows that the encrypted message can only be either <i>m0</i> or <i>m1</i>. If the attacker can craft another valid ciphertext, for example, by flipping a bit of the ciphertext in transit, they can send this modified ciphertext and see whether a message close to <i>m0</i> or <i>m1</i> is returned.</p><p>To make a CPA scheme secure against CCA, one can use the <a href="https://eprint.iacr.org/2017/604.pdf">Hofheinz, Hövelmanns, and Kiltz (HHK) transformations</a> (see this <a href="https://hss-opus.ub.ruhr-uni-bochum.de/opus4/frontdoor/deliver/index/docId/7758/file/diss.pdf">thesis</a> for more information). The HHK transformation constructs an IND-CCA-secure KEM from an IND-CPA PKE and three hash functions. In the case of the algorithm we are exploring, FrodoKEM, it uses a slightly tweaked version of the HHK transform. 
It has, again, three functions (some parts of this description are simplified):</p><p><i>Generation</i>:</p><ol><li><p>We need a hash function <i>G1</i>.</p></li><li><p>We need a PKE scheme, such as FrodoPKE.</p></li><li><p>We call the <i>Generation</i> function of FrodoPKE, which returns a public key (pk) and a private key (sk).</p></li><li><p>We hash the public key: <i>pkh ← G1(pk)</i>.</p></li><li><p>We choose a value <i>s</i> at random.</p></li><li><p>The public key is <i>pk</i> and the private key <i>sk1</i> is <i>(sk, s, pk, pkh)</i>.</p></li></ol><p><i>Encapsulate</i>:</p><ol><li><p>We need two hash functions: <i>G2</i> and <i>F</i>.</p></li><li><p>We generate a random message <i>u</i>.</p></li><li><p>We hash the public-key hash <i>pkh</i> together with the random message: <i>(r, k) ← G2(pkh || u)</i>.</p></li><li><p>We call the <i>Encryption</i> function of FrodoPKE: <i>ciphertext ← Encrypt(u, pk, r)</i>.</p></li><li><p>We hash: <i>shared secret ← F(ciphertext || k)</i>.</p></li><li><p>We send the ciphertext (the shared secret itself is never transmitted).</p></li></ol><p><i>Decapsulate</i>:</p><ol><li><p>We need two hash functions (<i>G2</i> and <i>F</i>) and we have <i>(sk, s, pk, pkh)</i>.</p></li><li><p>We receive the ciphertext.</p></li><li><p>We call the <i>Decryption</i> function of FrodoPKE: <i>message ← Decrypt(ciphertext, sk)</i>.</p></li><li><p>We hash: <i>(r, k) ← G2(pkh || message)</i>.</p></li><li><p>We call the <i>Encryption</i> function of FrodoPKE: <i>ciphertext1 ← Encrypt(message, pk, r)</i>.</p></li><li><p>If <i>ciphertext1 == ciphertext</i>, we set <i>k1 = k</i>; else, we set <i>k1 = s</i>.</p></li><li><p>We hash: <i>ss ← F(ciphertext || k1)</i>.</p></li><li><p>We return the shared secret <i>ss</i>.</p></li></ol><p>What this algorithm achieves is the generation of a shared secret and ciphertext which can be used to establish a secure channel. 
It also means that no matter how many ciphertexts Hades sends to the decryption oracle, it will never reveal the content of a valid ciphertext that Hades himself did not generate. This is because the encryption process is run again in <i>Decapsulate</i> to check that the ciphertext was computed correctly, which ensures that an adversary cannot craft valid ciphertexts simply by modifying existing ones.</p><p>With this last doll, the algorithm has been created, and it is safe in the face of a quantum adversary.</p>
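Putting the pieces together, here is a sketch of the Generate/Encapsulate/Decapsulate flow just described. The PKE underneath is a toy ElGamal-style stand-in (classical, insecure parameters) rather than FrodoPKE, and <i>G1</i>, <i>G2</i> and <i>F</i> are instantiated as domain-separated SHA-256; all of this is our illustrative assumption, not FrodoKEM's actual construction:

```python
# Sketch of the HHK-style Generate/Encapsulate/Decapsulate flow above,
# wired to a toy ElGamal-style PKE standing in for FrodoPKE. Illustrative
# only: classical, tiny parameters, and hashes of our own choosing.
import hashlib
import secrets

P = (1 << 127) - 1          # a Mersenne prime; toy-sized, not secure
G = 3

def H(*parts):              # SHA-256 over concatenated byte strings
    h = hashlib.sha256()
    for p in parts:
        h.update(p)
    return h.digest()

def i2b(x):                 # encode a group element as 16 bytes
    return x.to_bytes(16, "big")

def pke_generate():
    sk = secrets.randbelow(P - 2) + 1
    return pow(G, sk, P), sk

def pke_encrypt(u, pk, r):  # deterministic once the randomness r is fixed
    x = int.from_bytes(r, "big") % (P - 2) + 1
    pad = H(b"pad", i2b(pow(pk, x, P)))
    return pow(G, x, P), bytes(a ^ b for a, b in zip(u, pad))

def pke_decrypt(ct, sk):
    c1, c2 = ct
    pad = H(b"pad", i2b(pow(c1, sk, P)))
    return bytes(a ^ b for a, b in zip(c2, pad))

def kem_generate():
    pk, sk = pke_generate()
    pkh = H(b"G1", i2b(pk))                       # hash of the public key
    s = secrets.token_bytes(16)                   # for implicit rejection
    return pk, (sk, s, pk, pkh)

def encapsulate(pk):
    u = secrets.token_bytes(32)                   # random message
    pkh = H(b"G1", i2b(pk))
    rk = H(b"G2", pkh, u)
    r, k = rk[:16], rk[16:]
    ct = pke_encrypt(u, pk, r)
    ss = H(b"F", i2b(ct[0]), ct[1], k)            # shared secret, kept local
    return ct, ss

def decapsulate(ct, sk1):
    sk, s, pk, pkh = sk1
    u = pke_decrypt(ct, sk)
    rk = H(b"G2", pkh, u)
    r, k = rk[:16], rk[16:]
    if pke_encrypt(u, pk, r) == ct:               # re-encryption check
        k1 = k
    else:
        k1 = s                                    # implicit rejection
    return H(b"F", i2b(ct[0]), ct[1], k1)

pk, sk1 = kem_generate()
ct, ss = encapsulate(pk)
assert decapsulate(ct, sk1) == ss
```

Note how `decapsulate` never returns an error: on a failed re-encryption check it silently derives a secret from `s` instead, so a tampered ciphertext yields an unrelated key rather than a useful oracle answer.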
    <div>
      <h3>Other KEMs beyond Frodo</h3>
      <a href="#other-kems-beyond-frodo">
        
      </a>
    </div>
    <p>While the <a href="https://en.wikipedia.org/wiki/Frodo_Baggins">ring bearer</a>, Frodo, wandered around and transformed, he was not alone in his journey. FrodoKEM is currently designated as an alternate candidate for standardization as part of the post-quantum NIST process. But there are others:</p><ul><li><p>Kyber, NTRU and Saber, which are based on variants of the LWE problem over lattices, and</p></li><li><p>Classic McEliece, which is based on error-correcting codes.</p></li></ul><p>The lattice-based variants have the advantage of being fast, while producing relatively small keys and ciphertexts. However, there are concerns about <a href="https://www.youtube.com/watch?v=iAjkEF0x5qw&amp;ab_channel=SimonsInstitute">their</a> <a href="https://www.youtube.com/watch?v=K5Apl_qCnDA&amp;ab_channel=SimonsInstitute">security</a> that still need to be properly examined. More confidence is found in the security of the Classic McEliece scheme, as its underlying problem has been studied for longer (it is only one year older than RSA!). It has a disadvantage, though: it produces extremely large public keys. Classic-McEliece-348864, for example, produces public keys of 261,120 bytes, whereas Kyber512, which claims comparable security, produces public keys of 800 bytes.</p><p>They are all Matryoshka dolls (sometimes including non-post-quantum ones): algorithms placed one inside the other. They all start with a small but powerful idea: a mathematical problem whose solution is hard to find efficiently. They then take the algorithmic approach and achieve one cryptographic security notion. And, by the magic of hashes and length preservation, they achieve a stronger one. This just goes to show that cryptographic algorithms are not perfect in themselves; they stack on top of each other to get the best of each one. Facing quantum adversaries with them is the same: not a process of isolation, but rather one of stacking, of creating the big picture from the smallest piece.</p>
    <div>
      <h3>References:</h3>
      <a href="#references">
        
      </a>
    </div>
    <ul><li><p>NIST Post-Quantum Cryptography process FAQ: <a href="https://csrc.nist.gov/Projects/post-quantum-cryptography/faqs">https://csrc.nist.gov/Projects/post-quantum-cryptography/faqs</a></p></li><li><p>“A Decade of Lattice Cryptography” by Chris Peikert: <a href="https://eprint.iacr.org/2015/939.pdf">https://eprint.iacr.org/2015/939.pdf</a></p></li><li><p>“FrodoKEM: Learning With Errors Key Encapsulation Algorithm Specifications and Supporting Documentation” by Erdem Alkim, Joppe W. Bos, Léo Ducas, Patrick Longa, Ilya Mironov, Michael Naehrig, Valeria Nikolaenko, Chris Peikert, Ananth Raghunathan and Douglas Stebila: <a href="https://frodokem.org/files/FrodoKEM-specification-20171130.pdf">https://frodokem.org/files/FrodoKEM-specification-20171130.pdf</a></p></li><li><p>“The Learning with Errors Problem” by Oded Regev: <a href="https://cims.nyu.edu/~regev/papers/lwesurvey.pdf">https://cims.nyu.edu/~regev/papers/lwesurvey.pdf</a></p></li><li><p>“Wonk post: chosen ciphertext security in public-key encryption” by Matthew Green: <a href="https://blog.cryptographyengineering.com/2018/07/20/wonk-post-chosen-ciphertext-security-in-public-key-encryption-part-2/">https://blog.cryptographyengineering.com/2018/07/20/wonk-post-chosen-ciphertext-security-in-public-key-encryption-part-2/</a></p></li><li><p>“A Designer's Guide to KEMs” by Alexander W. Dent: <a href="https://eprint.iacr.org/2002/174">https://eprint.iacr.org/2002/174</a></p></li><li><p>“A Modular Analysis of the Fujisaki-Okamoto Transformation” by Dennis Hofheinz, Kathrin Hövelmanns and Eike Kiltz: <a href="https://eprint.iacr.org/2017/604.pdf">https://eprint.iacr.org/2017/604.pdf</a></p></li></ul><p></p> ]]></content:encoded>
            <category><![CDATA[Research]]></category>
            <category><![CDATA[Cryptography]]></category>
            <category><![CDATA[Post-Quantum]]></category>
            <guid isPermaLink="false">49wRKGXrzIHjgTPhrb6x4w</guid>
            <dc:creator>Goutam Tamvada</dc:creator>
            <dc:creator>Sofía Celi</dc:creator>
        </item>
        <item>
            <title><![CDATA[Deep dive into a post-quantum signature scheme]]></title>
            <link>https://blog.cloudflare.com/post-quantum-signatures/</link>
            <pubDate>Tue, 22 Feb 2022 13:59:15 GMT</pubDate>
            <description><![CDATA[ How can one attest to an identity and prove it belongs to one self? And how can one do it in the face of quantum computers? In this blog post, we examine these questions and explain what post-quantum signatures are ]]></description>
            <content:encoded><![CDATA[ 
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1ewsKSumB2t0mCKIp9i1yo/6d79a6e2ec066820b1c17bdb06fb96fe/image6-5.png" />
            
            </figure><p>To provide authentication is no more than to assert, to provide proof of, an identity. We can claim to be who we say we are, but if there is no proof of it (recognition of our face, voice or mannerisms), there is no assurance of that. In fact, we can claim to be someone we are not. We can even claim we are someone that does not exist, as clever Odysseus did once.</p><p>The story goes that there was a man named <a href="https://en.wikipedia.org/wiki/Odysseus">Odysseus</a> who angered the gods and was punished with perpetual wandering. He traveled and traveled the seas meeting people and suffering calamities. On one of his trips, he came across the <a href="https://en.wikipedia.org/wiki/Polyphemus">Cyclops Polyphemus</a> who, in short, wanted to eat him. Clever Odysseus got away (as he usually did) by wounding the cyclops’ eye. The wounded cyclops asked for Odysseus’ name, to which the latter replied:</p><blockquote><p>“Cyclops, you asked for my glorious name, and I will tell it; but do give the stranger's gift, just as you promised. <i>Nobody</i> I am called. <i>Nobody</i> they called me: by mother, father, and by all my comrades”</p></blockquote><p><i>(As seen in </i><a href="https://www.perseus.tufts.edu/hopper/text?doc=Perseus%3Atext%3A1999.01.0135%3Abook%3D9%3Acard%3D360"><i>The Odyssey, book 9</i></a><i>. Translation by the authors of the blogpost).</i></p><p>The cyclops believed that was Odysseus’ name (<a href="https://en.wikipedia.org/wiki/Outis">Nobody</a>) and proceeded to tell everyone, which resulted in no one believing him. “How can nobody have wounded you?” they questioned the cyclops. It was a trick, a play on words by Odysseus. Because to give an identity, to tell the world who you are (or who you are pretending to be) is easy. To provide proof of it is very difficult. The cyclops could have asked Odysseus to prove who he was, and the story would have been different. And Odysseus wouldn’t have left the cyclops laughing.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1Pkg7X1JgYTojfc5AgbTxY/ce721f84c91d55c5e110a3e4d57c7f04/image1-17.png" />
            
            </figure><p>In the digital world, proving your identity is more complex. In face-to-face conversations, we can often attest to the identity of someone by knowing and verifying their face, their voice, or by someone else introducing them to us. From computer to computer, the scenario is a little different. But there are ways. When a user connects to their banking provider on the Internet, they need assurance not only that the information they send is secured; but that they are also sending it to their bank, and not a malicious website masquerading as their provider. The Transport Layer Security (TLS) protocol provides this through digitally signed statements of identity (certificates). Digital signature schemes also play a central role in <a href="https://www.cloudflare.com/learning/dns/dnssec/ecdsa-and-dnssec/">DNSSEC</a> as well, an extension to the <a href="https://www.cloudflare.com/learning/dns/what-is-dns/">Domain Name System (DNS)</a> that protects applications from accepting forged or manipulated DNS data, which is what happens during <a href="https://en.wikipedia.org/wiki/DNS_spoofing">DNS cache poisoning</a>, for example.</p><p>A digital signature is a demonstration of authorship of a document, conversation, or message sent using digital means. 
As with “regular” signatures, they can be publicly verified by anyone who holds the signer’s public verification key.</p><p>A digital signature scheme is a collection of three algorithms:</p><ol><li><p>A key generation algorithm, <i>Generate</i>, which generates a public verification key and a private signing key (a keypair).</p></li><li><p>A signing algorithm, <i>Sign</i>, which takes the private signing key and a message, and outputs a signature of the message.</p></li><li><p>A verification algorithm, <i>Verify</i>, which takes the public verification key, the signature and the message, and outputs a value stating whether the signature is valid or not.</p></li></ol><p>In the case of Odysseus’ story, what the cyclops could have done to verify his identity (to verify that he indeed was <i>Nobody</i>) was to ask for a proof of identity: for example, for other people to vouch that he is who he claims to be. Or he could have asked for a digital signature (attested by several people or registered as his own) attesting he was <i>Nobody</i>. Nothing like that happened, so the cyclops was fooled.</p><p>In the Transport Layer Security protocol, TLS, authentication needs to be executed at the time a connection or conversation is established (as data sent after this point will be authenticated until that is explicitly disabled), rather than for the full lifetime of the data (as with confidentiality). Because of that, the need to transition to post-quantum signatures is not as urgent as it is for post-quantum key exchange schemes, and we do not believe there are sufficiently powerful quantum computers at the moment that can be used to listen in on connections and forge signatures. 
At some point, that will no longer be true, and the transition will have to be made.</p><p>There are various candidates for authentication schemes (including digital signatures) that are quantum secure: some use cryptographic hash functions, some use problems over lattices, while others use techniques from the field of <a href="https://en.wikipedia.org/wiki/Secure_multi-party_computation">multi-party computation</a>. It is also possible to use Key Encapsulation Mechanisms (or KEMs) to achieve authentication in cryptographic protocols.</p><p>In this post, much like in the one about <a href="/post-quantum-key-encapsulation">Key Encapsulation Mechanisms</a>, we will give a bird’s-eye view of the construction of one particular post-quantum signature algorithm. We will discuss <a href="https://pq-crystals.org/dilithium/data/dilithium-specification-round3.pdf">CRYSTALS-Dilithium</a>, as an example of how a signature scheme can be constructed. Dilithium is a finalist candidate in the <a href="https://csrc.nist.gov/Projects/post-quantum-cryptography/round-3-submissions">NIST post-quantum cryptography standardization process</a> and provides an example of a standard technique used to construct digital signature schemes. We chose to explain Dilithium here as it is a finalist and its design is straightforward to explain.</p><p>We will again build the algorithm up layer-by-layer. We will look at:</p><ul><li><p>Its mathematical underpinnings: as we see in <a href="/post-quantum-key-encapsulation">other blog posts</a>, a cryptographic algorithm can be built as a Matryoshka doll or a <a href="https://en.wikipedia.org/wiki/Chinese_boxes">Chinese box</a>. Let us use the Chinese box analogy here. The first box, in this case, is the mathematical base, whose hardness should be strong so that security is maintained. 
In the post-quantum world, this is usually the hardness of some lattice or isogeny problems.</p></li><li><p>Its algorithmic construction: these are all the subsequent boxes that take the mathematical base and construct an algorithm out of it. In the case of a signature, first one constructs an identification scheme, which we will define in the next sections, and then transform it to a signature scheme using the <a href="https://www.iacr.org/archive/asiacrypt2009/59120596/59120596.pdf">Fiat-Shamir transformation</a>.</p></li></ul>
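Before we dive into Dilithium, the Generate/Sign/Verify interface described above can be made concrete with a toy hash-based scheme: Lamport's one-time signature, one of the simplest members of the hash-based family mentioned earlier. This sketch is our own, it is not Dilithium, and each keypair must sign at most one message:

```python
# Toy Lamport one-time signature, illustrating the Generate/Sign/Verify
# interface. Hash-based, so quantum-resistant in principle, but each
# keypair may only ever sign a SINGLE message; this is NOT Dilithium.
import hashlib
import secrets

def sha(b):
    return hashlib.sha256(b).digest()

def generate():
    # Two random 32-byte secrets per message bit; their hashes are public.
    sk = [[secrets.token_bytes(32), secrets.token_bytes(32)] for _ in range(256)]
    pk = [[sha(x), sha(y)] for x, y in sk]
    return pk, sk

def bits(msg):
    d = sha(msg)                # sign the hash of the message
    return [(d[i // 8] >> (i % 8)) & 1 for i in range(256)]

def sign(sk, msg):
    # Reveal one of the two secrets per bit, chosen by the message bit.
    return [sk[i][b] for i, b in enumerate(bits(msg))]

def verify(pk, msg, sig):
    return all(sha(s) == pk[i][b] for i, (s, b) in enumerate(zip(sig, bits(msg))))

pk, sk = generate()
sig = sign(sk, b"I am Nobody")
assert verify(pk, b"I am Nobody", sig)
assert not verify(pk, b"I am Odysseus", sig)
```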
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6HkQpmpEYsnuu4tZx9NM9C/3f22f53e364162d3ed551bc5cc06fce9/image5-6.png" />
            
            </figure><p>The mathematical core of <i>Dilithium</i> is, as with <a href="https://csrc.nist.gov/Projects/post-quantum-cryptography/round-3-submissions">FrodoKEM</a>, based on the hardness of a variant of the Learning with Errors (LWE) problem and the Short Integer Solution (SIS) problem. As we have already <a href="/post-quantum-key-encapsulation">talked about LWE</a>, let’s now briefly go over SIS.</p><p><b>Note to the reader</b>: Some mathematics is coming in the next sections; but don’t worry, we will guide you through it.</p>
    <div>
      <h3>The Short Integer Solution Problem</h3>
      <a href="#the-short-integer-solution-problem">
        
      </a>
    </div>
    <p>In order to properly explain what the SIS problem is, we need to first start by understanding what a <i>lattice</i> is. A lattice is a regular repeated arrangement of objects or elements over a space. In geometry, these objects can be points; in physics, these objects can be atoms. For our purposes, we can think of a lattice as a set of points in <i>n</i>-dimensional space with a periodic (repeated) structure, as we see in the image. It is important to understand the meaning of <i>n</i>-dimensional space here: a two-dimensional space is, for example, the one that we often see represented on planes: a projection of the physical universe into a plane with two dimensions which are length and width. Historically, lattices have been investigated since the late 18th century for various reasons. For a more comprehensive introduction to lattices, you can read this <a href="https://cims.nyu.edu/~regev/papers/qcrypto.pdf">great paper</a>.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4XTU1HRx6QrSZx9fsPjhEa/0a51d0469b64c83428759635be0c5c88/image4-11.png" />
            
</figure><p>Picture of a lattice. They are found in the wild in Portugal.</p><p>What does <a href="https://crypto.stanford.edu/cs355/18sp/lec9.pdf">SIS</a> pertain to? You are given a positive integer <i>q</i> and a <a href="https://en.wikipedia.org/wiki/Matrix_(mathematics)">matrix</a> (a rectangular array of numbers) <i>A</i> of dimensions <i>n x m</i> (the number of rows is <i>n</i> and the number of columns is <i>m</i>), whose elements are integers between 0 and <i>q</i>. You are then asked to find a non-zero <a href="https://en.wikipedia.org/wiki/Vector_(mathematics_and_physics)">vector</a> <i>r</i> (smaller than a certain amount, called the “norm bound”) such that <i>Ar = 0 (mod q)</i>. The conjecture is that, for a sufficiently large <i>n</i>, finding this solution is hard even for quantum computers. This problem is “dual” to the LWE problem that we explored in <a href="/post-quantum-key-encapsulation">another blog post</a>.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/26WDPtAXRotNflQ4IqzL0s/e7bdb4a467589e0db87f950e4482306c/image2-18.png" />
            
</figure><p>We can define this same <a href="https://web.eecs.umich.edu/~cpeikert/pubs/slides-abit2.pdf">problem over a lattice</a>. Take a lattice <i>L(A)</i>, made up of <i>m</i> different <i>n</i>-dimensional vectors <i>y</i> (the repeated elements). The goal is to find non-zero vectors in the lattice such that <i>Ay = 0 (mod q)</i> (for the given <i>q</i>), whose size is less than a certain specified amount. This problem can be seen as trying to find the “short” solutions in the lattice, which makes the problem the <a href="https://en.wikipedia.org/wiki/Lattice_problem#Shortest_vector_problem_(SVP)">Shortest Vector Problem</a> (SVP) in the average case. Finding this solution is simple in two dimensions (as seen in the diagram), but hard in higher dimensions.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3CDPZWUUhMGnb2VKGMoxyG/eedf163a85ea7567b55003d382fe8b21/image3-20.png" />
            
            </figure><p>The SIS problem as the SVP. The goal is to find the “short” vectors in the radius.</p><p>The SIS problem is often used in cryptographic constructions such as one-way functions, collision resistant hash functions, digital signature schemes, and identification schemes.</p><p>We have now built the first Chinese box: the mathematical base. Let’s take this base now and create schemes from it.</p>
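<p>To make the SIS statement concrete, here is a small Python sketch (with toy, insecure parameters chosen purely for illustration) that checks whether a candidate vector <i>r</i> is a valid solution: it must be non-zero, short with respect to the norm bound, and satisfy <i>Ar = 0 (mod q)</i>.</p>

```python
import numpy as np

def is_sis_solution(A, r, q, norm_bound):
    """Check whether r solves the SIS instance (A, q, norm_bound):
    r must be non-zero, short, and satisfy A*r = 0 (mod q)."""
    r = np.asarray(r)
    if not np.any(r):                      # the all-zero vector is excluded
        return False
    if np.linalg.norm(r) >= norm_bound:    # "short" with respect to the norm bound
        return False
    return bool(np.all(A @ r % q == 0))

# A toy instance: q = 17 and a 2x4 matrix A. Short vectors may have small
# negative coefficients, even though A's entries lie in [0, q).
q = 17
A = np.array([[1, 2, 3, 4],
              [5, 6, 7, 8]])
r = np.array([1, -2, 1, 0])   # A @ r = (1 - 4 + 3, 5 - 12 + 7) = (0, 0)
print(is_sis_solution(A, r, q, norm_bound=5))  # True
```

In a real SIS instance, of course, the dimensions and the modulus are far larger, and finding such an <i>r</i> (rather than checking it) is the hard part.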
    <div>
      <h3>Identification Schemes</h3>
      <a href="#identification-schemes">
        
      </a>
    </div>
    <p>From the mathematical base of our Chinese box, we build the first computational algorithm: an identification scheme. An identification scheme consists of a key generation algorithm, which outputs a public and private key, and an interactive protocol between a prover <i>P</i> and a verifier <i>V</i>. The prover has access to the public key and private key, and the verifier only has access to the public key. A series of messages are then exchanged such that the prover can demonstrate to the verifier that they know the private key, without leaking any other information about the private key.</p><p>More specifically, a three-move (three rounds of interaction) identification scheme is a collection of algorithms. Let’s think about it in the terms of Odysseus trying to prove to the cyclops that he is <i>Nobody</i>:</p><ul><li><p>Odysseus (the prover) runs a key generation algorithm, <i>Generate</i>, that outputs a public and private keypair.</p></li><li><p>Odysseus then runs a commitment algorithm, <i>Commit,</i> that uses the private key, and outputs a commitment <i>Y</i>. The commitment is nothing more than a statement that this specific private key is the one that will be used. He sends this to the cyclops.</p></li><li><p>The cyclops (the verifier) takes the commitment and runs a challenge algorithm, <i>Challenge</i>, and outputs a challenge <i>c</i>. This challenge is a question that asks: are you really the owner of the private key?</p></li><li><p>Odysseus receives the challenge and runs a response algorithm, <i>Response</i>. This outputs a response <i>z</i> to the challenge. He sends this value to the cyclops.</p></li><li><p>The cyclops runs the verification algorithm, <i>Verify</i>, which outputs either accept (1) or reject (0) if the answer is correct.</p></li></ul><p>If Odysseus was really the owner of the private key for <i>Nobody</i>, he would have been able to answer the challenge in a positive manner (with a 1). 
But, as he is not, he runs away (and this is the last time we see him in this blog post).</p>
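<p>As an illustration of the three moves, here is a sketch of a classical Schnorr-style identification scheme (not post-quantum, and with tiny, insecure parameters chosen only for readability): it has the same <i>Generate</i>/<i>Commit</i>/<i>Challenge</i>/<i>Response</i>/<i>Verify</i> structure that Dilithium realizes over lattices.</p>

```python
import secrets

# Toy group: the subgroup of prime order n inside Z_p^* (tiny and insecure).
p, n = 2267, 103                      # n divides p - 1 = 2 * 11 * 103
g = pow(2, (p - 1) // n, p)           # an element of order n (g != 1 here)

def generate():
    x = secrets.randbelow(n - 1) + 1  # private key
    y = pow(g, x, p)                  # public key
    return x, y

def commit():
    k = secrets.randbelow(n - 1) + 1  # secret nonce, kept by the prover
    return k, pow(g, k, p)            # (state, commitment Y)

def challenge():
    return secrets.randbelow(n)       # verifier's random challenge c

def response(x, k, c):
    return (k + c * x) % n            # answer z, mixing nonce and private key

def verify(y, Y, c, z):
    # g^z = g^(k + c*x) = Y * y^c, so equality holds iff the prover knows x.
    return pow(g, z, p) == (Y * pow(y, c, p)) % p

# One run of the three-move protocol.
x, y = generate()
k, Y = commit()           # prover  -> verifier
c = challenge()           # verifier -> prover
z = response(x, k, c)     # prover  -> verifier
print(verify(y, Y, c, z))  # True
```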
    <div>
      <h3>The Dilithium Identification Scheme</h3>
      <a href="#the-dilithium-identification-scheme">
        
      </a>
    </div>
    <p>The basic building blocks of Dilithium are polynomials and rings. This is the second-to-last of the Chinese boxes, and we will explore it now.</p><p>A polynomial ring, <i>R</i>, is a <a href="https://en.wikipedia.org/wiki/Ring_(mathematics)">ring</a> whose elements are polynomials. A ring is a set equipped with two operations, addition and multiplication; and a polynomial is an expression built out of variables and coefficients. The “size” of these polynomials, defined as the size of the largest coefficient, plays a crucial role in these kinds of algorithms.</p><p>In the case of Dilithium, the <i>Generation</i> algorithm creates a <i>k x l</i> matrix <i>A</i>. Each entry of this matrix is a polynomial in the defined ring. The generation algorithm also creates random private vectors <i>s1</i> and <i>s2</i>, whose components are elements of the ring <i>R</i>. The public key is the matrix <i>A</i> and <i>t = As1 + s2</i>. It is infeasible for a quantum computer to recover the secret values given just <i>t</i> and <i>A</i>. This problem is called the Module Learning With Errors (MLWE) problem, and it is a variant of LWE as seen <a href="/post-quantum-key-encapsulation">in this blog post</a>.</p><p>Armed with the public and private keys, the Dilithium <i>identification scheme</i> proceeds as follows (some details are left out for simplicity, like the rejection sampling):</p><ol><li><p>The prover wants to prove they know the private key. They generate a random secret nonce y whose coefficients are less than a security parameter. They then compute Ay and set a commitment w1 to be the “high-order”<sup>1</sup> bits of the coefficients in this vector.</p></li><li><p>The verifier accepts the commitment and creates a challenge c.</p></li><li><p>The prover creates the potential signature z = y + cs1 (notice the usage of the random secret nonce and of the private key) and performs checks on the sizes of several parameters, which makes the signature secure. 
This is the answer to the challenge.</p></li><li><p>The verifier receives the signature and computes w′1 to be the “high-order” bits of Az−ct (notice the usage of the public key). They accept this answer if all the coefficients of z are less than the security parameter, and if w′1 is equal to the commitment w1.</p></li></ol><p>The identification scheme just described is an interactive protocol that requires participation from both parties. How do we turn this into a non-interactive signature scheme where one party issues signatures and other parties can verify them (the reason for this conversion is that anyone should be able to verify signatures publicly)? Here, we place the last Chinese box.</p><p>A three-move identification scheme can be turned into a signature scheme using the Fiat–Shamir transformation: instead of the verifier accepting the commitment and sending a challenge <i>c</i>, the prover computes the challenge as a hash <i>H(M || w1)</i> of the message <i>M</i> and of the value <i>w1</i> (computed in step 1 of the previous scheme). In this approach, the signer has created an instance of a lattice problem to which only the signer knows the solution.</p><p>This in turn means that if a message was signed with a key, it could only have been signed by the person with access to the private key, and it can be verified by anyone with access to the public key.</p><p>How is this procedure related to the lattice problems we have seen? Their hardness is used to prove the security of the scheme: specifically, that of the M-SIS (module SIS) problem and the decisional MLWE problem.</p><p>The Chinese box is now constructed, and we have a digital signature scheme that can be used safely in the face of quantum computers.</p>
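<p>To give a feel for the arithmetic, the following toy Python sketch (tiny, insecure parameters; real Dilithium uses n = 256 and q = 8380417, plus rejection sampling and the high-order/low-order decomposition, all omitted here) shows multiplication in a ring of the form Zq[X]/(X^n + 1), the key relation t = As1 + s2, and a Fiat–Shamir-style challenge derived as a hash.</p>

```python
import hashlib

q, n = 257, 8   # toy ring Z_q[X]/(X^n + 1); real Dilithium uses q = 8380417, n = 256

def poly_add(a, b):
    return [(x + y) % q for x, y in zip(a, b)]

def poly_mul(a, b):
    """Multiply in Z_q[X]/(X^n + 1): X^n wraps around to -1 (negacyclic)."""
    res = [0] * n
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j < n:
                res[i + j] = (res[i + j] + ai * bj) % q
            else:                              # X^(i+j) = -X^(i+j-n)
                res[i + j - n] = (res[i + j - n] - ai * bj) % q
    return res

# Toy key relation t = A*s1 + s2, with a 1x1 "matrix" A (a single ring element).
A  = [3, 1, 4, 1, 5, 9, 2, 6]                  # public; uniformly random in the real scheme
s1 = [1, 0, q - 1, 1, 0, 0, 1, 0]              # private, small coefficients (q - 1 is -1 mod q)
s2 = [0, 1, 0, 0, q - 1, 1, 0, 0]              # private, small coefficients
t  = poly_add(poly_mul(A, s1), s2)             # the public value t = A*s1 + s2

# Fiat-Shamir: derive the challenge by hashing the message together with the
# commitment, instead of having a verifier choose it interactively.
def fiat_shamir_challenge(message, w1):
    data = message + b"".join(x.to_bytes(2, "little") for x in w1)
    return hashlib.shake_256(data).digest(32)

print(fiat_shamir_challenge(b"attack at dawn", t).hex())
```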
    <div>
      <h3>Other Digital Signatures beyond Dilithium</h3>
      <a href="#other-digital-signatures-beyond-dilithium">
        
      </a>
    </div>
    <p>In Star Trek, <a href="https://en.wikipedia.org/wiki/Dilithium_(Star_Trek)"><i>Dilithium</i></a> is a rare material that cannot be replicated. Similarly, signatures cannot be replicated or forged: each one is unique. But this does not mean that there are no other algorithms we can use to generate post-quantum signatures. Dilithium is currently designated as a finalist candidate for standardization as part of the post-quantum NIST process. But, there are others:</p><ul><li><p><a href="https://falcon-sign.info/">Falcon</a>, another lattice-based candidate, based on NTRU lattices.</p></li><li><p><a href="https://www.pqcrainbow.org/">Rainbow</a>, a scheme based on multivariate polynomials.</p></li></ul><p>We have seen examples of KEMs in <a href="/post-quantum-key-encapsulation">other blog posts</a> and signatures that are resistant to attacks by quantum computers. Now is the time to step back and take a look at the bigger picture. We have the building blocks, but the problem of actually building post-quantum secure cryptographic protocols with them remains, as well as making existing protocols such as TLS post-quantum secure. This problem is not entirely straightforward, owing to the trade-offs that post-quantum algorithms present. As we have carefully stitched together mathematical problems and cryptographic tools to get algorithms with the properties we desire, so do we have to carefully compose these algorithms to get the secure protocols that we need.</p>
    <div>
      <h3>References:</h3>
      <a href="#references">
        
      </a>
    </div>
    <ul><li><p>“Fiat-Shamir With Aborts: Applications to Lattice and Factoring-Based Signatures” by Vadim Lyubashevsky: <a href="https://www.iacr.org/archive/asiacrypt2009/59120596/59120596.pdf">https://www.iacr.org/archive/asiacrypt2009/59120596/59120596.pdf</a></p></li><li><p>“A Concrete Treatment of Fiat-Shamir Signatures in the Quantum Random-Oracle Model” by Eike Kiltz, Vadim Lyubashevsky and Christian Schaffner: <a href="https://eprint.iacr.org/2017/916.pdf">https://eprint.iacr.org/2017/916.pdf</a></p></li><li><p>“On Lattices, Learning with Errors, Random Linear Codes, and Cryptography” by Oded Regev: <a href="https://cims.nyu.edu/~regev/papers/qcrypto.pdf">https://cims.nyu.edu/~regev/papers/qcrypto.pdf</a></p></li><li><p>“CRYSTALS: (Cryptographic Suite for Algebraic Lattices) Dilithium” by Léo Ducas, Eike Kiltz, Tancrede Lepoint, Vadim Lyubashevsky, Peter Schwabe, Gregor Seiler and Damien Stehle: <a href="https://csrc.nist.gov/CSRC/media/Presentations/crystals-dilithium-round-2-update/images-media/crystals-dilithium-lyubashevsky.pdf">https://csrc.nist.gov/CSRC/media/Presentations/crystals-dilithium-round-2-update/images-media/crystals-dilithium-lyubashevsky.pdf</a></p></li><li><p>“The Design and Cryptanalysis of Post-Quantum Digital Signature Algorithms” by Ward Beullens: <a href="https://www.esat.kuleuven.be/cosic/publications/thesis-417.pdf">https://www.esat.kuleuven.be/cosic/publications/thesis-417.pdf</a></p></li><li><p>“How To Prove Yourself: Practical Solutions to Identification and Signature Problems” by Amos Fiat and Adi Shamir: <a href="https://link.springer.com/content/pdf/10.1007/3-540-47721-7_12.pdf">https://link.springer.com/content/pdf/10.1007/3-540-47721-7_12.pdf</a></p></li></ul><p>.....</p><p><sup>1</sup>This “high-order” and “low-order” procedure decomposes a vector, and there is a specific procedure for this for Dilithium. It aims to reduce the size of the public key.</p> ]]></content:encoded>
            <category><![CDATA[Research]]></category>
            <category><![CDATA[Cryptography]]></category>
            <category><![CDATA[Post-Quantum]]></category>
            <guid isPermaLink="false">U0ZZL2iHPnagcIJvsEI3A</guid>
            <dc:creator>Goutam Tamvada</dc:creator>
            <dc:creator>Sofía Celi</dc:creator>
        </item>
        <item>
            <title><![CDATA[The post-quantum state: a taxonomy of challenges]]></title>
            <link>https://blog.cloudflare.com/post-quantum-taxonomy/</link>
            <pubDate>Mon, 21 Feb 2022 13:59:45 GMT</pubDate>
            <description><![CDATA[ At Cloudflare, we strive to help build a better Internet, which means a quantum-protected one. In this post, we look at the challenges for migrating to post-quantum cryptography and what lies ahead using a taxonomy ]]></description>
            <content:encoded><![CDATA[ 
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3YbIrwSzZaG5GIHfUgte1e/8b883eb940ff6133e482df239b414943/image2-17.png" />
            
            </figure><p>At Cloudflare, we help to build a <a href="https://www.cloudflare.com/betterinternet/">better Internet</a>. In the face of <a href="/securing-the-post-quantum-world/">quantum computers and their threat to cryptography</a>, we want to provide protections for this future challenge. The only way that we can change the future is by analyzing and perusing the past. Only in the present, with the past in mind and the future in sight, can we categorize and unfold events. Predicting, understanding and anticipating quantum computers (with the opportunities and challenges they bring) is a daunting task. We can, though, create a taxonomy of these challenges, so the future can be better unrolled.</p><p>This is the first blog post in a post-quantum series, where we talk about our past, present and future “adventures in the Post-Quantum land”. We have <a href="/securing-the-post-quantum-world/">written about previous post-quantum efforts</a> at Cloudflare, but we think that here first we need to understand and categorize the problem by looking at what we have done and what lies ahead. So, welcome to our adventures!</p><p>A taxonomy of the challenges ahead that quantum computers and their threat to cryptography bring (for more information about it, read <a href="/quantum-solace-and-spectre">our other blog posts</a>) could be a good way to approach this problem. This taxonomy should not only focus at the technical level, though. Quantum computers fundamentally change the way certain protocols, properties, storage and retrieval systems, and infrastructure need to work. Following the <a href="https://en.wikipedia.org/wiki/Taxonomy_(biology)">biological tradition</a>, we can see this taxonomy as the idea of grouping together problems into spaces and the arrangement of them into a classification that helps to understand what and who the problems impact. 
Following the same tradition, we can use the idea of <a href="https://en.wikipedia.org/wiki/Kingdom_(biology)">kingdoms</a> to classify those challenges. The kingdoms are (in no particular order):</p><ol><li><p>Challenges at the Protocols level, <i>Protocolla</i></p></li><li><p>Challenges at the Implementation level, <i>Implementa</i></p></li><li><p>Challenges at the Standardization level, <i>Regulae</i></p></li><li><p>Challenges at the Community level, <i>Communitates</i></p></li><li><p>Challenges at the Research level, <i>Investigationes</i></p></li></ol><p>Let’s explore them one by one.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1LI4lo7slRYcSTCmVPR6bk/6d6749e5a99dff427570f98026e3667c/image1-16.png" />
            
            </figure><p>The taxonomy tree of the post-quantum challenges.</p>
    <div>
      <h3>Challenges at the Protocols level, <i>Protocolla</i></h3>
      <a href="#challenges-at-the-protocols-level-protocolla">
        
      </a>
    </div>
    <p>One conceptual model of the Internet is that it is composed of layers that stack on top of each other, as defined by the <a href="https://en.wikipedia.org/wiki/OSI_model">Open Systems Interconnection (OSI) model</a>. Communication protocols in each layer enable parties to interact with a corresponding one at the same layer. While quantum computers are a threat to the security and privacy of digital connections, they are not a direct threat to communication itself (we will see, though, that one of the consequences of the existence of quantum computers is how new algorithms that are safe against their attacks can impact the communication itself). But, if the protocol used in a layer aims to provide certain security or privacy properties, those properties can be endangered by a quantum computer. The properties that these protocols often aim to provide are <a href="https://users.ece.cmu.edu/~koopman/des_s99/security/">confidentiality</a> (no one can read the content of communications), <a href="https://users.ece.cmu.edu/~koopman/des_s99/security/">authentication</a> (communication parties are assured who they are talking to) and <a href="https://users.ece.cmu.edu/~koopman/des_s99/security/">integrity</a> (the content of the communication is assured to have not been changed in transit).</p><p>Well-known examples of protocols that aim to provide security or privacy properties to protect the different layers are:</p><ul><li><p>For the Network layer: <a href="https://en.wikipedia.org/wiki/IPsec">IPsec</a> or <a href="https://www.wireguard.com/papers/wireguard.pdf">WireGuard</a>.</p></li><li><p>For the Transport layer: <a href="https://en.wikipedia.org/wiki/Transport_Layer_Security">Transport Layer Security (TLS)</a> or <a href="https://en.wikipedia.org/wiki/QUIC">QUIC</a>.</p></li><li><p>For the Application layer: <a href="https://www.cloudflare.com/learning/dns/dnssec/ecdsa-and-dnssec/">DNSSEC</a>, <a href="https://datatracker.ietf.org/wg/mls/about/">Messaging Layer 
Security (MLS)</a>, <a href="https://en.wikipedia.org/wiki/Secure_Shell">Secure Shell (SSH</a>), and many more.</p></li></ul><p>We know how to provide those properties (and protect the protocols) against the threat of quantum computers: the solution is to use post-quantum cryptography, and, for this reason, the National Institute of Standards and Technology (NIST) has been running an <a href="https://csrc.nist.gov/projects/post-quantum-cryptography">ongoing process</a> to choose the most suitable algorithms. At this protocol level, then, the problem doesn’t seem to be adding these new algorithms, as nothing impedes it from a theoretical perspective. But protocols and our connections often have other requirements or constraints. Requirements or constraints are, for example, that data to be exchanged fits into a specific packet or segment size. In the case of the Transport Control Protocol (TCP), which ensures that data packets are delivered and received in order, for example, the <a href="https://en.wikipedia.org/wiki/Maximum_segment_size">Maximum Segment Size</a> — <a href="https://www.cloudflare.com/learning/network-layer/what-is-mss/">MSS</a> — sets the largest segment size that a network-connected device can receive and, if a packet exceeds the MSS, it is dropped (certain middleboxes or software desire all data to fit into a single segment). IPv4 hosts are required to handle an MSS of 536 bytes and IPv6 hosts are required to handle an MSS of 1220 bytes. In the case of <a href="https://www.cloudflare.com/learning/dns/what-is-dns/">DNS</a>, it has primarily answered queries over the User Datagram Protocol (UDP), which limits the answer size to 512 bytes unless the extension EDNS is in use. 
Larger packets are subject to fragmentation, or trigger a request to retry over TCP: this results in added round-trips, which should be avoided.</p><p>Another important requirement is that the operations that algorithms execute (such as multiplication or addition) are fast enough, and the resources they consume (which can be time but also space/memory) small enough, that connection times are not impacted. This is even more pressing where fast, reliable and cheap Internet access is not available, which is the case in much of the world.</p><p>TLS is a protocol that needs to handle heavy HTTPS traffic load and one that gets impacted by the additional cost that cryptographic operations add. Since 2010, TLS has been deemed “<a href="https://www.imperialviolet.org/2010/06/25/overclocking-ssl.html">not computationally expensive any more</a>” due to the usage of fast cryptography implementations and abbreviated handshakes (by using <a href="/tls-session-resumption-full-speed-and-secure/">resumption</a>, for example). But what if we add post-quantum algorithms into the TLS handshake? Some post-quantum algorithms can be suitable, while others might not be.</p><p>Many of the schemes that are part of the <a href="https://csrc.nist.gov/projects/post-quantum-cryptography/round-3-submissions">third round</a> of the NIST post-quantum process, that can be used for confidentiality, seem to have encryption and decryption performance comparable to (or faster than) <a href="/a-relatively-easy-to-understand-primer-on-elliptic-curve-cryptography/">fast elliptic-curve cryptography</a>. This in turn means that, from this point of view, they can be practically used. But what about the storage or transmission of the public/private keys that they produce (an attack can be mounted, for example, to force a server into constantly storing big public keys, which can lead to a denial-of-service attack or variants of the <a href="https://en.wikipedia.org/wiki/SYN_flood">SYN flood attack</a>)? 
What about their impacts on bandwidth and latency? Does having larger keys that span multiple packets (or congestion windows) affect performance, especially in scenarios with degraded networks or packet loss?</p><p><a href="/the-tls-post-quantum-experiment/">In 2019</a>, we ran a wide-scale post-quantum experiment with <a href="https://www.imperialviolet.org/2019/10/30/pqsivssl.html">Google</a>’s Chrome Browser team to test these ideas. The goal of that experiment was to assess, in a controlled environment, the impact of adding post-quantum algorithms to the Key Exchange phase of TLS handshakes. This investigation gave us some good insight into what kind of post-quantum algorithms can be used in the Key Agreement part of a TLS handshake (mainly <a href="https://en.wikipedia.org/wiki/Lattice-based_cryptography">lattice-based schemes</a>, or so it seems) and allowed us to test them in a real-world setting. It is worth noting that these algorithms were tested in a “<a href="https://datatracker.ietf.org/doc/html/draft-ietf-tls-hybrid-design">hybrid mode</a>”: a combination of classical cryptographic algorithms with post-quantum ones.</p><p>The key exchange of TLS ensures confidentiality: updating it is the most pressing task, as non-post-quantum traffic captured today can be decrypted by a quantum computer in the future. Luckily, many of the post-quantum Key Encapsulation Mechanisms (KEMs) that are under consideration by NIST seem to be well suited, with minimal performance impact. Unfortunately, a problem that has arisen before when considering protocol changes is the presence and behavior of old servers and clients, and of “<a href="https://en.wikipedia.org/wiki/Middlebox">middleboxes</a>” (devices that inspect and manipulate traffic for purposes other than continuing the communication process). 
For instance, some middleboxes assume that the first message of a handshake from a browser fits in a single network packet, which is not the case when adding the majority of post-quantum KEMs. Such false assumptions (“<a href="https://en.wikipedia.org/wiki/Protocol_ossification">protocol ossification</a>”) are not a problem unique to post-quantum cryptography: the TLS 1.3 standard is carefully crafted to <a href="/why-tls-1-3-isnt-in-browsers-yet/">work around quirks of older clients</a>, servers and middleboxes.</p><p>While all the data seems to suggest that replacing classical cryptography by post-quantum cryptography in the key exchange phase of TLS handshakes is a straightforward exercise, the problem seems to be much harder for handshake authentication (or for any protocol that aims to give authentication, such as DNSSEC or IPsec). The majority of TLS handshakes achieve authentication by using digital signatures generated via advertised public keys in public certificates (what is called “certificate-based” authentication). Most of the post-quantum signature algorithms currently being considered for standardization in the NIST post-quantum process, have signatures or public keys that are much larger than their classical counterparts. Their operations’ computation time, in the majority of cases, is also much bigger. It is unclear how this will affect the TLS handshake latency and round-trip times, though we have a <a href="/sizing-up-post-quantum-signatures/">better insight now</a> in respect to which sizes can be used. We still need to know how much slowdown will be acceptable for early adoption.</p><p>There seems to be several ways by which we can add post-quantum cryptography to the authentication phase of TLS. 
We can:</p><ul><li><p>Change the standard to reduce the number of signatures needed.</p></li><li><p>Use different post-quantum signature schemes that fit.</p></li><li><p>Or achieve authentication in a novel way.</p></li></ul><p>On the latter, a novel way to achieve certificate-based TLS authentication is to use KEMs, as their post-quantum versions have smaller sizes than post-quantum signatures. This mechanism is called <a href="https://eprint.iacr.org/2020/534">KEMTLS</a> and we ran a controlled <a href="/kemtls-post-quantum-tls-without-signatures/">experiment</a> showing that it performs well, even when it adds an extra or full round trip to the handshake (KEMTLS adds half a round trip for server-only authentication and a full round-trip for mutual authentication). It is worth noting that we only experimented with replacing the authentication algorithm in the handshake itself and not all the authentication algorithms needed for the certificate chain. We used a mechanism called “<a href="/keyless-delegation/">delegated credentials</a>” for this: since we can’t change the whole certificate chain to post-quantum cryptography (as it involves other actors beyond ourselves), we use this short-lived credential that advertises new algorithms. More details around this experiment can be found in <a href="https://eprint.iacr.org/2021/1019">our paper</a>.</p><p>Lastly, on the TLS front, <a href="/sizing-up-post-quantum-signatures/">we wanted to</a> test the notion that having bigger signatures (such as the post-quantum ones) noticeably impacts TLS handshake times. Since it is difficult to deploy post-quantum signatures to real-world connections, we found a way to emulate bigger signatures without having to modify clients. This emulation was done by using dummy data. The result of this experiment showed that even if large signatures fit in the TCP congestion window, there will still be a double-digit percentage slowdown due to the relatively low average Internet speed. 
This slowdown is a hard sell for browser vendors and for content servers to adopt. The ideal situation for early adoption seems to be that the six signatures and two public keys of the TLS handshake fit together within 9kB (the signatures are: two in the certificate chain, one handshake signature, one OCSP staple and two SCTs for certificate transparency).</p><p>After this TLS detour, we can now list the challenges at this kingdom, <i>Protocolla</i>, level. The challenges (in no order in particular) seem to be (divided into sections):</p><p><i>Storage of cryptographic parameters used during the protocol’s execution:</i></p><ul><li><p>How are we going to properly store post-quantum cryptographic parameters, such as keys or certificates, that are generated for/during protocol execution (their sizes are bigger than what we are accustomed to)?</p></li><li><p>How is post-quantum cryptography going to work with stateless servers, ones that do not store session state and where every client request is treated as a new one, such as <a href="https://en.wikipedia.org/wiki/Network_File_System">NFS, Sun’s Network File System</a> (for an interesting discussion on the matter, see <a href="https://mctiny.org/mctiny-20191202.pdf">this paper</a>)?</p></li></ul><p><i>Long-term operations and ephemeral ones:</i></p><ul><li><p>What are the impacts of using post-quantum cryptography for long-term operations or for ephemeral ones: will bigger parameters make ephemeral connections a problem?</p></li><li><p>Are security properties assumed in protocols preserved and could we relax others (such as IND-CCA or IND-CPA. For an interesting discussion on the matter, see <a href="https://eprint.iacr.org/2020/379.pdf">this paper</a>)?</p></li></ul><p><i>Managing bigger keys and signatures:</i></p><ul><li><p>What are the impacts on latency and bandwidth?</p></li><li><p>Does the usage of post-quantum increase the roundtrips at the Network layer, for example? 
And, if so, are these increases tolerable?</p></li><li><p>Will the increased sizes cause dropped or fragmented packets?</p></li><li><p>Devices can occasionally have settings for packets smaller than expected: a router along a network path, for example, can have a maximum transmission unit, MTU (the MSS plus the TCP and IP headers), set lower than the typical 1,500 bytes. In these scenarios, will post-quantum cryptography make these settings more difficult (one can apply <a href="https://docs.vmware.com/en/VMware-NSX-T-Data-Center/3.1/administration/GUID-5ADD0FBB-7F30-4933-8737-2AC0D919EE3F.html">MSS clamping</a> in some cases)?</p></li></ul><p><i>Preservation of protocols as we know them:</i></p><ul><li><p>Can we preserve the same security and privacy properties we rely on today?</p></li><li><p>Can protocols change: should we change, for example, the way DNSSEC or the <a href="https://en.wikipedia.org/wiki/Public_key_infrastructure">PKI</a> work? Is such a radical change feasible?</p></li><li><p>Can we integrate and deploy novel ways to achieve authentication?</p></li></ul><p><i>Hardware (or novel alternatives to hardware) usage during a protocol’s execution:</i></p><ul><li><p>Is <a href="https://www.nist.gov/publications/post-quantum-cryptography-and-5g-security-tutorial">post-quantum cryptography</a> going to impact <a href="https://docbox.etsi.org/isg/nfv/open/Publications_pdf/White%20Papers/NFV_White_Paper1_2012.pdf">network function virtualization</a> (as used in 5G cellular networks)?</p></li><li><p>Will middleboxes be able to handle post-quantum cryptography (as noted in “<a href="https://www.chromium.org/cecpq2">The Chromium Projects</a>”)?</p></li><li><p>What will be the impacts on mobile devices’ connections?</p></li><li><p>What will be the impacts on old servers and clients?</p></li></ul><p><i>Novel attacks:</i></p><ul><li><p>Will post-quantum cryptography increase the possibility of mounting denial-of-service attacks?</p></li></ul>
    <div>
      <h3>Challenges at the Implementation level, <i>Implementa</i></h3>
      <a href="#challenges-at-the-implementation-level-implementa">
        
      </a>
    </div>
    <p>The second kingdom that we are going to look at is the one that deals with the implementation of post-quantum algorithms. The ongoing NIST process is standardizing post-quantum algorithms on two fronts: those that help preserve confidentiality (KEMs) and those that provide authentication (signatures). There are other algorithms not currently part of the process that can already be used in a post-quantum world, such as <a href="https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-208.pdf">hash-based signatures</a> (for example, <a href="https://datatracker.ietf.org/doc/html/rfc8391">XMSS</a>).</p><p>What must happen for algorithms to be widely deployed? What are the steps they need to take in order to be usable by protocols or for data at rest? The usual path that algorithms take is:</p><ol><li><p>Standardization: usually by a standardization body. We will talk more about this in the next section.</p></li><li><p>Efficiency at an algorithmic level: by finding new ways to speed up operations in an algorithm. In the case of Elliptic Curve Cryptography, for example, this happened with the use of <a href="https://link.springer.com/chapter/10.1007/3-540-44647-8_11">endomorphisms for faster scalar multiplication</a>.</p></li><li><p>Efficient software implementations: by identifying the pitfalls that cause increases in time or space consumption (in the case of ECC, <a href="https://link.springer.com/content/pdf/10.1007%2F3-540-44499-8_1.pdf">this paper</a> illustrates these efforts), and fixing them.
An optimal implementation is always dependent on the target, though: where it will be used.</p></li><li><p>Hardware efficiency: by using, for example, <a href="https://en.wikipedia.org/wiki/Hardware_acceleration">hardware acceleration</a>.</p></li><li><p>Avoidance of attacks: by guarding against the usual pitfalls of implementations which, in practice, are <a href="https://en.wikipedia.org/wiki/Side-channel_attack">side-channel attacks</a>.</p></li></ol><p>Implementation of post-quantum cryptography will follow (and is following) the same path. <a href="https://en.wikipedia.org/wiki/Lattice-based_cryptography">Lattice-based cryptography</a> for KEMs, for example, has taken <a href="https://eprint.iacr.org/2016/461.pdf">many steps</a> in order to be faster than ECC (but, from a protocol-level perspective, it is less efficient, as its parameters are bigger than ECC’s and might cause extra round trips). <a href="https://arxiv.org/pdf/1711.04062.pdf">Isogeny-based cryptography</a>, on the other hand, is still too slow (due to <a href="https://www.martindale.info/car_article.pdf">long isogeny evaluations</a>), but it is an active area of research.</p><p>The challenges at this kingdom, <i>Implementa</i> (in no particular order), are:</p><ul><li><p>Efficiency of algorithms: can we make them faster at the software level, the hardware level (by using acceleration or FPGA-based research) or the algorithmic level (with new data structures or parallelization techniques) to meet the requirements of network protocols and <a href="https://www.cloudflare.com/learning/performance/more/speed-up-the-web/">ever-faster connections</a>?</p></li><li><p>Can we use new mechanisms to accelerate algorithms (such as, for example, the use of floating-point numbers in the <a href="https://falcon-sign.info/">Falcon signature scheme</a>)?
Will this lead to <a href="https://cryptoservices.github.io/post-quantum/cryptography/2019/09/18/new-falcon-impl.html">portability issues</a>, as it might depend on the underlying architecture?</p></li><li><p>What is the asymptotic complexity of post-quantum algorithms (how do they impact time and space)?</p></li><li><p>How will post-quantum algorithms work on embedded devices, given their limited capacity (see this <a href="https://arxiv.org/abs/2106.05577">paper</a> for more details)?</p></li><li><p>How can we avoid attacks, failures in security proofs and misuse of APIs?</p></li><li><p>Can we provide correct testing of these algorithms?</p></li><li><p>Can we meet the <a href="https://www.bearssl.org/constanttime.html">constant-time</a> requirements of these algorithms?</p></li><li><p>What will happen in a disaster-recovery mode: what happens if an algorithm is found to be weaker than expected or is fully broken? How will we be able to remove or update this algorithm? How can we make sure there are transition paths to recover from a cryptographic weakening?</p></li></ul><p>At Cloudflare, we have also worked on implementations of post-quantum algorithms. We published our own library (<a href="https://github.com/cloudflare/circl">CIRCL</a>) that contains high-speed assembly versions of several post-quantum algorithms (like <a href="https://pq-crystals.org/kyber/">Kyber</a>, <a href="https://pq-crystals.org/dilithium/">Dilithium</a>, <a href="https://sike.org/">SIKE</a> and <a href="https://eprint.iacr.org/2018/383.pdf">CSIDH</a>). We believe that providing these implementations for public use will help others with the transition to post-quantum cryptography by giving easy-to-use APIs that developers can integrate into their projects.</p>
    <div>
      <h3>Challenges at the Standards level, <i>Regulae</i></h3>
      <a href="#challenges-at-the-standards-level-regulae">
        
      </a>
    </div>
    <p>The third kingdom deals with the standardization process as done by different standardization bodies (such as NIST or the Internet Engineering Task Force — IETF). We have talked a little about this in the previous section, as it involves the standardization of both protocols and algorithms. Standardization can be a long process due to the need for careful discussion, and this discussion will be needed for the standardization of post-quantum algorithms. Post-quantum cryptography is based on mathematical constructions that are not widely known by the engineering community, which can make standardization more difficult.</p><p>The challenges in this kingdom, <i>Regulae</i> (in no particular order), are:</p><ul><li><p>The mathematical basis of post-quantum cryptography is an active area of development and research, and there are some concerns about the security it provides (are there new attacks on the confidentiality or authentication it gives?). How will standardization bodies approach this problem?</p></li><li><p>Post-quantum cryptography introduces new models in which to analyze the security of algorithms (for example, the use of the <a href="https://eprint.iacr.org/2010/428.pdf">Quantum Random Oracle Model</a>). Will this mean that new attacks or adversaries will go unnoticed at the standards level?</p></li><li><p>What will be the recommendation for migrating to post-quantum cryptography from the standards’ perspective: will we use a <a href="https://datatracker.ietf.org/doc/html/draft-ietf-tls-hybrid-design">hybrid approach</a>?</p></li><li><p>How can we bridge the academic/research community into the standardization community, so that analyses of protocols are carried out and attacks are found in time (prior to being widely deployed)<sup>1</sup>?
How can we make sure that standards bodies are informed enough to make the right practical/theoretical trade-offs?</p></li></ul><p>At Cloudflare, we are closely collaborating with standardization bodies to prepare the path for post-quantum cryptography (see, for example, the <a href="https://github.com/claucece/draft-celi-wiggers-tls-authkem">AuthKEM</a> draft at IETF).</p>
    <div>
      <h3>Challenges at the Community level, <i>Comunitates</i></h3>
      <a href="#challenges-at-the-community-level-comunitates">
        
      </a>
    </div>
    <p>The Internet is not an isolated system: it is a community of different actors coming together to make protocols and systems work. Migrating to post-quantum cryptography means sitting together as a community to update systems and understand the different needs. This is one of the reasons why, at Cloudflare, we are organizing a second installment of the <a href="https://www.sofiaceli.com/PQNet-Workshop/">PQNet workshop</a> (expected to be colocated with <a href="https://rwc.iacr.org/2022/">Real World Crypto 2022</a> in April 2022) for experts on post-quantum cryptography to talk about the challenges of putting it into protocols, systems and architectures.</p><p>The challenges in this kingdom, <i>Comunitates</i>, are:</p><ul><li><p>What are the needs of different systems? While we know what the needs of different protocols are, we don’t know exactly how all deployed systems and services work. Are there further restrictions?</p></li><li><p>In certain systems (for example, the PKI), when will the migration happen, and how will it be coordinated?</p></li><li><p>How will the migration be communicated to the end-user?</p></li><li><p>How will we deprecate pre-quantum cryptography?</p></li><li><p>How will we integrate post-quantum cryptography into systems where algorithms are hardcoded (such as IoT devices)?</p></li><li><p>Who will maintain implementations of post-quantum algorithms and protocols? Is there incentive and funding for a diverse set of interoperable implementations?</p></li></ul>
    <div>
      <h3>Challenges at the Research level, <i>Investigationes</i></h3>
      <a href="#challenges-at-the-research-level-investigationes">
        
      </a>
    </div>
    <p>Post-quantum cryptography is an active area of research. This research is not devoted only to how algorithms interact with protocols, systems and architectures (as we have seen), but is also heavily invested at the foundational level. The open challenges on this front are many. We will list four that are of most interest to us:</p><ul><li><p>Are there any efficient and secure post-quantum non-interactive key exchange (NIKE) algorithms?</p></li></ul><p><a href="https://eprint.iacr.org/2012/732.pdf">NIKE</a> is a cryptographic algorithm which enables two participants, who know each other’s public keys, to agree on a shared key without requiring any interaction. An example of a NIKE is the <a href="https://en.wikipedia.org/wiki/Diffie%E2%80%93Hellman_key_exchange">Diffie-Hellman algorithm</a>. There are no efficient and secure post-quantum NIKEs. A candidate seems to be <a href="https://eprint.iacr.org/2018/383.pdf">CSIDH</a>, which is rather slow and whose security <a href="https://csrc.nist.gov/CSRC/media/Events/Second-PQC-Standardization-Conference/documents/accepted-papers/stebila-prototyping-post-quantum.pdf">is debated</a>.</p><ul><li><p>Are there post-quantum alternatives to (V)OPRF-based protocols, such as <a href="/privacy-pass-v3/">Privacy Pass</a> or <a href="/opaque-oblivious-passwords/">OPAQUE</a>?</p></li><li><p>Are there post-quantum alternatives to other cryptographic schemes, such as threshold signature schemes, credential-based signatures and more?</p></li><li><p>How can post-quantum algorithms be formally verified with new notions such as the QROM?</p></li></ul>
    <div>
      <h3>Post-Quantum Future</h3>
      <a href="#post-quantum-future">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/fSXz737Szc3uhVJsP7M2Z/405e1dbd816bcb7243684989bfcb2df2/image3-19.png" />
            
</figure><p>The future of Cloudflare is post-quantum.</p><p>What is the post-quantum future at Cloudflare? There are many avenues that we explore in this blog series. While all of these experiments have given us good and reliable information for the post-quantum migration, we still need tests in different network environments and with a broader set of connections. We also need to test how post-quantum cryptography fits into different architectures and systems. We are preparing a bigger, wide-scale post-quantum effort that will give more insight into what can be done for real-world connections.</p><p>In this series of blog posts, we will be looking at:</p><ul><li><p>What has been happening in the quantum research world in recent years and how it impacts post-quantum efforts.</p></li><li><p>What a post-quantum algorithm is: explanations of KEMs and signatures, and their security properties.</p></li><li><p>How one integrates post-quantum algorithms into protocols.</p></li><li><p>What formal verification, analysis and implementation are, and why they are needed for the post-quantum transition.</p></li><li><p>What it means to implement post-quantum cryptography: we will look at our efforts making Logfwdr, Cloudflare Tunnel and Gokeyless post-quantum.</p></li><li><p>What the future holds for a post-quantum Cloudflare.</p></li></ul><p>See you in the next blog post, and prepare for a better, safer, faster and quantum-protected Internet!</p><p>If you are a student enrolled in a PhD or equivalent research program and looking for an internship for 2022, see <a href="https://www.cloudflare.com/careers/jobs/?department=University&amp;location=default"><b>open opportunities</b></a><b>.</b></p><p>If you’re interested in contributing to projects helping Cloudflare, <a href="https://www.cloudflare.com/careers/jobs/?department=Engineering&amp;location=default">our engineering teams are hiring</a>.</p><p>You can reach us with questions, comments, and research ideas at 
<a>ask-research@cloudflare.com</a>.</p><p>.......</p><p><sup>1</sup>A success scenario of this is the standardization of TLS 1.3 (in comparison to TLS 1.0-1.2), as it involved the formal verification community, which helped bridge the academic and standards communities to good effect. Read <a href="https://pure.royalholloway.ac.uk/portal/files/27884959/paper.pdf">the analysis</a> of this novel process.</p> ]]></content:encoded>
            <category><![CDATA[Research]]></category>
            <category><![CDATA[Cryptography]]></category>
            <category><![CDATA[Post-Quantum]]></category>
            <guid isPermaLink="false">15pArsKO5W1d6vdJcHdp2x</guid>
            <dc:creator>Sofía Celi</dc:creator>
        </item>
        <item>
            <title><![CDATA[The quantum solace and spectre]]></title>
            <link>https://blog.cloudflare.com/quantum-solace-and-spectre/</link>
            <pubDate>Mon, 21 Feb 2022 13:59:28 GMT</pubDate>
            <description><![CDATA[ What is quantum computing and what advances have been made so far on this front? In this blog post, we will answer these questions and see how to protect against quantum adversaries ]]></description>
            <content:encoded><![CDATA[ <p></p><blockquote><p><i>Not only is the universe stranger than we think, but it is stranger than we can think — </i><b><i>Werner Heisenberg</i></b></p></blockquote><p>Even for a physicist as renowned as Heisenberg, the universe was strange. And it was strange because several phenomena could only be explained through the lens of quantum mechanics. This field changed the way we understood the world, challenged our imagination, and, since the <a href="https://en.wikipedia.org/wiki/Solvay_Conference">Fifth Solvay Conference</a> in 1927, has been integrated into every explanation of the physical world (it is, to this day, our best description of the inner workings of nature). Quantum mechanics created a rift: every physical phenomenon (even the most micro and macro ones) stopped being explained only by <a href="https://en.wikipedia.org/wiki/Classical_physics">classical physics</a> and started to be explained by quantum mechanics. There is another world in which quantum mechanics has not yet created this rift: the realm of computers (note, though, that manufacturers have been <a href="https://semiengineering.com/quantum-effects-at-7-5nm/">affected by quantum effects</a> for a long time). That is about to change.</p><p><a href="https://www.eejournal.com/article/richard-feynman-and-quantum-computing/">In the 80s</a>, several physicists (including, for example, Richard Feynman and Yuri Manin) asked themselves these questions: are there computers that can, with high accuracy and in a reasonable amount of time, simulate physics? And, specifically, can they simulate quantum mechanics? These two questions started a long line of work that we now call “quantum computing”. Note that you can simulate quantum phenomena with classical computers. 
But a quantum computer can do it much faster.</p><p>Any computer is already a quantum system, but a quantum computer uses the true computational power that quantum mechanics offers. It solves some problems much faster than a classical computer. Any computational problem solvable by a classical computer is also solvable by a quantum computer, as any physical phenomenon (such as the operations performed in classical computers) can be described using quantum mechanics. Conversely, any problem solvable by a quantum computer is also solvable by a classical computer: quantum computers provide no additional power over classical computers in terms of what they can theoretically compute. Quantum computers cannot solve <a href="https://en.wikipedia.org/wiki/Undecidable_problem">undecidable problems</a>, and they do not disprove the <a href="https://en.wikipedia.org/wiki/Church%E2%80%93Turing_thesis">Church–Turing thesis</a>. But they are faster for specific problems! Let’s explore why this is the case.</p><p>In a classical computer, data is stored in bits: 0 or 1, current <i>on</i> or current <i>off</i>, true or false. Nature is not that binary: any physical bit (current or not; particle or not; magnetized or not) is in fact a <i>qubit</i> and exists on a spectrum (or rather a <a href="https://en.wikipedia.org/wiki/Bloch_sphere">sphere</a>) of superpositions between 0 and 1. We have mentioned two terms that might be new to the reader: <i>qubit</i> and <i>superposition</i>. Let’s first define the second term.</p><p>A <i>superposition</i> is a normalized linear combination of all the possible configurations of <i>something</i>. What is this <i>something</i>? 
If we take the <a href="https://en.wikipedia.org/wiki/Schr%C3%B6dinger_equation">Schrödinger equation</a> to explain, “for any isolated region of the universe that you want to consider, that equation describes the evolution in time of the state of that region, which we represent as a normalized linear combination — a <i>superposition</i> — of all the possible configurations of elementary particles in that region”, as <a href="https://scottaaronson.com/democritus/lec1.html">Aaronson put it</a>. <i>Something</i>, then, refers to all elementary particles. Does it mean, then, that we all are in a superposition? The answer to that question is open to many interpretations (for a nice introduction to the topic, one can read “Quantum Computing Since Democritus” by Scott Aaronson).</p><p>As you see, we are then no longer concerned with matter, energy or waves (as in classical physics), but rather with information and probabilities. In a quantum system, we first initialize information (like a <i>bit</i>) to some “easy” states, like 0 and 1. This information will be in a superposition of both states at the same time rather than just one <i>or</i> the other. If we examine a classical bit, we will get either the state 1 or 0. Instead, with a <i>qubit</i> (quantum information) we can only get results with some probability. A <i>qubit</i> can exist in a continuum of states between |0⟩ and |1⟩ (using <a href="https://en.wikipedia.org/wiki/Bra%E2%80%93ket_notation">Dirac’s notation</a>), each waiting to happen with some probability until it is observed.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/261LCjxL0vVwA0MLjCA6Nt/0719f89160ce7002d65721817c83ffd1/image2-15.png" />
            
</figure><p>A qubit stores a combination of two or more states.</p><p>In day-to-day life, we don’t see this quantum weirdness because we’re looking at huge jittery ensembles of qubits where this weirdness gets lost in noise. However, if we turn the temperature way down and isolate some qubits from noise, we can do some proper quantum computation. For instance, to find a 256-bit decryption key, we can set 256 qubits to a superposition of all possible keys. Then, we can run the decryption algorithm, trying the key held in the qubits. We end up with a superposition of many failures and one success. We did 2<sup>256</sup> computations in a single go! Unfortunately, we can’t read out this superposition. In fact, if we open up the quantum computer to inspect the qubits inside, the outside noise will collapse the superposition into one outcome, which will most certainly be a failure. The real art of quantum computing is interfering this superposition with itself in a clever way to amplify out the answer we want. For the case we are talking about, <a href="https://en.wikipedia.org/wiki/Grover%27s_algorithm">Lov Grover</a> figured out how to do it. Unfortunately, it requires about 2<sup>128</sup> steps: a nice, but not exponential, speed-up.</p><p>Quantum computers do not provide any additional power over classical computers in terms of computability: they don’t solve unsolvable problems. But they provide efficiency: they find solutions faster, even for problems that are too “expensive” for a non-quantum (classical) computer. A classical computer can solve any known computational problem if it is given enough time and resources. But, on some occasions, it will need so much time and so many resources that solving the problem is deemed infeasible: waiting millions of years with millions of computers working in parallel is not something we consider practical. 
This idea of efficient and inefficient algorithms was made mathematically precise by the field of computational complexity. Roughly speaking, an efficient algorithm is one which runs in time polynomial in the size of the problem solved.</p><p>We can look at this differently. The many quantum states can potentially be observed (even in a classical computer), but states <i>interfere</i> with each other, which prevents the use of statistical sampling to obtain the state. We then have to track every possible configuration that a quantum system can be in. Tracking every possible configuration is theoretically possible for a classical computer, but would exceed the memory capacity of even the most powerful classical computers. So, we need quantum computers to efficiently solve these problems.</p><p>Quantum computers are powerful. They can efficiently simulate quantum mechanics experiments, and can perform <a href="https://quantumalgorithmzoo.org/">a number of mathematical, optimization, and search problems</a> faster than classical computers. A sufficiently scaled quantum computer can, for example, factor large integers efficiently, as <a href="https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.123.5183&amp;rep=rep1&amp;type=pdf">was shown by Shor</a>. They also have significant advantages when it comes to searching <a href="https://arxiv.org/abs/quant-ph/9605043">unstructured databases</a> or <a href="https://arxiv.org/abs/quant-ph/0011023">solving the discrete logarithm problem</a>. These advances are the <i>solace</i>: they efficiently advance computational problems that seem stalled. But they are also a <i>spectre</i>, a threat: the problems that they efficiently solve form the core of much of cryptography as it exists today. Hence, they can break most of the secure connections we use nowadays. 
For a longer explanation of the matter, see <a href="/the-quantum-menace/">our past blog post</a>.</p><p>The realization of large-scale quantum computers will pose a threat to our connections and infrastructure. This threat is urgent and acts as a spectre of the future: an attacker can record and store secured traffic now and decrypt it later when a quantum computer exists. The spectre is ever more present if one recalls that migrating applications, architectures and protocols is a long procedure. Migrating these services to <a href="https://www.cloudflare.com/learning/ssl/quantum/what-is-post-quantum-cryptography/">quantum-secure cryptography</a> will take a lot of time.</p><p>In our <a href="/securing-the-post-quantum-world/">other blog posts</a>, we talk at length about what quantum computers are and what threats they pose. In this one, we want to give an overview of the advances in quantum computing and their threats to cryptography, as well as an update to what is considered quantum-secure cryptography.</p>
    <div>
      <h3>The new upcoming decade of quantum computing</h3>
      <a href="#the-new-upcoming-decade-of-quantum-computing">
        
      </a>
    </div>
    <p>In our <a href="/the-quantum-menace/">previous blog post</a>, we mainly talked about improvements, from mid-2019, to <a href="https://arxiv.org/abs/1905.09749">reduce the cost of factoring integers and computing discrete logarithms</a>. But recent years have brought new advances to the field. Google, for example, is aiming to build a “useful, error-corrected quantum computer” by <a href="https://blog.google/technology/ai/unveiling-our-new-quantum-ai-campus/">the end of the decade</a>. Their goal is to be able to better simulate nature and, specifically, to better simulate molecules. This research can then be used, for example, to create <a href="https://www.ibm.com/blogs/research/2020/01/next-gen-lithium-sulfur-batteries/">better batteries</a> or generate <a href="https://www.ibm.com/downloads/cas/8QDGKDZJ#:~:text=strict%20safety%20protocols.-,Quantum%20computing%20has%20the%20potential%20to%20improve%20the%20analysis%20of,may%20include%20single%2Dcell%20methods.">better targeted medicines</a>.</p><p>Why will it take them (and, generally, anyone) so long to create this quantum computer? Classical mechanisms, like Newtonian physics or classical computers, provide great detail and prediction of events and phenomena at a macroscopic level, which makes them perfect for our everyday computational needs. Quantum mechanics, on the other hand, is better manifested and understood at the microscopic level. Quantum phenomena are, furthermore, very fragile: uncontrolled interaction of quantum phenomena with the environment can make quantum features disappear, a process which is referred to as <i>decoherence</i>. Due to this, quantum states have to be properly prepared, controlled and isolated from their environment. 
Furthermore, <a href="https://en.wikipedia.org/wiki/Measurement_problem">measuring a quantum phenomenon</a> disturbs its state, so reliable methods of computing information have to be developed.</p><p>Creating a computer that achieves computational tasks under these challenges is a daunting task. In fact, it may ultimately be impossible to completely eliminate errors, as a certain amount of them will persist; reliable computation in a quantum computational model seems achievable only by employing error correction.</p><p>So, when will we know that a created quantum device is ready for the “quantum” use cases? One can use the idea of “<a href="https://arxiv.org/abs/1801.00862">quantum supremacy</a>”, which is defined as the ability of a quantum device to perform a computational task that would be practically impossible (in terms of efficiency and usage of computational power) for classical computers. Nevertheless, this idea can be criticized, as classical algorithms improve over time, and there is no clear definition of how to compare quantum computations with their classical counterparts (one must prove that no classical means would allow us to perform the same computational task faster) in order to say that the quantum computation of the task is indeed faster.</p><p>A molecule can indeed be claimed to be “quantum supreme” if researchers find it computationally prohibitive (unrealistically expensive) to solve the <a href="https://en.wikipedia.org/wiki/Schr%C3%B6dinger_equation">Schrödinger equation</a> and calculate its ground state. 
That molecule constitutes a “<a href="https://arxiv.org/pdf/1612.05903.pdf">useful quantum computer, for the task of simulating itself</a>.” But, for computer science, this individual instance is not enough: “for any one molecule, the difficulty in simulating it classically might reflect genuine asymptotic hardness, but it might also reflect other issues (e.g., a failure to exploit special structure in the molecule, or the same issues of modeling error, constant-factor overheads, and so forth that arise even in simulations of classical physics)”, as stated by <a href="https://arxiv.org/pdf/1612.05903.pdf">Aaronson and Chen</a>.</p><p>By the end of 2019, Google argued they had <a href="https://www.nature.com/articles/s41586-019-1666-5.pdf">reached quantum supremacy</a> (with the quantum processor named “<a href="https://en.wikipedia.org/wiki/Sycamore_processor">Sycamore</a>”) but their claim was <a href="https://www.ibm.com/blogs/research/2019/10/on-quantum-supremacy/">debated</a> mainly due to the perceived lack of criteria to properly define when “quantum supremacy” is reached<sup>1</sup>. Google claimed that “a state-of-the-art supercomputer would require approximately 10,000 years to perform the equivalent task” and it was refuted by IBM by saying that “an ideal simulation of the same task can be performed on a classical system in 2.5 days and with far greater fidelity”. Regardless of this lack of criteria, this result in itself is a remarkable achievement and shows great interest of parties into creating a quantum computer. Last November, it was claimed that the most challenging sampling task performed on Sycamore can be accomplished within one week on the <a href="https://en.wikipedia.org/wiki/Sunway_TaihuLight">Sunway supercomputer</a>. A crucial point has to be understood on the matter, though: “<a href="https://scottaaronson.blog/?p=6111">more like a week still seems to be needed on the supercomputer</a>”, as noted by Aaronson. 
Quantum supremacy, then, is still not a settled matter.</p><p>Google’s quantum computer has already been applied to simulating real physical phenomena. Early in 2021, it was used to <a href="https://arxiv.org/abs/2107.13571">demonstrate a “time crystal”</a> (a quantum system of particles which move in a regular, repeating motion without burning any energy to the environment). On the other hand, by late 2021, IBM <a href="https://techmonitor.ai/technology/ibm-eagle-chip-quantum-computing">unveiled its 127-qubit chip Eagle</a>, but we don’t yet know how much gate fidelity (how close <a href="https://arxiv.org/pdf/0910.1315.pdf">two operations are to each other</a>) it really attains, as outlined by Aaronson in <a href="https://scottaaronson.blog/?p=6111">his blog post</a>.</p><p>What do all of these advances mean for cryptography? Does it mean that soon all secure connections will be broken? If not “soon”, when will it be? The answers to all of these questions are complex and unclear. The <a href="https://globalriskinstitute.org/publications/quantum-threat-timeline-report-2020/">Quantum Threat Timeline Report</a> of 2020 suggests that the majority of researchers on the subject think that 15 to 40 years<sup>2</sup> will be the time frame for the arrival of a “cryptography-breaking” quantum computer. It is worth noting that these predictions are more optimistic than those in the same <a href="https://globalriskinstitute.org/publications/quantum-threat-timeline/">report of 2019</a>, suggesting that, due to the advances in the quantum computing field, researchers feel more assured of its imminent arrival.</p>
    <div>
      <h3>The upcoming decades of breaking cryptography</h3>
      <a href="#the-upcoming-decades-of-breaking-cryptography">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4MptzoQWhV84gy0bhgHio0/6350adb125f91f5dacc15eb305125282/Eyes-on-the-Horizon.png" />
            
</figure><p>Quantum computing will shine a light into new areas of development and advances, but it will also expose secret areas: faster mechanisms to solve mathematical problems and break cryptography. The metaphor of dawn can be used to show how close we are to quantum computers breaking some types of cryptography (how far we are from twilight), which will be catastrophic for the privacy, communication and security guarantees we rely on.</p><p>It is worth noting again, though, that the advent of quantum computing is not a bad scenario in itself. Quantum computing will bring much progress in the form of advances to the understanding and mimicking of the fundamental way in which nature interacts. Its drawback is that quantum computers break most of the cryptographic algorithms we use nowadays. They also have <a href="https://www.cs.virginia.edu/~robins/The_Limits_of_Quantum_Computers.pdf">their limits</a>: they cannot solve every problem, and in cryptography there are algorithms that will not be broken by them.</p><p>Let’s start by defining what dawn is. Dawn is the arrival of stable quantum computers that break cryptography in a reasonable amount of time and space (explicitly: breaking the mathematical foundation of the <a href="https://en.wikipedia.org/wiki/Elliptic-curve_cryptography">p256 algorithm</a>; although some research defines this as breaking <a href="https://en.wikipedia.org/wiki/RSA_(cryptosystem)">RSA2048</a>). It is also the time by which a new day arrives: a day that starts with new advances in medicine, for example, by better mimicking molecules’ nature with a computer.</p><p>How close to dawn are we? To answer this question, we have to look at the recent advances aimed at breaking cryptography with a quantum computer. 
One of the most interesting advances is that the <a href="https://research.ibm.com/blog/factor-15-shors-algorithm">number 15 was factored in 2001</a> (more than 20 years ago) and the <a href="https://arxiv.org/abs/1111.4147">number 21 was factored in 2012</a> by a quantum computer. While this may not seem exciting (we can factor these numbers in our heads), it is astonishing that a quantum computer was stable enough to do so.</p><p>Why do we care whether computers can factor integers? The reason is that factoring is the mathematical base of certain public-key cryptography algorithms (like RSA): if the modulus can be factored, the algorithm is broken.</p><p>Taking a step further, quantum computers are also able to attack encryption algorithms themselves, such as <a href="https://en.wikipedia.org/wiki/Advanced_Encryption_Standard">AES</a>, as efficient quantum searching algorithms can be used for the task. This kind of searching algorithm, <a href="https://en.wikipedia.org/wiki/Grover%27s_algorithm">Grover’s algorithm</a>, solves the task of <a href="https://en.wikipedia.org/wiki/Inverse_function">function inversion</a>, which speeds up collision and preimage attacks (attacks that you don’t want to be sped up in your encryption algorithms). These attacks are related to <a href="https://en.wikipedia.org/wiki/Hash_function">hash functions</a>: a collision attack is one that tries to find two messages that produce the same hash; a preimage attack is one that tries to find a message given its hash value. A smart evolution of this algorithm, using <a href="https://www.scottaaronson.com/papers/qchvpra.pdf">Hidden Variables</a> as part of a computational model, would theoretically make searching even more efficient.</p><p>So what are the tasks needed for a quantum computer to break cryptography? 
It should:</p><ul><li><p>Be able to factor big integers, or,</p></li><li><p>Be able to efficiently search for collisions or pre-images, or,</p></li><li><p>Be able to solve the <a href="https://en.wikipedia.org/wiki/Discrete_logarithm_records">discrete logarithm problem</a>.</p></li></ul><p>Why do we care about these quantum computers breaking cryptographic algorithms that we rarely hear of? What do they have to do with the connections we make? Cryptographic algorithms are the base of the secure protocols that we use today. Every time we visit a website, we are using a broad suite of cryptographic algorithms that preserve the security, privacy and integrity of connections. Without them, we don’t have the properties we are used to when connecting over digital means.</p><p>What can we do? Do we now stop using digital means altogether? There are solutions. Of course, the easy answer would be, for example, to <a href="https://cr.yp.to/papers/pqrsa-20170419.pdf">increase the size of the integers</a> until they become infeasible for a quantum computer to factor. But would the resulting algorithm still be usable? How slow will our connections be if we have to grow parameters like this?</p><p>As we get closer to twilight, we also get closer to the dawn of new beginnings, the rise of post-quantum cryptography.</p>
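To see concretely why factoring breaks RSA, here is a toy sketch in Python (with a modulus far too small to be secure; real RSA uses 2048-bit moduli): anyone who can factor the public modulus can recompute the private key and decrypt.

```python
# Toy RSA with tiny primes, for illustration only.
p, q = 61, 53
n = p * q                      # public modulus
e = 17                         # public exponent
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent (Python 3.8+)

message = 42
ciphertext = pow(message, e, n)

# An attacker who can factor n (e.g., with Shor's algorithm on a
# quantum computer) recovers p and q, and with them the private key.
def factor(n):
    for candidate in range(2, n):
        if n % candidate == 0:
            return candidate, n // candidate

p2, q2 = factor(n)
d2 = pow(e, -1, (p2 - 1) * (q2 - 1))
recovered = pow(ciphertext, d2, n)
assert recovered == message    # the "secret" message is recovered
```

For these tiny numbers trial division suffices; for real key sizes only a quantum computer running Shor's algorithm is believed to make this feasible.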
    <div>
      <h4>The rise and state of post-quantum cryptography</h4>
      <a href="#the-rise-and-state-of-post-quantum-cryptography">
        
      </a>
    </div>
    <p>As dawn approaches, researchers everywhere have been busy looking for solutions. If the mathematical underpinnings of an algorithm are what is broken by a quantum computer, can we just change them? Are there mathematical problems that are hard even for quantum computers? The answer is yes and, even more interestingly, we already know how to create algorithms out of them.</p><p>The mathematical problems that we use in this new upcoming era, called post-quantum, are of many kinds:</p><ul><li><p>Lattices, by using the learning with errors (LWE) or ring learning with errors (rLWE) problems.</p></li><li><p>Isogenies, which rely on properties of supersingular elliptic curves and supersingular isogeny graphs.</p></li><li><p>Multivariate cryptography, which relies on the difficulty of solving systems of multivariate equations.</p></li><li><p>Code-based cryptography, which relies on the theory of error-correcting codes.</p></li></ul><p>With these mathematical constructions, we can build algorithms that can be used to protect our connections and data. How do we know, nevertheless, that the algorithms are safe? Even if a mathematical construction that an algorithm uses is safe from the quantum spectre, is its interaction with other parts of the system, or with adversaries, safe? Who decides this?</p><p>Because it is difficult to answer these questions in isolation, the <a href="https://www.nist.gov/">National Institute of Standards and Technology</a> of the United States has been running, since 2016, a <a href="https://csrc.nist.gov/projects/post-quantum-cryptography">post-quantum process</a> that will define how these algorithms work, their design, what properties they provide and how they will be used. We now have a set of finalists that are close to being standardized. 
The competition has two tracks: algorithms for key encapsulation mechanisms and public-key encryption (KEMs), and algorithms for authentication (mainly, digital signatures). The current finalists are:</p><p>For the public-key encryption and key-establishment algorithms:</p><ul><li><p>Classic McEliece, a code-based cryptography scheme.</p></li><li><p>CRYSTALS-KYBER, a lattice-based scheme.</p></li><li><p>NTRU, a lattice-based scheme.</p></li><li><p>SABER, a lattice-based scheme.</p></li></ul><p>For the digital signature algorithms:</p><ul><li><p>CRYSTALS-DILITHIUM, a lattice-based scheme.</p></li><li><p>FALCON, a lattice-based scheme.</p></li><li><p>Rainbow, a multivariate cryptography scheme.</p></li></ul><p>What do these finalists show us? It seems that we can rely on the security given by lattice-based schemes (which is no slight to the security of any of the other schemes). It also shows us that, in the majority of cases, we prioritize schemes that can be used in the same way cryptography is used nowadays: with fast computation times and small parameter sizes. The migration to post-quantum cryptography is not only about using safe mathematical constructions and secure algorithms, but also about how they integrate into the systems, architectures, protocols, and expected speed and space efficiencies we currently depend on. 
More research seems to be needed on the signatures/authentication front, including new innovative signature schemes (like <a href="https://eprint.iacr.org/2020/1240">SQISign</a> or <a href="https://eprint.iacr.org/2021/1144">MAYO</a>), and that is why <a href="https://www.nist.gov/video/third-pqc-standardization-conference-session-i-welcomecandidate-updates">NIST will be calling for a fourth round of its competition</a>.</p><p>It is worth noting that there exists another flavor of upcoming cryptography called “<a href="https://en.wikipedia.org/wiki/Quantum_cryptography">quantum cryptography</a>”, but its efficient dawn seems to be far away (though some Quantum Key Distribution — QKD — protocols are being deployed now). Quantum cryptography uses quantum mechanical properties to perform cryptographic tasks. It has mainly focused on QKD, but QKD seems to be too inefficient to be used in practice outside very limited settings. And even though, in theory, these schemes should be perfectly secure against any eavesdropper, in practice the physical machines may still fall victim to side-channel attacks.</p><p>Is dawn, then, the collapse of cryptography? It can be, on one hand, as it is the collapse of most classical cryptography. But, on the other hand, it is also the dawn of new, post-quantum cryptography and the start of quantum computing for the benefit of the world. Will the process of reaching dawn be simple? Certainly not. Migrating systems, architectures and protocols to new algorithms is an intimidating task. But we at Cloudflare and as a community are taking the steps, as you will read in other blog posts throughout this week.</p>
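As a rough illustration of the KEM interface that the first track standardizes (key generation, encapsulation, decapsulation), here is a minimal Python sketch. It uses a classical Diffie-Hellman-style group purely as a stand-in; a real post-quantum KEM such as CRYSTALS-KYBER replaces the group arithmetic with lattice operations, and all names and parameters here are illustrative only.

```python
import hashlib
import secrets

# Toy multiplicative group (NOT post-quantum): a real KEM swaps this
# for a quantum-resistant construction such as module lattices.
P = 0xFFFFFFFFFFFFFFC5  # largest prime below 2**64, illustrative only
G = 5

def keygen():
    """Receiver generates a key pair."""
    sk = secrets.randbelow(P - 2) + 1
    return pow(G, sk, P), sk     # (public key, secret key)

def encapsulate(pk):
    """Sender derives a shared key and a ciphertext for the receiver."""
    r = secrets.randbelow(P - 2) + 1
    ciphertext = pow(G, r, P)
    shared = hashlib.sha256(str(pow(pk, r, P)).encode()).digest()
    return ciphertext, shared

def decapsulate(sk, ciphertext):
    """Receiver recovers the same shared key from the ciphertext."""
    return hashlib.sha256(str(pow(ciphertext, sk, P)).encode()).digest()

pk, sk = keygen()
ct, key_sender = encapsulate(pk)
key_receiver = decapsulate(sk, ct)
assert key_sender == key_receiver  # both sides share one secret key
```

Whatever the underlying mathematics, this three-operation shape is what protocols like TLS consume, which is why the standardization effort treats KEMs as one uniform track.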
    <div>
      <h4>References:</h4>
      <a href="#references">
        
      </a>
    </div>
    <ul><li><p><a href="https://courses.cs.washington.edu/courses/cse599/01wi/papers/simon_qc.pdf">“On the power of Quantum Computation”</a> by Daniel R. Simon</p></li><li><p><a href="https://www.scottaaronson.com/papers/npcomplete.pdf">“NP-complete Problems and Physical Reality”</a> by Scott Aaronson</p></li><li><p><a href="https://arxiv.org/pdf/1807.10749.pdf">“Quantum Supremacy Is Both Closer and Farther than It Appears”</a> by Igor L. Markov, Aneeqa Fatima, Sergei V. Isakov, and Sergio Boixo</p></li><li><p><a href="https://arxiv.org/pdf/1612.05903.pdf">“Complexity-Theoretic Foundations of Quantum Supremacy Experiments”</a> by Scott Aaronson and Lijie Chen</p></li><li><p><a href="https://www.nature.com/articles/s41586-019-1666-5.pdf">“Quantum supremacy using a programmable superconducting processor”</a> by Frank Arute <i>et al</i>.</p></li><li><p><a href="https://arxiv.org/abs/1910.09534">“Leveraging Secondary Storage to Simulate Deep 54-qubit Sycamore Circuits”</a> by Edwin Pednault, John A. Gunnels, Giacomo Nannicini, Lior Horesh, and Robert Wisnieff</p></li><li><p><a href="https://arxiv.org/abs/0708.0261">“An Introduction to Quantum Computing”</a> by Noson S. Yanofsky</p></li><li><p><a href="https://www.nature.com/articles/nature23461">“Post-quantum cryptography”</a> by Daniel J. Bernstein and Tanja Lange</p></li><li><p><a href="https://arxiv.org/pdf/0910.1315.pdf">“Gate fidelity fluctuations and quantum process invariants”</a> by Easwar Magesan, Robin Blume-Kohout, and Joseph Emerson</p></li><li><p><a href="https://arxiv.org/abs/1805.07185">“Complete characterization of the directly implementable quantum gates used in the IBM quantum processors”</a> by Abhishek Shukla, Mitali Sisodia, and Anirban Pathak</p></li></ul><p>.......</p><p><sup>1</sup>In the original paper, their estimate of 10,000 years is based on the observation that there is not enough Random Access Memory (RAM) to store the full state vector in a Schrödinger-type simulation, and so it is necessary to use a <a href="https://arxiv.org/abs/1807.10749">hybrid Schrödinger–Feynman algorithm</a>, which is memory-efficient but exponentially more computationally expensive. In contrast, <a href="https://arxiv.org/abs/1910.09534">IBM used a Schrödinger-style classical simulation approach</a> that uses both RAM and hard drive space.</p><p><sup>2</sup>These figures refer to a 50% or greater likelihood of “cryptography-breaking” quantum computers arriving. Slightly more than half of the researchers answered that they will arrive in the next 15 years. About 86% of the researchers answered that they will arrive in the next 20 years. All but one of the researchers answered that they will arrive in the next 30 years.</p><p><sup>3</sup>Although, in practice, this seems difficult to attain.</p> ]]></content:encoded>
            <category><![CDATA[Research]]></category>
            <category><![CDATA[Cryptography]]></category>
            <category><![CDATA[Post-Quantum]]></category>
            <guid isPermaLink="false">4AjaRpTOtmH4LtsF1O3xc6</guid>
            <dc:creator>Sofía Celi</dc:creator>
        </item>
        <item>
            <title><![CDATA[Privacy Pass v3: the new privacy bits]]></title>
            <link>https://blog.cloudflare.com/privacy-pass-v3/</link>
            <pubDate>Tue, 12 Oct 2021 12:59:19 GMT</pubDate>
            <description><![CDATA[ A new version of Privacy Pass for reducing the number of CAPTCHAs. ]]></description>
            <content:encoded><![CDATA[ <p></p><p>In November 2017, we <a href="/cloudflare-supports-privacy-pass/">released</a> our implementation of a privacy-preserving protocol to let users prove that they are humans without enabling tracking. When you install <a href="https://privacypass.github.io/">Privacy Pass’s browser extension</a>, you get tokens when you solve a Cloudflare CAPTCHA, which you can later redeem to avoid needing to solve another one. The redeemed token is cryptographically unlinkable to the token originally provided by the server. That is why Privacy Pass is privacy-preserving.</p><p>In October 2019, Privacy Pass reached another milestone. We released <a href="/supporting-the-latest-version-of-the-privacy-pass-protocol/">Privacy Pass Extension v2.0</a>, which includes a <a href="https://www.hcaptcha.com/privacy-pass">new service provider</a> (hCaptcha) and provides a way to redeem a token not only for CAPTCHAs on Cloudflare challenge pages but also for hCaptcha CAPTCHAs on any website. When you encounter an hCaptcha CAPTCHA on any website, including ones not behind Cloudflare, you can redeem a token to pass it.</p><p>We believe Privacy Pass solves an important problem — balancing privacy and security for bot mitigation — but we think there’s more to be done in terms of both the <a href="https://github.com/privacypass/challenge-bypass-extension/tree/v3-rc">codebase</a> and the protocol. We improved the codebase by redesigning how the service providers interact with the core extension. At the same time, we made progress on the standardization at IETF and improved the protocol by adding metadata, which allows us to do more fabulous things with Privacy Pass.</p>
    <div>
      <h2>Announcing Privacy Pass Extension v3.0</h2>
      <a href="#announcing-privacy-pass-extension-v3-0">
        
      </a>
    </div>
    <p>The current implementation of our extension is functional, but it is difficult to maintain two Privacy Pass service providers: Cloudflare and hCaptcha. So we decided to <a href="https://www.cloudflare.com/learning/cloud/how-to-refactor-applications/">refactor</a> the browser extension to improve its maintainability. We also used this opportunity to make the following improvements:</p><ul><li><p>Implement the extension using TypeScript instead of plain JavaScript.</p></li><li><p>Build the project using a module bundler instead of custom build scripts.</p></li><li><p>Refactor the code and define the API for the cryptographic primitives.</p></li><li><p>Treat provider-specific code as an encapsulated software module rather than a list of configuration properties.</p></li></ul><p>As a result of the improvements listed above, the extension will be less error-prone, and each service provider will have more flexibility and can be integrated seamlessly alongside other providers.</p><p>In the new extension we use TypeScript instead of plain JavaScript because its syntax is an extension of JavaScript’s, and we already use TypeScript in <a href="/bootstrapping-a-typescript-worker/">Workers</a>. One of the things that makes TypeScript special is that it has features only available in modern programming languages, like <a href="https://en.wikipedia.org/wiki/Void_safety">null safety</a>.</p>
    <div>
      <h2>Support for Future Service Providers</h2>
      <a href="#support-for-future-service-providers">
        
      </a>
    </div>
    <p>Another big improvement in v3.0 is that it is designed for modularity, meaning that it will be very easy to add a new potential service provider in the future. A new provider can use an API provided by us to implement their own request flow to use the Privacy Pass protocol and to handle the HTTP requests. By separating the provider-specific code from the core extension code using the API, the extension will be easier to update when there is a need for more service providers.</p><p>On a technical level, we allow each service provider to have its own <a href="https://developer.chrome.com/extensions/webRequest">WebRequest API</a> event listeners instead of having central event listeners for all the providers. This allows providers to extend the browser extension's functionality and implement any request handling logic they want.</p><p>Another major change that enables us to do this is that we moved away from configuration to programmable modularization.</p>
    <div>
      <h2>Configuration vs Modularization</h2>
      <a href="#configuration-vs-modularization">
        
      </a>
    </div>
    <p><a href="/supporting-the-latest-version-of-the-privacy-pass-protocol/">As mentioned in 2019</a>, it would be impossible to expect different service providers to all abide by the same exact request flow, so in v2.0 we decided to use a JSON configuration file to define the request flow. The configuration allows the service providers to easily modify the extension’s characteristics without dealing too much with the core extension code. However, we recently realized that we can do better by using modules instead of a configuration file.</p><p>Using a configuration file limits the provider’s flexibility to the set of possible configurations. In addition, as the logic of each provider evolves and deviates from the others, the configuration grows larger and larger, which makes it hard to document and keep track of. So we refactored how we determine the request flow: instead of a configuration file, each service provider now has its own module file written specifically for it.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/34G52zSZ9ukkaa0h079EBV/287aaf7e3245f7fe1071ee0d4270a95f/image2-19.png" />
            
</figure><p>By using a programmable module, the providers are not limited by the available fields in the configuration. In addition, the providers can use the available implementations of the necessary cryptographic primitives at any point of the request flow, because we factored out the crypto bits into a separate module which can be used by any provider. In the future, if the cryptographic primitives ever change, the providers can update their code and use them at any time.</p>
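The configuration-to-module shift can be sketched as follows (in Python rather than the extension's TypeScript, and with hypothetical names): each provider is a small class implementing a shared interface, so its request-handling logic is ordinary code instead of a fixed set of configuration fields.

```python
from abc import ABC, abstractmethod
from typing import Optional

class Provider(ABC):
    """Hypothetical provider interface: each service implements its own
    request flow rather than filling in a passive configuration file."""

    @abstractmethod
    def handle_before_request(self, url: str) -> Optional[dict]:
        """Inspect an outgoing request; optionally attach a redemption."""

class ExampleProvider(Provider):
    """Illustrative provider with its own, arbitrary request logic."""

    def __init__(self, tokens):
        self.tokens = tokens

    def handle_before_request(self, url):
        # Provider-specific decision: redeem a token on challenge pages.
        if "/challenge" in url and self.tokens:
            return {"redemption-token": self.tokens.pop()}
        return None

provider = ExampleProvider(tokens=["t1", "t2"])
assert provider.handle_before_request("https://site.example/challenge") == {
    "redemption-token": "t2"
}
assert provider.handle_before_request("https://site.example/") is None
```

Because the logic lives in code, a provider can branch, keep state, or call the shared crypto module at any point, none of which a flat configuration schema could express.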
    <div>
      <h2>Towards Standard Interoperability</h2>
      <a href="#towards-standard-interoperability">
        
      </a>
    </div>
    <p>The Privacy Pass protocol was first published at the <a href="https://www.petsymposium.org/2018/files/papers/issue3/popets-2018-0026.pdf">PoPETS</a> symposium in 2018. As explained in this <a href="/privacy-pass-the-math/">previous post</a>, the core of the Privacy Pass protocol is a secure way to generate tokens between server and client. To that end, the protocol requires evaluating a pseudorandom function that is oblivious and verifiable. The first property prevents the server from learning information about the client’s tokens, while the client learns nothing about the server’s private key. This is useful for protecting the privacy of users. The token generation must also be verifiable, in the sense that the client can attest to the fact that its token was minted using the server’s private key.</p><p>The original implementation of Privacy Pass has seen real-world use in our browser extension, helping to reduce CAPTCHAs for hundreds of thousands of people without compromising privacy. But to guarantee interoperability between services implementing Privacy Pass, what's required is an accurate specification of the protocol and its operations. With this motivation, the Privacy Pass protocol was proposed as an Internet draft at the <a href="https://www.ietf.org/">Internet Engineering Task Force</a> (IETF) — to know more about our participation at IETF, <a href="/cloudflare-and-the-ietf">look at this post</a>.</p><p>In March 2020, the protocol was presented at IETF-107 for the first time. The session was a <a href="https://www.ietf.org/how/bofs/">Birds-of-a-Feather</a>, a place where the IETF community discusses the creation of new working groups that will write the actual standards. In the session, the working group’s charter was presented, proposing the development of a secure protocol for redeeming unforgeable tokens that attest to the validity of some attribute being held by a client. 
The charter was later approved, and three documents were adopted covering the protocol, the architecture, and an HTTP API for supporting Privacy Pass. The working group at IETF can be found at <a href="https://datatracker.ietf.org/wg/privacypass/about/">https://datatracker.ietf.org/wg/privacypass/</a>.</p><p>In addition to its core functionality, the Privacy Pass protocol can be extended to improve its usability or to add new capabilities. For instance, adding a mechanism for public verifiability will allow a third party, someone who did not participate in the protocol, to verify the validity of tokens. Public verifiability can be implemented using a <i>blind-signature scheme</i> — a special type of digital signature, first proposed by <a href="https://link.springer.com/chapter/10.1007/978-1-4757-0602-4_18">David Chaum</a>, in which signers can produce signatures on messages without learning the content of the message. A variety of algorithms for implementing blind signatures exists; however, there is still work to be done to define a good candidate for public verifiability.</p><p>Another extension for Privacy Pass is support for including metadata in the tokens. As this is a feature with a high impact on the protocol, we devote a longer section to explaining the benefits of supporting metadata in the face of hoarding attacks.</p>
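Chaum's RSA-based construction, mentioned above, can be sketched with toy parameters (far too small for real use): the signer signs a blinded message without seeing it, and the unblinded result verifies under the ordinary public key, which is exactly the public verifiability property.

```python
import secrets
from math import gcd

# Toy RSA signer key (real deployments use 2048-bit or larger moduli).
p, q = 61, 53
n, e = p * q, 17
d = pow(e, -1, (p - 1) * (q - 1))

message = 99

# Client: blind the message with a random factor r coprime to n.
while True:
    r = secrets.randbelow(n - 2) + 2
    if gcd(r, n) == 1:
        break
blinded = (message * pow(r, e, n)) % n

# Signer: signs the blinded value, learning nothing about `message`.
blinded_sig = pow(blinded, d, n)   # = message^d * r  (mod n)

# Client: unblind to obtain a valid RSA signature on the message.
signature = (blinded_sig * pow(r, -1, n)) % n

# Anyone holding the public key (n, e) can verify: public verifiability.
assert pow(signature, e, n) == message % n
```

The unblinding works because (m·rᵉ)ᵈ = mᵈ·r (mod n), so dividing out r leaves the plain signature mᵈ.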
    <div>
      <h2>Future work: metadata</h2>
      <a href="#future-work-metadata">
        
      </a>
    </div>
    <p>What is research without new challenges to tackle? What does development look like if there are no other problems to solve? During the design and development of Privacy Pass (as a service, an idea, and a protocol), a potential vector for abuse was noted, which will be referred to as a “hoarding” or “farming” attack. This attack consists of individual users or groups of users gathering tokens over a long period of time and redeeming them all at once with the aim of, for example, overwhelming a website and making the service unavailable for other users. In a more complex scenario, an attacker can build up a stock of tokens that they could then redistribute amongst other clients. This redistribution is possible because tokens are not linked to specific clients, which is a property of the Privacy Pass protocol.</p><p>There have been several proposed solutions to this attack. One can, for example, make the token verification procedure very efficient, so attackers will need to hoard an even larger number of tokens in order to overwhelm a service. But the problem is not only about verification times, so this does not completely solve it. Note that in Privacy Pass, a successful token redemption can be exchanged for a single-origin cookie. These cookies allow clients to avoid future challenges for a particular domain without using more tokens. In the case of a hoarding attack, an attacker could trade in their hoarded tokens for a number of cookies and then mount a layer 7 DDoS attack with the “hoarded” cookies, which would render the service unavailable.</p><p>In the next sections, we will explore different solutions to this attack.</p>
    <div>
      <h3>A simple solution and its limitations: key rotation</h3>
      <a href="#a-simple-solution-and-its-limitations-key-rotation">
        
      </a>
    </div>
    <p>What does “key rotation” mean in the context of Privacy Pass? In Privacy Pass, each token is attested by keys held by the service. These keys are further used to verify the honesty of a token presented by a client when trying to access a challenge-protected service. “Key rotation” means updating these keys at a chosen epoch (meaning, for example, that every two weeks — the epoch — the keys are rotated). Regular key rotation implies that tokens belong to these epochs and cannot be used outside them, which prevents stocks of tokens from being useful for longer than the epoch they belong to.</p><p>Keys, however, should not be rotated frequently, as:</p><ul><li><p>Rotating a key has security implications</p></li><li><p>Establishing trust in a frequently-rotating key service can be a challenging problem</p></li><li><p>The unlinkability of the client when using tokens can be diminished</p></li></ul><p>Let’s explore these problems one by one:</p><p><b>Rotating a key has security implications</b>, as past keys need to be deleted from secure storage locations and replaced with new ones. This process is prone to failure if done regularly, and can lead to potential key material leakage.</p><p><b>Establishing trust in a frequently-rotating key service</b> can be a challenging problem, as the relevant parties will have to verify the keys each time they are regenerated. Keys need to be verified to attest that they belong to the entity one is trying to communicate with. If keys rotate too frequently, this verification procedure will have to happen frequently as well, so that an attacker is not able to impersonate the honest entity with a “fake” public key.</p><p><b>The unlinkability of the client when using tokens can be diminished</b>, as a savvy attacker (a malicious server, for example) could link token generation and future token use. 
A malicious server can, for example, rotate its keys too often to violate unlinkability, or pick a separate public key for each client issuance. These cases can be addressed by using public mechanisms to record which of the server’s public keys are in use, but this requires further infrastructure and coordination between actors. Other cases are not easily solved by this “public verification”: if keys are rotated every minute, for example, and a client was the only one to visit a “privacy pass protected” site in that minute, then it is not hard to infer (to “link”) that the token came from this specific client.</p>
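The epoch idea can be sketched as follows, using HMAC as a simple stand-in for the real verifiable oblivious PRF (the names and the two-week epoch length are illustrative): the server keeps one key per epoch and discards old ones, so hoarded tokens stop verifying after rotation.

```python
import hashlib
import hmac
import secrets
import time

EPOCH_SECONDS = 14 * 24 * 3600  # rotate every two weeks (illustrative)

def current_epoch(now=None):
    return int((time.time() if now is None else now) // EPOCH_SECONDS)

# One secret key per epoch; old keys are discarded on rotation.
keys = {current_epoch(): secrets.token_bytes(32)}

def issue(token_id: bytes, epoch: int) -> bytes:
    """Server mints a token tag under the key of the given epoch."""
    return hmac.new(keys[epoch], token_id, hashlib.sha256).digest()

def redeem(token_id: bytes, tag: bytes, epoch: int) -> bool:
    """Server accepts a token only while its epoch key still exists."""
    if epoch not in keys:            # key rotated away: token expired
        return False
    return hmac.compare_digest(tag, issue(token_id, epoch))

epoch = current_epoch()
tid = secrets.token_bytes(16)
tag = issue(tid, epoch)
assert redeem(tid, tag, epoch)       # valid within the epoch

# After rotation, hoarded tokens from the old epoch stop working.
keys = {epoch + 1: secrets.token_bytes(32)}
assert not redeem(tid, tag, epoch)
```

The trade-offs listed above apply directly to this sketch: each rotation destroys key material, requires clients to re-establish trust in the new key, and shrinks the anonymity set of tokens minted within an epoch.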
    <div>
      <h3>A novel solution: Metadata</h3>
      <a href="#a-novel-solution-metadata">
        
      </a>
    </div>
    <p>A novel solution to this “hoarding” problem that does not require key rotation or further optimization of verification times is the addition of metadata. This approach was introduced in the paper “<a href="https://eprint.iacr.org/2021/864.pdf">A Fast and Simple Partially Oblivious PRF, with Applications</a>”, and it is called the “POPRF with metadata” construction. The idea is to add a metadata field to the token generation procedure in such a way that tokens are cryptographically linked to this added metadata. The added metadata can be, for example, a number that signals which epoch this token belongs to. The service, when presented with this token at verification, promptly checks that it corresponds to its internal epoch number (this epoch number can correspond to a period of time, a threshold on the number of tokens issued, etc.). If it does not correspond, the token has expired and cannot be used. Metadata, then, can be used to expire tokens without performing key rotations, thereby avoiding the issues outlined above.</p><p>Other kinds of metadata can be added to the Partially Oblivious PRF (PO-PRF) construction as well. Geographic location, for example, can be added to signal that tokens can only be used in a specific region.</p>
    <div>
      <h3>The limits of metadata</h3>
      <a href="#the-limits-of-metadata">
        
      </a>
    </div>
    <p>Note, nevertheless, that the addition of this metadata should be carefully considered: adding an explicit time-bound signal, in the case of “time-metadata”, will diminish the unlinkability set of the tokens. If an explicit time-bound signal is added (for example, the specific time — year, month, day, hour, minute and second — at which this token was generated and the amount of time it is valid for), it will allow a malicious server to link generation and usage. The recommendation is to use “opaque metadata”: metadata that is public to both client and service but whose precise meaning only the service knows. A server, for example, can keep a counter that gets increased after a period of time (for example, every two weeks) and add this counter as metadata rather than the period of time itself. The client, in this case, publicly knows what this counter is but does not know which period it refers to.</p><p>Geographic location metadata should be coarse as well: it should refer to a large geographical area, such as a continent or a political and economic union, rather than an explicit location.</p>
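The opaque-counter recommendation can be sketched similarly (again with HMAC standing in for the POPRF, and hypothetical names): the server derives a counter from the date, binds it into each token at issuance, and checks it at redemption, expiring tokens without any key rotation.

```python
import hashlib
import hmac
import secrets

KEY = secrets.token_bytes(32)  # one long-lived key; no rotation needed

def opaque_counter(days_since_launch: int) -> int:
    # Increases every two weeks; clients see the number, but only the
    # server knows which period of time it corresponds to.
    return days_since_launch // 14

def issue(token_id: bytes, counter: int) -> bytes:
    # The token tag is cryptographically bound to the metadata counter.
    payload = token_id + counter.to_bytes(4, "big")
    return hmac.new(KEY, payload, hashlib.sha256).digest()

def redeem(token_id: bytes, tag: bytes, counter: int,
           current_counter: int) -> bool:
    if counter != current_counter:   # stale metadata: token expired
        return False
    return hmac.compare_digest(tag, issue(token_id, counter))

tid = secrets.token_bytes(16)
tag = issue(tid, opaque_counter(days_since_launch=10))   # counter 0
assert redeem(tid, tag, 0, current_counter=0)            # valid now
assert not redeem(tid, tag, 0, current_counter=1)        # two weeks on
```

Note that the same key verifies tokens across all epochs; only the bound counter decides expiry, which is the advantage over the key-rotation approach.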
    <div>
      <h2>Wrap up</h2>
      <a href="#wrap-up">
        
      </a>
    </div>
    <p>The Privacy Pass protocol provides users with a secure way of redeeming tokens. At Cloudflare, we use the protocol to reduce the number of CAPTCHAs, improving the user experience while browsing websites. A natural evolution of the protocol is expected, ranging from its standardization to innovating with new capabilities that help to prevent abuse of the service.</p><p>On the service side, we refactored the Privacy Pass browser extension, aiming to improve the quality of the code so bugs can be detected in earlier phases of development. The code is available at the <a href="https://github.com/privacypass/challenge-bypass-extension/tree/v3-rc">challenge-bypass-extension</a> repository, and we invite you to try the release candidate version.</p><p>An appealing extension for Privacy Pass is the inclusion of metadata, as it provides a non-cumbersome way to address hoarding attacks while preserving the anonymity (in general, the privacy) of the protocol itself. <a href="https://eprint.iacr.org/2021/864.pdf">Our paper</a> provides more information about the technical details behind this idea.</p><p>Applying the Privacy Pass protocol to other use cases, or creating other service providers, requires a certain degree of compatibility: people wanting to implement Privacy Pass need a standard specification so implementations can interoperate. The efforts along these lines are centered on the <a href="https://datatracker.ietf.org/wg/privacypass/about/">Privacy Pass working group</a> at IETF, a space open for anyone to participate in delineating the future of the protocol. Feel free to be part of these efforts too.</p><p>We are continuously working on new ways of improving our services and helping the Internet be a better and more secure place. You can join us in this effort and reach us at <a href="https://research.cloudflare.com">research.cloudflare.com</a>. See you next time.</p> ]]></content:encoded>
            <category><![CDATA[Research]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[Privacy Pass]]></category>
            <category><![CDATA[CAPTCHA]]></category>
            <guid isPermaLink="false">nnk7WdvORjw4nOJUFyE1z</guid>
            <dc:creator>Pop Chunhapanya</dc:creator>
            <dc:creator>Armando Faz-Hernández</dc:creator>
            <dc:creator>Sofía Celi</dc:creator>
        </item>
        <item>
            <title><![CDATA[KEMTLS: Post-quantum TLS without signatures]]></title>
            <link>https://blog.cloudflare.com/kemtls-post-quantum-tls-without-signatures/</link>
            <pubDate>Fri, 15 Jan 2021 12:00:00 GMT</pubDate>
            <description><![CDATA[ The TLS 1.3 protocol has been around for quite some time, but it will be broken once quantum computers arrive. What can we do? In this blog post, we will examine a technique for achieving full post-quantum security for TLS 1.3 in the face of quantum computers: KEMTLS. ]]></description>
            <content:encoded><![CDATA[ 
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/TYxahpsj66q7L2bERLIZa/537157a1eecaedcbd187f9b1d391523d/KEMTLS--Post-quantum-TLS-without-signatures-blog-home-2.png" />
            
            </figure><p>The Transport Layer Security protocol (TLS), which secures most Internet connections, has mainly consisted of a key exchange, authenticated by digital signatures, that is used to encrypt data in transit[1]. Even though it has undergone major changes since 1994, when SSL 1.0 was introduced by Netscape, its main mechanism has remained the same. The key exchange was first based on RSA, and later on traditional Diffie-Hellman (DH) and Elliptic-curve Diffie-Hellman (ECDH). The signatures used for authentication have almost always been RSA-based, though in recent years other kinds of signatures have been adopted, mainly <a href="https://www.cloudflare.com/learning/dns/dnssec/ecdsa-and-dnssec/">ECDSA</a> and Ed25519. This recent shift to elliptic curve cryptography at both the key exchange and the signature level has resulted in considerable speed and bandwidth benefits in comparison to traditional Diffie-Hellman and RSA.</p><p>TLS is the main protocol that protects the connections we use every day. It’s everywhere: we use it when we buy products online, when we register for a newsletter, and when we access any kind of website, IoT device, or mobile app API. But with the imminent threat of the arrival of <a href="/securing-the-post-quantum-world/">quantum computers</a> (a threat that seems to be getting closer and closer), we need to reconsider the future of TLS once again. <a href="/the-tls-post-quantum-experiment/">A wide-scale post-quantum experiment</a> was carried out by Cloudflare and Google: two post-quantum key exchanges were integrated into our TLS stack and deployed at our edge servers as well as in Chrome Canary clients. The goal of that experiment was to evaluate the performance and feasibility of deploying two post-quantum key exchanges in TLS.</p><p>Similar experiments have been proposed for introducing post-quantum algorithms into the TLS handshake itself. Unfortunately, it seems infeasible to replace both the key exchange and the signatures with post-quantum primitives, because post-quantum cryptographic primitives are bigger, or slower (or both), than their predecessors. The proposed algorithms under consideration in the <a href="https://csrc.nist.gov/Projects/post-quantum-cryptography/round-3-submissions">NIST post-quantum standardization process</a> use mathematical objects that are larger than the ones used for elliptic curves, traditional Diffie-Hellman, or RSA. As a result, the overall size of public keys, signatures and key exchange material is much bigger than that of elliptic curves, Diffie-Hellman, or RSA.</p><p>How can we solve this problem? How can we use post-quantum algorithms as part of the TLS handshake without making the material too big to be transmitted? In this blog post, we will introduce a new mechanism for making this happen. We’ll explain how it can be integrated into the handshake and we’ll cover implementation details. The key observation in this mechanism is that, while post-quantum algorithms have bigger communication sizes than their predecessors, post-quantum <i>key exchanges</i> are somewhat smaller than post-quantum <i>signatures</i>, so we can try to replace signatures with key exchanges in some places to save space. We will only focus on the TLS 1.3 handshake, as it is the version of TLS that should currently be used.</p>
    <div>
      <h3>Past experiments: making the TLS 1.3 handshake post-quantum</h3>
      <a href="#past-experiments-making-the-tls-1-3-handshake-post-quantum">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/589KMSU0NPVnVGNPCiWIXg/b12d643b105956d0c452dbdcb122ce61/image2-5.png" />
            
            </figure><p><a href="https://tools.ietf.org/html/rfc8446">TLS 1.3</a> was introduced in August 2018, and it brought many security and performance improvements (notably, having only one round-trip to complete the handshake). But TLS 1.3 is designed for a world with classical computers, and some of its functionality will be broken by quantum computers when they do arrive.</p><p>The primary goal of TLS 1.3 is to provide authentication (the server side of the channel is always authenticated, the client side is optionally authenticated), confidentiality, and integrity by using a handshake protocol and a record protocol. The handshake protocol, the one of interest for us today, establishes the cryptographic parameters for securing and authenticating a connection. It can be thought of as having three main phases, as defined in <a href="https://tools.ietf.org/html/rfc8446">RFC8446</a>:</p><p>-  The <b>Parameter Negotiation</b> phase (referred to as ‘Server Parameters’ in RFC8446), which establishes other handshake parameters (whether the client is authenticated, application-layer protocol support, etc).</p><p>-  The <b>Key Exchange</b> phase, which establishes shared keying material and selects the cryptographic parameters to be used. Everything after this phase will be encrypted.</p><p>-  The <b>Authentication</b> phase, which authenticates the server (and, optionally, the client) and provides key confirmation and handshake integrity.</p><p>The main idea of past experiments that introduced post-quantum algorithms into the handshake of TLS 1.3 was to use them in place of classical algorithms by advertising them as part of the <a href="https://tools.ietf.org/html/rfc8446#section-4.2.7">supported groups</a>[2] and <a href="https://tools.ietf.org/html/rfc8446#section-4.2.8">key share</a>[3] extensions, and, therefore, establishing with them the negotiated connection parameters. 
Key encapsulation mechanisms (KEMs) are an abstraction of the basic key exchange primitive, and were used to generate the shared secrets. When using a <a href="https://tools.ietf.org/html/rfc8446#section-4.2.11">pre-shared key</a>, its symmetric algorithms can be easily replaced by post-quantum KEMs as well; and, in the case of password-authenticated TLS, some <a href="https://eprint.iacr.org/2017/1192.pdf">ideas</a> have been proposed on how to use post-quantum algorithms with them.</p><p>Most of the above ideas only provide what is often defined as ‘transitional security’, because their main focus is to provide quantum-resistant confidentiality, not quantum-resistant authentication. It is possible to use post-quantum signatures for TLS authentication, but post-quantum signatures are larger than traditional ones. Furthermore, it is <a href="https://csrc.nist.gov/Presentations/2019/the-2nd-round-of-the-nist-pqc-standardization-proc">worth noting</a> that using post-quantum signatures is much more expensive than using post-quantum KEMs.</p><p>We can estimate the impact of such a replacement on network traffic by simply looking at the sum of the sizes of the cryptographic objects that are transmitted during the handshake. A typical TLS 1.3 handshake using elliptic curve X25519 and RSA-2048 would transmit 1,376 bytes, corresponding to the public keys for key exchange, the certificate, the signature of the handshake, and the certificate chain. If we were to replace X25519 with the post-quantum KEM <a href="https://pq-crystals.org/kyber/">Kyber512</a> and RSA with the post-quantum signature scheme <a href="https://pq-crystals.org/dilithium/">Dilithium II</a>, two of the more efficient proposals, the size of the transmitted data would increase to 10,036 bytes[4]. The increase is mostly due to the size of the post-quantum signature scheme.</p><p>The question then is: how can we achieve full post-quantum security while keeping the handshake efficient enough to use?</p>
    <div>
      <h3>A more efficient proposal: KEMTLS</h3>
      <a href="#a-more-efficient-proposal-kemtls">
        
      </a>
    </div>
    <p>There is a long history of other mechanisms, besides signatures, being used for authentication. Modern protocols, such as the Signal protocol, the Noise framework, or WireGuard, rely on key exchange mechanisms for authentication; but they are unsuitable for the TLS 1.3 case as they expect the long-term key material to be known in advance by the interested parties.</p><p>The <a href="https://eprint.iacr.org/2015/978.pdf">OPTLS proposal</a> by Krawczyk and Wee authenticates the TLS handshake without signatures by using a non-interactive key exchange (NIKE). However, the only somewhat efficient construction for a post-quantum NIKE is CSIDH, the security of which is the subject of an ongoing debate. But we can build on this idea, and use KEMs for authentication. KEMTLS, the current proposed experiment, replaces the handshake signature by a post-quantum KEM key exchange. It was designed and introduced by Peter Schwabe, Douglas Stebila and Thom Wiggers in the publication <a href="https://thomwiggers.nl/publication/kemtls/kemtls.pdf">‘Post-Quantum TLS Without Handshake Signatures’</a>.</p><p>KEMTLS, therefore, achieves the same goals as TLS 1.3 (authentication, confidentiality and integrity) in the face of quantum computers. But there’s one small difference compared to the TLS 1.3 handshake. KEMTLS allows the client to send encrypted application data in the second client-to-server TLS message flow when client authentication is not required, and in the third client-to-server TLS message flow when mutual authentication is required. Note that with TLS 1.3, the server is able to send encrypted and authenticated application data in its first response message (although, in most uses of TLS 1.3, this feature is not actually used). 
With KEMTLS, when client authentication is not required, the client is able to send its first encrypted application data after the same number of handshake round trips as in TLS 1.3.</p><p>Intuitively, the handshake signature in TLS 1.3 proves possession of the private key corresponding to the public key certified in the TLS 1.3 server certificate. For these signature schemes, this is the straightforward way to prove possession; another way to prove possession is through key exchanges. By carefully considering the key derivation sequence, a server can decrypt any messages sent by the client only if it holds the private key corresponding to the certified public key. Therefore, implicit authentication is fulfilled. It is worth noting that KEMTLS still relies on signatures by certificate authorities to authenticate the long-term KEM keys.</p><p>With KEMTLS, application data transmitted during the handshake is implicitly authenticated rather than explicitly (as in TLS 1.3), and has slightly weaker downgrade resilience and forward secrecy; but full downgrade resilience and forward secrecy are achieved once the KEMTLS handshake completes.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/UREaRAF5rjwsKtO4HpZ2I/96dcf49355e3f1f67d88bcb336515dd1/image1-7.png" />
            
            </figure><p>By replacing the handshake signature by a KEM key exchange, we reduce the size of the data transmitted in the example handshake to 8,344 bytes, using Kyber512 and Dilithium II — a significant reduction. We can reduce the handshake size even for algorithms such as the NTRU-assumption based KEM NTRU and signature algorithm Falcon, which have a less-pronounced size gap. Typically, KEM operations are computationally much lighter than signing operations, which makes the reduction even more significant.</p><p>KEMTLS was presented at ACM CCS 2020. You can read more about its details in <a href="https://thomwiggers.nl/publication/kemtls/kemtls.pdf">the paper</a>. It was initially <a href="https://github.com/thomwiggers/kemtls-experiment">implemented in the RustTLS library</a> by Thom Wiggers using optimized C/assembly implementations of the post-quantum algorithms provided by the <a href="https://github.com/PQClean/PQClean">PQClean</a> and <a href="https://openquantumsafe.org/">Open Quantum Safe</a> projects.</p>
    <div>
      <h3>Cloudflare and KEMTLS: the implementation</h3>
      <a href="#cloudflare-and-kemtls-the-implementation">
        
      </a>
    </div>
    <p>As part of our effort to show that TLS can be completely post-quantum safe, we implemented the full KEMTLS handshake in Golang’s TLS 1.3 suite. The implementation was done in several steps:</p><ol><li><p>We first needed to clone our own version of Golang, so we could add different post-quantum algorithms to it. You can find our own version <a href="https://github.com/cloudflare/go/">here</a>. This code is constantly updated with every release of Golang, following <a href="https://github.com/cloudflare/go/wiki/Starting-out">these steps</a>.</p></li><li><p>We needed to implement post-quantum algorithms in Golang, which we did in our own cryptographic library, <a href="https://github.com/cloudflare/circl/tree/master/kem">CIRCL</a>.</p></li><li><p>As we cannot force certificate authorities to use certificates with long-term post-quantum KEM keys, we decided to use <a href="/keyless-delegation/">Delegated Credentials</a>. A delegated credential is a short-lived key that the certificate’s owner has delegated for use in TLS. Therefore, they can be used for post-quantum KEM keys. See its implementation in our Golang code <a href="https://github.com/cloudflare/go/tree/cf-delegated-credentials">here</a>.</p></li><li><p>We implemented mutual auth (client and server authentication) KEMTLS by using <a href="/keyless-delegation/">Delegated Credentials</a> for the authentication process. See its implementation in our Golang code <a href="https://github.com/cloudflare/go/tree/cf-pq-kemtls">here</a>. You can also check its <a href="https://github.com/cloudflare/go/blob/cf-pq-kemtls/src/crypto/tls/delegated_credentials_test.go#L774">test</a> for an overview of how it works.</p></li></ol><p>Implementing KEMTLS was a straightforward process, although it did require changes to the way Golang handles a TLS 1.3 handshake and how the key schedule works.</p><p>A “regular” TLS 1.3 handshake in Golang (from the server perspective) looks like this:</p>
            <pre><code>func (hs *serverHandshakeStateTLS13) handshake() error {
    c := hs.c

    // For an overview of the TLS 1.3 handshake, see RFC 8446, Section 2.
    if err := hs.processClientHello(); err != nil {
        return err
    }
    if err := hs.checkForResumption(); err != nil {
        return err
    }
    if err := hs.pickCertificate(); err != nil {
        return err
    }
    c.buffering = true
    if err := hs.sendServerParameters(); err != nil {
        return err
    }
    if err := hs.sendServerCertificate(); err != nil {
        return err
    }
    if err := hs.sendServerFinished(); err != nil {
        return err
    }
    // Note that at this point we could start sending application data without
    // waiting for the client's second flight, but the application might not
    // expect the lack of replay protection of the ClientHello parameters.
    if _, err := c.flush(); err != nil {
        return err
    }
    if err := hs.readClientCertificate(); err != nil {
        return err
    }
    if err := hs.readClientFinished(); err != nil {
        return err
    }

    atomic.StoreUint32(&amp;c.handshakeStatus, 1)

    return nil
}</code></pre>
            <p>We had to interrupt the process when the server sends the Certificate (<code>sendServerCertificate()</code>) in order to send the KEMTLS-specific messages. In the same way, we had to add the appropriate KEMTLS messages to the client’s handshake. And, as we didn’t want to change the way Golang handles TLS 1.3 too much, we only added one new constant to the configuration, which a server can use to ask for the client’s certificate (the constant is <code>serverConfig.ClientAuth = RequestClientKEMCert</code>).</p><p>The implementation is easy to work with: if a delegated credential or a certificate has a public key of a supported post-quantum KEM algorithm, the handshake will proceed with KEMTLS. If the server requests a client KEMTLS certificate, the handshake will use client KEMTLS authentication.</p>
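<p>For illustration, a server using our fork might opt in along these lines. This is a sketch, not a tested configuration: the <code>RequestClientKEMCert</code> constant is the one mentioned above, but the surrounding setup (the certificate file names, the listener) is ordinary <code>crypto/tls</code> usage and would only compile against the forked Golang:</p>

```go
// Sketch only: requires Cloudflare's forked Golang, not upstream Go.
// "kem-cert.pem"/"kem-key.pem" are hypothetical file names for a
// certificate (or delegated credential) carrying a KEM public key.
cert, err := tls.LoadX509KeyPair("kem-cert.pem", "kem-key.pem")
if err != nil {
    log.Fatal(err)
}
cfg := &tls.Config{
    Certificates: []tls.Certificate{cert},
    // Ask the client for a KEMTLS certificate (fork-specific constant).
    ClientAuth: tls.RequestClientKEMCert,
}
ln, err := tls.Listen("tcp", ":8443", cfg)
if err != nil {
    log.Fatal(err)
}
defer ln.Close()
```

<p>With a standard certificate the same configuration falls back to a regular TLS 1.3 handshake, which is what makes the implementation easy to experiment with.</p>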
    <div>
      <h3>Running the Experiment</h3>
      <a href="#running-the-experiment">
        
      </a>
    </div>
    <p>So, what’s next? We’ll take the code we have produced and run it on actual Cloudflare infrastructure to measure how efficiently it works.</p>
    <div>
      <h3>Thanks</h3>
      <a href="#thanks">
        
      </a>
    </div>
    <p>Many thanks to everyone involved in the project: Chris Wood, Armando Faz-Hernández, Thom Wiggers, Bas Westerbaan, Peter Wu, Peter Schwabe, Goutam Tamvada, Douglas Stebila, Thibault Meunier, and the whole Cloudflare Research team.</p><p><sup>1</sup>It is worth noting that the RSA key transport in TLS ≤1.2 authenticates only the server, by RSA public key encryption, although the server's RSA public key is certified using RSA signatures by Certificate Authorities.</p><p><sup>2</sup>An extension used by the client to indicate which named groups (Elliptic Curve Groups, Finite Field Groups) it supports for key exchange.</p><p><sup>3</sup>An extension which contains the endpoint’s cryptographic parameters.</p><p><sup>4</sup>These numbers, as noted in the paper, are based on the round-2 submissions.</p> ]]></content:encoded>
            <category><![CDATA[Cryptography]]></category>
            <category><![CDATA[TLS]]></category>
            <category><![CDATA[Research]]></category>
            <category><![CDATA[Post-Quantum]]></category>
            <guid isPermaLink="false">5jJAu3Wf2WiwWl9zjhkHTV</guid>
            <dc:creator>Sofía Celi</dc:creator>
            <dc:creator>Thom Wiggers</dc:creator>
        </item>
    </channel>
</rss>