
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/">
    <channel>
        <title><![CDATA[ The Cloudflare Blog ]]></title>
        <description><![CDATA[ Get the latest news on how products at Cloudflare are built, technologies used, and join the teams helping to build a better Internet. ]]></description>
        <link>https://blog.cloudflare.com</link>
        <atom:link href="https://blog.cloudflare.com/" rel="self" type="application/rss+xml"/>
        <language>en-us</language>
        <image>
            <url>https://blog.cloudflare.com/favicon.png</url>
            <title>The Cloudflare Blog</title>
            <link>https://blog.cloudflare.com</link>
        </image>
        <lastBuildDate>Tue, 07 Apr 2026 18:40:21 GMT</lastBuildDate>
        <item>
            <title><![CDATA[Cloudflare targets 2029 for full post-quantum security]]></title>
            <link>https://blog.cloudflare.com/post-quantum-roadmap/</link>
            <pubDate>Tue, 07 Apr 2026 21:00:00 GMT</pubDate>
            <description><![CDATA[ Recent advances in quantum hardware and software have accelerated the timeline on which a quantum attack might happen. Cloudflare is responding by moving our target for full post-quantum security up to 2029. ]]></description>
            <content:encoded><![CDATA[ <p>Cloudflare is accelerating its post-quantum roadmap. We now target <b>2029</b> to be fully post-quantum (PQ) secure including, crucially, post-quantum authentication.</p><p>At Cloudflare, we believe in making the Internet private and secure by default. We started by offering <a href="https://blog.cloudflare.com/introducing-universal-ssl/"><u>free universal SSL certificates</u></a> in 2014, began preparing our <a href="https://blog.cloudflare.com/towards-post-quantum-cryptography-in-tls/"><u>post-quantum migration</u></a> in 2019, and enabled post-quantum encryption for <a href="https://blog.cloudflare.com/post-quantum-for-all/"><u>all websites</u></a> and APIs in 2022, mitigating harvest-now/decrypt-later attacks. While we’re excited by the fact that over <a href="https://radar.cloudflare.com/post-quantum"><u>65% of human traffic</u></a> to Cloudflare is post-quantum encrypted, our work is not done until authentication is also upgraded. Credible new research and rapid industry developments suggest that the deadline to migrate is much sooner than expected. This is a challenge that any organization must treat with urgency, which is why we’re expediting our own internal Q-Day readiness timeline.</p><p>What happened? Last week, Google <a href="https://research.google/blog/safeguarding-cryptocurrency-by-disclosing-quantum-vulnerabilities-responsibly/"><u>announced</u></a> they had drastically improved upon the quantum algorithm to break elliptic curve cryptography, which is widely used to secure the Internet. They did not reveal the algorithm, but instead provided a <a href="https://en.wikipedia.org/wiki/Zero-knowledge_proof"><u>zero-knowledge proof</u></a> that they have one.</p><p>This is not even the biggest breakthrough. That same day, Oratomic <a href="https://arxiv.org/abs/2603.28627"><u>published</u></a> a resource estimate for breaking RSA-2048 and P-256 on a neutral atom computer. 
For P-256, it requires a shockingly low 10,000 qubits. Google’s recent announcement that they would also pursue <a href="https://blog.google/innovation-and-ai/technology/research/neutral-atom-quantum-computers/"><u>neutral atoms</u></a> alongside superconducting quantum computers now makes sense. Although Oratomic explains their basic approach, they still leave out crucial details <a href="https://www.oratomic.com/news/responsible-disclosure-in-the-era-of-quantum-computers"><u>on purpose</u></a>.</p><p>These independent advances prompted Google to accelerate their post-quantum migration timeline to <a href="https://blog.google/innovation-and-ai/technology/safety-security/cryptography-migration-timeline/"><u>2029</u></a>. What’s more, in their announcement and <a href="https://westerbaan.name/~bas/rwpqc2026/sophie.pdf"><u>other talks</u></a>, Google has placed a priority on quantum-secure authentication over mitigating harvest-now/decrypt-later attacks. As we discuss next, this priority indicates that Google is concerned about Q-Day coming as soon as 2030. Following the announcements, IBM Quantum Safe’s CTO is even more pessimistic, and can’t rule out quantum “moonshot attacks” on high-value targets <a href="https://www.linkedin.com/pulse/quantum-timeline-elliptic-curve-cryptography-just-jumped-osborne-k1oae"><u>as early as 2029</u></a>.</p><p>The quantum threat is well known: Q-Day is the day that sufficiently capable quantum computers can break essential cryptography used to protect data and access across systems today. Cryptographically relevant quantum computers (CRQCs) don’t exist yet, but many labs across the world are pursuing different approaches to building one. Until recently, progress on CRQCs has been mostly public, but there is no reason to expect that will continue. Indeed, there is ample reason to expect that progress will leave the public eye.
As quantum computer scientist Scott Aaronson <a href="https://scottaaronson.blog/?p=9425"><u>warned</u></a> at the end of 2025:</p><blockquote><p>[A]t some point, the people doing detailed estimates of how many physical qubits and gates it’ll take to break actually deployed cryptosystems using Shor’s algorithm are going to stop publishing those estimates, if for no other reason than the risk of giving too much information to adversaries. Indeed, for all we know, that point may have been passed already.</p></blockquote><p>That point has now indeed passed.</p>
    <div>
      <h2>Why now: independent progress on three fronts</h2>
    </div>
    <p>It’s worth explaining why it’s so difficult to predict progress in quantum computing. Sudden “quantum” leaps in understanding, like the one we witnessed last week, can occur even if everything happens in the public eye. Simply put, breaking cryptography with a quantum computer requires engineering on three independent fronts: quantum hardware, error correction, and quantum software. Progress on each front compounds progress on the others.</p><p><b>Hardware.</b> There are many competing approaches. We mentioned neutral atoms and superconducting qubits, but there are also ion traps, photonics, and moonshots like topological qubits. Complementary approaches can even be <a href="https://www.caltech.edu/about/news/low-noise-transducers-to-bridge-the-gap-between-microwave-and-optical-qubits"><u>combined</u></a>. Most of these approaches are pursued by several labs around the world. Each has its own distinct engineering challenges to solve before it can scale up. A few years ago, all of them had a long list of open challenges, and it was unclear whether any of them would scale. Today most of them have made good progress. None have been demonstrated to scale yet: if they had, we wouldn’t have a couple of years left. But these approaches are much closer now, especially neutral atoms. To ignore this progress, you’d have to believe that every single approach will hit a wall.</p><p><b>Error correction.</b> All quantum computers are noisy and require error-correcting codes to perform meaningful computation. This adds quite a bit of overhead, though how much depends on the architecture. More noise requires more error correction, but more interestingly, improved qubit connectivity allows for much more efficient codes. For a sense of scale: superconducting quantum computers, which are noisy and have only nearest-neighbor qubit connectivity, typically require around a thousand physical qubits per logical qubit.
We already knew that “reconfigurable” qubits, such as those of neutral-atom machines, allow for error-correcting codes that are an order of magnitude more efficient. Surprisingly, Oratomic showed the advantage is even larger: only about 3-4 physical neutral-atom qubits are required per logical qubit.</p><p><b>Software.</b> Lastly, the quantum algorithms to crack cryptography can be improved. This is Google’s breakthrough: they massively sped up the algorithm to crack P-256. On top of that, Oratomic showed further architecture-specific optimizations for reconfigurable qubits.</p><p>The picture comes together: in 2025 neutral atoms turned out to be more scalable than expected, and now Oratomic has figured out how to build much better error-correcting codes with such highly connected qubits. On top of that, breaking P-256 requires much less work. The result is that Q-Day has been pulled forward significantly from typical <a href="https://blog.cloudflare.com/pq-2025/#is-q-day-always-fifteen-years-away"><u>2035+ timelines</u></a>, with neutral atoms in the lead, and other approaches not far behind.</p><p>In previous blog posts <a href="https://blog.cloudflare.com/pq-2025/#progress-on-quantum-hardware"><u>we’ve discussed</u></a> how different quantum computers compare on physical qubit count and fidelity, measured against the conservative goalpost of cracking RSA-2048 on a superconducting qubit architecture. This analysis gives us a rough idea of how much time we have, and it’s certainly better than tracking <a href="https://bas.westerbaan.name/notes/2026/04/02/factoring.html"><u>quantum factoring records</u></a>, but it misses architecture-specific optimizations and software improvements. What to watch for now is when the final missing <a href="https://westerbaan.name/~bas/rwpqc2026/adam.pdf"><u>capabilities</u></a> for each architecture are achieved.</p>
    <div>
      <h2>It’s time to focus on authentication</h2>
    </div>
    <p>Historically, the industry’s focus on post-quantum cryptography (PQC) has largely been on PQ <i>encryption</i>, which stops harvest-now/decrypt-later (HNDL) attacks. In an HNDL attack, an adversary harvests sensitive encrypted network traffic today and stores it until a future date when it can use a powerful quantum computer to decrypt the data. HNDL attacks are the primary threat when Q-Day is far away. That’s why our focus thus far has been on mitigating this risk by adopting post-quantum encryption by default in our products <a href="https://blog.cloudflare.com/post-quantum-for-all/"><u>since 2022</u></a>. Today, as we mentioned above, <a href="https://developers.cloudflare.com/ssl/post-quantum-cryptography/"><u>most Cloudflare products</u></a> are secure against HNDL attacks, and we’re working to upgrade the rest as we speak.</p><p>The other category of attacks is against authentication: adversaries armed with functioning quantum computers impersonate servers or forge access credentials. If Q-Day is far off, authentication is not urgent: deploying PQ certificates and signatures adds no value yet, only effort.</p><p>An imminent Q-Day flips the script: data leaks are severe, but broken authentication is catastrophic. Any overlooked quantum-vulnerable remote-login key is an access point for an attacker to do as they wish, whether that’s to extort, take down, or snoop on your system. Any automatic software-update mechanism becomes a <a href="https://www.cloudflare.com/learning/security/what-is-remote-code-execution/"><u>remote code execution</u></a> vector. An active quantum attacker has it easy — they only need to find one trusted quantum-vulnerable key to get in.</p><p>When experts in the field of building quantum computers start patching authentication systems, we should all listen. The question is no longer "when will our encrypted data be at risk?" but "how long before an attacker walks in the front door with a quantum-forged key?"</p>
    <div>
      <h3>Prioritizing the most vulnerable systems</h3>
    </div>
    <p>If quantum computers arrive in the next few years, they will be scarce and expensive. Attackers will prioritize high-value targets: long-lived keys that unlock substantial assets or persistent access, such as root certificates, API authentication keys, and code-signing certificates. If an attacker is able to compromise one such key, they retain indefinite access until they are discovered or that key is revoked.</p><p>This suggests long-lived keys should be prioritized. That is certainly true if the quantum attack on a single key is expensive and slow, which is to be expected for the first generation of neutral atom quantum computers. That’s not the case for scalable superconducting quantum computers and later generations of neutral atom quantum computers, which could well crack keys much faster. Such fast CRQCs flip the script again, and an adversary with one might focus purely on HNDL attacks so that their attacks remain undetected. Google’s Sophie Schmieg <a href="https://westerbaan.name/~bas/rwpqc2026/sophie.pdf"><u>compares</u></a> this scenario to the cryptanalysis of Enigma, which changed the direction of World War II.</p><p>Adding support for PQ cryptography is not enough. Systems must disable support for quantum-vulnerable cryptography to be secure against <a href="https://en.wikipedia.org/wiki/Downgrade_attack"><u>downgrade attacks</u></a>. In larger, especially federated, systems such as the web, this is not feasible because not every client (browser) will support post-quantum certificates, and servers need to keep supporting these legacy clients. 
However, downgrade protection for HTTPS is still achievable using “<a href="https://www.chromium.org/Home/chromium-security/post-quantum-auth-roadmap"><u>PQ HSTS</u></a>” and/or <a href="https://westerbaan.name/~bas/rwpqc2026/bas.pdf"><u>certificate transparency</u></a>.</p><p>Disabling quantum-vulnerable cryptography is not the last step: once that is done, all secrets, such as passwords and access tokens, previously exposed in the quantum-vulnerable system need to be rotated. Unlike post-quantum encryption, which takes one big push, migrating to post-quantum authentication has a long dependency chain — not to mention third-party validation and fraud monitoring. This will take years, not months.</p><p>It’s natural for organizations reading this to immediately start cataloguing which internal systems they need to upgrade. But that’s not the end of the story. Q-Day threatens all systems, so it’s important to understand the impact of a potential Q-Day on third-party dependencies, both direct and indirect: not just the third parties you speak cryptography with, but also any third parties that are critical business dependencies, like financial services and utilities.</p><p>With Q-Day approaching on a shorter timeline, post-quantum authentication is the top priority. Long-term keys should be upgraded first. Deep dependency chains, and the fact that everyone has third-party vendors, mean this effort will take on the order of years, not months. Upgrading to post-quantum cryptography is not enough: to prevent downgrades, quantum-vulnerable cryptography must also be turned off.</p>
    <div>
      <h2>Cloudflare’s roadmap to full post-quantum security</h2>
    </div>
    <p>Today, Cloudflare provides post-quantum encryption for the majority of our products, mitigating harvest-now/decrypt-later attacks. This is the product of work we started <a href="https://blog.cloudflare.com/introducing-universal-ssl/"><u>over a decade ago</u></a> to protect our customers and the Internet at large.

We are targeting full post-quantum security, including authentication, for our entire product suite by 2029. Here we’re sharing some intermediate milestones we’ve set, subject to change as our understanding of the risk and of the deployment challenges evolves.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/CxLrmxb2dExy2S5FL0ouy/e1ee5616ce31d15ed8e0fd7b40ae2a39/image2.jpg" />
          </figure>
    <div>
      <h2>What we recommend</h2>
    </div>
    <p>For businesses, we recommend making post-quantum support a requirement for any procurement. Common best practices, like keeping software updated and automating certificate issuance, are meaningful and will get you pretty far. We also recommend assessing critical vendors early to understand what their failure to take action would mean for your business.</p><p>For regulatory agencies and governments: leading by setting early timelines has been crucial for industry-wide progress so far. We are now in a pivotal position where fragmentation in standards and effort, between and within jurisdictions, could put progress at risk. We recommend that governments assign and empower a lead agency to coordinate the migration on a clear timeline, stay security-focused, and promote the use of existing international standards. Governments need not panic, but can lead the migration with confidence.</p><p>For Cloudflare customers: with respect to our services, you do not need to take any mitigating action. We are following the latest advancements in quantum computing closely and taking proactive steps to protect your data. As we have done in the past, we will turn on post-quantum security by default, with no switches to flip. What we don’t control is the other side: browsers, applications, and origins need to upgrade. Customers who carry corporate network traffic on Cloudflare need not worry: Cloudflare One offers <a href="https://blog.cloudflare.com/post-quantum-sase/"><u>end-to-end protection</u></a> when tunnelling traffic through our post-quantum encrypted infrastructure.</p><p>Privacy and security are table stakes for the Internet. That's why every post-quantum upgrade we build will continue to be available to all customers, on every plan, at <a href="https://blog.cloudflare.com/post-quantum-crypto-should-be-free/"><u>no additional cost</u></a>. 
Making post-quantum security the default is the only way to protect the Internet at scale.</p><p><a href="https://blog.cloudflare.com/introducing-universal-ssl/"><u>Free TLS</u></a> helped encrypt the web. Free post-quantum cryptography will help secure it for what comes next.</p> ]]></content:encoded>
            <category><![CDATA[Post-Quantum]]></category>
            <category><![CDATA[Security]]></category>
            <guid isPermaLink="false">1vlCZshPEwWazkAnJsxN44</guid>
            <dc:creator>Bas Westerbaan</dc:creator>
        </item>
        <item>
            <title><![CDATA[Bringing more transparency to post-quantum usage, encrypted messaging, and routing security]]></title>
            <link>https://blog.cloudflare.com/radar-origin-pq-key-transparency-aspa/</link>
            <pubDate>Fri, 27 Feb 2026 06:00:00 GMT</pubDate>
            <description><![CDATA[ Cloudflare Radar has added new tools for monitoring PQ adoption, KT logs for messaging, and ASPA routing records to track the Internet's migration toward more secure encryption and routing standards.  ]]></description>
            <content:encoded><![CDATA[ <p></p><p>Cloudflare Radar already offers a wide array of <a href="https://radar.cloudflare.com/security/"><u>security insights</u></a> — from application and network layer attacks, to malicious email messages, to digital certificates and Internet routing.</p><p>And today we’re introducing even more. We are launching several new security-related data sets and tools on Radar: </p><ul><li><p>We are extending our post-quantum (PQ) monitoring beyond the client side to now include origin-facing connections. We have also released a new tool to help you check any website's post-quantum encryption compatibility. </p></li><li><p>A new Key Transparency section on Radar provides a public dashboard showing the real-time verification status of Key Transparency Logs for end-to-end encrypted messaging services like WhatsApp, showing when each log was last signed and verified by Cloudflare's Auditor. The page serves as a transparent interface where anyone can monitor the integrity of public key distribution and access the API to independently validate our Auditor’s proofs. </p></li><li><p>Routing Security insights continue to expand with the addition of global, country, and network-level information about the deployment of ASPA, an emerging standard that can help detect and prevent BGP route leaks. </p></li></ul>
    <div>
      <h2>Measuring origin post-quantum support</h2>
    </div>
    
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2gs0x3zMZTxios168jT9xW/179d8959b5e0939835cf6facef797457/1.png" />
          </figure><p>Since <a href="https://x.com/CloudflareRadar/status/1788277817362329983"><u>April 2024</u></a>, we have tracked the aggregate growth of client support for post-quantum encryption on Cloudflare Radar, chronicling its global growth from <a href="https://radar.cloudflare.com/adoption-and-usage?dateStart=2024-01-01&amp;dateEnd=2024-01-31#post-quantum-encryption-adoption"><u>under 3% at the start of 2024</u></a>, to <a href="https://radar.cloudflare.com/adoption-and-usage?dateStart=2026-02-01&amp;dateEnd=2026-02-28#post-quantum-encryption-adoption"><u>over 60% in February 2026</u></a>. And in October 2025, <a href="https://blog.cloudflare.com/pq-2025/#what-you-can-do-today-to-stay-safe-against-quantum-attacks"><u>we added the ability</u></a> for users to <a href="https://radar.cloudflare.com/adoption-and-usage#browser-support"><u>check</u></a> whether their browser supports <a href="https://developers.cloudflare.com/ssl/post-quantum-cryptography/pqc-support/#x25519mlkem768"><code><u>X25519MLKEM768</u></code></a> — a hybrid key exchange algorithm combining classical <a href="https://www.rfc-editor.org/rfc/rfc8410"><code><u>X25519</u></code></a> with <a href="https://nvlpubs.nist.gov/nistpubs/FIPS/NIST.FIPS.203.pdf"><u>ML-KEM</u></a>, a lattice-based post-quantum scheme standardized by NIST. This provides security against both classical and quantum attacks. </p><p>However, post-quantum encryption support on user-to-Cloudflare connections is only part of the story.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/67cvSmOaISIHjrKKRHKPzg/e0ccf032658904fd6beaa7de7340b561/2.png" />
          </figure><p>For content not in our CDN cache, or for uncacheable content, Cloudflare’s edge servers establish a separate connection with a customer’s origin servers to retrieve it. To accelerate the transition to quantum-resistant security for these origin-facing fetches, we <a href="https://blog.cloudflare.com/post-quantum-to-origins/"><u>previously introduced an API</u></a> allowing customers to opt in to preferring post-quantum connections. Today, we’re making post-quantum compatibility of origin servers visible on Radar.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6KvV2meYLEPbNIQyHP6yji/9477a134c8f5f6a7aaecd6257cd59981/3.png" />
          </figure><p>The new origin post-quantum support graph on Radar illustrates the share of customer origins supporting <code>X25519MLKEM768</code>. This data is derived from <a href="https://blog.cloudflare.com/automatically-secure/"><u>our automated TLS scanner,</u></a> which probes TLS 1.3-compatible origins and aggregates the results daily. It is important to note that our scanner tests for support rather than the origin server's specific preference. While an origin may support a post-quantum key exchange algorithm, its local TLS key exchange preference can ultimately dictate the encryption outcome.</p><p>While the headline graph focuses on post-quantum readiness, the scanner also evaluates support for classical key exchange algorithms. Within the Radar <a href="https://radar.cloudflare.com/explorer?dataSet=post_quantum.origin&amp;groupBy=key_agreement#result"><u>Data Explorer view</u></a>, you can also see the full distribution of these supported TLS key exchange methods.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5PBOoQSCcIAQrYezKp1pJU/d4218aba59deef6c21df53856a93040a/4.png" />
          </figure><p>As shown in the graphs above, approximately 10% of origins could benefit from a post-quantum-preferred key agreement today. This represents a significant jump from less than 1% at the start of 2025 — <a href="https://radar.cloudflare.com/explorer?dataSet=post_quantum.origin&amp;groupBy=key_agreement&amp;dt=2025-01-01_2025-12-31"><u>a 10x increase in just over a year</u></a>. We expect this number to grow steadily as the industry continues its migration. This upward trend likely accelerated in 2025 as many server-side TLS libraries, such as <a href="https://openssl-library.org/post/2025-04-08-openssl-35-final-release/"><u>OpenSSL 3.5.0+</u></a>,<a href="https://www.gnutls.org/"><u> GnuTLS 3.8.9+</u></a>, and <a href="https://go.dev/doc/go1.24#cryptotlspkgcryptotls"><u>Go 1.24+</u></a>, enabled hybrid post-quantum key exchange by default, allowing platforms and services to support post-quantum connections simply by upgrading their cryptographic library dependencies.</p><p>In addition to the Radar and Data Explorer graphs, the <a href="https://developers.cloudflare.com/api/resources/radar/subresources/post_quantum/subresources/origin/"><u>origin readiness data is available through the Radar API</u></a> as well.</p><p>As an additional part of our efforts to help the Internet transition to post-quantum cryptography, we are also launching <a href="https://radar.cloudflare.com/post-quantum#website-support"><u>a tool to test whether a specific hostname supports post-quantum encryption</u></a>. These tests can be run against any publicly accessible website, as long as they allow connections from Cloudflare’s <a href="https://www.cloudflare.com/ips/"><u>egress IP address ranges</u></a>. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5dgwK3i7IeLLSUt5xnk4lf/276e25dda3389f6e0ad83a26acd08fec/5.png" />
          </figure><p><sub><i>A screenshot of the tool in Radar to test whether a hostname supports post-quantum encryption.</i></sub></p><p>The tool presents a simple form where users can enter a hostname (such as <a href="https://radar.cloudflare.com/post-quantum?host=cloudflare.com%3A443"><code><u>cloudflare.com</u></code></a> or <a href="https://radar.cloudflare.com/post-quantum?host=www.wikipedia.org%3A443"><code><u>www.wikipedia.org</u></code></a>) and optionally specify a custom port (the default is <a href="https://www.iana.org/assignments/service-names-port-numbers/service-names-port-numbers.xhtml?search=443"><u>443, the standard HTTPS port</u></a>). After clicking "Test", the result displays a tag indicating PQ support status alongside the negotiated TLS key exchange algorithm. If the server prefers PQ secure connections, a green "PQ" tag appears with a message confirming the connection is "post-quantum secure." Otherwise, a red tag indicates the connection is "not post-quantum secure", showing the classical algorithm that was negotiated.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3rfEG4dMlwR4FJkaKXTRWF/8cab135242057ce57f3b0e4a92be4cec/6.png" />
          </figure>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/PXu3kjzwhVkb29kIFREOn/41785c06297e0667ff9e2b261ae9b819/7.png" />
          </figure><p>Under the hood, this tool uses <a href="https://developers.cloudflare.com/containers/"><u>Cloudflare Containers</u></a> — a new capability that allows running container workloads alongside Workers. Since the Workers runtime is not exposed to details of the underlying TLS handshake, Workers cannot initiate TLS scans. Therefore, we created a Go container that leverages the <a href="https://pkg.go.dev/crypto/tls"><code><u>crypto/tls</u></code></a> package's support for post-quantum compatibility checks. The container runs on-demand and performs the actual handshake to determine the negotiated TLS key exchange algorithm, returning results through the <a href="https://developers.cloudflare.com/api/resources/radar/subresources/post_quantum/subresources/tls/methods/support/"><u>Radar API</u></a>.</p><p>With the addition of these origin-facing insights, complementing the existing client-facing insights, we have moved all the post-quantum content to <a href="https://radar.cloudflare.com/post-quantum"><u>its own section on Radar</u></a>. </p>
    <div>
      <h2>Securing E2EE messaging systems with Key Transparency</h2>
    </div>
    
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/71b8HJK1iT0udJscvkqqI4/778efb329047fca017ff2cf4153330ad/8.png" />
          </figure><p><a href="https://www.cloudflare.com/learning/privacy/what-is-end-to-end-encryption/"><u>End-to-end encrypted (E2EE)</u></a> messaging apps like WhatsApp and Signal have become essential tools for private communication, relied upon by billions of people worldwide. These apps use <a href="https://www.cloudflare.com/learning/ssl/how-does-public-key-encryption-work/"><u>public-key cryptography</u></a> to ensure that only the sender and recipient can read the contents of their messages — not even the messaging service itself. However, there's an often-overlooked vulnerability in this model: users must trust that the messaging app is distributing the correct public keys for each contact.</p><p>If an attacker were able to substitute an incorrect public key in the messaging app's database, they could intercept messages intended for someone else — all without the sender knowing.</p><p>Key Transparency addresses this challenge by creating an auditable, append-only log of public keys — similar in concept to <a href="https://radar.cloudflare.com/certificate-transparency"><u>Certificate Transparency</u></a> for TLS certificates. Messaging apps publish their users' public keys to a transparency log, and independent third parties can verify and vouch that the log has been constructed correctly and consistently over time. In September 2024, Cloudflare <a href="https://blog.cloudflare.com/key-transparency/"><u>announced</u></a> such a Key Transparency auditor for WhatsApp, providing an independent verification layer that helps ensure the integrity of public key distribution for the messaging app's billions of users.</p><p>Today, we're publishing Key Transparency audit data in a new <a href="https://radar.cloudflare.com/key-transparency"><u>Key Transparency section</u></a> on Cloudflare Radar. 
This section showcases the Key Transparency logs that Cloudflare audits, giving researchers, security professionals, and curious users a window into the health and activity of these critical systems.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1LZ1DUzv0SCgBa0XqDURKP/26ccd8b0741073895cbb52aa7f1d5643/image11.png" />
          </figure><p>The new page launches with two monitored logs: WhatsApp and Facebook Messenger Transport. Each monitored log is displayed as a card containing the following information:</p><ul><li><p><b>Status:</b> Indicates whether the log is online, in initialization, or disabled. An "online" status means the log is actively publishing key updates into epochs that Cloudflare audits. (An epoch represents a set of updates applied to the key directory at a specific time.)</p></li><li><p><b>Last signed epoch:</b> The most recent epoch that has been published by the messaging service's log and acknowledged by Cloudflare. By clicking on the eye icon, users can view the full epoch data in JSON format, including the epoch number, timestamp, cryptographic digest, and signature.</p></li><li><p><b>Last verified epoch:</b> The most recent epoch that Cloudflare has verified. Verification involves checking that the transition of the transparency log data structure from the previous epoch to the current one represents a valid tree transformation — ensuring the log has been constructed correctly. The verification timestamp indicates when Cloudflare completed its audit.</p></li><li><p><b>Root:</b> The current root hash of the <a href="https://github.com/facebook/akd"><u>Auditable Key Directory (AKD)</u></a> tree. This hash cryptographically represents the entire state of the key directory at the current epoch. 
As with the epoch fields, users can click to view the complete JSON response from the auditor.</p></li></ul><p>The data shown on the page is also available via the Key Transparency Auditor API, with endpoints for <a href="https://developers.cloudflare.com/key-transparency/api/auditor-information/"><u>auditor information</u></a> and <a href="https://developers.cloudflare.com/key-transparency/api/namespaces/"><u>namespaces</u></a>.</p><p>If you would like to perform audit proof verification yourself, you can follow the instructions in our <a href="https://blog.cloudflare.com/key-transparency/"><u>Auditing Key Transparency blog post</u></a>. We hope that these use cases are the first of many that we publish in this Key Transparency section in Radar — if your company or organization is interested in auditing for your public key or related infrastructure, you can <a href="https://www.cloudflare.com/lp/privacy-edge/"><u>reach out to us here</u></a>.</p>
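To make the verification step concrete, here is a deliberately simplified sketch of what an auditor checks: each epoch's digest must be a valid commitment to the previous epoch's digest plus that epoch's batch of key updates. This uses a plain hash chain rather than the real AKD Merkle tree, and all names and structures are illustrative, not the actual WhatsApp or AKD wire format.

```python
import hashlib
import json

def epoch_digest(prev_digest: bytes, updates: list[tuple[str, str]]) -> bytes:
    """Toy commitment: hash the previous epoch's digest together with this
    epoch's (user, public_key) updates. The real AKD commits to a Merkle
    tree; this hash chain only illustrates the append-only idea."""
    h = hashlib.sha256()
    h.update(prev_digest)
    h.update(json.dumps(sorted(updates)).encode())
    return h.digest()

def audit(epochs: list[dict]) -> bool:
    """Verify that every published epoch digest is consistent with the
    previous digest plus the claimed updates (a valid transition)."""
    prev = b"\x00" * 32  # agreed-upon genesis value
    for e in epochs:
        if epoch_digest(prev, e["updates"]) != e["digest"]:
            return False
        prev = e["digest"]
    return True

# A service publishes two epochs; the auditor checks them.
e1_updates = [("alice", "pk-a1")]
e1 = {"updates": e1_updates, "digest": epoch_digest(b"\x00" * 32, e1_updates)}
e2_updates = [("bob", "pk-b1"), ("alice", "pk-a2")]
e2 = {"updates": e2_updates, "digest": epoch_digest(e1["digest"], e2_updates)}

assert audit([e1, e2])
# Tampering with a published key breaks verification of that epoch.
e2_bad = {"updates": [("bob", "pk-evil")], "digest": e2["digest"]}
assert not audit([e1, e2_bad])
```

The same idea scales up in the real system: because each epoch commits to the one before it, a log operator cannot rewrite history without every subsequent epoch failing the auditor's check.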
    <div>
      <h2>Tracking RPKI ASPA adoption</h2>
      <a href="#tracking-rpki-aspa-adoption">
        
      </a>
    </div>
    
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2LAbrwY9ziVbe1BzfUyl7K/821a40f86c62dd9b44f7bcaee018dd28/10.png" />
          </figure><p>While the <a href="https://www.cloudflare.com/learning/security/glossary/what-is-bgp/"><u>Border Gateway Protocol (BGP)</u></a> is the backbone of Internet routing, it was designed without built-in mechanisms to verify the validity of the paths it propagates. This inherent trust has long left the global network vulnerable to route leaks and hijacks, where traffic is accidentally or maliciously detoured through unauthorized networks.</p><p>Although <a href="https://en.wikipedia.org/wiki/Resource_Public_Key_Infrastructure"><u>RPKI</u></a> and <a href="https://www.arin.net/resources/manage/rpki/roas/"><u>Route Origin Authorizations (ROAs)</u></a> have successfully hardened the origin of routes, they cannot verify the path traffic takes between networks. This is where <a href="https://datatracker.ietf.org/doc/draft-ietf-sidrops-aspa-verification/"><u>ASPA (Autonomous System Provider Authorization)</u></a><b> </b>comes in. ASPA extends RPKI protection by allowing an <a href="https://www.cloudflare.com/learning/network-layer/what-is-an-autonomous-system/"><u>Autonomous System (AS)</u></a> to cryptographically sign a record listing the networks authorized to propagate its routes upstream. By validating these Customer-to-Provider relationships, ASPA allows systems to detect invalid path announcements with confidence and react accordingly.</p><p>While the specific IETF standard remains <a href="https://datatracker.ietf.org/doc/draft-ietf-sidrops-aspa-verification/"><u>in draft</u></a>, the operational community is moving fast. 
Support for creating ASPA objects has already landed in the portals of Regional Internet Registries (RIRs) like <a href="https://www.arin.net/announcements/20260120/"><u>ARIN</u></a> and <a href="https://labs.ripe.net/author/tim_bruijnzeels/aspa-in-the-rpki-dashboard-a-new-layer-of-routing-security/"><u>RIPE NCC</u></a>, and validation logic is available in major software routing stacks like <a href="https://www.undeadly.org/cgi?action=article;sid=20231002135058"><u>OpenBGPD</u></a> and <a href="https://bird.network.cz/?get_doc&amp;v=20&amp;f=bird-5.html"><u>BIRD</u></a>.</p><p>To provide better visibility into the adoption of this emerging standard, we have added comprehensive RPKI ASPA support to the <a href="https://radar.cloudflare.com/routing"><u>Routing section</u></a> of Cloudflare Radar. Tracking these records globally allows us to understand how quickly the industry is moving toward better path validation.</p>
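As a rough illustration of how ASPA records enable path checks, the sketch below classifies each customer-to-provider hop of an upstream AS path as valid, invalid, or unknown. The record set and ASNs (from the reserved/private-use ranges) are made up, and the real algorithm in draft-ietf-sidrops-aspa-verification also handles the down-ramp and peering cases; this covers only the simplest upstream direction.

```python
# Hypothetical ASPA records: customer ASN -> set of authorized provider ASNs.
ASPA = {
    64512: {64496, 64497},   # AS64512 authorizes AS64496 and AS64497 upstream
    64513: {64512},          # AS64513 authorizes only AS64512 upstream
}

def hop_state(customer: int, provider: int) -> str:
    providers = ASPA.get(customer)
    if providers is None:
        return "unknown"     # this customer has published no ASPA record
    return "valid" if provider in providers else "invalid"

def verify_upstream_path(as_path: list[int]) -> str:
    """Walk the path from the origin (last element) toward the receiver,
    treating each adjacent pair as a customer -> provider hop."""
    states = [hop_state(c, p) for p, c in zip(as_path, as_path[1:])]
    if "invalid" in states:
        return "invalid"
    return "unknown" if "unknown" in states else "valid"

# AS64513 -> AS64512 -> AS64496: every hop is authorized.
print(verify_upstream_path([64496, 64512, 64513]))  # valid
# AS64513 -> AS64512 -> AS64666: AS64512 never authorized AS64666.
print(verify_upstream_path([64666, 64512, 64513]))  # invalid
```

The "unknown" state matters in practice: until most networks publish ASPA records, verifiers can only flag hops that contradict an existing record, which is why tracking adoption is so useful.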
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6SI6A5vd2bAp3QnBAsJFmZ/24e11445eb0309252d759e88dbf2ba62/11.png" />
          </figure><p>Our new ASPA deployment view allows users to examine the growth of ASPA adoption over time, with the ability to visualize trends across the five <a href="https://en.wikipedia.org/wiki/Regional_Internet_registry"><u>Regional Internet Registries</u></a> (RIRs) based on AS registration. You can view the entire history of ASPA entries, dating back to October 1, 2023, or zoom into specific date ranges to correlate spikes in adoption with industry events, such as the introduction of ASPA features on ARIN and RIPE NCC online dashboards.</p><p>Beyond aggregate trends, we have also introduced a granular, searchable explorer for real-time ASPA content. This table view allows you to inspect the current state of ASPA records, searchable by AS number, AS name, or by filtering for only providers or customer ASNs. This allows network operators to verify that their records are published correctly and to view other networks’ configurations.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/K97G5TC7O1MYwkvFbrdTl/85b27f807401f85d2bbe140f1611a034/12.png" />
          </figure><p>We have also integrated ASPA data directly into the country/region routing pages. Users can now track how different locations are progressing in securing their infrastructure, based on the associated ASPA records from the customer ASNs registered locally.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6mhZyfrHexdo1GDAoKZEd7/44b63675595a01939fa4748210d8c482/13.png" />
          </figure><p>On individual AS pages, we have updated the Connectivity section. Now, when viewing the connections of a network, you may see a visual indicator for "ASPA Verified Provider." This annotation confirms that an ASPA record exists authorizing that specific upstream connection, providing an immediate signal of routing hygiene and trust.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3lVJY4fZWv3KaFdKwLHfAV/aeb2bc27bdccb6a9025345dbaed5b762/14.png" />
          </figure><p>For ASes that have deployed ASPA, we now display a complete list of authorized provider ASNs along with their details. Beyond the current state, Radar also provides a detailed timeline of ASPA activity involving the AS. This history distinguishes between changes initiated by the AS itself ("As customer") and records created by others designating it as a provider ("As provider"), allowing users to immediately identify when specific routing authorizations were established or modified.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/ZIlAn2l0sDTLCyEMMcBI9/871b8d7abffe17b3aee060502eaa4c1c/15.png" />
          </figure><p>Visibility is an essential first step toward broader adoption of emerging routing security protocols like ASPA. By surfacing this data, we aim to help operators deploy protections and assist researchers in tracking the Internet's progress toward a more secure routing path. For those who need to integrate this data into their own workflows or perform deeper analysis, we are also exposing these metrics programmatically. Users can now access ASPA content snapshots, historical timeseries, and detailed changes data using the newly introduced endpoints in the<a href="https://developers.cloudflare.com/api/resources/radar/subresources/bgp/subresources/rpki/subresources/aspa/"> <u>Cloudflare Radar API</u></a>.</p>
    <div>
      <h2>As security evolves, so does our data</h2>
      <a href="#as-security-evolves-so-does-our-data">
        
      </a>
    </div>
    <p>Internet security continues to evolve, with new approaches, protocols, and standards being developed to ensure that information, applications, and networks remain secure. The security data and insights available on Cloudflare Radar will continue to evolve as well. The new sections highlighted above serve to expand existing routing security, transparency, and post-quantum insights already available on Cloudflare Radar. </p><p>If you share any of these new charts and graphs on social media, be sure to tag us: <a href="https://x.com/CloudflareRadar"><u>@CloudflareRadar</u></a> (X), <a href="https://noc.social/@cloudflareradar"><u>noc.social/@cloudflareradar</u></a> (Mastodon), and <a href="https://bsky.app/profile/radar.cloudflare.com"><u>radar.cloudflare.com</u></a> (Bluesky). If you have questions or comments, or suggestions for data that you’d like to see us add to Radar, you can reach out to us on social media, or contact us via <a><u>email</u></a>.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5jAzDXss7PvszWkwGC0q2g/df14de40bf268052fac11239952fc1ed/16.png" />
          </figure><p></p> ]]></content:encoded>
            <category><![CDATA[Radar]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[Privacy]]></category>
            <category><![CDATA[Post-Quantum]]></category>
            <category><![CDATA[Routing]]></category>
            <category><![CDATA[Research]]></category>
            <guid isPermaLink="false">1Iy1Qvw9TsOhRwgjUYqFxO</guid>
            <dc:creator>David Belson</dc:creator>
            <dc:creator>Mingwei Zhang</dc:creator>
            <dc:creator>André Jesus</dc:creator>
            <dc:creator>Suleman Ahmad</dc:creator>
            <dc:creator>Sabina Zejnilovic</dc:creator>
            <dc:creator>Thibault Meunier</dc:creator>
            <dc:creator>Mari Galicer</dc:creator>
        </item>
        <item>
            <title><![CDATA[Cloudflare One is the first SASE offering modern post-quantum encryption across the full platform]]></title>
            <link>https://blog.cloudflare.com/post-quantum-sase/</link>
            <pubDate>Mon, 23 Feb 2026 06:00:00 GMT</pubDate>
            <description><![CDATA[ We’ve upgraded Cloudflare One to support post-quantum encryption by implementing the latest IETF drafts for hybrid ML-KEM into our Cloudflare IPsec product. This extends post-quantum encryption across all major Cloudflare One on-ramps and off-ramps. ]]></description>
<content:encoded><![CDATA[ <p>During Security Week 2025, we launched the industry’s first cloud-native<a href="https://www.cloudflare.com/press/press-releases/2025/cloudflare-advances-industrys-first-cloud-native-quantum-safe-zero-trust/"> <u>post-quantum Secure Web Gateway (SWG) and Zero Trust solution</u></a>, a major step towards securing enterprise network traffic sent from end user devices to public and private networks.</p><p>But this is only part of the equation. To truly secure the future of enterprise networking, you need a complete <a href="https://www.cloudflare.com/learning/access-management/what-is-sase/"><u>Secure Access Service Edge (SASE)</u></a>. </p><p>Today, we complete the equation: Cloudflare One is the first SASE platform to support modern standards-compliant post-quantum (PQ) encryption in our Secure Web Gateway, and across Zero Trust and Wide Area Network (WAN) use cases. More specifically, Cloudflare One now offers post-quantum hybrid ML-KEM (Module-Lattice-based Key-Encapsulation Mechanism) across all major on-ramps and off-ramps.</p><p>To get there, we added support for post-quantum encryption to our <a href="https://developers.cloudflare.com/magic-wan/reference/gre-ipsec-tunnels/"><u>Cloudflare IPsec</u></a> (our cloud-native WAN-as-a-Service) and <a href="https://developers.cloudflare.com/magic-wan/configuration/connector/"><u>Cloudflare One Appliance</u></a> (our physical or virtual WAN appliance that establishes Cloudflare IPsec connections). Cloudflare IPsec uses the <a href="https://www.cloudflare.com/learning/network-layer/what-is-ipsec/"><u>IPsec</u></a> protocol to establish encrypted tunnels from a customer’s network to Cloudflare’s global network, while IP <a href="https://www.cloudflare.com/learning/cdn/glossary/anycast-network/"><u>Anycast</u></a> is used to automatically route that tunnel to the nearest Cloudflare data center. 
Cloudflare IPsec simplifies configuration and provides high availability; if a specific data center becomes unavailable, traffic is automatically rerouted to the closest healthy data center. Cloudflare IPsec runs at the scale of our global network, and supports site-to-site across a WAN as well as outbound connections to the Internet.</p><p>The <a href="https://developers.cloudflare.com/magic-wan/configuration/connector/"><u>Cloudflare One Appliance</u></a> upgrade is generally available as of appliance version 2026.2.0. The <a href="https://developers.cloudflare.com/magic-wan/reference/gre-ipsec-tunnels/"><u>Cloudflare IPsec</u></a> upgrade is in closed beta, and you can request access by adding your name to our <a href="https://www.cloudflare.com/security-week/pq-ipsec-beta/"><u>closed beta list</u></a>.</p>
    <div>
      <h2>Post-quantum cryptography matters now</h2>
      <a href="#post-quantum-cryptography-matters-now">
        
      </a>
    </div>
<p>Quantum threats are not a "next decade" problem. Here is why our customers are prioritizing <a href="https://www.cloudflare.com/learning/ssl/quantum/what-is-post-quantum-cryptography/"><u>post-quantum cryptography (PQC)</u></a> today:</p><p><b>The deadline is approaching. </b>At the end of 2024, the National Institute of Standards and Technology (NIST) sent a <a href="https://nvlpubs.nist.gov/nistpubs/ir/2024/NIST.IR.8547.ipd.pdf"><u>clear signal</u></a> (that has been <a href="https://www.bsi.bund.de/EN/Service-Navi/Presse/Pressemitteilungen/Presse2024/241127_Post-Quantum_Cryptography.html"><u>echoed</u></a> by other <a href="https://www.ncsc.gov.uk/guidance/pqc-migration-timelines"><u>agencies</u></a>): the era of classical public-key cryptography is coming to an end. NIST set a 2030 deadline for deprecating RSA and Elliptic Curve Cryptography (ECC) and <a href="https://www.cloudflare.com/pqc/"><u>transitioning to PQC</u></a> that cannot be broken by powerful quantum computers. Organizations that haven't begun their migration risk being out of compliance and vulnerable as the deadline nears.</p><p><b>Upgrades have historically been tricky. </b>While 2030 might seem far away, upgrading cryptographic algorithms is notoriously difficult. History has shown us that deprecating cryptography can take decades: we found examples of <a href="https://blog.cloudflare.com/radius-udp-vulnerable-md5-attack/"><u>MD5 causing problems 20 years after it was deprecated</u></a>. This lack of crypto agility — the ability to easily swap out cryptographic algorithms — is a major bottleneck. 
By integrating PQ encryption directly into <a href="https://www.cloudflare.com/zero-trust/"><u>Cloudflare One</u></a>, our SASE platform, we provide built-in crypto agility, simplifying how organizations offer remote access and site-to-site connectivity.</p><p><b>Data may already be at risk.</b> Finally, "Harvest Now, Decrypt Later" is a present and persistent threat, where attackers harvest sensitive network traffic today and then store it until quantum computers become powerful enough to decrypt it. If your data has a shelf life of more than a few years (e.g. financial information, health data, state secrets) it is already at risk unless it is protected by PQ encryption.</p>
    <div>
      <h3>The two migrations on the road to quantum safety: key agreement and digital signatures</h3>
      <a href="#the-two-migrations-on-the-road-to-quantum-safety-key-agreement-and-digital-signatures">
        
      </a>
    </div>
    <p>Transitioning network traffic to post-quantum cryptography (PQC) requires an overhaul of two cryptographic primitives: key agreement and digital signatures.  </p><p><b>Migration 1: Key establishment. </b>Key agreement allows two parties to establish a shared secret over an insecure channel; the shared secret is then used to encrypt network traffic, resulting in post-quantum encryption. The industry has largely converged on ML-KEM (Module-Lattice-based Key-Encapsulation Mechanism) as the standard PQ key agreement protocol. </p><p>ML-KEM has been widely adopted for use in <a href="https://www.cloudflare.com/learning/ssl/transport-layer-security-tls/"><u>TLS</u></a>, usually deployed alongside classical Elliptic Curve Diffie Hellman (ECDHE), where the key used to encrypt network traffic is derived by mixing the outputs of the ML-KEM and ECDHE key agreements. (This is also known as “hybrid ML-KEM”). Well over <a href="https://radar.cloudflare.com/adoption-and-usage#post-quantum-encryption"><u>60% of human-generated TLS traffic</u></a> to Cloudflare’s network is currently protected with hybrid ML-KEM. The transition to hybrid ML-KEM has been successful because it:</p><ul><li><p>stops "harvest-now, decrypt-later" attacks</p></li><li><p>does not require specialized hardware or specialized physical connectivity between client and server, unlike approaches like <a href="https://blog.cloudflare.com/you-dont-need-quantum-hardware/"><u>Quantum Key Distribution (QKD)</u></a></p></li><li><p>has <a href="https://blog.cloudflare.com/you-dont-need-quantum-hardware/"><u>little impact on performance</u></a>, even for short-lived TLS connections</p></li></ul><p>Because ML-KEM runs in <i>parallel </i>with classical ECDHE, there is no reduction in security and compliance as compared to the classical ECDHE approach.  </p><p><b>Migration 2: Digital signatures. 
</b>Meanwhile, digital signatures and certificates protect authenticity, stopping active adversaries from impersonating the server to the client. Unfortunately, PQ signatures are currently much larger than their classical ECC counterparts, which has slowed their adoption. Fortunately, the migration to PQ signatures is less urgent, because PQ signatures are designed to stop active adversaries armed with powerful quantum computers, which are not known to exist yet. Thus, while Cloudflare is actively contributing to the standardization and rollout of PQ digital signatures, the current Cloudflare IPsec upgrade focuses on upgrading key establishment to hybrid ML-KEM.  </p><p>The U.S. Cybersecurity &amp; Infrastructure Security Agency (CISA) recognized the nature of these two migrations in its <a href="https://www.cisa.gov/resources-tools/resources/product-categories-technologies-use-post-quantum-cryptography-standards"><u>January 2026 publication</u></a>, “Product Categories for Technologies That Use Post-Quantum Cryptography Standards.”</p>
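Conceptually, the "mixing" in Migration 1 works like this: the two shared secrets are concatenated and run through a key-derivation function, so an attacker must break both ECDHE and ML-KEM to recover the session key. A minimal Python sketch, using stand-in byte strings in place of real X25519 and ML-KEM outputs:

```python
import hashlib
import hmac

def hkdf_extract(salt: bytes, ikm: bytes) -> bytes:
    """HKDF-Extract (RFC 5869): HMAC the input keying material under a salt."""
    return hmac.new(salt, ikm, hashlib.sha256).digest()

def hybrid_secret(ecdhe_ss: bytes, mlkem_ss: bytes) -> bytes:
    """Illustrative hybrid mixing: concatenate both shared secrets and
    extract with HKDF. Real TLS 1.3 hybrid groups concatenate the two
    secrets similarly before the key schedule runs."""
    return hkdf_extract(b"\x00" * 32, mlkem_ss + ecdhe_ss)

# Stand-in secrets (real ones come from X25519 and ML-KEM-768 operations).
ecdhe_ss = b"\x11" * 32
mlkem_ss = b"\x22" * 32
session_key = hybrid_secret(ecdhe_ss, mlkem_ss)

# An attacker must recover *both* inputs: changing either one yields a
# completely different session key.
assert hybrid_secret(b"\x00" * 32, mlkem_ss) != session_key
assert hybrid_secret(ecdhe_ss, b"\x00" * 32) != session_key
```

Because the derivation consumes both secrets, the hybrid construction is at least as strong as its strongest component, which is exactly the "no reduction in security" property described above.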
    <div>
      <h2>Breaking new ground with IPsec </h2>
      <a href="#breaking-new-ground-with-ipsec">
        
      </a>
    </div>
    <p>To achieve a SASE fully protected with post-quantum encryption, we’ve upgraded our Cloudflare IPsec products to support hybrid ML-KEM in the IPsec protocol.</p><p>The IPsec community’s journey toward post-quantum cryptography has been very different from that of TLS. <a href="https://www.cloudflare.com/learning/ssl/transport-layer-security-tls/"><u>TLS</u></a> is the de facto standard for encrypting public Internet traffic at Layer 4  — e.g. from a browser to a <a href="https://www.cloudflare.com/learning/cdn/what-is-a-cdn/"><u>content delivery network (CDN)</u></a> — so security and vendor interoperability are at the forefront of its design. Meanwhile, IPsec is a Layer 3 protocol that commonly connects devices built by the same vendor (e.g. two routers), so interoperability has historically been less of a concern. With this in mind, let’s take a look at IPsec’s journey into the quantum future. </p>
    <div>
      <h3>Pre-Shared Keys? Quantum key distribution?</h3>
      <a href="#pre-shared-keys-quantum-key-distribution">
        
      </a>
    </div>
<p><a href="https://datatracker.ietf.org/doc/html/rfc8784"><u>RFC 8784</u></a>, published in May 2020, was intended to be the post-quantum update to IPsec Internet Key Exchange v2 (IKEv2), which is used to establish the symmetric keys used to encrypt IPsec network traffic. RFC 8784 implies the use of either long-lived pre-shared keys (PSK) or quantum key distribution (QKD). Neither of these approaches is very palatable.</p><p>RFC 8784 proposes mixing a PSK with a key derived from a Diffie-Hellman exchange (DHE), essentially running PSK in hybrid with DHE. This approach protects against harvest-now-decrypt-later attackers, but does not offer <a href="https://blog.cloudflare.com/staying-on-top-of-tls-attacks/#forward-secrecy"><u>forward secrecy</u></a> against quantum adversaries. </p><p><a href="https://blog.cloudflare.com/staying-on-top-of-tls-attacks/#forward-secrecy"><u>Forward secrecy</u></a> is a standard desideratum of key agreement protocols. It ensures that a system is secure even if the long-lived key is leaked. The PSK approach in RFC 8784 is vulnerable to a harvest-now-decrypt-later adversary that also obtains a copy of a long-lived PSK, and can then decrypt traffic in the future (by breaking the DHE key agreement) once powerful quantum computers become available.</p><p>To solve this forward secrecy issue, RFC 8784 can instead be used to mix the key from the classical DHE with a freshly generated key derived from a QKD protocol.</p><p>QKD uses quantum mechanics to establish a shared, secret cryptographic key between two parties. Importantly, for QKD to work, the parties must have specialized hardware or be connected by a dedicated physical connection. This is a <a href="https://blog.cloudflare.com/you-dont-need-quantum-hardware/"><u>significant limitation</u></a>, rendering QKD useless for common Internet use cases like connecting a laptop to a distant server over Wi-Fi. 
These limitations are also why we never invested in deploying QKD for Cloudflare IPsec. The U.S. <a href="https://www.nsa.gov/Cybersecurity/Quantum-Key-Distribution-QKD-and-Quantum-Cryptography-QC/"><u>National Security Agency (NSA)</u></a>, <a href="https://www.bsi.bund.de/EN/Themen/Unternehmen-und-Organisationen/Informationen-und-Empfehlungen/Quantentechnologien-und-Post-Quanten-Kryptografie/quantentechnologien-und-post-quanten-kryptografie_node.html"><u>Germany’s BSI</u></a> and the <a href="https://www.ncsc.gov.uk/whitepaper/quantum-security-technologies"><u>UK National Cyber Security Centre</u></a> have also warned against relying solely on QKD.</p>
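The forward-secrecy gap is easy to see in miniature. In an RFC 8784-style design, session keys are a deterministic function of the long-lived PSK and the per-session DHE secret, so an attacker who records traffic, later breaks DHE with a quantum computer, and obtains the PSK can recompute every past session key. A simplified sketch (the RFC actually re-keys the IKEv2 SK_d, SK_pi, and SK_pr values rather than deriving a single key):

```python
import hashlib
import hmac

def prf(key: bytes, data: bytes) -> bytes:
    """Stand-in for the IKEv2 PRF (here HMAC-SHA256)."""
    return hmac.new(key, data, hashlib.sha256).digest()

def session_key(psk: bytes, dhe_secret: bytes) -> bytes:
    """Simplified RFC 8784-style mixing of a long-lived pre-shared key
    into key material derived from the classical DHE secret."""
    return prf(psk, dhe_secret)

psk = b"long-lived-key-provisioned-out-of-band"
dhe_secret = b"\x42" * 32          # per-session classical DHE output
k = session_key(psk, dhe_secret)

# No forward secrecy against a quantum attacker: anyone who recorded the
# handshake, later recovers dhe_secret by breaking DHE, and steals the
# long-lived PSK can recompute this session key exactly.
assert session_key(psk, dhe_secret) == k
```

Contrast this with an ephemeral post-quantum key agreement like ML-KEM, where there is no single long-lived secret whose compromise retroactively exposes past sessions.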
    <div>
      <h3>But what about interoperability? </h3>
      <a href="#but-what-about-interoperability">
        
      </a>
    </div>
<p><a href="https://datatracker.ietf.org/doc/html/rfc9370"><u>RFC 9370</u></a> landed in May 2023, specifying the use of hybrid key agreement rather than PSK or QKD. But unlike TLS, which only supports using post-quantum ML-KEM in parallel with classical DHE, this IPsec standard allows up to <i>seven different key agreements to run at the same time</i> in parallel with classical Diffie-Hellman. Moreover, it doesn't specify details about what these key agreements should be, leaving it up to the vendors to choose their algorithms and implementations. Palo Alto Networks, for example, took this seriously and built support for over <a href="https://docs.paloaltonetworks.com/compatibility-matrix/reference/supported-cipher-suites/cipher-suites-supported-in-pan-os-11-2/cipher-suites-supported-in-pan-os-11-2-ipsec"><u>seven different PQC ciphersuites</u></a> into its next generation firewall (NGFW), most of which do not interoperate with other vendors and some of which have not yet been standardized by NIST.</p><p>Over the years, TLS has gone in the opposite direction, reducing the number of registered ciphersuites from hundreds in TLS 1.2, down to around five in TLS 1.3. This philosophy of reducing “ciphersuite bloat” is also in line with NIST’s <a href="https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-52r2.pdf"><u>SP 800-52</u></a> from 2019. The rationale for reducing “ciphersuite bloat” includes: </p><ul><li><p>Improved interoperability across vendors and regions</p></li><li><p>Lower risk of attacks that exploit downgrades to weak ciphersuites </p></li><li><p>Lower risk of security problems due to misconfiguration</p></li><li><p>Lower risk of implementation flaws by reducing the size of the codebase</p></li></ul><p>This is why we didn’t initially build support for RFC 9370. </p>
    <div>
      <h3>Standards that are finally on the right track</h3>
      <a href="#standards-that-are-finally-on-the-right-track">
        
      </a>
    </div>
<p>It’s also why we were excited when the IPsec community put forth <a href="https://datatracker.ietf.org/doc/draft-ietf-ipsecme-ikev2-mlkem/"><u>draft-ietf-ipsecme-ikev2-mlkem</u></a>. This Internet-Draft standardizes PQ key exchange for IPsec in the same way PQ key exchange has been widely deployed for TLS: hybrid ML-KEM. The new draft fills in the gaps in RFC 9370 by specifying how to run ML-KEM as the additional key exchange in parallel with classical Diffie-Hellman in IKEv2. </p><p>Now that this specification is available, we’ve moved forward with supporting post-quantum IPsec in our Cloudflare IPsec products. </p>
    <div>
      <h2>Cloudflare IPsec goes post-quantum</h2>
      <a href="#cloudflare-ipsec-goes-post-quantum">
        
      </a>
    </div>
<p>Cloudflare IPsec is a WAN <a href="https://www.cloudflare.com/learning/network-layer/network-as-a-service-naas/"><u>Network-as-a-Service</u></a> solution that replaces legacy private network architectures by connecting data centers, branch offices, and cloud VPCs to Cloudflare’s global IP Anycast network. </p><p>With Cloudflare IPsec, Cloudflare’s network acts as the <a href="https://datatracker.ietf.org/doc/html/rfc5996"><u>IKEv2</u></a> Responder, awaiting connection requests from an IPsec initiator, which is a branch connector device in the customer’s network. Cloudflare IPsec supports IPsec sessions initiated by branch connectors that include our own Cloudflare One Appliance, along with branch connectors from a <a href="https://developers.cloudflare.com/magic-wan/reference/device-compatibility/"><u>diverse set of vendors</u></a>, including Cisco, Juniper, Palo Alto Networks, Fortinet, Aruba, and others.</p><p>We’ve implemented production hybrid ML-KEM support in the Cloudflare IPsec IKEv2 Responder, as specified in <a href="https://datatracker.ietf.org/doc/draft-ietf-ipsecme-ikev2-mlkem/"><u>draft-ietf-ipsecme-ikev2-mlkem</u></a>. The draft requires the first key exchange to use classical Diffie-Hellman. The derived key is used to encrypt a second key exchange that is run using ML-KEM. Finally, the keys derived by the two exchanges are mixed and the result is used to secure the data plane traffic in IPsec ESP (Encapsulating Security Payload) mode. ESP mode uses symmetric cryptography and is thus already quantum safe without any additional upgrades.  
We’ve tested our implementation against the IPsec Initiator in the <a href="https://strongswan.org/"><u>strongSwan</u></a> reference implementation.</p><p>You can see the ciphersuite used in the IKEv2 negotiation by viewing the Cloudflare <a href="https://developers.cloudflare.com/logs/logpush/logpush-job/datasets/account/ipsec_logs/"><u>IPsec logs</u></a>.</p><p>We chose to implement hybrid ML-KEM rather than “pure” ML-KEM, i.e. only ML-KEM without DHE running in parallel, for two reasons. First, we’ve used hybrid ML-KEM across all of our other Cloudflare products, since this is the approach adopted across the TLS community. And second, it provides “belt-and-suspenders” security: ML-KEM provides protection against quantum harvest-now-decrypt-later attacks, while DHE provides a tried-and-true safeguard against non-quantum adversaries.</p>
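The three-step flow described above can be sketched as follows. These are toy stand-ins for the real primitives (no actual Diffie-Hellman or ML-KEM math, and no quantum resistance); the point is only the shape of the protocol: a classical exchange first, an ML-KEM exchange protected under the resulting key, and then both secrets mixed to key ESP.

```python
import hashlib
import hmac
import secrets

# Toy stand-ins for the real primitives — illustration only.

def toy_dh() -> bytes:
    """Stand-in for the classical (EC)DH exchange in IKE_SA_INIT."""
    return secrets.token_bytes(32)

def toy_kem_encap(pk: bytes) -> tuple[bytes, bytes]:
    """Stand-in for ML-KEM encapsulation: returns (ciphertext, shared secret)."""
    ss = secrets.token_bytes(32)
    return hashlib.sha256(pk + ss).digest(), ss

def prf_plus(key: bytes, data: bytes) -> bytes:
    """Stand-in for the IKEv2 prf+ key-derivation step."""
    return hmac.new(key, data, hashlib.sha256).digest()

# Step 1: the initial exchange runs classical DH; its key protects what follows.
sk_classical = toy_dh()

# Step 2: a follow-up exchange carries ML-KEM, encrypted under the step-1 key,
# so the additional key-exchange payloads travel inside the protected channel.
responder_pk = secrets.token_bytes(32)
ciphertext, sk_mlkem = toy_kem_encap(responder_pk)

# Step 3: mix both secrets; the result keys the ESP data plane.
sk_final = prf_plus(sk_classical, sk_mlkem)
assert sk_final != sk_classical and sk_final != sk_mlkem
```

Because `sk_final` depends on both exchanges, recording the handshake and later breaking only the classical step is not enough to recover the ESP keys.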
    <div>
      <h3>An invitation for interoperability</h3>
      <a href="#an-invitation-for-interoperability">
        
      </a>
    </div>
<p>The full value of this implementation can be realized only via interoperability. For this reason, we are inviting other vendors that are building out support for IPsec Initiators in their branch connectors per <a href="https://datatracker.ietf.org/doc/draft-ietf-ipsecme-ikev2-mlkem/"><u>draft-ietf-ipsecme-ikev2-mlkem</u></a> to test against our Cloudflare IPsec implementation. Cloudflare customers looking to test out interoperability with third-party branch connectors while we are in closed beta can <a href="https://www.cloudflare.com/security-week/pq-ipsec-beta/"><u>sign up here</u></a>. We plan to reach general availability and build out interoperability with additional vendors as more of them come online with support for <a href="https://datatracker.ietf.org/doc/draft-ietf-ipsecme-ikev2-mlkem/"><u>draft-ietf-ipsecme-ikev2-mlkem</u></a>.</p>
    <div>
      <h3>Quantum-safe hardware: the Cloudflare One Appliance</h3>
      <a href="#quantum-safe-hardware-the-cloudflare-one-appliance">
        
      </a>
    </div>
<p>Many of our customers purchase their branch connector (hardware or virtualized) from Cloudflare, rather than a third-party vendor. That’s why the <a href="https://developers.cloudflare.com/magic-wan/configuration/connector/"><u>Cloudflare One Appliance</u></a> — our plug-and-play appliance that connects your local network to Cloudflare One — has also been upgraded with post-quantum encryption.</p><p>Cloudflare One Appliance does not use IKEv2 for key agreement or session establishment, opting instead to rely on TLS. The appliance periodically initiates a TLS handshake with the Cloudflare edge, shares a symmetric secret over the resulting TLS connection, then injects that symmetric secret into the ESP layer of IPsec, which encrypts and authenticates the IPsec data plane traffic. This design allowed us to avoid building out IKEv2 Initiator logic, and makes the appliance easier to maintain using our existing TLS libraries. </p><p>Thus, upgrading Cloudflare One Appliance to PQ encryption was just a matter of upgrading TLS 1.2 to TLS 1.3 with hybrid ML-KEM — something we’ve done many times on different products at Cloudflare. </p>
    <div>
      <h3>How do I turn this on? And what does it cost?</h3>
      <a href="#how-do-i-turn-this-on-and-what-does-it-cost">
        
      </a>
    </div>
<p>As always, this upgrade to Cloudflare IPsec comes at no extra cost to our customers. Because we believe that a secure and private Internet should be accessible to all, we’re on a mission to include PQC in all our <a href="https://blog.cloudflare.com/post-quantum-cryptography-ga/"><u>products</u></a>, without <a href="https://blog.cloudflare.com/you-dont-need-quantum-hardware/"><u>specialized hardware</u></a>, at <a href="https://blog.cloudflare.com/post-quantum-crypto-should-be-free/"><u>no extra cost</u></a> to our customers and end users.</p><p>Customers using the Cloudflare One Appliance obtained this upgrade to PQC in version 2026.2.0 (released 2026-02-11). The upgrade is pushed automatically (with no customer action required) according to each appliance’s configured interrupt window.</p><p>For customers using Cloudflare IPsec with another vendor’s branch connector appliance, we will interoperate with those appliances as additional support for <a href="https://datatracker.ietf.org/doc/draft-ietf-ipsecme-ikev2-mlkem/"><u>draft-ietf-ipsecme-ikev2-mlkem</u></a> comes online. <a href="https://www.cloudflare.com/security-week/pq-ipsec-beta/"><u>You can also contact us</u></a> directly to get access to the closed beta and request that we interoperate with a specific vendor’s branch connector.</p>
    <div>
      <h2>The full picture: post-quantum SASE</h2>
      <a href="#the-full-picture-post-quantum-sase">
        
      </a>
    </div>
    <p>The value proposition for a post-quantum SASE is clear: organizations can obtain immediate end-to-end protection for their private network traffic by sending it over tunnels protected by hybrid ML-KEM. This protects traffic from  harvest-now-decrypt-later attacks, even if the individual applications in the corporate network are not yet upgraded to PQC.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/xK6FEbQYw9vLJKgx0bHp6/c4b584cc95adc5f8320d03c86b8fe38c/Cloudflare-s_post-quantum_SASE_2.png" />
          </figure><p>The diagram above shows how post-quantum hybrid ML-KEM is offered in various Cloudflare One network configurations. It includes the following on-ramps:</p><ul><li><p>clientless (<a href="https://blog.cloudflare.com/post-quantum-zero-trust/"><u>TLS 1.3 with hybrid ML-KEM</u></a>, assuming the browser supports hybrid ML-KEM)</p></li><li><p>Cloudflare One Client (<a href="https://blog.cloudflare.com/post-quantum-warp/"><u>MASQUE over TLS 1.3 with hybrid ML-KEM</u></a> initiated by the device client)</p></li><li><p>Cloudflare IPsec on-ramp (as described in this blog)</p></li></ul><p>and the following off-ramps:</p><ul><li><p>Cloudflare Tunnel off-ramp (<a href="https://blog.cloudflare.com/post-quantum-tunnel/"><u>TLS 1.3 with hybrid ML-KEM tunnel</u></a> initiated by the cloudflared server agent)</p></li><li><p>Cloudflare IPsec off-ramp (as described in this blog)</p></li></ul><p>The diagram below highlights a sample network configuration that uses the Cloudflare One Client on-ramp to connect a device to a server behind a Cloudflare One Appliance off-ramp. The end user's device connects to the Cloudflare network (link 1) using <a href="https://blog.cloudflare.com/post-quantum-warp/"><u>MASQUE with hybrid ML-KEM</u></a>. The traffic then travels across Cloudflare’s global network over TLS 1.3 with hybrid ML-KEM (link 2). Traffic then leaves the Cloudflare network over a post-quantum Cloudflare IPsec link (link 3) that is terminated at a Cloudflare One Appliance. Finally, it connects to a server inside the customer’s environment. Traffic is protected by post-quantum cryptography as it travels over the public Internet, even if the server itself does not support post-quantum cryptography.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2mrF4j8VDBGOzGEQNCobo4/b78ae6bef6f1152d92c9f63102aa8491/image4.png" />
          </figure><p>Finally, we note that traffic that on-ramps to Cloudflare One and then egresses to the public Internet can also be protected by our post-quantum <a href="https://developers.cloudflare.com/cloudflare-one/traffic-policies/http-policies/tls-decryption/#post-quantum-support"><u>Cloudflare Gateway</u></a>, our Secure Web Gateway (SWG). Here’s a diagram showing how the SWG works:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3j3uN3x5oyUZBWbXIECxzA/9f2fd83cc567c8e511de08dd86ee462f/image2.png" />
          </figure><p>As discussed in <a href="https://blog.cloudflare.com/post-quantum-zero-trust/#quantum-safe-swg-end-to-end-pqc-for-access-to-third-party-web-applications"><u>an earlier blog post</u></a>, our SWG already supports hybrid ML-KEM on traffic from the SWG to the origin server (as long as the origin supports hybrid ML-KEM), and on traffic from the client to the SWG (if the client supports hybrid ML-KEM, which is the case for most modern browsers). Importantly, any traffic that on-ramps to the SWG via a device that has the Cloudflare One Client installed is still protected with hybrid ML-KEM — even if the web browser itself does not yet support post-quantum cryptography. This is due to the <a href="https://blog.cloudflare.com/post-quantum-warp/"><u>post-quantum MASQUE tunnel</u></a> that the Cloudflare One Client establishes to Cloudflare’s global network. The same is true of traffic that on-ramps to the SWG via a post-quantum Cloudflare IPsec tunnel.</p><p>Putting it all together, Cloudflare One now offers post-quantum encryption on our TLS, MASQUE, and IPsec on-ramps and off-ramps, for private network traffic, and for traffic that egresses to the public Internet via our SWG.</p>
    <div>
      <h2>The future is quantum-safe</h2>
      <a href="#the-future-is-quantum-safe">
        
      </a>
    </div>
    <p>By completing the post-quantum SASE equation with Cloudflare IPsec and the Cloudflare One Appliance, we have extended post-quantum encryption across all our major on-ramps and off-ramps. We have intentionally chosen the path of interoperability and simplicity — the hybrid ML-KEM approach that the IETF and NIST have championed, rather than locking our customers into proprietary implementations, “ciphersuite bloat,” or unnecessary hardware upgrades.</p><p>This is the promise of Cloudflare One: a SASE platform that is not only faster and more reliable than the legacy architectures it replaces, but one that provides post-quantum encryption. Whether you are securing a remote worker’s browser or a multi-gigabit data center link, you can now do so with the confidence that your data is protected from harvest-now/decrypt-later attacks and other future-looking threats.</p><p><a href="https://www.cloudflare.com/lp/pqc/"><u>Sign up here</u></a> to get a full demo of our post-quantum capabilities across the Cloudflare One SASE platform, or <a href="https://www.cloudflare.com/security-week/pq-ipsec-beta/"><u>register here</u></a> to get on the list for the Cloudflare IPsec closed beta. We are proud to lead the industry into this new era of cryptography, and we invite you to join us in building a scalable, standards-compliant, and post-quantum Internet.</p> ]]></content:encoded>
            <category><![CDATA[Post-Quantum]]></category>
            <category><![CDATA[Zero Trust]]></category>
            <category><![CDATA[Cryptography]]></category>
            <category><![CDATA[Cloudflare One]]></category>
            <category><![CDATA[IPsec]]></category>
            <guid isPermaLink="false">4R1725ncbcxxmKyZueXmhw</guid>
            <dc:creator>Sharon Goldberg</dc:creator>
            <dc:creator>Amos Paul</dc:creator>
            <dc:creator>David Gauch</dc:creator>
        </item>
        <item>
            <title><![CDATA[Building a serverless, post-quantum Matrix homeserver]]></title>
            <link>https://blog.cloudflare.com/serverless-matrix-homeserver-workers/</link>
            <pubDate>Tue, 27 Jan 2026 14:00:00 GMT</pubDate>
            <description><![CDATA[ As a proof of concept, we ported a Matrix homeserver to Cloudflare Workers — delivering encrypted messaging at the edge with automatic post-quantum cryptography. ]]></description>
            <content:encoded><![CDATA[ <p><sup><i>* This post was updated at 11:45 a.m. Pacific time to clarify that the use case described here is a proof of concept and a personal project. Some sections have been updated for clarity.</i></sup></p><p>Matrix is the gold standard for decentralized, end-to-end encrypted communication. It powers government messaging systems, open-source communities, and privacy-focused organizations worldwide. </p><p>For the individual developer, however, the appeal is often closer to home: bridging fragmented chat networks (like Discord and Slack) into a single inbox, or simply ensuring your conversation history lives on infrastructure you control. Functionally, Matrix operates as a decentralized, eventually consistent state machine. Instead of a central server pushing updates, homeservers exchange signed JSON events over HTTP, using a conflict resolution algorithm to merge these streams into a unified view of the room's history.</p><p><b>But there is a "tax" to running it. </b>Traditionally, operating a Matrix <a href="https://matrix.org/homeserver/about/"><u>homeserver</u></a> has meant accepting a heavy operational burden. You have to provision virtual private servers (VPS), tune PostgreSQL for heavy write loads, manage Redis for caching, configure <a href="https://www.cloudflare.com/learning/cdn/glossary/reverse-proxy/"><u>reverse proxies</u></a>, and handle rotation for <a href="https://www.cloudflare.com/application-services/products/ssl/">TLS certificates</a>. It’s a stateful, heavy beast that demands to be fed time and money, whether you’re using it a lot or a little.</p><p>We wanted to see if we could eliminate that tax entirely.</p><p><b>Spoiler: We could.</b> In this post, we’ll explain how we ported a Matrix homeserver to <a href="https://workers.cloudflare.com/"><u>Cloudflare Workers</u></a>. 
The resulting proof of concept is a serverless architecture where operations disappear, costs scale to zero when idle, and every connection is protected by <a href="https://www.cloudflare.com/learning/ssl/quantum/what-is-post-quantum-cryptography/"><u>post-quantum cryptography</u></a> by default. You can view the source code and <a href="https://github.com/nkuntz1934/matrix-workers"><u>deploy your own instance directly from GitHub</u></a>.</p><a href="https://deploy.workers.cloudflare.com/?url=https://github.com/nkuntz1934/matrix-workers"><img src="https://deploy.workers.cloudflare.com/button" /></a>
    <div>
      <h2>From Synapse to Workers</h2>
      <a href="#from-synapse-to-workers">
        
      </a>
    </div>
    <p>Our starting point was <a href="https://github.com/matrix-org/synapse"><u>Synapse</u></a>, the Python-based reference Matrix homeserver designed for traditional deployments: PostgreSQL for persistence, Redis for caching, and the filesystem for media.</p><p>Porting it to Workers meant questioning every storage assumption we’d taken for granted.</p><p>The challenge was storage. Traditional homeservers assume strong consistency via a central SQL database. Cloudflare <a href="https://developers.cloudflare.com/durable-objects/"><u>Durable Objects</u></a> offers a powerful alternative. This primitive gives us the strong consistency and atomicity required for Matrix state resolution, while still allowing the application to run at the edge.</p><p>We ported the core Matrix protocol logic — event authorization, room state resolution, cryptographic verification — to TypeScript using the Hono framework. D1 replaces PostgreSQL, KV replaces Redis, R2 replaces the filesystem, and Durable Objects handle real-time coordination.</p><p>Here’s how the mapping worked out:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1JTja38UZRbFygluawrnz1/9bce290e3070155c734e874c17051551/BLOG-3101_2.png" />
          </figure>
    <div>
      <h2>From monolith to serverless</h2>
      <a href="#from-monolith-to-serverless">
        
      </a>
    </div>
    <p>Moving to Cloudflare Workers brings several advantages for a developer: simple deployment, lower costs, low latency, and built-in security.</p><p><b>Easy deployment: </b>A traditional Matrix deployment requires server provisioning, PostgreSQL administration, Redis cluster management, <a href="https://www.cloudflare.com/application-services/solutions/certificate-lifecycle-management/">TLS certificate renewal</a>, load balancer configuration, monitoring infrastructure, and on-call rotations.</p><p>With Workers, deployment is simply: wrangler deploy. Workers handles TLS, load balancing, DDoS protection, and global distribution. </p><p><b>Usage-based costs: </b>Traditional homeservers cost money whether anyone is using them or not. Workers pricing is request-based, so you pay when you’re using it, but costs drop to near zero when everyone’s asleep. </p><p><b>Lower latency globally:</b> A traditional Matrix homeserver in us-east-1 adds 200ms+ latency for users in Asia or Europe. Workers, meanwhile, run in 300+ locations worldwide. When a user in Tokyo sends a message, the Worker executes in Tokyo. </p><p><b>Built-in security: </b>Matrix homeservers can be high-value targets: They handle encrypted communications, store message history, and authenticate users. Traditional deployments require careful hardening: firewall configuration, rate limiting, DDoS mitigation, WAF rules, IP reputation filtering.</p><p>Workers provide all of this by default. </p>
    <div>
      <h3>Post-quantum protection </h3>
      <a href="#post-quantum-protection">
        
      </a>
    </div>
    <p>Cloudflare deployed post-quantum hybrid key agreement across all <a href="https://www.cloudflare.com/learning/ssl/why-use-tls-1.3/"><u>TLS 1.3</u></a> connections in <a href="https://blog.cloudflare.com/post-quantum-for-all/"><u>October 2022</u></a>. Every connection to our Worker automatically negotiates X25519MLKEM768 — a hybrid combining classical X25519 with ML-KEM, the post-quantum algorithm standardized by NIST.</p><p>Classical cryptography relies on mathematical problems that are hard for traditional computers but trivial for quantum computers running Shor’s algorithm. ML-KEM is based on lattice problems that remain hard even for quantum computers. The hybrid approach means both algorithms must fail for the connection to be compromised.</p>
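<p>As a rough illustration of why the hybrid construction helps, here is a minimal combiner sketch in Rust. This is not the TLS 1.3 key schedule: the concatenate-then-hash combiner and the standard library’s <code>DefaultHasher</code> stand in for HKDF, and the fixed byte arrays stand in for the X25519 and ML-KEM shared secrets; everything here is illustrative only.</p>

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Toy stand-in for a KDF; real TLS 1.3 derives keys with HKDF
// over the handshake transcript.
fn kdf(input: &[u8]) -> u64 {
    let mut h = DefaultHasher::new();
    input.hash(&mut h);
    h.finish()
}

// Hybrid combiner: the session secret depends on the concatenation
// of both shared secrets, so an attacker must recover *both* the
// classical and the post-quantum secret to learn the session key.
fn hybrid_secret(ss_x25519: &[u8], ss_mlkem: &[u8]) -> u64 {
    let mut concat = Vec::with_capacity(ss_x25519.len() + ss_mlkem.len());
    concat.extend_from_slice(ss_x25519);
    concat.extend_from_slice(ss_mlkem);
    kdf(&concat)
}

fn main() {
    // Placeholder secrets; the real values come out of the X25519
    // and ML-KEM exchanges during the handshake.
    let classical = [1u8; 32];
    let post_quantum = [2u8; 32];
    let session = hybrid_secret(&classical, &post_quantum);

    // Knowing only one component is not enough to reconstruct the key.
    assert_ne!(session, hybrid_secret(&[0u8; 32], &post_quantum));
    assert_ne!(session, hybrid_secret(&classical, &[0u8; 32]));
}
```

<p>Because the derived key depends on both inputs, both the classical and the post-quantum algorithm must fail before the connection is compromised.</p>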
    <div>
      <h3>Following a message through the system</h3>
      <a href="#following-a-message-through-the-system">
        
      </a>
    </div>
    <p>Understanding where encryption happens matters for security architecture. When someone sends a message through our homeserver, here’s the actual path:</p><p>The sender’s client takes the plaintext message and encrypts it with Megolm — Matrix’s end-to-end encryption. This encrypted payload then gets wrapped in TLS for transport. On Cloudflare, that TLS connection uses X25519MLKEM768, making it quantum-resistant.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/wGGYZ4LYspufH1c4psmL1/28acad8ab8e6535525dda413669c2d74/BLOG-3101_3.png" />
          </figure><p>The Worker terminates TLS, but what it receives is still encrypted — the Megolm ciphertext. We store that ciphertext in D1, index it by room and timestamp, and deliver it to recipients. But we never see the plaintext. The message “Hello, world” exists only on the sender’s device and the recipient’s device.</p><p>When the recipient syncs, the process reverses. They receive the encrypted payload over another quantum-resistant TLS connection, then decrypt locally with their Megolm session keys.</p>
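<p>The layering can be sketched with a toy example. A trivial XOR “cipher” stands in for both Megolm and TLS record protection here; it provides no real security and only shows which layer is removed where:</p>

```rust
// Toy XOR "cipher" standing in for real encryption (Megolm inner
// layer, TLS outer layer). Illustrative only, never a real cipher.
fn xor(data: &[u8], key: u8) -> Vec<u8> {
    data.iter().map(|b| b ^ key).collect()
}

fn main() {
    let plaintext = b"Hello, world".to_vec();
    const MEGOLM_KEY: u8 = 0x5a; // held only by sender/recipient devices
    const TLS_KEY: u8 = 0xa7;    // per-hop transport key (PQ-hybrid TLS)

    // Sender device: end-to-end encrypt first, then transport-encrypt.
    let e2ee_ciphertext = xor(&plaintext, MEGOLM_KEY);
    let on_the_wire = xor(&e2ee_ciphertext, TLS_KEY);

    // Edge/Worker: terminating TLS recovers only the Megolm
    // ciphertext; the plaintext never appears server-side.
    let seen_by_server = xor(&on_the_wire, TLS_KEY);
    assert_eq!(seen_by_server, e2ee_ciphertext);
    assert_ne!(seen_by_server, plaintext);

    // Recipient device: removes the inner layer locally.
    assert_eq!(xor(&seen_by_server, MEGOLM_KEY), plaintext);
}
```

<p>Stripping the transport layer at the edge recovers only the inner ciphertext, which is exactly what the Worker stores and forwards.</p>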
    <div>
      <h3>Two layers, independent protection</h3>
      <a href="#two-layers-independent-protection">
        
      </a>
    </div>
    <p>This protects via two encryption layers that operate independently:</p><p>The <a href="https://www.cloudflare.com/learning/ssl/transport-layer-security-tls/"><u>transport layer (TLS)</u></a> protects data in transit. It’s encrypted at the client and decrypted at the Cloudflare edge. With X25519MLKEM768, this layer is now post-quantum.</p><p>The <a href="https://www.cloudflare.com/learning/ddos/what-is-layer-7/"><u>application layer</u></a> (Megolm E2EE) protects message content. It’s encrypted on the sender’s device and decrypted only on recipient devices. This uses classical Curve25519 cryptography.</p>
    <div>
      <h3>Who sees what</h3>
      <a href="#who-sees-what">
        
      </a>
    </div>
    <p>Any Matrix homeserver operator — whether running Synapse on a VPS or this implementation on Workers — can see metadata: which rooms exist, who’s in them, when messages were sent. But no one in the infrastructure chain can see the message content, because the E2EE payload is encrypted on sender devices before it ever hits the network. Cloudflare terminates TLS and passes requests to your Worker, but both see only Megolm ciphertext. Media in encrypted rooms is encrypted client-side before upload, and private keys never leave user devices.</p>
    <div>
      <h3>What traditional deployments would need</h3>
      <a href="#what-traditional-deployments-would-need">
        
      </a>
    </div>
    <p>Achieving post-quantum TLS on a traditional Matrix deployment would require upgrading OpenSSL or BoringSSL to a version supporting ML-KEM, configuring cipher suite preferences correctly, testing client compatibility across all Matrix apps, monitoring for TLS negotiation failures, staying current as PQC standards evolve, and handling clients that don’t support PQC gracefully.</p><p>With Workers, it’s automatic. Chrome, Firefox, and Edge all support X25519MLKEM768. Mobile apps using platform TLS stacks inherit this support. The security posture improves as Cloudflare’s <a href="https://developers.cloudflare.com/ssl/post-quantum-cryptography/"><u>PQC</u></a> deployment expands — no action required on our part.</p>
    <div>
      <h2>The storage architecture that made it work</h2>
      <a href="#the-storage-architecture-that-made-it-work">
        
      </a>
    </div>
    <p>The key insight from porting Tuwunel was that different data needs different consistency guarantees. We use each Cloudflare primitive for what it does best.</p>
    <div>
      <h3>D1 for the data model</h3>
      <a href="#d1-for-the-data-model">
        
      </a>
    </div>
    <p>D1 stores everything that needs to survive restarts and support queries: users, rooms, events, device keys. Over 25 tables covering the full Matrix data model. </p>
            <pre><code>CREATE TABLE events (
	event_id TEXT PRIMARY KEY,
	room_id TEXT NOT NULL,
	sender TEXT NOT NULL,
	event_type TEXT NOT NULL,
	state_key TEXT,
	content TEXT NOT NULL,
	origin_server_ts INTEGER NOT NULL,
	depth INTEGER NOT NULL
);
</code></pre>
            <p><a href="https://www.cloudflare.com/developer-platform/products/d1/">D1’s SQLite foundation</a> meant we could port Tuwunel’s queries with minimal changes. Joins, indexes, and aggregations work as expected.</p><p>We learned one hard lesson: D1’s eventual consistency breaks foreign key constraints. A write to rooms might not be visible when a subsequent write to events checks the foreign key. We removed all foreign keys and enforce referential integrity in application code.</p>
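<p>A minimal sketch of what that application-level check can look like, using an in-memory stand-in for the two tables (the real implementation issues D1 queries; the type and field names here are hypothetical):</p>

```rust
use std::collections::HashSet;

// In-memory stand-in for the rooms/events tables; the real code
// queries D1 instead. Names are illustrative only.
struct Store {
    rooms: HashSet<String>,
    events: Vec<(String, String)>, // (event_id, room_id)
}

impl Store {
    // Application-level referential integrity: refuse to insert an
    // event whose room row is not (yet) visible, instead of relying
    // on a SQL foreign key that eventual consistency can break.
    fn insert_event(&mut self, event_id: &str, room_id: &str) -> Result<(), String> {
        if !self.rooms.contains(room_id) {
            return Err(format!("room {room_id} not found; retry or reject"));
        }
        self.events.push((event_id.to_string(), room_id.to_string()));
        Ok(())
    }
}

fn main() {
    let mut store = Store { rooms: HashSet::new(), events: Vec::new() };
    store.rooms.insert("!room:example.org".to_string());

    assert!(store.insert_event("$evt1", "!room:example.org").is_ok());
    assert!(store.insert_event("$evt2", "!missing:example.org").is_err());
}
```

<p>Centralizing inserts behind a check like this keeps event rows from referencing room rows that are not yet visible — the guarantee the dropped foreign keys used to provide.</p>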
    <div>
      <h3>KV for ephemeral state</h3>
      <a href="#kv-for-ephemeral-state">
        
      </a>
    </div>
    <p>OAuth authorization codes live for 10 minutes, while refresh tokens last for a session.</p>
            <pre><code>// Store OAuth code with 10-minute TTL
kv.put(&amp;format!("oauth_code:{}", code), &amp;token_data)?
	.expiration_ttl(600)
	.execute()
	.await?;</code></pre>
            <p>KV’s global distribution means OAuth flows work fast regardless of where users are located.</p>
    <div>
      <h3>R2 for media</h3>
      <a href="#r2-for-media">
        
      </a>
    </div>
    <p>Matrix media maps directly to R2: upload an image, get back a content-addressed URL, and egress is free.</p>
    <div>
      <h3>Durable Objects for atomicity</h3>
      <a href="#durable-objects-for-atomicity">
        
      </a>
    </div>
    <p>Some operations can’t tolerate eventual consistency. When a client claims a one-time encryption key, that key must be atomically removed. If two clients claim the same key, encrypted session establishment fails.</p><p>Durable Objects provide single-threaded, strongly consistent storage:</p>
            <pre><code>#[durable_object]
pub struct UserKeysObject {
	state: State,
	env: Env,
}

impl UserKeysObject {
	async fn claim_otk(&amp;self, algorithm: &amp;str) -&gt; Result&lt;Option&lt;Key&gt;&gt; {
    	// Atomic within single DO - no race conditions possible
    	let mut keys: Vec&lt;Key&gt; = self.state.storage()
        	.get("one_time_keys")
        	.await
        	.ok()
        	.flatten()
        	.unwrap_or_default();

    	if let Some(idx) = keys.iter().position(|k| k.algorithm == algorithm) {
        	let key = keys.remove(idx);
        	self.state.storage().put("one_time_keys", &amp;keys).await?;
        	return Ok(Some(key));
    	}
    	Ok(None)
	}
}</code></pre>
            <p>We use UserKeysObject for E2EE key management, RoomObject for real-time room events like typing indicators and read receipts, and UserSyncObject for to-device message queues. The rest flows through D1.</p>
    <div>
      <h3>Complete end-to-end encryption, complete OAuth</h3>
      <a href="#complete-end-to-end-encryption-complete-oauth">
        
      </a>
    </div>
    <p>Our implementation supports the full Matrix E2EE stack: device keys, cross-signing keys, one-time keys, fallback keys, key backup, and dehydrated devices.</p><p>Modern Matrix clients use OAuth 2.0/OIDC instead of legacy password flows. We implemented a complete OAuth provider, with dynamic client registration, PKCE authorization, RS256-signed JWT tokens, token refresh with rotation, and standard OIDC discovery endpoints.
</p>
            <pre><code>curl https://matrix.example.com/.well-known/openid-configuration
{
  "issuer": "https://matrix.example.com",
  "authorization_endpoint": "https://matrix.example.com/oauth/authorize",
  "token_endpoint": "https://matrix.example.com/oauth/token",
  "jwks_uri": "https://matrix.example.com/.well-known/jwks.json"
}
</code></pre>
            <p>Point Element or any Matrix client at the domain, and it discovers everything automatically.</p>
    <div>
      <h2>Sliding Sync for mobile</h2>
      <a href="#sliding-sync-for-mobile">
        
      </a>
    </div>
    <p>Traditional Matrix sync transfers megabytes of data on the initial connection, draining mobile battery and data plans.</p><p>Sliding Sync lets clients request exactly what they need. Instead of downloading everything, clients get the 20 most recent rooms with minimal state. As users scroll, they request more ranges. The server tracks position and sends only deltas.</p><p>Combined with edge execution, mobile clients can connect and render their room list in under 500ms, even on slow networks.</p>
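<p>The windowing idea can be sketched in a few lines of Rust. This is a simplified model of the range requests, not the actual Sliding Sync wire format, and the names are illustrative:</p>

```rust
// A room with its most recent activity timestamp (illustrative model).
#[derive(Clone, Debug)]
struct Room {
    id: String,
    last_activity: u64,
}

// Return the requested window of the recency-sorted room list,
// mirroring how Sliding Sync serves only the range the client asked
// for rather than the whole account state.
fn sliding_window(mut rooms: Vec<Room>, start: usize, end: usize) -> Vec<Room> {
    rooms.sort_by(|a, b| b.last_activity.cmp(&a.last_activity));
    rooms
        .into_iter()
        .skip(start)
        .take(end.saturating_sub(start))
        .collect()
}

fn main() {
    let rooms: Vec<Room> = (0u64..100)
        .map(|i| Room { id: format!("!room{i}:example.org"), last_activity: i })
        .collect();

    // Initial request: only the 20 most recently active rooms.
    let first_page = sliding_window(rooms.clone(), 0, 20);
    assert_eq!(first_page.len(), 20);
    assert_eq!(first_page[0].last_activity, 99);

    // Scrolling: the client asks for the next range.
    let next_page = sliding_window(rooms, 20, 40);
    assert_eq!(next_page[0].last_activity, 79);
}
```

<p>The server only materializes and returns the requested range, so the initial payload stays small no matter how many rooms the account has.</p>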
    <div>
      <h2>The comparison</h2>
      <a href="#the-comparison">
        
      </a>
    </div>
    <p>For a homeserver serving a small team:</p><table><tr><th><p> </p></th><th><p><b>Traditional (VPS)</b></p></th><th><p><b>Workers</b></p></th></tr><tr><td><p>Monthly cost (idle)</p></td><td><p>$20-50</p></td><td><p>&lt;$1</p></td></tr><tr><td><p>Monthly cost (active)</p></td><td><p>$20-50</p></td><td><p>$3-10</p></td></tr><tr><td><p>Global latency</p></td><td><p>100-300ms</p></td><td><p>20-50ms</p></td></tr><tr><td><p>Time to deploy</p></td><td><p>Hours</p></td><td><p>Seconds</p></td></tr><tr><td><p>Maintenance</p></td><td><p>Weekly</p></td><td><p>None</p></td></tr><tr><td><p>DDoS protection</p></td><td><p>Additional cost</p></td><td><p>Included</p></td></tr><tr><td><p>Post-quantum TLS</p></td><td><p>Complex setup</p></td><td><p>Automatic</p></td></tr></table><p><sup>*</sup><sup><i>Based on public rates and metrics published by DigitalOcean, AWS Lightsail, and Linode as of January 15, 2026.</i></sup></p><p>The economics improve further at scale. Traditional deployments require capacity planning and over-provisioning. Workers scale automatically.</p>
    <div>
      <h2>The future of decentralized protocols</h2>
      <a href="#the-future-of-decentralized-protocols">
        
      </a>
    </div>
    <p>We started this as an experiment: could Matrix run on Workers? It can—and the approach can work for other stateful protocols, too.</p><p>By mapping traditional stateful components to Cloudflare’s primitives — Postgres to D1, Redis to KV, mutexes to Durable Objects — we can see that complex applications don’t need complex infrastructure. We stripped away the operating system, the database management, and the network configuration, leaving only the application logic and the data itself.</p><p>Workers offers the sovereignty of owning your data, without the burden of owning the infrastructure.</p><p>I have been experimenting with the implementation and welcome contributions from others interested in this kind of service.</p><p>Ready to build powerful, real-time applications on Workers? Get started with <a href="https://developers.cloudflare.com/workers/"><u>Cloudflare Workers</u></a> and explore <a href="https://developers.cloudflare.com/durable-objects/"><u>Durable Objects</u></a> for your own stateful edge applications. Join our <a href="https://discord.cloudflare.com"><u>Discord community</u></a> to connect with other developers building at the edge.</p> ]]></content:encoded>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Durable Objects]]></category>
            <category><![CDATA[D1]]></category>
            <category><![CDATA[Cloudflare Workers KV]]></category>
            <category><![CDATA[R2]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Rust]]></category>
            <category><![CDATA[WebAssembly]]></category>
            <category><![CDATA[Post-Quantum]]></category>
            <category><![CDATA[Encryption]]></category>
            <guid isPermaLink="false">6VOVAMNwIZ18hMaUlC6aqp</guid>
            <dc:creator>Nick Kuntz</dc:creator>
        </item>
        <item>
            <title><![CDATA[Policy, privacy and post-quantum: anonymous credentials for everyone]]></title>
            <link>https://blog.cloudflare.com/pq-anonymous-credentials/</link>
            <pubDate>Thu, 30 Oct 2025 13:00:00 GMT</pubDate>
            <description><![CDATA[ The world is adopting anonymous credentials for digital privacy, but these systems are vulnerable to quantum computers. This post explores the cryptographic challenges and promising research paths toward building new, quantum-resistant credentials from the ground up. ]]></description>
            <content:encoded><![CDATA[ <p>The Internet is in the midst of one of the most complex transitions in its history: the migration to <a href="https://www.cloudflare.com/en-gb/pqc/"><u>post-quantum (PQ) cryptography</u></a>. Making a system safe against quantum attackers isn't just a matter of replacing elliptic curves and RSA with PQ alternatives, such as <a href="https://csrc.nist.gov/pubs/fips/203/final"><u>ML-KEM</u></a> and <a href="https://csrc.nist.gov/pubs/fips/204/final"><u>ML-DSA</u></a>. These algorithms have higher costs than their classical counterparts, making them unsuitable as drop-in replacements in many situations.</p><p>Nevertheless, we're <a href="https://blog.cloudflare.com/pq-2025/"><u>making steady progress</u></a> on the most important systems. As of this writing, <a href="https://radar.cloudflare.com/adoption-and-usage#post-quantum-encryption"><u>about 50%</u></a> of <a href="https://www.cloudflare.com/learning/ssl/transport-layer-security-tls/"><u>TLS connections</u></a> to Cloudflare's edge are safe against <a href="https://en.wikipedia.org/wiki/Harvest_now,_decrypt_later"><u>harvest-now/decrypt-later attacks</u></a>. Quantum-safe authentication is further out, as it will require more significant changes to how certificates work. Even so, this year we've <a href="https://blog.cloudflare.com/bootstrap-mtc/"><u>taken a major step</u></a> towards making TLS deployable at scale with PQ certificates.</p><p>That said, TLS is only the lowest-hanging fruit. We have come to rely on cryptography in <a href="https://github.com/fancy-cryptography/fancy-cryptography"><u>many more ways</u></a> than key exchange and authentication, and many of them aren’t as easy to migrate. 
In this blog post, we'll take a look at <b>Anonymous Credentials (ACs)</b>.</p><p>ACs solve a common privacy dilemma: how do you prove a specific fact (for example, that you have held a valid driver’s license for more than three years) without over-sharing personal information (like your place of birth)? Such problems are fundamental to a number of use cases, and ACs may provide the foundation we need to make these applications as private as possible.</p><p>Just like for TLS, the central question for ACs is whether there are drop-in PQ replacements for their classical primitives that will work at the scale required, or whether it will be necessary to re-engineer the application to mitigate the cost of PQ.</p><p>We'll take a stab at answering this question in this post. We'll focus primarily on an emerging use case for ACs described in a <a href="https://blog.cloudflare.com/private-rate-limiting/"><u>concurrent post</u></a>: rate-limiting requests from agentic AI platforms and users. This demanding, high-scale use case is the perfect lens through which to evaluate the practical readiness of today's post-quantum research. We'll use it as our guiding problem to measure each cryptographic approach.</p><p>We'll first explore the current landscape of classical AC adoption across the tech industry and the public sector. Then, we’ll discuss what cryptographic researchers are currently looking into on the post-quantum side. Finally, we’ll take a look at what it'll take to bridge the gap between theory and real-world applications.</p><p>While anonymous credentials are only seeing their first real-world deployments in recent years, it is critical to start thinking about the post-quantum challenge concurrently. This isn’t a theoretical, too-soon problem given the harvest-now/decrypt-later threat. If we wait for mass adoption before solving post-quantum anonymous credentials, ACs risk being dead on arrival. 
Fortunately, our survey of the state of the art shows the field is close to a practical solution. Let’s start by reviewing real-world use cases of ACs.</p>
    <div>
      <h2>Real world (classical) anonymous credentials</h2>
      <a href="#real-world-classical-anonymous-credentials">
        
      </a>
    </div>
    <p>In 2026, the European Union is <a href="https://eur-lex.europa.eu/eli/reg/2024/1183/oj"><u>set to launch its digital identity wallet</u></a>, a system that will allow EU citizens, residents, and businesses to digitally attest to their personal attributes. This will enable them, for example, to display their driver’s license on their phone or <a href="https://educatedguesswork.org/posts/age-verification-id/"><u>perform age</u></a> <a href="https://soatok.blog/2025/07/31/age-verification-doesnt-need-to-be-a-privacy-footgun/"><u>verification</u></a>. Cloudflare's use cases for ACs are a bit different and revolve around keeping our customers secure by, for example, rate limiting bots and humans as we <a href="https://blog.cloudflare.com/privacy-pass-standard/"><u>currently do with Privacy Pass</u></a>. The EU wallet is a massive undertaking in identity provisioning, and our work operates at a comparable scale in traffic processing. Both initiatives are working to solve a shared fundamental problem: allowing an entity to prove a specific attribute about themselves without compromising their privacy by revealing more than they have to.</p><p>The EU's goal is a fully mobile, secure, and user-friendly digital ID. The current technical plan is ambitious, as laid out in the <a href="https://ec.europa.eu/digital-building-blocks/sites/spaces/EUDIGITALIDENTITYWALLET/pages/900014854/Version+2.0+of+the+Architecture+and+Reference+Framework+now+available"><u>Architecture Reference Framework (ARF)</u></a>. It defines unlinkability as a key privacy goal: if a user presents attributes multiple times, the recipients must not be able to link those separate presentations to the same user. However, currently proposed solutions fail to achieve this. 
The framework correctly identifies the core problem: attestations contain <i>unique, fixed elements such as hash values, […], public keys, and signatures</i> that colluding entities could store and compare to track individuals.</p><p>In its present form, the ARF's recommendation to mitigate cross-session linkability is <i>limited-time attestations</i>. The framework acknowledges in the text that this would <i>only partially mitigate Relying Party linkability</i>. An alternative proposal that would mitigate linkability risks is single-use credentials. These are not considered at the moment due to <i>complexity and management overhead</i>. The framework therefore leans on <i>organisational and enforcement measures</i> to deter collusion instead of providing a stronger guarantee backed by cryptography.</p><p>This reliance on trust assumptions could become problematic, especially in the sensitive context of digital identity. When asked for feedback, <a href="https://github.com/eu-digital-identity-wallet/eudi-doc-architecture-and-reference-framework/issues/200"><u>cryptographic researchers agree</u></a> that the proper solution would be to adopt anonymous credentials. However, this solution presents a long-term challenge. Well-studied methods for anonymous credentials, such as those based on <a href="https://datatracker.ietf.org/doc/draft-irtf-cfrg-bbs-signatures/"><u>BBS signatures</u></a>, are vulnerable to quantum computers. While some <a href="https://datatracker.ietf.org/doc/rfc9474/"><u>anonymous</u></a> <a href="https://datatracker.ietf.org/doc/draft-schlesinger-cfrg-act/"><u>schemes</u></a> are PQ-unlinkable, meaning that user privacy is preserved even when cryptographically relevant quantum computers exist, new credentials could be forged. 
This may be an attractive target for, say, a nation-state actor.</p><p>New cryptography also faces deployment challenges: in the EU, only approved cryptographic primitives, as listed in the <a href="https://www.sogis.eu/documents/cc/crypto/SOGIS-Agreed-Cryptographic-Mechanisms-1.3.pdf"><u>SOG-IS catalogue</u></a>, can be used. At the time of writing, this catalogue is limited to established algorithms such as RSA or ECDSA. But when it comes to post-quantum cryptography, SOG-IS is <a href="https://www.sogis.eu/documents/cc/crypto/SOGIS-Agreed-Cryptographic-Mechanisms-1.3.pdf"><u>leaving the problem wide open</u></a>.</p><p>The wallet's first deployment will not be quantum-secure. However, with the transition to post-quantum algorithms ahead of us, as soon as 2030 for high-risk use cases per <a href="https://digital-strategy.ec.europa.eu/en/library/coordinated-implementation-roadmap-transition-post-quantum-cryptography"><u>the EU roadmap</u></a>, research into a post-quantum-compatible alternative for anonymous credentials is critical. This will encompass <i>standardizing more cryptography</i>.</p><p>Regarding existing large-scale deployments, the US has allowed digital ID on smartphones since 2024. They <a href="https://www.tsa.gov/digital-id/participating-states"><u>can be used at TSA checkpoints</u></a>, for instance. The <a href="https://www.dhs.gov/science-and-technology/privacy-preserving-digital-credential-wallets-verifiers"><u>Department of Homeland Security lists funding for six privacy-preserving digital credential wallets and verifiers on their website.</u></a> This early exploration and engagement is a positive sign, and highlights the need to plan for privacy-preserving presentations. </p><p>Finally, ongoing efforts at the Internet Engineering Task Force (IETF) aim to build a more private Internet by standardizing advanced cryptographic techniques. 
Active individual drafts (i.e., not yet adopted by a working group), such as <a href="https://datatracker.ietf.org/doc/draft-google-cfrg-libzk/"><u>Longfellow</u></a> and Anonymous Credit Tokens (<a href="https://datatracker.ietf.org/doc/draft-schlesinger-cfrg-act/"><u>ACT</u></a>), and adopted drafts like Anonymous Rate-limited Credentials (<a href="https://datatracker.ietf.org/doc/draft-yun-privacypass-crypto-arc/"><u>ARC</u></a>), propose more flexible multi-show anonymous credentials that incorporate developments over the last several years. At IETF 117 in 2023, <a href="https://www.irtf.org/anrw/2023/slides-117-anrw-sessc-not-so-low-hanging-fruit-security-and-privacy-research-opportunities-for-ietf-protocols-00.pdf"><u>post-quantum anonymous credentials and deployable generic anonymous credentials were presented as a research opportunity</u></a>. Check out our <a href="https://blog.cloudflare.com/private-rate-limiting/"><u>post on rate limiting agents</u></a> for details.</p><p>Before we get into the state-of-the-art for PQ, allow us to try to crystallize a set of requirements for real world applications.</p>
    <div>
      <h3>Requirements</h3>
      <a href="#requirements">
        
      </a>
    </div>
    <p>Given the diversity of use cases, adoption of ACs will be made easier by the fact that they can be built from a handful of powerful primitives. (More on this in our <a href="https://blog.cloudflare.com/private-rate-limiting/"><u>concurrent post</u></a>.) As we'll see in the next section, we don't yet have drop-in PQ alternatives for these kinds of primitives. The "building blocks" of PQ ACs are likely to look quite different, so we need to know something about what we're building towards.</p><p>For our purposes, we can think of an anonymous credential as a kind of fancy <a href="https://en.wikipedia.org/wiki/Blind_signature"><b><u>blind signature</u></b></a>. What's that you ask? A blind signature scheme has two phases: <b>issuance</b>, in which the server signs a message chosen by the client; and <b>presentation</b>, in which the client reveals the message and the signature to the server. The scheme should be <b>unlinkable</b> in the sense that the server can't link any message and signature to the run of the issuance protocol in which it was produced. It should also be <b>unforgeable</b> in the sense that no client can produce a valid signature without interacting with the server.</p><p>The key difference between ACs and blind signatures is that, during presentation of an AC, the client only presents <i>part of the message</i> in plaintext; the rest of the message is kept secret. Typically, the message has three components:</p><ol><li><p>Private <b>state</b>, such as a counter that, for example, keeps track of the number of times the credential was presented. The client would prove to the server that the state is "valid", for example, a counter with value $0 \leq C \leq N$, without revealing $C$. In many situations, it's desirable to allow the server to update this state upon successful presentation, for example, by decrementing the counter. 
In the context of rate limiting, this is the number of requests remaining for a credential.</p></li><li><p>A random value called the <b>nullifier</b> that is revealed to the server during presentation. In rate-limiting, the nullifier prevents a user from spending a credential with a given state more than once.</p></li><li><p>Public <b>attributes</b> known to both the client and server that bind the AC to some application context. For example, this might represent the window of time in which the credential is valid (without revealing the exact time it was issued).</p></li></ol><p>Such ACs are well-suited for rate limiting requests made by the client. Here the idea is to prevent the client from making more than some maximum number of requests during the credential's lifetime. For example, if the presentation limit is 1,000 and the validity window is one hour, then a client can make up to about 0.28 requests/second on average before it gets throttled.</p><p>It's usually desirable to enforce rate limits on a <b>per-origin</b> basis. This means that if the presentation limit is 1,000, then the client can make at most 1,000 requests to each origin that can verify the credential. Moreover, it can do so safely, i.e., without breaking unlinkability across these sites.</p><p>The current generation of ACs being considered for standardization at IETF are only <b>privately verifiable</b>, meaning the server issuing the credential (the <b>issuer</b>) must share a private key with the server verifying the credential (the <b>origin</b>). This will be sufficient for some deployment scenarios, but many will require <b>public verifiability</b>, where the origin only needs the issuer's public key. This is possible with BBS-based credentials, for example.</p><p>Finally, let us say a few words about round complexity. An AC is <b>round optimal</b> if issuance and presentation both complete in a single HTTP request and response. 
In our survey of PQ ACs, we found a number of papers that discovered neat tricks that reduce bandwidth (the total number of bits transferred between the client and server) at the cost of additional rounds. However, for use cases like ours, <b>round optimality</b> is an absolute necessity, especially for presentation. Not only do multiple rounds have a high impact on latency, they also make the implementation far more complex.</p><p>Within these constraints, our goal is to develop PQ ACs that have as low communication cost (i.e., bandwidth consumption) and runtime as possible in the context of rate-limiting.</p>
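<p>To make the three-part message concrete, here is a minimal Python sketch of a credential's state, nullifier, and attributes. All names (<code>CredentialMessage</code>, <code>present</code>) are our own illustrative inventions; in a real AC, the counter bound would be proven in zero knowledge rather than checked in the clear.</p>

```python
import hashlib
import secrets
from dataclasses import dataclass

# Hypothetical model of an AC's message: private state (a presentation
# counter), a per-state nullifier, and public attributes. Illustrative
# only; this is not any standardized format.

@dataclass
class CredentialMessage:
    limit: int         # presentation limit N
    counter: int       # private state: presentations remaining
    seed: bytes        # secret used to derive nullifiers
    attributes: dict   # public context, e.g. the validity window

def new_message(limit: int, attributes: dict) -> CredentialMessage:
    return CredentialMessage(limit, limit, secrets.token_bytes(32), attributes)

def nullifier(msg: CredentialMessage) -> bytes:
    # one nullifier per counter value: presenting the same state twice
    # reveals the same nullifier, so the server catches the double-spend
    return hashlib.sha256(msg.seed + msg.counter.to_bytes(4, "big")).digest()

def present(msg: CredentialMessage, seen: set) -> bool:
    """Server-side acceptance check. In a real AC, 0 <= counter <= limit
    would be proven in zero knowledge instead of being revealed."""
    if not 0 < msg.counter <= msg.limit:
        return False                  # rate limit exhausted
    n = nullifier(msg)
    if n in seen:
        return False                  # this state was already spent
    seen.add(n)
    msg.counter -= 1                  # state update on success
    return True
```

With a limit of 1,000 and a one-hour validity window in the attributes, this models the hourly budget discussed above.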
    <div>
      <h2>"Ideal world" (PQ) anonymous credentials</h2>
      <a href="#ideal-world-pq-anonymous-credentials">
        
      </a>
    </div>
    <p>The academic community has produced a number of promising post-quantum ACs. In our survey of the state of the art, we evaluated several leading schemes, scoring them on their underlying primitives and performance to determine which are truly ready for the Internet. To understand the challenges, it is essential to first grasp the cryptographic building blocks used in ACs today. We’ll now discuss some of the core concepts that frequently appear in the field.</p>
    <div>
      <h3>Relevant cryptographic paradigms</h3>
      <a href="#relevant-cryptographic-paradigms">
        
      </a>
    </div>
    
    <div>
      <h4>Zero-knowledge proofs</h4>
      <a href="#zero-knowledge-proofs">
        
      </a>
    </div>
    <p>Zero-knowledge proofs (ZKPs) are cryptographic protocols that allow a <i>prover</i> to convince a <i>verifier</i> that a statement is true without revealing the secret information, or <i>witness</i>. ZKPs play a central role in ACs: they allow proving statements about the secret part of the credential's state without revealing the state itself. This is achieved by transforming the statement into a mathematical representation, such as a set of polynomial equations over a finite field. The prover then generates a proof by performing complex operations on this representation, which can only be completed correctly if they possess the valid witness.</p><p>General-purpose ZKP systems, like <a href="https://eprint.iacr.org/2018/046"><u>Scalable Transparent Arguments of Knowledge (STARKs)</u></a>, can prove the integrity of <i>any</i> computation up to a certain size. In a STARK-based system, the computational trace is represented as a <i>set of polynomials</i>. The prover then constructs a proof by evaluating these polynomials and committing to them using cryptographic hash functions. The verifier can then perform a quick probabilistic check on this proof to confirm that the original computation was executed correctly. Since the proof itself is just a collection of hashes and sampled polynomial values, it is secure against quantum computers, providing a statistically sound guarantee that the claimed result is valid.</p>
    <div>
      <h4>Cut-and-Choose</h4>
      <a href="#cut-and-choose">
        
      </a>
    </div>
    <p>Cut-and-choose is a cryptographic technique designed to ensure a prover’s honest behaviour by having a verifier check a random subset of their work. The prover first commits to multiple instances of a computation, after which the verifier randomly chooses a portion to be <i>cut open</i> by revealing the underlying secrets for inspection. If this revealed subset is correct, the verifier gains high statistical confidence that the remaining, un-opened instances are also correct.</p><p>This technique is important because while it is a generic tool used to build protocols secure against malicious adversaries, it also serves as a crucial case study. Its security is not trivial; for example, practical attacks on cut-and-choose schemes built with (post-quantum) homomorphic encryption have succeeded by <a href="https://eprint.iacr.org/2025/1890.pdf"><u>attacking the algebraic structure of the encoding</u></a>, not the encryption itself. This highlights that even generic constructions must be carefully analyzed in their specific implementation to prevent subtle vulnerabilities and information leaks.</p>
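<p>The commit, cut, and check flow can be sketched in a few lines of Python. Here we use hash commitments and a deliberately trivial "honest computation" (every committed value must be even); all function names are our own, and a real protocol would of course check a meaningful property.</p>

```python
import hashlib
import random
import secrets

# Toy cut-and-choose: the prover commits to N instances of a computation
# (here, trivially, "the committed value is even") and the verifier cuts
# open a random subset. A prover who corrupted k of the N instances escapes
# detection only if none of the k corrupted instances are opened.

def commit(value: int) -> tuple[bytes, bytes]:
    r = secrets.token_bytes(16)
    return hashlib.sha256(r + value.to_bytes(8, "big")).digest(), r

def prover_commit(values: list[int]):
    openings = [commit(v) for v in values]
    digests = [c for c, _ in openings]   # only the digests are sent at first
    return digests, openings

def cut_and_choose(values, digests, openings, n_open: int) -> bool:
    # The verifier picks which instances to cut open; in a real protocol the
    # prover then reveals (value, randomness) for exactly those indices.
    chosen = random.sample(range(len(values)), n_open)
    for i in chosen:
        _, r = openings[i]
        if hashlib.sha256(r + values[i].to_bytes(8, "big")).digest() != digests[i]:
            return False     # opening doesn't match the commitment
        if values[i] % 2 != 0:
            return False     # this instance was computed dishonestly
    return True              # high statistical confidence in the rest

honest = [2 * secrets.randbelow(100) for _ in range(8)]
digests, openings = prover_commit(honest)
assert cut_and_choose(honest, digests, openings, n_open=4)
```

Opening t of N instances catches a prover who corrupted k of them with probability 1 − C(N−k, t)/C(N, t), which is why the parameters matter.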
    <div>
      <h4>Sigma Protocols</h4>
      <a href="#sigma-protocols">
        
      </a>
    </div>
    <p><a href="https://datatracker.ietf.org/doc/draft-irtf-cfrg-sigma-protocols/01/"><u>Sigma protocols</u></a> follow a more structured approach that does not require us to throw away any computations. The <a href="https://pages.cs.wisc.edu/~mkowalcz/628.pdf"><u>three-move protocol</u></a> starts with a <i>commitment</i> phase where the prover generates some randomness, which is added to the input to generate the commitment, and sends the commitment to the verifier. Then, the verifier <i>challenges</i> the prover with an unpredictable value. To finish the proof, the prover provides a <i>response</i> in which they combine the initial randomness with the verifier’s challenge in a way that is only possible if the secret value, such as the solution to a discrete logarithm problem, is known.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1ihEZ5KhWBQ0PZF5pTc0Bi/e35de03a89af0c2254bcc114041f6904/image4.png" />
</figure><p><sup>Depiction of a Sigma protocol flow, where the prover commits to their witness $w$, the verifier challenges the prover to prove knowledge about $w$, and the prover responds with a mathematical statement that the verifier can either accept or reject.</sup></p><p>In practice, the prover and verifier don't run this interactive protocol. Instead, they make it non-interactive using a technique known as the <a href="https://link.springer.com/content/pdf/10.1007/3-540-47721-7_12.pdf"><u>Fiat-Shamir transformation</u></a>. The idea is that the prover generates the challenge <i>itself</i>, by hashing its own commitment (and the public statement). It may sound a bit odd, but it works quite well. In fact, it's the basis of Schnorr signatures and even PQ signatures like ML-DSA.</p>
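<p>Here is a sketch of the three moves compressed with Fiat-Shamir, for a toy Schnorr-style proof of knowledge of a discrete logarithm. The group (the order-11 subgroup of $\mathbb{Z}_{23}^*$) is of course far too small for real security, and the function names are ours.</p>

```python
import hashlib

# Toy Sigma protocol made non-interactive via Fiat-Shamir. Parameters
# (subgroup of order q = 11 inside Z_23*, generator g = 2) are illustrative
# only and offer no security.
p, q, g = 23, 11, 2

def challenge(a: int, y: int) -> int:
    # Fiat-Shamir: derive the challenge by hashing the commitment and the
    # statement, instead of waiting for the verifier to send one
    h = hashlib.sha256(f"{g},{y},{a}".encode()).digest()
    return int.from_bytes(h, "big") % q

def prove(x: int, k: int):
    y = pow(g, x, p)        # statement: "I know x such that g^x = y"
    a = pow(g, k, p)        # commitment to fresh randomness k
    c = challenge(a, y)
    s = (k + c * x) % q     # response ties k, c, and the witness x together
    return y, (a, s)

def verify(y: int, proof: tuple) -> bool:
    a, s = proof
    c = challenge(a, y)
    # g^s = g^(k + c*x) = a * y^c must hold if the prover knew x
    return pow(g, s, p) == (a * pow(y, c, p)) % p

y, proof = prove(x=7, k=3)
```

Note that the verifier recomputes the same challenge from the transcript, which is exactly what makes the prover's self-generated challenge safe.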
    <div>
      <h4>MPC in the head</h4>
      <a href="#mpc-in-the-head">
        
      </a>
    </div>
    <p>Multi-party computation (MPC) is a cryptographic tool that allows multiple parties to jointly compute a function over their inputs without revealing their individual inputs to the other parties. <a href="https://web.cs.ucla.edu/~rafail/PUBLIC/77.pdf"><u>MPC in the Head</u></a> (MPCitH) is a technique to generate zero-knowledge proofs by simulating a multi-party protocol <i>in the head</i> of the prover.</p><p>The prover simulates the state and communication for each virtual party, commits to these simulations, and shows the commitments to the verifier. The verifier then challenges the prover to open a subset of these virtual parties. Since MPC protocols are secure even if a minority of parties are dishonest, revealing this subset doesn't leak the secret, yet it convinces the verifier that the overall computation was correct. </p><p>This paradigm is particularly useful to us because it's a flexible way to build post-quantum secure ZKPs. MPCitH constructions build their security from symmetric-key primitives (like hash functions). This approach is also transparent, requiring no trusted setup. While STARKs share these post-quantum and transparent properties, MPCitH often offers faster prover times for many computations. Its primary trade-off, however, is that its proofs scale linearly with the size of the circuit to prove, while STARKs are succinct, meaning their proof size grows much slower.</p>
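<p>To make the paradigm concrete, here is a toy MPCitH proof for a linear relation $y = L(w)$ over GF(2), with three simulated parties and XOR secret sharing. A single repetition like this has large soundness error (the verifier sees only two of three parties), so real protocols repeat many times; the map <code>L</code> and all parameters are our own illustrative choices.</p>

```python
import hashlib
import secrets

# Minimal MPC-in-the-head sketch: prove knowledge of w with y = L(w), where
# L is linear over GF(2), by simulating three parties holding XOR shares of
# w. Opening two parties reveals nothing about w, since any two XOR shares
# are uniformly random.

MASK = (1 << 32) - 1

def L(w: int) -> int:
    # a linear map over GF(2): w XOR (w rotated left by 13 bits)
    return (w ^ (((w << 13) | (w >> 19)) & MASK)) & MASK

def commit(share: int) -> tuple[bytes, bytes]:
    r = secrets.token_bytes(16)
    return hashlib.sha256(r + share.to_bytes(4, "big")).digest(), r

def prove(w: int):
    # the prover simulates all three parties "in its head"
    w1, w2 = secrets.randbelow(1 << 32), secrets.randbelow(1 << 32)
    shares = [w1, w2, w ^ w1 ^ w2]
    outs = [L(s) for s in shares]              # each party's broadcast
    comms = [commit(s) for s in shares]
    # Fiat-Shamir: the unopened party is derived from the transcript
    transcript = b"".join(c for c, _ in comms) + b"".join(
        o.to_bytes(4, "big") for o in outs)
    hidden = hashlib.sha256(transcript).digest()[0] % 3
    opened = {i: (shares[i], comms[i][1]) for i in range(3) if i != hidden}
    return [c for c, _ in comms], outs, opened

def verify(y: int, proof) -> bool:
    digests, outs, opened = proof
    transcript = b"".join(digests) + b"".join(o.to_bytes(4, "big") for o in outs)
    hidden = hashlib.sha256(transcript).digest()[0] % 3
    if set(opened) != {0, 1, 2} - {hidden}:
        return False
    for i, (share, r) in opened.items():
        if hashlib.sha256(r + share.to_bytes(4, "big")).digest() != digests[i]:
            return False           # commitment doesn't open correctly
        if L(share) != outs[i]:
            return False           # this party was simulated dishonestly
    return outs[0] ^ outs[1] ^ outs[2] == y    # broadcasts reconstruct y

w = secrets.randbelow(1 << 32)
proof = prove(w)
```

Because <code>L</code> is linear, the XOR of the parties' broadcasts equals $L(w)$, which is what lets the verifier check the result without ever seeing $w$.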
    <div>
      <h4>Rejection sampling</h4>
      <a href="#rejection-sampling">
        
      </a>
    </div>
    <p>When a randomness source is biased or outputs numbers outside the desired range, rejection sampling can correct the distribution. For example, imagine you need a random number between 1 and 10, but your computer only gives you random numbers between 0 and 255. (Indeed, this is the case!) The rejection sampling algorithm calls the RNG until it outputs a number between 1 and 10: </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/ogslPSn4DJYx3R5jGZ3mi/7ab640864dc26d6e1e2eb53c25f628ea/image6.png" />
</figure><p>Calling the generator over and over again may seem a bit wasteful. An efficient implementation can be realized with an eXtendable Output Function (XOF). An XOF takes an input, for example a seed, and computes an arbitrarily-long output. An example is the SHAKE family (part of the <a href="https://csrc.nist.gov/pubs/fips/202/final"><u>SHA3 standard</u></a>), and the recently proposed round-reduced version of SHAKE called <a href="https://datatracker.ietf.org/doc/rfc9861/"><u>TurboSHAKE</u></a>.</p><p>Let’s imagine you want to have three numbers between 1 and 10. Instead of calling the XOF over and over, you can also ask the XOF for several bytes of output. Since each byte has a probability of about 3.9% (10 of the 256 values) of being in range, asking the XOF for 174 bytes gives a greater than 95% chance of finding at least three usable numbers. In fact, we can be even smarter than this: 10 fits in four bits, so we can split the output bytes into lower and higher <a href="https://en.wikipedia.org/wiki/Nibble"><u>nibbles</u></a>. The probability of a nibble being in the desired range is now 62.5%:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4W98tjgA7gIkaM7A5LBMyi/7b12bbfd22e53b84439a7c9e690605d9/image2.png" />
          </figure><p><sup>Rejection sampling by batching queries. </sup></p><p>Rejection sampling is a part of many cryptographic primitives, including many we'll discuss in the schemes we look at below.</p>
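<p>As a minimal sketch (the function name is ours), nibble-based rejection sampling from SHAKE-128 might look like this. It relies on the fact that requesting a longer XOF output simply extends the shorter one:</p>

```python
import hashlib

def sample_in_range(seed: bytes, count: int, lo: int = 1, hi: int = 10) -> list[int]:
    """Rejection-sample `count` values in [lo, hi] from SHAKE-128, reading
    the output stream one 4-bit nibble at a time."""
    n_bytes = 16
    while True:
        out = []
        for byte in hashlib.shake_128(seed).digest(n_bytes):
            for nib in (byte >> 4, byte & 0x0F):   # high, then low nibble
                if lo <= nib <= hi:
                    out.append(nib)                # accept
                if len(out) == count:
                    return out
        n_bytes *= 2   # too many rejections: request a longer XOF output

vals = sample_in_range(b"example seed", 3)
```

Because the XOF is deterministic for a fixed seed, the same seed always yields the same samples, which is exactly the property signature schemes exploit.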
    <div>
      <h3>Building post-quantum ACs</h3>
      <a href="#building-post-quantum-acs">
        
      </a>
    </div>
    <p>Classical anonymous credentials (ACs), such as ARC and ACT, are built from algebraic groups, specifically elliptic curves, which are very efficient. Their security relies on the assumption that certain mathematical problems over these groups are computationally hard. The premise of post-quantum cryptography, however, is that quantum computers can solve these supposedly hard problems. The most intuitive solution is to replace elliptic curves with a post-quantum alternative. In fact, cryptographers have been working on a replacement for a number of years: <a href="https://eprint.iacr.org/2018/383"><u>CSIDH</u></a>. </p><p>This raises the key question: can we simply adapt a scheme like ARC by replacing its elliptic curves with CSIDH? The short answer is <b>no</b>, due to a critical roadblock in constructing the necessary zero-knowledge proofs. While we can, in theory, <a href="https://eprint.iacr.org/2023/1614"><u>build the required Sigma protocols or MPC-in-the-Head (MPCitH) proofs from CSIDH</u></a>, they have a prerequisite that makes them unusable in practice: they require a <b>trusted setup</b> to ensure the prover cannot cheat. This requirement is a non-starter, as <a href="https://eprint.iacr.org/2022/518"><u>no algorithm for performing a trusted setup in CSIDH exists</u></a>. The trusted setup for Sigma protocols can be replaced by a combination of <a href="https://eprint.iacr.org/2016/505"><u>generic techniques from multi-party computation</u></a> and cut-and-choose protocols, but that adds significant computation cost to the already computationally expensive isogeny operations.</p><p>This specific difficulty highlights a more general principle. The high efficiency of classical credentials like ARC is deeply tied to the rich algebraic structure of elliptic curves. Swapping this component for a post-quantum alternative, or moving to generic constructions, fundamentally alters the design and its trade-offs. 
We must therefore accept that post-quantum anonymous credentials cannot be a simple "lift-and-shift" of today's schemes. They will require new designs built from different cryptographic primitives, such as lattices or hash functions.</p>
    <div>
      <h3>Prefabricated schemes from generic approaches</h3>
      <a href="#prefabricated-schemes-from-generic-approaches">
        
      </a>
    </div>
    <p>At Cloudflare, we explored a <a href="https://eprint.iacr.org/2023/414"><u>post-quantum privacy pass construction in 2023</u></a> that closely resembles the functionality needed for anonymous credentials. The main result is a generic construction that composes separate, quantum-secure building blocks: a digital signature scheme and a general-purpose ZKP system:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4dpmFzSv7HG5JHEEqu7D9o/ea1f02c37c0e36dc0972dfd1044fa9a3/image8.png" />
</figure><p>The figure shows a cryptographic protocol divided into two main phases: (1.) Issuance: The user commits to a message (without revealing it) and sends the commitment to the server. The server signs the commitment and returns this signed commitment, which serves as a token. The user verifies the server's signature. (2.) Redemption: To use the token, the user presents it and constructs a proof. This proof demonstrates they have a valid signature on the commitment and opens the commitment to reveal the original message. If the server validates the proof, the user and server continue (e.g., to access a rate-limited origin).</p><p>The main appeal of this modular design is its flexibility. The experimental <a href="https://github.com/guruvamsi-policharla/zkdilithium"><u>implementation</u></a> uses a modified version of the ML-DSA signature scheme together with STARKs, but the components can be easily swapped out. The design provides strong, composable security guarantees derived directly from the underlying parts. A significant speedup for the construction came from replacing the hash function SHA3 in ML-DSA with the zero-knowledge-friendly <a href="https://eprint.iacr.org/2019/458"><u>Poseidon</u></a>.</p><p>However, the modularity of our post-quantum Privacy Pass construction <a href="https://zkdilithium.cloudflareresearch.com/index.html"><u>incurs a significant performance overhead</u></a>, demonstrated by a clear trade-off between proof generation time and size: a fast 300 ms proof generation requires a large 173 kB signature, while a 4.8 s proof generation time cuts the size of the signature nearly in half. A balanced parameter set, which serves as a good benchmark for any dedicated solution to beat, took 660 ms to sign and resulted in a 112 kB signature. The implementation is currently a proof of concept, with perhaps some room for optimization. 
Alternatively, a different signature like <a href="https://datatracker.ietf.org/doc/draft-ietf-cose-falcon/"><u>FN-DSA</u></a> could offer speed improvements: while its issuance is more complex, its verification is far more straightforward, boiling down to a simple hash-to-lattice computation and a norm check.</p><p>However, while this construction gives a functional baseline, these figures highlight the performance limitations for a real-time rate limiting system, where every millisecond counts. The 660 ms signing time strongly motivates the development of <i>dedicated</i> cryptographic constructions that trade some of the modularity for performance.</p>
    <div>
      <h3>Solid structure: Lattices</h3>
      <a href="#solid-structure-lattices">
        
      </a>
    </div>
    <p><a href="https://blog.cloudflare.com/lattice-crypto-primer/"><u>Lattices</u></a> are a natural starting point when discussing potential post-quantum AC candidates. NIST standardized ML-DSA and ML-KEM as signature and KEM algorithms, both of which are based on lattices. So, are lattices the answer to post-quantum anonymous credentials?</p><p>The answer is a bit nuanced. While explicit anonymous credential schemes from lattices exist, they have shortcomings that prevent real-world deployment: for example, a <a href="https://eprint.iacr.org/2023/560.pdf"><u>recent scheme</u></a> sacrifices round-optimality for smaller communication size, which is unacceptable for a service like Privacy Pass where every second counts. Given that our RTT is 100 ms or less for the majority of users, each extra communication round adds tangible latency, especially for those on slower Internet connections. When the final credential size is still over 100 kB, the trade-offs are hard to justify. So, our search continues. We expand our horizon by looking into <i>blind signatures</i> and whether we can adapt them for anonymous credentials.</p>
    <div>
      <h4>Two-step approach: Hash-and-sign</h4>
      <a href="#two-step-approach-hash-and-sign">
        
      </a>
    </div>
    <p>A prominent paradigm in lattice-based signatures is the <i>hash-and-sign</i> construction. Here, the message is first hashed to a point in the lattice. Then, the signer uses their secret key, a <a href="https://eprint.iacr.org/2007/432"><u>lattice trapdoor</u></a>, to generate a short vector that, when multiplied with the public key, evaluates to the hashed point in the lattice. This is the core mechanism behind signature schemes like FN-DSA.</p>
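<p>The <i>verification</i> side of this equation is easy to sketch. Below is a toy illustration of checking $A \cdot s = H(m) \pmod q$ together with a shortness bound. Real schemes need a lattice trapdoor to <i>find</i> such a short $s$; here we cheat for demonstration purposes by fixing $s$ first and defining the target point from it, and all dimensions and bounds are our own toy choices.</p>

```python
import secrets

# Toy hash-and-sign *verification*: a signature is a short vector s with
# A @ s = t (mod q), where t stands in for the hashed message H(m).
# Dimensions and bounds are illustrative and offer no security.

q, n, m = 97, 4, 8
BOUND = 3   # "short" means every coefficient lies in [-BOUND, BOUND]

def matvec(A, s):
    return [sum(a * b for a, b in zip(row, s)) % q for row in A]

A = [[secrets.randbelow(q) for _ in range(m)] for _ in range(n)]   # public key
s = [secrets.randbelow(2 * BOUND + 1) - BOUND for _ in range(m)]   # short vector
t = matvec(A, s)    # demo only: we derive the "hashed point" from s itself

def verify(A, t, s) -> bool:
    short = all(abs(c) <= BOUND for c in s)   # without this check, forgery is trivial
    return short and matvec(A, s) == t
```

The shortness check is essential: a long solution to $A \cdot s = t$ is easy to find with linear algebra, so only short vectors count as signatures.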
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/66hA0KmluGoGO4I2SHAGTv/1a465c6c810e4f17df3112b96ed816da/image1.png" />
</figure><p>Adapting hash-and-sign for blind signatures is tricky, since the signer must not learn the message. This introduces a significant security challenge: if the user can request signatures on arbitrary points, they can mount an attack to extract the trapdoor by repeatedly requesting signatures for carefully chosen points. The resulting short vectors can be used to reconstruct a short basis, which is equivalent to a key recovery. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1lyCHqOTL477mFGSWjH3dv/48ffe46acfbe81b692c2ba30f383634b/image9.png" />
</figure><p>The standard defense against this attack is to require the user to prove in zero knowledge that the point they are asking to be signed is the blinded output of the specified hash function. However, proving hash preimages leads to the same problem as in the generic post-quantum privacy pass paper: proving a conventional hash function (like SHA3) inside a ZKP is computationally expensive and has a large communication complexity.</p><p>This difficult trade-off is at the heart of recent academic work. The <a href="https://eprint.iacr.org/2023/077.pdf"><u>state-of-the-art paper</u></a> presents two lattice-based blind signature schemes with small signature sizes of 22 kB for a signature and 48 kB for a privately verifiable protocol that may be more useful in a setting like anonymous credentials. However, this focus on the final signature size comes at the cost of an impractical <i>issuance</i>. The user must provide ZKPs for the correct hash and lattice relations that, by the paper’s own analysis, can add up to <i>several hundred kilobytes</i> and take <i>20 seconds to generate and 10 seconds to verify</i>.</p><p>While these results are valuable for advancing the field, this trade-off is a significant barrier for any large-scale, practical system. For our use case, a protocol that increases the final signature size moderately in exchange for a more efficient and lightweight issuance process would be a more suitable and promising direction.</p>
    <div>
      <h4>Best of two signatures: Hash-and-sign with aborts</h4>
      <a href="#best-of-two-signatures-hash-and-sign-with-aborts">
        
      </a>
    </div>
    <p>A promising technique for blind signatures combines the hash-and-sign paradigm with <i>Fiat-Shamir with aborts</i>, a method that relies on rejection sampling. In this approach, the signer repeatedly attempts to generate a signature and aborts any result that may leak information about the secret key. This process ensures the final signature is statistically independent of the key and is used in modern signatures like ML-DSA. The <a href="https://eprint.iacr.org/2014/1027"><u>Phoenix signature</u></a> scheme uses <i>hash-and-sign with aborts</i>, where a message is first hashed into the lattice and signed, with rejection sampling employed to break the dependency between the signature and the private key.</p><p>Building on this foundation is an <a href="https://eprint.iacr.org/2024/131"><u>anonymous credential scheme for hash-and-sign with aborts</u></a>. The main improvement over hash-and-sign anonymous credentials is that, instead of proving the validity of a hash, the user commits to their attributes, which avoids costly zero-knowledge proofs.</p><p>The scheme is <a href="https://github.com/Chair-for-Security-Engineering/lattice-anonymous-credentials"><u>fully implemented</u></a>, with credentials with attribute proofs just under 80 kB and signatures under 7 kB. The scheme takes less than 400 ms for issuance and 500 ms for showing the credential. The protocol also has a lot of features necessary for anonymous credentials, allowing users to prove relations between attributes and request pseudonyms for different instances.</p><p>This research presents a compelling step towards real-world deployability by combining state-of-the-art techniques to achieve a much healthier balance between performance and security. While the underlying mathematics are a bit more complex, with a proof of knowledge of a signature at 40 kB and a prover time under a second, the scheme stands out as a great contender. 
However, for practical deployment, these figures would likely need a significant speedup to be usable in real-time systems. An improvement seems plausible, given recent <a href="https://eprint.iacr.org/2024/1952"><u>advances in lattice samplers</u></a>, though the exact scale of improvement is unclear. Still, we think it would be worthwhile to nudge the underlying design paradigm a little closer to our use cases.</p>
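<p>The abort loop at the heart of this paradigm is simple to sketch. Below is a toy version over plain integer vectors: the response $z = y + c \cdot s$ is rejected whenever any coefficient falls outside a safe bound, since such values would leak information about the secret $s$. The parameters are our own illustrative choices; real schemes like ML-DSA fix these very carefully.</p>

```python
import secrets

# Sketch of the rejection loop in "Fiat-Shamir with aborts". The masking
# vector y hides c*s, and any z coefficient close to the edge of y's range
# is correlated with s, so it is discarded and the attempt restarted.

DIM, Y_BOUND, C_MAX, S_BOUND = 8, 1 << 17, 60, 5
SAFE = Y_BOUND - C_MAX * S_BOUND   # accepted z is independent of s

def sign_attempt(s: list[int], c: int):
    # fresh masking vector y on every attempt
    y = [secrets.randbelow(2 * Y_BOUND + 1) - Y_BOUND for _ in range(DIM)]
    z = [yi + c * si for yi, si in zip(y, s)]
    if any(abs(zi) > SAFE for zi in z):
        return None        # abort: this z could leak bits of s
    return z

def sign(s: list[int], c: int):
    attempts = 0
    while True:
        attempts += 1
        z = sign_attempt(s, c)
        if z is not None:
            return z, attempts

secret = [5, -5, 4, 0, 1, -2, 3, -4]   # toy secret with small coefficients
z, attempts = sign(secret, c=42)
```

With these toy parameters the per-attempt rejection probability is tiny, but in real schemes the expected number of repetitions is a key performance parameter.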
    <div>
      <h3>Do it yourself: MPC-in-the-head </h3>
      <a href="#do-it-yourself-mpc-in-the-head">
        
      </a>
    </div>
    <p>While the lattice-based hash-and-sign with aborts scheme provides one path to post-quantum signatures, an alternative approach is emerging from the MPCitH variant VOLE-in-the-Head <a href="https://eprint.iacr.org/2023/996"><u>(VOLEitH)</u></a>. </p><p>This scheme builds on <a href="https://eprint.iacr.org/2017/617"><u>Vector Oblivious Linear Evaluation (VOLE)</u></a>, an interactive protocol where one party's input vector is processed with another's secret value <i>delta</i>, creating a <i>correlation</i>. This VOLE correlation is used as a cryptographic commitment to the prover’s input. The system provides a zero-knowledge proof because the prover is bound by this correlation and cannot forge a solution without knowing the secret delta. The verifier, in turn, just has to verify that the final equation holds when the commitment is opened. This system is <i>linearly homomorphic</i>, which means that two commitments can be combined. This property is ideal for the <i>commit-and-prove</i> paradigm, where the prover first commits to the witnesses and then proves the validity of the circuit gate by gate. The primary trade-off is that the proofs are linear in the size of the circuit, but they offer substantially better runtimes. We also use linear-sized proofs for ARC and ACT.</p>
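<p>The linear homomorphism that makes VOLE correlations useful as commitments can be shown in a few lines. The following toy sketch computes both sides' shares directly (in a real protocol they would come from an oblivious-transfer-based VOLE subprotocol), with a field modulus of our own choosing:</p>

```python
import secrets

# Toy VOLE correlations over a prime field: for each committed value u_i the
# prover holds a tag v_i, and the verifier holds w_i = u_i * delta + v_i for
# a global secret delta. The correlation is linearly homomorphic, which is
# what the commit-and-prove paradigm exploits for linear gates.

P = 2**61 - 1                  # a Mersenne prime, used as the field modulus

def vole_commit(u: int, delta: int):
    # demo shortcut: compute both sides' shares in one place
    v = secrets.randbelow(P)       # prover's tag
    w = (u * delta + v) % P        # verifier's share
    return v, w

delta = secrets.randbelow(P)       # verifier's global secret
u1, u2 = 12345, 67890              # prover's committed values
v1, w1 = vole_commit(u1, delta)
v2, w2 = vole_commit(u2, delta)

# A linear gate, e.g. u3 = u1 + 3*u2, needs no interaction: each side
# combines its own shares locally and the correlation still holds.
u3 = (u1 + 3 * u2) % P
v3 = (v1 + 3 * v2) % P
w3 = (w1 + 3 * w2) % P
assert w3 == (u3 * delta + v3) % P
```

A prover who wants to claim a different value u3' would have to produce a matching tag v3', which requires guessing delta; that is the binding property the proof system rests on.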
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6o073F0y7J7RxxHuDb4BSY/1ac0c4fc8b154dd77a8d3294016cbd32/image4.png" />
</figure><p><sup>Example of evaluating a circuit gate by first committing to each wire and then proving the composition. This is easy for linear gates.</sup></p><p>This commit-and-prove approach allows <a href="https://link.springer.com/chapter/10.1007/978-3-031-91134-7_14"><u>VOLEitH</u></a> to efficiently prove the evaluation of symmetric ciphers, which are quantum-resistant. The transformation to a non-interactive protocol follows the standard MPCitH method: the prover commits to all secret values, a challenge is used to select a subset to reveal, and the prover proves consistency.</p><p>Efficient implementations operate over two mathematical fields (binary and prime) simultaneously, allowing these ZK circuits to handle both arithmetic and bitwise functions (like XORs) efficiently. Based on this foundation, a <a href="https://www.youtube.com/watch?v=VMeaF9xgbcw"><u>recent talk</u></a> teased the potential for blind signatures from the multivariate quadratic signature scheme <a href="https://pqmayo.org/about/"><u>MAYO</u></a> with sizes of just 7.5 kB and signing/verification times under 50 ms.</p><p>The VOLEitH approach, as a general-purpose proof system, represents a promising new direction for performant constructions. There are a <a href="https://pqc-mirath.org"><u>number</u></a> <a href="https://mqom.org"><u>of</u></a> <a href="https://pqc-perk.org"><u>competing</u></a> <a href="https://sdith.org"><u>in-the-head</u></a> schemes in the <a href="https://csrc.nist.gov/projects/pqc-dig-sig"><u>NIST competition for additional signature schemes</u></a>, including <a href="https://faest.info/authors.html"><u>one based on VOLEitH</u></a>. The current VOLEitH literature focuses on high-performance digital signatures, and an explicit construction for a full anonymous credential system has not yet been proposed. 
This means that features standard to ACs, such as multi-show unlinkability or the ability to prove relations between attributes, are not yet part of the design, whereas they are explicitly supported by the lattice construction. However, the preliminary results show great potential for performance, and it will be interesting to follow the continued cryptanalysis and feature development of this line of VOLEitH work in the area of anonymous credentials, especially since the general-purpose construction makes adding features easy.
</p><table><tr><td><p><b>Approach</b></p></td><td><p><b>Pros</b></p></td><td><p><b>Cons</b></p></td><td><p><b>Practical Viability</b></p></td></tr><tr><td><p><a href="https://eprint.iacr.org/2023/414"><u>Generic Composition</u></a></p></td><td><p>Flexible construction, strong security</p></td><td><p>Large signatures (112 kB), slow (660 ms)</p></td><td><p>Low: Performance is not great</p></td></tr><tr><td><p><a href="https://eprint.iacr.org/2023/077.pdf"><u>Hash-and-sign</u></a></p></td><td><p>Potentially tiny signatures, lots of optimization potential</p></td><td><p>Current implementation large and slow</p></td><td><p>Low: Performance is not great</p></td></tr><tr><td><p><a href="https://eprint.iacr.org/2024/131"><u>Hash-and-sign with aborts</u></a></p></td><td><p>Full AC system, good balance in communication</p></td><td><p>Slow runtimes (1s)</p></td><td><p>Medium: promising but performance would need to improve</p></td></tr><tr><td><p><a href="https://www.youtube.com/watch?v=VMeaF9xgbcw"><u>VOLEitH</u></a></p></td><td><p>Excellent potential performance (&lt;50ms, 7.5 kB)</p></td><td><p>not a full AC system, not peer-reviewed</p></td><td><p>Medium: promising research direction, no full solution available so far</p></td></tr></table>
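To make the commit-and-prove pattern above concrete, here is a toy sketch in Python. A Pedersen-style commitment over a prime field stands in for VOLEitH's VOLE-based commitments (the modulus and bases are toy choices, not secure parameters); the point is that any additively homomorphic commitment lets the verifier check linear gates for free, while multiplication gates need dedicated proof material:

```python
# Toy illustration of why linear gates are "easy" under an additively
# homomorphic commitment. A Pedersen-style commitment stands in for the
# VOLE-based commitments of VOLEitH; parameters are toy-sized and NOT secure.
import secrets

q = 2**127 - 1          # a Mersenne prime (toy modulus)
g, h = 5, 7             # fixed bases (illustrative, not vetted generators)

def commit(value, blind):
    """Commit(value; blind) = g^value * h^blind mod q."""
    return pow(g, value, q) * pow(h, blind, q) % q

# The prover commits to each wire of the circuit.
x, y = 21, 34
rx, ry = secrets.randbelow(q - 1), secrets.randbelow(q - 1)
cx, cy = commit(x, rx), commit(y, ry)

# Linear gate z = x + y: the verifier derives a commitment to z from cx and
# cy alone -- no extra proof material is needed.
cz = cx * cy % q
assert cz == commit(x + y, (rx + ry) % (q - 1))

# A multiplication gate z = x * y has no such shortcut: cx * cy commits to
# x + y, not x * y, so the prover must supply a dedicated consistency proof.
assert cz != commit(x * y, (rx + ry) % (q - 1))
print("linear gate checked homomorphically")
```

VOLEitH gets the same additive homomorphism from VOLE correlations rather than a discrete-log group, which is what keeps it post-quantum, but the linear/nonlinear cost split is the same.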
    <div>
      <h2>Closing the gap</h2>
      <a href="#closing-the-gap">
        
      </a>
    </div>
    <p>My (that is Lena's) internship focused on a critical question: what should we look at next to build ACs for the Internet? For us, "the right direction" means developing protocols that can be integrated with real-world applications, and developed collaboratively at the IETF. To make these a reality, we need researchers to look beyond blind signatures; we need a complete privacy-preserving protocol that combines blind signatures with efficient zero-knowledge proofs and properties like multi-show credentials that have an internal state. The issuance should also be sublinear in communication size with the number of presentations.</p><p>So, with the transition to post-quantum cryptography on the horizon, what are our thoughts on the current IETF proposals? A 2022 NIST presentation on the current state of anonymous credentials states that <a href="https://csrc.nist.gov/csrc/media/Presentations/2022/stppa4-revoc-decent/images-media/20221121-stppa4--baldimtsi--anon-credentials-revoc-decentral.pdf"><u>efficient post-quantum secure solutions are basically non-existent</u></a>. We argue that the last three years show nice developments in lattices and MPCitH anonymous credentials, but efficient post-quantum protocols still need work. Moving protocols into a post-quantum world isn't just a matter of swapping out old algorithms for new ones. A common approach to constructing post-quantum versions of classical protocols is swapping out the building blocks for their quantum-secure counterparts. </p><p>We believe this approach is essential, but not forward-looking. In addition to identifying how modern concerns can be accommodated on old cryptographic designs, we should be building new, post-quantum native protocols.</p><ul><li><p>For ARC, the conceptual path to a post-quantum construction seems relatively straightforward. 
The underlying cryptography follows a similar structure to the lattice-based anonymous credentials, or, when accepting a protocol with fewer features, the <a href="https://eprint.iacr.org/2023/414"><u>generic post-quantum privacy-pass</u></a> construction. However, we need to support per-origin rate-limiting, which allows a token to be transformed at an origin without the redemption being linkable to redemptions at other origins, a feature that none of the post-quantum anonymous credential protocols or blind signatures support. Also, ARC is sublinear in communication size with respect to the number of tokens issued, which so far only the hash-and-sign-with-aborts lattice scheme achieves, although the notion of “limited shows” is not present in the current proposal. In addition, it would be great to have efficient implementations to gauge real-world performance, especially for blind signatures, as well as further work on efficient zero-knowledge proofs. </p></li><li><p>For ACT, we need the protocols for ARC and an additional state. Even for the simplest counter, we need the ability to homomorphically subtract from that balance within the credential itself. This is a much more complex cryptographic requirement. It would also be interesting to see a post-quantum double-spend prevention that enforces the sequential nature of ACT. </p></li></ul><p>Working on ACs and other privacy-preserving cryptography inevitably leads to a major bottleneck: efficient zero-knowledge proofs, or to be more exact, efficiently proving hash function evaluations. In a ZK circuit, multiplications are expensive. Each multiplication gate in the circuit requires a cryptographic commitment, which adds communication overhead. In contrast, other operations like XOR can be virtually "free." This makes a huge difference in performance. For example, SHAKE (the primitive used in ML-DSA) can be orders of magnitude slower than arithmetization-friendly hash functions inside a ZKP. 
This is why researchers and implementers are already using <a href="https://eprint.iacr.org/2019/458"><u>Poseidon</u></a> or <a href="https://eprint.iacr.org/2023/323"><u>Poseidon2</u></a> to make their protocols faster.</p><p>Currently, <a href="https://www.poseidon-initiative.info/"><u>Ethereum</u></a> is <a href="https://x.com/VitalikButerin/status/1894681713613164888"><u>seriously considering migrating to the Poseidon hash</u></a> and has called for cryptanalysis, but there is no indication of standardization. This is a problem: papers increasingly use different instantiations of Poseidon to fit their use-case, and there <a href="https://eprint.iacr.org/2016/492"><u>are</u></a> <a href="https://eprint.iacr.org/2023/323"><u>more</u></a> <a href="https://eprint.iacr.org/2022/840"><u>and</u></a> <a href="https://eprint.iacr.org/2025/1893"><u>more</u></a> <a href="https://eprint.iacr.org/2025/926"><u>zero</u></a>-<a href="https://eprint.iacr.org/2020/1143"><u>knowledge</u></a> <a href="https://eprint.iacr.org/2019/426"><u>friendly</u></a> <a href="https://eprint.iacr.org/2023/1025"><u>hash</u></a> <a href="https://eprint.iacr.org/2021/1038"><u>functions</u></a> <a href="https://eprint.iacr.org/2022/403"><u>coming</u></a> <a href="https://eprint.iacr.org/2025/058"><u>out</u></a>, tailored to different use-cases. We would like to see at least one XOF and one hash each for a prime field and for a binary field, ideally at several security levels. And is Poseidon actually the best, or just the best-known, ZK-friendly cipher? Is it always secure against quantum computers (like we believe AES to be), and are there other attacks like the <a href="https://eprint.iacr.org/2025/950"><u>recent</u></a> <a href="https://eprint.iacr.org/2025/937"><u>attacks</u></a> on round-reduced versions?</p><p>Looking at algebra and zero-knowledge brings us to a fundamental debate in modern cryptography. 
Imagine a line representing the spectrum of research: On one end, you have protocols built on very well-analyzed standard assumptions like the <a href="https://blog.cloudflare.com/lattice-crypto-primer/#breaking-lattice-cryptography-by-finding-short-vectors"><u>SIS problem</u></a> on lattices or the collision resistance of SHA3. On the other end, you have protocols that gain massive efficiency by using more algebraic structure, which in turn relies on newer, stronger cryptographic assumptions. Breaking novel hash functions is somewhere in the middle. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2BMtbDoVnrmKeTvhCyfOjK/616438127351eedf6ff41db282a0511e/image7.png" />
          </figure><p>The answer for the Internet can’t just be to relent and stay at the left end of our graph to be safe. For the ecosystem to move forward, we need to have confidence in both. We need more research to validate the security of ZK-friendly primitives like Poseidon, and we need more scrutiny on the stronger assumptions that enable efficient algebraic methods.</p>
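To get a feel for the cost asymmetry described earlier, here is a toy gate-count model in Python. The gate counts are rough ballparks for illustration only: the chi step of Keccak-f[1600] (the permutation behind SHAKE) costs one AND per state bit (25 lanes × 64 bits) in each of its 24 rounds, while a Poseidon-style permutation needs only a few hundred field multiplications:

```python
# Toy cost model for a commit-and-prove ZK circuit: additions and XORs are
# (nearly) free, while every multiplication or AND gate needs a commitment.
# Gate counts are rough public ballparks used for illustration only.

FREE = {"add", "xor"}
PAID = {"mul", "and"}

def proof_cost(gates, bytes_per_commitment=32):
    """Return (number of committed gates, rough commitment bytes)."""
    nonlinear = sum(1 for gate in gates if gate in PAID)
    return nonlinear, nonlinear * bytes_per_commitment

# Keccak-f[1600]: the chi step costs one AND per state bit, i.e.
# 25 lanes * 64 bits, in each of its 24 rounds.
keccak_gates = ["and"] * (25 * 64 * 24) + ["xor"] * 100_000

# A Poseidon-style arithmetization-friendly permutation: a few hundred
# field multiplications (x^5 S-boxes), everything else linear.
poseidon_gates = ["mul"] * 300 + ["add"] * 2_000

print(proof_cost(keccak_gates))    # (38400, 1228800): tens of thousands committed
print(proof_cost(poseidon_gates))  # (300, 9600): a few hundred committed
```

The two-orders-of-magnitude gap in committed gates is the whole story behind "SHAKE is slow inside a ZKP."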
    <div>
      <h2>Conclusion</h2>
      <a href="#conclusion">
        
      </a>
    </div>
    <p>As we’ve explored, the cryptographic properties that make classical ACs efficient, particularly the rich structure of elliptic curves, do not have direct post-quantum equivalents. Our survey of the state of the art, from generic compositions using STARKs to various lattice-based schemes and promising new directions like MPC-in-the-head, reveals a field full of potential but with no clear winner. The trade-offs between communication cost, computational cost, and protocol rounds remain a significant barrier to practical, large-scale deployment, especially in comparison to elliptic curve constructions.</p><p>To bridge this gap, we must move beyond simply building post-quantum blind signatures. We challenge our colleagues in academia and industry to develop complete, post-quantum native protocols that address real-world needs. This includes supporting essential features like the per-origin rate-limiting required for ARC or the complex stateful credentials needed for ACT.</p><p>A critical bottleneck for all these approaches is the lack of efficient, standardized, and well-analyzed zero-knowledge-friendly hash functions. We need to research zero-knowledge friendly primitives and build industry-wide confidence to enable efficient post-quantum privacy.</p><p>If you’re working on these problems, or you have experience in the management and deployment of classical credentials, now is the time to engage. The world is rapidly adopting credentials for everything from digital identity to bot management, and it is our collective responsibility to ensure these systems are private and secure for a post-quantum future. We can say for certain that there are more discussions to be had, and if you’re interested in helping to build this more secure and private digital world, we’re hiring 1,111 interns over the course of next year, and have open positions!</p> ]]></content:encoded>
            <category><![CDATA[AI Bots]]></category>
            <category><![CDATA[Post-Quantum]]></category>
            <category><![CDATA[IETF]]></category>
            <category><![CDATA[European Union]]></category>
            <category><![CDATA[Elliptic Curves]]></category>
            <category><![CDATA[Cryptography]]></category>
            <guid isPermaLink="false">JA04hlqr6TaeGhkvyutbt</guid>
            <dc:creator>Lena Heimberger</dc:creator>
            <dc:creator>Christopher Patton</dc:creator>
        </item>
        <item>
            <title><![CDATA[State of the post-quantum Internet in 2025]]></title>
            <link>https://blog.cloudflare.com/pq-2025/</link>
            <pubDate>Tue, 28 Oct 2025 13:00:00 GMT</pubDate>
            <description><![CDATA[ Today over half of human-initiated traffic with Cloudflare is protected against harvest-now/decrypt-later with post-quantum encryption. What once was a cool science project is now the new security baseline for the Internet. We’re not done yet: in this blog post we’ll take stock of where we are, what we expect for the coming years, and what you can do today. ]]></description>
            <content:encoded><![CDATA[ <p></p><p>This week, the last week of October 2025, we reached a major milestone for Internet security: the majority of human-initiated traffic with Cloudflare is <a href="https://radar.cloudflare.com/adoption-and-usage#post-quantum-encryption"><u>using</u></a> post-quantum encryption, mitigating the <a href="https://blog.cloudflare.com/the-quantum-menace/"><u>threat</u></a> of <a href="https://en.wikipedia.org/wiki/Harvest_now,_decrypt_later"><u>harvest-now/decrypt-later</u></a>.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1EUFTKSnJptvd5WDGvB9Rf/4865f75c71e43f2c261d393322d24f34/image5.png" />
          </figure><p>We want to use this joyous moment to give an update on the current state of the migration of the Internet to post-quantum cryptography and the long road ahead. Our last <a href="https://blog.cloudflare.com/pq-2024/"><u>overview</u></a> was 21 months ago, and quite a lot has happened since. A lot of it came to pass as we <a href="https://blog.cloudflare.com/pq-2024/"><u>predicted</u></a>: finalization of the NIST standards; broad adoption of post-quantum encryption; more detailed roadmaps from regulators; progress on building quantum computers; some cryptography was broken (not to worry: nothing close to what’s deployed); and new exciting cryptography was proposed.</p><p>But there were also a few surprises: there was a giant leap in progress towards Q-day by improving quantum algorithms, and we had a proper scare because of a new quantum algorithm. We’ll cover all this and more: what we expect for the coming years; and what you can do today.</p>
    <div>
      <h2>The quantum threat</h2>
      <a href="#the-quantum-threat">
        
      </a>
    </div>
    <p>First things first: why are we changing our cryptography? It’s because of <b>quantum computers</b>. <a href="https://www.cloudflare.com/learning/ssl/quantum/what-is-quantum-computing/"><u>These marvelous devices</u></a>, instead of restricting themselves to zeroes and ones, compute using more of what nature actually affords us: quantum superposition, interference, and entanglement. This allows quantum computers to excel at certain very specific computations, notably simulating nature itself, which will be very helpful in developing new materials.</p><p>Quantum computers are not going to replace regular computers, though: they’re actually much worse than regular computers at most tasks that matter for our daily lives. Think of them as graphic cards or neural engines — specialized devices for specific computations, not general-purpose ones.</p><p>Unfortunately, quantum computers also <a href="https://blog.cloudflare.com/the-quantum-menace"><u>excel</u></a> at breaking public-key cryptography that is still in common use today, such as RSA and elliptic curves (ECC). Thus, we are moving to <b>post-quantum cryptography</b>: cryptography designed to be resistant against quantum attack. We’ll discuss the exact impact on the different types of cryptography later on.</p><p>For now, quantum computers are rather anemic: they’re simply not good enough today to crack any real-world cryptographic keys. That doesn’t mean we shouldn’t worry yet: encrypted traffic can be <a href="https://en.wikipedia.org/wiki/Harvest_now,_decrypt_later"><u>harvested today</u></a>, and decrypted after <b>Q-day</b>: the day that quantum computers are capable of breaking today’s still widely used cryptography such as RSA-2048. 
We call that a “harvest-now/decrypt-later” attack.</p><p>Using factoring as a benchmark, quantum computers don’t impress at all: the largest number factored by a quantum computer without cheating is 15, a record that’s easily beaten in a <a href="https://eprint.iacr.org/2025/1237.pdf"><u>variety of funny ways</u></a>. It’s tempting to disregard quantum computers until they start beating classical computers on factoring, but that would be a big mistake. Even conservative estimates place Q-day <a href="https://youtu.be/nJxENYdsB6c?si=doosb_aZRpQgo6X8&amp;t=1302"><u>less than three years</u></a> after the day that quantum computers beat classical computers on factoring. So how do we track progress?</p>
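To see what "factoring 15" actually involves, here is a sketch of the classical shell around Shor's algorithm. The only step that needs a quantum computer is finding the period of a^x mod N; below it is brute-forced classically, which is exactly the part that becomes infeasible for real (2048-bit) moduli:

```python
# Sketch of Shor's algorithm for N = 15: quantum period finding replaced
# by a classical brute-force search, plus the standard classical
# post-processing that turns a period into factors.
from math import gcd

def order(a, n):
    """Multiplicative order of a mod n (the quantum step, done naively)."""
    r, x = 1, a % n
    while x != 1:
        x = x * a % n
        r += 1
    return r

def shor_factor(n, a):
    """Classical post-processing of Shor's algorithm for a chosen base a."""
    g = gcd(a, n)
    if g != 1:
        return g, n // g              # lucky: a already shares a factor with n
    r = order(a, n)
    if r % 2 == 1:
        return None                   # bad base, retry with another a
    f = gcd(pow(a, r // 2) - 1, n)
    if f in (1, n):
        return None
    return f, n // f

print(shor_factor(15, 7))   # (3, 5): the period of 7 mod 15 is 4
```

The base 7 is an arbitrary illustrative choice; for real key sizes the `order` loop is the wall that only a large quantum computer could climb.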
    <div>
      <h3>Quantum numerology</h3>
      <a href="#quantum-numerology">
        
      </a>
    </div>
    <p>There are two categories to consider in the march towards Q-day: progress on quantum hardware, and algorithmic improvements to the software that runs on that hardware. We have seen significant progress on both fronts.</p>
    <div>
      <h4>Progress on quantum hardware</h4>
      <a href="#progress-on-quantum-hardware">
        
      </a>
    </div>
    <p>Like clockwork, every year there are news stories of new quantum computers with record-breaking numbers of qubits. This focus on counting qubits is quite misleading, though. To start, quantum computers are analogue machines, and there is always some noise interfering with the computation.</p><p>There are big differences between the different types of technology used to build quantum computers: <a href="https://en.wikipedia.org/wiki/Transmon"><u>superconducting</u></a> quantum computers seem to scale well, are quick to execute instructions, but have very noisy qubits. This does not mean they’re useless: with <a href="https://en.wikipedia.org/wiki/Quantum_error_correction"><u>quantum error correcting codes</u></a> one can effectively turn millions of noisy superconducting qubits into a few thousand high-fidelity ones, which could be enough to <a href="https://quantum-journal.org/papers/q-2021-04-15-433/"><u>break RSA</u></a>. <a href="https://www.quantinuum.com/products-solutions/quantinuum-systems"><u>Trapped-ion quantum computers</u></a>, on the other hand, have much less noise, but have been harder to scale. Only a few hundred-thousand trapped-ion qubits could potentially draw the curtain on RSA-2048.</p><div>
  
</div>
<p></p><p><sup>Timelapse of </sup><a href="https://sam-jaques.appspot.com/quantum_landscape"><sup><u>state-of-art</u></sup></a><sup> in quantum computing from 2021 through 2025 by qubit count on the x-axis and noise on the y-axis. The dots in the gray area are the various quantum computers out there. Once the shaded gray area hits the left-most red line, we’re in trouble as that means a quantum computer can break large RSA keys. Compiled by </sup><a href="https://sam-jaques.appspot.com/"><sup><u>Samuel Jaques</u></sup></a><sup> of the University of Waterloo.</sup></p><p>The number of qubits and the noise level only scratch the surface. There are low-level details that can make a big difference, such as the interconnectedness of qubits. More importantly, the graph doesn’t capture how scalable the engineering behind the records is.</p><p>To wit, on these graphs the progress on quantum computers seems to have stalled over the last two years, whereas for experts, Google’s <a href="https://blog.google/technology/research/google-willow-quantum-chip/"><u>December 2024 Willow announcement</u></a>, while unremarkable on the graph, is in reality a <a href="https://scottaaronson.blog/?p=8525"><u>real milestone</u></a>: achieving the first logical qubit in the surface code in a scalable manner. <a href="https://sam-jaques.appspot.com/quantum_landscape_2024"><u>Quoting</u></a> Sam Jaques:</p><blockquote><p>When I first read these results [Willow’s achievements], I felt chills of “Oh wow, quantum computing is actually real”.</p></blockquote><p>It’s a real milestone, but not an unexpected leap. Quoting Sam again:</p><blockquote><p>Despite my enthusiasm, this is more or less where we should expect to be, and maybe a bit late. All of the big breakthroughs they demonstrated are steps we needed to take to even hope to reach the 20 million qubit machine that could break RSA. There are no unexpected breakthroughs. 
Think of it like the increases in transistor density of classical chips each year: an impressive feat, but ultimately business-as-usual.</p></blockquote><p>Business-as-usual is also the strategy: the superconducting qubit approach pursued by Google for Willow has always had the clearest path forward, attacking the difficulties head-on and requiring the fewest leaps in engineering.</p><p>Microsoft pursues the opposite strategy with their bet on <a href="https://en.wikipedia.org/wiki/Topological_quantum_computer"><u>topological qubits</u></a>. These are qubits that in theory would be mostly unaffected by noise. However, they have not been fully realized in hardware. If these can be built in a scalable way, they’d be far superior to superconducting qubits. But we don’t even know if these can be built to begin with. In early 2025, Microsoft announced the <a href="https://scottaaronson.blog/?p=8669"><u>Majorana 1</u></a> chip, which demonstrates how these could be built. The chip is far from a full demonstrator though: it doesn’t support any computation and hence doesn’t even show up in Sam’s comparison graph earlier.</p><p>In between topological and superconducting qubits, there are many other approaches that labs across the world pursue that do show up in the graph, such as QuEra with <a href="https://www.quera.com/neutral-atom-platform"><u>neutral atoms</u></a> and Quantinuum with <a href="https://www.quantinuum.com/products-solutions/quantinuum-systems/system-model-h2"><u>trapped ions</u></a>.</p><p>Progress on the hardware side of getting to Q-day has received by far the most press interest. The biggest breakthrough in the last two years isn’t on the hardware side though.</p>
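Before moving on, here is a rough feel for the error-correction overhead mentioned earlier, using the textbook surface-code scaling: logical error rate ≈ 0.1·(p/p_th)^((d+1)/2), and roughly 2d² physical qubits per logical qubit. All the parameters below are illustrative assumptions, not estimates from any of the cited papers:

```python
# Back-of-the-envelope surface-code overhead. Textbook scaling (illustrative):
#   logical error rate ~ 0.1 * (p_phys / p_thresh)^((d+1)/2)
#   physical qubits per logical qubit ~ 2 * d^2
# All numbers below are assumptions for illustration, not paper estimates.

def code_distance(p_phys, p_target, p_thresh=1e-2):
    """Smallest odd code distance d meeting the target logical error rate."""
    d = 3
    while 0.1 * (p_phys / p_thresh) ** ((d + 1) / 2) > p_target:
        d += 2
    return d

p_phys = 1.5e-3          # assumed physical error rate per operation
p_target = 1e-12         # assumed per-operation logical error budget
d = code_distance(p_phys, p_target)

logical_qubits = 1400    # assumed logical qubit count for an RSA-2048 attack
physical_qubits = logical_qubits * 2 * d * d
print(d, physical_qubits)   # distance 27 -> roughly two million physical qubits
```

The output lands in the millions-of-physical-qubits regime, which is why "millions of noisy qubits become a few thousand high-fidelity ones" is the right mental model.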
    <div>
      <h3>Progress on quantum software</h3>
      <a href="#progress-on-quantum-software">
        
      </a>
    </div>
    
    <div>
      <h4>The biggest breakthrough so far: Craig Gidney’s optimisations</h4>
      <a href="#the-biggest-breakthrough-so-far-craig-gidneys-optimisations">
        
      </a>
    </div>
    <p>We thought we’d need about <a href="https://quantum-journal.org/papers/q-2021-04-15-433/"><u>20 million qubits</u></a> with the superconducting approach to break RSA-2048. It turns out we can do it with much less. In a stunningly comprehensive June 2025 paper, <a href="https://algassert.com/about.html"><u>Craig Gidney</u></a> shows that with clever quantum software optimisations we need fewer than <a href="https://arxiv.org/pdf/2505.15917"><u>one million qubits</u></a>. This is the reason the red lines in Sam’s graph above, marking the size of a quantum computer to break RSA, dramatically shift to the left in 2025.</p><p>To put this achievement into perspective, let’s just make a wild guess and say Google can maintain a sort of Moore’s law, doubling the number of physical qubits every one-and-a-half years. That’s a much faster pace than Google demonstrated so far, but it’s also not unthinkable they could achieve this once the groundwork has been laid. Then it’d take until 2052 to reach 20 million qubits, but only until 2045 to reach one million: Craig single-handedly brought Q-day <b>seven years</b> closer!</p><p>How much further can software optimisations go? Pushing it lower than 100,000 superconducting qubits seems impossible to Sam, and <a href="https://sam-jaques.appspot.com/quantum_landscape_2025"><u>he’d expect</u></a> more than 242,000 superconducting qubits are required to break RSA-2048. With the wild guess on quantum computer progress before, that’d correspond to a Q-day of 2039 and 2041+ respectively.</p><p>Although Craig’s estimate makes detailed and reasonable assumptions on the architecture of a large-scale superconducting-qubit quantum computer, it’s still a guess, and these estimates could be off quite a bit.</p>
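The wild guess above can be written out. Assuming Willow's 105 physical qubits as the 2025 starting point and a doubling every one-and-a-half years (the starting count is our own illustrative assumption, the doubling rate is the guess from the text):

```python
# Extrapolating a hypothetical "qubit Moore's law": physical qubit counts
# double every 1.5 years, starting from 105 qubits in 2025 (both numbers
# are assumptions for illustration, not predictions).
from math import ceil, log2

def year_reached(target_qubits, start_qubits=105, start_year=2025,
                 doubling_years=1.5):
    """Year the hypothetical doubling curve reaches target_qubits."""
    doublings = log2(target_qubits / start_qubits)
    return ceil(start_year + doublings * doubling_years)

print(year_reached(20_000_000))  # 2052: the old ~20M-qubit estimate
print(year_reached(1_000_000))   # 2045: Gidney's <1M-qubit estimate
```

The seven-year gap between the two outputs is exactly the "Craig brought Q-day seven years closer" claim.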
    <div>
      <h4>A proper scare: Chen’s algorithm</h4>
      <a href="#a-proper-scare-chens-algorithm">
        
      </a>
    </div>
    <p>On the algorithmic side, we might not only see improvements to existing quantum algorithms, but also the discovery of completely new quantum algorithms. In April 2024, Yilei Chen published <a href="https://eprint.iacr.org/2024/555"><u>a preprint</u></a> claiming to have found such a new quantum algorithm to solve certain lattice problems, which are close to, but not the same as, those we rely on for the post-quantum cryptography we deploy. This caused a proper stir: even if it couldn’t attack our post-quantum algorithms today, could Chen’s algorithm be improved? To get a sense for potential improvements, you need to understand what the algorithm is really doing at a higher level. With Chen’s algorithm that’s hard, as it’s very complex, much more so than Shor’s quantum algorithm that breaks RSA. So it took some time for experts to <a href="https://nigelsmart.github.io/LWE.html"><u>start</u></a> <a href="https://sam-jaques.appspot.com/static/files/555-notes.pdf"><u>seeing</u></a> limitations to Chen’s approach, and in fact, after ten days they discovered a fundamental bug in the algorithm: the approach doesn’t work. Crisis averted.</p><p>What to take from this? Optimistically, this is business as usual for cryptography, and lattices are in a better shape now as one avenue of attack turned out to be a dead end. Realistically, it <i>is</i> a reminder that we have a lot of eggs in the lattices basket. As we’ll see later, presently there isn’t a real alternative that works everywhere.</p><p>Proponents of quantum key distribution (QKD) might chime in that QKD solves exactly that by being secure thanks to the laws of nature. 
Well, there are some asterisks to put on that claim, but more fundamentally no one has figured out how to scale QKD beyond point-to-point connections, <a href="https://blog.cloudflare.com/you-dont-need-quantum-hardware/"><u>as we argue in this blog post</u></a>.</p><p>It’s good to speculate about what cryptography might be broken by a completely new attack, but let’s not forget the matter at hand: a lot of cryptography is going to be broken by quantum computers for sure. Q-day is coming; the question is when.</p>
    <div>
      <h2>Is Q-day always fifteen years away?</h2>
      <a href="#is-q-day-always-fifteen-years-away">
        
      </a>
    </div>
    <p>If you've been working on or around cryptography and security long enough, then you have probably heard that "Q-day is X years away" every year for the last several years. This can make it feel like Q-day is always "some time in the future". So let's put such claims in the proper context.</p>
    <div>
      <h3>What do experts think?</h3>
      <a href="#what-do-experts-think">
        
      </a>
    </div>
    <p>Since 2019, the <a href="https://globalriskinstitute.org/"><u>Global Risk Institute</u></a> has performed a yearly survey amongst experts, asking how probable it is that RSA-2048 will be broken within 5, 10, 15, 20 or 30 years. These are the results <a href="https://globalriskinstitute.org/publication/2024-quantum-threat-timeline-report/"><u>for 2024</u></a>, whose interviews happened before Willow’s release and Gidney’s breakthrough.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3dx58nMhiJJd3DsQkaHwYF/84e9d8781912925d3b745f50291b00df/image6.png" />
          </figure><p><sup>Global Risk Institute expert survey results from 2024 on the likelihood of a quantum computer breaking RSA-2048 within different timelines.</sup></p><p>As the middle column in this chart shows, well over half of the interviewed experts thought there was at least a ~50% chance that a quantum computer will break RSA-2048 within 15 years. Let’s look up the historical answers from <a href="https://globalriskinstitute.org/publication/quantum-threat-timeline/"><u>2019</u></a>, <a href="https://globalriskinstitute.org/publication/quantum-threat-timeline-report-2020/"><u>2020</u></a>, <a href="https://globalriskinstitute.org/publication/2021-quantum-threat-timeline-report-global-risk-institute-global-risk-institute/"><u>2021</u></a>, <a href="https://globalriskinstitute.org/publication/2022-quantum-threat-timeline-report/"><u>2022</u></a>, and <a href="https://globalriskinstitute.org/publication/2023-quantum-threat-timeline-report/"><u>2023</u></a>. Here we plot the likelihood for Q-day within 15 years (of the time of the interview):</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4rMVWq9lDr49n9BmDkH2Ye/73d14f83f553becedf29dd11ce25deb1/image10.png" />
          </figure><p><sup>Historical answers in the quantum threat timeline reports for the chance of Q-day within 15 years.</sup></p><p>This shows that answers are slowly trending toward more certainty, but is that the rate we would expect? With six years of answers, we can plot how consistent the predictions are from year to year: does the 15-year estimate from 2019 match the 10-year estimate from 2024?</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1cc2fWho4kYRjhJebG6Vll/12fdb65939b8e0143606d04747cfcca9/Screenshot_2025-10-28_at_12.28.49.png" />
          </figure><p><sup>Historical answers in the quantum threat timeline report over the years on the date of Q-day. The x-axis is the alleged year for Q-day and the y-axis shows the fraction of interviewed experts that think it’s at least ~50% (left) or 70% (right) likely to happen then.</sup></p><p>If we ask experts when Q-day could be with about even odds (graph on the left), then they mostly keep saying the same thing over the years: yes, could be 15 years away. However, if we press for more certainty, and ask for Q-day with &gt;70% probability (graph on the right), then the experts are mostly consistent over the years. For instance: one-fifth thought 2034 both in the 2019 and 2024 interviews.</p><p>So, if you want a consistent answer from an expert, don’t ask them when Q-day could be, but when it’s probably there. Now, it’s good fun to guess about Q-day, but the honest answer is that no one really knows for sure: there are just too many unknowns. And in the end, the date of Q-day is far less important than the deadlines set by regulators.</p>
    <div>
      <h3>What action do regulators take?</h3>
      <a href="#what-action-do-regulators-take">
        
      </a>
    </div>
    <p>We can also look at the timelines of various regulators. In 2022, the National Security Agency (NSA) released their <a href="https://media.defense.gov/2025/May/30/2003728741/-1/-1/0/CSA_CNSA_2.0_ALGORITHMS.PDF"><u>CNSA 2.0 guidelines</u></a>, which set deadlines between 2030 and 2033 for migrating to post-quantum cryptography. Also in 2022, the US federal government <a href="http://web.archive.org/web/20240422052137/https://www.whitehouse.gov/briefing-room/statements-releases/2022/05/04/national-security-memorandum-on-promoting-united-states-leadership-in-quantum-computing-while-mitigating-risks-to-vulnerable-cryptographic-systems/"><u>set 2035</u></a> as the target to have the United States fully migrated, from which the new administration hasn’t deviated. In 2024, Australia set 2030 as their <a href="https://www.theregister.com/2024/12/17/australia_dropping_crypto_keys/"><u>aggressive deadline</u></a> to migrate. In early 2025, the UK NCSC matched the common <a href="https://www.ncsc.gov.uk/guidance/pqc-migration-timelines"><u>2035</u></a> as the deadline for the United Kingdom. Mid-2025, the European Union published <a href="https://digital-strategy.ec.europa.eu/en/library/coordinated-implementation-roadmap-transition-post-quantum-cryptography"><u>their roadmap</u></a> with 2030 and 2035 as deadlines depending on the application.</p><p>Far from all national regulators have provided post-quantum migration timelines, but those that do generally stick to the 2030–2035 timeframe.</p>
    <div>
      <h3>When is Q-day?</h3>
      <a href="#when-is-q-day">
        
      </a>
    </div>
    <p>So when will quantum computers start causing trouble? Whether it’s 2034 or 2050, for sure it will be <b>too soon</b>. The immense success of cryptography over fifty years means it’s all around us now, from dishwasher, to pacemaker, to satellite. Most upgrades will be easy, and fit naturally in the product’s lifecycle, but there will be a long tail of difficult and costly upgrades.</p><p>Now, let’s take a look at the migration to post-quantum cryptography.</p>
    <div>
      <h2>Mitigating the quantum threat: two migrations</h2>
      <a href="#mitigating-the-quantum-threat-two-migrations">
        
      </a>
    </div>
    <p>To help prioritize, it is important to understand that there is a big difference in the difficulty, impact, and urgency of the post-quantum migration for the different kinds of cryptography required to create secure connections. In fact, for most organizations there will be two post-quantum migrations: <b>key agreement</b> and <b>signatures / certificates</b>. Let’s explain this for the case of creating a secure connection when visiting a website in a browser.</p>
    <div>
      <h3>Already post-quantum secure: symmetric cryptography</h3>
      <a href="#already-post-quantum-secure-symmetric-cryptography">
        
      </a>
    </div>
    <p>The cryptographic workhorse of a connection is a <b>symmetric cipher</b> such as AES-GCM. It’s what you would think of when thinking of cryptography: both parties, in this case the browser and server, have a shared key, and they encrypt / decrypt their messages with that same key. Unless you have the key, you can’t read or modify anything.</p><p>The good news is that symmetric ciphers, such as <a href="https://blog.cloudflare.com/go-crypto-bridging-the-performance-gap/"><u>AES-GCM</u></a>, are already post-quantum secure. There is a common misconception that <a href="https://en.wikipedia.org/wiki/Grover%27s_algorithm"><u>Grover’s quantum algorithm</u></a> requires us to double the length of symmetric keys. On closer inspection of the algorithm, it’s clear that it is <a href="https://blog.cloudflare.com/nist-post-quantum-surprise#grover-s-algorithm"><u>not</u></a> <a href="https://www.youtube.com/watch?v=eB4po9Br1YY"><u>practical</u></a>. The way <a href="https://www.nist.gov/"><u>NIST</u></a>, the US National Institute of Standards and Technology (which has been spearheading the standardization of post-quantum cryptography), defines its post-quantum security levels is very telling. 
They define each security level by saying the scheme should be at least as hard to crack, using either a classical or quantum computer, as an existing symmetric primitive:</p><table><tr><td><p><b>Level</b></p></td><td><p><b>Definition:</b> at least as hard to break as … </p></td><td><p><b>Example</b></p></td></tr><tr><td><p>1</p></td><td><p>To recover the key of <b>AES-128</b> by exhaustive search</p></td><td><p>ML-KEM-512, SLH-DSA-128s</p></td></tr><tr><td><p>2</p></td><td><p>To find a collision in <b>SHA256</b> by exhaustive search</p></td><td><p>ML-DSA-44</p></td></tr><tr><td><p>3</p></td><td><p>To recover the key of <b>AES-192</b> by exhaustive search</p></td><td><p>ML-KEM-768, ML-DSA-65</p></td></tr><tr><td><p>4</p></td><td><p>To find a collision in <b>SHA384</b> by exhaustive search</p></td><td><p></p></td></tr><tr><td><p>5</p></td><td><p>To recover the key of <b>AES-256</b> by exhaustive search</p></td><td><p>ML-KEM-1024, SLH-DSA-256s, ML-DSA-87</p></td></tr></table><p><sup>NIST PQC security levels; higher is harder to break (“more secure”). The examples ML-DSA, SLH-DSA, and ML-KEM are covered below.</sup></p><p>There are good intentions behind suggesting to double symmetric key lengths. In many use cases the extra cost is not that high, and it mitigates any theoretical risk completely. Scaling symmetric cryptography is cheap: doubling the key length typically costs far less than twice as much. So on the surface, it is simple advice.</p><p>But if we insist on AES-256, it seems only logical to insist on NIST PQC level 5 for the public-key cryptography as well. The problem is that public-key cryptography does not scale nearly as well. Depending on the scheme, going from level 1 to level 5 typically more than doubles data usage and CPU cost. As we’ll see, deploying post-quantum signatures at level 1 is already painful, and deploying them at level 5 is debilitating.</p><p>But more importantly, organizations only have limited resources. 
We wouldn’t want an organization to prioritize upgrading AES-128 at the cost of leaving the definitely quantum-vulnerable RSA around.</p>
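    <p>To make the symmetric side concrete, here is a minimal sketch of authenticated encryption with AES-GCM using Go’s standard library. The <code>seal</code> / <code>open</code> helper names are ours, and key distribution (the hard part, covered next) is glossed over: both parties are simply assumed to hold the same 128-bit key.</p>

```go
package main

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"fmt"
)

// seal encrypts plaintext under a shared AES key with AES-GCM,
// prepending the random nonce to the returned ciphertext.
func seal(key, plaintext []byte) ([]byte, error) {
	block, err := aes.NewCipher(key)
	if err != nil {
		return nil, err
	}
	aead, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	nonce := make([]byte, aead.NonceSize())
	if _, err := rand.Read(nonce); err != nil {
		return nil, err
	}
	return aead.Seal(nonce, nonce, plaintext, nil), nil
}

// open reverses seal: it splits off the nonce and decrypts, failing
// if the ciphertext was modified in transit (GCM authenticates).
func open(key, sealed []byte) ([]byte, error) {
	block, err := aes.NewCipher(key)
	if err != nil {
		return nil, err
	}
	aead, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	nonce, ct := sealed[:aead.NonceSize()], sealed[aead.NonceSize():]
	return aead.Open(nil, nonce, ct, nil)
}

func main() {
	key := make([]byte, 16) // AES-128; both parties must already share this
	if _, err := rand.Read(key); err != nil {
		panic(err)
	}
	ct, _ := seal(key, []byte("hello"))
	pt, _ := open(key, ct)
	fmt.Printf("%s\n", pt)
}
```

    <p>Nothing here needs to change for the post-quantum era, which is exactly the point of this section.</p>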
    <div>
      <h3>First migration: key agreement</h3>
      <a href="#first-migration-key-agreement">
        
      </a>
    </div>
    <p>Symmetric ciphers are not enough on their own: how do I know which key to use when visiting a website for the first time? The browser can’t just send a random key, as everyone listening in would see that key as well. You’d think it’s impossible, but there is some clever math to solve this, so that the browser and server can agree on a shared key. Such a scheme is called a <b>key agreement </b>mechanism, and is performed in the TLS <a href="https://www.cloudflare.com/learning/ssl/what-happens-in-a-tls-handshake/"><u>handshake</u></a>. As of 2024, almost all traffic is secured with <a href="https://en.wikipedia.org/wiki/Curve25519"><u>X25519</u></a>, a Diffie–Hellman-style key agreement, but its security is completely broken by <a href="https://en.wikipedia.org/wiki/Shor%27s_algorithm"><u>Shor’s algorithm</u></a> on a quantum computer. Thus, any communication secured today with Diffie–Hellman, when stored, can be decrypted in the future by a quantum computer.</p><p>This makes it <b>urgent</b> to upgrade key agreement today. Luckily, post-quantum key agreement is relatively straightforward to deploy, and as we saw before, by the end of 2025 half of the requests to Cloudflare were already secured with post-quantum key agreement!</p>
    <div>
      <h3>Second migration: signatures / certificates</h3>
      <a href="#second-migration-signatures-certificates">
        
      </a>
    </div>
    <p>The key agreement allows secure agreement on a key, but there is a big gap: we do not know <i>with whom</i> we agreed on the key. If we only do key agreement, an attacker in the middle can do separate key agreements with the browser and server, and re-encrypt any exchanged messages. To prevent this we need one final ingredient: authentication.</p><p>This is achieved using <b>signatures</b>. When visiting a website, say <a href="https://cloudflare.com"><u>cloudflare.com</u></a>, the web server presents a <b>certificate</b> signed by a <a href="https://en.wikipedia.org/wiki/Certificate_authority"><u>certification authority</u></a> (CA) that vouches that the public key in that certificate is controlled by <a href="https://cloudflare.com"><u>cloudflare.com</u></a>. In turn, the web server signs the handshake and shared key using the private key corresponding to the public key in the certificate. This allows the client to be sure that they’ve done a key agreement with <a href="https://cloudflare.com"><u>cloudflare.com</u></a>.</p><p>RSA and ECDSA are the traditional signature schemes commonly used today. Again, Shor’s algorithm makes short work of them, allowing a quantum attacker to forge any signature. That means that an attacker with a quantum computer can impersonate (and <a href="https://en.wikipedia.org/wiki/Man-in-the-middle_attack"><u>MitM</u></a>) any website for which we accept non-post-quantum certificates.</p><p>This attack can only be performed once quantum computers are able to crack RSA / ECDSA. On the face of it, that makes upgrading signature schemes for TLS less urgent, as we only need everyone migrated before Q-day rolls around. Unfortunately, as we will see, the migration to post-quantum signatures is much <b>more difficult</b>, and will require more time.</p>
    <div>
      <h2>Progress timeline</h2>
      <a href="#progress-timeline">
        
      </a>
    </div>
    <p>Before we dive into the technical challenges of migrating the Internet to post-quantum cryptography, let’s have a look at how we got here, and what to expect in the coming years. Let’s start with how post-quantum cryptography came to be.</p>
    <div>
      <h3>Origin of post-quantum cryptography</h3>
      <a href="#origin-of-post-quantum-cryptography">
        
      </a>
    </div>
    <p>Physicists Feynman and Manin independently proposed quantum computers <a href="https://plato.stanford.edu/entries/qt-quantcomp/"><u>around 1980</u></a>. It took another 14 years before Shor published <a href="https://ieeexplore.ieee.org/abstract/document/365700"><u>his algorithm</u></a> attacking RSA / ECC. Most post-quantum cryptography predates Shor’s famous algorithm.</p><p>There are various branches of post-quantum cryptography, of which the most prominent are lattice-based, hash-based, multivariate, code-based, and isogeny-based. Except for isogeny-based cryptography, none of these were initially conceived as post-quantum cryptography. In fact, early code-based and hash-based schemes are contemporaries of RSA, being proposed in the 1970s, and comfortably predate the publication of Shor’s algorithm in 1994. The first multivariate scheme, from 1988, also predates Shor’s algorithm. It is a nice coincidence that the most successful branch, lattice-based cryptography, is Shor’s closest contemporary, being proposed <a href="https://dl.acm.org/doi/pdf/10.1145/237814.237838"><u>in 1996</u></a>. For comparison, elliptic curve cryptography, which is widely used today, was first proposed in 1985.</p><p>In the years after the publication of Shor’s algorithm, cryptographers took stock of the existing cryptography: what’s clearly broken, and what could be post-quantum secure? In 2006, the first annual <a href="https://postquantum.cr.yp.to/"><u>International Workshop on Post-Quantum Cryptography</u></a> took place. From that conference, an introductory text <a href="https://www.researchgate.net/profile/Nicolas-Sendrier-2/publication/226115302_Code-Based_Cryptography/links/540d62d50cf2df04e7549388/Code-Based-Cryptography.pdf"><u>was prepared</u></a>, which holds up rather well as an introduction to the field. 
A notable caveat is the <a href="https://eprint.iacr.org/2022/214.pdf"><u>demise</u></a> of the <a href="https://www.pqcrainbow.org/"><u>Rainbow</u></a> signature scheme. In that same year, 2006, the elliptic-curve key-agreement X25519 <a href="https://cr.yp.to/ecdh/curve25519-20060209.pdf"><u>was proposed</u></a>, which now secures the majority of Internet connections, either on its own or as a hybrid with the post-quantum ML-KEM-768. </p>
    <div>
      <h2>NIST completes the first generation of PQC standards</h2>
      <a href="#nist-completes-the-first-generation-of-pqc-standards">
        
      </a>
    </div>
    <p>Ten years later, in 2016, <a href="https://nist.gov"><u>NIST</u></a>, the US National Institute of Standards and Technology, <a href="https://csrc.nist.gov/CSRC/media/Projects/Post-Quantum-Cryptography/documents/call-for-proposals-final-dec-2016.pdf"><u>launched a public competition</u></a> to standardize post-quantum cryptography. They used the same open format that was used to standardize <a href="https://en.wikipedia.org/wiki/Advanced_Encryption_Standard"><u>AES</u></a> in 2001 and <a href="https://en.wikipedia.org/wiki/NIST_hash_function_competition"><u>SHA3</u></a> in 2012: anyone can participate by submitting schemes and evaluating the proposals. Cryptographers from all over the world submitted algorithms. To focus attention, the list of submissions was whittled down over three rounds based on public feedback. From the original 82, eight made it into the final round. From those eight, in 2022, NIST chose to <a href="https://blog.cloudflare.com/nist-post-quantum-surprise"><u>pick four to standardize first</u></a>: one <b>KEM</b> (for key agreement) and three signature schemes.</p><table><tr><td><p><b>Old name</b></p></td><td><p><b>New name</b></p></td><td><p><b>Branch</b></p></td></tr><tr><td><p>Kyber</p></td><td><p><b>ML-KEM</b> (<a href="https://csrc.nist.gov/pubs/fips/203/final"><u>FIPS 203</u></a>)
</p><p>Module-Lattice-Based Key-Encapsulation Mechanism Standard</p></td><td><p>Lattice-based</p></td></tr><tr><td><p>Dilithium</p></td><td><p><b>ML-DSA</b> (<a href="https://csrc.nist.gov/pubs/fips/204/final"><u>FIPS 204</u></a>)</p><p>Module-Lattice-Based Digital Signature Standard</p></td><td><p>Lattice-based</p></td></tr><tr><td><p>SPHINCS<sup>+</sup></p></td><td><p><b>SLH-DSA</b> (<a href="https://csrc.nist.gov/pubs/fips/205/final"><u>FIPS 205</u></a>)</p><p>Stateless Hash-Based Digital Signature Standard</p></td><td><p>Hash-based</p></td></tr><tr><td><p>Falcon</p></td><td><p><b>FN-DSA</b> (not standardized yet)</p><p>FFT over NTRU lattices Digital Signature Standard</p></td><td><p>Lattice-based</p></td></tr></table><p>The final standards for the first three were published in August 2024. FN-DSA is late; we’ll discuss why later.</p><p>ML-KEM is the only post-quantum key agreement standardized so far, and despite some occasional difficulty with its larger key sizes, it’s mostly a drop-in upgrade.</p><p>The situation is rather different for signatures: it’s quite telling that NIST chose to standardize three at once. And there are even more signatures set to be <a href="https://blog.cloudflare.com/another-look-at-pq-signatures/"><u>standardized in the future</u></a>. The reason is that none of the proposed signatures are close to ideal: in short, they all have much larger keys and signatures than we’re used to.</p><p>From a security standpoint, SLH-DSA is the most conservative choice, but also the worst performer. For public key and signature sizes, FN-DSA is as good as it gets among these three, but it is difficult to implement signing safely because of its floating-point arithmetic. Due to FN-DSA’s limited applicability and design complexity, NIST chose to focus on the other three schemes first.</p><p>This leaves ML-DSA as the default pick. More in-depth comparisons are included below.</p>
    <div>
      <h2>Adoption of PQC in protocol standards</h2>
      <a href="#adoption-of-pqc-in-protocol-standards">
        
      </a>
    </div>
    <p>Having NIST’s standards is not enough. It’s also required to standardize the way the new algorithms are used in higher level protocols. In many cases, such as key agreement in TLS, this can be as simple as assigning an identifier to the new algorithms. In other cases, such as <a href="https://www.cloudflare.com/dns/dnssec/how-dnssec-works/"><u>DNSSEC</u></a>, it requires a bit more thought. Many working groups at the <a href="https://www.ietf.org/"><u>IETF</u></a> have been preparing for years for the arrival of NIST’s final standards, and we expected many protocol integrations to be finalized soon after, before the end of 2024. That was too optimistic: some are done, but many are not finished yet.</p><p>Let’s start with the good news and look at what is done.</p><ul><li><p>The hybrid TLS key agreement <a href="https://datatracker.ietf.org/doc/draft-ietf-tls-ecdhe-mlkem/"><u>X25519MLKEM768</u></a> that combines X25519 and ML-KEM-768 (more about it later) is ready to use and is indeed quite widely deployed. Other protocols are likewise adopting ML-KEM in a hybrid mode of operation, such as <a href="https://datatracker.ietf.org/doc/draft-ietf-ipsecme-ikev2-mlkem/"><u>IPsec</u></a>, which is ready to go for simple setups. (For certain setups, there is a <a href="https://datatracker.ietf.org/doc/draft-ietf-ipsecme-ikev2-downgrade-prevention/"><u>little wrinkle</u></a> that still needs to be figured out. We’ll cover that in a future blog post.)

It might be surprising that the corresponding RFCs have not been published yet. Registering a key agreement with TLS or IPsec does not require an RFC, though. In both cases, an RFC is still being pursued to avoid confusion for those who would expect one, and for TLS it’s required to mark the key agreement as recommended.</p></li><li><p>For signatures, ML-DSA’s integrations <a href="https://datatracker.ietf.org/doc/draft-ietf-lamps-dilithium-certificates/"><u>in X.509</u></a> certificates and <a href="https://datatracker.ietf.org/doc/draft-ietf-tls-mldsa/"><u>TLS</u></a> are good to go. The former is a freshly minted RFC, and the latter doesn’t require one.</p></li></ul><p>Now, for the bad news. At the time of writing, October 2025, the IETF hasn’t <a href="https://datatracker.ietf.org/doc/draft-ietf-lamps-pq-composite-sigs/"><u>locked down</u></a> how to do hybrid certificates: certificates where a post-quantum and a traditional signature scheme are combined. But it’s close. We hope this will be figured out in early 2026.</p><p>But if it’s just assigning some identifiers, what’s the cause of the delay? Mostly it’s about choice. Let’s start with the choices that had to be made in ML-DSA.</p>
    <div>
      <h4>ML-DSA delays: much ado about prehashing and private key formats</h4>
      <a href="#ml-dsa-delays-much-ado-about-prehashing-and-private-key-formats">
        
      </a>
    </div>
    <p>The two major topics of discussion for ML-DSA certificates were prehashing and the private key format.</p><p>Prehashing is where one part of the system hashes the message, and another creates the final signature. This is useful if you don’t want to send a big file to an <a href="https://en.wikipedia.org/wiki/Hardware_security_module"><u>HSM</u></a> to sign. Early drafts of ML-DSA supported prehashing with SHAKE256, but that <a href="https://csrc.nist.gov/csrc/media/Projects/post-quantum-cryptography/documents/faq/fips204-sec6-03192025.pdf"><u>was</u></a> not obvious. In the final version of ML-DSA, NIST included two variants: regular ML-DSA, and an explicitly prehashed version, where you are allowed to choose any hash. Having different variants is not ideal: users have to choose which one to pick; not all software might support all variants; and testing / validation has to be done for each. It’s not controversial to want to pick just one variant; the issue <a href="https://globalplatform.org/wp-content/uploads/2025/01/4_ML-DSA-and-ML-KEM-Landmines-1.pdf"><u>is</u></a> <a href="https://keymaterial.net/2024/11/05/hashml-dsa-considered-harmful/"><u>which</u></a>. After plenty of debate, regular ML-DSA was chosen.</p><p>The second matter is the <a href="https://datatracker.ietf.org/meeting/122/materials/slides-122-pquip-the-great-private-key-war-of-25-02.pdf"><u>private key format</u></a>. Because candidates are compared on performance benchmarks, it looked good for the original ML-DSA submission to cache some precomputation in the private key. This means that the private key is larger (several kilobytes) than it needs to be and requires more validation steps. It was suggested to cut the private key down to its bare essentials: just a 32-byte <i>seed</i>. For the final standard, NIST decided to allow both the seed and the original larger private key. 
This is not <a href="https://keymaterial.net/2025/02/19/how-not-to-format-a-private-key/"><u>ideal</u></a>: it would be better to stick to one of the two. In this case, the IETF wasn’t able to make a choice, and even added a third option: a pair of both the seed and the expanded private key. Technically, almost everyone agreed that the <i>seed</i> is the superior choice; the reason it wasn’t palatable is that some vendors had already created keys for which they didn’t keep the <i>seed</i> around. Yes, we already have post-quantum legacy. It took almost a year to make these two choices.</p>
    <div>
      <h4>Hybrids require many choices</h4>
      <a href="#hybrids-require-many-choices">
        
      </a>
    </div>
    <p>To define an ML-DSA hybrid signature scheme, there are many more choices to make. With which traditional scheme should ML-DSA be combined? Which security levels on each side? Then we also need to make choices for both schemes: which private key format to use? Which hash to use with ECDSA? Hybrids raise new questions of their own: do we allow reuse of the keys in the hybrid, and if so, do we want to prevent stripping attacks? Also, the question of prehashing returns with a third option: prehash at the hybrid level.</p><p>The <a href="https://datatracker.ietf.org/doc/draft-ietf-lamps-pq-composite-sigs/12/"><u>October 2025 draft</u></a> for ML-DSA hybrid signatures contains 18 variants, down from <a href="https://datatracker.ietf.org/doc/draft-ietf-lamps-pq-composite-sigs/03/"><u>26</u></a> a year earlier. Again, everyone agrees that that is too many, but it’s been hard to whittle the list down further. To help end users choose, a short list was added, which started with three options, and of course grew to <a href="https://www.ietf.org/archive/id/draft-ietf-lamps-pq-composite-sigs-12.html#section-11.3"><u>six</u></a>. Of those, we think MLDSA44-ECDSA-P256-SHA256 will see wide support and use on the Internet.</p><p>Now, let’s return to key agreement, for which the standards have been set.</p>
    <div>
      <h2>TLS stacks get support for ML-KEM</h2>
      <a href="#tls-stacks-get-support-for-ml-kem">
        
      </a>
    </div>
    <p>The next step is software support. Not all ecosystems can move at the same speed, but we’ve seen major adoption of post-quantum key agreement to counter store-now/decrypt-later already. Recent versions of all major browsers, and many TLS libraries and platforms (notably OpenSSL, Go, and recent Apple OSes), have enabled X25519MLKEM768 by default. We keep an overview <a href="https://developers.cloudflare.com/ssl/post-quantum-cryptography/pqc-support/"><u>here</u></a>.</p><p>For TLS, there is a big difference between key agreement and signatures. For key agreement, the server and client can add and enable support for post-quantum key agreement independently. Once enabled on both sides, TLS negotiation will use post-quantum key agreement. We go into detail on TLS negotiation in <a href="https://blog.cloudflare.com/post-quantum-for-all#tls-anchor"><u>this blog post</u></a>. If your product just uses TLS, your store-now/decrypt-later problem could be solved by a simple software update of the TLS library.</p><p>Post-quantum <a href="https://www.cloudflare.com/application-services/products/ssl/">TLS certificates</a> are more of a hassle. Unless you control both ends, you’ll need to install two certificates: a post-quantum certificate for new clients, and a traditional one for old clients. If you aren’t using <a href="https://www.cloudflare.com/application-services/solutions/certificate-lifecycle-management/">automated issuance of certificates</a> yet, this might be a good reason to <a href="https://letsencrypt.org/docs/client-options/"><u>check that out</u></a>. TLS allows the client to signal which signature schemes it supports, so that the server can serve a post-quantum certificate only to those clients that support it. Unfortunately, although almost all TLS libraries support setting up multiple certificates, not all servers expose that configuration. If they do, it will still require a configuration change in most cases. 
(Although undoubtedly <a href="https://caddyserver.com/"><u>Caddy</u></a> will do it for you.)</p><p>Speaking of post-quantum certificates: it will take some time before Certification Authorities (CAs) can issue them. Their <a href="https://csrc.nist.gov/glossary/term/hardware_security_module_hsm"><u>HSMs</u></a> will first need (hardware) support, which will then need to be audited. Also, the <a href="https://cabforum.org/"><u>CA/Browser forum</u></a> needs to approve the use of the new algorithms. Root programs have different opinions about timelines. From the grapevine, we hear one of the root programs is preparing a pilot to accept one-year ML-DSA-87 certificates, perhaps even before the end of 2025. A CA/Browser forum ballot is <a href="https://github.com/cabforum/servercert/pull/624"><u>being drafted</u></a> to support this. Chrome, on the other hand, <a href="https://www.youtube.com/live/O_BXzJv16zQ?t=19274s"><u>prefers</u></a> to solve the large-certificate issue first. For the early movers, the audits are likely to be the bottleneck, as there will be a lot of submissions after the publication of the NIST standards. Although we’ll see the first post-quantum certificates in 2026, it’s unlikely they will be broadly available or trusted by all browsers before 2027.</p><p>We are in an interesting in-between time, where a lot of Internet traffic is protected by post-quantum key agreement, but not a single public post-quantum certificate is used.</p>
    <div>
      <h2>The search continues for more schemes</h2>
      <a href="#the-search-continues-for-more-schemes">
        
      </a>
    </div>
    <p>NIST is not quite done standardizing post-quantum cryptography. There are two more post-quantum competitions running: <b>round 4</b> and the <b>signatures onramp</b>.</p>
    <div>
      <h3>Round 4 winner: HQC</h3>
      <a href="#round-4-winner-hqc">
        
      </a>
    </div>
    <p>NIST has only standardized one post-quantum key agreement so far: ML-KEM. They’d like to have a second one, a <b>backup KEM</b> not based on lattices, in case those turn out to be weaker than expected. To find it, they extended the original competition with a fourth round to pick a backup KEM among the finalists. In March 2025, <a href="https://www.nist.gov/news-events/news/2025/03/nist-selects-hqc-fifth-algorithm-post-quantum-encryption#:~:text=NIST%20has%20chosen%20a%20new,were%20discovered%20in%20ML%2DKEM."><u>HQC</u></a> was <a href="https://www.nist.gov/news-events/news/2025/03/nist-selects-hqc-fifth-algorithm-post-quantum-encryption#:~:text=NIST%20has%20chosen%20a%20new,were%20discovered%20in%20ML%2DKEM."><u>selected</u></a> to be standardized.</p><p>HQC performs much worse than ML-KEM on every single metric. HQC-1, the lowest security level variant, requires 7kB of data on the wire. This is almost double the 3kB required for ML-KEM-1024, the highest security level variant. There is a similar gap in CPU performance. HQC also scales worse with security level: where ML-KEM-1024 is about double the cost of ML-KEM-512, the highest security level of HQC requires three times the data (21kB!) and more than four times the compute.</p><p>What about the security? Against gradually improving attacks, ML-KEM-768 has a clear edge over HQC-1: it performs much better while sitting at level 3 rather than level 1, a huge security margin. What about leaps? Both ML-KEM and HQC use a similar algebraic structure on top of plain lattices and codes respectively: it is not inconceivable that a breakthrough there could apply to both. And even without the algebraic structure, codes and lattices feel related. We’re well into speculation here: a catastrophic attack on lattices might not affect codes, but it wouldn’t be surprising if it did either. 
After all, RSA and ECC, which are more dissimilar, are both broken by quantum computers.</p><p>Still, there is some peace of mind in keeping HQC around, just in case. Here, we’d like to share an anecdote from the chaotic week when it was not yet clear that Chen’s quantum algorithm against lattices was flawed. What would we replace ML-KEM with if it were affected? HQC was briefly considered, but it was clear that an adjusted variant of ML-KEM would still be much more performant.</p><p>Stepping back: that we’re looking for a <i>second</i> efficient KEM is a luxury position. If I were granted a wish for a new post-quantum scheme, I wouldn’t ask for a better KEM, but for a better signature scheme. Let’s see if I get lucky.</p>
    <div>
      <h3>Signatures onramp</h3>
      <a href="#signatures-onramp">
        
      </a>
    </div>
    <p>In late 2022, after announcing the first four picks, NIST also launched a new competition, dubbed the <i>signatures onramp</i>, to find <a href="https://csrc.nist.gov/projects/pqc-dig-sig"><u>additional signature schemes</u></a>. The competition has two goals. The first is hedging against cryptanalytic breakthroughs in lattice-based cryptography: NIST would like to standardize a signature that performs better than SLH-DSA (both in size and compute) but is not based on lattices. Secondly, they’re looking for a signature scheme that might do well in the use cases where the current roster doesn’t: we will discuss those at length later on in this post.</p><p>In July 2023, NIST posted the <a href="https://csrc.nist.gov/news/2023/additional-pqc-digital-signature-candidates"><u>40 submissions</u></a> they received for a first round of public review. The cryptographic community got to work, and as is quite normal for a first round, many of the schemes were broken within a week. By February 2024, ten submissions were broken completely, and several others were weakened drastically. From the remaining candidates, NIST selected 14 submissions for the second round in October 2024.</p><p>A year ago, we wrote <a href="https://blog.cloudflare.com/another-look-at-pq-signatures/"><u>a blog post</u></a> covering these 14 submissions in great detail. The short of it: there has been amazing progress on post-quantum signature schemes. We will touch briefly upon them later on, and give some updates on the advances since last year. It is worth mentioning that, just like in the main post-quantum competition, the selection process will take many years. It is unlikely that any of these onramp signature schemes will be standardized before 2028 — if they’re not broken in the first place. That means that although they’re very welcome in the future, we can’t count on better signature schemes to solve our problems today. 
As Eric Rescorla, the editor of TLS 1.3, <a href="https://educatedguesswork.org/posts/pq-emergency/"><u>writes</u></a>: “You go to war with the algorithms you have, not the ones you wish you had.”</p><p>With that in mind, let's look at the progress of deployments.</p>
    <div>
      <h2>Migrating the Internet to post-quantum key agreement</h2>
      <a href="#migrating-the-internet-to-post-quantum-key-agreement">
        
      </a>
    </div>
    <p>Now that we have the big picture, let’s dive into some finer details about this X25519MLKEM768 that’s widely deployed now.</p><p>First the post-quantum part. ML-KEM was submitted under the name <a href="https://pq-crystals.org/kyber/index.shtml"><u>CRYSTALS-Kyber</u></a>. Even though it’s a US standard, its designers work in industry and academia across France, Switzerland, the Netherlands, Belgium, Germany, Canada, China, and the United States. Let’s have a look at its performance.</p>
    <div>
      <h2>ML-KEM versus X25519</h2>
      <a href="#ml-kem-versus-x25519">
        
      </a>
    </div>
    <p>Today the vast majority of clients use the traditional key agreement X25519. Let’s compare that to ML-KEM.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/VCx6lbwzhKt4FywhRAZbk/4b7956adbb9a7690d3c3c6ce5d830fe1/Screenshot_2025-10-28_at_13.41.31.png" />
          </figure><p><sup>Size and CPU compared between X25519 and ML-KEM. Performance varies considerably by hardware platform and implementation constraints, and should be taken as a rough indication only.</sup></p><p>ML-KEM-512, -768 and -1024 aim to be as resistant to (quantum) attack as AES-128, -192 and -256 respectively. Even at the AES-128 level, ML-KEM is much bigger than X25519, requiring 800+768=1,568 bytes over the wire, whereas X25519 requires a mere 64 bytes.</p><p>On the other hand, even ML-KEM-1024 is typically significantly faster than X25519, although this can vary quite a bit depending on your platform and implementation.</p>
    <div>
      <h2>ML-KEM-768 and X25519</h2>
      <a href="#ml-kem-768-and-x25519">
        
      </a>
    </div>
    <p>We are not taking advantage of that speed boost just yet. Like many other early adopters, we like to play it safe and deploy a <b>hybrid</b> key agreement <a href="https://datatracker.ietf.org/doc/draft-ietf-tls-ecdhe-mlkem/"><u>combining</u></a> X25519 and ML-KEM-768. This combination might surprise you for two reasons.</p><ol><li><p>Why combine X25519 (“128 bits of security”) with ML-KEM-768 (“192 bits of security”)?</p></li><li><p>Why bother with the non-post-quantum X25519?</p></li></ol><p>The apparent security level mismatch is a hedge against improvements in the cryptanalysis of lattice-based cryptography. There is a lot of trust in the (non-post-quantum) security of X25519: matching AES-128 is more than enough. And although we are comfortable with the security of ML-KEM-512 today, cryptanalysis could improve over the coming decades. Thus, we’d like to keep a margin for now.</p><p>There are two reasons for including X25519. First, there is always a remote chance that a breakthrough renders all variants of ML-KEM insecure. In that case, X25519 still provides non-post-quantum security, and our post-quantum migration didn’t make things worse.</p><p>More important is that we worry not only about attacks on the algorithm, but also on the implementation. A noteworthy example where we dodged a bullet is that of <a href="https://kyberslash.cr.yp.to/"><u>KyberSlash</u></a>, a timing attack that affected many implementations of Kyber (an earlier version of ML-KEM), including <a href="https://github.com/cloudflare/circl/security/advisories/GHSA-9763-4f94-gfch"><u>our own</u></a>. Luckily, KyberSlash does not affect Kyber as it is used in TLS. A similar implementation mistake that would actually affect TLS would likely require an active attacker. In that case, the attacker’s likely aim wouldn’t be to decrypt data decades down the line, but to steal a cookie or other token, or to inject a payload. 
Including X25519 prevents such an attack.</p><p>So how well do ML-KEM-768 and X25519 together perform in practice?</p>
    <div>
      <h2>Performance and protocol ossification</h2>
      <a href="#performance-and-protocol-ossification">
        
      </a>
    </div>
    
    <div>
      <h3>Browser experiments</h3>
      <a href="#browser-experiments">
        
      </a>
    </div>
    <p>Being well aware of potential compatibility and performance issues, Google started <a href="https://security.googleblog.com/2016/07/experimenting-with-post-quantum.html"><u>a first experiment</u></a> with post-quantum cryptography back in 2016, the same year NIST started their competition. This was followed up by a second, larger joint experiment by <a href="https://blog.cloudflare.com/towards-post-quantum-cryptography-in-tls/"><u>Cloudflare</u></a> and <a href="https://www.imperialviolet.org/2018/12/12/cecpq2.html"><u>Google</u></a> in 2018. We tested two different hybrid post-quantum key agreements: CECPQ2, a combination of the lattice-based NTRU-HRSS and X25519, and CECPQ2b, a combination of the isogeny-based SIKE and, again, X25519. NTRU-HRSS is very similar to ML-KEM in size, but is computationally somewhat more taxing on the client side. SIKE, on the other hand, has very small keys, is computationally very expensive, and was <a href="https://eprint.iacr.org/2022/975.pdf"><u>completely broken</u></a> in 2022. With respect to TLS handshake times, X25519+NTRU-HRSS performed very well.</p><p>Unfortunately, a small but significant fraction of clients experienced broken connections with NTRU-HRSS. The reason: the size of the NTRU-HRSS keyshares. In the past, when creating a TLS connection, the first message sent by the client, the so-called <i>ClientHello</i>, almost always fit within a single network packet. The TLS specification allows for a larger <i>ClientHello</i>, but hardly anyone had made use of that. Thus protocol ossification struck again: some middleboxes, load balancers, and other software tacitly assume the <i>ClientHello</i> always fits in a single packet.</p>
    <div>
      <h2>Long road to 50%</h2>
      <a href="#long-road-to-50">
        
      </a>
    </div>
    <p>Over the subsequent years, we kept experimenting with PQ, switching to Kyber in 2022, and to ML-KEM in 2024. Chrome did a great job reaching out to vendors whose products were <a href="https://tldr.fail/"><u>incompatible</u></a>. If it were not for these compatibility issues, we would’ve likely seen Chrome ramp up post-quantum key agreement five years earlier. It took until March 2024 before Chrome felt comfortable enough to enable post-quantum key agreement by default on Desktop. After that, many other clients, including all major browsers, joined Chrome in enabling post-quantum key agreement by default. An incomplete timeline:</p><table><tr><td><p>July 2016</p></td><td><p>Chrome’s <a href="https://security.googleblog.com/2016/07/experimenting-with-post-quantum.html"><u>first experiment with PQ</u></a> (CECPQ)</p></td></tr><tr><td><p>June 2018</p></td><td><p><a href="https://blog.cloudflare.com/the-tls-post-quantum-experiment/"><u>Cloudflare</u></a> / <a href="https://www.imperialviolet.org/2018/12/12/cecpq2.html"><u>Google</u></a> experiment (CECPQ2)</p></td></tr><tr><td><p>October 2022</p></td><td><p>Cloudflare <a href="https://blog.cloudflare.com/post-quantum-for-all/"><u>enables</u></a> PQ by default server side</p></td></tr><tr><td><p>November 2023</p></td><td><p>Chrome ramps up PQ to 10% on Desktop</p></td></tr><tr><td><p>March 2024</p></td><td><p>Chrome <a href="https://blog.chromium.org/2024/05/advancing-our-amazing-bet-on-asymmetric.html"><u>enables</u></a> PQ by default on Desktop</p></td></tr><tr><td><p>August 2024</p></td><td><p>Go <a href="https://github.com/golang/go/issues/67061"><u>enables</u></a> PQ by default</p></td></tr><tr><td><p>November 2024</p></td><td><p>Chrome (Android) and Firefox (Desktop) enable PQ by default</p></td></tr><tr><td><p>April 2025</p></td><td><p><a href="https://openssl-library.org/post/2025-04-08-openssl-35-final-release/"><u>OpenSSL</u></a> enables PQ by default</p></td></tr><tr><td><p>October 2025</p></td><td><p>Apple is <a href="https://support.apple.com/en-us/122756"><u>rolling out</u></a> PQ by default with the release of iOS / iPadOS / macOS 26</p></td></tr></table><p>It’s noteworthy that there is a gap between Chrome enabling PQ on Desktop and on Android. Although ML-KEM doesn’t have a large performance impact, as seen in the graphs, it’s certainly not negligible, especially on the long tail of slower connections more prevalent on mobile, and it required more consideration before proceeding.</p><p>But we’re finally here now: over 50% (and rising!) of human traffic is protected against store-now/decrypt-later, making post-quantum key agreement the new security baseline for the Web.</p><p>Browsers are one side of the equation; what about servers?</p>
    <div>
      <h3>Server-side support</h3>
      <a href="#server-side-support">
        
      </a>
    </div>
    <p>Back in 2022, we <a href="https://blog.cloudflare.com/post-quantum-for-all/"><u>enabled</u></a> post-quantum key agreement server side for basically all customers. Google did the same for most of their servers (except GCP) in 2023. Since then, many have followed. Jan Schaumann has been posting regular scans of the top 100k domains. In his September 2025 post, <a href="https://www.netmeister.org/blog/pqc-use-2025-09.html"><u>he reports</u></a> that 39% now support PQ, up from 28% only six months earlier. In his survey, we see support rolling out not only on large service providers, such as Amazon, Fastly, Squarespace, Google, and Microsoft, but also on a trickle of self-hosted servers at Hetzner and OVHcloud.</p><p>This is the publicly accessible web. What about servers behind a service like Cloudflare?</p>
    <div>
      <h3>Support at origins</h3>
      <a href="#support-at-origins">
        
      </a>
    </div>
    <p>In <a href="https://blog.cloudflare.com/post-quantum-to-origins"><u>September 2023</u></a>, we added support for our customers to enable post-quantum key agreement on connections from Cloudflare to their origins. That’s connection (3) in the following diagram:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7dRJxj1f2otMM41sEKFoFG/d722378a6f74c4033787897334bb4e7a/image12.png" />
          </figure><p><sup>Typical connection flow when a visitor requests an uncached page.</sup></p><p>Back in 2023, only 0.5% of origins supported post-quantum key agreement. Through 2024 that barely changed. This year, in 2025, support has slowly picked up as software support rolls out, and we’re now at 3.7%.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6LaKKWKWTli5NETFHlQ1za/e9eb1e750a72e62bdc522207451e7085/image7.png" />
          </figure><p><sup>Fraction of origins that support the post-quantum key agreement X25519MLKEM768.</sup></p><p>3.7% doesn’t sound impressive at all compared to the previous 50% and 39% for clients and public servers respectively, but it’s nothing to scoff at. There is much more diversity in origins than in clients: many more people have to do something to make that number move up. It’s still a more than seven-fold increase, and let’s not forget that back in 2024 we celebrated reaching 1.8% of client support. For customers, origins aren’t always easy to upgrade. Does that mean missing out on post-quantum security? No, not necessarily: you can secure the connection between Cloudflare and your origin by setting up <a href="https://developers.cloudflare.com/cloudflare-one/connections/connect-networks/"><u>Cloudflare Tunnel</u></a> as a sidecar to your origin.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3bfxtdySAPtc9hn9Qroztz/8233f1fecbed214b9584af6648488587/image3.png" />
          </figure>
    <div>
      <h3>Ossification</h3>
      <a href="#ossification">
        
      </a>
    </div>
    <p>Support is all well and good, but as we saw with the browser experiments, protocol ossification is a big concern. What does it look like with origins? Well, it depends.</p><p>There are two ways to enable post-quantum key agreement: the fast way, and the slow but safer way. In both cases, if the origin doesn’t support post-quantum, we fall back safely to traditional key agreement. We explain the details in this <a href="https://blog.cloudflare.com/post-quantum-to-origins"><u>blog post</u></a>, but in short: in the fast way we send the post-quantum keys immediately, and in the safer way we postpone them by one roundtrip using <i>HelloRetryRequest</i>. All major browsers use the fast way.</p><p>We have been regularly scanning all origins to see what they support. The good news is that all origins supported the safe but slow method. The fast method didn’t fare as well: we found that 0.05% of connections would break. That’s too high to enable the fast method by default. We did enable PQ to origins by default using the safer method for all non-enterprise customers; enterprise customers can opt in.</p><p>We won’t be satisfied, though, until it’s fast and enabled for everyone. That’s why we’ll <a href="https://blog.cloudflare.com/automatically-secure/#post-quantum-era"><u>automatically enable</u></a> post-quantum to origins using the fast method for all customers, if our scans show it’s safe.</p>
    <div>
      <h3>Internal connections</h3>
      <a href="#internal-connections">
        
      </a>
    </div>
    <p>So far, all the connections we’ve been talking about are between Cloudflare and external parties. There are also a lot of internal connections within Cloudflare (marked 2 in the two diagrams above). In 2023 we <a href="https://blog.cloudflare.com/post-quantum-cryptography-ga/"><u>made a big push</u></a> to upgrade our internal connections to post-quantum key agreement. Compared to all the other post-quantum efforts we pursue, this has been, by far, the biggest job: we asked every engineering team in the company to stop what they were doing, take stock of the data and connections their products secure, and upgrade them to post-quantum key agreement. In most cases the upgrade was simple. In fact, many teams were already upgraded by pulling in software updates. Still, figuring out that you’re already done can take quite some time! On a positive note, we didn’t see any performance or ossification issues in this push.</p><p>We have upgraded the majority of internal connections, but a long tail remains, which we continue to work on. The most important connection that we didn’t get to upgrade in 2023 is the one between the WARP client and Cloudflare. In September 2025, we <a href="https://blog.cloudflare.com/post-quantum-warp/"><u>upgraded it</u></a> by moving from WireGuard to QUIC.</p>
    <div>
      <h2>Outlook</h2>
      <a href="#outlook">
        
      </a>
    </div>
    <p>As we’ve seen, post-quantum key agreement, despite initial trouble with protocol ossification, has been straightforward to deploy. In the vast majority of cases it’s an uneventful software update. And with 50% deployment (and rising), it’s the new security baseline for the Internet.</p><p>Let’s turn to the second, more difficult migration.</p>
    <div>
      <h2>Migrating the Internet to post-quantum signatures</h2>
      <a href="#migrating-the-internet-to-post-quantum-signatures">
        
      </a>
    </div>
    <p>Now, we’ll turn our attention to upgrading the signatures used on the Internet.</p>
    <div>
      <h2>The zoo of post-quantum signatures</h2>
      <a href="#the-zoo-of-post-quantum-signatures">
        
      </a>
    </div>
    <p>We wrote a <a href="https://blog.cloudflare.com/another-look-at-pq-signatures/"><u>long deep dive</u></a> into the field of post-quantum signature schemes last year, in November 2024. Most of that is still up-to-date, but there have been some exciting developments. Here we’ll go over some highlights and some exciting updates from the past year.</p><p>Let’s start by sizing up the post-quantum signatures we have available today at the AES-128 security level: ML-DSA-44 and the two variants of SLH-DSA. We use ML-DSA-44 as the baseline, as that’s the scheme that’s going to see the most widespread use initially. As a comparison, we also include the venerable Ed25519 and RSA-2048 in wide use today, as well as FN-DSA-512, which will be standardized soon, and a sample of nine promising signature schemes for TLS from the signatures onramp.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4NC2lO6hXKEFOgaVgQ7ExO/a0a65ddbb24d11ad96405f19aa344f4b/Screenshot_2025-10-28_at_13.18.54.png" />
          </figure><p><sup>Comparison of various signature schemes at the security level of AES-128. CPU times vary significantly by platform and implementation constraints and should be taken as a rough indication only. ⚠️ FN-DSA signing time when using fast but dangerous floating-point arithmetic — see warning below. ⚠️ SQISign signing is not timing side-channel secure.</sup></p><p>It is immediately clear that none of the post-quantum signature schemes comes even close to being a drop-in replacement for Ed25519 (which is comparable to ECDSA P-256) as most of the signatures are simply much bigger. The exceptions are SQISign, MAYO, SNOVA, and UOV from the onramp, but they’re far from ideal. MAYO, SNOVA, and UOV have large public keys, and SQISign requires a great amount of computation.</p>
    <div>
      <h3>Be careful with FN-DSA</h3>
      <a href="#be-careful-with-fn-dsa">
        
      </a>
    </div>
    <p>Looking ahead a bit: the best of the first competition seems to be FN-DSA-512. FN-DSA-512’s signatures and public key together are <i>only</i> 1,563 bytes, with somewhat reasonable signing time. FN-DSA has an <b>Achilles heel</b> though: for acceptable signing performance, it requires fast floating-point arithmetic. Without it, signing is about 20 times slower. But speed is not enough: the floating-point arithmetic also has to run in constant time, as otherwise the FN-DSA private key can be recovered by timing signature creation. Writing safe FN-DSA implementations has turned out to be quite challenging, which makes FN-DSA dangerous when signatures are generated on the fly, such as in a TLS handshake. It is good to stress that this only affects signing. FN-DSA verification does not require floating-point arithmetic (and during verification there wouldn’t be a private key to leak anyway).</p>
    <div>
      <h2>There are many signatures on the web</h2>
      <a href="#there-are-many-signatures-on-the-web">
        
      </a>
    </div>
    <p>The biggest pain point of migrating the Internet to post-quantum signatures is that there are a lot of signatures, even in a single connection. When you visit this very website for the first time, we send <b>five signatures and two public keys</b>.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6frWEoCLnBEZ5qztV8XoT4/25bd315190d8914f42f282679d6f525a/image9.png" />
          </figure><p>The majority of these are for the <b>certificate chain</b>: the CA signs the intermediate certificate, which signs the leaf certificate, which in turn signs the TLS transcript to prove the authenticity of the server. If you’re keeping count: we’re still two signatures short.</p><p>These are the <b>SCTs</b> (signed certificate timestamps) required for <a href="https://certificate.transparency.dev/howctworks/"><u>certificate transparency</u></a>. Certificate transparency (CT) is a key, but lesser-known, part of the <a href="https://smallstep.com/blog/everything-pki/#web-pki-vs-internal-pki"><u>Web PKI</u></a>, the ecosystem that secures browser connections. Its goal is to publicly log every certificate issued, so that misissuances can be detected after the fact. It’s the system behind <a href="http://crt.sh"><u>crt.sh</u></a> and <a href="https://blog.cloudflare.com/new-regional-internet-traffic-and-certificate-transparency-insights-on-radar/"><u>Cloudflare Radar</u></a>. CT has shown its value once more very recently by surfacing a <a href="https://blog.cloudflare.com/unauthorized-issuance-of-certificates-for-1-1-1-1/"><u>rogue certificate for 1.1.1.1</u></a>.</p><p>Certificate transparency works by having independent parties run <i>CT logs</i>. Before issuing a certificate, a CA must first submit it to at least two different CT logs. An SCT is a signature from a CT log that acts as a proof, a <i>receipt</i>, that the certificate has been logged.</p>
    <div>
      <h3>Tailoring signature schemes</h3>
      <a href="#tailoring-signature-schemes">
        
      </a>
    </div>
    <p>There are two aspects of how a signature is used that are worth highlighting: whether the <b>public key is included</b> with the signature, and whether the signature is created <b>online</b> or <b>offline</b>.</p><p>For the SCTs and the signature of the root on the intermediate, the public key is not transmitted during the handshake. Thus, for those, a signature scheme with smaller signatures but larger public keys, such as MAYO, SNOVA, or UOV, would be particularly well-suited. For the other signatures, the public key is included, and it’s more important to minimize the combined size of the public key and signature.</p><p>The handshake signature is the only signature that is created online — all the other signatures are created ahead of time. The handshake signature is created and verified only once, whereas the other signatures are typically verified many times by different clients. This means that for the handshake signature, it’s advantageous to balance signing and verification time, which are both in the <i>hot path</i>, whereas for the other signatures, having better verification time at the cost of slower signing is worthwhile. This is one of the advantages RSA still enjoys over elliptic curve signatures today.</p><p>Putting together different signature schemes is a fun puzzle, but it also comes with drawbacks. Using multiple different schemes increases the attack surface, because an algorithmic or implementation vulnerability in one compromises the whole. Also, the whole ecosystem needs to implement and optimize multiple algorithms, which is a significant burden.</p>
    <div>
      <h2>Putting it together</h2>
      <a href="#putting-it-together">
        
      </a>
    </div>
    <p>So, what are some reasonable combinations to try?</p>
    <div>
      <h3>With NIST’s current picks</h3>
      <a href="#with-nists-current-picks">
        
      </a>
    </div>
    <p>With the draft standards available today, we do not have a lot of options.</p><p>If we simply switch to ML-DSA-44 for all signatures, we’re adding 15kB of data that needs to be transmitted from the server to the client during the TLS handshake. Is that a lot? Probably. We will address that later on.</p><p>If we wait a bit and replace all but the handshake signature with FN-DSA-512, we’re looking at adding only 7kB. That’s much better, but I have to repeat that it’s difficult to implement FN-DSA-512 signing safely without timing side channels, and there is a good chance we’ll shoot ourselves in the foot if we’re not careful. Another way to shoot ourselves in the foot <i>today</i> is with stateful hash-based signatures, as we explain <a href="https://blog.cloudflare.com/pq-2024/#stateful-hash-based-signatures"><u>here</u></a>. All in all, FN-DSA-512 and stateful hash-based signatures tempt us with a similar and clear performance benefit over ML-DSA-44, but are difficult to use safely.</p>
    <div>
      <h3>Signatures on the horizon</h3>
      <a href="#signatures-on-the-horizon">
        
      </a>
    </div>
    <p>There are some promising new signature schemes submitted to the NIST onramp.</p><p>Purely looking at sizes, SQISign I is the clear winner, even beating RSA-2048. Unfortunately, the computation required for signing and, crucially, verification is too high. SQISign is in a worse position than FN-DSA with respect to implementation security: it’s very complicated, and it’s unclear how to perform signing in <i>constant time</i>. For niche applications, SQISign might be useful, but for general adoption verification times need to improve significantly, even if that requires a larger signature. Over the last few years there has been amazing progress in improving verification time, simplifying the algorithm, and hardening <a href="https://eprint.iacr.org/2025/832"><u>implementation security</u></a> for (variants of) SQISign. They’re not there yet, but the gap has shrunk much more than we’d have expected. If the pace of improvement holds, a future SQISign could well be viable for TLS.</p><p>One conservative contender is <a href="https://link.springer.com/chapter/10.1007/3-540-48910-X_15"><u>UOV (unbalanced oil and vinegar)</u></a>. It is an old multivariate scheme with a large public key (66.5kB) but small signatures (96 bytes). Over the decades, there have been many attempts to add some structure to UOV public keys, to get a better balance between public key and signature size. Many of these so-called <i>structured multivariate</i> schemes, which include Rainbow and GeMSS, have unfortunately been broken dramatically, <a href="https://eprint.iacr.org/2022/214.pdf"><u>“with a laptop over the weekend”</u></a>. MAYO and SNOVA, which we’ll get to in a bit, are the latest attempts at structured multivariate. UOV itself has remained mostly unscathed. Surprisingly, in 2025, Lars Ran found a completely new <a href="https://eprint.iacr.org/2025/1143"><u>“wedges” attack</u></a> on UOV. It doesn’t affect UOV much, but SNOVA and MAYO are hit harder. What makes the attack noteworthy is that it’s based on a relatively simple idea: it is surprising it wasn’t found earlier. Now, getting back to performance: if we combine UOV for the root and SCTs with ML-DSA-44 for the others, we’re looking at only 10kB — close to FN-DSA-512.</p><p>Now, let’s turn to the main event:</p>
    <div>
      <h3>The fight between MAYO and SNOVA</h3>
      <a href="#the-fight-between-mayo-versus-snova">
        
      </a>
    </div>
    <p>Looking at the roster today, MAYO and particularly SNOVA look great from a performance standpoint. Last year, SNOVA and MAYO were closer in performance, but they have since diverged quite a bit.</p><p><a href="https://pqmayo.org/"><u>MAYO</u></a> is designed by the cryptographer who broke <a href="https://eprint.iacr.org/2022/214.pdf"><u>Rainbow</u></a>. As a structured multivariate scheme, its security requires careful scrutiny, but its utility (assuming it is not broken) is very appealing. MAYO allows for a fine-grained tradeoff between signature and public key size. For the submission, to keep things simple, the authors proposed two concrete variants: MAYO<sub>one</sub>, with balanced signature (454 bytes) and public key (1.4kB) sizes, and MAYO<sub>two</sub>, which has signatures of 216 bytes while keeping the public key manageable at 4.3kB. Verification times are excellent, while signing times are somewhat slower than ECDSA, but far better than RSA. Combining both variants in the obvious way, we’re only looking at 4.3kB. These numbers are a bit higher than last year, as MAYO adjusted its parameters again slightly to account for newly discovered attacks.</p><p>Over the course of the competition, <a href="https://snova.pqclab.org/"><u>SNOVA</u></a> has been hit harder by attacks than MAYO. SNOVA’s response has been more aggressive: instead of just tweaking parameters, its designers have also made larger changes to the internals of the scheme, to counter the attacks and to get a performance improvement to boot. Combining SNOVA<sub>(37,17,16,2)</sub> and SNOVA<sub>(24,5,23,4)</sub> in the obvious way, we’re looking at adding an amazing 2.1kB.</p><p>We see a face-off shaping up between the risky but much smaller SNOVA, and the conservative but slower MAYO. Zooming out, both have very welcome performance, and both are too risky to deploy now. Ran’s new wedges attack shows that the field of multivariate cryptanalysis still holds surprises and needs more eyes and time. It’s too soon to pick a winner between SNOVA and MAYO: let them continue to compete. Even if they turn out to be secure, neither is likely to be standardized by 2029, which means we cannot rely on them for the initial migration to post-quantum authentication.</p><p>Stepping back, is the 15kB for ML-DSA-44 actually that bad?</p>
    <div>
      <h2>Do we really care about the extra bytes?</h2>
      <a href="#do-we-really-care-about-the-extra-bytes">
        
      </a>
    </div>
    <p>On average, around 18 million TLS connections are established with Cloudflare per second. Upgrading each to ML-DSA would take 2.1Tbps, which is 0.5% of our current total network capacity. No problem so far. The question is how these extra bytes affect performance.</p><p>It will take 15kB extra to swap in ML-DSA-44. That’s a lot compared to the typical handshake today, but it’s not a lot compared to the JavaScript and images served on many web pages. The key point is that the change we must make here affects every single TLS connection, whether it’s used for a bloated website or a time-critical API call. Also, it’s not just about waiting a bit longer. If you have spotty cellular reception, that extra data can make the difference between being able to load a page and having the connection time out. (As an aside, talking about bloat: many apps perform a <a href="https://thomwiggers.nl/publication/tls-on-android/tls-on-android.pdf"><u>surprisingly high number of TLS handshakes</u></a>).</p><p>Just like with key agreement, performance isn’t our only concern: we also want the connection to succeed in the first place. Back in 2021, <a href="https://blog.cloudflare.com/sizing-up-post-quantum-signatures/"><u>we ran an experiment</u></a> artificially enlarging the certificate chain to simulate larger post-quantum certificates. We summarize the results <a href="https://blog.cloudflare.com/pq-2024/#do-we-really-care-about-the-extra-bytes"><u>here</u></a>. One key takeaway is that some clients and middleboxes don’t like certificate chains larger than 10kB. This is problematic for a <a href="https://eprint.iacr.org/2018/063.pdf"><u>single-certificate migration</u></a> strategy, in which the server installs a single traditional certificate that contains a separate post-quantum certificate in a so-called non-critical extension. A client that does not support post-quantum certificates will ignore the extension. Installing such a single certificate would immediately break all clients with compatibility issues, making the approach a non-starter. On the performance side, there is also a steep drop at 10kB because of the initial congestion window.</p><p>Is 9kB too much? The slowdown in TLS handshake time would be approximately 15%. We felt that is workable, but far from ideal: such a slowdown is noticeable, and people might hold off on deploying post-quantum certificates until it’s too late.
</p><p>Chrome is more cautious and set 10% as their target for maximum TLS handshake time regression. They <a href="https://dadrian.io/blog/posts/pqc-signatures-2024/#fnref:3"><u>report</u></a> that deploying post-quantum key agreement has already incurred a 4% slowdown in TLS handshake time, for the extra 1.1kB from server-to-client and 1.2kB from client-to-server. That slowdown is proportionally larger than the 15% we found for 9kB, but that could be explained by slower upload speeds than download speeds. </p><p>There has been pushback against the focus on TLS handshake times. One argument is that session resumption alleviates the need for sending the certificates again. A second argument is that the data required to visit a typical website dwarfs the additional bytes for post-quantum certificates. One example is this <a href="https://www.amazon.science/publications/the-impact-of-data-heavy-post-quantum-tls-1-3-on-the-time-to-last-byte-of-real-world-connections"><u>2024 publication</u></a>, where Amazon researchers have simulated the impact of large post-quantum certificates on data-heavy TLS connections. They argue that typical connections transfer multiple requests and hundreds of kilobytes, and for those the TLS handshake slowdown disappears in the margin.</p><p>Are session resumption and hundreds of kilobytes over a connection typical though? We’d like to share what we see. We focus on QUIC connections, which are likely initiated by browsers or browser-like clients. Of all QUIC connections with Cloudflare that carry at least one HTTP request, 27% are <a href="https://blog.cloudflare.com/even-faster-connection-establishment-with-quic-0-rtt-resumption/"><u>resumptions</u></a>, meaning that key material from a previous TLS connection is reused, avoiding the need to transmit certificates. The median number of bytes transferred from server-to-client over a resumed QUIC connection is 4.4kB, while the average is 259kB. 
For non-resumed connections, the median is 8.1kB and the average is 583kB. This vast difference between median and average indicates that a small fraction of data-heavy connections skews the average. In fact, only 15.5% of all QUIC connections transfer more than 100kB.</p><p>The median certificate chain today (with compression) is <a href="https://datatracker.ietf.org/doc/html/draft-ietf-tls-cert-abridge-02#section-4"><u>3.2kB</u></a>. That means that on more than half of the non-resumed QUIC connections, almost 40% of all data transferred from server to client is just for the certificates, and this only gets worse with post-quantum algorithms. For the majority of QUIC connections, using ML-DSA-44 as a drop-in replacement for classical signatures would more than double the number of transmitted bytes over the lifetime of the connection.</p><p>It sounds quite bad if the vast majority of data transferred over a typical connection is just for the post-quantum certificates. Still, bytes transferred are only a proxy for what is actually important: the effect on metrics relevant to the end user, such as the browsing experience (e.g. <a href="https://web.dev/articles/optimize-lcp"><u>largest contentful paint</u></a>) and the amount of data those certificates take from a user’s monthly data cap. We will continue to investigate and get a better understanding of the impact.</p>
    <div>
      <h2>Way forward for post-quantum authentication</h2>
      <a href="#way-forward-for-post-quantum-authentication">
        
      </a>
    </div>
    <p>The path for migrating the Internet to post-quantum authentication is much less clear than with key agreement. Unless we can get performance much closer to today’s authentication, we expect the vast majority to keep post-quantum authentication disabled. Postponing enabling post-quantum authentication until Q-day draws near carries a real risk that we will not see the issues before it’s too late to fix them. That’s why it’s essential to make post-quantum authentication performant enough to be turned on by default.</p><p>We’re exploring various ideas to reduce the number of signatures, in increasing order of ambition: leaving out intermediates; KEMTLS; and Merkle Tree Certificates. We covered these in <a href="https://blog.cloudflare.com/pq-2024/#reducing-number-of-signatures"><u>detail last year</u></a>. Most progress has been made on the last one: <a href="https://datatracker.ietf.org/doc/draft-davidben-tls-merkle-tree-certs/"><u>Merkle Tree Certificates</u></a> (MTC). In this proposal, in the common case, all signatures except the handshake signature are replaced by a short &lt;800 byte Merkle tree proof. This could well allow for post-quantum authentication that’s actually faster than using traditional certificates today! Together with Chrome, we’re going to try it out by the end of the year: read about it in <a href="https://blog.cloudflare.com/bootstrap-mtc/"><u>this blog post</u></a>.</p>
    <div>
      <h3>Not just TLS, authentication, and key agreement</h3>
      <a href="#not-just-tls-authentication-and-key-agreement">
        
      </a>
    </div>
    <p>Despite its length, this blog post has only really touched upon migrating TLS. And even TLS we did not cover completely, as we have not discussed <a href="https://blog.cloudflare.com/announcing-encrypted-client-hello"><u>Encrypted ClientHello</u></a> (we didn’t forget about it). Although important, TLS is not the only protocol key to the security of the Internet. We want to briefly mention a few other challenges, but cannot go into detail. One particular challenge is DNSSEC, which is responsible for securing the resolution of domain names.</p><p>Although key agreement and signatures are the most widely used cryptographic primitives, over the last few years we have seen the adoption of more <a href="https://github.com/fancy-cryptography/fancy-cryptography"><u>esoteric cryptography</u></a> to serve more advanced use cases, such as unlinkable tokens with <a href="https://blog.cloudflare.com/privacy-pass-standard"><u>Privacy Pass</u></a> / <a href="https://blog.cloudflare.com/eliminating-captchas-on-iphones-and-macs-using-new-standard"><u>PAT</u></a>, anonymous credentials, and <a href="https://blog.cloudflare.com/inside-geo-key-manager-v2"><u>attribute-based encryption</u></a>, to name a few. For most of these advanced cryptographic schemes, there is no known practical post-quantum alternative yet, although to our delight there have been great advances in post-quantum anonymous credentials.</p>
    <div>
      <h2>What you can do today to stay safe against quantum attacks</h2>
      <a href="#what-you-can-do-today-to-stay-safe-against-quantum-attacks">
        
      </a>
    </div>
    <p>To summarize, there are two main post-quantum migrations to keep an eye on: key agreement, and certificates.</p><p>We recommend moving to <b>post-quantum key agreement</b> to counter store-now/decrypt-later attacks, which only requires a software update on both sides. That means that with the quick adoption (we’re <a href="https://developers.cloudflare.com/ssl/post-quantum-cryptography/pqc-support/"><u>keeping a list</u></a>) of X25519MLKEM768 across software and services, you might well be secure already against store-now/decrypt-later! On Cloudflare Radar you can <a href="https://radar.cloudflare.com/adoption-and-usage#browser-support"><u>check</u></a> whether your browser supports X25519MLKEM768; if you use Firefox, there is <a href="https://addons.mozilla.org/en-US/firefox/addon/pqspy/"><u>an extension</u></a> to check support of websites as you visit them; you can scan whether your website supports it <a href="https://pqscan.io/"><u>here</u></a>; and you can use Wireshark to check for it <a href="https://www.netmeister.org/blog/tls-hybrid-kex.html"><u>on the wire</u></a>.</p><p>Those are just spot checks. For a proper migration, you’ll need to figure out where cryptography is used. That’s a tall order, as most organizations have a hard time tracking all software, services, and external vendors they use in the first place. There will be systems that are difficult to upgrade or that have external dependencies, but in many cases the upgrade is simple. In fact, you’ll often spend a lot of time only to find out that a system has already been migrated.</p><p>As figuring out <i>what to do</i> is the bulk of the work, it’s perhaps tempting to split that out as a first milestone: creating a detailed inventory, the so-called <a href="https://github.com/IBM/CBOM"><u>cryptographic bill of materials</u></a> (CBOM). Don’t let an inventory become a goal on its own: we need to keep our eyes on the ball. 
Most cases are easy: if you’ve figured out what to do to migrate in one case, don’t wait and context switch, but just do it. That doesn’t mean it’ll be fast: this is a marathon, not a sprint, but you’ll be surprised how much ground can be covered by getting started.</p><p><b>Certificates.</b> At the time of writing this blog in October 2025, the final standards for post-quantum certificates are not set yet. Hopefully that won’t take too long to resolve. But there is much that you can do now to prepare for post-quantum certificates that you won’t regret. Keep software up-to-date. Automate certificate issuance. Ensure you can install multiple certificates.</p><p>In case you’re worried about protocol ossification, there is no reason to wait: the final post-quantum standards will not be very different from the draft. You can test with preliminary implementations (or large dummy certificates) today.</p><p>The post-quantum migration is quite unique. Typically, when cryptography is broken, the break is either sudden, or so gradual that it’s easy to ignore for a time. In both cases, the resulting migrations are rushed. With the quantum threat, we know for sure that we’ll need to replace a lot of cryptography, but we also have time. Instead of just a chore, we invite you to see this as an opportunity: we have to do maintenance now on many systems that rarely get touched. Instead of just hotfixes, now is the opportunity to rethink past choices.</p><p>At least, that is, if you start now. Good luck with your migration, and if you hit any issues, do reach out: ask-research@cloudflare.com</p> ]]></content:encoded>
            <category><![CDATA[Research]]></category>
            <category><![CDATA[Post-Quantum]]></category>
            <guid isPermaLink="false">7nIcJ4ZbXuMXHQ9tPi2P4f</guid>
            <dc:creator>Bas Westerbaan</dc:creator>
        </item>
        <item>
            <title><![CDATA[Keeping the Internet fast and secure: introducing Merkle Tree Certificates]]></title>
            <link>https://blog.cloudflare.com/bootstrap-mtc/</link>
            <pubDate>Tue, 28 Oct 2025 13:00:00 GMT</pubDate>
            <description><![CDATA[ Cloudflare is launching an experiment with Chrome to evaluate fast, scalable, and quantum-ready Merkle Tree Certificates, all without degrading performance or changing WebPKI trust relationships. ]]></description>
            <content:encoded><![CDATA[ <p>The world is in a race to build its first quantum computer capable of solving practical problems not feasible on even the largest conventional supercomputers. While the quantum computing paradigm promises many benefits, it also threatens the security of the Internet by breaking much of the cryptography we have come to rely on.</p><p>To mitigate this threat, Cloudflare is helping to migrate the Internet to Post-Quantum (PQ) cryptography. Today, <a href="https://radar.cloudflare.com/adoption-and-usage#post-quantum-encryption"><u>about 50%</u></a> of traffic to Cloudflare's edge network is protected against the most urgent threat: an attacker who can intercept and store encrypted traffic today and then decrypt it in the future with the help of a quantum computer. This is referred to as the <a href="https://en.wikipedia.org/wiki/Harvest_now,_decrypt_later"><u>harvest now, decrypt later</u></a> threat.</p><p>However, this is just one of the threats we need to address. A quantum computer can also be used to crack a server's <a href="https://www.cloudflare.com/application-services/products/ssl/">TLS certificate</a>, allowing an attacker to impersonate the server to unsuspecting clients. The good news is that we already have PQ algorithms we can use for quantum-safe authentication. The bad news is that adoption of these algorithms in TLS will require significant changes to one of the most complex and security-critical systems on the Internet: the Web Public-Key Infrastructure (WebPKI).</p><p>The central problem is the sheer size of these new algorithms: signatures for ML-DSA-44, one of the most performant PQ algorithms standardized by NIST, are 2,420 bytes long, compared to just 64 bytes for ECDSA-P256, the most popular non-PQ signature in use today; and its public keys are 1,312 bytes long, compared to just 64 bytes for ECDSA. That's a roughly 20-fold increase for public keys, and nearly a 40-fold increase for signatures. 
Worse yet, the average TLS handshake includes a number of public keys and signatures, adding up to 10s of kilobytes of overhead per handshake. This is enough to have a <a href="https://blog.cloudflare.com/another-look-at-pq-signatures/#how-many-added-bytes-are-too-many-for-tls"><u>noticeable impact</u></a> on the performance of TLS.</p><p>That makes drop-in PQ certificates a tough sell to enable today: they don’t bring any security benefit before Q-day — the day a cryptographically relevant quantum computer arrives — but they do degrade performance. We could sit and wait until Q-day is a year away, but that’s playing with fire. Migrations always take longer than expected, and by waiting we risk the security and privacy of the Internet, which is <a href="https://developers.cloudflare.com/ssl/edge-certificates/universal-ssl/"><u>dear to us</u></a>.</p><p>It's clear that we must find a way to make post-quantum certificates cheap enough to deploy today by default for everyone — not just those that can afford it. In this post, we'll introduce you to the plan we’ve brought together with industry partners to the <a href="https://datatracker.ietf.org/group/plants/about/"><u>IETF</u></a> to redesign the WebPKI in order to allow a smooth transition to PQ authentication with no performance impact (and perhaps a performance improvement!). We'll provide an overview of one concrete proposal, called <a href="https://datatracker.ietf.org/doc/draft-davidben-tls-merkle-tree-certs/"><u>Merkle Tree Certificates (MTCs)</u></a>, whose goal is to whittle down the number of public keys and signatures in the TLS handshake to the bare minimum required.</p><p>But talk is cheap. 
We <a href="https://blog.cloudflare.com/experiment-with-pq/"><u>know</u></a> <a href="https://blog.cloudflare.com/announcing-encrypted-client-hello/"><u>from</u></a> <a href="https://blog.cloudflare.com/why-tls-1-3-isnt-in-browsers-yet/"><u>experience</u></a> that, as with any change to the Internet, it's crucial to test early and often. <b>Today we're announcing our intent to deploy MTCs on an experimental basis in collaboration with Chrome Security.</b> In this post, we'll describe the scope of this experiment, what we hope to learn from it, and how we'll make sure it's done safely.</p>
    <div>
      <h2>The WebPKI today — an old system with many patches</h2>
      <a href="#the-webpki-today-an-old-system-with-many-patches">
        
      </a>
    </div>
    <p>Why does the TLS handshake have so many public keys and signatures?</p><p>Let's start with Cryptography 101. When your browser connects to a website, it asks the server to <b>authenticate</b> itself to make sure it's talking to the real server and not an impersonator. This is usually achieved with a cryptographic primitive known as a digital signature scheme (e.g., ECDSA or ML-DSA). In TLS, the server signs the messages exchanged between the client and server using its <b>secret key</b>, and the client verifies the signature using the server's <b>public key</b>. In this way, the server confirms to the client that they've had the same conversation, since only the server could have produced a valid signature.</p><p>If the client already knows the server's public key, then only <b>1 signature</b> is required to authenticate the server. In practice, however, this is not really an option. The web today is made up of around a billion TLS servers, so it would be unrealistic to provision every client with the public key of every server. What's more, the set of public keys will change over time as new servers come online and existing ones rotate their keys, so we would need some way of pushing these changes to clients.</p><p>This scaling problem is at the heart of the design of all PKIs.</p>
    <div>
      <h3>Trust is transitive</h3>
      <a href="#trust-is-transitive">
        
      </a>
    </div>
    <p>Instead of expecting the client to know the server's public key in advance, the server might just send its public key during the TLS handshake. But how does the client know that the public key actually belongs to the server? This is the job of a <b>certificate</b>.</p><p>A certificate binds a public key to the identity of the server — usually its DNS name, e.g., <code>cloudflareresearch.com</code>. The certificate is signed by a Certification Authority (CA) whose public key is known to the client. In addition to verifying the server's handshake signature, the client verifies the signature of this certificate. This establishes a chain of trust: by accepting the certificate, the client is trusting that the CA verified that the public key actually belongs to the server with that identity.</p><p>Clients are typically configured to trust many CAs and must be provisioned with a public key for each. Things are much more manageable here, however, since there are only hundreds of CAs to track instead of billions of servers. In addition, new certificates can be created without having to update clients.</p><p>These efficiencies come at a relatively low cost: for those counting at home, that's <b>+1</b> signature and <b>+1</b> public key, for a total of <b>2 signatures and 1 public key</b> per TLS handshake.</p><p>That's not the end of the story, however. As the WebPKI has evolved, these chains of trust have grown a bit longer. These days it's common for a chain to consist of two or more certificates rather than just one. This is because CAs sometimes need to rotate their keys, just as servers do. But before they can start using the new key, they must distribute the corresponding public key to clients. This takes time, since it requires billions of clients to update their trust stores. 
To bridge the gap, the CA will sometimes use the old key to issue a certificate for the new one and append this certificate to the end of the chain.</p><p>That's <b>+1</b> signature and <b>+1</b> public key, which brings us to <b>3 signatures and 2 public keys</b>. And we still have a little way to go.</p>
    <div>
      <h3>Trust but verify</h3>
      <a href="#trust-but-verify">
        
      </a>
    </div>
    <p>The main job of a CA is to verify that a server has control over the domain for which it’s requesting a certificate. This process has evolved over the years from a high-touch, CA-specific process to a standardized, <a href="https://datatracker.ietf.org/doc/html/rfc8555/"><u>mostly automated process</u></a> used for issuing most certificates on the web. (Not all CAs fully support automation, however.) This evolution is marked by a number of security incidents in which a certificate was <b>mis-issued</b> to a party other than the server, allowing that party to impersonate the server to any client that trusts the CA.</p><p>Automation helps, but <a href="https://en.wikipedia.org/wiki/DigiNotar#Issuance_of_fraudulent_certificates"><u>attacks</u></a> are still possible, and mistakes are almost inevitable. <a href="https://blog.cloudflare.com/unauthorized-issuance-of-certificates-for-1-1-1-1/"><u>Earlier this year</u></a>, several certificates for Cloudflare's encrypted 1.1.1.1 resolver were issued without our involvement or authorization. This apparently occurred by accident, but it nonetheless put users of 1.1.1.1 at risk. (The mis-issued certificates have since been revoked.)</p><p>Ensuring mis-issuance is detectable is the job of the Certificate Transparency (CT) ecosystem. The basic idea is that each certificate issued by a CA gets added to a public <b>log</b>. Servers can audit these logs for certificates issued in their name. If ever a certificate is issued that they didn't request, the server operator can prove the issuance happened, and the PKI ecosystem can take action to prevent the certificate from being trusted by clients.</p><p>Major browsers, including Chrome (and its derivatives), Safari, and Firefox, require certificates to be logged before they can be trusted: they will only accept the server's certificate if it appears in at least two logs the browser is configured to trust. 
This policy is easy to state, but tricky to implement in practice:</p><ol><li><p>Operating a CT log has historically been fairly expensive. Logs ingest billions of certificates over their lifetimes: when an incident happens, or even just under high load, it can take some time for a log to make a new entry available for auditors.</p></li><li><p>Clients can't really audit logs themselves, since this would expose their browsing history (i.e., the servers they wanted to connect to) to the log operators.</p></li></ol><p>The solution to both problems is to include a signature from the CT log along with the certificate. The signature is produced immediately in response to a request to log a certificate, and attests to the log's intent to include the certificate in the log within 24 hours.</p><p>Per browser policy, certificate transparency adds <b>+2</b> signatures to the TLS handshake, one for each log. This brings us to a total of <b>5 signatures and 2 public keys</b> in a typical handshake on the public web.</p>
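The running tally above can be written out explicitly. The rough sketch below totals the bytes spent on signatures and public keys in a typical handshake, using the ML-DSA-44 and ECDSA-P256 sizes quoted in the introduction (the exact composition varies per connection):

```python
# Tally of the signatures and public keys counted above for a typical
# WebPKI handshake, and what they cost in bytes with classical vs.
# post-quantum algorithms. Sizes are ML-DSA-44 and ECDSA-P256 figures;
# the composition is a typical case, not a guarantee.
handshake = [
    ("handshake signature",        "sig"),
    ("leaf certificate signature", "sig"),
    ("leaf public key",            "key"),
    ("intermediate signature",     "sig"),
    ("intermediate public key",    "key"),
    ("CT log signature #1 (SCT)",  "sig"),
    ("CT log signature #2 (SCT)",  "sig"),
]
sizes = {
    "classical":    {"sig": 64,    "key": 64},     # ECDSA-P256
    "post-quantum": {"sig": 2_420, "key": 1_312},  # ML-DSA-44
}
for name, table in sizes.items():
    total = sum(table[kind] for _, kind in handshake)
    print(f"{name}: {total:,} bytes of signatures and public keys")
```

With classical cryptography the whole tally is well under a kilobyte; the same composition with ML-DSA-44 lands in the tens of kilobytes, which is the overhead the rest of this post is trying to eliminate.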
    <div>
      <h3>The future WebPKI</h3>
      <a href="#the-future-webpki">
        
      </a>
    </div>
    <p>The WebPKI is a living, breathing, and highly distributed system. We've had to patch it a number of times over the years to keep it going, but on balance it has served our needs quite well — until now.</p><p>Previously, whenever we needed to update something in the WebPKI, we would tack on another signature. This strategy has worked because conventional cryptography is so cheap. But <b>5 signatures and 2 public keys </b>on average for each TLS handshake is simply too much to cope with for the larger PQ signatures that are coming.</p><p>The good news is that by moving what we already have around in clever ways, we can drastically reduce the number of signatures we need.</p>
    <div>
      <h3>Crash course on Merkle Tree Certificates</h3>
      <a href="#crash-course-on-merkle-tree-certificates">
        
      </a>
    </div>
    <p><a href="https://datatracker.ietf.org/doc/draft-davidben-tls-merkle-tree-certs/"><u>Merkle Tree Certificates (MTCs)</u></a> is a proposal for the next generation of the WebPKI that we are implementing and plan to deploy on an experimental basis. Its key features are as follows:</p><ol><li><p>All the information a client needs to validate a Merkle Tree Certificate can be disseminated out-of-band. If the client is sufficiently up-to-date, then the TLS handshake needs just <b>1 signature, 1 public key, and 1 Merkle tree inclusion proof</b>. This is quite small, even if we use post-quantum algorithms.</p></li><li><p>The MTC specification makes certificate transparency a first class feature of the PKI by having each CA run its own log of exactly the certificates they issue.</p></li></ol><p>Let's poke our head under the hood a little. Below we have an MTC generated by one of our internal tests. This would be transmitted from the server to the client in the TLS handshake:</p>
            <pre><code>-----BEGIN CERTIFICATE-----
MIICSzCCAUGgAwIBAgICAhMwDAYKKwYBBAGC2ksvADAcMRowGAYKKwYBBAGC2ksv
AQwKNDQzNjMuNDguMzAeFw0yNTEwMjExNTMzMjZaFw0yNTEwMjgxNTMzMjZaMCEx
HzAdBgNVBAMTFmNsb3VkZmxhcmVyZXNlYXJjaC5jb20wWTATBgcqhkjOPQIBBggq
hkjOPQMBBwNCAARw7eGWh7Qi7/vcqc2cXO8enqsbbdcRdHt2yDyhX5Q3RZnYgONc
JE8oRrW/hGDY/OuCWsROM5DHszZRDJJtv4gno2wwajAOBgNVHQ8BAf8EBAMCB4Aw
EwYDVR0lBAwwCgYIKwYBBQUHAwEwQwYDVR0RBDwwOoIWY2xvdWRmbGFyZXJlc2Vh
cmNoLmNvbYIgc3RhdGljLWN0LmNsb3VkZmxhcmVyZXNlYXJjaC5jb20wDAYKKwYB
BAGC2ksvAAOB9QAAAAAAAAACAAAAAAAAAAJYAOBEvgOlvWq38p45d0wWTPgG5eFV
wJMhxnmDPN1b5leJwHWzTOx1igtToMocBwwakt3HfKIjXYMO5CNDOK9DIKhmRDSV
h+or8A8WUrvqZ2ceiTZPkNQFVYlG8be2aITTVzGuK8N5MYaFnSTtzyWkXP2P9nYU
Vd1nLt/WjCUNUkjI4/75fOalMFKltcc6iaXB9ktble9wuJH8YQ9tFt456aBZSSs0
cXwqFtrHr973AZQQxGLR9QCHveii9N87NXknDvzMQ+dgWt/fBujTfuuzv3slQw80
mibA021dDCi8h1hYFQAA
-----END CERTIFICATE-----</code></pre>
            <p>Looks like your average PEM encoded certificate. Let's decode it and look at the parameters:</p>
            <pre><code>$ openssl x509 -in merkle-tree-cert.pem -noout -text
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number: 531 (0x213)
        Signature Algorithm: 1.3.6.1.4.1.44363.47.0
        Issuer: 1.3.6.1.4.1.44363.47.1=44363.48.3
        Validity
            Not Before: Oct 21 15:33:26 2025 GMT
            Not After : Oct 28 15:33:26 2025 GMT
        Subject: CN=cloudflareresearch.com
        Subject Public Key Info:
            Public Key Algorithm: id-ecPublicKey
                Public-Key: (256 bit)
                pub:
                    04:70:ed:e1:96:87:b4:22:ef:fb:dc:a9:cd:9c:5c:
                    ef:1e:9e:ab:1b:6d:d7:11:74:7b:76:c8:3c:a1:5f:
                    94:37:45:99:d8:80:e3:5c:24:4f:28:46:b5:bf:84:
                    60:d8:fc:eb:82:5a:c4:4e:33:90:c7:b3:36:51:0c:
                    92:6d:bf:88:27
                ASN1 OID: prime256v1
                NIST CURVE: P-256
        X509v3 extensions:
            X509v3 Key Usage: critical
                Digital Signature
            X509v3 Extended Key Usage:
                TLS Web Server Authentication
            X509v3 Subject Alternative Name:
                DNS:cloudflareresearch.com, DNS:static-ct.cloudflareresearch.com
    Signature Algorithm: 1.3.6.1.4.1.44363.47.0
    Signature Value:
        00:00:00:00:00:00:02:00:00:00:00:00:00:00:02:58:00:e0:
        44:be:03:a5:bd:6a:b7:f2:9e:39:77:4c:16:4c:f8:06:e5:e1:
        55:c0:93:21:c6:79:83:3c:dd:5b:e6:57:89:c0:75:b3:4c:ec:
        75:8a:0b:53:a0:ca:1c:07:0c:1a:92:dd:c7:7c:a2:23:5d:83:
        0e:e4:23:43:38:af:43:20:a8:66:44:34:95:87:ea:2b:f0:0f:
        16:52:bb:ea:67:67:1e:89:36:4f:90:d4:05:55:89:46:f1:b7:
        b6:68:84:d3:57:31:ae:2b:c3:79:31:86:85:9d:24:ed:cf:25:
        a4:5c:fd:8f:f6:76:14:55:dd:67:2e:df:d6:8c:25:0d:52:48:
        c8:e3:fe:f9:7c:e6:a5:30:52:a5:b5:c7:3a:89:a5:c1:f6:4b:
        5b:95:ef:70:b8:91:fc:61:0f:6d:16:de:39:e9:a0:59:49:2b:
        34:71:7c:2a:16:da:c7:af:de:f7:01:94:10:c4:62:d1:f5:00:
        87:bd:e8:a2:f4:df:3b:35:79:27:0e:fc:cc:43:e7:60:5a:df:
        df:06:e8:d3:7e:eb:b3:bf:7b:25:43:0f:34:9a:26:c0:d3:6d:
        5d:0c:28:bc:87:58:58:15:00:00</code></pre>
            <p>While some of the parameters probably look familiar, others will look unusual. On the familiar side, the subject and public key are exactly what we might expect: the DNS name is <code>cloudflareresearch.com</code> and the public key is for a familiar signature algorithm, ECDSA-P256. This algorithm is not PQ, of course — in the future we would put ML-DSA-44 there instead.</p><p>On the unusual side, OpenSSL appears to not recognize the signature algorithm of the issuer and just prints the raw OID and bytes of the signature. There's a good reason for this: the MTC does not have a signature in it at all! So what exactly are we looking at?</p><p>The trick to leave out signatures is that a Merkle Tree Certification Authority (MTCA) produces its <i>signatureless</i> certificates <i>in batches</i> rather than individually. In place of a signature, the certificate has an <b>inclusion proof</b> of the certificate in a batch of certificates signed by the MTCA.</p><p>To understand how inclusion proofs work, let's think about a slightly simplified version of the MTC specification. To issue a batch, the MTCA arranges the unsigned certificates into a data structure called a <b>Merkle tree</b> that looks like this:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4LGhISsS07kbpSgDkqx8p2/68e3b36deeca7f97139654d2c769df68/image3.png" />
          </figure><p>Each leaf of the tree corresponds to a certificate, and each inner node is equal to the hash of its children. To sign the batch, the MTCA uses its secret key to sign the head of the tree. The structure of the tree guarantees that each certificate in the batch was signed by the MTCA: if we tried to tweak the bits of any one of the certificates, the treehead would end up having a different value, which would cause the signature to fail.</p><p>An inclusion proof for a certificate consists of the hash of each sibling node along the path from the certificate to the treehead:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4UZZHkRwsBLWXRYeop4rXv/8598cde48c27c112bc4992889f3d5799/image1.gif" />
          </figure><p>Given a validated treehead, this sequence of hashes is sufficient to prove inclusion of the certificate in the tree. This means that, in order to validate an MTC, the client also needs to obtain the signed treehead from the MTCA.</p><p>This is the key to MTC's efficiency:</p><ol><li><p>Signed treeheads can be disseminated to clients out-of-band and validated offline. Each validated treehead can then be used to validate any certificate in the corresponding batch, eliminating the need to obtain a signature for each server certificate.</p></li><li><p>During the TLS handshake, the client tells the server which treeheads it has. If the server has a signatureless certificate covered by one of those treeheads, then it can use that certificate to authenticate itself. That's <b>1 signature, 1 public key, and 1 inclusion proof</b> per handshake for the server being authenticated.</p></li></ol><p>Now, that's the simplified version. MTC proper has some more bells and whistles. To start, it doesn’t create a separate Merkle tree for each batch, but grows a single large tree, which also improves transparency. As this tree grows, (sub)tree heads are periodically selected to be shipped to browsers; we call these <b>landmarks</b>. In the common case, browsers will be able to fetch the most recent landmarks, and servers can wait for batch issuance, but we need a fallback: MTC also supports certificates that can be issued immediately and don’t require landmarks to be validated, but these are not as small. A server would provision both types of Merkle tree certificates, so that the common case is fast, and the exceptional case is slow, but at least it’ll work.</p>
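The tree construction and inclusion-proof check described above are easy to demonstrate. The following toy Python sketch (using SHA-256; it ignores the real MTC wire format, encoding details, and the landmark machinery) builds a small tree over dummy certificates, extracts an inclusion proof, and verifies it against the tree head:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_tree(leaves):
    """Build a Merkle tree; returns the list of levels, leaves first."""
    levels = [[h(b"\x00" + leaf) for leaf in leaves]]  # hash the leaves
    while len(levels[-1]) > 1:
        prev = levels[-1]
        if len(prev) % 2:                  # duplicate the last node if odd
            prev = prev + [prev[-1]]
        levels.append([h(b"\x01" + prev[i] + prev[i + 1])
                       for i in range(0, len(prev), 2)])
    return levels

def inclusion_proof(levels, index):
    """Sibling hashes along the path from leaf `index` to the tree head."""
    proof = []
    for level in levels[:-1]:
        if len(level) % 2:
            level = level + [level[-1]]
        proof.append(level[index ^ 1])     # sibling of the current node
        index //= 2
    return proof

def verify(leaf, index, proof, tree_head):
    """Recompute the path to the head; equality proves inclusion."""
    node = h(b"\x00" + leaf)
    for sibling in proof:
        if index % 2 == 0:
            node = h(b"\x01" + node + sibling)
        else:
            node = h(b"\x01" + sibling + node)
        index //= 2
    return node == tree_head

certs = [f"cert for example-{i}.com".encode() for i in range(5)]
levels = build_tree(certs)
head = levels[-1][0]                       # the value the MTCA signs
proof = inclusion_proof(levels, 3)
print(verify(certs[3], 3, proof, head))    # True
```

The proof grows logarithmically with the batch: a batch of a million certificates needs only about 20 hashes, which is how MTC replaces per-certificate signatures with a few hundred bytes of hashes.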
    <div>
      <h2>Experimental deployment</h2>
      <a href="#experimental-deployment">
        
      </a>
    </div>
    <p>Ever since early designs for MTCs emerged, we’ve been eager to experiment with the idea. In line with the IETF principle of “<a href="https://www.ietf.org/runningcode/"><u>running code</u></a>”, it often takes implementing a protocol to work out kinks in the design. At the same time, we cannot risk the security of users. In this section, we describe our approach to experimenting with aspects of the Merkle Tree Certificates design <i>without</i> changing any trust relationships.</p><p>Let’s start with what we hope to learn. We have lots of questions whose answers can help to either validate the approach, or uncover pitfalls that require reshaping the protocol — in fact, an implementation of an early MTC draft by <a href="https://www.cs.ru.nl/masters-theses/2025/M_Pohl___Implementation_and_Analysis_of_Merkle_Tree_Certificates_for_Post-Quantum_Secure_Authentication_in_TLS.pdf"><u>Maximilian Pohl</u></a> and <a href="https://www.ietf.org/archive/id/draft-davidben-tls-merkle-tree-certs-07.html#name-acknowledgements"><u>Mia Celeste</u></a> did exactly this. We’d like to know:</p><p><b>What breaks?</b> Protocol ossification (the tendency of implementation bugs to make it harder to change a protocol) is an ever-present issue with deploying protocol changes. For TLS in particular, despite having built-in flexibility, time after time we’ve found that if that flexibility is not regularly used, there will be buggy implementations and middleboxes that break when they see things they don’t recognize. TLS 1.3 deployment <a href="https://blog.cloudflare.com/why-tls-1-3-isnt-in-browsers-yet/"><u>took years longer</u></a> than we hoped for this very reason. 
And more recently, the rollout of PQ key exchange in TLS caused the Client Hello to be split over multiple TCP packets, something that many middleboxes <a href="https://tldr.fail/"><u>weren't ready for</u></a>.</p><p><b>What is the performance impact?</b> In fact, we expect MTCs to <i>reduce</i> the size of the handshake, even compared to today's non-PQ certificates. They will also reduce CPU cost: ML-DSA signature verification is about as fast as ECDSA, and there will be far fewer signatures to verify. We therefore expect to see a <i>reduction in latency</i>. We would like to see if there is a measurable performance improvement.</p><p><b>What fraction of clients will stay up to date?</b> Getting the performance benefit of MTCs requires the clients and servers to be roughly in sync with one another. We expect MTCs to have fairly short lifetimes, a week or so. This means that if the client's latest landmark is older than a week, the server would have to fall back to a larger certificate. Knowing how often this fallback happens will help us tune the parameters of the protocol to make fallbacks less likely.</p><p>In order to answer these questions, we are implementing MTC support in our TLS stack and in our certificate issuance infrastructure. For their part, Chrome is implementing MTC support in their own TLS stack and will stand up infrastructure to disseminate landmarks to their users.</p><p>As we've done in past experiments, we plan to enable MTCs for a subset of our free customers with enough traffic that we will be able to get useful measurements. Chrome will control the experimental rollout: they can ramp up slowly, measuring as they go and rolling back if and when bugs are found.</p><p>Which leaves us with one last question: who will run the Merkle Tree CA?</p>
    <div>
      <h3>Bootstrapping trust from the existing WebPKI</h3>
      <a href="#bootstrapping-trust-from-the-existing-webpki">
        
      </a>
    </div>
    <p>Standing up a proper CA is no small task: it takes years to be trusted by major browsers. That’s why Cloudflare isn’t going to become a “real” CA for this experiment, and Chrome isn’t going to trust us directly.</p><p>Instead, to make progress in a reasonable timeframe, without sacrificing due diligence, we plan to "mock" the role of the MTCA. We will run an MTCA (on <a href="https://github.com/cloudflare/azul/"><u>Workers</u></a> based on our <a href="https://blog.cloudflare.com/azul-certificate-transparency-log/"><u>StaticCT logs</u></a>), but for each MTC we issue, we also publish an existing certificate from a trusted CA that agrees with it. We call this the <b>bootstrap certificate</b>. When Chrome’s infrastructure pulls updates from our MTCA log, they will also pull these bootstrap certificates and check whether they agree. Only if they do will they push the corresponding landmarks to Chrome clients. In other words, Cloudflare is effectively just “re-encoding” an existing certificate (with domain validation performed by a trusted CA) as an MTC, and Chrome is using certificate transparency to keep us honest.</p>
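The agreement check can be pictured with a small sketch. This is a hypothetical illustration of the idea only, not Chrome's actual verification pipeline; the field names and the exact notion of "agreement" here are assumptions:

```python
# Hypothetical sketch of the bootstrap "agreement" check described above.
# Field names and the notion of agreement are assumptions for illustration;
# this is not Chrome's actual pipeline.
def entries_agree(mtc_entry: dict, bootstrap: dict) -> bool:
    """An MTC entry agrees with its bootstrap certificate if it binds the
    same public key to (a subset of) the same DNS names, and its validity
    window lies inside that of the bootstrap certificate."""
    return (
        mtc_entry["public_key"] == bootstrap["public_key"]
        and set(mtc_entry["dns_names"]) <= set(bootstrap["dns_names"])
        and bootstrap["not_before"] <= mtc_entry["not_before"]
        and mtc_entry["not_after"] <= bootstrap["not_after"]
    )

def push_landmark(batch: list) -> bool:
    # A landmark reaches clients only if every entry in the batch agrees
    # with a certificate from an already-trusted CA.
    return all(entries_agree(e, e["bootstrap"]) for e in batch)

boot = {"public_key": "pk1",
        "dns_names": ["example.com", "www.example.com"],
        "not_before": 1, "not_after": 5}
entry = {"public_key": "pk1", "dns_names": ["example.com"],
         "not_before": 2, "not_after": 4, "bootstrap": boot}
print(push_landmark([entry]))  # True
```

The point of the design is that a bogus MTC entry cannot reach clients without a matching certificate from a trusted CA appearing alongside it in the log.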
    <div>
      <h2>Conclusion</h2>
      <a href="#conclusion">
        
      </a>
    </div>
    <p>With almost 50% of our traffic already protected by post-quantum encryption, we’re halfway to a fully post-quantum secure Internet. The second part of our journey, post-quantum certificates, is the harder part, though. A simple drop-in upgrade has a noticeable performance impact and no security benefit before Q-day. This means it’s a hard sell to enable today by default. But waiting would be playing with fire: migrations always take longer than expected. If we want to keep a ubiquitously private and secure Internet, we need a post-quantum solution that’s performant enough to be enabled by default <b>today</b>.</p><p>Merkle Tree Certificates (MTCs) solve this problem by reducing the number of signatures and public keys to the bare minimum while maintaining the WebPKI's essential properties. We plan to roll out MTCs to a fraction of free accounts by early next year. This does not affect any visitors that are not part of the Chrome experiment. For those that are, thanks to the bootstrap certificates, there is no impact on security.</p><p>We’re excited to keep the Internet fast <i>and</i> secure, and will report back soon on the results of this experiment: watch this space! MTC is evolving as we speak; if you want to get involved, please join the IETF <a href="https://mailman3.ietf.org/mailman3/lists/plants@ietf.org/"><u>PLANTS mailing list</u></a>.</p> ]]></content:encoded>
            <category><![CDATA[Post-Quantum]]></category>
            <category><![CDATA[Research]]></category>
            <category><![CDATA[Cryptography]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[TLS]]></category>
            <category><![CDATA[Chrome]]></category>
            <category><![CDATA[Google]]></category>
            <category><![CDATA[IETF]]></category>
            <category><![CDATA[Transparency]]></category>
            <category><![CDATA[Rust]]></category>
            <category><![CDATA[Open Source]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <guid isPermaLink="false">4jURWdZzyjdrcurJ4LlJ1z</guid>
            <dc:creator>Luke Valenta</dc:creator>
            <dc:creator>Christopher Patton</dc:creator>
            <dc:creator>Vânia Gonçalves</dc:creator>
            <dc:creator>Bas Westerbaan</dc:creator>
        </item>
        <item>
            <title><![CDATA[Automatically Secure: how we upgraded 6,000,000 domains by default to get ready for the Quantum Future]]></title>
            <link>https://blog.cloudflare.com/automatically-secure/</link>
            <pubDate>Wed, 24 Sep 2025 14:00:00 GMT</pubDate>
            <description><![CDATA[ After a year since we started enabling Automatic SSL/TLS, we want to talk about these results, why they matter, and how we’re preparing for the next leap in Internet security. ]]></description>
            <content:encoded><![CDATA[ <p>The Internet is in constant motion. Sites scale, traffic shifts, and attackers adapt. Security that worked yesterday may not be enough tomorrow. That’s why the technologies that protect the web — such as Transport Layer Security (TLS) and emerging post-quantum cryptography (PQC) — must also continue to evolve. We want to make sure that everyone benefits from this evolution automatically, so we enabled the strongest protections by default.</p><p>During <a href="https://blog.cloudflare.com/introducing-automatic-ssl-tls-securing-and-simplifying-origin-connectivity/"><u>Birthday Week 2024</u></a>, we announced Automatic SSL/TLS: a service that scans origin server configurations of domains behind Cloudflare, and automatically upgrades them to the most secure encryption mode they support. In the past year, <b>this system has quietly strengthened security for more than 6 million domains </b>— ensuring Cloudflare can always connect to origin servers over the safest possible channel, without customers lifting a finger.</p><p>Now, a year after we started enabling Automatic SSL/TLS, we want to talk about these results, why they matter, and how we’re preparing for the next leap in Internet security.</p>
    <div>
      <h2>The Basics: TLS protocol</h2>
      <a href="#the-basics-tls-protocol">
        
      </a>
    </div>
    <p>Before diving in, let’s review the basics of Transport Layer Security (<a href="https://www.cloudflare.com/learning/ssl/transport-layer-security-tls/"><u>TLS</u></a>). The protocol allows two strangers (like a client and server) to communicate securely.</p><p>Every secure web session begins with a TLS handshake. Before a single byte of your data moves across the Internet, servers and clients need to agree on a shared secret key that will protect the confidentiality and integrity of your data. The key agreement handshake kicks off with a TLS <i>ClientHello</i> message. This message is the browser/client announcing, “Here’s who I want to talk to (via <a href="https://www.cloudflare.com/learning/ssl/what-is-sni/"><u>SNI</u></a>), and here are the key agreement methods I understand.” The server then proves who it is with its own credentials in the form of a certificate, and together they establish a shared secret key that will protect everything that follows. </p><p>TLS 1.3 added a clever shortcut: instead of waiting to be told which method to use for the shared key agreement, the browser can guess what key agreement the server supports, and include one or more <a href="https://blog.cloudflare.com/rfc-8446-aka-tls-1-3/"><u>keyshares</u></a> right away. If the guess is correct, the handshake skips an extra round trip and the secure connection is established more quickly. If the guess is wrong, the server responds with a <i>HelloRetryRequest</i> (HRR), telling the browser which key agreement method to retry with. This speculative guessing is a major reason TLS 1.3 is so much faster than TLS 1.2.</p>
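The speculative keyshare logic above can be modeled in a few lines. This is a toy sketch, not a TLS implementation: group names are plain strings, and a real stack carries them in ClientHello extensions.

```python
# Toy model of the TLS 1.3 keyshare guess: the client speculatively sends
# one or more keyshares; if the server accepts one, the handshake finishes
# in a single round trip, otherwise a HelloRetryRequest costs an extra one.

def handshake_round_trips(client_keyshares, server_supported):
    """Return 1 if one of the client's speculative keyshares is accepted,
    or 2 if the server must send a HelloRetryRequest first."""
    for group in client_keyshares:
        if group in server_supported:
            return 1
    return 2  # HelloRetryRequest: retry with a group the server names

assert handshake_round_trips(["X25519"], {"X25519", "P-256"}) == 1
assert handshake_round_trips(["P-384"], {"X25519"}) == 2
```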
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/W2t0sZjiliwZ0FGfFFG6k/113c44b54da1c0355d5bf76fba3080fa/1-2.png" />
          </figure><p>Once both sides agree, the chosen keyshare is used to create a shared secret that encrypts the messages they exchange and allows only the right parties to decrypt them.</p>
    <div>
      <h3>The nitty-gritty details of key agreement</h3>
      <a href="#the-nitty-gritty-details-of-key-agreement">
        
      </a>
    </div>
    <p>Up until recently, most of these handshakes have relied on <a href="https://blog.cloudflare.com/a-relatively-easy-to-understand-primer-on-elliptic-curve-cryptography/"><u>elliptic curve cryptography</u></a> (ECC) using a curve known as X25519. But looming on the horizon are quantum computers, which could one day break ECC algorithms like X25519 and others. To prepare, the industry is shifting toward post-quantum key agreement with <a href="https://nvlpubs.nist.gov/nistpubs/fips/nist.fips.203.pdf"><u>ML-KEM</u></a>, deployed in a hybrid mode (<a href="https://datatracker.ietf.org/doc/draft-ietf-tls-ecdhe-mlkem/"><u>X25519 + ML-KEM</u></a>). This ensures that even if quantum machines arrive, <a href="https://en.wikipedia.org/wiki/Harvest_now,_decrypt_later"><u>harvested traffic today</u></a> can’t be decrypted tomorrow. X25519 + ML-KEM is <a href="https://radar.cloudflare.com/adoption-and-usage#post-quantum-encryption-adoption"><u>steadily rising to become the most popular</u></a> key agreement for connections to Cloudflare.</p><p>The TLS handshake model is the foundation for how we encrypt web communications today. The history of TLS is really the story of <i>iteration under pressure</i>. It’s a protocol that had to keep evolving, so trust on the web could keep pace with how Internet traffic has changed. It’s also what makes technologies like <b>Cloudflare’s Automatic SSL/TLS</b> possible, by abstracting decades of protocol battles and crypto engineering into a single click, so customer websites can be secured by default without requiring every operator to be a cryptography expert.</p>
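The hybrid idea can be illustrated with a toy key derivation: the final secret depends on both the classical and the post-quantum shared secrets, so an attacker must break both schemes. In the actual X25519MLKEM768 named group the two raw secrets are concatenated (the ML-KEM secret first, per the draft) and fed into the TLS key schedule; the SHA-256 below merely stands in for that schedule.

```python
import hashlib

def hybrid_secret(mlkem_ss: bytes, x25519_ss: bytes) -> bytes:
    """Toy stand-in for the TLS key schedule: derive one secret from both
    inputs, so compromising only one of the two schemes is not enough."""
    return hashlib.sha256(mlkem_ss + x25519_ss).digest()

a = hybrid_secret(b"\x01" * 32, b"\x02" * 32)
b = hybrid_secret(b"\x03" * 32, b"\x02" * 32)  # different ML-KEM secret
assert a != b and len(a) == 32
```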
    <div>
      <h2>History Lesson: Stumbles and Standards</h2>
      <a href="#history-lesson-stumbles-and-standards">
        
      </a>
    </div>
    <p>Early versions of TLS (then called SSL) in the 1990s suffered from weak keys, limited protection against attacks like <a href="https://en.wikipedia.org/wiki/Man-in-the-middle_attack"><u>man-in-the-middle</u></a>, and low adoption on the Internet. To stabilize things, the <a href="https://www.ietf.org/"><u>IETF</u></a> stepped in and released <a href="https://www.ietf.org/rfc/rfc2246.txt"><u>TLS 1.0</u></a>, followed by TLS <a href="https://datatracker.ietf.org/doc/html/rfc4346"><u>1.1</u></a> and <a href="https://datatracker.ietf.org/doc/html/rfc5246"><u>1.2</u></a> through the 2000s. These versions added stronger ciphers and patched new attack vectors, but years of fixes and extensions left the protocol bloated and hard to evolve.</p><p>The early 2010s marked a turning point. After the <a href="https://iapp.org/news/a/the-snowden-disclosures-10-years-on"><u>Snowden disclosures</u></a>, the Internet doubled down on encryption by default. Initiatives like <a href="https://en.wikipedia.org/wiki/Let%27s_Encrypt"><u>Let’s Encrypt</u></a>, the mass adoption of <a href="https://en.wikipedia.org/wiki/HTTPS"><u>HTTPS</u></a>, and Cloudflare’s own commitment to offer <a href="https://www.cloudflare.com/application-services/products/ssl/"><u>SSL/TLS for free</u></a> turned encryption from optional, expensive, and complex into an easy baseline requirement for a safer Internet.</p><p>All of this momentum led to <a href="https://datatracker.ietf.org/doc/html/rfc8446"><u>TLS 1.3</u></a> (2018), which cut away legacy baggage, locked in modern cipher suites, and made encrypted connections nearly as fast as the underlying transport protocols like TCP—and sometimes even faster with <a href="https://en.wikipedia.org/wiki/QUIC"><u>QUIC</u></a>.</p>
    <div>
      <h2>The CDN Twist</h2>
      <a href="#the-cdn-twist">
        
      </a>
    </div>
    <p>As Content Delivery Networks (CDNs) rose to prominence, they reshaped how TLS was deployed. Instead of a browser talking directly to a distant server hosting content (what Cloudflare calls an origin), it now spoke to the nearest edge data center, which may in turn speak to an origin server on the client’s behalf.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7CTywdNaDxUXcGHVg5i1MP/975f9b0a74b2b5c5fb59ecb64d3268bb/2.png" />
          </figure><p>This created <b>two distinct TLS layers</b>:</p><ul><li><p><b>Edge ↔ Browser TLS:</b> The front door, built to quickly take on new improvements in security and performance. Edges and browsers adopt modern protocols (TLS 1.3, QUIC, session resumption) to cut down on latency.</p></li><li><p><b>Edge ↔ Origin TLS:</b> The backhaul, which must be more flexible. Origins might be older, more poorly maintained, run legacy TLS stacks, or require custom certificate handling.</p></li></ul><p>In practice, CDNs became <i>translators</i>: modernizing encryption at the edge while still bridging to legacy origins. It’s why you can have a blazing-fast TLS 1.3 session from your phone, even if the origin server behind the CDN hasn’t been upgraded in years. </p><p>This is where Automatic SSL/TLS sits in the story of how we secure Internet communications. </p>
    <div>
      <h2>Automatic SSL/TLS </h2>
      <a href="#automatic-ssl-tls">
        
      </a>
    </div>
    <p>Automatic SSL/TLS grew out of Cloudflare’s mission to ensure the web was as encrypted as possible. While we had initially spent an incredibly long time developing secure connections for the “front door” (from browsers to Cloudflare’s edge) with <a href="https://blog.cloudflare.com/introducing-universal-ssl/"><u>Universal SSL</u></a>, we knew that the “back door” (from Cloudflare’s edge to origin servers) would be slower and harder to upgrade. </p><p>One option we offered was <a href="https://developers.cloudflare.com/cloudflare-one/connections/connect-networks/"><u>Cloudflare Tunnel</u></a>, where a lightweight agent runs near the origin server and tunnels traffic securely back to Cloudflare. This approach ensures the connection always uses modern encryption, without requiring changes on the origin itself.</p><p>But not every customer uses Tunnel. Many connect origins directly to Cloudflare’s edge, where encryption depends on the origin server’s configuration. Traditionally this meant customers had to either manually select an <a href="https://developers.cloudflare.com/ssl/origin-configuration/ssl-modes/"><u>encryption mode</u></a> that worked for their origin server or rely on the default chosen by Cloudflare. </p><p>To improve the experience of choosing an encryption mode, we introduced our <a href="https://blog.cloudflare.com/ssl-tls-recommender/"><u>SSL/TLS Recommender</u></a> in 2021.</p><p>The Recommender scanned customer origin servers and then provided recommendations for their most secure encryption mode. 
For example, if the Recommender detected that an origin server was using a certificate signed by a trusted Certificate Authority (CA) such as Let’s Encrypt, rather than a self-signed certificate, it would recommend upgrading from <a href="https://developers.cloudflare.com/ssl/origin-configuration/ssl-modes/full/"><b><u>Full</u></b><u> encryption mode</u></a> to <a href="https://developers.cloudflare.com/ssl/origin-configuration/ssl-modes/full-strict/"><b><u>Full (Strict)</u></b><u> encryption mode</u></a>.</p><p>Based on how the origin responded, Recommender would tell customers if they could improve their SSL/TLS encryption mode to be more secure. The following encryption modes represent what the SSL/TLS Recommender could recommend to customers based on their origin responses: </p><table><tr><td><p><b>SSL/TLS mode</b></p></td><td><p><b>HTTP from visitor</b></p></td><td><p><b>HTTPS from visitor</b></p></td></tr><tr><td><p><a href="https://developers.cloudflare.com/ssl/origin-configuration/ssl-modes/off/"><u>Off</u></a></p></td><td><p>HTTP to Origin</p></td><td><p>HTTP to Origin</p></td></tr><tr><td><p><a href="https://developers.cloudflare.com/ssl/origin-configuration/ssl-modes/flexible/"><u>Flexible</u></a></p></td><td><p>HTTP to Origin</p></td><td><p>HTTP to Origin</p></td></tr><tr><td><p><a href="https://developers.cloudflare.com/ssl/origin-configuration/ssl-modes/full/"><u>Full</u></a></p></td><td><p>HTTP to Origin</p></td><td><p>HTTPS to Origin without certificate validation check</p></td></tr><tr><td><p><a href="https://developers.cloudflare.com/ssl/origin-configuration/ssl-modes/full-strict/"><u>Full (strict)</u></a></p></td><td><p>HTTP to Origin</p></td><td><p>HTTPS to Origin with certificate validation check</p></td></tr><tr><td><p><a href="https://developers.cloudflare.com/ssl/origin-configuration/ssl-modes/ssl-only-origin-pull/"><u>Strict (SSL-only origin pull)</u></a></p></td><td><p>HTTPS to Origin with certificate validation 
check</p></td><td><p>HTTPS to Origin with certificate validation check</p></td></tr></table><p>However, in the three years after launching our Recommender we discovered something troubling: of the over two million domains using Recommender, <b>only 30% of the recommendations that the system provided were followed</b>. A significant number of users would not complete the next step of pushing the button to inform Cloudflare that we could communicate with their origin over a more secure setting. </p><p>We were seeing sub-optimal settings that our customers could upgrade from without risk of breaking their site, but for various reasons, our users did not follow through with the recommendations. So we pushed forward by building a system that worked with Recommender and actioned the recommendations by default. </p>
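The mode table above can be restated as a small lookup, handy for reasoning about what each mode means for the origin connection. The mode keys are informal labels for this sketch, not Cloudflare API values.

```python
# For each SSL/TLS mode: which scheme Cloudflare uses to reach the origin
# for HTTP vs. HTTPS visitor requests, and whether the origin certificate
# is validated. Mirrors the table in the post; keys are informal labels.
ORIGIN_BEHAVIOR = {
    #  mode:        (http visitor, https visitor, validate cert)
    "off":          ("http",  "http",  False),
    "flexible":     ("http",  "http",  False),
    "full":         ("http",  "https", False),
    "full_strict":  ("http",  "https", True),
    "strict":       ("https", "https", True),
}

def origin_scheme(mode: str, visitor_scheme: str) -> str:
    """Return the scheme used toward the origin for a given visitor scheme."""
    http_s, https_s, _validate = ORIGIN_BEHAVIOR[mode]
    return http_s if visitor_scheme == "http" else https_s

assert origin_scheme("flexible", "https") == "http"   # why Flexible is weak
assert origin_scheme("full_strict", "https") == "https"
```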
    <div>
      <h2>How does Automatic SSL/TLS work? </h2>
      <a href="#how-does-automatic-ssl-tls-work">
        
      </a>
    </div>
    <p>Automatic SSL/TLS works by crawling websites, looking for content over both HTTP and HTTPS, then comparing the results for compatibility. It also performs checks against the TLS certificate presented by the origin and looks at the type of content that is served to ensure it matches. If the downloaded content matches, Automatic SSL/TLS elevates the encryption level for the domain to the compatible and stronger mode, without risk of breaking the site.</p>
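A minimal sketch of that compatibility check, with origin responses stubbed as dictionaries. A real scan fetches the content over both schemes and inspects the certificate chain; the fields and the exact comparison here are illustrative assumptions.

```python
import hashlib

def can_upgrade(http_resp: dict, https_resp: dict) -> bool:
    """Upgrade (toward Full (Strict)) only if the HTTPS certificate checks
    out and the origin serves equivalent content over both schemes.
    A Full upgrade (no certificate validation) would skip cert_ok."""
    return (
        https_resp["cert_ok"]
        and http_resp["content_type"] == https_resp["content_type"]
        and hashlib.sha256(http_resp["body"]).digest()
            == hashlib.sha256(https_resp["body"]).digest()
    )

http_resp = {"content_type": "text/html", "body": b"<h1>hi</h1>"}
https_resp = {"cert_ok": True, "content_type": "text/html",
              "body": b"<h1>hi</h1>"}

assert can_upgrade(http_resp, https_resp)
assert not can_upgrade(http_resp, dict(https_resp, body=b"<h1>other</h1>"))
```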
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/49AaKdddEOgWXk1Oxlg2Qp/be44b863e2f4c797fa58c8b81f93f51a/3.png" />
          </figure><p>More specifically, these are the steps that Automatic SSL/TLS takes to upgrade a domain’s security: </p><ol><li><p>Each domain is scheduled for a scan <b>once per month</b> (until it reaches the maximum supported encryption mode).</p></li><li><p>The scan evaluates the current <a href="https://developers.cloudflare.com/ssl/origin-configuration/ssl-modes/"><u>encryption mode</u></a> for the domain. If it’s lower than what the Recommender thinks the domain can support based on the <a href="https://blog.cloudflare.com/introducing-automatic-ssl-tls-securing-and-simplifying-origin-connectivity/#:~:text=When%20the%20Recommender,recommendation%20is%20followed."><u>results</u></a> of its probes and content scans, the system begins a gradual upgrade.</p></li><li><p>Automatic SSL/TLS begins to upgrade the domain by connecting with origins over the more secure mode starting with just 1% of its traffic.</p></li><li><p>If connections to the origin succeed, the result is logged as successful.</p><ol><li><p>If they fail, the system records the failure to Cloudflare’s control plane and aborts the upgrade. 
Traffic is immediately downgraded back to the previous SSL/TLS setting to ensure seamless operation.</p></li></ol></li><li><p>If no issues are found, the new SSL/TLS encryption mode is applied to traffic in 10% increments until 100% of traffic uses the recommended mode.</p></li><li><p>Once 100% of traffic has been successfully upgraded with no TLS-related errors, the domain’s SSL/TLS setting is permanently updated.</p></li><li><p><b>Special handling for Flexible → Full/Strict:</b> These upgrades are more cautious because customers’ <a href="https://developers.cloudflare.com/cache/how-to/cache-keys/"><u>cache keys</u></a> are changed (from <code>http</code> to <code>https</code> origin scheme).</p><ol><li><p>In this situation, traffic ramps up from 1% to 10% in 1% increments, allowing customers’ cache to warm up.</p></li><li><p>After 10%, the system resumes the standard 10% increments until 100%.</p></li></ol></li></ol><p>We know that transparency and visibility are critical, especially when automated systems make changes. To keep customers informed, Automatic SSL/TLS sends a weekly digest to account <a href="https://developers.cloudflare.com/fundamentals/manage-members/roles/"><u>Super Administrators</u></a> whenever updates are made to domain encryption modes. This way, you always have visibility into what changed and when.  </p><p>In short, Automatic SSL/TLS automates what used to be trial and error: finding the strongest SSL/TLS mode your site can support while keeping everything working smoothly.</p>
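The ramp described in the steps above can be sketched as a schedule of traffic percentages. The percentages come from the post; the exact step boundaries are our illustrative reading.

```python
def ramp_schedule(from_flexible: bool) -> list[int]:
    """Percent-of-traffic steps for a gradual mode upgrade: start at 1%,
    then 10% increments to 100%. Flexible -> Full/Strict upgrades warm the
    cache with 1% steps up to 10% first (cache keys change scheme)."""
    steps = [1]
    if from_flexible:
        steps += list(range(2, 11))       # 2%..10%: cache warm-up
    else:
        steps.append(10)
    steps += list(range(20, 101, 10))     # 20%..100% in 10% increments
    return steps

assert ramp_schedule(False) == [1, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100]
assert ramp_schedule(True)[:10] == [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
```

On any origin failure during the ramp, the system would abort and fall back to the previous mode, as described above.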
    <div>
      <h2>How are we doing so far?  </h2>
      <a href="#how-are-we-doing-so-far">
        
      </a>
    </div>
    <p>So far we have onboarded <b>all Free, Pro, and Business domains to use Automatic SSL/TLS</b>. We also have enabled this for <b>all new domains</b> that will onboard onto Cloudflare regardless of plan type. Soon, we will start onboarding Enterprise customers as well. If you already have an Enterprise domain and want to try out Automatic SSL/TLS we encourage you to <a href="https://developers.cloudflare.com/ssl/origin-configuration/ssl-modes/#automatic-ssltls-default"><u>enable it in the SSL/TLS section</u></a> of the dashboard or via the API. </p><p>As of the publication of this blog, we’ve upgraded over<b> 6 million domains</b> to be more secure without the website operators needing to manually configure anything on Cloudflare. </p><table><tr><td><p><b>Previous Encryption Mode</b></p></td><td><p><b>Upgraded Encryption Mode</b></p></td><td><p><b>Number of domains</b></p></td></tr><tr><td><p>Flexible</p></td><td><p>Full</p></td><td><p>~ 2,200,000</p></td></tr><tr><td><p>Flexible</p></td><td><p>Full (strict)</p></td><td><p>~ 2,000,000</p></td></tr><tr><td><p>Full </p></td><td><p>Full (strict)</p></td><td><p>~ 1,800,000</p></td></tr><tr><td><p>Off</p></td><td><p>Full</p></td><td><p>~ 7,000</p></td></tr><tr><td><p>Off</p></td><td><p>Full (strict)</p></td><td><p>~ 5,000</p></td></tr></table><p>We’re most excited about the over 4 million domains that moved from <a href="https://developers.cloudflare.com/ssl/origin-configuration/ssl-modes/flexible/"><u>Flexible</u></a> or <a href="https://developers.cloudflare.com/ssl/origin-configuration/ssl-modes/off/"><u>Off</u></a>, which use HTTP to origin servers, to <a href="https://developers.cloudflare.com/ssl/origin-configuration/ssl-modes/full/"><u>Full</u></a> or <a href="https://developers.cloudflare.com/ssl/origin-configuration/ssl-modes/full-strict/"><u>Strict</u></a>, which use HTTPS. 
</p><p>If you have a reason to use a particular encryption mode (e.g., on a test domain that isn’t production ready) you can always disable Automatic SSL/TLS and manually set the encryption mode that works best for your use case.</p><p>Today, SSL/TLS mode works on a domain-wide level, which can feel blunt. This means that one suboptimal subdomain can keep the entire domain in a less secure TLS setting, to ensure availability. Our long-term goal is to make these controls more precise, so that Automatic SSL/TLS and encryption modes can optimize security per origin or subdomain, rather than treating every hostname the same.</p>
    <div>
      <h2>Impact on origin-facing connections</h2>
      <a href="#impact-on-origin-facing-connections">
        
      </a>
    </div>
    <p>Since we began onboarding domains to <b>Automatic SSL/TLS</b> in late 2024 and early 2025, we’ve been able to measure how origin connections across our network are shifting toward stronger security. Looking at the ratios across all origin requests, the trends are clear:</p><ul><li><p><b>Encryption is rising.</b> Plaintext connections are steadily declining, a reflection of Automatic SSL/TLS helping millions of domains move to HTTPS by default. We’ve seen <b>a correlated 7-8% reduction in plaintext origin-bound connections.</b> Still, some origins remain on outdated configurations, and these should be upgraded to keep pace with modern security expectations.</p></li><li><p><b>TLS 1.3 is surging.</b> Since late 2024, TLS 1.3 adoption has climbed sharply, now making up the majority of encrypted origin traffic (almost 60%). While Automatic SSL/TLS doesn’t control which TLS version an origin supports, this shift is an encouraging sign for both performance and security.</p></li><li><p><b>Older versions are fading.</b> Month after month, TLS 1.2 continues to shrink, while TLS 1.0 and 1.1 are now so rare they barely register.</p></li></ul><p>The decline in plaintext connections is encouraging, but it also highlights a long tail of servers still relying on outdated packages or configurations. Sites like <a href="https://www.ssllabs.com/ssltest/"><u>SSL Labs</u></a> can be used, for instance, to check a server’s TLS configuration. However, simply copy-pasting settings to achieve a high rating can be risky, so we encourage customers to review their origin TLS configurations carefully. In addition, <a href="https://developers.cloudflare.com/ssl/origin-configuration/origin-ca/"><u>Cloudflare origin CA</u></a> or <a href="https://developers.cloudflare.com/cloudflare-one/connections/connect-networks/"><u>Cloudflare Tunnel</u></a> can help provide guidance for upgrading origin security.</p>
    <div>
      <h3>Upgraded domain results</h3>
      <a href="#upgraded-domain-results">
        
      </a>
    </div>
    <p>Instead of focusing on the entire network of origin-facing connections from Cloudflare, we’re now going to drill into specific changes that we’ve seen from domains that have been upgraded by <b>Automatic SSL/TLS</b>. </p><p>By January 2025, most domains had been enrolled in Automatic SSL/TLS, and the results were dramatic: a near 180-degree shift from plaintext to encrypted communication with origins. After that milestone, traffic patterns leveled off into a steady plateau, reflecting a more stable baseline of secure connections across the network. There is some drop in encrypted traffic which may represent some of the originally upgraded domains manually turning off Automatic SSL/TLS.</p><p>But the story doesn’t end there. In the past two months (July and August 2025), we’ve observed another noticeable uptick in encrypted origin traffic. This likely reflects customers upgrading outdated origin packages and enabling stronger TLS support—evidence that Automatic SSL/TLS not only raised the floor on encryption but continues nudging the long tail of domains toward better security.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6nJe12swMSMXBQsgzEhXtq/78debf8e0c3efbaf66bce8cf6e623c80/4.png" />
          </figure><p>To further explore the “encrypted” line above, we wanted to see the split between TLS 1.2 and 1.3. We originally intended to include all TLS versions we support, but the levels of 1.0 and 1.1 were so small that they skewed the graph, so we removed them. We see a noticeable rise in support for both TLS 1.2 and 1.3 between Cloudflare and origin servers. Also interesting: while TLS 1.2 is decreasing network-wide, it shows a general increase among the domains that were automatically upgraded, potentially signifying origin TLS stacks that could be updated further.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/BoRlq4irKWuvuXs5E4e8l/3971165f5029a03ae64dac79235a8671/5.png" />
          </figure><p>Finally, for Full (Strict) mode, we wanted to investigate the number of successful certificate validations we performed. This line shows a dramatic increase of approximately 40% in successful certificate validations performed for customers upgraded by Automatic SSL/TLS. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5nNyiMNQ4xtOubbrhnDuRY/af16c0792a73de71fa0176e6c1cfeb0b/6.png" />
          </figure><p>We’ve seen a largely successful rollout of Automatic SSL/TLS so far, with millions of domains upgraded to stronger encryption by default. Automatic SSL/TLS has improved origin-facing security, safely pushing connections to stronger modes whenever possible without risking site breakage. Looking ahead, we’ll continue to expand this capability to more customer use cases as we help to build a more encrypted Internet.</p>
    <div>
      <h2>What will we build next for Automatic SSL/TLS? </h2>
      <a href="#what-will-we-build-next-for-automatic-ssl-tls">
        
      </a>
    </div>
    <p>We’re expanding Automatic SSL/TLS with new features that give customers more visibility and control, while keeping the system safe by default. First, we’re building an <b>ad-hoc scan</b> option that lets you rescan your origin earlier than the standard monthly cadence. This means if you’ve just rotated certificates, upgraded your origin’s TLS configuration, or otherwise changed how your server handles encryption, you won’t need to wait for the next scheduled pass—Cloudflare will be able to re-evaluate and move you to a stronger mode right away.</p><p>In addition, we’re working on <b>error surfacing</b> that will highlight origin connection problems directly in the dashboard and provide actionable guidance for remediation. Instead of discovering after the fact that an upgrade failed, or a change on the origin resulted in a less secure setting than what was set previously, customers will be able to see where the issue lies and how to fix it. </p><p>Finally, for <b>newly onboarded domains</b>, we plan to add clearer guidance on when to finish configuring the origin before Cloudflare runs its first scan and sets an encryption mode. Together, these improvements are designed to reduce surprises, give customers more agency, and ensure smoother upgrades. We expect all three features to roll out by June 2026.</p>
    <div>
      <h2>Post Quantum Era</h2>
      <a href="#post-quantum-era">
        
      </a>
    </div>
    <p>Looking ahead, quantum computers introduce a serious risk: data encrypted today can be harvested and decrypted years later once quantum attacks become practical. To counter this <a href="https://en.wikipedia.org/wiki/Harvest_now,_decrypt_later"><u>harvest-now, decrypt-later</u></a> threat, the industry is moving towards post-quantum cryptography (PQC)—algorithms designed to withstand quantum attacks. We have extensively written on this subject <a href="https://blog.cloudflare.com/tag/post-quantum/"><u>in our previous blogs</u></a>.</p><p>In August 2024, NIST finalized its PQC standards: <a href="https://nvlpubs.nist.gov/nistpubs/fips/nist.fips.203.pdf"><u>ML-KEM</u></a> for key agreement, and <a href="https://nvlpubs.nist.gov/nistpubs/fips/nist.fips.204.pdf"><u>ML-DSA</u></a> and <a href="https://nvlpubs.nist.gov/nistpubs/fips/nist.fips.205.pdf"><u>SLH-DSA</u></a> for digital signatures. In collaboration with industry partners, Cloudflare has helped drive the development and deployment of PQC. We have deployed the hybrid key agreement, combining ML-KEM (post-quantum secure) and X25519 (classical), to secure TLS 1.3 traffic to our servers and internal systems. As of mid-September 2025, <a href="https://radar.cloudflare.com/adoption-and-usage#post-quantum-encryption-adoption"><u>around 43%</u></a> of human-generated connections to Cloudflare are already protected with the hybrid post-quantum secure key agreement – a huge milestone in preparing the Internet for the quantum era.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2hgIUNO8TM50kvAOvzQ8rg/cdbe5b3d64390fc4b946036e2f37471d/6.png" />
          </figure><p>But things look different on the other side of the network. When Cloudflare connects to origins, we act as the client, navigating a fragmented landscape of hosting providers, software stacks, and middleboxes. Each origin may support a different set of cryptographic features, and not all are ready for hybrid post-quantum handshakes.</p><p>To manage this diversity without the risk of breaking connections, we relied on <i>HelloRetryRequest</i>. Instead of sending a post-quantum keyshare immediately in the <i>ClientHello</i>, we only advertise support for it. If the origin server supports the post-quantum key agreement, it uses <i>HelloRetryRequest</i> to request it from Cloudflare, and the post-quantum connection is created. The downside is that this extra round trip (from the retry) cancels out the performance gains of TLS 1.3 and makes the connection feel closer to TLS 1.2 for uncached requests.</p><p>Back in 2023, <a href="https://developers.cloudflare.com/ssl/post-quantum-cryptography/pqc-to-origin/"><u>we launched an API endpoint</u></a>, so customers could manually opt their origins into preferring post-quantum connections. If set, we avoid the extra round trip and try to create a post-quantum connection at the start of the TLS session. Similarly, we extended post-quantum protection to <a href="https://blog.cloudflare.com/post-quantum-tunnel/"><u>Cloudflare Tunnel</u></a>, making it one of the easiest ways to get origin-facing PQ today.</p><p><b>Starting Q4 2025, we’re taking the next step – making it </b><b><i>automatic</i></b><b>. </b>Just as we’ve done with SSL/TLS upgrades, Automatic SSL/TLS will begin testing, ramping, and enabling post-quantum handshakes with origins—without requiring customers to change a thing, as long as their origins support post-quantum key agreement.</p><p>Behind the scenes, we’re already scanning active origins about every 24 hours to test support and preferences for both classical and post-quantum key agreements. 
We’ve worked directly with vendors and customers to identify compatibility issues, and this new scanning system will be fully integrated into <a href="https://developers.cloudflare.com/ssl/origin-configuration/ssl-modes/#automatic-ssltls-default"><u>Automatic SSL/TLS</u></a>.</p><p>And the benefits won't stop at post-quantum. Even for classical handshakes, optimization matters. Today, the X25519 algorithm is used by default, but <b>our scanning data shows that more than 6% of origins currently prefer a different key agreement algorithm, </b>which leads to unnecessary <i>HelloRetryRequests </i>and wasted round trips<b>.</b> By folding this scanning data into Automatic SSL/TLS, we’ll improve connection establishment for classical TLS as well—squeezing out extra speed and reliability across the board.</p><p>As enterprises and hosting providers adopt PQC, our preliminary scanning pipeline has already found that <b>around 4% of origins could benefit from a post-quantum-preferred key agreement even today</b>, as shown below. This is an 8x increase since <a href="https://blog.cloudflare.com/post-quantum-to-origins/"><u>we started our scans in 2023</u></a>. We expect this number to grow at a steady pace as the industry continues to migrate to post-quantum protocols.</p>
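The scan-driven keyshare choice described above can be sketched as a simple lookup: if a recent scan recorded the origin’s preferred group, send that keyshare in the first ClientHello and avoid the HelloRetryRequest round trip; otherwise fall back to a conservative default. All names here are illustrative, not Cloudflare-internal identifiers.

```python
# Hypothetical sketch: pick the keyshare for the first flight of the
# edge-to-origin ClientHello based on cached scan results.
DEFAULT_GROUP = "X25519"  # conservative default when no scan data exists

def first_flight_keyshare(scan_cache: dict, origin: str) -> str:
    """Return the key agreement group whose keyshare we send immediately,
    so origins with a known preference skip the HelloRetryRequest."""
    preferred = scan_cache.get(origin)
    return preferred if preferred else DEFAULT_GROUP

scans = {"pq-origin.example": "X25519MLKEM768",   # PQ-preferring origin
         "p256-origin.example": "P-256"}          # classical, non-default

assert first_flight_keyshare(scans, "pq-origin.example") == "X25519MLKEM768"
assert first_flight_keyshare(scans, "legacy.example") == "X25519"
```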
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3g2Um1vTz6cuCtoYWtMU4C/7551fb50305a8512fa7cc22844024b99/8.png" />
          </figure><p>As part of this change, we will also<b> phase out</b> support for the pre-standard version X25519Kyber768 in favor of the final ML-KEM standard, again used in a hybrid, for edge-to-origin connections.</p><p>With Automatic SSL/TLS, we will soon proactively scan your origins by default and send their most preferred keyshare directly, removing the need for any extra round trip and improving both the security and performance of your origin connections.</p><p>At Cloudflare, we’ve always believed security is a right, not a privilege. From Universal SSL to post-quantum cryptography, our mission has been to make the strongest protections free and available to everyone. <b>Automatic SSL/TLS</b> is the next step—upgrading every domain to the best protocols automatically. Check the SSL/TLS section of your dashboard to ensure it’s enabled and join the millions of sites already secured for today and ready for tomorrow.</p> ]]></content:encoded>
            <category><![CDATA[Birthday Week]]></category>
            <category><![CDATA[TLS]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[Encryption]]></category>
            <category><![CDATA[Post-Quantum]]></category>
            <category><![CDATA[Research]]></category>
            <category><![CDATA[CDN]]></category>
            <guid isPermaLink="false">7nO4wFW304Eh2r48934ugz</guid>
            <dc:creator>Alex Krivit</dc:creator>
            <dc:creator>Suleman Ahmad</dc:creator>
            <dc:creator>Yawar Jamal</dc:creator>
        </item>
        <item>
            <title><![CDATA[Securing today for the quantum future: WARP client now supports post-quantum cryptography (PQC)]]></title>
            <link>https://blog.cloudflare.com/post-quantum-warp/</link>
            <pubDate>Wed, 24 Sep 2025 13:00:00 GMT</pubDate>
            <description><![CDATA[ To prepare for a future where powerful quantum computers come online, we've upgraded our WARP client with post-quantum cryptography. ]]></description>
            <content:encoded><![CDATA[ <p>The Internet is currently transitioning to <a href="https://www.cloudflare.com/pqc/"><u>post-quantum cryptography (PQC)</u></a> in preparation for Q-Day, when quantum computers break the classical cryptography that underpins all modern computer systems.  The US <a href="https://www.nist.gov/"><u>National Institute of Standards and Technology (NIST)</u></a> recognized the urgency of this transition, announcing that classical cryptography (<a href="https://en.wikipedia.org/wiki/RSA_cryptosystem"><u>RSA</u></a>, Elliptic Curve Cryptography (<a href="https://blog.cloudflare.com/a-relatively-easy-to-understand-primer-on-elliptic-curve-cryptography/"><u>ECC</u></a>)) must be <a href="https://csrc.nist.gov/pubs/ir/8547/ipd"><u>deprecated by 2030 and completely disallowed by 2035</u></a>.</p><p>Cloudflare is well ahead of NIST’s schedule. Today, over <a href="https://radar.cloudflare.com/adoption-and-usage?cf_history_state=%7B%22guid%22%3A%22C255D9FF78CD46CDA4F76812EA68C350%22%2C%22historyId%22%3A20%2C%22targetId%22%3A%22583662CE97724FCE7A7C0844276279FE%22%7D#post-quantum-encryption-adoption"><u>45%</u></a> of human-generated Internet traffic sent to Cloudflare’s network is already post-quantum encrypted. 
Because we believe that a secure and private Internet should be free and accessible to all, we’re on a mission to include PQC in all our <a href="https://blog.cloudflare.com/post-quantum-cryptography-ga/"><u>products</u></a>, <a href="https://blog.cloudflare.com/you-dont-need-quantum-hardware/"><u>without specialized hardware</u></a>, and at <a href="https://blog.cloudflare.com/post-quantum-crypto-should-be-free/"><u>no extra cost to our customers and end users</u></a>.</p><p>That’s why we’re proud to announce that <a href="https://developers.cloudflare.com/warp-client/"><u>Cloudflare’s WARP client</u></a> now supports post-quantum key agreement — both in our free consumer WARP client <a href="https://one.one.one.one/"><u>1.1.1.1</u></a>, and in our enterprise WARP client, the <a href="https://developers.cloudflare.com/cloudflare-one/connections/connect-devices/warp/download-warp/"><u>Cloudflare One Agent</u></a>. </p>
    <div>
      <h2>Post-quantum tunnels using the WARP client</h2>
      <a href="#post-quantum-tunnels-using-the-warp-client">
        
      </a>
    </div>
    <p>This upgrade of the WARP client to post-quantum key agreement provides end users with immediate protection for their Internet traffic against <a href="https://en.wikipedia.org/wiki/Harvest_now,_decrypt_later"><u>harvest-now-decrypt-later attacks</u></a>. The value proposition is clear — by tunneling your Internet traffic over the WARP client’s post-quantum MASQUE tunnels, you get immediate post-quantum encryption of your network traffic. And this holds even if the individual connections sent through the tunnel have not yet been upgraded to post-quantum cryptography.</p><p>Here’s how it works.</p><p>When the <a href="https://developers.cloudflare.com/cloudflare-one/connections/connect-devices/warp/download-warp/"><u>Cloudflare One Agent</u></a> (our enterprise WARP client) connects employees to internal corporate resources as part of the <a href="https://developers.cloudflare.com/cloudflare-one/"><u>Cloudflare One Zero Trust</u></a> platform, it now provides <a href="https://blog.cloudflare.com/post-quantum-zero-trust/"><u>end-to-end post-quantum encryption</u></a> of network traffic. As shown in the figure below, traffic from the WARP client is wrapped in a post-quantum encrypted <a href="https://blog.cloudflare.com/zero-trust-warp-with-a-masque/"><u>MASQUE</u></a> (<a href="https://datatracker.ietf.org/wg/masque/about/"><u>Multiplexed Application Substrate over QUIC Encryption</u></a>) tunnel, sent to Cloudflare’s <a href="https://www.cloudflare.com/network/"><u>global network</u></a> (link (1)). 
Cloudflare’s global network then forwards the traffic over another set of post-quantum encrypted tunnels (link (2)), and then finally on to the internal corporate resource using a <a href="https://blog.cloudflare.com/post-quantum-tunnel/"><u>post-quantum encrypted</u></a> Cloudflare <a href="https://developers.cloudflare.com/cloudflare-one/connections/connect-networks/"><u>Tunnel</u></a> established using the <a href="https://developers.cloudflare.com/cloudflare-one/connections/connect-networks/"><u>cloudflared agent</u></a> (which is installed near the corporate resource) (link (3)). </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7q9k7Ss95iM1PSiSIW76MD/db8146afa3da442d5459dac0919a3f31/image2.png" />
          </figure><p><sup><i>We have upgraded the </i></sup><a href="https://developers.cloudflare.com/cloudflare-one/connections/connect-devices/warp/download-warp/"><sup><i><u>Cloudflare One Agent</u></i></sup></a><sup> </sup><sup><i>to post-quantum key agreement, providing end-to-end post-quantum protection for traffic sent to internal corporate resources. </i></sup></p><p>When an end user <a href="https://developers.cloudflare.com/learning-paths/secure-internet-traffic/connect-devices-networks/install-agent/"><u>installs</u></a> the consumer WARP Client (<a href="https://one.one.one.one/"><u>1.1.1.1</u></a>), the WARP client wraps the end user’s network traffic in a post-quantum encrypted <a href="https://blog.cloudflare.com/zero-trust-warp-with-a-masque/"><u>MASQUE</u></a> tunnel. As shown in the figure below, the MASQUE tunnel protects the traffic on its way to Cloudflare’s <a href="https://www.cloudflare.com/network/"><u>global network</u></a> (link (1)). Cloudflare's global network then uses post-quantum encrypted tunnels to bring the traffic as close as possible to its final destination (link (2)). Finally, the traffic is forwarded over the public Internet to the origin server (i.e. its final destination). That final connection (link (3)) may or may not be post-quantum (PQ). It will not be PQ if the origin server is not PQ. It will be PQ if the origin server is (a) upgraded to PQC, and (b) the end user is connecting over a client that supports PQC (like Chrome, Edge or Firefox). In the future, <a href="https://blog.cloudflare.com/automatically-secure"><u>Automatic SSL/TLS</u></a> will ensure that your entire connection will be PQ as long as the origin server is behind Cloudflare and supports PQ connections (even if your browser doesn’t).</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/gagcJJsc6aLeAThvV5Wa4/c01ea5a20ea19778deca13e0eb4c7de3/image4.png" />
          </figure><p><sup><i>Consumer WARP client (</i></sup><a href="https://one.one.one.one/"><sup><i><u>1.1.1.1</u></i></sup></a><sup><i>) is now upgraded to post-quantum key agreement.</i></sup></p>
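<p>The key property of this architecture is that the outer tunnel layer protects traffic in bulk, even when the inner connection is still classical. The toy sketch below illustrates the layering idea only — it is deliberately simplified, uses a throwaway XOR keystream that is <i>not</i> real cryptography, and is not how MASQUE or TLS actually encrypt. The point is simply that a harvest-now-decrypt-later attacker who records the wire bytes and later breaks the classical inner layer still faces the post-quantum outer layer.</p>

```python
# Toy illustration (NOT real cryptography) of layered tunnel encryption:
# an outer PQ-keyed layer wraps an inner classically-keyed layer.
import hashlib

def toy_encrypt(key: bytes, data: bytes) -> bytes:
    """XOR with a SHA-256-derived keystream -- illustrative only."""
    keystream = bytearray()
    counter = 0
    while len(keystream) < len(data):
        keystream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(d ^ k for d, k in zip(data, keystream))

classical_key = b"key-breakable-by-quantum-computer"   # inner connection
pq_key = b"key-from-hybrid-pq-key-agreement"           # MASQUE tunnel

inner = toy_encrypt(classical_key, b"GET /secret HTTP/1.1")  # origin traffic
wire = toy_encrypt(pq_key, inner)                            # what's recorded

# Breaking only the classical layer is not enough: recovering `inner` from
# `wire` requires the PQ tunnel key first (XOR cipher: decrypt == encrypt).
assert toy_encrypt(pq_key, wire) == inner
assert toy_encrypt(classical_key, inner) == b"GET /secret HTTP/1.1"
```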
    <div>
      <h2>The cryptography landscape</h2>
      <a href="#the-cryptography-landscape">
        
      </a>
    </div>
    <p>Before we get into the details of our upgrade to the WARP client, let’s review the different cryptographic primitives involved in the transition to PQC. </p><p>Key agreement is a method by which two or more parties can establish a shared secret key over an insecure communication channel. This shared secret can then be used to encrypt and authenticate subsequent communications. Classical key agreement in <a href="https://www.cloudflare.com/learning/ssl/transport-layer-security-tls/"><u>Transport Layer Security (TLS)</u></a> typically uses the <a href="https://blog.cloudflare.com/a-relatively-easy-to-understand-primer-on-elliptic-curve-cryptography/"><u>Elliptic Curve Diffie-Hellman (ECDH)</u></a> cryptographic algorithm, whose security can be broken by a quantum computer using <a href="https://en.wikipedia.org/wiki/Shor%27s_algorithm"><u>Shor's algorithm</u></a>. </p><p>We need <a href="https://blog.cloudflare.com/post-quantum-key-encapsulation/"><b><u>post-quantum key agreement</u></b></a> today to stop <a href="https://en.wikipedia.org/wiki/Harvest_now,_decrypt_later"><u>harvest-now-decrypt-later attacks</u></a>, where attackers collect encrypted data today, and then decrypt it in the future once powerful quantum computers become available. 
Any institution that deals with data that could still be valuable ten years in the future (<a href="https://www.cloudflare.com/cloudflare-for-government/"><u>governments</u></a>, <a href="https://www.cloudflare.com/banking-and-financial-services/"><u>financial institutions</u></a>, <a href="https://www.cloudflare.com/healthcare/"><u>healthcare organizations</u></a>, and more) should deploy PQ key agreement to prevent these attacks.</p><p>This is why we upgraded the WARP client to post-quantum key agreement.</p><p>Post-quantum key agreement is already quite mature and performant; our <a href="https://blog.cloudflare.com/pq-2024/#ml-kem-versus-x25519"><u>experiments</u></a> have shown that deploying the post-quantum Module-Lattice-Based Key-Encapsulation Mechanism (<a href="https://csrc.nist.gov/pubs/fips/203/final"><u>ML-KEM</u></a>) algorithm in hybrid mode (in parallel with classical ECDH) over <a href="https://www.cloudflare.com/learning/ssl/why-use-tls-1.3/"><u>TLS 1.3</u></a> is actually more performant than using <a href="https://www.cloudflare.com/learning/ssl/why-use-tls-1.3/"><u>TLS 1.2</u></a> with classical cryptography. </p>
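<p>The core idea of hybrid mode is that the session key depends on <i>both</i> shared secrets, so the connection stays safe as long as either algorithm holds. In X25519MLKEM768, the ML-KEM and X25519 shared secrets are concatenated and fed into the TLS 1.3 key schedule; the sketch below is a simplified stand-in for that combination step (a plain hash instead of the real HKDF-based schedule, with placeholder byte strings instead of real shared secrets).</p>

```python
# Simplified sketch of the hybrid combination step. Real X25519MLKEM768
# concatenates the ML-KEM-768 and X25519 shared secrets into the TLS 1.3
# key schedule (HKDF); a bare SHA-256 stands in for that here.
import hashlib

def combine(ecdh_secret: bytes, mlkem_secret: bytes) -> bytes:
    """Derive a session key that depends on both shared secrets."""
    return hashlib.sha256(mlkem_secret + ecdh_secret).digest()

key = combine(b"x25519-shared-secret", b"ml-kem-768-shared-secret")

# An attacker who learns one input but not the other cannot reproduce the
# key -- breaking ECDH alone (e.g. with Shor's algorithm) is not enough.
assert key != combine(b"?" * 20, b"ml-kem-768-shared-secret")
assert key != combine(b"x25519-shared-secret", b"?" * 24)
```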
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7ggHbhukH4atXV4EIbPlrl/9845ac63363c9233fa0bff6b47a1ea79/image1.png" />
          </figure><p><sup><i>Over one-third of the human-generated traffic to our network uses TLS 1.3 with hybrid post-quantum key agreement (shown as X25519MLKEM768 in the screen capture above); in fact, if you’re on a Chrome, Edge or Firefox browser, you’re probably reading this blog right now over a PQ encrypted connection.</i></sup></p><p><b>Post-quantum digital signatures and certificates, </b>by contrast, are still in the process of being <a href="https://datatracker.ietf.org/doc/draft-ietf-lamps-dilithium-certificates/"><u>standardized</u></a> for use in TLS and the Internet’s Public Key Infrastructure (PKI). <a href="https://blog.cloudflare.com/another-look-at-pq-signatures/"><u>PQ signatures and certificates</u></a> are required to prevent an active attacker who uses a quantum computer to forge a digital certificate/signature and then uses it to decrypt or manipulate communications by impersonating a trusted server. As far as we know, we don’t have such attackers yet, which is why post-quantum signatures and certificates are not widely deployed across the Internet. We have not yet upgraded the WARP client to <a href="https://blog.cloudflare.com/another-look-at-pq-signatures/"><u>PQ signatures and certificates</u></a>, but we plan to do so soon.</p>
    <div>
      <h2>A unique challenge: PQC upgrade in the WARP client </h2>
      <a href="#a-unique-challenge-pqc-upgrade-in-the-warp-client">
        
      </a>
    </div>
    <p>While Cloudflare is on the <a href="https://blog.cloudflare.com/tag/post-quantum/"><u>forefront of the PQC transition</u></a>, a different kind of challenge emerged when we upgraded our WARP client. Unlike a server that we fully control and can hotfix at any time, our WARP client runs directly on end user devices. In fact, it runs on millions of end user devices that we do not control. This fundamental difference means that every time we update the WARP client, our release must work properly on the first try, with no room for error.</p><p>To make things even more challenging, we need to support the WARP client across five different operating systems (Windows, macOS, Linux, iOS, and Android/ChromeOS), while also ensuring consistency and reliability for both our consumer 1.1.1.1 WARP client and our Cloudflare One Agent. In addition, because the WARP client relies on the fairly new <a href="https://datatracker.ietf.org/doc/rfc9298/"><u>MASQUE protocol</u></a>, which the industry only standardized in August 2022, we need to be extra careful to make sure our upgrade to post-quantum key agreement does not expose latent bugs or instabilities in the MASQUE protocol itself. </p><p>All these challenges point to a slow and careful transition to PQC in the WARP client, while still supporting customers that want to immediately activate PQC. To accomplish this, we used three techniques: </p><ol><li><p>temporary PQC downgrades, </p></li><li><p>gradual rollout across our WARP client population, and</p></li><li><p>a <a href="https://en.wikipedia.org/wiki/Mobile_device_management"><u>Mobile Device Management (MDM)</u></a> override. </p></li></ol><p>Let’s take a deep dive into each. </p>
    <div>
      <h3>Temporary PQC downgrades</h3>
      <a href="#temporary-pqc-downgrades">
        
      </a>
    </div>
    <p>As we roll out PQ key agreement in MASQUE to the WARP client, we want to make sure we don’t have WARP clients that struggle to connect due to an error, middlebox, or a latent implementation bug triggered by our PQC migration. One way to accomplish this level of robustness is to have clients downgrade to a classical cryptographic connection if they fail to negotiate a PQ connection.</p><p>To really understand this strategy, we need to review the concept of <b>cryptographic downgrades</b>. In cryptography, a <b>downgrade attack</b> is a cyber attack where an attacker forces a system to abandon a secure cryptographic algorithm in favor of an older, less secure, or even unencrypted one that allows the attacker to introspect on the communications. Thus, when newly rolling out PQ encryption, it is standard practice to ensure that if the client and server <i>both </i>support PQ encryption, it should not be possible for an attacker to downgrade their connection to classical encryption. </p><p>Thus, to prevent downgrade attacks, we should ensure that if the client and server both support PQC, but fail to negotiate a PQC connection, then the connection will just fail. However, while this prevents downgrade attacks, it also creates problems with robustness.</p><p>We cannot have both robustness (i.e. the ability for the client to downgrade to a classical connection if PQC fails) and security against downgrades (i.e. the client is forbidden to downgrade to classical cryptography once it supports PQC) at the same time. We have to choose one. For this reason, we opted for a phased approach.</p><ul><li><p><b>Phase 1: Automated PQC downgrades.</b> We start by choosing robustness at the cost of security against downgrade attacks. In this phase, we support automated PQC downgrades — if a client fails to negotiate a PQC connection, it will downgrade to classical cryptography. 
That way, if there are bugs or other instability introduced by PQC, the client automatically downgrades to classical cryptography and the end user will not experience any issues. (Note: because MASQUE establishes a single very long-lived TLS connection only when the user logs in, an end user is unlikely to notice a downgrade.) </p></li><li><p><b>Phase 2: PQC with security against downgrades. </b>Then, once the rollout is stable and we are convinced that there are no issues interfering with PQC, we will choose security against downgrade attacks over robustness. In this phase, if a client fails to negotiate a PQC connection, the connection will just fail, which provides security against downgrade attacks.</p></li></ul><p>To implement this phased approach, we introduced an API flag that the client uses to determine how it should initiate TLS handshakes, which has three states:</p><ul><li><p><b>No PQC: </b>The client initiates a TLS handshake using classical cryptography only.</p></li><li><p><b>PQC downgrades allowed:</b> The client initiates a TLS handshake using post-quantum key agreement. If the PQC handshake negotiation fails, the client downgrades to classical cryptography. This flag supports Phase 1 of our rollout. </p></li><li><p><b>PQC only:</b> The client initiates a TLS handshake using post-quantum key agreement. If the PQC handshake negotiation fails, the connection fails. 
This flag supports Phase 2 of our rollout.</p></li></ul><p>The WARP <a href="https://developers.cloudflare.com/changelog/2025-06-30-warp-windows-ga/"><u>desktop version 2025.5.893.0</u></a>, <a href="https://developers.cloudflare.com/changelog/2025-06-30-warp-ga-ios/"><u>iOS version 1.11</u></a> and <a href="https://developers.cloudflare.com/changelog/2025-06-30-warp-ga-android/"><u>Android version 2.4.2 </u></a>all support post-quantum key agreement along with this API flag.</p><p>With this as our framework, the next question becomes: what timing makes sense for this phased approach?</p>
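<p>The three flag states above amount to a small decision table. The sketch below models it directly (the state names are illustrative shorthand, not the actual values used by the WARP client’s API):</p>

```python
# Sketch of the three-state API flag described above. Flag names are
# illustrative shorthand, not the client's actual API values.
def initiate_handshake(flag: str, pq_negotiation_succeeds: bool) -> str:
    """Return the connection outcome for a given flag state."""
    if flag == "no_pqc":
        return "classical"                    # PQ is never attempted
    if flag == "pqc_downgrades_allowed":      # Phase 1: robustness first
        return "pq" if pq_negotiation_succeeds else "classical"
    if flag == "pqc_only":                    # Phase 2: downgrade-proof
        return "pq" if pq_negotiation_succeeds else "failed"
    raise ValueError(f"unknown flag: {flag}")

assert initiate_handshake("no_pqc", True) == "classical"
assert initiate_handshake("pqc_downgrades_allowed", False) == "classical"
assert initiate_handshake("pqc_only", False) == "failed"
```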
    <div>
      <h3>Gradual rollout across the WARP client population</h3>
      <a href="#gradual-rollout-across-the-warp-client-population">
        
      </a>
    </div>
    <p>To limit the risk of errors or latent implementation bugs triggered by our PQC migration, we gradually rolled out PQC across our population of WARP clients.</p><p>In Phase 1 of our rollout, we prioritized robustness rather than security against downgrade attacks. Thus, initially the API flag is set to “No PQC” for our entire client population, and we gradually turn on “PQC downgrades allowed” across groups of clients. As we do this, we monitor whether any clients downgrade from PQC to classical cryptography. At the time of this writing, we have completed the Phase 1 rollout to all of our consumer WARP (1.1.1.1) clients. We expect to complete Phase 1 for our Cloudflare One Agent by the end of 2025.</p><p>Downgrades are not expected during Phase 1. In fact, downgrades indicate that there may be a latent issue that we have to fix. If you are using a WARP client and encounter issues that you believe might be related to PQC, you can let us know by using the feedback button in the WARP client interface (by clicking the bug icon in the top-right corner of the WARP client application). Enterprise users can also file a support ticket for the Cloudflare One Agent.</p><p>We plan to enter Phase 2 — where the API flag is set to “PQC only” in order to provide security against downgrade attacks — by mid-2026. </p>
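<p>A common way to implement this kind of gradual rollout — a generic sketch, not Cloudflare’s actual mechanism — is to hash a stable client identifier into a bucket and enable the flag for buckets below the current rollout percentage. Each client’s assignment is then deterministic and sticky, and every client enabled at a low percentage remains enabled as the rollout grows:</p>

```python
# Generic percentage-rollout sketch: hash a stable client ID into one of
# 100 buckets and enable the feature for buckets below the rollout percent.
import hashlib

def in_rollout(client_id: str, percent: int, salt: str = "pqc-phase1") -> bool:
    """Deterministically assign a client to the first `percent` buckets."""
    digest = hashlib.sha256(f"{salt}:{client_id}".encode()).digest()
    bucket = int.from_bytes(digest[:2], "big") % 100  # bucket in [0, 100)
    return bucket < percent

# At 10%, roughly one in ten clients gets the flag turned on...
enabled = sum(in_rollout(f"device-{i}", 10) for i in range(10_000))
print(enabled)  # roughly 1000

# ...and every client enabled at 10% stays enabled at 50%.
assert all(in_rollout(f"device-{i}", 50)
           for i in range(10_000) if in_rollout(f"device-{i}", 10))
```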
    <div>
      <h3>MDM override</h3>
      <a href="#mdm-override">
        
      </a>
    </div>
    <p>Finally, we know that some of our customers may not be willing to wait for us to complete this careful upgrade to PQC. So, those customers can activate PQC right now. </p><p>We’ve built a <a href="https://en.wikipedia.org/wiki/Mobile_device_management"><u>Mobile Device Management (MDM)</u></a> override for the Cloudflare One Agent. MDM allows organizations to centrally manage, monitor, and secure mobile devices that access corporate resources; it works on multiple types of devices, not just mobile devices. The override for the Cloudflare One Agent allows an administrator (with permissions to manage the device) to turn on PQC. To use the <a href="https://developers.cloudflare.com/cloudflare-one/connections/connect-devices/warp/deployment/mdm-deployment/parameters/#enable_post_quantum"><u>MDM post-quantum override</u></a>, set the ‘enable_post_quantum’ MDM flag to true. This flag takes precedence over the signal from the API flag we described earlier, and will activate PQC without downgrades. With this setting, the client will only negotiate a PQC connection. And if the PQC negotiation fails, the connection will fail, which provides security against downgrade attacks. </p>
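<p>The precedence rule described above is simple: if the MDM <code>enable_post_quantum</code> flag is set to true, it wins over whatever the API flag says, and the client behaves as if it were in the downgrade-proof “PQC only” state. A minimal sketch (flag names are illustrative shorthand, not the client’s actual internal values):</p>

```python
# Sketch of flag precedence: the MDM override wins over the API flag.
# State names are illustrative shorthand.
def effective_mode(api_flag, mdm_enable_post_quantum=None):
    """If `enable_post_quantum` is set via MDM, force PQC without downgrades;
    otherwise fall back to the API-delivered flag."""
    if mdm_enable_post_quantum:
        return "pqc_only"  # PQC only; a failed PQC negotiation fails outright
    return api_flag

assert effective_mode("no_pqc", mdm_enable_post_quantum=True) == "pqc_only"
assert effective_mode("pqc_downgrades_allowed") == "pqc_downgrades_allowed"
```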
    <div>
      <h2>Ciphersuites, FIPS, and FedRAMP</h2>
      <a href="#ciphersuites-fips-and-fedramp">
        
      </a>
    </div>
    <p>The <a href="https://www.cloudflare.com/learning/privacy/what-is-fedramp/">Federal Risk and Authorization Management Program (FedRAMP)</a> is a U.S. government standard for securing federal data in the cloud. <a href="https://cf-assets.www.cloudflare.com/slt3lc6tev37/7wOGN7Ua9rvgzlQAwlFZ6y/324506e91b62aa4de55bcb2ceb5d8ee8/Cloudflare-s_Unique_FedRAMP_Architecture.pdf"><u>Cloudflare has a FedRAMP certification</u></a> that requires that we use cryptographic ciphersuites that comply with <a href="https://csrc.nist.gov/glossary/term/federal_information_processing_standard"><u>FIPS</u></a> (Federal Information Processing Standards) for certain products that are inside our FIPS boundary.</p><p>Because the WARP client is inside Cloudflare’s FIPS boundary for our <a href="https://www.fedramp.gov/"><u>FedRAMP</u></a> certification, we had to ensure it uses FIPS-compliant cryptography. For internal links (where Cloudflare controls both sides of the connection) within the FIPS boundary, we currently use a hybrid key agreement consisting of FIPS-compliant ECDH using the P-256 elliptic curve, in parallel with an early version of ML-KEM-768 (which we started using before the ML-KEM standards were finalized) — a key agreement called P256Kyber768Draft00. To observe this ciphersuite in action in your WARP client, you can use the <code>warp-cli tunnel stats</code> utility. Here’s an example of what we find when PQC is enabled:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1ilpmpuGdOAzbqX28T34tc/17254678b17ba493da1da09f10493e9e/image5.png" />
          </figure><p>And here is an example when PQC is not enabled:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3mdNurLT1USiRICpkvIKa8/1af40525be2ccaa5b6ef71824f0ace37/image6.png" />
          </figure>
    <div>
      <h2>PQC tunnels for everyone </h2>
      <a href="#pqc-tunnels-for-everyone">
        
      </a>
    </div>
    <p>We believe that PQC should be available to everyone, without <a href="https://blog.cloudflare.com/you-dont-need-quantum-hardware/"><u>specialized hardware</u></a>, at <a href="https://blog.cloudflare.com/post-quantum-crypto-should-be-free/"><u>no additional cost</u></a>. To that end, we’re proud to help shoulder the burden of the Internet’s upgrade to PQC.</p><p>A powerful strategy is to use tunnels protected by post-quantum key agreement to protect Internet traffic, in bulk, from harvest-now-decrypt-later attacks – even if the individual connections sent through the tunnel have not yet been upgraded to PQC. Eventually, we will upgrade these tunnels to also support post-quantum signatures and certificates, to stop active attacks by adversaries armed with quantum computers after Q-Day.</p><p>This staged approach keeps up with Internet standards. And the use of tunnels provides customers and end users with built-in <i>cryptographic agility</i>, so they can easily adapt to changes in the cryptographic landscape without a major architectural overhaul.</p><p>Cloudflare’s WARP client is just the latest tunneling technology that we’ve upgraded to post-quantum key agreement. You can try it out today for free on personal devices using our free consumer WARP client <a href="https://one.one.one.one/"><u>1.1.1.1</u></a>, or for your corporate devices using our <a href="https://dash.cloudflare.com/sign-up/zero-trust"><u>free zero-trust offering for teams of under 50 users</u></a> or a paid <a href="https://www.cloudflare.com/plans/zero-trust-services/"><u>enterprise zero-trust or SASE subscription</u></a>. Just <a href="https://developers.cloudflare.com/cloudflare-one/connections/connect-devices/warp/download-warp/"><u>download</u></a> and install the client on your Windows, Linux, macOS, iOS, Android/ChromeOS device, and start protecting your network traffic with PQC.</p><div>
  
</div><p></p> ]]></content:encoded>
            <category><![CDATA[Birthday Week]]></category>
            <category><![CDATA[Post-Quantum]]></category>
            <category><![CDATA[Zero Trust]]></category>
            <category><![CDATA[Cryptography]]></category>
            <category><![CDATA[WARP]]></category>
            <category><![CDATA[Cloudflare Zero Trust]]></category>
            <category><![CDATA[SASE]]></category>
            <category><![CDATA[Cloudflare One]]></category>
            <category><![CDATA[1.1.1.1]]></category>
            <guid isPermaLink="false">6Z8Ii372a6Lta1Y2ISnfWw</guid>
            <dc:creator>Sharon Goldberg</dc:creator>
            <dc:creator>Tochukwu Nkemdilim (Toks)</dc:creator>
            <dc:creator>Koko Uko</dc:creator>
        </item>
        <item>
            <title><![CDATA[You don’t need quantum hardware for post-quantum security]]></title>
            <link>https://blog.cloudflare.com/you-dont-need-quantum-hardware/</link>
            <pubDate>Fri, 19 Sep 2025 13:44:40 GMT</pubDate>
            <description><![CDATA[ Post-quantum cryptography protects against quantum threats using today’s hardware. Quantum tech like QKD may sound appealing, but it isn’t necessary or sufficient to secure organizations. ]]></description>
            <content:encoded><![CDATA[ <p>Organizations have finite resources available to combat threats, both by the adversaries of today and those in the not-so-distant future that are armed with quantum computers. In this post, we provide guidance on what to prioritize to best prepare for the future, when quantum computers become powerful enough to break the conventional cryptography that underpins the security of modern computing systems.  We describe how <a href="https://www.cloudflare.com/learning/ssl/quantum/what-is-post-quantum-cryptography/"><u>post-quantum cryptography (PQC)</u></a> can be deployed <b>on your existing hardware</b> to protect from threats posed by <a href="https://www.cloudflare.com/learning/ssl/quantum/what-is-quantum-computing/"><u>quantum computing</u></a>, and explain why quantum key distribution (QKD) and quantum random number generation (QRNG) are neither necessary nor sufficient for security in the quantum age.</p>
    <div>
      <h2>Are you quantum ready?</h2>
      <a href="#are-you-quantum-ready">
        
      </a>
    </div>
    <p>“Quantum” is becoming one of the most heavily used buzzwords in the tech industry. What does it actually mean, and why should you care?</p><p>At its core, “quantum” refers to technologies that harness principles of quantum mechanics to perform tasks that are not feasible with classical computers. Quantum computers have exciting potential to unlock advancements in <a href="https://pubs.aip.org/aip/jap/article/133/22/221102/2896017/Quantum-computing-and-materials-science-A"><u>materials science</u></a> and <a href="https://www.weforum.org/stories/2025/01/quantum-computing-drug-development/"><u>medicine</u></a>, but also pose a <a href="https://blog.cloudflare.com/the-quantum-menace/"><u>threat</u></a> to computer security systems. The term <i>Q-day</i> refers to the day that adversaries possess quantum computers that are large and stable enough to break the conventional <a href="https://www.cloudflare.com/learning/ssl/how-does-public-key-encryption-work/"><u>public-key cryptography</u></a> that secures much of today’s data and communications. Recent <a href="https://sam-jaques.appspot.com/quantum_landscape"><u>advances in quantum computing</u></a> have made it clear that it is no longer a question of <i>if </i>Q-day will arrive, but <i>when</i>.</p><p>What does it mean, then, for your organization to be <a href="https://www.cloudflare.com/the-net/top-of-mind-technology/post-quantum-security/"><u>quantum ready</u></a>? At Cloudflare, our definition is simple: <i>your systems and communications should be secure even after Q-day</i>. </p><p>However, this definition often gets muddied by vendors insisting that products <i>built using quantum technology</i> are required in order to <i>secure </i>an organization <i>against quantum adversaries</i>. 
In this blog post we explain why quantum technologies are neither necessary nor sufficient to <a href="https://www.cloudflare.com/the-net/security-signals/post-quantum-era/"><u>protect against attacks by a quantum adversary</u></a>.</p><p>The good news is that there is already a solution: <a href="https://www.cloudflare.com/learning/ssl/quantum/what-is-post-quantum-cryptography/"><u>post-quantum cryptography (PQC)</u></a>. PQC protects against attacks by quantum adversaries, but PQC is not a quantum technology — it runs on conventional computers without specialized hardware. You can use PQC today on the computers you already have, without buying expensive new hardware.</p>
    <div>
      <h2>Post-quantum cryptography</h2>
      <a href="#post-quantum-cryptography">
        
      </a>
    </div>
    <p>We’ve written <a href="https://blog.cloudflare.com/tag/post-quantum/"><u>quite a few blog posts</u></a> on post-quantum cryptography already, so we will keep this section brief.</p><p>The <a href="https://en.wikipedia.org/wiki/Public-key_cryptography"><u>public-key cryptography</u></a> that we’ve used for decades to secure our data and communications is based on math problems (like <a href="https://en.wikipedia.org/wiki/RSA_cryptosystem"><u>factoring large numbers</u></a>) that are believed to be <a href="https://en.wikipedia.org/wiki/Computational_hardness_assumption"><u>computationally hard</u></a> to solve on conventional computers. If you can efficiently solve the underlying math problem, you can efficiently break the cryptography and the systems that depend on it. As it turns out, the math problems underlying much of today’s public-key cryptography can be efficiently solved by specialized algorithms, like <a href="https://en.wikipedia.org/wiki/Shor%27s_algorithm"><u>Shor’s algorithm</u></a>, on large-scale quantum computers. </p><p>The solution? Pick new hard math problems (like finding <a href="https://blog.cloudflare.com/lattice-crypto-primer/"><u>“short” vectors in algebraic lattices</u></a>) that are no easier to solve with a quantum computer than with a conventional computer. Then, build new cryptographic systems around them. 
The <a href="https://www.nist.gov/"><u>US National Institute of Standards and Technology (NIST)</u></a> launched an <a href="https://csrc.nist.gov/projects/post-quantum-cryptography/post-quantum-cryptography-standardization"><u>international competition</u></a> in 2016 to identify and standardize such cryptographic systems, which resulted in several new standards for post-quantum cryptography being published in 2024, with <a href="https://blog.cloudflare.com/another-look-at-pq-signatures/"><u>several more under consideration</u></a> for future standardization.</p><p>Post-quantum cryptography (PQC) runs on your existing phones, laptops, and servers. PQC runs at <a href="https://radar.cloudflare.com/adoption-and-usage#post-quantum-encryption-adoption"><u>Internet scale</u></a> and can even be <a href="https://blog.cloudflare.com/pq-2024/#ml-kem-versus-x25519"><u>more performant</u></a> than classical cryptography. Except in rare cases, like when you need additional hardware acceleration in cheap smartcards or to replace legacy systems that lack <a href="https://en.wikipedia.org/wiki/Cryptographic_agility"><u>cryptographic agility</u></a>, there is <b>no need to purchase new hardware to migrate to PQC</b>.</p><p><b>If you want to know how to protect your organization from security threats posed by quantum computers, you can stop reading now. Post-quantum cryptography is the solution. </b></p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6V5tcIzTpANpLJ0lQFUKKJ/50c58a5536a25b39985b6fc5f17ed432/image_-_2025-09-19T142023.308.png" />
          </figure><p>Alternatively, you can read below for our perspective on hardware-based quantum security technologies that are sometimes marketed as security solutions.</p>
    <div>
      <h2>Quantum security technologies</h2>
      <a href="#quantum-security-technologies">
        
      </a>
    </div>
    <p>Quantum technologies capture the imagination. <a href="https://en.wikipedia.org/wiki/Quantum_computing"><u>Quantum computers</u></a> (possibly linked together in a <a href="https://en.wikipedia.org/wiki/Quantum_network"><u>quantum Internet</u></a>) promise to deliver breakthroughs in <a href="https://www.weforum.org/stories/2025/01/quantum-computing-drug-development/"><u>drug discovery</u></a> and <a href="https://pubs.aip.org/aip/jap/article/133/22/221102/2896017/Quantum-computing-and-materials-science-A"><u>materials science</u></a> via advanced molecular simulation. Measurement of physical <a href="https://en.wikipedia.org/wiki/Hardware_random_number_generator"><u>quantum processes</u></a> can be used to generate <a href="https://en.wikipedia.org/wiki/Entropy"><u>entropy</u></a> with mathematically <a href="https://www.nature.com/articles/s41467-022-35556-z"><u>provable properties</u></a>.</p><p>This is exciting technology and fundamental scientific research. But this technology is <b>not</b> required to secure data and communications against quantum attackers.</p><p>In this section, we’ll explain why quantum security technologies do not need to be part of your quantum readiness strategy, and <b>any decision to invest in quantum technology should not be based on a desire to defend data and communications systems against the threat of quantum adversaries. 
</b>Instead, investments should be based on a desire to improve quantum technologies in their own right, for example to help with applications like <a href="https://pubs.acs.org/doi/10.1021/acs.chemrev.4c00678"><u>chemistry</u></a>, <a href="https://www.cloudflare.com/learning/ai/what-is-machine-learning/"><u>machine learning</u></a>, and <a href="https://pmc.ncbi.nlm.nih.gov/articles/PMC11257328/"><u>financial modeling</u></a>.</p><p>Our position here is largely in agreement with the strategies towards quantum security technologies of the <a href="https://www.nsa.gov/Cybersecurity/Post-Quantum-Cybersecurity-Resources/"><u>US National Security Agency (NSA)</u></a>, <a href="https://www.ncsc.gov.uk/whitepaper/quantum-networking-technologies"><u>UK National Cyber Security Centre (NCSC)</u></a>, <a href="https://english.ncsc.nl/binaries/ncsc-en/documenten/publications/2024/march/25/quantum-secure/Make+your+organization+quantum+secure.pdf"><u>NL Nationaal Cyber Security Centrum (NCSC)</u></a>, and <a href="https://www.bsi.bund.de/EN/Themen/Unternehmen-und-Organisationen/Informationen-und-Empfehlungen/Quantentechnologien-und-Post-Quanten-Kryptografie/quantentechnologien-und-post-quanten-kryptografie_node.html"><u>DE Federal Office for Information Security (BSI)</u></a>. We’ll focus on two quantum technologies widely marketed as security products: quantum key distribution (QKD) and quantum random number generation (QRNG).</p>
    <div>
      <h3>Quantum key distribution</h3>
      <a href="#quantum-key-distribution">
        
      </a>
    </div>
    <p>Quantum key distribution (QKD) is a hardware-based solution to secure communications across point-to-point links. Rather than relying on hard mathematical problems, QKD relies on principles of quantum physics to establish a shared symmetric secret between two parties, while ensuring that eavesdropping can be detected. QKD provides security guarantees that are based on physical properties of the communication channel. Once a shared secret is established, parties can switch to traditional symmetric-key cryptography for secure communication. QKD is the first step towards a futuristic “quantum Internet.” However, there are some fundamental reasons why QKD cannot be a general replacement for classical cryptography running on conventional hardware.</p><p>Most importantly, <i>QKD does not operate at Internet scale</i>. QKD is used to establish an unauthenticated secret between pairs of parties with a direct physical link between them. The parties can then use an authentication mechanism based on conventional cryptography to bootstrap a secure communication channel over that link. While building dedicated physical links may be feasible for cross-datacenter communication or across major Internet backbones, it is not possible for most pairs of parties on the Internet. In particular, deploying QKD for the “last-mile” connection to end-user devices would require that each device has a direct physical connection to every server or device it needs to securely communicate with.</p><p>Connectivity aside, there's a good reason why the Internet doesn't rely on secure point-to-point links: they do not scale (or rather, the number of links required grows quadratically with the number of devices). Bringing a new device online would require a change to <i>every other device</i> it needs to communicate with, a massive operational burden on everyone. Fortunately, there’s a better way. 
The <a href="https://www.cloudflare.com/learning/ddos/glossary/open-systems-interconnection-model-osi/"><u>OSI model</u></a> for networking provides an abstraction such that two parties can communicate even if they don’t share a direct physical link, so long as some chain of physical links exists between them. Public-key cryptography, invented in the seminal “<a href="https://www-ee.stanford.edu/~hellman/publications/24.pdf"><u>New Directions in Cryptography</u></a>” paper in 1976, allows two parties participating in the same <a href="https://en.wikipedia.org/wiki/Public_key_infrastructure"><u>public-key infrastructure</u></a> to establish a secure <a href="https://en.wikipedia.org/wiki/End-to-end_encryption"><u>end-to-end encrypted</u></a> communication channel, without requiring any prior setup between them. The massive scaling enabled by these technologies is why the secure Internet exists as we know it. Secure point-to-point links are not part of the solution.</p><p>Lack of scalability is enough for us to disqualify QKD outright: if a technology can’t bring security to the whole Internet, we’re not going to spend much time on it.</p><p>The challenges with QKD don’t stop there, though.</p><p>QKD touts theoretical security guarantees, but achieving security in practice is not so simple. QKD systems have been <a href="https://www.bsi.bund.de/SharedDocs/Downloads/EN/BSI/Publications/Studies/QKD-Systems/QKD-Systems.pdf?__blob=publicationFile&amp;v=3"><u>plagued by implementation attacks</u></a>, both classical <a href="https://en.wikipedia.org/wiki/Side-channel_attack"><u>side-channel attacks</u></a> and <a href="https://www.bsi.bund.de/SharedDocs/Downloads/EN/BSI/Publications/Studies/QKD-Systems/QKD-Systems.pdf?__blob=publicationFile&amp;v=3"><u>new ones</u></a> specific to the technology. 
Further, QKD works best over a special medium: either <a href="https://pmc.ncbi.nlm.nih.gov/articles/PMC4646568/"><u>fiber</u></a> or a <a href="https://journals.aps.org/prapplied/abstract/10.1103/PhysRevApplied.19.064003"><u>vacuum</u></a>. QKD has been demonstrated <a href="https://iopscience.iop.org/article/10.1088/1367-2630/16/4/043003"><u>over the air</u></a>, but both performance and the implementation security mentioned above suffer. We still have not seen QKD work on a mobile phone or over Wi-Fi networks.</p><p>Moreover, neither QKD nor any other quantum technology provides authentication to prove that the party on the other end of the key exchange is who you think they are. This opens the door for a classic <a href="https://blog.cloudflare.com/monsters-in-the-middleboxes/"><u>monster in the middle (MITM)</u></a> attack, where an adversary intercepts your connection, establishes separate secure QKD links with you and with your intended destination, and then sits in the middle reading and relaying all traffic. To prevent this, you must authenticate the identity of the party you are connecting to, using either <a href="https://en.wikipedia.org/wiki/Pre-shared_key"><u>pre-shared keys</u></a> or conventional public-key cryptography. The bottom line is, whether or not you invest in QKD, you still need a solution for authentication to protect against active attackers armed with quantum computers. Practically speaking, that means you need PQC, but PQC is already a standalone solution that provides both authentication and key agreement, which raises the question of why one would use QKD in the first place.</p><p>Some <a href="https://www.amazon.science/blog/qkd-and-authentication-separating-facts-from-myths"><u>proponents</u></a> <a href="https://www.bluequbit.io/quantum-internet"><u>argue</u></a> that QKD should be integrated into existing systems as an extra security layer. 
The value proposition of QKD relates to the “<a href="https://en.wikipedia.org/wiki/Harvest_now,_decrypt_later"><u>harvest now, decrypt later</u></a>” threat. In public-key cryptography, the key exchange messages used to set up encryption keys to secure a communication channel are exchanged in full view of a potential adversary. If an adversary records the key exchange messages, they might hope to use improved techniques in the future to solve the hard math problems upon which the security of the key exchange relies, allowing them to recover the encryption keys and decrypt the communication. If encryption keys are exchanged directly via QKD instead, the eavesdropper protections provided by QKD stop an adversary from recording messages that could later allow them to recover the encryption key (e.g. by using a quantum computer or other advances in cryptanalysis). The problem is, however, that this “extra security layer” is brittle, and limited to a single physical link. As soon as the data is transmitted elsewhere — for instance at an Internet exchange point or to travel to an end-user — the QKD security ends. For the rest of its journey, the data is protected by standard protocols like <a href="https://www.cloudflare.com/learning/ssl/transport-layer-security-tls/"><u>TLS</u></a>, making the value of the initial QKD link questionable.</p><p>While we hope the technology progresses, QKD is neither necessary nor sufficient for security against a quantum adversary. PQC is sufficient for security against a quantum adversary, already runs on your existing hardware, and works everywhere.</p>
    <div>
      <h3>Quantum random number generators</h3>
      <a href="#quantum-random-number-generators">
        
      </a>
    </div>
    <p>Quantum random number generators (QRNGs) are a type of <a href="https://en.wikipedia.org/wiki/Hardware_random_number_generator"><u>“true” random number generator (TRNG)</u></a> that work by harnessing the inherent unpredictability of quantum mechanics, for example by measuring <a href="https://en.wikipedia.org/wiki/Radioactive_decay"><u>atomic decay</u></a> or shooting photons at a <a href="https://en.wikipedia.org/wiki/Beam_splitter"><u>beam splitter</u></a>. Other types of classical (non-quantum) TRNGs use physical phenomena that exhibit random properties, such as <a href="https://ieeexplore.ieee.org/abstract/document/982700"><u>thermal noise</u></a> from electrical components, the motion of hot wax in <a href="https://blog.cloudflare.com/randomness-101-lavarand-in-production/"><u>lava lamps</u></a>, <a href="https://blog.cloudflare.com/harnessing-office-chaos/#londons-unpredictable-pendulums"><u>double pendulums</u></a>, <a href="https://blog.cloudflare.com/harnessing-office-chaos/#austins-mesmerizing-mobiles"><u>hanging mobiles</u></a>, or <a href="https://blog.cloudflare.com/chaos-in-cloudflare-lisbon-office-securing-the-internet-with-wave-motion/"><u>water wave machines</u></a>.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6hDsBCSgInfwiAP7Qlnmth/8c1601c30a8982a164edfd096a3792a6/image_-_2025-09-19T141347.781.png" />
          </figure><p>In cryptography and computer security, the essential property required from a random number generator is that the outputs are unpredictable and unbiased. This can be achieved by taking a small seed (say, 256 bits) of true randomness and feeding it to a cryptographically-secure pseudorandom number generator (CSPRNG) to produce an essentially limitless stream of pseudorandom output indistinguishable from true randomness. The randomness used to seed the CSPRNG can be based on either classical or quantum physical processes, as long as it is not known to the adversary. Whether or not you use a QRNG to generate the seed, a CSPRNG is essential for cryptographic applications.</p><p>We are the first to get excited about <a href="https://blog.cloudflare.com/randomness-101-lavarand-in-production/"><u>fun</u></a> <a href="https://blog.cloudflare.com/chaos-in-cloudflare-lisbon-office-securing-the-internet-with-wave-motion/"><u>new</u></a> <a href="https://blog.cloudflare.com/harnessing-office-chaos/"><u>sources</u></a> of <a href="https://blog.cloudflare.com/league-of-entropy/"><u>randomness</u></a>. However, we’d like to emphasize that randomness derived from quantum effects is not necessary to combat threats from quantum computers. Quantum computers do not enable any practical new attacks against classical TRNGs in widespread use today. Your decision to invest in QRNGs should be based on a perceived improvement in the quality of randomness they produce and not on a perceived threat to classical TRNGs from quantum computing.</p>
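<p>To make the seed-plus-CSPRNG construction above concrete, here is a minimal sketch in Python. It uses SHAKE-256 (an extendable-output function) as a stand-in for a full CSPRNG and the operating system's entropy pool as the true-randomness source; both choices are illustrative, not a production design:</p>
<pre><code>import hashlib
import os

# A small seed of true randomness -- from the OS entropy pool here,
# but a classical TRNG or a QRNG would serve equally well.
seed = os.urandom(32)  # 256 bits

# Expand the seed into an essentially limitless pseudorandom stream.
stream = hashlib.shake_256(seed).digest(1024)  # first 1024 bytes

# The expansion is deterministic: the same seed always yields the
# same stream, so security rests entirely on the seed staying secret.
assert stream == hashlib.shake_256(seed).digest(1024)</code></pre>
<p>Production CSPRNGs (such as the one behind Python's <code>secrets</code> module) layer reseeding and compromise recovery on top of this basic expand-a-seed idea, but the core point stands: one small unpredictable seed is all the "true" randomness a cryptographic application needs.</p>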
    <div>
      <h2>Post-quantum cryptography at Cloudflare</h2>
      <a href="#post-quantum-cryptography-at-cloudflare">
        
      </a>
    </div>
    <p>Cloudflare has been at the forefront of developing and deploying PQC, and we are committed to making PQC available <a href="https://blog.cloudflare.com/post-quantum-crypto-should-be-free"><u>for free and by default</u></a> for all of our products. And we run it at scale — already <a href="https://radar.cloudflare.com/explorer?dataSet=http&amp;groupBy=post_quantum&amp;filters=botClass%253DLIKELY_HUMAN&amp;dt=1d"><u>over 40% of the human-generated traffic</u></a> to our network uses PQC.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1UaPlayxwXxE9cKVZAVAQR/d605e06ae2a173c8344c1def89d64b1c/image_-_2025-09-19T141341.648.png" />
          </figure><p>So what’s in that 40%? PQC is supported for all <a href="https://developers.cloudflare.com/ssl/post-quantum-cryptography/"><u>website and API traffic</u></a> served through Cloudflare, most of Cloudflare’s <a href="https://blog.cloudflare.com/post-quantum-cryptography-ga"><u>internal network traffic</u></a>, and traffic running over our <a href="https://blog.cloudflare.com/post-quantum-zero-trust/"><u>Zero-Trust platform</u></a>. All these connections use post-quantum key agreement to protect against the “<a href="https://en.wikipedia.org/wiki/Harvest_now,_decrypt_later"><u>harvest now, decrypt later</u></a>” threat, where an adversary intercepts and stores encrypted data today in the hope of decrypting it in the future with a quantum computer or other cryptanalytic advances. Key agreement is an important first step, but there’s still more work to be done. We’re <a href="https://mailarchive.ietf.org/arch/msg/ietf-announce/OWIjlOTCI_PIO0S2O9NHj8YUY0I/"><u>actively working</u></a> with stakeholders in the industry to prepare for the upcoming migration to post-quantum signatures to prevent active impersonation attacks from quantum adversaries after Q-day.</p>
    <div>
      <h2>Quantum readiness strategy</h2>
      <a href="#quantum-readiness-strategy">
        
      </a>
    </div>
    <p>If purchasing quantum hardware is not necessary, how <i>should</i> organizations <a href="https://www.cloudflare.com/the-net/quantum-computing/"><u>prepare for a quantum future</u></a>? The most effective strategy will depend on your organization’s individual needs, but some general strategies will pay off for most organizations:</p><p>Investing in basic security practices is a good start. Hire the right expertise if you don’t already have it. Find vendors that support post-quantum encryption in their offerings today, and whose products are cryptographically agile so you can enjoy a seamless transition to <a href="https://blog.cloudflare.com/another-look-at-pq-signatures/"><u>post-quantum signatures</u></a> and certificates when the industry migrates before Q-day. Follow a tunneling strategy: routing application traffic over the Internet via <a href="https://developers.cloudflare.com/ssl/post-quantum-cryptography/pqc-and-zero-trust/"><u>secure, quantum-safe tunnels</u></a> allows you to reduce your attack surface area with minimal changes to existing systems. If you’re already a Cloudflare customer (or want to be), our <a href="https://www.cloudflare.com/application-services/products/cdn/"><u>Content Delivery Network</u></a> and <a href="https://blog.cloudflare.com/post-quantum-zero-trust/"><u>Zero Trust platform</u></a> make this easy. Learn more about how we can help at our <a href="https://www.cloudflare.com/pqc"><u>Post-Quantum Cryptography</u></a> webpage.</p> ]]></content:encoded>
            <category><![CDATA[Post-Quantum]]></category>
            <category><![CDATA[Cryptography]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[Entropy]]></category>
            <category><![CDATA[Randomness]]></category>
            <category><![CDATA[Research]]></category>
            <guid isPermaLink="false">3X7BJlPGwok0pKcR33AUs0</guid>
            <dc:creator>Luke Valenta</dc:creator>
        </item>
        <item>
            <title><![CDATA[Prepping for post-quantum: a beginner’s guide to lattice cryptography]]></title>
            <link>https://blog.cloudflare.com/lattice-crypto-primer/</link>
            <pubDate>Fri, 21 Mar 2025 13:00:00 GMT</pubDate>
            <description><![CDATA[ This post is a beginner's guide to lattices, the math at the heart of the transition to post-quantum (PQ) cryptography. It explains how to do lattice-based encryption and authentication from scratch. ]]></description>
            <content:encoded><![CDATA[ <p>The cryptography that secures the Internet is evolving, and it's time to catch up. This post is a tutorial on lattice cryptography, the paradigm at the heart of the post-quantum (PQ) transition.</p><p>Twelve years ago (in 2013), the <a href="https://en.wikipedia.org/wiki/Edward_Snowden"><u>revelation of mass surveillance in the US</u></a> kicked off the widespread adoption of <a href="https://www.cloudflare.com/learning/ssl/transport-layer-security-tls/"><u>TLS</u></a> for encryption and authentication on the web. This transition was buoyed by the standardization and implementation of new, more efficient public-key cryptography based on <a href="https://blog.cloudflare.com/a-relatively-easy-to-understand-primer-on-elliptic-curve-cryptography/"><u>elliptic curves</u></a>. Elliptic curve cryptography was both faster and required less communication than its predecessors, including <a href="https://en.wikipedia.org/wiki/RSA_(cryptosystem)"><u>RSA</u></a> and <a href="https://en.wikipedia.org/wiki/Diffie%E2%80%93Hellman_key_exchange"><u>Diffie-Hellman</u></a> over finite fields.</p><p>Today's transition to <a href="https://www.cloudflare.com/learning/ssl/quantum/what-is-post-quantum-cryptography/"><u>PQ cryptography</u></a> addresses a looming threat for TLS and beyond: once built, a sufficiently large quantum computer can be used to break all <a href="https://www.cloudflare.com/learning/ssl/how-does-public-key-encryption-work/">public-key cryptography</a> in use today. And we continue to see <a href="https://blog.google/technology/research/google-willow-quantum-chip/"><u>advancements</u></a> in quantum-computer engineering that bring us closer to this threat becoming a reality.</p><p>Fortunately, this transition is well underway. The research and standards communities have spent the last several years developing alternatives that resist quantum cryptanalysis. 
For its part, Cloudflare has contributed to this process and is an early adopter of newly developed schemes. In fact, PQ encryption has been available at our edge <a href="https://blog.cloudflare.com/post-quantum-for-all/"><u>since 2022</u></a> and is <a href="https://radar.cloudflare.com/adoption-and-usage#post-quantum-encryption-adoption"><u>used in over 35% of non-automated HTTPS traffic today (2025)</u></a>. And this year we're beginning a major push towards PQ authentication for the TLS ecosystem.</p><p>Lattice-based cryptography is the first paradigm that will replace <a href="https://blog.cloudflare.com/a-relatively-easy-to-understand-primer-on-elliptic-curve-cryptography/"><u>elliptic curves</u></a>. Apart from being PQ secure, lattices are often as fast, and sometimes faster, in terms of CPU time. However, this new paradigm for public key crypto has one major cost: <b>lattices require much more communication than elliptic curves.</b> For example, establishing an encryption key using lattices requires 2272 bytes of communication between the client and the server (<a href="https://csrc.nist.gov/pubs/fips/203/final"><u>ML-KEM-768</u></a>), compared to just 64 bytes for a key exchange using a modern elliptic-curve-based scheme (<a href="https://datatracker.ietf.org/doc/html/rfc7748"><u>X25519</u></a>). Accommodating such costs requires a significant amount of engineering, from dealing with <a href="https://blog.cloudflare.com/sizing-up-post-quantum-signatures/"><u>TCP packet fragmentation</u></a>, to reworking <a href="https://blog.cloudflare.com/pq-2024/#two-migrations"><u>TLS and its public key infrastructure</u></a>. Thus, the PQ transition is going to require the participation of a large number of people with a variety of backgrounds, not just cryptographers.</p><p>The primary audience for this blog post is those who find themselves involved in the PQ transition and want to better understand what's going on under the hood. 
However, more fundamentally, we think it's important for everyone to understand lattice cryptography on some level, especially if we're going to trust it for our security and privacy.</p><p>We'll assume you have a software-engineering background and some familiarity with concepts like TLS, encryption, and authentication. We'll see that the math behind lattice cryptography is, at least at the highest level, not difficult to grasp. Readers with a crypto-engineering background who want to go deeper might want to start with the excellent <a href="https://eprint.iacr.org/2024/1287"><u>tutorial by Vadim Lyubashevsky</u></a> on which this blog post is based. We also recommend <a href="https://keymaterial.net/2023/09/01/learning-but-with-errors/"><u>Sophie Schmieg's blog</u></a> on this subject.</p><p>While the transition to lattice cryptography incurs costs, it also creates opportunities. Many things we can build with elliptic curves we can also build with lattices, though not always as efficiently; but there are also things we can do with lattices that we don't know how to do efficiently with anything else. We'll touch on some of these applications at the very end.</p><p>We're going to cover a lot of ground in this post. If you stick with it, we hope you'll come away feeling empowered, not only to tackle the engineering challenges the PQ transition entails, but to solve problems you didn't know how to solve before.</p><p>Strap in — let's have some fun!</p>
    <div>
      <h3>Encryption</h3>
      <a href="#encryption">
        
      </a>
    </div>
    <p>The most pressing problem for the PQ transition is to ensure that tomorrow's quantum computers don't break today's encryption. An attacker today can store the packets exchanged between your laptop and a website you visit, and then, some time in the future, decrypt those packets with the help of a quantum computer. This means that much of the sensitive information transiting the Internet today — everything from <a href="https://blog.cloudflare.com/https-only-for-cloudflare-apis-shutting-the-door-on-cleartext-traffic/"><u>API tokens</u></a> and passwords to database encryption keys — may one day be unlocked by a quantum computer.</p><p>In fact, today's encryption in TLS is <i>mostly</i> PQ secure: <b>what's at risk is the process by which your browser and a server establish an encryption key</b>. Today this is usually done with elliptic-curve-based schemes, which are not PQ secure; our goal for this section is to understand how to do key exchange with lattice-based schemes, which are.</p><p>We will work through and implement a simplified version of <a href="https://csrc.nist.gov/pubs/fips/203/final"><u>ML-KEM</u></a>, a.k.a. Kyber, the most widely deployed PQ key exchange in use today. Our code will be less efficient and less secure than a <a href="https://pkg.go.dev/github.com/cloudflare/circl@v1.6.0/kem/mlkem"><u>spec-compliant, production-quality implementation</u></a>, but will be good enough to grasp the main ideas.</p><p>Our starting point is a protocol that looks an awful lot like <a href="https://en.wikipedia.org/wiki/Diffie%E2%80%93Hellman_key_exchange"><u>Diffie-Hellman (DH)</u></a> key exchange. For those readers unacquainted with DH, the goal is for Alice and Bob to establish a shared secret over an insecure network. To do so, each picks a random secret number, computes the corresponding "key share", and sends the key share to the other:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5yWJO1Sem97PfhwqiKZRLm/bfacc44ca4f7187f4212b7b4616dd86d/image7.png" />
          </figure><p>Alice's secret number is $s$ and her key share is $g^s$; Bob's secret number is $r$ and his key share is $g^r$. Then given their secret and their peer's key share, each can compute $g^{rs}$. The security of this protocol comes from how we choose $g$, $s$, and $r$ and how we do arithmetic. The most efficient instantiation of DH uses elliptic curves.</p><p>In ML-KEM we replace operations on elliptic curves with <b>matrix </b>operations. It's not quite a drop-in replacement, so we'll need a little linear algebra to make sense of it. But don't worry: we're going to work with Python so we have running code to play with, and we'll use <a href="https://numpy.org/"><u>NumPy</u></a> to keep things high level.</p>
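<p>To make the $g$, $s$, $r$ notation concrete before we switch to matrices, here is the classic DH flow with deliberately tiny, utterly insecure parameters (the values of <code>p</code>, <code>g</code>, <code>s</code>, and <code>r</code> are made up for illustration only):</p>
<pre><code># Toy Diffie-Hellman over the integers mod a small prime. Real
# deployments use elliptic curves or ~2048-bit primes; these tiny
# parameters offer no security whatsoever.
p = 23  # public prime modulus
g = 5   # public generator

s = 6                       # Alice's secret number
r = 15                      # Bob's secret number
alice_share = pow(g, s, p)  # g^s mod p, sent to Bob in the clear
bob_share = pow(g, r, p)    # g^r mod p, sent to Alice in the clear

# Each side raises the other's share to its own secret;
# both arrive at the same value g^(rs) mod p.
assert pow(bob_share, s, p) == pow(alice_share, r, p)</code></pre>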
    <div>
      <h4>All the math we'll need</h4>
      <a href="#all-the-math-well-need">
        
      </a>
    </div>
    <p>A matrix is just a two-dimensional array of numbers. In NumPy, we can create a matrix as follows (importing <code>numpy</code> as <code>np</code>):</p>
            <pre><code>A = np.matrix([[1, 2, 3],
               [4, 5, 6],
               [7, 8, 9]])</code></pre>
            <p>This defines <code>A</code> to be the <code>3</code>-by-<code>3</code> matrix with entries <code>A[0,0]==1, A[0,1]==2, A[0,2]==3, A[1,0]==4</code>, and so on.</p><p>For the purposes of this post, the entries of our matrices will always be integers. Furthermore, whenever we add, subtract, or multiply two integers, we then <b>reduce</b> the result, just like we do with hours on a clock, so that we end up with a number in <code>range(Q)</code> for some positive number <code>Q</code>, called the modulus. The exact value doesn’t really matter now, but for ML-KEM it’s <code>Q=3329</code>, so let's go with that for now. (The modulus for a clock would be <code>Q=12</code>.)</p><p>In Python, we write multiplication of integers <code>a</code> and <code>b</code> modulo <code>Q</code> as<code> c = a*b % Q</code>. Here we compute <code>a*b</code>, divide the result by <code>Q</code>, then set <code>c</code> to the remainder. For example, <code>42*1337</code> <code>% Q</code> is equal to <code>2890</code> rather than <code>56154</code>. Modular addition and subtraction are done analogously. For the rest of this blog, we will sometimes omit "<code>% Q</code>" when it's clear in context that we mean modular arithmetic.</p><p>Next, we'll need three operations on matrices.</p><p>The first is <b>matrix transpose</b>, written <code>A.T </code>in NumPy. This operation flips the matrix along its diagonal so that <code>A.T[j,i] == A[i,j]</code> for all rows <code>i</code> and columns <code>j</code>:</p>
            <pre><code>print(A.T)
# [[1 4 7]
#  [2 5 8]
#  [3 6 9]]</code></pre>
            <p>To visualize this, imagine writing down a matrix on a translucent piece of paper. Draw a line from the top left corner to the bottom right corner of that paper, then rotate the paper 180° around that line:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2x3qNDptBldEKCgQBbmNPr/ad174561910e6c90753024625f16fd68/image3.png" />
          </figure><p>The second operation we'll need is <b>matrix multiplication</b>. Normally, we will multiply a matrix by a <b>column vector</b>, which is just a matrix with one column. For example, the following <code>3</code>-by-<code>1</code> matrix is a column vector:</p>
            <pre><code>s = np.matrix([[0],
               [1],
               [0]])</code></pre>
            <p>We can also write <code>s </code>more concisely as <code>np.matrix([[0,1,0]]).T</code>.  To multiply a square matrix <code>A</code> by a column vector <code>s</code>, we compute the <b>dot product</b> of each row of <code>A</code> with <code>s</code>. That is, if <code>t = A*s % Q</code>, then <code>t[i] == (A[i,0]*s[0,0] + A[i,1]*s[1,0] + A[i,2]*s[2,0]) % Q </code>for each row <code>i</code>. The output will always be a column vector:</p>
            <pre><code>print(A*s % Q)
# [[2]
#  [5]
#  [8]]</code></pre>
            <p>The number of rows of this column vector is equal to the number of rows of the matrix on the left hand side. In particular, if we take our column vector <code>s</code>, transpose it into a <code>1</code>-by-<code>3</code> matrix, and multiply it by a <code>3</code>-by-<code>1</code> matrix <code>r</code>, then we end up with a <code>1</code>-by-<code>1 </code>matrix:</p>
            <pre><code>r = np.matrix([[1,2,3]]).T
print(s.T*r % Q)
# [[2]]</code></pre>
            <p>The final matrix operation we'll need is <b>matrix addition</b>. If <code>A</code> and <code>B</code> are both <code>N</code>-by-<code>M</code> matrices, then <code>C = (A+B) % Q</code> is the <code>N</code>-by-<code>M</code> matrix for which <code>C[i,j] == (A[i,j]+B[i,j]) % Q</code>. Of course, this only works if the matrices we're adding have the same dimensions.</p>
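<p>Continuing with the same <code>A</code> and <code>Q</code> as before, here is a quick check of modular matrix addition (the matrix <code>B</code> is arbitrary, chosen here just to show the wrap-around):</p>
<pre><code>import numpy as np

Q = 3329  # the ML-KEM modulus

A = np.matrix([[1, 2, 3],
               [4, 5, 6],
               [7, 8, 9]])

# B has the same shape as A; its entries are picked so some sums exceed Q.
B = np.matrix([[3328,    0,    0],
               [   0, 3325,    0],
               [   0,    0, 3320]])

# Entrywise addition followed by reduction mod Q: 1+3328 wraps to 0.
print((A + B) % Q)
# [[0 2 3]
#  [4 1 6]
#  [7 8 0]]</code></pre>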
    <div>
      <h4>Warm up</h4>
      <a href="#warm-up">
        
      </a>
    </div>
    <p>Enough maths — let's get to exchanging some keys. We start with the DH diagram from before and swap out the computations with matrix operations. Note that this protocol is not secure, but will be the basis of a secure key exchange mechanism we'll develop in the next section:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3qRFXuCjStyX8IdRgHogJi/0ea04a675e6b8b52fc74a2375827890c/image6.png" />
          </figure><ul><li><p>Alice and Bob agree on a public, <code>N</code>-by-<code>N</code> matrix <code>A</code>. This is analogous to the number $g$ that Alice and Bob agree on in the DH diagram.</p></li><li><p>Alice chooses a random length<code>-N</code> vector <code>s</code> and sends <code>t = A*s % Q</code> to Bob.</p></li><li><p>Bob chooses a random length<code>-N</code>  vector <code>r</code> and sends <code>u = r.T*A % Q</code> to Alice. You can also compute this as <code>(A.T*r).T % Q.</code></p></li></ul><p>The vectors <code>t</code> and <code>u</code> are analogous to DH key shares. After the exchange of these key shares, Alice and Bob can compute a shared secret. Alice computes the shared secret as <code>u*s % Q</code> and Bob computes the shared secret as <code>r.T*t % Q</code>. To see why they compute the same key, notice that <code>u*s == (r.T*A)*s == r.T*(A*s) == r.T*t.</code></p><p>In fact, this key exchange is essentially what happens in ML-KEM. However, we don't use this directly, but rather as part of a <b>public key encryption scheme</b>. Public key encryption involves three algorithms:</p><ul><li><p><code>key_gen():</code> The key generation algorithm that outputs a public encryption key <code>pk</code> and the corresponding secret decryption key <code>sk</code>.</p></li><li><p><code>encrypt()</code>: The encryption algorithm that takes the public key and a plaintext and outputs a ciphertext.</p></li><li><p><code>decrypt()</code>: The decryption algorithm that takes the secret key and a ciphertext and outputs the underlying plaintext. That is, <code>decrypt(sk, encrypt(pk, ptxt)) == ptxt</code> for any plaintext <code>ptxt</code>.</p></li></ul><p>We'll say the scheme is secure if, given a ciphertext and the public key used to encrypt it, no attacker can discern any information about the underlying plaintext without knowledge of the secret key. 
Once we have this encryption scheme, we then transform it into a <b>key-encapsulation mechanism</b> (the "KEM" in "ML-KEM") in the last step. A KEM is very similar to encryption except that the plaintext is always a randomly generated key.</p><p>Our encryption scheme is as follows:</p><ul><li><p><code>key_gen()</code>: To generate a key pair, we choose a random, square matrix <code>A</code> and a random column vector <code>s</code>. We set our public key to <code>(A,t=A*s % Q)</code> and our secret key to <code>s</code>. Notice that <code>t </code>is Alice's  key share from the key exchange protocol above.</p></li><li><p><code>encrypt()</code>: Suppose our plaintext <code>ptxt </code>is an integer in <code>range(Q)</code>. To encrypt <code>ptxt</code>, Bob generates his key share <code>u</code>. He then derives the shared secret and adds it to <code>ptxt</code>. The ciphertext has two components:</p></li></ul><blockquote><p><code>u = r.T*A % Q</code></p><p><code>v = (r.T*t + m) % Q</code></p></blockquote><p>Here <code>m </code>is a <code>1</code>-by-<code>1 </code>matrix containing the plaintext, i.e., <code>m = np.matrix([[ptxt]])</code>, and <code>r</code> is a random column vector.</p><ul><li><p><code>decrypt()</code>: To decrypt, Alice computes the shared secret and subtracts it from <code>v</code>:</p></li></ul><blockquote><p><code>m = (v - u*s) % Q</code></p></blockquote><p>Some readers will notice that this looks an awful lot like <a href="https://en.wikipedia.org/wiki/ElGamal_encryption"><u>El Gamal</u></a> encryption. This isn't a coincidence. Good cryptographers roll their own crypto; great cryptographers steal from good cryptographers.</p><p>Let's now put this together into code. The last thing we'll need is a method of generating random matrices and column vectors. We call this function <code>gen_mat() </code>below. Take a crack at implementing this yourself. 
Our scheme has two parameters: the modulus <code>Q</code> and the dimension <code>N</code> of the matrix and column vectors. The choice of <code>N</code> matters for security, but for now feel free to pick whatever value you want.</p>
            <pre><code>def key_gen():
    # Here `gen_mat()` returns an N-by-N matrix with entries
    # randomly chosen from `range(0, Q)`.
    A = gen_mat(N, N, 0, Q)
    # Like above except the matrix is N-by-1.
    s = gen_mat(N, 1, 0, Q)
    t = A*s % Q
    return ((A, t), s)

def encrypt(pk, ptxt):
    (A, t) = pk
    m = np.matrix([[ptxt]])
    r = gen_mat(N, 1, 0, Q)
    u = r.T*A % Q
    v = (r.T*t + m) % Q
    return (u, v)

def decrypt(sk, ctxt):
    s = sk
    (u, v) = ctxt
    m = (v - u*s) % Q
    return m[0,0]

# Test
(pk, sk) = key_gen()
assert decrypt(sk, encrypt(pk, 1)) == 1</code></pre>
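<p>If you want to check your work, here is one possible <code>gen_mat()</code> (a sketch built on NumPy's <code>randint</code>; note that a real implementation would draw from a cryptographically secure source of randomness rather than <code>np.random</code>):</p>

```python
import numpy as np

def gen_mat(rows, cols, lo, hi):
    # A `rows`-by-`cols` matrix with entries drawn uniformly at random
    # from `range(lo, hi)`. For example, `gen_mat(N, 1, 0, Q)` gives a
    # column vector with entries in `range(0, Q)`.
    return np.matrix(np.random.randint(lo, hi, size=(rows, cols)))
```

<p>With this in place, you can generate a key pair with <code>(pk, sk) = key_gen()</code> and run the test above.</p>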
            
    <div>
      <h4>Making the scheme secure (or "What is a lattice?")</h4>
      <a href="#making-the-scheme-secure-or-what-is-a-lattice">
        
      </a>
    </div>
    <p>By now, you might be wondering what on Earth a lattice even is. We promise we'll define it, but before we do, it'll help to understand why our warm-up scheme is insecure and what it'll take to fix it.</p><p>Readers familiar with linear algebra may already see the problem: in order for this scheme to be secure, it should be impossible for the attacker to recover the secret key <code>s</code>; but given the public <code>(A,t)</code>, we can immediately solve for <code>s</code> using <a href="https://en.wikipedia.org/wiki/Gaussian_elimination"><u>Gaussian elimination</u></a>.</p><p>In more detail, if <code>A</code> is invertible, we can write the secret key as <code>A</code><code><sup>-1</sup></code><code>*t == A</code><code><sup>-1</sup></code><code>*(A*s) == (A</code><code><sup>-1</sup></code><code>*A)*s == s,</code> where <code>A</code><code><sup>-1</sup></code> is the inverse of <code>A</code>. (When you multiply a matrix by its inverse, you get the identity matrix <code>I</code>, which simply takes a column vector to itself, i.e., <code>I*s == s.</code>) We can use Gaussian elimination to compute this matrix. Intuitively, all we're doing is solving a set of linear equations, where the entries of <code>s</code> are the unknown variables. (Note that this is possible even if <code>A</code> is not invertible.)</p><p>In order to make this encryption scheme secure, we need to make it a little... “messier”.</p>
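<p>To make the attack concrete, here is a sketch of the key-recovery step: Gaussian elimination mod <code>Q</code>, written with plain NumPy arrays (and <code>@</code> for multiplication) rather than <code>np.matrix</code>, and assuming <code>Q</code> is prime (as it is in ML-KEM) so that every nonzero pivot is invertible:</p>

```python
import numpy as np

Q, N = 3329, 3  # Q is prime, so every nonzero entry is invertible mod Q

def solve_mod_q(A, t):
    """Recover s from a public key (A, t = A*s % Q) by Gaussian elimination
    over the integers mod Q. Assumes A is invertible mod Q, which holds with
    overwhelming probability for a random A."""
    M = np.concatenate((A, t), axis=1) % Q  # augmented matrix [A | t]
    for col in range(N):
        # Find a row at or below the diagonal with a nonzero pivot.
        pivot = next(r for r in range(col, N) if M[r, col] != 0)
        M[[col, pivot]] = M[[pivot, col]]
        # Scale the pivot row so the pivot entry becomes 1.
        M[col] = M[col] * pow(int(M[col, col]), -1, Q) % Q
        # Eliminate this column from every other row.
        for r in range(N):
            if r != col:
                M[r] = (M[r] - M[r, col] * M[col]) % Q
    return M[:, N:]  # what's left in the augmented column is s

rng = np.random.default_rng(0)
A = rng.integers(0, Q, size=(N, N))
s = rng.integers(0, Q, size=(N, 1))
t = A @ s % Q
assert np.array_equal(solve_mod_q(A, t), s)  # the "secret" falls right out
```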
    <div>
      <h5>Let's get messy</h5>
      <a href="#lets-get-messy">
        
      </a>
    </div>
    <p>For starters, we need to make it hard to recover the secret key from the public key. Let's try the following: generate another random vector <code>e</code> and add it into <code>A*</code>s. Our key generation algorithm becomes:</p>
            <pre><code>def key_gen():
    A = gen_mat(N, N, 0, Q)
    s = gen_mat(N, 1, 0, Q)
    e = gen_mat(N, 1, 0, Q)
    t = (A*s + e) % Q
    return ((A, t), s)</code></pre>
<p>Our formula for the column vector component of the public key, <code>t</code>, now includes an additive term <code>e</code>, which we'll call the <b>error</b>. Like the secret key, the error is just a random vector. </p><p>Notice that the previous attack no longer works: since <code>A</code><code><sup>-1</sup></code><code>*t == A</code><code><sup>-1</sup></code><code>*(A*s + e) == A</code><code><sup>-1</sup></code><code>*(A*s) + A</code><code><sup>-1</sup></code><code>*e == s + A</code><code><sup>-1</sup></code><code>*e</code>, we need to know <code>e</code> in order to compute <code>s</code>.</p><p>Great, but this patch creates another problem. Take a second to plug this new key generation algorithm into your implementation and test it out. What happens?</p><p>You should see that <code>decrypt()</code> now outputs garbage. We can see why using a little algebra:</p><blockquote><p>	<code>(v - u*s) == (r.T*t + m) - (r.T*A)*s</code></p><p><code>                == r.T*(A*s + e) + m - (r.T*A)*s</code></p><p><code>                == r.T*(A*s) + r.T*e + m - r.T*(A*s)</code></p><p><code>                == r.T*e + m</code></p></blockquote><p>The entries of <code>r</code> and <code>e</code> are sampled randomly, so <code>r.T*e</code> is also uniformly random. It's as if we encrypted <code>m</code> with a <a href="https://en.wikipedia.org/wiki/One-time_pad"><u>one-time pad</u></a>, then threw away the one-time pad!</p><h6>Handling decryption errors</h6><p>What can we do about this? First, it would help if <code>r.T*e</code> were small so that decryption yields something that's close to the plaintext. Imagine we could generate <code>r</code> and <code>e</code> in such a way that <code>r.T*e</code> were in <code>range(-epsilon, epsilon+1)</code> for some small <code>epsilon</code>. 
Then <code>decrypt</code> would output a number in <code>range(ptxt-epsilon, ptxt+epsilon+1)</code>, which would be pretty close to the actual plaintext.</p><p>However, we need to do better than get close. Imagine your browser failing to load your favorite website one-third of the time because of a decryption error. Nobody has time for that.</p><p>ML-KEM reduces the probability of decryption errors by being clever about how we encode the plaintext. Suppose all we want to do is encrypt a single bit, i.e., <code>ptxt </code>is either <code>0</code> or <code>1</code>. Consider the numbers in <code>range(Q)</code>, and split the number line into four chunks of roughly equal length:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3xojaMtl881io3pH4BdXNe/d47aede96c751942b82fedf0ca396450/image2.png" />
          </figure><p>Here we've labeled the region around zero (<code>-Q/4</code> to <code>Q/4</code> modulo <code>Q</code>) with <code>ptxt=0</code> and the region far away from zero with <code>ptxt=1</code>. To encode the bit, we set it to the integer corresponding to the middle of its range, i.e., <code>m = np.matrix([[ptxt * Q//2]])</code>. (Note the double "<code>//</code>" — this denotes integer division in Python.) To decode, we choose the <code>ptxt</code> corresponding to whatever range<code> m[0,0] </code>is in. That way if the decryption error is small, then we're highly likely to end up in the correct range.</p><p>Now all that's left is to ensure the decryption error, <code>r.T*e</code>, is small. We do this by sampling <b>short vectors </b><code>r</code> and <code>e</code>. By "short" we mean the entries of these vectors are sampled from a range that is much smaller than <code>range(Q)</code>. In particular, we'll pick some small positive integer <code>beta</code> and sample entries <code>range(-beta,beta+1)</code>.</p><p>How do we choose <code>beta</code>? Well, it should be small enough that decryption succeeds with overwhelming probability, but not so small that <code>r</code> and <code>e</code> are easy to guess and our scheme is broken. Take a minute or two to play with this. The parameters we can vary are:</p><ul><li><p>the modulus <code>Q</code></p></li><li><p>the dimension of the column vectors <code>N</code></p></li><li><p>the shortness parameter <code>beta</code></p></li></ul><p>For what ranges of these parameters is the decryption error low but the secret vectors are hard to guess? For what ranges is our scheme most efficient, in terms of runtime and communication cost (size of the public key plus the ciphertext)? 
We'll give a concrete answer at the end of this section, but in the meantime, we encourage you to play with this a bit.</p><h6>Gauss strikes back</h6><p>At this point, we have a working encryption scheme that mitigates at least one key-recovery attack. We've come pretty far, but we have at least one more problem.</p><p>Take another look at our formula for the ciphertext  <code>ctxt = (u,v)</code>. What would happen if we managed to recover the random vector <code>r</code>? That would be catastrophic, since <code>v == r.T*t + m</code>, and we already know <code>t</code> (part of the public key) and<code> v </code>(part of the ciphertext).</p><p>Just as we were able to compute the secret key from the public key in our initial scheme, we can recover the encryption randomness <code>r</code> from the ciphertext component <code>u</code> using Gaussian elimination. Again, this is just because <code>r</code> is the solution to a system of linear equations.</p><p>We can mitigate this plaintext-recovery attack just as before, by adding some noise. In particular, we'll generate a short vector according to <code>gen_mat(N,1,-beta,beta+1)</code> and add it into <code>u</code>. We also need to add noise to <code>v</code> in the same way, for reasons that we'll discuss in the next section.</p><p>Once again, adding noise increases the probability of a decryption error, but this time the magnitude of the error also depends on the secret key <code>s</code>. To see this, recall that during decryption, we multiply <code>u</code> by <code>s</code> (to compute the shared secret), and the error vector is an additive term. We'll therefore need <code>s</code> to be a short vector as well.</p><p>Let's now put together everything we've learned into an updated encryption scheme. Our scheme now has three parameters, <code>Q</code>, <code>N</code>, and <code>beta</code>, and can be used to encrypt a single bit:</p>
            <pre><code>def key_gen():
    A = gen_mat(N, N, 0, Q)
    s = gen_mat(N, 1, -beta, beta+1)
    e1 = gen_mat(N, 1, -beta, beta+1)
    t = (A*s + e1) % Q
    return ((A, t), s)

def encrypt(pk, ptxt):
    (A, t) = pk
    m = np.matrix([[ptxt*(Q//2) % Q]])
    r = gen_mat(N, 1, -beta, beta+1)
    e2 = gen_mat(N, 1, -beta, beta+1)
    e3 = gen_mat(1, 1, -beta, beta+1)
    u = (r.T*A + e2.T) % Q
    v = (r.T*t + e3 + m) % Q
    return (u, v)

def decrypt(sk, ctxt):
    s = sk
    (u, v) = ctxt
    m = (v - u*s) % Q
    if m[0,0] in range(Q//4, 3*Q//4):
        return 1
    return 0

# Test
(pk, sk) = key_gen()
assert decrypt(sk, encrypt(pk, 0)) == 0
assert decrypt(sk, encrypt(pk, 1)) == 1</code></pre>
            <p>Before moving on, try to find parameters for which the scheme works and for which the secret and error vectors seem hard to guess.</p>
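<p>One way to experiment is to wrap the scheme in a small trial loop and count decryption failures. Here's a self-contained sketch, with a NumPy-based <code>gen_mat()</code>, the <code>e2</code> term transposed so the <code>1</code>-by-<code>N</code> shapes line up, and toy parameters chosen so the decryption error stays well below <code>Q//4</code>:</p>

```python
import numpy as np

# Toy parameters, for illustration only. With N=16 and beta=2 the decryption
# error r.T*e1 + e3 - e2.T*s has magnitude at most 2*N*beta**2 + beta = 130,
# comfortably below Q//4 = 832, so decryption always succeeds. Try raising
# beta (or shrinking Q) and watch the error count climb.
N, Q, beta = 16, 3329, 2

def gen_mat(rows, cols, lo, hi):
    return np.random.randint(lo, hi, size=(rows, cols))

def key_gen():
    A = gen_mat(N, N, 0, Q)
    s = gen_mat(N, 1, -beta, beta+1)
    e1 = gen_mat(N, 1, -beta, beta+1)
    return ((A, (A @ s + e1) % Q), s)

def encrypt(pk, ptxt):
    (A, t) = pk
    m = np.array([[ptxt * (Q//2) % Q]])
    r = gen_mat(N, 1, -beta, beta+1)
    e2 = gen_mat(N, 1, -beta, beta+1)
    e3 = gen_mat(1, 1, -beta, beta+1)
    return ((r.T @ A + e2.T) % Q, (r.T @ t + e3 + m) % Q)

def decrypt(sk, ctxt):
    (u, v) = ctxt
    m = (v - u @ sk) % Q
    return 1 if Q//4 <= m[0, 0] < 3*Q//4 else 0

errors = 0
for _ in range(1000):
    (pk, sk) = key_gen()
    bit = np.random.randint(0, 2)
    errors += decrypt(sk, encrypt(pk, bit)) != bit
print(f"decryption errors: {errors}/1000")
```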
    <div>
      <h5>Learning with errors</h5>
      <a href="#learning-with-errors">
        
      </a>
    </div>
    <p>So far we have a functioning encryption scheme for which we've mitigated two attacks, one a key-recovery attack and the other a plaintext-recovery attack. There seems to be no other obvious way of breaking our scheme, unless we choose parameters that are so weak that an attacker can easily guess the secret key s or ciphertext randomness r. Again, these vectors need to be short in order to prevent decryption errors, but not so short that they are easy to guess. (Likewise for the error terms.)</p><p>Still, there may be other attacks that require a little more sophistication to pull off. For instance, there might be some mathematical analysis we can do to recover, or at least make a good guess of, a portion of the ciphertext randomness. This raises a more fundamental question: in general, how do we establish that cryptosystems like this are actually secure?</p><p>As a first step, cryptographers like to try and reduce the attack surface. <b>Modern cryptosystems are designed so that the problem of attacking the scheme reduces to solving some other problem that is easier to reason about.</b></p><p>Our public key encryption scheme is an excellent illustration of this idea. Think back to the key- and plaintext-recovery attacks from the previous section. What do these attacks have in common?</p><p>In both instances, the attacker knows some public vector that allowed it to recover a secret vector:</p><ul><li><p>In the key-recovery attack, the attacker knew <code>t</code> for which <code>A*s == t.</code></p></li><li><p>In the plaintext-recovery attack, the attacker knew <code>u</code> for which <code>r.T*A == u </code>(or, equivalently,<code> A.T*r == u.T</code>).</p></li></ul><p>The fix in both cases was to construct the public vector in such a manner that it is hard to solve for the secret, namely, by adding an error term. However, ideally the public vector would reveal no information about the secret whatsoever. 
This ideal is formalized by the <b>Learning With Errors (LWE)</b> problem.</p><p>The LWE problem asks the attacker to distinguish between two distributions. Concretely, imagine we flip a coin, and if it comes up heads, we sample from the first distribution and give the sample to the attacker; and if the coin comes up tails, we sample from the second distribution and give the sample to the attacker. The distributions are as follows:</p><ul><li><p><code>(A,t=A*s + e</code>) where <code>A</code> is a random matrix generated with <code>gen_mat(N,N,0,Q)</code> and <code>s</code> and <code>e</code> are short vectors generated with <code>gen_mat(N,1,-beta,beta+1)</code>.</p></li><li><p><code>(A,t) </code>where <code>A</code> is a random matrix generated with <code>gen_mat(N,N,0,Q)</code> and <code>t</code> is a random vector generated with <code>gen_mat(N,1,0,Q)</code>.</p></li></ul><p>The first distribution corresponds to what we actually do in the encryption scheme; in the second, <code>t</code> is just a random vector, and no longer a secret vector at all. We say that the LWE problem is "hard" if no attacker is able to guess the coin flip with probability significantly better than one-half.</p><p>Our encryption is <i>passively </i>secure — meaning the ciphertext doesn't leak any information about the plaintext — if the LWE problem is hard for the parameters we chose. To see why, notice that both the public key and ciphertext look like LWE instances; if we can replace each instance with an instance of the random distribution, then the ciphertext would be completely independent of the plaintext and therefore leak no information about it at all. Note that, for this argument to go through, we also have to add the error term <code>e3</code> to the ciphertext component <code>v</code>.</p>
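<p>The distinguishing game is straightforward to write down in code. Here's a sketch (the parameters are placeholders, and <code>gen_mat()</code> is implemented with NumPy):</p>

```python
import numpy as np

N, Q, beta = 16, 3329, 2  # placeholder parameters

def gen_mat(rows, cols, lo, hi):
    return np.random.randint(lo, hi, size=(rows, cols))

def sample_lwe():
    # First distribution: t hides a short secret s under short noise e.
    A = gen_mat(N, N, 0, Q)
    s = gen_mat(N, 1, -beta, beta+1)
    e = gen_mat(N, 1, -beta, beta+1)
    return (A, (A @ s + e) % Q)

def sample_random():
    # Second distribution: t is a uniformly random vector, independent of A.
    return (gen_mat(N, N, 0, Q), gen_mat(N, 1, 0, Q))

def challenge():
    # Flip a coin and hand the attacker one sample. The LWE problem is to
    # guess the coin from (A, t) alone with probability better than 1/2.
    coin = np.random.randint(0, 2)
    return coin, (sample_lwe() if coin == 0 else sample_random())
```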
    <div>
      <h5>Choosing the parameters</h5>
      <a href="#choosing-the-parameters">
        
      </a>
    </div>
    <p>We've established that if solving the LWE problem is hard for parameters <code>N</code>, <code>Q</code>, and <code>beta</code>, then so is breaking our public key encryption scheme. What's left for us to do is tune the parameters so that solving LWE is beyond the reach of any attacker we can think of. This is where lattices come in.</p><h6>Lattices</h6><p>A <b>lattice </b>is an infinite grid of points in high-dimensional space. A two-dimensional lattice might look something like this:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5Js1SkG7bWyCNwPxKaYf3i/277a665f0e44ec7594e7e11b35958bc8/image4.png" />
          </figure><p>The points always follow a clear pattern that resembles "lattice work" you might see in a garden:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5kOimwlM1guY2YMj9Gn9xu/62ee11ddde1e18ba0cf3151a0b87092e/image5.jpg" />
</figure><p><sup><i>(Source: https://picryl.com/media/texture-wood-vintage-backgrounds-textures-8395bb)</i></sup></p><p>For cryptography, we care about a special class of lattices, those defined by a matrix <code>P</code> that "recognizes" points in the lattice. That is, the lattice recognized by <code>P</code> is the set of vectors <code>v</code> for which <code>P*v % Q == 0</code>, where "<code>0</code>" denotes the all-zero vector. The all-zero vector is <code>np.zeros((N,1), dtype=int)</code> in NumPy.</p><p>Readers familiar with linear algebra may have a different definition of lattices in mind: in general, a lattice is the set of points obtained by taking integer linear combinations of some basis. Our lattices can also be formulated in this way, i.e., for a matrix <code>P</code> that recognizes a lattice, we can compute the basis vectors that generate the lattice. However, we don't much care about this representation here.</p><p>The LWE problem boils down to distinguishing a set of points that are "close to" the lattice from a set of points that are "far away from" the lattice. We construct these points from an LWE instance and a random <code>(A,t)</code> respectively. Here we have an LWE sample (left) and a sample from the random distribution (right):</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/45bmYu3jdn8g7cI1i7WC3t/c82617991463169ecd83b1b944aa879a/image10.png" />
          </figure><p>What this shows is that the points of the LWE instance are much closer to the lattice than the random instance. This is indeed the case on average. However, while distinguishing LWE instances from random is easy in two dimensions, it gets harder in higher dimensions.</p><p>Let's take a look at how we construct these points. First, let's take an LWE instance <code>(A,t=(A*s + e) % Q</code>) and consider the lattice recognized by the matrix <code>P</code> we get by concatenating <code>A</code> with the identity matrix <code>I</code>. This might look something like this (<code>N=3</code>):</p>
            <pre><code>A = gen_mat(N, N, 0, Q)
P = np.concatenate((A, np.identity(N, dtype=int)), axis=1)
print(P)
# [[1570  634  161    1    0    0]
#  [1522 1215  861    0    1    0]
#  [ 344 2651 1889    0    0    1]]</code></pre>
            <p>Notice that we can compute <code>t</code> by multiplying <code>P</code> by the vector we get by concatenating <code>s</code> and <code>e</code> (<code>beta=2</code>):</p>
            <pre><code>s = gen_mat(N, 1, -beta, beta+1)
e = gen_mat(N, 1, -beta, beta+1)
t = (A*s + e) % Q
z = np.concatenate((s, e))
print(z)
# [[-2]
#  [ 0]
#  [-2]
#  [ 0]
#  [-1]
#  [ 2]]
assert np.array_equal(t, P*z % Q)</code></pre>
<p>Let <code>z</code> denote this vector and consider the set of points <code>v</code> for which <code>P*v % Q == t</code>. By definition, we say this set of points is "close to" the lattice because <code>z</code> is a short vector. (Remember: by "short" we mean its entries are bounded around <code>0</code> by <code>beta</code>.)</p><p>Now take a random <code>(A,t)</code> and consider the set of points <code>v</code> for which <code>P*v % Q == t</code>. We won't prove it, but it is a fact that this set of points is likely to be "far away from" the lattice in the sense that there is no short vector <code>z</code> for which <code>P*z % Q == t</code>.</p><p>Intuitively, solving LWE gets harder as <code>z</code> gets longer. Indeed, increasing the average length of <code>z</code> (by making <code>beta</code> larger) increases the average distance to the lattice, making it look more like a random instance: </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4oN8fydfP54b6EDLIOTYjI/85813f716b48164077b8f6d27586f768/unnamed__1_.png" />
          </figure><p>On the other hand, making <code>z</code> too long creates another problem.</p><h6>Breaking lattice cryptography by finding short vectors</h6><p>Given a random matrix <code>A</code>, the <b>Short Integer Solution (SIS)</b> problem is to find short vectors (i.e., whose entries are bounded by <code>beta</code>) <code>z1</code> and <code>z2</code> for which <code>(A*z1 + z2) % Q</code> is zero. Notice that this is equivalent to finding a short vector <code>z</code> in the lattice recognized by <code>P</code>:</p>
            <pre><code>z = np.concatenate((z1, z2))
assert np.array_equal((A*z1 + z2) % Q, P*z % Q)</code></pre>
            <p>If we had a (quantum) computer program for solving SIS, then we could use this program to solve LWE as well: if <code>(A,t) </code>is an LWE instance, then <code>z1.T*t </code>will be small; otherwise, if <code>(A,t)</code> is random, then <code>z1.T*t </code>will be uniformly random. (You can convince yourself of this using a little algebra.) Therefore, in order for our encryption scheme to be secure, it must be hard to find short vectors in the lattice defined by those parameters.</p><p>Intuitively, finding long vectors in the lattice is easier than finding short ones, which means that solving the SIS problem gets easier as <code>beta </code>gets closer to <code>Q</code>. On the other hand, as <code>beta </code> gets closer to <code>0</code>, it gets easier to distinguish LWE instances from random!</p><p><b>This suggests a kind of Goldilocks zone for LWE-based encryption</b>: if the secret and noise vectors are too short, then LWE is easy; but if the secret and noise vectors are too long, then SIS is easy. The optimal choice is somewhere in the middle.</p><h6>Enough math, just give me my parameters!</h6><p>To tune our encryption scheme, we want to choose parameters for which the most efficient known algorithms (quantum or classical) for solving LWE are out of reach for any attacker with as many resources as we can imagine (and then some, in case new algorithms are discovered). But how do we know which attacks to look out for?</p><p>Fortunately, the community of expert lattice cryptographers and cryptanalysts maintains a tool called <a href="https://github.com/malb/lattice-estimator"><u>lattice-estimator</u></a> that estimates the complexity of the best known (quantum) algorithms for lattice problems relevant to cryptography. Here's what we get when we run this tool for ML-KEM (this requires <a href="https://www.sagemath.org/"><u>Sage</u></a> to run):</p>
            <pre><code>sage: from estimator import *
sage: res = LWE.estimate.rough(schemes.Kyber768)
usvp        :: rop: ≈2^182.2, red: ≈2^182.2, δ: 1.002902, β: 624, d: 1427, tag: usvp
dual_hybrid :: rop: ≈2^174.3, red: ≈2^174.3, guess: ≈2^162.5, β: 597, p: 4, ζ: 10, t: 60, β': 597, N: ≈2^122.7, m: 768</code></pre>
<p>The number that we're most interested in is "<code>rop</code>", which estimates the amount of computation the attack would consume. Playing with this tool a bit, we eventually find some parameters for our scheme for which the "<code>usvp</code>" and "<code>dual_hybrid</code>" attacks have comparable complexity. However, lattice-estimator identifies an attack it calls "<code>arora-gb</code>" that applies to our scheme, but not to ML-KEM, and that has much lower complexity (<code>N=600</code>, <code>Q=3329</code>, and <code>beta=4</code>):</p>
            <pre><code>sage: res = LWE.estimate.rough(LWE.Parameters(n=600, q=3329, Xs=ND.Uniform(-4,4), Xe=ND.Uniform(-4,4)))
usvp        :: rop: ≈2^180.2, red: ≈2^180.2, δ: 1.002926, β: 617, d: 1246, tag: usvp
dual_hybrid :: rop: ≈2^226.2, red: ≈2^225.4, guess: ≈2^224.9, β: 599, p: 3, ζ: 10, t: 0, β': 599, N: ≈2^174.8, m: 600
arora-gb    :: rop: ≈2^129.4, dreg: 9, mem: ≈2^129.4, t: 4, m: ≈2^64.7</code></pre>
<p>We'd have to bump the parameters even further to bring the scheme to a regime that has comparable security to ML-KEM.</p><p>Finally, a word of warning: when designing lattice cryptography, determining whether our scheme is secure requires a lot more than estimating the cost of generic attacks on our LWE parameters. In the absence of a mathematical proof of security in a realistic adversarial model, we can't rule out other ways of breaking our scheme. Tread lightly, fair traveler, and bring a friend along for the journey.</p>
    <div>
      <h4>Making the scheme efficient</h4>
      <a href="#making-the-scheme-efficient">
        
      </a>
    </div>
    <p>Now that we understand how to encrypt with LWE, let's take a quick look at how to make our scheme efficient.</p><p>The main problem with our scheme is that we can only encrypt a bit at a time. This is because we had to split the  <code>range(Q)</code> into two chunks, one that encodes <code>1</code> and another that encodes <code>0</code>. We could improve the bit rate by splitting the range into more chunks, but this would make decryption errors more likely.</p><p>Another problem with our scheme is that the runtime depends heavily on our security parameters. Encryption requires <code>O(N</code><code><sup>2</sup></code><code>)</code> multiplications (multiplication is the most expensive part of a secure implementation of modular arithmetic), and in order for our scheme to be secure, we need to make <code>N</code> quite large.</p><p>ML-KEM solves both of these problems by replacing modular arithmetic with arithmetic over a <a href="https://en.wikipedia.org/wiki/Polynomial_ring"><b><u>polynomial ring</u></b></a>. This means the entries of our matrices will be polynomials rather than integers. We need to define what it means to add, subtract, and multiply polynomials, but once we've done that, everything else about the encryption scheme is the same.</p><p>In fact, you probably learned polynomial arithmetic in grade school. The only thing you might not be familiar with is polynomial modular reduction. To multiply two polynomials $f(X)$ and $g(X)$, we start by multiplying $f(X)\cdot g(X)$ as usual. Then we're going to <b>divide </b>$f(X)\cdot g(X)$ by some special polynomial — ML-KEM uses $X^{256}+1$ — and take the remainder. We won't try to explain this algorithm, but the takeaway is that the result is a polynomial with $256$ coefficients, each of which is an integer in <code>range(Q)</code>.</p><p>The main advantage of using a polynomial ring for arithmetic is that we can pack more bits into the ciphertext. 
Our formula for the ciphertext is exactly the same (<code>u=r.T*A + e2, v=r.T*t + e3 + m</code>), but this time the plaintext <code>m </code>encodes a polynomial. Each coefficient of the polynomial encodes a bit, and we'll handle decryption errors just as we did before, by splitting <code>range(Q)</code> into two chunks, one that encodes <code>1</code> and another that encodes <code>0</code>. This allows us to reliably encrypt 256 bits (32 bytes) per ciphertext.</p><p>Another advantage of using polynomials is that it significantly reduces the dimension of the matrix without impacting security. Concretely, the most widely used variant of ML-KEM, ML-KEM-768, uses a <code>3</code>-by-<code>3</code> matrix <code>A</code>, so just <code>9 </code>polynomials in total. (Note that $256 \cdot 3 = 768$, hence the name "ML-KEM-768".) However, note that we have to be careful in how we choose the modulus: $X^{256}+1$ is special in that it does not exhibit any algebraic structure that is known to permit attacks.</p><p>The choices of <code>Q=3329</code> for the coefficient modulus and $X^{256}+1$ for the polynomial modulus have one more benefit. They allow polynomial multiplication to be carried out using the <a href="https://eprint.iacr.org/2024/585"><u>NTT algorithm</u></a>, which massively reduces the number of multiplications and additions we have to perform. In fact, this optimization is a major reason why ML-KEM is sometimes faster in terms of CPU time than key exchange with elliptic curves.</p><p>We won't get into how NTT works here, except to say that the algorithm will look familiar to you if you've ever implemented RSA. In both cases we use the <a href="https://en.wikipedia.org/wiki/Chinese_remainder_theorem"><u>Chinese Remainder Theorem</u></a> to split multiplication up into multiple, cheaper multiplications with smaller moduli.</p>
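<p>To make the ring arithmetic concrete, here's a sketch of multiplication in ML-KEM's polynomial ring: schoolbook multiplication followed by reduction modulo $X^{256}+1$ and <code>Q</code>. (Real implementations use the NTT instead; this is just the naive version.)</p>

```python
import numpy as np

Q, D = 3329, 256  # ML-KEM's coefficient modulus and polynomial degree

def poly_mul(f, g):
    """Multiply two elements of Z_Q[X]/(X^D + 1), represented as length-D
    arrays of coefficients (lowest degree first)."""
    h = np.convolve(f, g)            # ordinary product, degree up to 2D-2
    res = np.zeros(D, dtype=np.int64)
    for i, c in enumerate(h):
        if i < D:
            res[i] += c
        else:
            res[i - D] -= c          # fold back using X**D == -1
    return res % Q

# Example: X * X**(D-1) == X**D, which reduces to -1, i.e. Q-1 in this ring.
f = np.zeros(D, dtype=np.int64); f[1] = 1      # f(X) = X
g = np.zeros(D, dtype=np.int64); g[D-1] = 1    # g(X) = X**(D-1)
print(poly_mul(f, g)[0])  # 3328
```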
    <div>
      <h4>From public key encryption to ML-KEM</h4>
      <a href="#from-public-key-encryption-to-ml-kem">
        
      </a>
    </div>
<p>The last step to build ML-KEM is to make the scheme secure against chosen ciphertext attacks (CCA). Currently, it's only secure against chosen plaintext attacks (CPA), which basically means that the ciphertext leaks no information about the plaintext, regardless of the distribution of plaintexts. CCA security is stronger in that it gives the attacker access to <a href="https://en.wikipedia.org/wiki/Adaptive_chosen-ciphertext_attack#Practical_attacks"><u>decryptions of ciphertexts of its choosing</u></a>. (Of course, it's not allowed to decrypt the target ciphertext itself.) The specific transform used in ML-KEM results in a CCA-secure KEM ("Key-Encapsulation Mechanism").</p><p>Chosen ciphertext attacks might seem a bit abstract, but in fact they formalize a realistic threat model for many applications of KEMs (and public key encryption for that matter). For example, suppose we use the scheme in a protocol in which the server authenticates itself to a client by proving it was able to decrypt a ciphertext generated by the client. In this kind of protocol, the server acts as a sort of "decryption oracle" in which its responses to clients depend on the secret key. Unless the scheme is CCA secure, this oracle can be abused by an attacker to leak information about the secret key over time, allowing it to eventually impersonate the server.</p><p>ML-KEM incorporates several more optimizations to make it as fast and as compact as possible. For example, instead of generating a random matrix <code>A</code>, we can derive it from a random, 32-byte string (called a "seed") using a hash-based primitive called a XOF ("eXtendable Output Function"); in the case of ML-KEM, this XOF is <a href="https://pycryptodome.readthedocs.io/en/latest/src/hash/shake128.html"><u>SHAKE128</u></a>. 
This significantly reduces the size of the public key.</p><p>Another interesting optimization is that the polynomial coefficients (integers in <code>range(Q)</code>) in the ciphertext are compressed by rounding off the least significant bits of each coefficient, thereby reducing the overall size of the ciphertext.</p><p>All told, for the most widely deployed parameters (ML-KEM-768), the public key is 1184 bytes and the ciphertext is 1088 bytes. There's no obvious way to reduce this, except by reducing the size of the encapsulated key or the size of the public matrix <code>A</code>. The former would make ML-KEM useful for fewer applications, and the latter would reduce the security margin.</p><p>Note that there are <a href="https://eprint.iacr.org/2022/031"><u>other lattice schemes</u></a> that are smaller, but they are based on different hardness assumptions and are still undergoing analysis.</p>
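<p>The rounding trick can be sketched like so; this mirrors the shape of ML-KEM's compression functions, though the helper names and the check below are illustrative rather than taken from the standard:</p>

```python
Q = 3329  # ML-KEM's coefficient modulus

def compress(x, d):
    # Keep roughly the d most significant bits of a coefficient in
    # range(Q), mapping it into range(2**d).
    return round((2**d / Q) * x) % 2**d

def decompress(y, d):
    # Map back into range(Q). The dropped low-order bits are gone, so
    # this only approximates the original coefficient.
    return round((Q / 2**d) * y)

# The round trip introduces a small, bounded error per coefficient
# (at most about Q/2**(d+1)), which LWE decryption already tolerates:
d = 10
for x in [0, 1, 1664, 2023, 3328]:
    e = (decompress(compress(x, d), d) - x) % Q
    assert min(e, Q - e) <= 2
```

<p>In ML-KEM-768, the <code>u</code> component of the ciphertext is compressed to 10 bits per coefficient and <code>v</code> to 4, which is where the ciphertext savings come from.</p>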
    <div>
      <h3>Authentication</h3>
      <a href="#authentication">
        
      </a>
    </div>
    <p>In the previous section, we learned about ML-KEM, the algorithm already in use to make encryption PQ-secure. However, encryption is only one piece of the puzzle: establishing a secure connection also requires <b>authenticating</b> the server — and sometimes the client, depending on the application.</p><p>Authentication is usually provided by a <b>digital signature scheme</b>, which uses a secret key to sign a message and a public key to verify the signature. The signature schemes used today aren't PQ-secure: a quantum computer can be used to compute the secret key corresponding to a server's public key, then use this key to impersonate the server.</p><p>While this threat is less urgent than the threat to encryption, mitigating it is going to be more complicated. Over the years, we've bolted a number of signatures onto the <a href="https://www.cloudflare.com/learning/ssl/what-happens-in-a-tls-handshake/">TLS handshake</a> in order to meet the evolving requirements of the web <a href="https://en.wikipedia.org/wiki/Public_key_infrastructure"><u>PKI</u></a>. We have PQ alternatives for these signatures, one of which we'll study in this section, but so far these signatures and their public keys are too large (i.e., take up too many bytes) to make comfortable replacements for today's schemes. Barring some breakthrough in <a href="https://blog.cloudflare.com/another-look-at-pq-signatures/"><u>NIST's ongoing standardization effort</u></a>, we will have to re-engineer TLS and the web PKI to use fewer signatures.</p><p>For now, let's dive into the PQ signature scheme we're likely to see deployed first: <a href="https://csrc.nist.gov/pubs/fips/204/final"><u>ML-DSA</u></a>, a.k.a. Dilithium. The design of ML-DSA follows a similar template to ML-KEM. 
We start by building some intermediate primitive, then we transform that primitive into the primitive we want, in this case a signature scheme.</p><p>ML-DSA is quite a bit more involved than ML-KEM, so we're going to try to boil it down even further and just try to get across the main ideas.</p>
    <div>
      <h4>Warm up</h4>
      <a href="#warm-up">
        
      </a>
    </div>
    <p>Whereas ML-KEM is basically El Gamal encryption with elliptic curves replaced with lattices, ML-DSA is basically the <a href="https://www.zkdocs.com/docs/zkdocs/zero-knowledge-protocols/schnorr/"><u>Schnorr identification protocol</u></a> with elliptic curves replaced with lattices. Schnorr's protocol is used by a <b>prover </b>to convince a <b>verifier</b> that it knows the secret key associated with its public key without revealing the secret key itself. The protocol has three moves and is executed with four algorithms:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7lV5fU3j9rRQ2j2Y4lpW8r/a12908c078ec46e0fa35ef674fcfdb07/image8.png" />
          </figure><ol><li><p><code>initialize()</code>: The prover initializes the protocol and sends a <b>commitment</b> to the verifier</p></li><li><p><code>challenge()</code>:  The verifier receives the commitment and sends the prover a <b>challenge</b></p></li><li><p><code>finish()</code>: The prover receives the challenge and sends the verifier the <b>proof</b> </p></li><li><p><code>verify()</code>:  Finally, the verifier uses the proof to decide whether the prover knows the secret key</p></li></ol><p>We get the high-level structure of ML-DSA by making this protocol non-interactive. In particular, the prover derives the challenge itself by hashing the commitment together with the message to be signed. The signature consists of the commitment and proof: to verify the signature, the verifier recomputes the challenge from the commitment and message and runs <code>verify()</code> as usual.</p><p>Let's jump right into building Schnorr's identification protocol from lattices. If you've never seen this protocol before, then this will look a little like black magic at first. We'll go through it slowly enough to see how and why it works.</p><p>Just like for ML-KEM, our public key is an LWE instance <code>(A,t=A*s1 + s2)</code>. However, this time our secret key is the <i>pair</i> of short vectors <code>(s1,s2)</code>, i.e., it includes the error term. Otherwise, key generation is exactly the same:</p>
            <pre><code>def key_gen():
    A = gen_mat(N, N, 0, Q)
    s1 = gen_mat(N, 1, -beta, beta+1)
    s2 = gen_mat(N, 1, -beta, beta+1)
    t = (A*s1 + s2) % Q
    return ((A, t), (s1, s2))</code></pre>
            <p>To initialize the protocol, the prover generates another LWE instance <code>(A,w=A*y1 + y2)</code>. You'll see why in just a moment. The prover sends the <i>hash</i> of <code>w</code> as its commitment:</p>
            <pre><code>def initialize(A):
    y1 = gen_mat(N, 1, -beta, beta+1)
    y2 = gen_mat(N, 1, -beta, beta+1)
    w = (A*y1 + y2) % Q
    return (H(w), (y1, y2))</code></pre>
            <p>Here <code>H </code>is some cryptographic hash function, like <a href="https://en.wikipedia.org/wiki/SHA-3"><u>SHA-3</u></a>. The prover stores the secret vectors<code> (y1,y2)</code> for use in its next move.</p><p>Now it's time for the verifier's challenge. The challenge is just an integer, but we need to be careful about how we choose it. For now let's just pick it at random:</p>
            <pre><code>def challenge():
    return random.randrange(0, Q)</code></pre>
            <p>Remember: when we turn this protocol into a digital signature, the challenge is derived from the commitment, <code>H(w)</code>, and the message. The range of this hash function must be the same as the set of outputs of <code>challenge()</code>.</p><p>Now comes the fun part. The proof is a pair of vectors<code> (z1,z2)</code> satisfying <code>A*z1 + z2 == c*t + w</code>. We can easily produce this proof if we know the secret key:</p><p>	<code>z1 = (c*s1 + y1) % Q</code></p><p>        <code>z2 = (c*s2 + y2) % Q</code></p><p>Then <code>A*z1 + z2 == A*(c*s1 + y1) + (c*s2 + y2) == c*(A*s1 + s2) + (A*y1 + y2) == c*t + w</code>. Our goal is to design the protocol such that it's hard to come up with <code>(z1,z2)</code> without knowing <code>(s1,s2)</code>, even after observing many executions of the protocol.</p><p>Here are the <code>finish() </code>and <code>verify() </code>algorithms for completeness:</p>
            <pre><code>def finish(s1, s2, y1, y2, c):
    z1 = (c*s1 + y1) % Q
    z2 = (c*s2 + y2) % Q
    return (z1, z2)

def verify(A, t, hw, c, z1, z2):
    return H((A*z1 + z2 - c*t) % Q) == hw

# Test
((A, t), (s1, s2)) = key_gen()
(hw, (y1, y2)) = initialize(A)        # hw: prover -&gt; verifier
c = challenge()                       # c: verifier -&gt; prover
(z1, z2) = finish(s1, s2, y1, y2, c)  # (z1, z2): prover -&gt; verifier
assert verify(A, t, hw, c, z1, z2)    # verifier</code></pre>
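<p>Here is a self-contained sketch of the non-interactive version described above, with the hash-derived challenge. It uses plain Python lists and SHA3 in place of the matrix and hash helpers used elsewhere in this post, and the function names are our own. Keep in mind that it implements the warm-up scheme, so it inherits the leakage flaw examined next:</p>

```python
import hashlib
import random

N, Q, BETA = 3, 3329, 4

def rand_vec(lo, hi):
    return [random.randrange(lo, hi) for _ in range(N)]

def mat_vec(A, x):
    # Matrix-vector product mod Q, with A given as a list of rows.
    return [sum(a * b for a, b in zip(row, x)) % Q for row in A]

def H(data):
    return hashlib.sha3_256(repr(data).encode()).digest()

def derive_challenge(hw, msg):
    # Fiat-Shamir: hash the commitment together with the message, then
    # reduce into the challenge space range(Q).
    return int.from_bytes(hashlib.sha3_256(hw + msg).digest(), 'big') % Q

def key_gen():
    A = [rand_vec(0, Q) for _ in range(N)]
    s1 = rand_vec(-BETA, BETA + 1)
    s2 = rand_vec(-BETA, BETA + 1)
    t = [(x + y) % Q for x, y in zip(mat_vec(A, s1), s2)]
    return (A, t), (s1, s2)

def sign(sk, pk, msg):
    (A, _), (s1, s2) = pk, sk
    y1 = rand_vec(-BETA, BETA + 1)
    y2 = rand_vec(-BETA, BETA + 1)
    hw = H([(x + y) % Q for x, y in zip(mat_vec(A, y1), y2)])  # commit to w
    c = derive_challenge(hw, msg)
    z1 = [(c * s + y) % Q for s, y in zip(s1, y1)]
    z2 = [(c * s + y) % Q for s, y in zip(s2, y2)]
    return (hw, z1, z2)

def verify_sig(pk, msg, sig):
    (A, t), (hw, z1, z2) = pk, sig
    c = derive_challenge(hw, msg)
    # Recompute w = A*z1 + z2 - c*t and check it against the commitment.
    w = [(az + z - c * ti) % Q for az, z, ti in zip(mat_vec(A, z1), z2, t)]
    return H(w) == hw

(pk, sk) = key_gen()
sig = sign(sk, pk, b'hello lattices')
assert verify_sig(pk, b'hello lattices', sig)
# A signature with a forged commitment is rejected:
assert not verify_sig(pk, b'hello lattices', (bytes(32), sig[1], sig[2]))
```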
            <p>Notice that the verifier doesn't actually check <code>A*z1 + z2 == c*t + w</code> directly; we have to rearrange the equation so that we can set the commitment to <code>H(w)</code> rather than <code>w</code>. We'll explain the need for hashing in the next section.</p>
    <div>
      <h4>Making this scheme secure</h4>
      <a href="#making-this-scheme-secure">
        
      </a>
    </div>
    <p>The question of whether this protocol is secure boils down to whether it's possible to impersonate the prover without knowledge of the secret key. Let's put our attacker hat on and poke around.</p><p>Perhaps there's a way to compute the secret key, either from the public key directly or by eavesdropping on executions of the protocol with the honest prover. If LWE is hard, then clearly there's no way we're going to extract the secret key from the public key <code>t</code>. Likewise, the commitment <code>H(w)</code>doesn't leak any information that would help us extract the secret key from the proof<code> (z1,z2)</code>.</p><p>Let's take a closer look at the proof. Notice that the vectors <code>(y1,y2)</code> "mask" the secret key vectors, sort of how the shared secret masks the plaintext in ML-KEM. However, there's one big exception: we also scale the secret key vectors by the challenge <code>c</code>.</p><p>What's the effect of scaling these vectors? If we squint at a few proofs, we start to see a pattern emerge. Let's look at <code>z1</code> first (<code>N=3, Q=3329, beta=4</code>):</p>
            <pre><code>((A, t), (s1, s2)) = key_gen()
print('s1={}'.format(s1.T % Q))
for _ in range(10):
    (w, (y1, y2)) = initialize(A)
    c = challenge()
    (z1, z2) = finish(s1, s2, y1, y2, c)
    print('c={}, z1={}'.format(c, z1.T))
# s1=[[   1    0 3326]]
# c=1123, z1=[[1121 3327 3287]]
# c=1064, z1=[[1060    4  137]]
# c=1885, z1=[[1884 3327  999]]
# c=269, z1=[[ 270 3325 2524]]
# c=1506, z1=[[1510 3325 2141]]
# c=3147, z1=[[3149    4  547]]
# c=703, z1=[[ 700    4 1219]]
# c=1518, z1=[[1518 3327 2104]]
# c=1726, z1=[[1726    0 1478]]
# c=2591, z1=[[2589    4 2217]]</code></pre>
            <p>Indeed, with enough proof samples, we should be able to make a pretty good guess of the value of <code>s1</code>. In fact, for these parameters, there is a simple statistical analysis we can do to compute <code>s1</code> exactly. (Hint: <code>Q</code> is a prime number, which means <code>c*pow(c,-1,Q)==1</code> whenever <code>c&gt;0</code>.) We can also apply this analysis to <code>s2</code>, or compute it directly from <code>t</code>, <code>s1</code>, and <code>A</code>.</p><p>The main flaw in our protocol is that, although our secret vectors are short, scaling them makes them so long that they're not completely masked by <code>(y1,y2)</code>. Since <code>c</code> spans the entire <code>range(Q)</code>, so do the entries of <code>c*s1</code> and <code>c*s2</code>, which means that, in order to mask these entries, we need the entries of <code>(y1,y2)</code> to span <code>range(Q)</code> as well. However, doing this would make solving LWE for <code>(A,w)</code> easy, by solving SIS. We somehow need to strike a balance between the length of the vectors of our LWE instances and the leakage induced by the challenge.</p><p>Here's where things get tricky. Let's refer to the set of possible outputs of <code>challenge()</code> as the <b>challenge space</b>. We need the challenge space to be fairly large, large enough that the probability of outputting the same challenge twice is negligible.</p><p>Why would such a collision be a problem? It's a little easier to see in the context of digital signatures. Let's say an attacker knows a valid signature for a message <code>m</code>. The signature includes the commitment <code>H(w)</code>, so the attacker also knows the challenge is <code>c == H(H(w),m)</code>. Suppose it manages to find a different message <code>m</code><code><sup>*</sup></code> for which <code>c == H(H(w),m</code><code><sup>*</sup></code><code>)</code>. Then the signature is also valid for <code>m</code><code><sup>*</sup></code>! 
And this attack is easy to pull off if the challenge space, that is, the set of possible outputs of <code>H</code>, is too small.</p><p>Unfortunately, we can't make the challenge space larger simply by increasing the size of the modulus <code>Q</code>: the larger the challenge might be, the more information we'd leak about the secret key. We need a new idea.</p>
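<p>To make the leak concrete, here is a sketch of such a statistical attack on a single entry of <code>s1</code> (hypothetical helper names; we assume the attacker has collected many <code>(c, z1)</code> transcripts under the toy parameters <code>Q=3329, beta=4</code>). The idea: with enough transcripts, the true secret entry is the only candidate <code>s</code> for which <code>z - c*s</code> is short in every transcript:</p>

```python
import random

Q, BETA = 3329, 4

def centered(x):
    # Representative of x mod Q in the range [-(Q-1)/2, (Q-1)/2].
    x %= Q
    return x - Q if x > Q // 2 else x

def recover_entry(samples):
    # samples: (c, z) pairs for a single entry of z1, i.e. z = c*s + y mod Q
    # with |y| <= BETA. Only the true s leaves z - c*s short in every sample.
    survivors = [s for s in range(-BETA, BETA + 1)
                 if all(abs(centered(z - c * s)) <= BETA for c, z in samples)]
    return survivors[0] if len(survivors) == 1 else None

# Simulate transcripts for a secret entry s = -3:
secret = -3
samples = [(c, (c * secret + random.randint(-BETA, BETA)) % Q)
           for c in [random.randrange(1, Q) for _ in range(50)]]
assert recover_entry(samples) == secret
```

<p>Repeating this per entry recovers all of <code>s1</code>, and <code>s2</code> then follows from <code>t - A*s1</code>.</p>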
    <div>
      <h5>The best of both worlds</h5>
      <a href="#the-best-of-both-worlds">
        
      </a>
    </div>
    <p>Remember that the hardness of LWE depends on the ratio between <code>beta</code> and <code>Q</code>. This means that <code>y1</code> and <code>y2</code> don't need to be short in absolute terms, but short relative to random vectors.</p><p>With that in mind, consider the following idea. Let's take a larger modulus, say <code>Q=2**31 - 1</code>, and we'll continue to sample from the same challenge space, <code>range(2**16)</code>.</p><p>First, notice that <code>z1</code> is now "relatively" short, since its entries are now in <code>range(-gamma, gamma+1)</code>, where <code>gamma = beta*(2**16-1),</code> rather than uniform over <code>range(Q)</code>. Let's also modify <code>initialize()</code> to sample the entries of <code>(y1,y2)</code> from the same range and see what happens:</p>
            <pre><code>def initialize(A):
    y1 = gen_mat(N, 1, -gamma, gamma+1)
    y2 = gen_mat(N, 1, -gamma, gamma+1)
    w = (A*y1 + y2) % Q
    return (H(w), (y1, y2))

((A, t), (s1, s2)) = key_gen()
print('s1={}'.format(s1.T % Q))
for _ in range(10):
    (w, (y1, y2)) = initialize(A)
    c = challenge()
    (z1, z2) = finish(s1, s2, y1, y2, c)
    print('c={}, z1={}'.format(c, z1.T))
# s1=[[3 0 1]]
# c=31476, z1=[[175933 141954  93186]]
# c=27360, z1=[[    136404 2147438807     283758]]
# c=33536, z1=[[2147430945 2147377022     190671]]
# c=23283, z1=[[186516  73400   4955]]
# c=24756, z1=[[    328377 2147438906 2147388768]]
# c=12428, z1=[[2147340715     188675      90282]]
# c=24266, z1=[[    175498 2147261581 2147301553]]
# c=45331, z1=[[357595 185269 177155]]
# c=45641, z1=[[     21592 2147249191 2147446200]]
# c=57893, z1=[[297750 113335 144894]]</code></pre>
            <p>This is definitely going in the right direction, since there are no obvious correlations between <code>z1</code> and <code>s1</code>. (Likewise for <code>z2</code> and <code>s2</code>.) However, we're not quite there.</p><p>One problem is that the challenge space is still quite small. With only <code>2**16</code> challenges to choose from, we're likely to see a collision even after only a handful of protocol executions. We need the challenge space to be much, much larger, say around <code>2**256</code>. But then <code>Q</code> would have to be an insanely large number in order for the <code>beta</code> to <code>Q</code> ratio to be secure.</p><p>ML-DSA is able to sidestep this problem thanks to its use of arithmetic over polynomial rings. It uses the same modulus polynomial as ML-KEM, so the challenge is a polynomial with 256 coefficients. The coefficients are chosen carefully so that the challenge space is large, but multiplication by the challenge scales the secret vector by only a small amount. Note that we still end up using a slightly larger modulus (<code>Q=8380417</code>) for ML-DSA than for ML-KEM, but only by about twelve bits.</p><p>However, there is a more fundamental problem here, which is that we haven't completely ruled out that signatures may leak information about the secret key.</p>
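<p>As an aside, the shape of ML-DSA's challenge polynomial can be sketched concretely. In the real scheme the challenge is derived deterministically from a hash (FIPS 204's <code>SampleInBall</code> routine); this sketch just samples one at random, using <code>tau=39</code> as in ML-DSA-44:</p>

```python
import random

TAU = 39  # number of nonzero coefficients in ML-DSA-44's challenge

def sample_challenge():
    # A polynomial with exactly TAU coefficients equal to +1 or -1 and
    # the remaining 256 - TAU coefficients equal to zero. The space of
    # such polynomials is enormous (choose(256, TAU) * 2**TAU
    # possibilities), yet multiplying a short vector by such a c changes
    # each entry by at most TAU * beta, nowhere near Q.
    c = [0] * 256
    for i in random.sample(range(256), TAU):
        c[i] = random.choice([-1, 1])
    return c

c = sample_challenge()
assert sum(x != 0 for x in c) == TAU
assert all(x in (-1, 0, 1) for x in c)
```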
    <div>
      <h5>Cause and effect</h5>
      <a href="#cause-and-effect">
        
      </a>
    </div>
    <p>Suppose we run the protocol a number of times, and in each run, we happen to choose a relatively small value for some entry of <code>y1</code>. After enough runs, this would eventually allow us to reconstruct the corresponding entry of <code>s1</code>. To rule this out as a possibility, we need to make <code>y1 </code>even longer. (Likewise for <code>y2</code>.) But how long?</p><p>Suppose we know that the entries of <code>z1</code> and <code>z2</code> are always in <code>range(-beta_loose,beta_loose+1) </code>for some <code>beta_loose &gt; beta</code>. Then we can <b>simulate </b>an honest run of the protocol as follows:</p>
            <pre><code>def simulate(A, t):
    z1 = gen_mat(N, 1, -beta_loose, beta_loose+1)
    z2 = gen_mat(N, 1, -beta_loose, beta_loose+1)
    c = challenge()
    w = (A*z1 + z2 - c*t) % Q
    return (H(w), c, (z1, z2))

# Test
((A, t), (s1, s2)) = key_gen()
(hw, c, (z1, z2)) = simulate(A, t)
assert verify(A, t, hw, c, z1, z2)</code></pre>
            <p>This procedure perfectly simulates honest runs of the protocol, in the sense that the output of <code>simulate() </code>is indistinguishable from the transcript of a real run of the protocol with the honest prover. To see this, notice that the <code>w</code>,<code> c</code>, <code>z1</code>, and <code>z2 </code>all have the same mathematical relationship (the verification equation still holds) and have the same distribution.</p><p>And here's the punch line: since this procedure doesn't use the secret key, it follows that the attacker learns nothing from eavesdropping on the honest prover that it can't compute from the public key itself. Pretty neat!</p><p>What's left to do is arrange for <code>z1</code> and <code>z2</code> to fall in this range. First, we modify <code>initialize() </code>by increasing the range of <code>y1 </code>and<code> y2 </code>by<code> beta_loose</code>:</p>
            <pre><code>def initialize(A):
    y1 = gen_mat(N, 1, -gamma-beta_loose, gamma+beta_loose+1)
    y2 = gen_mat(N, 1, -gamma-beta_loose, gamma+beta_loose+1)
    w = (A*y1 + y2) % Q
    return (H(w), (y1, y2))</code></pre>
            <p>This ensures the proof vectors <code>z1</code> and <code>z2</code> are roughly uniform over <code>range(-beta_loose, beta_loose+1)</code>. However, they may fall slightly outside of this range, so we need to modify <code>finish()</code> to <b>abort</b> if they do. Correspondingly, <code>verify()</code> should reject proof vectors that are out of range:</p>
            <pre><code>def finish(s1, s2, y1, y2, c):
    z1 = (c*s1 + y1) % Q
    z2 = (c*s2 + y2) % Q
    if not in_range(z1, beta_loose) or not in_range(z2, beta_loose):
        return (None, None)
    return (z1, z2)

def verify(A, t, hw, c, z1, z2):
    if not in_range(z1, beta_loose) or not in_range(z2, beta_loose):
        return False
    return H((A*z1 + z2 - c*t) % Q) == hw</code></pre>
            <p>If <code>finish()</code> returns <code>(None,None)</code>, then the prover and verifier are meant to abort the protocol and retry until the protocol succeeds:</p>
            <pre><code>((A, t), (s1, s2)) = key_gen()
while True:
    (hw, (y1, y2)) = initialize(A)        # hw: prover -&gt; verifier
    c = challenge()                       # c: verifier -&gt; prover
    (z1, z2) = finish(s1, s2, y1, y2, c)  # (z1, z2): prover -&gt; verifier
    if z1 is not None and z2 is not None:
        break
assert verify(A, t, hw, c, z1, z2)</code></pre>
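<p>We can check empirically that the widened masking range plus the abort rule hides the secret. The scalar toy below uses illustrative bounds (not ML-DSA's parameters) and shows that, conditioned on acceptance, <code>z</code> looks the same whether the secret-dependent term <code>c*s</code> is as small or as large as possible:</p>

```python
import random
from collections import Counter

GAMMA = 40       # assumed bound on |c*s| in this scalar illustration
BETA_LOOSE = 60  # acceptance bound on z

def accepted_z(cs):
    # Mask cs with y drawn from the widened range, retrying until
    # z = cs + y lands inside [-BETA_LOOSE, BETA_LOOSE].
    while True:
        y = random.randint(-GAMMA - BETA_LOOSE, GAMMA + BETA_LOOSE)
        z = cs + y
        if abs(z) <= BETA_LOOSE:
            return z

small = Counter(accepted_z(0) for _ in range(20000))
large = Counter(accepted_z(GAMMA) for _ in range(20000))
# Both histograms cover the full acceptance range with similar
# frequencies: nothing about cs survives the conditioning.
assert min(small) == min(large) == -BETA_LOOSE
assert max(small) == max(large) == BETA_LOOSE
```

<p>This is exactly the property the <code>simulate()</code> argument above relies on.</p>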
            <p>Interestingly, we should expect aborts to be quite common. The parameters of ML-DSA are tuned so that the protocol runs five times on average before it succeeds.</p><p>Another interesting point is that the security proof requires us to simulate not only successful protocol runs, but aborted protocol runs as well. More specifically, the protocol simulator must abort with the same probability as the real protocol, which implies that the rejection probability is <i>independent </i>of the secret key.</p><p>The simulator also needs to be able to produce realistic looking commitments for aborted transcripts. This is exactly why the prover commits to the <i>hash</i> of <code>w </code>rather than <code>w </code> itself: in the security proof, we can easily simulate hashes of random inputs.</p>
    <div>
      <h4>Making this scheme efficient</h4>
      <a href="#making-this-scheme-efficient">
        
      </a>
    </div>
    <p>ML-DSA benefits from many of the same optimizations as ML-KEM, including using polynomial rings, NTT for polynomial multiplication, and encoding polynomials with a fixed number of bits. However, ML-DSA has a few more tricks to make things smaller.</p><p>First, in ML-DSA, instead of the pair of short vectors <code>z1</code> and <code>z2</code>, the proof consists of a single vector <code>z=c*s1 + y</code>, where <code>y</code> was committed to in the previous step. Getting this to work requires a special encoding of the commitment so that we can't compute <code>y</code> from it. ML-DSA uses a related trick to reduce the size of the <code>t</code> vector of the public key, but the details are more complicated.</p><p>For the parameters we expect to deploy first (ML-DSA-44), the public key is 1312 bytes long and the signature is a whopping 2420 bytes. In contrast to ML-KEM, it is possible to shave off some more bytes, but this does not come for free: it requires complicating the scheme. An example is <a href="https://eprint.iacr.org/2023/624.pdf"><u>HAETAE</u></a>, which changes the distributions used. <a href="https://github.com/pornin/rust-fn-dsa"><u>Falcon</u></a> takes it a step further with even smaller signatures, using a completely different approach, which, although elegant, is also more complex to implement.</p>
    <div>
      <h3>Wrap up</h3>
      <a href="#wrap-up">
        
      </a>
    </div>
    <p>Lattice cryptography underpins the first generation of PQ algorithms to get widely deployed on the Internet. ML-KEM is already widely used today to protect encryption from quantum computers, and in the coming years we expect to see ML-DSA deployed to get ahead of the threat of quantum computers to authentication.</p><p>Lattices are also the basis of a new frontier for cryptography: computing on encrypted data.</p><p>Suppose you wanted to aggregate some metrics submitted by clients without learning the metrics themselves. <a href="https://eprint.iacr.org/2024/936"><u>With LWE-based encryption</u></a>, you can arrange for each client to encrypt their metrics before submission, aggregate the ciphertexts, then decrypt to get the aggregate.</p><p>Suppose instead that a server has a database that it wants to provide clients access to without revealing to the server which rows of the database the client wants to query. <a href="https://eprint.iacr.org/2022/949"><u>LWE-based encryption</u></a> allows the database to be encoded in a manner that permits encrypted queries.</p><p>These applications are special cases of a paradigm known as <a href="https://en.wikipedia.org/wiki/Homomorphic_encryption"><u>FHE</u></a> ("Fully Homomorphic Encryption"), which allows for arbitrary computations on encrypted data. FHE is an extremely powerful primitive, and the only way we know how to build it today is with lattices. However, for most applications, FHE is far less practical than a special-purpose protocol would be (lattice-based or not). Still, over the years we've seen FHE get better and better, and for many applications it is already a decent option. Perhaps we'll dig into this and other lattice schemes in a future blog post.</p><p>We hope you enjoyed this whirlwind tour of lattices. Thanks for reading!</p> ]]></content:encoded>
            <category><![CDATA[Security Week]]></category>
            <category><![CDATA[Cryptography]]></category>
            <category><![CDATA[Post-Quantum]]></category>
            <category><![CDATA[Research]]></category>
            <guid isPermaLink="false">01euRoOpkvsq16eKrMZ6hu</guid>
            <dc:creator>Christopher Patton</dc:creator>
            <dc:creator>Peter Schwabe (Guest Author)</dc:creator>
        </item>
        <item>
            <title><![CDATA[Conventional cryptography is under threat. Upgrade to post-quantum cryptography with Cloudflare Zero Trust]]></title>
            <link>https://blog.cloudflare.com/post-quantum-zero-trust/</link>
            <pubDate>Mon, 17 Mar 2025 13:00:00 GMT</pubDate>
            <description><![CDATA[ We’re thrilled to announce that organizations can now protect their sensitive corporate network traffic against quantum threats by tunneling it through Cloudflare’s Zero Trust platform. ]]></description>
            <content:encoded><![CDATA[ <p>Quantum computers are actively being developed that will eventually have the ability to break the cryptography we rely on for securing modern communications. Recent <a href="https://blog.google/technology/research/google-willow-quantum-chip/"><u>breakthroughs</u></a> in quantum computing have underscored the vulnerability of conventional cryptography to these attacks. Since 2017, Cloudflare has <a href="https://blog.cloudflare.com/tag/post-quantum/"><u>been at the forefront</u></a> of developing, standardizing, and implementing <a href="https://www.cloudflare.com/learning/ssl/quantum/what-is-post-quantum-cryptography/"><u>post-quantum cryptography</u></a> to withstand attacks by quantum computers. </p><p>Our mission is simple: we want every Cloudflare customer to have a clear path to quantum safety. Cloudflare recognizes the urgency, so we’re committed to managing the complex process of upgrading cryptographic algorithms, so that you don’t have to worry about it. We're not just talking about doing it. <a href="https://radar.cloudflare.com/adoption-and-usage#post-quantum-encryption-adoption"><u>Over 35% of the non-bot HTTPS traffic that touches Cloudflare today is post-quantum secure.</u></a> </p><p>The <a href="https://www.nist.gov/"><u>National Institute of Standards and Technology (NIST)</u></a> also recognizes the urgency of this transition. On November 15, 2024, NIST made a landmark <a href="https://nvlpubs.nist.gov/nistpubs/ir/2024/NIST.IR.8547.ipd.pdf"><u>announcement</u></a> by setting a timeline to phase out <a href="https://en.wikipedia.org/wiki/RSA_(cryptosystem)"><u>RSA</u></a> and <a href="https://blog.cloudflare.com/a-relatively-easy-to-understand-primer-on-elliptic-curve-cryptography/"><u>Elliptic Curve Cryptography (ECC)</u></a>, the conventional cryptographic algorithms that underpin nearly every part of the Internet today. 
According to NIST’s announcement, these algorithms will be deprecated by 2030 and completely disallowed by 2035.</p><p>At Cloudflare, we aren’t waiting until 2035 or even 2030. We believe privacy is a fundamental human right, and advanced cryptography should be <a href="https://blog.cloudflare.com/post-quantum-crypto-should-be-free/"><u>accessible to everyone</u></a> without compromise. No one should be required to pay extra for post-quantum security. That’s why any visitor accessing a <a href="https://blog.cloudflare.com/pq-2024/"><u>website protected by Cloudflare today</u></a> benefits from post-quantum cryptography, when using a major browser like <a href="https://blog.chromium.org/2024/05/advancing-our-amazing-bet-on-asymmetric.html"><u>Chrome, Edge</u></a>, or <a href="https://www.mozilla.org/en-US/firefox/135.0/releasenotes/"><u>Firefox</u></a>. (And, we are excited to see a <a href="https://radar.cloudflare.com/explorer?dataSet=http&amp;groupBy=post_quantum&amp;filters=botClass%253DLikely_Human%252Cos%253DiOS"><u>small percentage of (mobile) Safari traffic</u></a> in our Radar data.) Well over a third of the human traffic passing through Cloudflare today already enjoys this enhanced security, and we expect this share to increase as more browsers and clients are upgraded to support post-quantum cryptography. </p><p>While great strides have been made to protect human web traffic, not every application is a web application. And every organization has internal applications (both web and otherwise) that do not support post-quantum cryptography.  </p><p>How should organizations go about upgrading their sensitive corporate network traffic to support post-quantum cryptography?</p><p>That’s where today’s announcement comes in. 
We’re thrilled to announce the first phase of end-to-end quantum readiness of our <a href="https://www.cloudflare.com/zero-trust/">Zero Trust platform</a><b>, </b>allowing customers to protect their corporate network traffic with post-quantum cryptography.<b> Organizations can tunnel their corporate network traffic through Cloudflare’s Zero Trust platform, protecting it against quantum adversaries without the hassle of individually upgrading each and every corporate application, system, or network connection.</b> </p><p>More specifically, organizations can use our Zero Trust platform to route communications from end-user devices (via web browser or Cloudflare’s <a href="https://developers.cloudflare.com/cloudflare-one/connections/connect-devices/warp/"><u>WARP device client</u></a>) to secure applications connected with <a href="https://blog.cloudflare.com/post-quantum-tunnel/"><u>Cloudflare Tunnel</u></a>, to gain end-to-end quantum safety, in the following use cases: </p><ul><li><p><b>Cloudflare’s clientless </b><a href="https://developers.cloudflare.com/cloudflare-one/policies/access/"><b><u>Access</u></b></a><b>: </b>Our clientless <a href="https://www.cloudflare.com/learning/access-management/what-is-ztna/">Zero Trust Network Access (ZTNA)</a> solution verifies user identity and device context for every HTTPS request to corporate applications from a web browser. Clientless Access is now protected end-to-end with post-quantum cryptography.</p></li><li><p><b>Cloudflare’s </b><a href="https://developers.cloudflare.com/cloudflare-one/connections/connect-devices/warp/"><b><u>WARP device client</u></b></a><b>:</b> By mid-2025, customers using the WARP device client will have all of their traffic (regardless of protocol) tunneled over a connection protected by post-quantum cryptography. 
The WARP client secures corporate devices by privately routing their traffic to Cloudflare's global network, where Gateway applies advanced web filtering and Access enforces policies for secure access to applications. </p></li><li><p><b>Cloudflare </b><a href="https://developers.cloudflare.com/cloudflare-one/policies/gateway/http-policies/"><b><u>Gateway</u></b></a>: Our <a href="https://www.cloudflare.com/learning/access-management/what-is-a-secure-web-gateway/">Secure Web Gateway (SWG) </a>— designed to inspect and filter TLS traffic in order to block threats and unauthorized communications — now supports TLS with post-quantum cryptography. </p></li></ul><p>In the remaining sections of this post, we’ll explore the threat that quantum computing poses and the challenges organizations face in transitioning to post-quantum cryptography. We’ll also dive into the technical details of how our Zero Trust platform supports post-quantum cryptography today and share some plans for the future.</p>
    <div>
      <h3>Why transition to post-quantum cryptography and why now? </h3>
      <a href="#why-transition-to-post-quantum-cryptography-and-why-now">
        
      </a>
    </div>
    <p>There are two key reasons to adopt post-quantum cryptography now:</p>
    <div>
      <h4>1. The challenge of deprecating cryptography</h4>
      <a href="#1-the-challenge-of-deprecating-cryptography">
        
      </a>
    </div>
    <p>History shows that updating or removing outdated cryptographic algorithms from live systems is extremely difficult. For example, although the MD5 hash function was <a href="https://iacr.org/archive/eurocrypt2005/34940019/34940019.pdf"><u>deemed insecure in 2004</u></a> and long since deprecated, it was still in use with the RADIUS enterprise authentication protocol as recently as 2024. In July 2024, Cloudflare contributed to research revealing an <a href="https://blog.cloudflare.com/radius-udp-vulnerable-md5-attack/"><u>attack on RADIUS</u></a> that exploited its reliance on MD5. This example underscores the enormous challenge of updating legacy systems — this difficulty in achieving <a href="https://en.wikipedia.org/wiki/Cryptographic_agility"><i><u>crypto-agility</u></i></a> — which will be just as demanding when it’s time to transition to post-quantum cryptography. So it makes sense to start this process now.</p>
    <div>
      <h4>2. The “harvest now, decrypt later” threat</h4>
      <a href="#2-the-harvest-now-decrypt-later-threat">
        
      </a>
    </div>
    <p>Even though quantum computers lack enough qubits to break conventional cryptography today, adversaries can harvest and store encrypted communications or steal datasets with the intent of decrypting them once quantum technology matures. If your encrypted data today could become a liability in 10 to 15 years, planning for a post-quantum future is essential. For this reason, we have already started working with some of the most innovative <a href="https://www.cloudflare.com/banking-and-financial-services/">banks</a>, ISPs, and <a href="https://www.cloudflare.com/public-sector/">governments</a> around the world as they begin their journeys to quantum safety. </p><p>The U.S. government is already addressing these risks. On January 16, 2025, the White House issued <a href="https://www.federalregister.gov/documents/2025/01/17/2025-01470/strengthening-and-promoting-innovation-in-the-nations-cybersecurity"><u>Executive Order 14144</u></a> on Strengthening and Promoting Innovation in the Nation’s Cybersecurity. This order requires government agencies to “<i>regularly update a list of product categories in which products that support post-quantum cryptography (PQC) are widely available…. Within 90 days of a product category being placed on the list … agencies shall take steps to include in any solicitations for products in that category a requirement that products support PQC.</i>”</p><p>At Cloudflare, we’ve been <a href="https://blog.cloudflare.com/the-tls-post-quantum-experiment/"><u>researching</u></a>, <a href="https://blog.cloudflare.com/securing-the-post-quantum-world/"><u>developing</u></a>, and <a href="https://www.ietf.org/archive/id/draft-kwiatkowski-tls-ecdhe-mlkem-02.html"><u>standardizing</u></a> post-quantum cryptography <a href="https://blog.cloudflare.com/tag/post-quantum/"><u>since 2017</u></a>. 
Our strategy is simple:</p><p><b>Simply tunnel your traffic through Cloudflare’s quantum-safe connections to immediately protect against harvest-now-decrypt-later attacks, without the burden of upgrading every cryptographic library yourself.</b></p><p>Let’s take a closer look at how the migration to post-quantum cryptography is taking shape at Cloudflare.</p>
    <div>
      <h3>A two-phase migration to post-quantum cryptography</h3>
      <a href="#a-two-phase-migration-to-post-quantum-cryptography">
        
      </a>
    </div>
    <p>At Cloudflare, we’ve largely focused on migrating the <a href="https://www.cloudflare.com/learning/ssl/transport-layer-security-tls/"><u>TLS (Transport Layer Security) 1.3</u></a> protocol to post-quantum cryptography.   TLS primarily secures the communications for web applications, but it is also widely used to secure email, messaging, <a href="https://blog.cloudflare.com/zero-trust-warp-with-a-masque/"><u>VPN connections</u></a>, <a href="https://www.cloudflare.com/learning/dns/dns-over-tls/"><u>DNS</u></a>, and many other protocols.  This makes TLS an ideal protocol to focus on when migrating to post-quantum cryptography.</p><p>The migration involves updating two critical components of TLS 1.3: <a href="https://www.cloudflare.com/learning/ssl/what-is-an-ssl-certificate/"><u>digital signatures used in certificates</u></a> and <a href="https://blog.cloudflare.com/post-quantum-key-encapsulation/"><u>key agreement mechanisms</u></a>. We’ve made significant progress on key agreement, but the migration to post-quantum digital signatures is still in its early stages.</p>
    <div>
      <h4>Phase 1: Migrating key agreement</h4>
      <a href="#phase-1-migrating-key-agreement">
        
      </a>
    </div>
    <p>Key agreement protocols enable two parties to securely establish a shared secret key that they can use to secure and encrypt their communications. Today, vendors have largely converged on transitioning TLS 1.3 to support a post-quantum key exchange protocol known as <a href="https://blog.cloudflare.com/nists-first-post-quantum-standards/"><u>ML-KEM</u></a> (Module-Lattice-Based Key-Encapsulation Mechanism Standard). There are two main reasons for prioritizing migration of key agreement:</p><ul><li><p><b>Performance:</b> ML-KEM <a href="https://blog.cloudflare.com/pq-2024/"><u>performs</u></a> well with the <a href="https://www.cloudflare.com/learning/ssl/why-use-tls-1.3/">TLS 1.3 protocol,</a> even for short-lived network connections.</p></li><li><p><b>Security</b>: Conventional cryptography is vulnerable to “harvest now, decrypt later” attacks. In this threat model, an adversary intercepts and stores encrypted communications today and later uses a quantum computer to derive the secret key, compromising the communication. As of March 2025, well over a third of the human web traffic reaching the Cloudflare network is protected against these attacks by TLS 1.3 with hybrid ML-KEM key exchange.</p></li></ul>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1Tgfy0HYHA5MM6JjaNP2Z1/b601d2938be3c52decf1f3cec7313c6e/image6.png" />
          </figure><p><sup><i>Post-quantum encrypted share of human HTTPS request traffic seen by Cloudflare per </i></sup><a href="https://radar.cloudflare.com/adoption-and-usage?dateRange=52w"><sup><i><u>Cloudflare Radar</u></i></sup></a><sup><i> from March 1, 2024 to March 1, 2025. (Captured on March 13, 2025.)</i></sup></p><p>Here’s how to check if your Chrome browser is using ML-KEM for key agreement when visiting a website: First, <a href="https://developer.chrome.com/docs/devtools/inspect-mode#:~:text=Open%20DevTools,The%20element's%20margin%2C%20in%20pixels."><u>Inspect the page</u></a>, then open the <a href="https://developer.chrome.com/docs/devtools/security"><u>Security tab</u></a>, and finally look for <a href="https://www.ietf.org/archive/id/draft-kwiatkowski-tls-ecdhe-mlkem-02.html"><u>X25519MLKEM768</u></a> as shown here:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6EoD5jFMXJeWFeRtG9w6Uy/85aa13123d64f21ea93313f674d4378f/image1.png" />
          </figure><p>This indicates that your browser is using key-agreement protocol ML-KEM <i>in combination with</i> conventional <a href="https://blog.cloudflare.com/a-relatively-easy-to-understand-primer-on-elliptic-curve-cryptography/"><u>elliptic curve cryptography</u></a> on curve <a href="https://en.wikipedia.org/wiki/Curve25519"><u>X25519</u></a>. This provides the protection of the tried-and-true conventional cryptography (<a href="https://en.wikipedia.org/wiki/Curve25519"><u>X25519</u></a>) alongside the new post-quantum key agreement (<a href="https://blog.cloudflare.com/nists-first-post-quantum-standards/"><u>ML-KEM</u></a>).</p>
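Conceptually, the hybrid runs both key exchanges independently and mixes both shared secrets into the connection’s keys, so an attacker must break <i>both</i> X25519 and ML-KEM to recover them. A minimal sketch of that idea (the combiner function and placeholder secrets below are illustrative only; real TLS 1.3 feeds the concatenated key-exchange outputs into its HKDF-based key schedule):

```python
import hashlib

def hybrid_shared_secret(mlkem_ss: bytes, x25519_ss: bytes) -> bytes:
    # Conceptual combiner: the derived secret stays safe as long as
    # at least one of the two input secrets remains unbroken.
    # (Illustrative only; TLS 1.3 uses its HKDF key schedule instead.)
    return hashlib.sha256(mlkem_ss + x25519_ss).digest()

# Placeholder values standing in for the real key-exchange outputs:
mlkem_ss = b"\x01" * 32   # stand-in for the ML-KEM-768 shared secret
x25519_ss = b"\x02" * 32  # stand-in for the X25519 shared secret

secret = hybrid_shared_secret(mlkem_ss, x25519_ss)
assert len(secret) == 32
```

Because both secrets are combined before any key is derived, recording the traffic today and breaking X25519 with a quantum computer later is not enough: the ML-KEM component still protects the derived keys.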
    <div>
      <h4>Phase 2: Migrating digital signatures</h4>
      <a href="#phase-2-migrating-digital-signatures">
        
      </a>
    </div>
    <p><a href="https://www.cloudflare.com/learning/ssl/what-is-an-ssl-certificate/"><u>Digital signatures are used in TLS certificates</u></a> to validate the authenticity of connections — allowing the client to be sure that it is really communicating with the server, and not with an adversary that is impersonating the server. </p><p>Post-quantum digital signatures, however, are significantly larger, and thus slower, than their current counterparts. This performance impact has slowed their adoption, particularly because they slow down short-lived TLS connections. </p><p>Fortunately, post-quantum signatures are not needed to prevent harvest-now-decrypt-later attacks. Instead, they primarily protect against attacks by an adversary that is actively using a quantum computer to tamper with a live TLS connection. We still have some time before quantum computers are able to do this, making the migration of digital signatures a lower priority.</p><p>Nevertheless, Cloudflare is actively <a href="https://datatracker.ietf.org/doc/draft-ietf-lamps-dilithium-certificates/07/"><u>involved in standardizing</u></a> post-quantum signatures for TLS certificates. We are also experimenting with their deployment on long-lived TLS connections and exploring <a href="https://datatracker.ietf.org/doc/draft-davidben-tls-merkle-tree-certs/"><u>new approaches</u></a> to achieve post-quantum authentication without sacrificing performance. Our goal is to ensure that post-quantum digital signatures are ready for widespread use when quantum computers are able to actively attack live TLS connections.</p>
    <div>
      <h3>Cloudflare Zero Trust + PQC: future-proofing security</h3>
      <a href="#cloudflare-zero-trust-pqc-future-proofing-security">
        
      </a>
    </div>
    <p>The Cloudflare Zero Trust platform replaces legacy corporate security perimeters with Cloudflare's global network, making access to the Internet and to corporate resources faster and safer for teams around the world. Today, we’re thrilled to announce that Cloudflare's Zero Trust platform protects your data from quantum threats as it travels over the public Internet.  There are three key quantum-safe use cases supported by our Zero Trust platform in this first phase of quantum readiness.</p>
    <div>
      <h4>Quantum-safe clientless Access</h4>
      <a href="#quantum-safe-clientless-access">
        
      </a>
    </div>
    <p><a href="https://developers.cloudflare.com/cloudflare-one/connections/connect-devices/agentless/"><u>Clientless</u></a> <a href="https://developers.cloudflare.com/cloudflare-one/applications/configure-apps/self-hosted-public-app/"><u>Cloudflare Access</u></a> now protects an organization’s Internet traffic to internal web applications against quantum threats, even if the applications themselves have not yet migrated to post-quantum cryptography. ("Clientless access" is a method of accessing network resources without installing a dedicated client application on the user's device. Instead, users connect and access information through a web browser.)</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5mKiboLMsIEuNt1MaXlWsy/dad0956066e97db69401757b18e8ce5f/image4.png" />
          </figure><p>Here’s how it works today:</p><ul><li><p><b>PQ connection via browser: </b>(Labeled (1) in the figure.)
As long as the user's web browser supports post-quantum key agreement, the connection from the device to Cloudflare's network is secured via TLS 1.3 with post-quantum key agreement.</p></li><li><p><b>PQ within Cloudflare’s global network: </b>(Labeled (2) in the figure) 
If the user and origin server are geographically distant, then the user’s traffic will enter Cloudflare’s global network in one geographic location (e.g. Frankfurt), and exit at another (e.g. San Francisco).  As this traffic moves from one datacenter to another inside Cloudflare’s global network, these hops through the network are secured via TLS 1.3 with post-quantum key agreement.<b> </b></p></li><li><p><b>PQ Cloudflare Tunnel: </b>(Labeled (3) in the figure)
Customers establish a Cloudflare Tunnel from their datacenter or public cloud — where their corporate web application is hosted — to Cloudflare's network. This tunnel is secured using TLS 1.3 with post-quantum key agreement, safeguarding it from harvest-now-decrypt-later attacks.</p></li></ul><p>Putting it together, clientless Access provides <b>end-to-end</b> quantum safety for accessing corporate HTTPS applications, without requiring customers to upgrade the security of corporate web applications.</p>
    <div>
      <h4>Quantum-safe Zero Trust with Cloudflare’s WARP Client-to-Tunnel configuration (as a VPN replacement)</h4>
      <a href="#quantum-safe-zero-trust-with-cloudflares-warp-client-to-tunnel-configuration-as-a-vpn-replacement">
        
      </a>
    </div>
    <p>By mid-2025, organizations will be able to protect <b>any protocol</b>, not just HTTPS, by tunneling it through Cloudflare's Zero Trust platform with post-quantum cryptography, thus providing quantum safety as traffic travels across the Internet from the end-user’s device to the corporate office, data center, or cloud environment.</p><p>Cloudflare’s Zero Trust platform is ideal for replacing traditional VPNs and enabling <a href="https://www.cloudflare.com/learning/security/glossary/what-is-zero-trust/"><u>Zero Trust architectures</u></a> with modern authentication and authorization policies. Cloudflare’s WARP client-to-tunnel is a popular network configuration for our Zero Trust platform: organizations deploy Cloudflare’s <a href="https://developers.cloudflare.com/cloudflare-one/connections/connect-devices/warp/"><u>WARP device client</u></a> on their end users’ devices, and then use <a href="https://developers.cloudflare.com/cloudflare-one/connections/connect-networks/"><u>Cloudflare Tunnel</u></a> to connect to their corporate office, cloud, or data center environments.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5xovIIyVOO32xrXBs0ZFcf/110928926b86f12777f16518b1313875/image3.png" />
          </figure><p> Here are the details:  </p><ul><li><p><b>PQ connection via WARP client (coming in mid-2025): </b>(Labeled (1) in the figure)
The WARP client uses the <a href="https://blog.cloudflare.com/zero-trust-warp-with-a-masque/"><u>MASQUE protocol</u></a> to connect from the device to Cloudflare’s global network. We are working to add support for establishing this MASQUE connection with TLS 1.3 with post-quantum key agreement, with a target completion date of mid-2025.  </p></li><li><p><b>PQ within Cloudflare’s global network:  </b>(Labeled (2) in the figure) 
As traffic moves from one datacenter to another inside Cloudflare’s global network, each hop it takes through Cloudflare’s network is already secured with TLS 1.3 with post-quantum key agreement.</p></li><li><p><b>PQ Cloudflare Tunnel: </b>(Labeled (3) in the figure)
As mentioned above, <a href="https://developers.cloudflare.com/cloudflare-one/connections/connect-networks/"><u>Cloudflare Tunnel</u></a> already supports post-quantum key agreement. </p></li></ul><p>Once the upcoming post-quantum enhancements to the WARP device client are complete, customers can encapsulate their traffic in quantum-safe tunnels, effectively mitigating the risk of harvest-now-decrypt-later attacks without any heavy lifting to individually upgrade their networks or applications.  And this provides comprehensive protection for any protocol that can be sent through these tunnels, not just for HTTPS!</p>
    <div>
      <h4>Quantum-safe SWG (end-to-end PQC for access to third-party web applications)</h4>
      <a href="#quantum-safe-swg-end-to-end-pqc-for-access-to-third-party-web-applications">
        
      </a>
    </div>
    <p>A <a href="https://developers.cloudflare.com/cloudflare-one/policies/gateway/http-policies/"><u>Secure Web Gateway</u></a> (SWG) is used to secure access to third-party websites on the public Internet by intercepting and inspecting TLS traffic. </p><p>Cloudflare Gateway is now a quantum-safe SWG for HTTPS traffic. As long as the third-party website that is being inspected supports post-quantum key agreement, then Cloudflare’s SWG also supports post-quantum key agreement. This holds regardless of the onramp that the customer uses to get to Cloudflare's network (i.e. <a href="https://developers.cloudflare.com/cloudflare-one/connections/connect-devices/agentless/"><u>web browser</u></a>, <a href="https://developers.cloudflare.com/cloudflare-one/connections/connect-devices/warp/"><u>WARP device client</u></a>, <a href="https://developers.cloudflare.com/cloudflare-one/connections/connect-networks/private-net/warp-connector/"><u>WARP Connector</u></a>, <a href="https://developers.cloudflare.com/magic-wan/"><u>Magic WAN</u></a>), and only requires the use of a browser that supports post-quantum key agreement.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6vnkEFkvKbhSAxp33GmRk7/c58d00a14767a03b2422af1c48a53ba9/image5.png" />
          </figure><p>Cloudflare Gateway's HTTPS SWG feature involves two post-quantum TLS connections, as follows:</p><ul><li><p><b>PQ connection via browser: </b>(Labeled (1) in the figure)  
A TLS connection is initiated from the user's browser to a data center in Cloudflare's network that performs the TLS inspection. As long as the user's web browser supports post-quantum key agreement, this connection is secured by TLS 1.3 with post-quantum key agreement.  </p></li><li><p><b>PQ connection to the origin server: </b>(Labeled (2) in the figure)  
A TLS connection is initiated from a datacenter in Cloudflare's network to the origin server, which is typically controlled by a third party. The connection from Cloudflare’s SWG currently supports post-quantum key agreement, as long as the third party’s origin server also already supports post-quantum key agreement.  You can test this out today by using <a href="https://pq.cloudflareresearch.com/"><u>https://pq.cloudflareresearch.com/</u></a> as your third-party origin server. </p></li></ul><p>Put together, Cloudflare’s SWG is quantum-ready to support secure access to any third-party website that is quantum ready today or in the future. And this is true regardless of the onramp used to get end users' traffic into Cloudflare's global network!</p>
    <div>
      <h3>The post-quantum future: Cloudflare’s Zero Trust platform leads the way</h3>
      <a href="#the-post-quantum-future-cloudflares-zero-trust-platform-leads-the-way">
        
      </a>
    </div>
    <p>Protecting our customers from emerging quantum threats isn't just a priority — it's our responsibility. Since 2017, Cloudflare has been pioneering post-quantum cryptography through research, standardization, and strategic implementation across our product ecosystem.</p><p><b>Today marks a milestone: </b>We're launching the first phase of quantum-safe protection for our Zero Trust platform. Quantum-safe clientless Access and Secure Web Gateway are available immediately, with WARP client-to-tunnel network configurations coming by mid-2025. As we continue to advance the state of the art in post-quantum cryptography, our commitment to continuous innovation ensures that your organization stays ahead of tomorrow's threats.  Let us worry about crypto-agility so that you don’t have to.</p><p>To learn more about how Cloudflare’s built-in crypto-agility can future-proof your business, visit our <a href="http://cloudflare.com/pqc"><u>Post-Quantum Cryptography</u></a> webpage.</p>
 ]]></content:encoded>
            <category><![CDATA[Security Week]]></category>
            <category><![CDATA[Post-Quantum]]></category>
            <category><![CDATA[Zero Trust]]></category>
            <category><![CDATA[Cloudflare Gateway]]></category>
            <category><![CDATA[Cloudflare Access]]></category>
            <category><![CDATA[Clientless]]></category>
            <category><![CDATA[Cloudflare Tunnel]]></category>
            <category><![CDATA[Cryptography]]></category>
            <category><![CDATA[Research]]></category>
            <guid isPermaLink="false">18HFPrh07hn9Zqp8kaonRp</guid>
            <dc:creator>Sharon Goldberg</dc:creator>
            <dc:creator>Wesley Evans</dc:creator>
            <dc:creator>Bas Westerbaan</dc:creator>
            <dc:creator>John Engates</dc:creator>
        </item>
        <item>
            <title><![CDATA[A look at the latest post-quantum signature standardization candidates]]></title>
            <link>https://blog.cloudflare.com/another-look-at-pq-signatures/</link>
            <pubDate>Thu, 07 Nov 2024 14:00:00 GMT</pubDate>
            <description><![CDATA[ NIST has standardized four post-quantum signature schemes so far, and they’re not done yet: there are fourteen new candidates in the running for standardization. ]]></description>
<content:encoded><![CDATA[ <p>On October 24, 2024, the National Institute of Standards and Technology (NIST) <a href="https://groups.google.com/a/list.nist.gov/g/pqc-forum/c/khAfIZPktRE/m/bBZWmET-AAAJ"><u>announced</u></a> that they’re advancing fourteen post-quantum signature schemes to the second round of the “<a href="https://csrc.nist.gov/projects/pqc-dig-sig"><u>signatures on ramp</u></a>” competition. “Post-quantum” means that these algorithms are designed to resist <a href="https://blog.cloudflare.com/the-quantum-menace/"><u>the attack of quantum computers</u></a>. NIST already standardized four post-quantum signature schemes (<a href="https://blog.cloudflare.com/nists-first-post-quantum-standards/"><u>ML-DSA, SLH-DSA</u></a>, <a href="https://csrc.nist.gov/News/2020/stateful-hash-based-signature-schemes-sp-800-208"><u>XMSS, and LMS</u></a>) and they are drafting a standard for a fifth (<a href="https://falcon-sign.info/"><u>Falcon</u></a>). Why do we need even more, you might ask? We’ll get to that.</p><p>A regular reader of the blog will know that this is not the first time we’ve taken the measure of post-quantum signatures. In <a href="https://blog.cloudflare.com/sizing-up-post-quantum-signatures/"><u>2021</u></a> we took a first hard look and reported on the performance impact we expect from large-scale measurements. Since then, dozens of new post-quantum algorithms have been proposed. Many of them have been submitted to this new NIST competition. We discussed some of the more promising ones in our <a href="https://blog.cloudflare.com/pq-2024/"><u>early 2024 blog post</u></a>.</p><p>In this blog post, we will go over the fourteen schemes advanced to the second round of the on ramp and discuss their feasibility for use in TLS — the protocol that secures browsing the Internet. The defining feature of practically all of them is that they require many more bytes on the wire. 
Back in 2021 we shared <a href="https://blog.cloudflare.com/sizing-up-post-quantum-signatures/"><u>experimental results</u></a> on the impact of these extra bytes. Today, we will share some surprising statistics on how TLS is used in practice. One is that, on more than half of QUIC connections today, almost half the data sent is just for the certificates.</p><p>For broader context and an introduction to the post-quantum migration, check out our <a href="https://blog.cloudflare.com/pq-2024"><u>early 2024 blog post</u></a>. One take-away to mention here: there will be two migrations for TLS. First, we urgently need to migrate key agreement to <a href="https://www.cloudflare.com/learning/ssl/quantum/what-is-post-quantum-cryptography/">post-quantum cryptography</a> to protect against <a href="https://en.wikipedia.org/wiki/Harvest_now,_decrypt_later"><u>attackers that store encrypted communication today</u></a> in order to decrypt it in the future when a quantum computer is available. The industry is making good progress here: <a href="https://radar.cloudflare.com/adoption-and-usage#post-quantum-encryption-adoption"><u>18% of human requests</u></a> to websites using Cloudflare are <a href="https://blog.cloudflare.com/post-quantum-for-all/"><u>secured</u></a> using post-quantum key agreement. The second migration, to post-quantum signatures (certificates), is not as urgent: we will need to have this sorted by the time the quantum computer arrives. However, it will be a bigger challenge.</p>
    <div>
      <h2>The signatures in TLS</h2>
      <a href="#the-signatures-in-tls">
        
      </a>
    </div>
    <p>Before we have a look at the long list of post-quantum signature algorithms and their performance characteristics, let’s go through the signatures involved when browsing the Internet and their particular constraints.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/415VcZzABkhZjT60GRkQZM/f30ae24bd14e86534efd3e74d15eb5b5/image3.png" />
          </figure><p>When you visit a website, the browser establishes a TLS connection with the server for that website. The connection starts with a cryptographic handshake. During this handshake, to authenticate the connection, the server signs the transcript so far, and presents the browser with a TLS <i>leaf</i> certificate to prove that it’s allowed to serve the website. This <i>leaf</i> certificate is signed by a certification authority (CA). Typically, it’s not signed by the CA’s <i>root</i> certificate, but by an <i>intermediate</i> CA certificate, which in turn is signed by the root CA, or another intermediate. That’s not all: a leaf certificate has to include at least two <i>signed certificate timestamps</i> (SCTs). These SCTs are signatures created by <a href="https://blog.cloudflare.com/introducing-certificate-transparency-and-nimbus/"><u>certificate transparency (CT) logs</u></a> to attest they’ve been publicly logged. <a href="https://certificate.transparency.dev/howctworks/"><u>Certificate Transparency</u></a> is what enables you to look up a certificate on websites such as <a href="http://crt.sh"><u>crt.sh</u></a> and <a href="https://www.merklemap.com/"><u>merklemap</u></a>. In the future, three or more SCTs might be required. Finally, servers may also send an <a href="https://blog.cloudflare.com/high-reliability-ocsp-stapling/"><u>OCSP staple</u></a> to demonstrate a certificate hasn’t been revoked.</p><p>Thus, we’re looking at a minimum of five signatures (not counting the OCSP staple) and two public keys transmitted across the network to establish a new TLS connection.</p>
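A quick back-of-the-envelope calculation shows why those five signatures and two public keys matter. Using the byte counts from the table later in this post, and ignoring certificate metadata, OCSP staples, and any compression, a sketch of the on-wire authentication cost:

```python
def handshake_auth_bytes(pk: int, sig: int) -> int:
    # 5 signatures (handshake transcript, leaf, intermediate, 2 SCTs)
    # plus 2 public keys (leaf and intermediate) cross the wire.
    return 5 * sig + 2 * pk

# Sizes in bytes, taken from the table below.
ed25519 = handshake_auth_bytes(pk=32, sig=64)        # all-Ed25519 chain
ml_dsa_44 = handshake_auth_bytes(pk=1312, sig=2420)  # all-ML-DSA-44 chain

print(ed25519, ml_dsa_44)  # 384 vs 14724 bytes
```

Swapping Ed25519 for ML-DSA-44 everywhere inflates the handshake’s authentication bytes by roughly a factor of 38 in this simplified count.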
    <div>
      <h3>Tailoring</h3>
      <a href="#tailoring">
        
      </a>
    </div>
    <p>Only the handshake transcript signature is created <i>online</i>; the other signatures are “offline”. That is, they are created ahead of time. For these offline signatures, fast verification is much more important than fast signing. On the other hand, for the handshake signature, we want to minimize the sum of signing and verification time.</p><p>Only the public keys of the leaf and intermediate certificates are transmitted on the wire during the handshake, and for those we want to minimize the combined size of the signature and the public key. For the other signatures, the public key is not transmitted during the handshake, and thus a scheme with larger public keys would be tolerable, and preferable if it trades larger public keys for smaller signatures.</p>
    <div>
      <h2>The algorithms</h2>
      <a href="#the-algorithms">
        
      </a>
    </div>
    <p>Now that we’re up to speed, let’s have a look at the candidates that progressed (marked by 🤔 below), compared to the classical algorithms vulnerable to quantum attack (marked by ❌), and the post-quantum algorithms that are already standardized (✅) or soon will be (📝). Each submission proposes several variants; from each, we list the variants most relevant to TLS. To explore all variants, check out <a href="https://research.cloudflare.com/outreach/academic-programs/interns/thom-wiggers/"><u>Thom Wiggers</u></a>’ <a href="https://pqshield.github.io/nist-sigs-zoo/"><u>signatures zoo</u></a>.</p>
<div><table><thead>
  <tr>
    <th></th>
    <th></th>
    <th></th>
    <th colspan="2"><span>Sizes (bytes)</span></th>
    <th colspan="2"><span>CPU time (lower is better)</span></th>
  </tr></thead>
<tbody>
  <tr>
    <td><span>Family</span></td>
    <td><span>Name variant</span></td>
    <td></td>
    <td><span>Public key</span></td>
    <td><span>Signature</span></td>
    <td><span>Signing</span></td>
    <td><span>Verification</span></td>
  </tr>
  <tr>
    <td><span>Elliptic curves</span></td>
    <td><span>Ed25519</span></td>
    <td><span>❌</span></td>
    <td><span>32</span></td>
    <td><span>64</span></td>
    <td><span>0.15</span></td>
    <td><span>1.3</span></td>
  </tr>
  <tr>
    <td><span>Factoring</span></td>
    <td><span>RSA<small> 2048</small></span></td>
    <td><span>❌</span></td>
    <td><span>256</span></td>
    <td><span>256</span></td>
    <td><span>80</span></td>
    <td><span>0.4</span></td>
  </tr>
  <tr>
    <td><span>Lattices</span></td>
    <td><span>ML-DSA <small>44</small></span></td>
    <td><span>✅</span></td>
    <td><span>1,312</span></td>
    <td><span>2,420</span></td>
    <td><span>1 (baseline)</span></td>
    <td><span>1 (baseline)</span></td>
  </tr>
  <tr>
    <td rowspan="3"><span>Symmetric</span></td>
    <td><span>SLH-DSA <small>128s</small></span></td>
    <td><span>✅</span></td>
    <td><span>32</span></td>
    <td><span>7,856</span></td>
    <td><span>14,000</span></td>
    <td><span>40</span></td>
  </tr>
  <tr>
    <td><span>SLH-DSA <small>128f</small></span></td>
    <td><span>✅</span></td>
    <td><span>32</span></td>
    <td><span>17,088</span></td>
    <td><span>720</span></td>
    <td><span>110</span></td>
  </tr>
  <tr>
    <td><span>LMS <small>M4_H20_W8</small></span></td>
    <td><span>✅</span></td>
    <td><span>48</span></td>
    <td><span>1,112</span></td>
    <td><span>2.9</span> ⚠️</td>
    <td><span>8.4</span></td>
  </tr>
  <tr>
    <td><span>Lattices</span></td>
    <td><span>Falcon <small>512</small></span></td>
    <td><span>📝</span></td>
    <td><span>897</span></td>
    <td><span>666</span></td>
    <td><span>3 ⚠️</span></td>
    <td><span>0.7</span></td>
  </tr>
  <tr>
    <td rowspan="2"><span>Codebased</span></td>
    <td><span>CROSS <small>R-SDP(G)1 small</small></span></td>
    <td><span>🤔</span></td>
    <td><span>38</span></td>
    <td><span>7,956</span></td>
    <td><span>20</span></td>
    <td><span>35</span></td>
  </tr>
  <tr>
    <td><span>LESS <small>1s</small></span></td>
    <td><span>🤔</span></td>
    <td><span>97,484</span></td>
    <td><span>5,120</span></td>
    <td><span>620</span></td>
    <td><span>1800</span></td>
  </tr>
  <tr>
    <td rowspan="5"><span>MPC in the head</span></td>
    <td><span>Mirath <small>Mirith Ia fast</small></span></td>
    <td><span>🤔</span></td>
    <td><span>129</span></td>
    <td><span>7,877</span></td>
    <td><span>25</span></td>
    <td><span>60</span></td>
  </tr>
  <tr>
    <td><span>MQOM <small>L1-gf251-fast</small></span></td>
    <td><span>🤔</span></td>
    <td><span>59</span></td>
    <td><span>7,850</span></td>
    <td><span>35</span></td>
    <td><span>85</span></td>
  </tr>
  <tr>
    <td><span>PERK <small>I-fast5</small></span></td>
    <td><span>🤔</span></td>
    <td><span>240</span></td>
    <td><span>8,030</span></td>
    <td><span>20</span></td>
    <td><span>40</span></td>
  </tr>
  <tr>
    <td><span>RYDE <small>128F</small></span></td>
    <td><span>🤔</span></td>
    <td><span>86</span></td>
    <td><span>7,446</span></td>
    <td><span>15</span></td>
    <td><span>40</span></td>
  </tr>
  <tr>
    <td><span>SDitH <small>gf251-L1-hyp</small></span></td>
    <td><span>🤔</span></td>
    <td><span>132</span></td>
    <td><span>8,496</span></td>
    <td><span>30</span></td>
    <td><span>80</span></td>
  </tr>
  <tr>
    <td><span>VOLE in the head</span></td>
    <td><span>FAEST <small>EM-128f</small></span></td>
    <td><span>🤔</span></td>
    <td><span>32</span></td>
    <td><span>5,696</span></td>
    <td><span>6</span></td>
    <td><span>18</span></td>
  </tr>
  <tr>
    <td><span>Lattices</span></td>
    <td><span>HAWK <small>512</small></span></td>
    <td><span>🤔</span></td>
    <td><span>1,024</span></td>
    <td><span>555</span></td>
    <td><span>0.25</span></td>
    <td><span>1.2</span></td>
  </tr>
  <tr>
    <td><span>Isogeny</span></td>
    <td><span>SQISign <small>I</small></span></td>
    <td><span>🤔</span></td>
    <td><span>64</span></td>
    <td><span>177</span></td>
    <td><span>17,000</span></td>
    <td><span>900</span></td>
  </tr>
  <tr>
    <td rowspan="8"><span>Multivariate</span></td>
    <td><span>MAYO <small>one</small></span></td>
    <td><span>🤔</span></td>
    <td><span>1,168</span></td>
    <td><span>321</span></td>
    <td><span>1.4</span></td>
    <td><span>1.4</span></td>
  </tr>
  <tr>
    <td><span>MAYO <small>two</small></span></td>
    <td><span>🤔</span></td>
    <td><span>5,488</span></td>
    <td><span>180</span></td>
    <td><span>1.7</span></td>
    <td><span>0.8</span></td>
  </tr>
  <tr>
    <td><span>QR-UOV <small>I-(31,165,60,3)</small></span></td>
    <td><span>🤔</span></td>
    <td><span>23,657</span></td>
    <td><span>157</span></td>
    <td><span>75</span></td>
    <td><span>125</span></td>
  </tr>
  <tr>
    <td><span>SNOVA <small>(24,5,4)</small></span></td>
    <td><span>🤔</span></td>
    <td><span>1,016</span></td>
    <td><span>248</span></td>
    <td><span>0.9</span></td>
    <td><span>1.4</span></td>
  </tr>
  <tr>
    <td><span>SNOVA <small>(25,8,3)</small></span></td>
    <td><span>🤔</span></td>
    <td><span>2,320</span></td>
    <td><span>165</span></td>
    <td><span>0.9</span></td>
    <td><span>1.8</span></td>
  </tr>
  <tr>
    <td><span>SNOVA <small>(37,17,2)</small></span></td>
    <td><span>🤔</span></td>
    <td><span>9,842</span></td>
    <td><span>106</span></td>
    <td><span>1</span></td>
    <td><span>1.2</span></td>
  </tr>
  <tr>
    <td><span>UOV <small>Is-pkc</small></span></td>
    <td><span>🤔</span></td>
    <td><span>66,576</span></td>
    <td><span>96</span></td>
    <td><span>0.3</span></td>
    <td><span>2.3</span></td>
  </tr>
  <tr>
    <td><span>UOV <small>Ip-pkc</small></span></td>
    <td><span>🤔</span></td>
    <td><span>43,576</span></td>
    <td><span>128</span></td>
    <td><span>0.3</span></td>
    <td><span>0.8</span></td>
  </tr>
</tbody></table></div><p>Some notes about the table. It compares selected variants of the submissions that progressed to the second round of the NIST PQC signature on-ramp with earlier existing traditional and post-quantum schemes at the security level of AES-128. CPU times are taken from the <a href="https://pqshield.github.io/nist-sigs-zoo/"><u>signatures zoo</u></a>, which collected them from the submission documents and some later advances. CPU performance varies significantly by platform and implementation, and should only be taken as a rough indication. We are early in the competition, and the on-ramp schemes will evolve: some will improve drastically (both in compute and size), whereas others will regress to counter new attacks. Check out <a href="https://pqshield.github.io/nist-sigs-zoo/"><u>the zoo</u></a> for the latest numbers. We marked Falcon signing with a <i>⚠️</i>, as Falcon signing is hard to implement in a fast and timing side-channel secure manner. LMS signing has a ⚠️, as secure LMS signing requires keeping a state, and the listed signing time assumes a 32MB cache. This will be discussed later on.</p><p>That’s a lot of algorithms, and we didn’t even list all the variants. One thing is clear: none of them performs as well as classical elliptic curve signatures across the board. Let’s start with NIST’s 2022 picks.</p>
    <div>
      <h3>ML-DSA, SLH-DSA, and Falcon</h3>
      <a href="#ml-dsa-slh-dsa-and-falcon">
        
      </a>
    </div>
<p>The most viable general purpose post-quantum signature scheme standardized today is the lattice-based <b>ML-DSA</b> (<a href="https://nvlpubs.nist.gov/nistpubs/fips/nist.fips.204.pdf"><u>FIPS 204</u></a>), which started its life as <a href="https://pq-crystals.org/dilithium/index.shtml"><u>Dilithium</u></a>. It’s light on the CPU and reasonably straightforward to implement. The big downside is that its signatures and public keys are large: 2.4kB and 1.3kB respectively. Here and for the balance of the blog post, we will only consider the variants at the AES-128 security level unless stated otherwise. Adding ML-DSA adds 14.7kB to the TLS handshake (two 1312-byte public keys plus five 2420-byte signatures).</p><p><b>SLH-DSA</b> (<a href="https://nvlpubs.nist.gov/nistpubs/fips/nist.fips.205.pdf"><u>FIPS 205</u></a>, née <a href="https://sphincs.org/"><u>SPHINCS</u><u><sup>+</sup></u></a>) looks strictly worse, adding 39kB and significant computational overhead for both signing and verification. The advantage of SLH-DSA, being solely based on hashes, is that its security is much better understood than ML-DSA’s. The lowest security level of SLH-DSA is generally more trusted than the highest security levels of many other schemes.</p><p><a href="https://falcon-sign.info/"><b><u>Falcon</u></b></a> (to be renamed <a href="https://www.nist.gov/news-events/news/2024/08/nist-releases-first-3-finalized-post-quantum-encryption-standards"><u>FN-DSA</u></a>) seems much better than SLH-DSA and ML-DSA if you look only at the numbers in the table. There is a catch though. For fast signing, Falcon requires fast floating-point arithmetic, which turns out to be <a href="https://blog.cloudflare.com/nist-post-quantum-surprise/#digital-signatures"><u>difficult to implement securely</u></a>. Signing can be performed securely with emulated floating-point arithmetic, but that makes it roughly twenty times slower. This makes Falcon ill-suited for online signatures. 
Furthermore, the signing procedure of Falcon is complicated to implement. On the other hand, Falcon verification is simple and doesn’t require floating-point arithmetic.</p><p>Leaning into Falcon’s strength by using ML-DSA for the handshake signature and Falcon for the rest, we’re only adding 7.3kB (at the AES-128 security level).</p><p>There is one more difficulty with Falcon worth mentioning: it’s missing a middle security level. That means that if Falcon-512 (which we considered so far) turns out to be weaker than expected, then the next one up is Falcon-1024, which doubles the signature and public key sizes. That amounts to adding about 11kB.</p>
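<p>To make the size arithmetic above concrete, here is a small Python sketch (ours, not from the original analysis) that reproduces the 14.7kB figure for ML-DSA: a typical handshake transmits two public keys (leaf and intermediate certificate) and five signatures (handshake, two certificate signatures, and two SCTs).</p>

```python
# Extra bytes a certificate chain adds to the TLS handshake when a single
# signature scheme is used everywhere: two transmitted public keys (leaf and
# intermediate) plus five signatures (handshake, two certificates, two SCTs).
def added_bytes(pk_size: int, sig_size: int, pks: int = 2, sigs: int = 5) -> int:
    return pks * pk_size + sigs * sig_size

# ML-DSA-44 sizes in bytes at the AES-128 security level (FIPS 204).
ML_DSA_44 = (1312, 2420)

print(added_bytes(*ML_DSA_44))  # 14724 bytes, the ~14.7kB quoted above
```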
    <div>
      <h3>Stateful hash-based signatures</h3>
      <a href="#stateful-hash-based-signatures">
        
      </a>
    </div>
<p>The very first post-quantum signature algorithms standardized are the stateful hash-based <a href="https://datatracker.ietf.org/doc/html/rfc8391"><u>XMSS</u><u><sup>(MT)</sup></u></a> and <a href="https://datatracker.ietf.org/doc/html/rfc8554#page-45"><u>LMS/HSS</u></a>. These are hash-based signatures, similar to SLH-DSA, and so we have a lot of trust in their security. They come with a big drawback: when creating a keypair you prepare a finite number of <i>signature slots</i>. For the variant listed in the table, there are about one million slots. Each slot can only be used once. If by accident a slot is used twice, then anyone can (<a href="https://eprint.iacr.org/2016/1042"><u>probably</u></a>) use those two signatures to forge any new signature from that slot and break into the connection the certificate is supposed to protect. Remembering which slots have been used is the <i>state</i> in <i>stateful</i> hash-based signatures. Certificate authorities might be able to keep the state, but for general use, Adam Langley calls keeping the state a <a href="https://www.imperialviolet.org/2013/07/18/hashsig.html"><u>huge foot-cannon</u></a>.</p><p>There are more quirks to keep in mind for stateful hash-based signatures. To start, during key generation, each slot needs to be prepared. Preparing each slot takes approximately the same amount of time as verifying a signature. Preparing all one million slots takes a couple of hours on a single core. For intermediate certificates of a popular certificate authority, a million slots are not enough. Indeed, Let’s Encrypt issues more than <a href="https://letsencrypt.org/stats/"><u>four million certificates per day</u></a>. Instead of increasing the number of slots directly, we can use an extra intermediate. This is what XMSS<sup>MT</sup> and HSS do internally. 
A final quirk of stateful hash-based signatures is that their security is bottlenecked on non-repudiation: the listed LMS instance has 192 bits of security against forgery, but only 96 bits against the signer themselves creating a single signature that verifies two different messages.</p><p>Even when stateful hash-based signatures or Falcon can be used, we are still adding a lot of bytes on the wire. From <a href="https://blog.cloudflare.com/sizing-up-post-quantum-signatures/"><u>earlier experiments</u></a> we know that that will impact performance significantly. We summarize those findings later in this blog post, and share some new data. The short of it: it would be nice to have a post-quantum signature scheme that outperforms Falcon, or at least outperforms ML-DSA and is easier to deploy. This is one of the reasons NIST is running the second competition.</p><p>With that in mind, let’s have a look at the candidates.</p>
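<p>Before we do, here is an illustrative Python sketch (toy code, not a real LMS implementation) of the statefulness pitfall described above: every one-time slot may be spent at most once, so the signer has to persist a counter before each signature is released, and must refuse to sign once the slots run out.</p>

```python
class StatefulSigner:
    """Toy model of a stateful hash-based signer; the signatures are fake."""

    def __init__(self, num_slots: int):
        self.num_slots = num_slots  # e.g. ~1 million for the LMS variant above
        self.next_slot = 0          # the "state": must survive crashes and clones

    def sign(self, message: bytes) -> tuple[int, bytes]:
        if self.next_slot >= self.num_slots:
            raise RuntimeError("key exhausted: no one-time slots left")
        slot = self.next_slot
        # In a real deployment this increment must hit durable storage *before*
        # the signature leaves the machine; reusing a slot enables forgeries.
        self.next_slot += 1
        return slot, b"placeholder-signature-for-slot-%d" % slot

signer = StatefulSigner(num_slots=2)
signer.sign(b"hello")
signer.sign(b"world")
try:
    signer.sign(b"again")  # all slots spent: signing must refuse, not reuse
except RuntimeError as exc:
    print(exc)  # key exhausted: no one-time slots left
```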
    <div>
      <h3>Structured lattice alternatives</h3>
      <a href="#structured-lattice-alternatives">
        
      </a>
    </div>
    <p>With only performance in mind, it is surprising that half of the candidates do worse than ML-DSA. There is a good reason for it: NIST is worried that we’re putting all our eggs in the structured lattices basket. SLH-DSA is an alternative to lattices today, but it doesn’t perform well enough for many applications. As such, NIST <a href="https://csrc.nist.gov/csrc/media/Projects/pqc-dig-sig/documents/call-for-proposals-dig-sig-sept-2022.pdf"><u>would primarily like to standardize</u></a> another general purpose signature algorithm that is not based on structured lattices, and that outperforms SLH-DSA. We will briefly touch upon these schemes here.</p>
    <div>
      <h4>Code-based</h4>
      <a href="#code-based">
        
      </a>
    </div>
<p><a href="https://www.cross-crypto.com/"><u>CROSS</u></a> and <a href="https://www.less-project.com/#:~:text=LESS%20(Linear%20Equivalence%20Signature%20Scheme,the%20Linear%20Code%20Equivalence%20problem."><u>LESS</u></a> are two <b>code-based signature</b> schemes. <b>CROSS</b> is based on a variant of the traditional syndrome decoding problem. Its signatures are about as large as SLH-DSA’s, but its edge over SLH-DSA is much better signing times. <b>LESS</b> is based on the novel <a href="https://eprint.iacr.org/2023/847"><u>linear equivalence problem</u></a>. It only outperforms SLH-DSA on signature size, requiring larger public keys in return. For use in TLS, the high verification times of LESS are especially problematic. Given that LESS is based on a new approach, it will be interesting to see how much it can improve going forward.</p>
    <div>
      <h4>Multi-party computation in the head</h4>
      <a href="#multi-party-computation-in-the-head">
        
      </a>
    </div>
<p>Five of the submissions (<a href="https://pqc-mira.org/"><u>Mira</u></a><a href="https://pqc-mirith.org/"><u>th</u></a>, <a href="https://mqom.org/"><u>MQOM</u></a>, <a href="https://pqc-perk.org/"><u>PERK</u></a>, <a href="https://pqc-ryde.org/"><u>RYDE</u></a>, <a href="https://sdith.org/"><u>SDitH</u></a>) use the <b>Multi-Party Computation in the Head</b> (MPCitH) paradigm.</p><p>It has been exciting to see the developments in this field. To explain a bit about it, let’s go back to <a href="https://microsoft.github.io/Picnic/"><u>Picnic</u></a>. Picnic was an MPCitH submission to the previous NIST PQC competition. In essence, its private key is a random key <i>x</i>, and its public key is the hash <i>H(x)</i>. A signature is a zero-knowledge proof demonstrating that the signer knows <i>x</i>. So far, it’s pretty similar in shape to other signature schemes that use zero-knowledge proofs. The difference is in how that proof is created. We have to talk about multi-party computation (MPC) first. MPC starts with splitting the key <i>x</i> into shares, using <a href="https://en.wikipedia.org/wiki/Shamir%27s_secret_sharing"><u>Shamir secret sharing</u></a> for instance, and giving each party one share. No single party knows the value of <i>x</i> itself, but they can recover it by recombining. The insight of MPC is that these parties (with some communication) can perform arbitrary computation on the data they shared. In particular, they can compute a secret share of <i>H(x)</i>. Now, we can use that to make a zero-knowledge proof as follows. The signer simulates all parties in the multi-party protocol to compute and recombine <i>H(x)</i>. 
The signer then reveals part of the intermediate values of the computation using <a href="https://en.wikipedia.org/wiki/Fiat%E2%80%93Shamir_heuristic"><u>Fiat–Shamir</u></a>: enough so that none of the parties could have cheated on any of the steps, but not enough that it allows the verifier to figure out <i>x</i> themselves.</p><p>For <i>H</i>, Picnic uses <a href="https://lowmc.github.io/"><u>LowMC</u></a>, a block cipher for which it’s easy to do the multi-party computation. The initial submission of Picnic performed poorly compared to SLH-DSA with 32kB signatures. For the second round, Picnic was improved considerably, boasting 12kB signatures. SLH-DSA won out with smaller signatures, and more conservative security assumptions: Picnic relies on LowMC which didn’t receive as much study as the hashes on which SLH-DSA is based.</p><p>Back to the MPCitH candidates that progressed. All of them have variants (listed in the table) with signature sizes similar to or better than SLH-DSA’s, while outperforming SLH-DSA considerably in signing time. There are variants with even smaller signatures, but their verification times are significantly higher. The difference between the MPCitH candidates is the underlying <a href="https://en.wikipedia.org/wiki/Trapdoor_function"><u>trapdoor</u></a> they use. In Picnic the trapdoor was LowMC. For both RYDE and SDitH, the trapdoors used are based on variants of <a href="https://en.wikipedia.org/wiki/Decoding_methods#Syndrome_decoding"><u>syndrome decoding</u></a>, and could be classified as code-based cryptography.</p><p>Over the years, MPCitH schemes have seen remarkable improvements in performance, and we don’t seem to have reached the end of it yet. There is still some way to go before these schemes would be competitive in TLS: signature size needs to be reduced without sacrificing the currently borderline acceptable verification performance. 
On top of that, not all underlying trapdoors of the various schemes have seen enough scrutiny.</p>
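<p>The secret-sharing step at the heart of the MPC story above can be illustrated in a few lines of Python. This toy uses XOR-based additive sharing rather than Shamir’s scheme, and it shows only the sharing itself, not the in-the-head proof.</p>

```python
import secrets

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def share(x: bytes, parties: int) -> list[bytes]:
    """Split x into shares that XOR back together to x."""
    shares = [secrets.token_bytes(len(x)) for _ in range(parties - 1)]
    last = x
    for s in shares:
        last = xor_bytes(last, s)
    return shares + [last]

def recombine(shares: list[bytes]) -> bytes:
    out = bytes(len(shares[0]))
    for s in shares:
        out = xor_bytes(out, s)
    return out

key = secrets.token_bytes(16)
parts = share(key, parties=3)
assert recombine(parts) == key  # all parties together recover x
# Any proper subset of the shares is uniformly random on its own, so no
# single simulated party learns anything about x.
```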
    <div>
      <h4>FAEST</h4>
      <a href="#faest">
        
      </a>
    </div>
<p><a href="https://faest.info/"><u>FAEST</u></a> is a peek into the future. It’s similar to the MPCitH candidates in that its security reduces to an underlying trapdoor. It is quite different from those in that FAEST’s underlying trapdoor is AES. That means that, given the security analysis of FAEST is correct, it’s on the same footing as SLH-DSA. Despite the conservative trapdoor, FAEST beats the MPCitH candidates in performance. It also beats SLH-DSA on all metrics.</p><p>At the AES-128 security level, FAEST’s signatures are larger than ML-DSA’s. For those who want to hedge against improvements in lattice attacks, and would only consider higher security levels of ML-DSA, FAEST becomes an attractive alternative. ML-DSA-65 has a combined public key and signature size of 5.2kB, which is similar to FAEST EM-128f. ML-DSA-65 still has a slight edge in performance.</p><p>FAEST is based on the 2023 <a href="https://eprint.iacr.org/2023/996.pdf"><u>VOLE in the Head</u></a> paradigm. These are new ideas, and it seems likely their full potential has not yet been realized: FAEST will probably see further improvements.</p><p>The VOLE in the Head techniques can and probably will be adopted by some of the MPCitH submissions. It will be interesting to see how far VOLEitH can be pushed when applied to less conservative trapdoors. Surpassing ML-DSA seems in reach, but Falcon? We will see.</p><p>Now, let’s move on to the submissions that surpass ML-DSA today.</p>
    <div>
      <h3>HAWK</h3>
      <a href="#hawk">
        
      </a>
    </div>
    <p><a href="https://hawk-sign.info/"><u>HAWK</u></a> is similar to Falcon, but improves upon it in a few key ways. Most importantly, it doesn’t rely on floating point arithmetic. Furthermore, its signing procedure is simpler and much faster. This makes HAWK suitable for online signatures. Using HAWK adds 4.8kB. Apart from size and speed, it’s beneficial to rely on only a single scheme: using multiple schemes increases the attack surface for algorithmic weaknesses and implementation mistakes.</p><p>Similar to Falcon, HAWK is missing a middle security level. Using HAWK-1024 doubles sizes (9.6kB).</p><p>There is one downside to HAWK over Falcon: HAWK relies on a new security assumption, the <a href="https://eprint.iacr.org/2021/1332.pdf"><u>lattice isomorphism problem</u></a>.</p>
    <div>
      <h3>SQISign</h3>
      <a href="#sqisign">
        
      </a>
    </div>
<p><a href="https://sqisign.org/"><u>SQISign</u></a> is based on <a href="https://blog.cloudflare.com/sidh-go/"><u>isogenies</u></a>. Famously, SIKE, another isogeny-based scheme in the previous competition, got <a href="https://eprint.iacr.org/2022/975.pdf"><u>broken badly</u></a> late into the competition. SQISign is based on a different problem, though. SQISign is remarkable for having very small signatures and public keys: it even beats RSA-2048. The glaring downside is that it is computationally very expensive to compute and verify a signature. Isogeny-based signatures are a very active area of research with many advances over the years.</p><p>It seems unlikely that any future SQISign variant will sign fast enough for the TLS handshake signature. Furthermore, SQISign signing seems to be hard to implement in a timing side-channel secure manner. What about the other signatures of TLS? The bottleneck is verification time. It would be acceptable for SQISign to have larger signatures, if that allows it to have faster verification time.</p>
    <div>
      <h3>UOV</h3>
      <a href="#uov">
        
      </a>
    </div>
    <p><a href="https://www.uovsig.org/"><u>UOV</u></a> (unbalanced oil and vinegar) is an old multivariate scheme with large public keys (67kB), but small signatures (96 bytes). Furthermore, it has excellent signing and verification performance. These interesting size tradeoffs make it quite suited for use cases where the public key is known in advance.</p><p>If we use UOV in TLS for the SCTs and root CA, whose public keys are not transmitted when setting up the connection, together with ML-DSA for the others, we’re looking at 7.2kB. That’s a clear improvement over using ML-DSA everywhere, and a tad better than combining ML-DSA with Falcon.</p><p>When combining UOV with HAWK instead of ML-DSA, we’re looking at adding only 3.4kB. That’s better again, but only a marginal improvement over using HAWK everywhere (4.8kB). The relative advantage of UOV improves if the certificate transparency ecosystem moves towards requiring more SCTs.</p><p>For SCTs, the size of UOV public keys seems acceptable, as there are not that many certificate transparency logs at the moment. Shipping a UOV public key for hundreds of root CAs is more painful, but within reason. Even with <a href="https://blog.cloudflare.com/pq-2024/#leaving-out-intermediate-certificates"><u>intermediate suppression</u></a>, using UOV in each of the thousands of intermediate certificates does not make sense.</p>
    <div>
      <h3>Structured multivariate</h3>
      <a href="#structured-multivariate">
        
      </a>
    </div>
<p>Since the original UOV, over the decades, many attempts have been made to add additional structure to UOV, to get a better balance between the size of the signature and public key. Unfortunately, many of these <i>structured multivariate</i> schemes, which include GeMSS and Rainbow, have been broken.</p><p>Let’s have a look at the multivariate candidates. The most interesting variant of <b>QR-UOV</b> for TLS has 24kB public keys and 157-byte signatures. The current verification times are unacceptably high, but there seems to be plenty of room for an improved implementation. There is also a variant with a 12kB public key, but its verification time needs to come down even further. In any case, the combined size of QR-UOV’s public key and signatures remains large enough that it’s not a competitor of ML-DSA or Falcon. Instead, QR-UOV competes with UOV, where UOV’s public keys are unwieldy. Although QR-UOV hasn’t seen a direct attack yet, a similar scheme has recently been <a href="https://link.springer.com/chapter/10.1007/978-3-031-62746-0_9"><u>weakened</u></a> and another <a href="https://link.springer.com/chapter/10.1007/978-3-030-44223-1_18"><u>broken</u></a>.</p><p>Finally, we get to <a href="https://snova.pqclab.org/"><b><u>SNOVA</u></b></a> and <a href="https://pqmayo.org/"><b><u>MAYO</u></b></a>. Although they’re based on a different technique, they have a lot of properties in common. To start, they have the useful property that they allow for a granular tradeoff between public key and signature size. This allows us to use a different variant optimized for whether we’re transmitting the public key in the connection or not. Using MAYO<sub>one</sub> for the leaf and intermediate, and MAYO<sub>two</sub> for the others, adds 3.5kB. Similarly with SNOVA, we add 2.8kB. On top of that, both schemes have excellent signing and verification performance.</p><p>The elephant in the room is the security. 
Towards the end of the first round, a new <a href="https://www.jstage.jst.go.jp/article/jsiaml/15/0/15_53/_article"><u>generic attack</u></a> on underdefined multivariate systems prompted the MAYO team to <a href="https://groups.google.com/a/list.nist.gov/g/pqc-forum/c/jEKfDYUgdec/m/0UP_GNKSAwAJ"><u>tweak their parameters</u></a> slightly. SNOVA has been hit a bit harder by three attacks (<a href="https://dl.acm.org/doi/10.1145/3659467.3659900"><u>1</u></a>, <a href="https://eprint.iacr.org/2024/1297"><u>2</u></a>, <a href="https://eprint.iacr.org/2024/1770.pdf"><u>3</u></a>), but so far it seems that SNOVA’s parameters can be adjusted to compensate.</p><p>OK, we’ve had a look at all the candidates. What did we learn? There are some very promising algorithms that will reduce the number of bytes required on the wire compared to ML-DSA and Falcon. None of the practical ones, however, lets us avoid adding extra bytes to TLS. So, given that we must add some bytes: how many extra bytes are too many?</p>
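<p>As a quick sanity check, the mixed-variant totals quoted above can be reproduced from the sizes in the table. The split below is our reconstruction, not spelled out in the original text: the small-public-key variant covers the two transmitted public keys (leaf and intermediate) and the two signatures made under them, while the small-signature variant covers the three signatures (intermediate certificate and two SCTs) whose public keys are pre-distributed.</p>

```python
# Byte counts when mixing two variants of one scheme, as described above.
# (pk, sig) sizes in bytes come from the table; the 2 pk / 2 sig / 3 sig
# split is our reconstruction of the quoted totals.
def mixed_overhead(transmitted: tuple[int, int], offline: tuple[int, int]) -> int:
    pk, sig = transmitted      # leaf and intermediate: pk travels on the wire
    _, offline_sig = offline   # root and SCT logs: only signatures travel
    return 2 * pk + 2 * sig + 3 * offline_sig

MAYO_ONE, MAYO_TWO = (1168, 321), (5488, 180)
SNOVA_SMALL_PK, SNOVA_SMALL_SIG = (1016, 248), (9842, 106)

print(mixed_overhead(MAYO_ONE, MAYO_TWO))               # 3518 bytes ≈ 3.5kB
print(mixed_overhead(SNOVA_SMALL_PK, SNOVA_SMALL_SIG))  # 2846 bytes ≈ 2.8kB
```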
    <div>
      <h2>How many added bytes are too many for TLS?</h2>
      <a href="#how-many-added-bytes-are-too-many-for-tls">
        
      </a>
    </div>
<p>On average, around 15 million TLS connections are established with Cloudflare per second. Upgrading each to ML-DSA would take 1.8Tbps, which is 0.6% of our current total network capacity. No problem so far. The question is how these extra bytes affect performance.</p><p>Back in 2021, we <a href="https://blog.cloudflare.com/sizing-up-post-quantum-signatures/"><u>ran a large-scale experiment</u></a> to measure the impact of big post-quantum certificate chains on connections to Cloudflare’s network over the open Internet. There were two important results. First, we saw a steep increase in the rate of client and middlebox failures when we added more than 10kB to existing certificate chains. Secondly, when adding less than 9kB, the slowdown in TLS handshake time would be approximately 15%. We felt the latter was workable, but far from ideal: such a slowdown is noticeable and people might hold off deploying post-quantum certificates before it’s too late.</p><p>Chrome is more cautious and set 10% as their target for maximum TLS handshake time regression. They <a href="https://dadrian.io/blog/posts/pqc-signatures-2024/#fnref:3"><u>report</u></a> that deploying post-quantum key agreement has already incurred a 4% slowdown in TLS handshake time, for the extra 1.1kB from server-to-client and 1.2kB from client-to-server. That slowdown is proportionally larger than the 15% we found for 9kB, but that could be explained by slower upload speeds than download speeds.</p><p>There has been pushback against the focus on TLS handshake times. One argument is that session resumption alleviates the need for sending the certificates again. A second argument is that the data required to visit a typical website dwarfs the additional bytes for post-quantum certificates. 
One example is this <a href="https://www.amazon.science/publications/the-impact-of-data-heavy-post-quantum-tls-1-3-on-the-time-to-last-byte-of-real-world-connections"><u>2024 publication</u></a>, where Amazon researchers simulated the impact of large post-quantum certificates on data-heavy TLS connections. They argue that typical connections transfer multiple requests and hundreds of kilobytes, and for those the TLS handshake slowdown disappears in the margin.</p><p>Are session resumption and hundreds of kilobytes over a connection typical though? We’d like to share what we see. We focus on QUIC connections, which are likely initiated by browsers or browser-like clients. Of all QUIC connections with Cloudflare that carry at least one HTTP request, 37% are <a href="https://blog.cloudflare.com/even-faster-connection-establishment-with-quic-0-rtt-resumption/"><u>resumptions</u></a>, meaning that key material from a previous TLS connection is reused, avoiding the need to transmit certificates. The median number of bytes transferred from server-to-client over a resumed QUIC connection is 4.4kB, while the average is 395kB. For non-resumptions the median is 7.8kB and average is 551kB. This vast difference between median and average indicates that a small fraction of data-heavy connections skews the average. In fact, only 15.8% of all QUIC connections transfer more than 100kB.</p><p>The median certificate chain today (with compression) is <a href="https://datatracker.ietf.org/doc/html/draft-ietf-tls-cert-abridge-02#section-4"><u>3.2kB</u></a>. That means that almost 40% of all data transferred from server to client on more than half of the non-resumed QUIC connections is just for the certificates, and this only gets worse with post-quantum algorithms. 
For the majority of QUIC connections, using ML-DSA as a drop-in replacement for classical signatures would more than double the number of transmitted bytes over the lifetime of the connection.</p><p>It sounds quite bad if the vast majority of data transferred for a typical connection is just for the post-quantum certificates. It’s still only a proxy for what is actually important: the effect on metrics relevant to the end-user, such as the browsing experience (e.g. <a href="https://web.dev/articles/optimize-lcp"><u>largest contentful paint</u></a>) and the amount of data those certificates take from a user’s monthly data cap. We will continue to investigate and get a better understanding of the impact.</p>
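<p>Two of the figures above are easy to check back-of-the-envelope, under the stated assumptions: about 15 million handshakes per second, each carrying the ~14.7kB of extra ML-DSA material, and a 3.2kB median compressed chain against the 7.8kB median non-resumed transfer.</p>

```python
handshakes_per_second = 15_000_000
ml_dsa_extra = 2 * 1312 + 5 * 2420  # 14,724 bytes of extra material per handshake
tbps = handshakes_per_second * ml_dsa_extra * 8 / 1e12
print(f"{tbps:.1f} Tbps")           # 1.8 Tbps, the figure quoted above

median_chain = 3.2     # kB, median compressed certificate chain today
median_transfer = 7.8  # kB, median non-resumed QUIC server-to-client transfer
print(f"{median_chain / median_transfer:.0%}")  # 41%: the "almost 40%" above
```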
    <div>
      <h2>Zooming out</h2>
      <a href="#zooming-out">
        
      </a>
    </div>
<p>That was a lot — let’s step back.</p><p>It’s great to see how much better the post-quantum signature algorithms are today in almost every family than they were in <a href="https://blog.cloudflare.com/sizing-up-post-quantum-signatures/"><u>2021</u></a>. The improvements haven’t slowed down either. Many of the algorithms that do not improve over ML-DSA for TLS today could still do so in the third round. Looking back, we are also cautioned: several algorithms considered in 2021 have since been broken.</p><p>From an implementation and performance perspective for TLS today, HAWK, SNOVA, and MAYO are all clear improvements over ML-DSA and Falcon. They are also very new, and presently we cannot depend on them without a <a href="https://blog.cloudflare.com/pq-2024/#way-forward"><u>plan B</u></a>. UOV has been around a lot longer. Due to its large public key, it will not work on its own, but it can be a very useful complement to another general purpose signature scheme.</p><p>Even with the best performers out of the competition, the way we see TLS connections used today suggests that drop-in post-quantum certificates will have a big impact on at least half of them.</p><p>In the meantime, we can also make plan B our plan A: there are several ways in which we can reduce the number of signatures used in TLS. One is to leave out intermediate certificates (<a href="https://datatracker.ietf.org/doc/html/draft-kampanakis-tls-scas-latest"><u>1</u></a>, <a href="https://datatracker.ietf.org/doc/draft-ietf-tls-cert-abridge/"><u>2</u></a>, <a href="https://datatracker.ietf.org/doc/html/draft-davidben-tls-trust-expr-04#name-intermediate-elision"><u>3</u></a>). Another is to use a KEM <a href="https://kemtls.org/"><u>instead of a signature</u></a> for handshake authentication. 
We can even get rid of all the offline signatures with a more <a href="https://datatracker.ietf.org/doc/html/draft-davidben-tls-merkle-tree-certs-03"><u>ambitious redesign</u></a> for the <a href="https://www.youtube.com/watch?v=f8unMB2Qjho"><u>vast majority</u></a> of visits: a post-quantum Internet with fewer bytes on the wire! We’ve discussed these ideas at more length in a <a href="https://blog.cloudflare.com/pq-2024/#way-forward"><u>previous blog post</u></a>.</p><p>So what does this mean for the coming years? We will continue to work with browsers to understand the end user impact of large drop-in post-quantum certificates. When certificate authorities support them (our guess: 2026), we will add support for ML-DSA certificates <a href="https://blog.cloudflare.com/post-quantum-crypto-should-be-free/"><u>for free</u></a>. This will be opt-in until cryptographically relevant quantum computers are imminent, to prevent undue performance regression. In the meantime, we will continue to pursue larger changes to the WebPKI, so that we can bring full post-quantum security to the Internet without performance compromise.</p><p>We’ve talked a lot about certificates, but what we need to care about today is encryption. Along with many across industry, including the major browsers, we have deployed the post-quantum key agreement X25519MLKEM768 across the board, and you can make sure your connections with Cloudflare are already secured against harvest-now/decrypt-later. Visit <a href="http://pq.cloudflareresearch.com"><u>pq.cloudflareresearch.com</u></a> to learn how.</p> ]]></content:encoded>
            <category><![CDATA[Post-Quantum]]></category>
            <category><![CDATA[Research]]></category>
            <category><![CDATA[Cryptography]]></category>
            <category><![CDATA[TLS]]></category>
            <guid isPermaLink="false">3mOPXbiTgeQHBChx4vUuMs</guid>
            <dc:creator>Bas Westerbaan</dc:creator>
            <dc:creator>Luke Valenta</dc:creator>
        </item>
        <item>
            <title><![CDATA[NIST’s first post-quantum standards]]></title>
            <link>https://blog.cloudflare.com/nists-first-post-quantum-standards/</link>
            <pubDate>Tue, 20 Aug 2024 21:00:00 GMT</pubDate>
            <description><![CDATA[ NIST has published the first cryptographic standards for protecting against attacks from quantum computers. Learn what this means for you and your organization. ]]></description>
            <content:encoded><![CDATA[ <p>On August 13th, 2024, the US National Institute of Standards and Technology (NIST) <a href="https://www.nist.gov/news-events/news/2024/08/nist-releases-first-3-finalized-post-quantum-encryption-standards"><u>published</u></a> the first three cryptographic standards designed to resist an <a href="https://blog.cloudflare.com/the-quantum-menace"><u>attack</u></a> from quantum computers: <a href="https://nvlpubs.nist.gov/nistpubs/fips/nist.fips.203.pdf"><u>ML-KEM</u></a>, <a href="https://nvlpubs.nist.gov/nistpubs/fips/nist.fips.204.pdf"><u>ML-DSA</u></a>, and <a href="https://nvlpubs.nist.gov/nistpubs/fips/nist.fips.205.pdf"><u>SLH-DSA</u></a>. This announcement marks a significant milestone for ensuring that today’s communications remain secure in a future world where large-scale quantum computers are a reality.</p><p>In this blog post, we briefly discuss the significance of NIST’s recent announcement, how we expect the ecosystem to evolve given these new standards, and the next steps we are taking. For a deeper dive, see <a href="https://blog.cloudflare.com/pq-2024"><u>our March 2024 blog post</u></a>.</p>
    <div>
      <h2>Why are quantum computers a threat?</h2>
      <a href="#why-are-quantum-computers-a-threat">
        
      </a>
    </div>
    <p>Cryptography is a fundamental aspect of modern technology, securing everything from online communications to financial transactions. For instance, when visiting this blog, your web browser used cryptography to establish a secure communication channel to Cloudflare’s server to ensure that you’re really talking to Cloudflare (and not an impersonator), and that the conversation remains private from eavesdroppers.</p><p>Much of the cryptography in widespread use today is based on mathematical puzzles (like <a href="https://en.wikipedia.org/wiki/RSA_(cryptosystem)"><u>factoring very large numbers</u></a>) which are computationally out of reach for classical (non-quantum) computers. We could likely continue to use traditional cryptography for decades to come if not for the advent of <a href="https://www.cloudflare.com/learning/ssl/quantum/what-is-quantum-computing/"><u>quantum computers</u></a>, devices that use properties of quantum mechanics to perform certain specialized calculations much more efficiently than traditional computers. Unfortunately, those specialized calculations include solving the mathematical puzzles upon which most widely deployed cryptography depends.</p><p>As of today, no quantum computers exist that are large and stable enough to break today’s cryptography, but experts predict that it’s only a matter of time until such a cryptographically-relevant quantum computer (CRQC) exists. For instance, more than a quarter of interviewed experts in a <a href="https://globalriskinstitute.org/publication/2023-quantum-threat-timeline-report/"><u>2023 survey</u></a> expect that a CRQC is more likely than not to appear in the next decade.</p>
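To get a feel for why those mathematical puzzles protect anything at all, here is a toy sketch (ours, purely for illustration; the numbers do not come from any real deployment) of the factoring problem underlying RSA:

```python
# Toy illustration of the "mathematical puzzle" behind RSA: given only
# n = p * q, recover the secret primes p and q. Trial division finishes in
# moments for this toy modulus, but is hopeless for a real 2048-bit one;
# Shor's algorithm on a large quantum computer would make it easy again.
def factor_semiprime(n: int) -> tuple[int, int]:
    if n % 2 == 0:
        return 2, n // 2
    f = 3
    while f * f <= n:
        if n % f == 0:
            return f, n // f
        f += 2
    raise ValueError("n has no nontrivial factor")

# 999983 and 1000003 are both prime; their product is a toy "RSA modulus".
n = 999983 * 1000003
p, q = factor_semiprime(n)
```

A real RSA modulus has over 600 decimal digits, putting trial division (and every known classical algorithm) hopelessly out of reach; Shor's algorithm collapses that barrier.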
    <div>
      <h2>What is being done about the quantum threat?</h2>
      <a href="#what-is-being-done-about-the-quantum-threat">
        
      </a>
    </div>
    <p>In recognition of the quantum threat, the US National Institute of Standards and Technology (<a href="https://nist.gov"><u>NIST</u></a>) launched a public <a href="https://csrc.nist.gov/projects/post-quantum-cryptography"><u>competition in 2016</u></a> to solicit, evaluate, and standardize new “post-quantum” cryptographic schemes that are designed to be resistant to attacks from quantum computers. On August 13, 2024, NIST <a href="https://www.nist.gov/news-events/news/2024/08/nist-releases-first-3-finalized-post-quantum-encryption-standards"><u>published</u></a> the final standards for the first three post-quantum algorithms to come out of the competition: <a href="https://nvlpubs.nist.gov/nistpubs/fips/nist.fips.203.pdf"><u>ML-KEM</u></a> for key agreement, and <a href="https://nvlpubs.nist.gov/nistpubs/fips/nist.fips.204.pdf"><u>ML-DSA</u></a> and <a href="https://nvlpubs.nist.gov/nistpubs/fips/nist.fips.205.pdf"><u>SLH-DSA</u></a> for digital signatures. A <a href="https://www.nist.gov/news-events/news/2022/07/nist-announces-first-four-quantum-resistant-cryptographic-algorithms"><u>fourth standard</u></a> based on <a href="https://falcon-sign.info/"><u>FALCON</u></a> is planned for release in late 2024 and will be dubbed FN-DSA, short for FFT (fast-Fourier transform) over NTRU-Lattice-Based Digital Signature Algorithm.</p><p>The publication of the final standards marks a significant milestone in an <a href="https://www.nist.gov/news-events/news/2016/04/nist-kicks-effort-defend-encrypted-data-quantum-computer-threat"><u>eight-year</u></a> global community effort managed by NIST to prepare for the arrival of quantum computers. Teams of cryptographers from around the world jointly submitted <a href="https://csrc.nist.gov/Projects/post-quantum-cryptography/post-quantum-cryptography-standardization/round-1-submissions"><u>82 algorithms</u></a> to the first round of the competition in 2017. 
After years of evaluation and cryptanalysis from the global cryptography community, NIST winnowed the algorithms under consideration down through several rounds until they decided upon the first four algorithms to standardize, which they <a href="https://blog.cloudflare.com/nist-post-quantum-surprise"><u>announced in 2022</u></a>.</p><p>This has been a monumental effort, and we would like to extend our gratitude to NIST and all the cryptographers and engineers across academia and industry that participated.</p><p>Security was a primary concern in the selection process, but algorithms also need to be performant enough to be deployed in real-world systems. Cloudflare’s involvement in the NIST competition began in 2019 when we <a href="https://blog.cloudflare.com/the-tls-post-quantum-experiment"><u>performed experiments</u></a> with industry partners to evaluate how algorithms under consideration performed when deployed on the open Internet. Gaining practical experience with the new algorithms was a crucial part of the evaluation process, and helped to identify and remove obstacles for deploying the final standards.</p><p>Having standardized algorithms is a significant step, but migrating systems to use these new algorithms is going to require a multi-year effort. To understand the effort involved, let’s look at two classes of traditional cryptography that are susceptible to quantum attacks: key agreement and digital signatures.</p><p><b>Key agreement</b> allows two parties that have never communicated before to establish a shared secret over an insecure communication channel (like the Internet). The parties can then use this shared secret to encrypt future communications between them. 
An adversary may be able to observe the encrypted communication going over the network, but without access to the shared secret they cannot decrypt and “see inside” the encrypted packets.</p><p>However, in what is known as the <a href="https://en.wikipedia.org/wiki/Harvest_now,_decrypt_later"><u>"harvest now, decrypt later"</u></a> threat model, an adversary can store encrypted data until some point in the future when they gain access to a sufficiently large quantum computer, and then can decrypt at their leisure. Thus, today’s communication is already at risk from a future quantum adversary, and it is urgent that we upgrade systems to use post-quantum key agreement as soon as possible.</p><p>In 2022, soon after NIST announced the first set of algorithms to be standardized, Cloudflare worked with industry partners to deploy a preliminary version of ML-KEM to protect traffic arriving at Cloudflare’s servers (and our internal systems), both to pave the way for adoption of the final standard and to start protecting traffic as soon as possible. As of mid-August 2024, over 16% of human-generated requests to Cloudflare’s servers are already protected with post-quantum key agreement.</p>
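As a rough illustration of what key agreement achieves, here is a toy finite-field Diffie–Hellman exchange (our sketch, with deliberately tiny, insecure parameters; actual TLS connections use X25519, and post-quantum deployments combine it with ML-KEM):

```python
# Toy Diffie-Hellman key agreement: two parties derive the same secret
# over a public channel. Parameters are illustrative only; real TLS uses
# X25519, which a quantum computer running Shor's algorithm would break.
import hashlib
import secrets

p = 2**127 - 1   # a Mersenne prime; far too small for real security
g = 5

a = secrets.randbelow(p - 2) + 2     # client's private exponent
b = secrets.randbelow(p - 2) + 2     # server's private exponent
A = pow(g, a, p)                     # sent over the wire by the client
B = pow(g, b, p)                     # sent over the wire by the server

# Each side combines its own secret exponent with the other's public value.
client_secret = pow(B, a, p)
server_secret = pow(A, b, p)
assert client_secret == server_secret

# The shared secret is then hashed into a symmetric session key.
session_key = hashlib.sha256(client_secret.to_bytes(16, "big")).digest()
```

An eavesdropper sees only p, g, A, and B. Under the harvest-now/decrypt-later threat model, though, they can record exactly those values today and let a future quantum computer recover the private exponents.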
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4vTixjEDsg7Tu5YW6Xhy9p/7ad1860335cb330637629c4625b5fc76/2499-2.png" />
          </figure><p><sub><i>Percentage of human traffic to Cloudflare protected by X25519Kyber, a preliminary version of ML-KEM as shown on </i></sub><a href="https://radar.cloudflare.com/adoption-and-usage#post-quantum-encryption-adoption"><sub><i><u>Cloudflare Radar</u></i></sub></a><sub><i>.</i></sub></p><p>Other players in the tech industry have deployed post-quantum key agreement as well, including <a href="https://blog.chromium.org/2024/05/advancing-our-amazing-bet-on-asymmetric.html"><u>Google</u></a>, <a href="https://security.apple.com/blog/imessage-pq3/"><u>Apple</u></a>, <a href="https://engineering.fb.com/2024/05/22/security/post-quantum-readiness-tls-pqr-meta/"><u>Meta</u></a>, and <a href="https://signal.org/blog/pqxdh/"><u>Signal</u></a>.</p><p><b>Signatures</b> are crucial to ensure that you’re communicating with who you think you are communicating. In the web public key infrastructure (WebPKI), signatures are used in certificates to prove that a website operator is the rightful owner of a domain. The threat model for signatures is different than for key agreement. An adversary capable of forging a digital signature could carry out an <i>active</i> attack to impersonate a web server to a client, but today’s communication is not yet at risk.</p><p>While the migration to post-quantum signatures is less urgent than the migration for key agreement (since traffic is only at risk once CRQCs exist), it is much more challenging. Consider, for instance, the number of parties involved. In key agreement, only two parties need to support a new key agreement protocol: the client and the server. In the WebPKI, there are many more parties involved, from library developers, to browsers, to server operators, to certificate authorities, to hardware manufacturers. Furthermore, post-quantum signatures are <a href="https://dadrian.io/blog/posts/pqc-signatures-2024/"><u>much larger</u></a> than we’re used to from traditional signatures. 
For more details on the tradeoffs between the different signature algorithms, deployment challenges, and out-of-the-box solutions see our <a href="https://blog.cloudflare.com/pq-2024"><u>previous blog post</u></a>.</p><p>Reaching consensus on the right approach for migrating to post-quantum signatures is going to require extensive effort and coordination among stakeholders. However, that work is already well underway. For instance, in 2021 we ran large scale <a href="https://blog.cloudflare.com/sizing-up-post-quantum-signatures/"><u>experiments</u></a> to understand the feasibility of post-quantum signatures in the WebPKI, and we have more studies planned.</p>
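To make that size gap concrete, here is a quick comparison using commonly cited public-key and signature sizes in bytes (level 1 parameter sets for the post-quantum schemes; a real handshake carries several signatures and public keys, so the gap compounds):

```python
# Public-key and signature sizes in bytes, as commonly cited for each
# scheme (NIST security level 1 parameters for the post-quantum ones).
sizes = {
    "Ed25519":      {"public_key": 32,   "signature": 64},
    "RSA-2048":     {"public_key": 256,  "signature": 256},
    "ML-DSA-44":    {"public_key": 1312, "signature": 2420},
    "FN-DSA-512":   {"public_key": 897,  "signature": 666},
    "SLH-DSA-128s": {"public_key": 32,   "signature": 7856},
}

# Bytes a single (public key + signature) pair adds to a handshake.
totals = {name: s["public_key"] + s["signature"] for name, s in sizes.items()}
```

A single Ed25519 key-plus-signature pair costs under a hundred bytes; the ML-DSA-44 equivalent is dozens of times larger, which is why simply swapping algorithms in the WebPKI is not an attractive option.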
    <div>
      <h2>What’s next?</h2>
      <a href="#whats-next">
        
      </a>
    </div>
    <p>Now that NIST has published the first set of standards for post-quantum cryptography, what comes next?</p><p>In 2022, Cloudflare <a href="https://blog.cloudflare.com/post-quantum-cryptography-ga"><u>deployed</u></a> a preliminary version of the ML-KEM key agreement algorithm, Kyber, which is now used to protect <a href="https://radar.cloudflare.com/adoption-and-usage#post-quantum-encryption-adoption"><u>double-digit percentages</u></a> of requests to Cloudflare’s network. We use a <a href="https://datatracker.ietf.org/doc/html/draft-ietf-tls-hybrid-design"><i><u>hybrid</u></i></a> with X25519, to hedge against future advances in cryptanalysis and implementation vulnerabilities. In coordination with industry partners at the <a href="https://www.nccoe.nist.gov/"><u>NIST NCCoE</u></a> and <a href="https://www.ietf.org/"><u>IETF</u></a>, we will upgrade our systems to support the final ML-KEM standard, again using a hybrid. We will slowly phase out support for the pre-standard version X25519Kyber768 after clients have moved to the ML-KEM-768 hybrid, and will quickly phase out X25519Kyber512, which hasn’t seen real-world usage.</p><p>Now that the final standards are available, we expect to see widespread adoption of ML-KEM industry-wide as support is added in software and hardware, and post-quantum becomes the new default for key agreement. Organizations should look into upgrading their systems to use post-quantum key agreement as soon as possible to protect their data from future quantum-capable adversaries. 
Check if your browser already supports post-quantum key agreement by visiting <a href="https://pq.cloudflareresearch.com"><u>pq.cloudflareresearch.com</u></a>, and if you’re a Cloudflare customer, see how you can <a href="https://blog.cloudflare.com/post-quantum-to-origins/"><u>enable post-quantum key agreement support to your origin</u></a> today.</p><p>Adoption of the newly-standardized post-quantum signatures ML-DSA and SLH-DSA will take longer as stakeholders work to reach consensus on the migration path. We expect the first post-quantum certificates to be available in 2026, but not to be enabled by default. Organizations should prepare for a future flip-the-switch migration to post-quantum signatures, but there is no need to flip the switch just yet.</p><p>We’ll continue to provide updates in this blog and at <a href="https://pq.cloudflareresearch.com"><u>pq.cloudflareresearch.com</u></a>. Don’t hesitate to reach out to us at <a><u>ask-research@cloudflare.com</u></a> with any questions.</p> ]]></content:encoded>
            <category><![CDATA[Research]]></category>
            <category><![CDATA[Cryptography]]></category>
            <category><![CDATA[Post-Quantum]]></category>
            <guid isPermaLink="false">5JwNgDhEFBcPJq3mVrYMUx</guid>
            <dc:creator>Luke Valenta</dc:creator>
            <dc:creator>Vânia Gonçalves</dc:creator>
            <dc:creator>Bas Westerbaan</dc:creator>
        </item>
        <item>
            <title><![CDATA[The state of the post-quantum Internet]]></title>
            <link>https://blog.cloudflare.com/pq-2024/</link>
            <pubDate>Tue, 05 Mar 2024 14:00:24 GMT</pubDate>
            <description><![CDATA[ Nearly 2% of all TLS 1.3 connections established with Cloudflare are secured with post-quantum cryptography. What once was the topic of futuristic tech demos will soon be the new security baseline. ]]></description>
            <content:encoded><![CDATA[ <p>Today, nearly <b>two percent</b> of all TLS 1.3 connections established with Cloudflare are secured with <a href="https://www.cloudflare.com/learning/ssl/quantum/what-is-post-quantum-cryptography/"><b>post-quantum cryptography</b></a><b>.</b> We expect to see double-digit adoption by the end of 2024. Apple <a href="https://security.apple.com/blog/imessage-pq3/">announced</a> in February 2024 that it will secure iMessage with post-quantum cryptography before the end of the year, and <a href="https://signal.org/">Signal</a> chats are <a href="https://signal.org/blog/pqxdh/">already secured</a>. What once was the topic of futuristic tech demos will soon be the new security baseline for the Internet.</p><p>A lot has been happening in the field over the last few years, from mundane name changes (ML-KEM is the new name for Kyber), to new proposed algorithms in the <a href="https://csrc.nist.gov/projects/pqc-dig-sig">signatures onramp</a>, to the catastrophic <a href="https://eprint.iacr.org/2022/975.pdf">attack on SIKE</a>. Plenty that was written merely three years ago now feels quite out of date. Thus, it is high time for an update: in this blog post we’ll take measure of where we are now in early 2024, what to expect for the coming years, and what you can do today.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/rJBlfZsFpgggNh7HoiSC7/9016d555b9e30dfe492db6cba85d31b3/graph.png" />
            
            </figure><p>Fraction of TLS 1.3 connections established with Cloudflare that are secured with post-quantum cryptography.</p>
    <div>
      <h2>The quantum threat</h2>
      <a href="#the-quantum-threat">
        
      </a>
    </div>
    <p>First things first: why are we migrating our cryptography? It’s because of <b>quantum computers</b>. <a href="https://www.cloudflare.com/learning/ssl/quantum/what-is-quantum-computing/">These marvelous devices</a>, instead of restricting themselves to zeroes and ones, compute using more of what nature actually affords us: quantum superposition, interference, and entanglement. This allows quantum computers to excel at certain very specific computations, notably simulating nature itself, which will be very helpful in developing new materials.</p><p>Quantum computers are not going to replace regular computers, though: they’re actually much worse than regular computers at most tasks. Think of them as graphics cards: specialized devices for specific computations.</p><p>Unfortunately, quantum computers also <a href="/the-quantum-menace">excel</a> at breaking the public-key cryptography that’s in common use today. Thus, we will have to move to <b>post-quantum cryptography</b>: cryptography designed to be resistant against quantum attack. We’ll discuss the exact impact on the different types of cryptography later on. For now, quantum computers are rather anemic: they’re simply not good enough today to crack any real-world cryptographic keys.</p><p>That doesn’t mean we shouldn’t worry yet: encrypted traffic can be <a href="https://en.wikipedia.org/wiki/Harvest_now,_decrypt_later">harvested today</a>, and decrypted with a quantum computer in the future.</p>
    <div>
      <h3>Quantum numerology</h3>
      <a href="#quantum-numerology">
        
      </a>
    </div>
    <p>When will they be good enough? Like clockwork, every year there are news stories of new quantum computers with record-breaking numbers of qubits. This focus on counting qubits is quite misleading. To start, quantum computers are analogue machines, and there is always some noise interfering with the computation.</p><p>There are big differences between the different types of technology used to build quantum computers: silicon-based quantum computers seem to scale well and are quick to execute instructions, but have very noisy qubits. This does not mean they’re useless: with <a href="https://en.wikipedia.org/wiki/Quantum_error_correction">quantum error correcting codes</a> one can effectively turn tens of millions of noisy silicon qubits into a few thousand high-fidelity ones, which could be enough to <a href="https://quantum-journal.org/papers/q-2021-04-15-433/">break RSA</a>. Trapped-ion quantum computers, on the other hand, have much less noise, but have been harder to scale. Only a few hundred thousand trapped-ion qubits could potentially draw the curtain on RSA.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2O3l7xYEj4myLC4C9NLuDC/4e49bc0d454bc5ffb8d3a4a5e4aa3300/Screenshot-2024-03-05-at-11.53.49.png" />
            
            </figure><p>State of the art in quantum computing, measured by qubit count and noise, in <a href="https://sam-jaques.appspot.com/quantum_landscape">2021</a>, <a href="https://sam-jaques.appspot.com/quantum_landscape_2022">2022</a>, and <a href="https://sam-jaques.appspot.com/quantum_landscape_2023">2023</a>. Once the shaded gray area hits the left-most red line, we’re in trouble. The red line is expected to move to the left. Compiled by <a href="https://sam-jaques.appspot.com/">Samuel Jaques</a> of the University of Waterloo.</p><p>Qubit counts and noise only scratch the surface, though. For instance, a quirk of many quantum computers is that only adjacent qubits can interact, something that most estimates do not take into account. On the other hand, for a specific quantum computer, a tailored algorithm can perform much better than a generic one. We can only guess what a future quantum computer will look like, and today’s estimates are most likely off by at least an order of magnitude.</p>
    <div>
      <h3>When will quantum computers break real-world cryptography?</h3>
      <a href="#when-will-quantum-computers-break-real-world-cryptography">
        
      </a>
    </div>
    <p>So, when do we expect the demise of RSA-2048 which is in common use today? In a 2022 <a href="https://globalriskinstitute.org/publication/2022-quantum-threat-timeline-report/">survey</a>, over half the interviewed experts thought it’d be more probable than not that by 2037 such a <i>cryptographically relevant</i> quantum computer would’ve been built.</p><p>We can also look at the US government’s timeline for the migration to post-quantum cryptography. The National Security Agency (NSA) aims to finish its migration before <a href="https://media.defense.gov/2022/Sep/07/2003071834/-1/-1/0/CSA_CNSA_2.0_ALGORITHMS_.PDF">2033</a>, and will start to prefer post-quantum ready vendors for many products in 2025. The US government has a similarly ambitious timeline for the country as a whole: the aim is to be done <a href="https://www.whitehouse.gov/briefing-room/statements-releases/2022/05/04/national-security-memorandum-on-promoting-united-states-leadership-in-quantum-computing-while-mitigating-risks-to-vulnerable-cryptographic-systems/">by 2035</a>.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/50GUlzarTvWmpbkxZtourY/add63399c2f3804a98dc184f0fd8e9db/image7.png" />
            
            </figure><p><a href="https://media.defense.gov/2022/Sep/07/2003071834/-1/-1/0/CSA_CNSA_2.0_ALGORITHMS_.PDF">NSA timeline</a> for migrating third-party software to post-quantum cryptography.</p><p>More anecdotally, at industry conferences on the post-quantum migration, I see particularly high participation of the automotive branch. Not that surprising, considering that the median age of a car on the road is 14 years, a lot of money is on the line, and not all cryptography used in cars can be upgraded easily once on the road.</p><p>So when will it arrive? Whether it’s 2034 or 2050, it will be <b>too soon</b>. The immense success of cryptography means it’s all around us now, from dishwasher, to pacemaker, to satellite. Most upgrades will be easy, and fit naturally in the product’s lifecycle, but there will be a long tail of difficult and costly upgrades.</p>
    <div>
      <h3>Two migrations</h3>
      <a href="#two-migrations">
        
      </a>
    </div>
    <p>To help prioritize, it is important to understand that there is a big difference in the difficulty, impact, and urgency of the post-quantum migration for the different kinds of cryptography required to create secure connections. In fact, for most organizations there will be two post-quantum migrations: <b>key agreement</b> and <b>signatures / certificates</b>.</p><p><b>Already post-quantum secure: symmetric cryptography</b></p><p>Let’s explain this for the case of creating a secure connection when visiting a website in a browser. The workhorse is a <b>symmetric cipher</b> such as AES-GCM. It’s what you would think of when thinking of cryptography: both parties, in this case the browser and server, have a shared key, and they encrypt / decrypt their messages with the same key. Unless you have that key, you can’t read or modify anything.</p><p>The good news is that symmetric ciphers, such as <a href="/go-crypto-bridging-the-performance-gap/">AES-GCM</a>, are already post-quantum secure. There is a common misconception that <a href="https://en.wikipedia.org/wiki/Grover%27s_algorithm">Grover’s quantum algorithm</a> requires us to double the length of symmetric keys. On closer inspection of the algorithm, it’s clear that it is <a href="/nist-post-quantum-surprise#grover-s-algorithm">not practical</a>. The way <a href="https://www.nist.gov/">NIST</a>, the US National Institute of Standards and Technology (which has been spearheading the standardization of post-quantum cryptography), defines its post-quantum security levels is telling. Each security level requires that a scheme be at least as hard to crack, on either a classical or a quantum computer, as an existing symmetric primitive, as follows:</p>
<table>
<thead>
  <tr>
    <th><span>Level</span></th>
    <th><span>Definition,</span><span> at least as hard to break as … </span></th>
    <th><span>Example</span></th>
  </tr>
</thead>
<tbody>
  <tr>
    <td><span>1</span></td>
    <td><span>To recover the key of </span><span>AES-128</span><span> by exhaustive search</span></td>
    <td><span>ML-KEM-512, SLH-DSA-128s</span></td>
  </tr>
  <tr>
    <td><span>2</span></td>
    <td><span>To find a collision in </span><span>SHA256</span><span> by exhaustive search</span></td>
    <td><span>ML-DSA-44</span></td>
  </tr>
  <tr>
    <td><span>3</span></td>
    <td><span>To recover the key of </span><span>AES-192</span><span> by exhaustive search</span></td>
    <td><span>ML-KEM-768</span></td>
  </tr>
  <tr>
    <td><span>4</span></td>
    <td><span>To find a collision in </span><span>SHA384</span><span> by exhaustive search</span></td>
    <td></td>
  </tr>
  <tr>
    <td><span>5</span></td>
    <td><span>To recover the key of </span><span>AES-256</span><span> by exhaustive search</span></td>
    <td><span>ML-KEM-1024, SLH-DSA-256s</span></td>
  </tr>
</tbody>
</table><p>NIST PQC security levels; higher is harder to break (“more secure”). The examples ML-DSA, SLH-DSA, and ML-KEM are covered below.</p><p>There are good intentions behind suggesting doubling the key lengths of symmetric cryptography. In many use cases, the extra cost is not that high, and it mitigates any theoretical risk completely. Scaling symmetric cryptography is cheap: doubling the bits typically adds less than half to the cost. So on the surface, it is simple advice.</p><p>But if we insist on AES-256, it seems only logical to insist on NIST PQC level 5 for the public key cryptography as well. The problem is that public key cryptography does not scale very well. Depending on the scheme, going from level 1 to level 5 typically more than doubles data usage and CPU cost. As we’ll see, deploying post-quantum signatures at level 1 is already painful, and deploying them at level 5 is problematic.</p><p>A second reason is that upgrading symmetric cryptography isn’t always easy. If it requires replacing hardware, it can be costly indeed. An organization that cannot migrate all its cryptography in time simply can’t afford to waste its time doubling symmetric key lengths.</p><p><b>First migration: key agreement</b></p><p>Symmetric ciphers are not enough on their own: how do I know which key to use when visiting a website for the first time? The browser can’t just send a random key, as everyone listening in would see that key as well. You’d think it’s impossible, but there is some clever math to solve this, so that the browser and server can agree on a shared key. Such a scheme is called a <b>key agreement</b> mechanism, and is performed in the TLS <a href="https://www.cloudflare.com/learning/ssl/what-happens-in-a-tls-handshake/">handshake</a>. 
Today almost all traffic is secured with <a href="https://en.wikipedia.org/wiki/Curve25519">X25519</a>, a Diffie–Hellman-style key agreement, but its security is completely broken by <a href="https://en.wikipedia.org/wiki/Shor%27s_algorithm">Shor’s algorithm</a> on a quantum computer. Thus, any communication secured today with Diffie–Hellman, when stored, can be decrypted in the future by a quantum computer.</p><p>This makes it <b>urgent</b> to upgrade key agreement today. As we will see, luckily, post-quantum key agreement is relatively straightforward to deploy.</p><p><b>Second migration: signatures / certificates</b></p><p>Key agreement lets us securely agree on a key, but there is a big gap: we do not know <i>with whom</i> we agreed on the key. If we only do key agreement, an attacker in the middle can do separate key agreements with the browser and server, and re-encrypt any exchanged messages. To prevent this we need one final ingredient: authentication.</p><p>This is achieved using <b>signatures</b>. When visiting a website, say <a href="https://cloudflare.com">cloudflare.com</a>, the web server presents a <b>certificate</b> signed by a <a href="https://en.wikipedia.org/wiki/Certificate_authority">certification authority</a> (CA) that vouches that the public key in that certificate is controlled by <a href="https://cloudflare.com">cloudflare.com</a>. In turn, the web server signs the handshake and shared key using the private key corresponding to the public key in the certificate. This allows the client to be sure that they’ve done a key agreement with <a href="https://cloudflare.com">cloudflare.com</a>.</p><p>RSA and <a href="https://www.cloudflare.com/learning/dns/dnssec/ecdsa-and-dnssec/">ECDSA</a> are commonly used traditional signature schemes. Again, Shor’s algorithm makes short work of them, allowing a quantum attacker to forge any signature. 
That means that a <a href="https://en.wikipedia.org/wiki/Man-in-the-middle_attack">MitM</a> (man-in-the-middle) can break into any connection that uses a signature scheme that is not post-quantum secure. This is of course an active attack: if the attacker isn’t in the middle as the handshake happens, the connection is not affected.</p><p>This makes upgrading signature schemes for TLS on the face of it less urgent, as we only need to have everyone migrated by the time the cryptographically-relevant quantum computer arrives. Unfortunately, we will see that migration to post-quantum signatures is much <b>more difficult</b>, and will require more time.</p>
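Returning briefly to the symmetric side: the arithmetic behind the claim that Grover's algorithm is impractical is simple enough to sketch (the throughput figure below is our own generous assumption, not a measured number):

```python
# Grover's algorithm reduces brute-forcing an n-bit key from ~2**n to
# ~2**(n/2) evaluations, but those evaluations must run largely in
# sequence, which is a big part of why it is impractical against AES.
def grover_evaluations(key_bits: int) -> int:
    return 2 ** (key_bits // 2)

aes128_quantum = grover_evaluations(128)   # 2**64 sequential evaluations
aes256_quantum = grover_evaluations(256)   # 2**128: absurdly out of reach

# Even at an assumed (and generous) billion evaluations per second,
# 2**64 of them in sequence would take centuries.
years = aes128_quantum / 1e9 / (3600 * 24 * 365)
```

This is also why the NIST security levels in the table above peg level 1 to exhaustive key search on AES-128 rather than demanding AES-256-sized margins everywhere.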
    <div>
      <h2>Timeline</h2>
      <a href="#timeline">
        
      </a>
    </div>
    <p>Before we dive into the technical challenges of migrating the Internet to post-quantum cryptography, let’s have a look at how we got here, and what to expect in the coming years. Let’s start with how post-quantum cryptography came to be.</p>
    <div>
      <h3>Origin of post-quantum cryptography</h3>
      <a href="#origin-of-post-quantum-cryptography">
        
      </a>
    </div>
    <p>Physicists Feynman and Manin independently proposed quantum computers <a href="https://plato.stanford.edu/entries/qt-quantcomp/">around 1980</a>. It took another 14 years before Shor published <a href="https://ieeexplore.ieee.org/abstract/document/365700">his algorithm</a> attacking public key cryptography. Most post-quantum cryptography predates Shor’s famous algorithm.</p><p>There are various branches of post-quantum cryptography, of which the most prominent are lattice-based, hash-based, multivariate, code-based, and isogeny-based. Except for isogeny-based cryptography, none of these were initially conceived as post-quantum cryptography. In fact, early code-based and hash-based schemes are contemporaries of RSA, being proposed in the 1970s, and comfortably predate the publication of Shor’s algorithm in 1994. Also, the first multivariate scheme from 1988 is comfortably older than Shor’s algorithm. It is a nice coincidence that the most successful branch, lattice-based cryptography, is Shor’s closest contemporary, being proposed <a href="https://dl.acm.org/doi/pdf/10.1145/237814.237838">in 1996</a>. For comparison, elliptic curve cryptography, which is widely used today, was first proposed in 1985.</p><p>In the years after the publication of Shor’s algorithm, cryptographers took measure of the existing cryptography: what’s clearly broken, and what could be post-quantum secure? In 2006, the first annual <a href="https://postquantum.cr.yp.to/">International Workshop on Post-Quantum Cryptography</a> took place. From that conference, an introductory text <a href="https://www.researchgate.net/profile/Nicolas-Sendrier-2/publication/226115302_Code-Based_Cryptography/links/540d62d50cf2df04e7549388/Code-Based-Cryptography.pdf">was prepared</a>, which holds up rather well as an introduction to the field. A notable caveat is the <a href="https://eprint.iacr.org/2022/214.pdf">demise</a> of the <a href="https://www.pqcrainbow.org/">Rainbow</a> signature scheme. 
In that same year, the elliptic-curve key-agreement X25519 <a href="https://cr.yp.to/ecdh/curve25519-20060209.pdf">was proposed</a>, which now secures the vast majority of all Internet connections.</p>
    <div>
      <h3>NIST PQC competition</h3>
      <a href="#nist-pqc-competition">
        
      </a>
    </div>
    <p>Ten years later, in 2016, <a href="https://nist.gov">NIST</a>, the US National Institute of Standards and Technology, <a href="https://csrc.nist.gov/CSRC/media/Projects/Post-Quantum-Cryptography/documents/call-for-proposals-final-dec-2016.pdf">launched a public competition</a> to standardize post-quantum cryptography. They’re using a similar open format to the one used to standardize <a href="https://en.wikipedia.org/wiki/Advanced_Encryption_Standard">AES</a> in 2001 and <a href="https://en.wikipedia.org/wiki/NIST_hash_function_competition">SHA3</a> in 2012. Anyone can participate by submitting schemes and evaluating the proposals. Cryptographers from all over the world submitted algorithms. To focus attention, the list of submissions was whittled down over three rounds. From the original 82, based on public feedback, eight made it into the final round. From those eight, in 2022, NIST chose to <a href="/nist-post-quantum-surprise">pick four to standardize first</a>: one <b>KEM</b> (for key agreement) and three signature schemes.</p>
<table>
<thead>
  <tr>
    <th><span>Old name</span></th>
    <th><span>New name</span></th>
    <th><span>Branch</span></th>
  </tr>
</thead>
<tbody>
  <tr>
    <td><span>Kyber</span></td>
    <td><span>ML-KEM</span><span> (FIPS 203)<br />Module-lattice based Key-Encapsulation Mechanism Standard</span></td>
    <td><span>Lattice-based</span></td>
  </tr>
  <tr>
    <td><span>Dilithium</span></td>
    <td><span>ML-DSA </span><span>(FIPS 204)<br /></span><span>Module-lattice based Digital Signature Standard</span></td>
    <td><span>Lattice-based</span></td>
  </tr>
  <tr>
    <td><span>SPHINCS</span><sup>+</sup></td>
    <td><span>SLH-DSA</span><span> (FIPS 205)<br /></span><span>Stateless Hash-Based Digital Signature Standard</span></td>
    <td><span>Hash-based</span></td>
  </tr>
  <tr>
    <td><span>Falcon</span></td>
    <td><span>FN-DSA<br /></span><span>FFT over NTRU lattices Digital Signature Standard</span></td>
    <td><span>Lattice-based</span></td>
  </tr>
</tbody>
</table><p>The first four post-quantum algorithms selected in the NIST competition.</p><p>ML-KEM is the only post-quantum key agreement close to standardization now, and despite some occasional difficulty with its larger key sizes, in many cases it allows for a drop-in upgrade.</p><p>The situation is rather different with the signatures: it’s quite telling that NIST chose to standardize three already. And there are even more signatures set to be standardized in the future. The reason is that none of the proposed signatures are close to ideal. In short, they all have much larger keys and signatures than we’re used to. From a security standpoint, SLH-DSA is the most conservative choice, but also the worst performer. For public key and signature sizes, FN-DSA is the best of the worst, but is difficult to implement safely because of floating-point arithmetic. This leaves ML-DSA as the default pick. More in-depth comparisons are included below.</p><p><b>Name changes</b></p><p>Undoubtedly Kyber is the most familiar name, as it’s a preliminary version of Kyber that has already been deployed by <a href="https://blog.chromium.org/2023/08/protecting-chrome-traffic-with-hybrid.html">Chrome</a> and <a href="/post-quantum-for-all">Cloudflare</a> among others to counter store-now/decrypt-later. We will have to adjust, though. Just like Rijndael is best known as AES, and Keccak is SHA3 to most, ML-KEM is set to become the catchy new moniker for Kyber going forward.</p><p><b>Final standards</b></p><p>Although we know NIST will standardize these four, we’re not quite there yet. In August 2023, NIST released <a href="https://csrc.nist.gov/News/2023/three-draft-fips-for-post-quantum-cryptography">three draft standards</a> for the first three with minor changes, and solicited public feedback. 
FN-DSA is delayed for now, as it’s more difficult to standardize and deploy securely.</p><p>For timely adopters, it’s important to be aware that based on the feedback on the first three drafts, there might be a few small tweaks before the final standards are released. These tweaks will be minor, but the final versions could well be incompatible on the wire with the current drafts. Such changes only require a small update, and do not meaningfully affect the bulk of the migration work: organizational engagement, inventory, and testing. Before shipping, there can be good reasons to wait for the final standards: support for preliminary versions is not widespread, and it might be costly to support both the draft and final standards. Still, many organizations have not started work on the post-quantum migration at all, citing the lack of standards — a situation that has been called <a href="https://www.youtube.com/watch?v=RbwwxZSBjyo&amp;t=1468s">crypto procrastination</a>.</p><p>So, when can we expect the final standards? There is no set timeline, but we expect the first three standards to be out around <b>mid-2024</b>.</p><p><b>Predicting protocol and software support</b></p><p>Having NIST’s final standards is not enough. The next step is to standardize the way the new algorithms are used in higher-level protocols. In many cases, such as key agreement in TLS, this is as simple as assigning an identifier to the new algorithms. In other cases, such as <a href="https://www.cloudflare.com/dns/dnssec/how-dnssec-works/">DNSSEC</a>, it requires a bit more thought. Many working groups at the <a href="https://www.ietf.org/">IETF</a> have been preparing for years for the arrival of NIST’s final standards, and I expect that many protocol integrations will be available before the end of 2024. 
For the moment, let’s focus on <a href="https://www.cloudflare.com/learning/ssl/transport-layer-security-tls/">TLS</a>.</p><p>The next step is software support. Not all ecosystems can move at the same speed, but we have seen a lot of preparation already. We expect several major open ecosystems to have post-quantum cryptography and TLS support available early 2025, if not earlier.</p><p>For TLS, once again, there is a big difference between key agreement and signatures. For key agreement, the server and client can add and enable support for post-quantum key agreement independently. Once enabled on both sides, TLS negotiation will use post-quantum key agreement. We go into detail on <a href="/post-quantum-for-all">TLS negotiation</a> in this blog post. If your product just uses TLS, your store-now/decrypt-later problem could be solved by a simple software update of the TLS library.</p><p>Post-quantum <a href="https://www.cloudflare.com/application-services/products/ssl/">TLS certificates</a> are more of a hassle. Unless you control both ends, you’ll need to install two certificates: one post-quantum certificate for the new clients, and a traditional one for the old clients. If you aren’t using <a href="https://www.cloudflare.com/application-services/solutions/certificate-lifecycle-management/">automated issuance of certificates</a> yet, this might be a good reason to <a href="https://letsencrypt.org/docs/client-options/">check that out</a>. TLS allows the client to signal which signature schemes it supports so that the server can choose to serve a post-quantum certificate only to those clients that support it. Unfortunately, although almost all TLS libraries support setting up multiple certificates, not all servers expose that configuration. If they do, it will still require a configuration change in most cases. 
(Although undoubtedly <a href="https://caddyserver.com/">Caddy</a> will do it for you.)</p><p>Talking about post-quantum certificates: it will take some time before Certification Authorities (CAs) can issue them. Their <a href="https://csrc.nist.gov/glossary/term/hardware_security_module_hsm">HSMs</a> will first need (hardware) support, which then will need to be audited. Also, the <a href="https://cabforum.org/">CA/Browser forum</a> needs to approve the use of the new algorithms. Of these, the audits are likely to be the bottleneck, as there will be a lot of submissions after the publication of the NIST standards. It’s unlikely we will see a post-quantum certificate issued by a CA before 2026.</p><p>This means that come 2026, we could well find ourselves in an interesting in-between time, where almost all Internet traffic is protected by post-quantum key agreement, but not a single public post-quantum certificate is in use.</p>
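<p>For servers that do expose multi-certificate configuration, the selection logic boils down to checking the client’s advertised signature algorithms. A minimal sketch of the idea (the algorithm labels, preference order, and <code>select_certificate</code> helper are made up for illustration, not real TLS codepoints):</p>

```python
def select_certificate(client_sig_algs, certs):
    """Prefer a post-quantum chain when the client supports one.

    client_sig_algs: set of algorithm names the client advertised.
    certs: mapping from algorithm name to an installed certificate chain.
    """
    for alg in ("ml-dsa-44", "ecdsa-p256", "rsa-2048"):  # server preference order
        if alg in client_sig_algs and alg in certs:
            return certs[alg]
    raise ValueError("no mutually supported certificate")

# The server installs one chain per algorithm family.
certs = {"ml-dsa-44": "pq-chain", "ecdsa-p256": "classical-chain"}

# A new client advertising ML-DSA gets the post-quantum chain...
assert select_certificate({"ml-dsa-44", "ecdsa-p256"}, certs) == "pq-chain"
# ...while an older client silently falls back to the classical one.
assert select_certificate({"ecdsa-p256", "rsa-2048"}, certs) == "classical-chain"
```

The key point is that the fallback is invisible to old clients: they never learn a post-quantum chain exists.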
    <div>
      <h3>More post-quantum standards</h3>
      <a href="#more-post-quantum-standards">
        
      </a>
    </div>
    <p>NIST is not quite done standardizing post-quantum cryptography. There are two more post-quantum competitions running: <b>round 4</b> and the <b>signatures onramp</b>.</p><p><b>Round 4</b></p><p>From the post-quantum competition, NIST is still considering standardizing one or more of the code-based key agreements <a href="https://bikesuite.org/">BIKE</a>, <a href="https://pqc-hqc.org/">HQC</a>, and <a href="https://classic.mceliece.org/">Classic McEliece</a> in a fourth round. The performance of BIKE and HQC, both in key sizes and computational efficiency, is much worse than that of ML-KEM. NIST is considering standardizing one as a <b>backup KEM</b>, in case there is a cryptanalytic breakthrough against lattice-based schemes such as ML-KEM.</p><p>Classic McEliece does not compete with ML-KEM directly as a general-purpose KEM. Instead, it’s a specialist: Classic McEliece public keys are very large (268kB), but it has (for a post-quantum KEM) very small ciphertexts (128 bytes). This makes Classic McEliece very attractive for use cases where the public key can be distributed in advance, such as to secure a software update mechanism.</p><p><b>Signatures onramp</b></p><p>In late 2022, after announcing the first four picks, NIST also called a new competition, dubbed the <i>signatures onramp</i>, to find <a href="https://csrc.nist.gov/projects/pqc-dig-sig">additional signature schemes</a>. The competition has two goals. The first is hedging against cryptanalytic breakthroughs against lattice-based cryptography. NIST would like to standardize a signature that performs better than SLH-DSA, but is not based on lattices. 
Secondly, they’re looking for a signature scheme that might do well in use cases where the current roster falls short: we will discuss those at length later on in this post.</p><p>In July 2023, NIST posted the <a href="https://csrc.nist.gov/news/2023/additional-pqc-digital-signature-candidates">40 submissions</a> they received for a first round of public review. The cryptographic community got to work and, as is quite normal for a first round, has at the time of writing (February 2024) managed to break 10 submissions completely, and to drastically weaken a couple of others. Thom Wiggers maintains a useful <a href="https://pqshield.github.io/nist-sigs-zoo/">website comparing the submissions</a>.</p><p>There are some very promising submissions. We will touch briefly upon them later on. It is worth mentioning that just like the main post-quantum competition, the selection process will take many years. It is unlikely that any of these onramp signature schemes will be standardized before 2027 — if they’re not broken in the first place.</p><p>Before we dive into the nitty-gritty of migrating the Internet to post-quantum cryptography, it’s instructive to look back at some past migrations.</p>
    <div>
      <h2>Looking back: migrating to TLS 1.3</h2>
      <a href="#looking-back-migrating-to-tls-1-3">
        
      </a>
    </div>
    <p>One of the big recent migrations on the Internet was <a href="https://www.cloudflare.com/learning/ssl/why-use-tls-1.3/">the switch from TLS 1.2 to TLS 1.3</a>. Work on the new protocol started around 2014. The goal was ambitious: to start anew, cut a lot of cruft, and have a performant clean transport protocol of the future. After a few years of hard work, the protocol was ready for field tests. In good spirits, in September 2016, we announced <a href="/introducing-tls-1-3">that we support TLS 1.3</a>.</p><p>The followup blog in December 2017 had a rather different tone: “<a href="/why-tls-1-3-isnt-in-browsers-yet">Why TLS 1.3 isn’t in browsers yet</a>”.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/15iyL3KO6ap7JYKFyNcyeP/9ea326e694601026cebcbbe889485683/image3-7.png" />
            
            </figure><p>Adoption of TLS 1.3 in December 2017: less than 0.06%.</p><p>It turned out that <a href="https://datatracker.ietf.org/doc/html/draft-ietf-tls-tls13-11">revision 11</a> of TLS 1.3 was completely undeployable in practice, breaking connections for a few percent of all users. The reason? <a href="https://en.wikipedia.org/wiki/Protocol_ossification">Protocol ossification</a>. TLS was designed with flexibility in mind: the client sends a list of TLS versions it supports, so that the connection can be smoothly upgraded to the newest crypto. That’s the theory, but if you never move the joint, it rusts: for one, it turned out that a lot of server software and middleware simply crashed on just seeing an unknown version. Others would ignore the version number completely, and try to parse the messages as if they were TLS 1.2 anyway. In practice, the version negotiation turned out to be completely broken. So how was this fixed?</p><p>In <a href="https://datatracker.ietf.org/doc/html/draft-ietf-tls-tls13-22">revision 22</a> of the TLS 1.3 draft, changes were made to make TLS 1.3 look like TLS 1.2 on the wire: in particular TLS 1.3 advertises itself as TLS 1.2 with the normal version negotiation. Also, a lot of unnecessary fields are included in the TLS 1.3 ClientHello just to appease any broken middleboxes that might be peeking in. A server that doesn’t understand TLS 1.3 wouldn’t even see that an attempt was made to negotiate TLS 1.3. Using a sneaky new extension, a second version negotiation mechanism was added. For the details, check out the December 2017 blog post linked above.</p><p>Today TLS 1.3 is a huge success, and is used by more than 93% of connections.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6hsRBAdJf6bdmg1w0i9JXh/3d7fb9047b740eb5697b746c74143a28/image4-8.png" />
            
            </figure><p>TLS 1.3 adoption in February 2024. QUIC uses TLS 1.3 under the hood.</p><p>To help prevent ossification in the future, new protocols such as TLS 1.3 and QUIC use <a href="https://datatracker.ietf.org/doc/rfc8701/">GREASE</a>, where clients send unknown identifiers on purpose, including cryptographic algorithm identifiers, to help catch similar bugs, and keep the flexibility.</p>
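<p>Mechanically, GREASE reserves a sprinkling of codepoints that look like real algorithm identifiers but must be ignored by a compliant peer. For 16-bit identifiers such as named groups and cipher suites, the reserved values follow a fixed pattern, which a small sketch based on RFC 8701 can check:</p>

```python
def is_grease(codepoint: int) -> bool:
    """RFC 8701 16-bit GREASE values: both bytes equal, low nibble 0xA."""
    hi, lo = codepoint >> 8, codepoint & 0xFF
    return hi == lo and (lo & 0x0F) == 0x0A

# The sixteen reserved values: 0x0A0A, 0x1A1A, ..., 0xFAFA.
grease_values = [0x0A0A + 0x1010 * i for i in range(16)]
assert all(is_grease(v) for v in grease_values)

# Real identifiers don't match the pattern:
assert not is_grease(0x001D)  # X25519's named-group codepoint
assert not is_grease(0x1301)  # TLS_AES_128_GCM_SHA256
```

A client mixes a random GREASE value into its advertised lists on every connection, so any server or middlebox that chokes on unknown identifiers fails immediately and visibly, rather than years later when a real new algorithm arrives.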
    <div>
      <h2>Migrating the Internet to post-quantum key agreement</h2>
      <a href="#migrating-the-internet-to-post-quantum-key-agreement">
        
      </a>
    </div>
    <p>Now that we understand what we’re dealing with at a high level, let’s dive into upgrading key agreement on the Internet. First, let’s have a closer look at NIST’s first and so far only post-quantum key agreement: ML-KEM.</p><p>ML-KEM was submitted under the name <a href="https://pq-crystals.org/kyber/index.shtml">CRYSTALS-Kyber</a>. Even though it will be a US standard, its designers work in industry and academia across France, Switzerland, the Netherlands, Belgium, Germany, Canada, and the United States. Let’s have a look at its performance.</p>
    <div>
      <h3>ML-KEM versus X25519</h3>
      <a href="#ml-kem-versus-x25519">
        
      </a>
    </div>
    <p>Today the vast majority of clients use the traditional key agreement X25519. Let’s compare that to ML-KEM.</p>
<table>
<thead>
  <tr>
    <th colspan="2"></th>
    <th colspan="2"><span>Keyshare size (in bytes)</span></th>
    <th colspan="2"><span>Ops/sec (higher is better)</span></th>
  </tr>
</thead>
<tbody>
  <tr>
    <td><span>Algorithm</span></td>
    <td><span>PQ</span></td>
    <td><span>Client</span></td>
    <td><span>Server</span></td>
    <td><span>Client</span></td>
    <td><span>Server</span></td>
  </tr>
  <tr>
    <td><span>ML-KEM-512</span></td>
    <td><span>✅</span></td>
    <td><span>800</span></td>
    <td><span>768</span></td>
    <td><span>45,000</span></td>
    <td><span>70,000</span></td>
  </tr>
  <tr>
    <td><span>ML-KEM-768</span></td>
    <td><span>✅</span></td>
    <td><span>1,184</span></td>
    <td><span>1,088</span></td>
    <td><span>29,000</span></td>
    <td><span>45,000</span></td>
  </tr>
  <tr>
    <td><span>ML-KEM-1024</span></td>
    <td><span>✅</span></td>
    <td><span>1,568</span></td>
    <td><span>1,568</span></td>
    <td><span>20,000</span></td>
    <td><span>30,000</span></td>
  </tr>
  <tr>
    <td><span>X25519</span></td>
    <td><span>❌</span></td>
    <td><span>32</span></td>
    <td><span>32</span></td>
    <td><span>19,000</span></td>
    <td><span>19,000</span></td>
  </tr>
</tbody>
</table><p>Size and CPU compared between X25519 and ML-KEM. Performance varies considerably by hardware platform and implementation constraints, and should be taken as a rough indication only.</p><p>ML-KEM-512, -768 and -1024 aim to be as resistant to (quantum) attack as AES-128, -192 and -256 respectively. Even at the AES-128 level, ML-KEM is much bigger than X25519, requiring 1,568 bytes over the wire, whereas X25519 requires a mere 64 bytes.</p><p>On the other hand, even ML-KEM-1024 is typically significantly faster than X25519, although this can vary quite a bit depending on your platform.</p>
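<p>A quick back-of-the-envelope on the table above shows why these sizes matter on the wire. The first flight of a TCP connection usually carries about 1,460 bytes of payload (a 1,500-byte Ethernet MTU minus IP and TCP headers), and the 300-byte figure for a classical ClientHello below is a rough assumption:</p>

```python
# Keyshare bytes on the wire (client, server), from the table above.
keyshares = {
    "X25519":      (32, 32),
    "ML-KEM-512":  (800, 768),
    "ML-KEM-768":  (1184, 1088),
    "ML-KEM-1024": (1568, 1568),
}
totals = {name: c + s for name, (c, s) in keyshares.items()}
assert totals["X25519"] == 64
assert totals["ML-KEM-512"] == 1568  # ~25x the total X25519 overhead

# Room in the first TCP segment: 1,500-byte MTU minus IPv4 + TCP headers.
ROOM = 1500 - 40
BASELINE_HELLO = 300  # rough size of a classical ClientHello (assumption)
assert BASELINE_HELLO + keyshares["X25519"][0] <= ROOM       # fits one packet
assert BASELINE_HELLO + keyshares["ML-KEM-768"][0] > ROOM    # spills into a second
```

That second assertion is the seed of the ossification trouble discussed later in this post: a ClientHello that no longer fits one packet.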
    <div>
      <h3>ML-KEM-768 and X25519</h3>
      <a href="#ml-kem-768-and-x25519">
        
      </a>
    </div>
    <p>At Cloudflare, we are not taking advantage of that speed boost just yet. Like many other early adopters, we like to play it safe and deploy a <b>hybrid</b> key agreement <a href="https://datatracker.ietf.org/doc/draft-tls-westerbaan-xyber768d00/">combining</a> X25519 and (a preliminary version of) ML-KEM-768. This combination might surprise you for two reasons.</p><ol><li><p>Why combine X25519 (“128 bits of security”) with ML-KEM-768 (“192 bits of security”)?</p></li><li><p>Why bother with the non post-quantum X25519?</p></li></ol><p>The apparent security level mismatch is a hedge against improvements in cryptanalysis of lattice-based cryptography. There is a lot of trust in the (non post-quantum) security of X25519: matching AES-128 is more than enough. Although we are comfortable with the security of ML-KEM-512 today, cryptanalysis could improve over the coming decades. Thus, we’d like to keep a margin for now.</p><p>There are two reasons to include X25519. First, there is always a remote chance that a breakthrough renders all variants of ML-KEM insecure. In that case, X25519 still provides non post-quantum security, and our post-quantum migration didn’t make things worse.</p><p>More importantly, we worry not only about attacks on the algorithm, but also about attacks on the implementation. A noteworthy example where we dodged a bullet is that of <a href="https://kyberslash.cr.yp.to/">KyberSlash</a>, a timing attack that affected many implementations of Kyber (an earlier version of ML-KEM), including <a href="https://github.com/cloudflare/circl/security/advisories/GHSA-9763-4f94-gfch">our own</a>. Luckily KyberSlash does not affect Kyber as it is used in TLS. A similar implementation mistake that would actually affect TLS is likely to require an active attacker. In that case, the likely aim of the attacker wouldn’t be to decrypt data decades down the line, but to steal a cookie or other token, or to inject a payload. 
Including X25519 prevents such an attack.</p><p>So how well do ML-KEM-768 and X25519 together perform in practice?</p>
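<p>Under the hood, the hybrid is straightforward: both key agreements run side by side, and their shared secrets are concatenated before entering the TLS key schedule. A toy sketch of the combiner, using stand-in byte strings and a single SHA-256 call in place of TLS’s HKDF-based key schedule:</p>

```python
import hashlib

def combine(x25519_ss: bytes, mlkem_ss: bytes) -> bytes:
    """Hybrid combiner sketch: concatenate both shared secrets and feed
    them into the key schedule (modelled here as one SHA-256 call)."""
    return hashlib.sha256(x25519_ss + mlkem_ss).digest()

# Stand-in shared secrets; both are 32 bytes in the real protocol.
classical = b"\x01" * 32  # from X25519
pq = b"\x02" * 32         # from ML-KEM-768

key = combine(classical, pq)
# An attacker must recover *both* inputs: knowing only one of them
# yields a completely different session key.
assert key != combine(classical, b"\x00" * 32)
assert key != combine(b"\x00" * 32, pq)
```

This is why a break of either component alone, whether algorithmic or an implementation flaw, does not compromise the connection.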
    <div>
      <h2>Performance and protocol ossification</h2>
      <a href="#performance-and-protocol-ossification">
        
      </a>
    </div>
    
    <div>
      <h3>Browser experiments</h3>
      <a href="#browser-experiments">
        
      </a>
    </div>
    <p>Being well aware of potential compatibility and performance issues, Google started <a href="https://security.googleblog.com/2016/07/experimenting-with-post-quantum.html">a first experiment</a> with post-quantum cryptography back in 2016, the same year NIST started their competition. This was followed up by a second larger joint experiment by <a href="/towards-post-quantum-cryptography-in-tls/">Cloudflare</a> and <a href="https://www.imperialviolet.org/2018/12/12/cecpq2.html">Google</a> in 2018. We tested two different hybrid post-quantum key agreements: CECPQ2, which is a combination of the lattice-based NTRU-HRSS and X25519, and CECPQ2b, a combination of the isogeny-based SIKE and again X25519. NTRU-HRSS is very similar to ML-KEM in size, but is computationally somewhat more taxing on the client-side. SIKE on the other hand, has very small keys, is computationally very expensive, and was <a href="https://eprint.iacr.org/2022/975.pdf">completely broken</a> in 2022. With respect to TLS handshake times, X25519+NTRU-HRSS performed very well, being hard to distinguish by eye from the control connections.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5WPFvf7wEyiXYx1xYTBCAj/c207f16128331f55043551c6738b9dd4/image5-2.png" />
            
            </figure><p>Handshake times compared between X25519 (blue), X25519+SIKE (green) and X25519+NTRU-HRSS (orange). </p><p>Unfortunately, a small but significant fraction of clients experienced broken connections with NTRU-HRSS. The reason: the size of the NTRU-HRSS keyshares. In the past, when creating a TLS connection, the first message sent by the client, the so-called <i>ClientHello</i>, almost always fit within a single network packet. The TLS specification allows for a larger <i>ClientHello</i>, however no one really made use of that. Thus, protocol ossification strikes again as there are some middleboxes, load-balancers, and other software that tacitly assume the <i>ClientHello</i> always fits in a single packet.</p><p>Over the subsequent years, Chrome kept running their PQ experiment at a very low rate, and did a great job reaching out to vendors whose products were incompatible. If it were not for these compatibility issues, we would’ve likely seen Chrome ramp up post-quantum key agreement five years earlier.</p><p>Today the situation looks better. At the time of writing, Chrome has enabled post-quantum key-agreement for 10% of all users. That accounts for about 1.8% of all our TLS 1.3 connections, as shown in the figure below. That’s a lot, but we’re not out of the woods yet. There could well be performance and compatibility issues that prevent a further rollout.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1fnypmfnBMi2AR0lvSq4Zb/3e5660bfb014452e84a3d3e26f1f733b/graph--1-.png" />
            
            </figure><p>Fraction of TLS 1.3 connections established with Cloudflare that are secured with post-quantum cryptography. At the moment, more than 99% of these come from Chrome.</p><p>Nonetheless, we feel it’s more probable than not that we will see Chrome enable post-quantum key agreement for more users this year.</p>
    <div>
      <h3>Other browsers</h3>
      <a href="#other-browsers">
        
      </a>
    </div>
    <p>In January 2024, Firefox landed the code to support post-quantum key agreement in <a href="https://www.mozilla.org/en-US/firefox/channel/desktop/">nightly</a>, and it’s likely it will land in Firefox proper later in 2024. For Chrome-derived browsers, such as Edge and Brave, it’s easy to piggyback on the work of Chrome, and we could well see them follow suit when Chrome turns on post-quantum key-agreement by default.</p><p>However, browser to server connections aren’t the only connections important to the Internet.</p>
    <div>
      <h3>Testing connections to customer origins</h3>
      <a href="#testing-connections-to-customer-origins">
        
      </a>
    </div>
    <p>In <a href="/post-quantum-to-origins">September 2023,</a> we added support for our customers to enable post-quantum key agreement on connections from Cloudflare to their origins. That’s connection (3) in the following diagram. This can be done in two ways: the fast way, and the slow but safer way. In both cases, if the origin does not support it, we fall back to traditional key-agreement. We explain the details of these in the blog post on <a href="/post-quantum-to-origins">post-quantum cryptography</a>, but in short, in the fast way we send the post-quantum keyshare immediately, and in the slow but safe way we let the origin ask for post-quantum using a <i>HelloRetryRequest</i> message. Chrome, by the way, is deploying post-quantum key agreement the fast way.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/THODVUtVIIyGgACaPDZa5/7ca19e62617a241d44dda67ec6200a58/image11.png" />
            
            </figure><p>Typical connection flow when a visitor requests an uncached page.</p><p>At the same time, we started regularly testing our customer origins to see if they would support us offering post-quantum key agreement. We found that all origins supported the slow but safe method. The fast method didn’t fare as well, as we found that 0.34% of connections would break. That’s higher than the failure rates seen by browsers.</p><p>Unsurprisingly, many failures seem to be caused by the large ClientHello. Interestingly, the majority are caused by servers not correctly implementing <i>HelloRetryRequest</i>. To investigate, we have reached out to customers to ascertain the cause. We’re very grateful to those that have responded, and we’re currently working through the data.</p>
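<p>Sketched in code, the trade-off between the two approaches looks like this (the group names and round-trip accounting are illustrative only, not a real TLS stack):</p>

```python
def negotiate(server_groups, fast):
    """Sketch of the two rollout strategies described above."""
    hybrid = "X25519+ML-KEM-768"
    offered = [hybrid, "X25519"]          # groups the client advertises
    if fast:
        keyshares = [hybrid, "X25519"]    # send keyshares for both up front
    else:
        keyshares = ["X25519"]            # server must HelloRetryRequest for more
    chosen = next(g for g in offered if g in server_groups)
    # If the server's pick wasn't accompanied by a keyshare, the
    # HelloRetryRequest round trip adds latency.
    round_trips = 1 if chosen in keyshares else 2
    return chosen, round_trips

hybrid_origin = {"X25519+ML-KEM-768", "X25519"}
assert negotiate(hybrid_origin, fast=True) == ("X25519+ML-KEM-768", 1)
assert negotiate(hybrid_origin, fast=False) == ("X25519+ML-KEM-768", 2)
assert negotiate({"X25519"}, fast=True) == ("X25519", 1)  # graceful fallback
```

The fast way pays with a larger ClientHello on every connection; the slow but safe way keeps the first flight small and pays an extra round trip only when the origin actually wants the post-quantum keyshare.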
    <div>
      <h3>Outlook</h3>
      <a href="#outlook">
        
      </a>
    </div>
    <p>As we’ve seen, post-quantum key agreement, despite protocol ossification, is relatively straightforward to deploy. We’re also on a great trajectory, as we might well see double-digit client support for post-quantum key agreement later this year.</p><p>Let’s turn to the second, more difficult migration.</p>
    <div>
      <h2>Migrating the Internet to post-quantum signatures</h2>
      <a href="#migrating-the-internet-to-post-quantum-signatures">
        
      </a>
    </div>
    <p>Now, we’ll turn our attention to upgrading the signatures used on the Internet.</p>
    <div>
      <h3>The zoo of post-quantum signatures</h3>
      <a href="#the-zoo-of-post-quantum-signatures">
        
      </a>
    </div>
    <p>Let’s start by sizing up the post-quantum signatures we have available today at the AES-128 security level: ML-DSA-44, FN-DSA-512, and the two variants of SLH-DSA. As a comparison, we also include the venerable Ed25519 and RSA-2048 in wide use today, as well as a sample of five promising signature schemes from the signatures onramp.</p>
<table>
<thead>
  <tr>
    <th colspan="3"></th>
    <th colspan="2"><span>Sizes (bytes)</span></th>
    <th colspan="2"><span>CPU time (lower is better)</span></th>
  </tr>
</thead>
<tbody>
  <tr>
    <td></td>
    <td></td>
    <td><span>PQ</span></td>
    <td><span>Public key</span></td>
    <td><span>Signature</span></td>
    <td><span>Signing</span></td>
    <td><span>Verification</span></td>
  </tr>
  <tr>
    <td rowspan="2"><span>Standardized</span></td>
    <td><span>Ed25519</span></td>
    <td><span>❌</span></td>
    <td><span>32</span></td>
    <td><span>64</span></td>
    <td><span>1 (baseline)</span></td>
    <td><span>1 (baseline)</span></td>
  </tr>
  <tr>
    <td><span>RSA-2048</span></td>
    <td><span>❌</span></td>
    <td><span>256</span></td>
    <td><span>256</span></td>
    <td><span>70</span></td>
    <td><span>0.3</span></td>
  </tr>
  <tr>
    <td rowspan="4"><span>NIST drafts</span></td>
    <td><span>ML-DSA-44</span></td>
    <td><span>✅</span></td>
    <td><span>1,312</span></td>
    <td><span>2,420</span></td>
    <td><span>4.8</span></td>
    <td><span>0.5</span></td>
  </tr>
  <tr>
    <td><span>FN-DSA-512</span></td>
    <td><span>✅</span></td>
    <td><span>897</span></td>
    <td><span>666</span></td>
    <td><span>8 ⚠️</span></td>
    <td><span>0.5</span></td>
  </tr>
  <tr>
    <td><span>SLH-DSA-128s</span></td>
    <td><span>✅</span></td>
    <td><span>32</span></td>
    <td><span>7,856</span></td>
    <td><span>8,000</span></td>
    <td><span>2.8</span></td>
  </tr>
  <tr>
    <td><span>SLH-DSA-128f</span></td>
    <td><span>✅</span></td>
    <td><span>32</span></td>
    <td><span>17,088</span></td>
    <td><span>550</span></td>
    <td><span>7</span></td>
  </tr>
  <tr>
    <td rowspan="5"><span>Sample from signatures onramp</span></td>
    <td><span>MAYO</span><sub>one</sub></td>
    <td><span>✅</span></td>
    <td><span>1,168</span></td>
    <td><span>321</span></td>
    <td><span>4.7</span></td>
    <td><span>0.3</span></td>
  </tr>
  <tr>
    <td><span>MAYO</span><sub>two</sub></td>
    <td><span>✅</span></td>
    <td><span>5,488</span></td>
    <td><span>180</span></td>
    <td><span>5</span></td>
    <td><span>0.2</span></td>
  </tr>
  <tr>
    <td><span>SQISign I</span></td>
    <td><span>✅</span></td>
    <td><span>64</span></td>
    <td><span>177</span></td>
    <td><span>60,000</span></td>
    <td><span>500</span></td>
  </tr>
  <tr>
    <td><span>UOV Is-pkc</span></td>
    <td><span>✅</span></td>
    <td><span>66,576</span></td>
    <td><span>96</span></td>
    <td><span>2.5</span></td>
    <td><span>2</span></td>
  </tr>
  <tr>
    <td><span>HAWK512</span></td>
    <td><span>✅</span></td>
    <td><span>1,024</span></td>
    <td><span>555</span></td>
    <td><span>2</span></td>
    <td><span>1</span></td>
  </tr>
</tbody>
</table><p>Comparison of various signature schemes at the security level of AES-128. CPU times vary significantly by platform and implementation constraints and should be taken as a rough indication only. ⚠️FN-DSA signing time when using fast but dangerous floating-point arithmetic — see warning below.</p><p>It is immediately clear that none of the post-quantum signature schemes comes even close to being a drop-in replacement for Ed25519 (which is comparable to ECDSA P-256) as most of the signatures are simply much bigger. The exceptions are SQISign, MAYO, and UOV from the onramp, but they’re far from ideal. MAYO and UOV have large public keys, and SQISign requires an immense amount of computation.</p>
    <div>
      <h3>When to use SLH-DSA</h3>
      <a href="#when-to-use-slh-dsa">
        
      </a>
    </div>
    <p>As mentioned before, today we only have drafts for SLH-DSA and ML-DSA. In every relevant performance metric, ML-DSA beats SLH-DSA handily. (Even the small public keys of SLH-DSA are no real advantage. If you include the ML-DSA public key with its signature, the pair is still smaller than an SLH-DSA signature, and in that case you can use a short hash of the ML-DSA public key as a short public key.)</p><p>The advantage of SLH-DSA is that there is a lot of trust in its security. To forge an SLH-DSA signature you need to break the underlying hash function quite badly. It is not enough to break the collision resistance of the hash, as has been done with SHA-1 and MD5. In fact, as of February 2024, a SHA-1-based SLH-DSA would still be considered secure. Of course, SLH-DSA does not use SHA-1, and instead uses SHA2 and SHA3, against which not a single practical attack is known.</p><p>If you can shoulder the cost, SLH-DSA has the best security guarantee, which might be crucial when dealing with long-lasting signatures, or deployments where upgrades are impossible.</p>
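<p>The size argument in the parenthetical above checks out against the numbers in the comparison table:</p>

```python
# Sizes in bytes, from the signature comparison table earlier in this post.
ml_dsa_pk, ml_dsa_sig = 1312, 2420
slh_dsa_128s_sig = 7856   # "small" variant
slh_dsa_128f_sig = 17088  # "fast" variant

# Shipping the ML-DSA-44 public key alongside every signature is still
# smaller than a single SLH-DSA signature of either variant.
bundle = ml_dsa_pk + ml_dsa_sig
assert bundle == 3732
assert bundle < slh_dsa_128s_sig < slh_dsa_128f_sig
```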
    <div>
      <h3>Be careful with FN-DSA</h3>
      <a href="#be-careful-with-fn-dsa">
        
      </a>
    </div>
    <p>Looking ahead a bit: the best of the worst seems to be FN-DSA-512. FN-DSA-512’s signatures and public key together are <i>only</i> 1,563 bytes, with somewhat reasonable signing time. FN-DSA has an <b>Achilles heel</b> though — for acceptable signing performance, it requires fast floating-point arithmetic. Without it, signing is about 20 times slower. But speed is not enough, as the floating-point arithmetic has to run in constant time — without it, the FN-DSA private key can be recovered by timing signature creation. Writing safe FN-DSA implementations has turned out to be quite challenging, which makes FN-DSA dangerous when signatures are generated on the fly, such as in a TLS handshake. It is good to stress that this only affects signing. FN-DSA verification does not require floating-point arithmetic (and during verification there wouldn’t be a private key to leak anyway).</p>
    <div>
      <h2>There are many signatures on the web</h2>
      <a href="#there-are-many-signatures-on-the-web">
        
      </a>
    </div>
    <p>The biggest pain point of migrating the Internet to post-quantum signatures is that there are a lot of signatures even in a single connection. When you visit this very website for the first time, we send <b>six signatures and two public keys</b>.</p><p>The majority of these are for the <b>certificate chain</b>: the CA signs the intermediate certificate, which signs the leaf certificate, which in turn signs the TLS transcript to prove the authenticity of the server. If you’re keeping count: we’re still three signatures short.</p><p>Two of these are for <b>SCTs</b> required for <a href="https://certificate.transparency.dev/howctworks/">certificate transparency</a>. Certificate transparency is a key, but lesser-known, part of the <a href="https://smallstep.com/blog/everything-pki/#web-pki-vs-internal-pki">Web PKI</a>, the ecosystem that secures browser connections. Its goal is to publicly log every certificate issued, so that misissuances can be detected after the fact. It works by having independent parties run <i>CT logs</i>. Before issuing a certificate, a CA must first submit it to at least two different CT logs. An SCT is a signature of a CT log that acts as a proof, a <i>receipt</i>, that the certificate has been logged.</p><p>The final signature is an <a href="/high-reliability-ocsp-stapling">OCSP staple</a>, which proves that the leaf certificate hasn’t been revoked in the last few days.</p>
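<p>Tallying those six signatures and two public keys gives a feel for the cost of a naive swap. As a simplification, assume a single scheme is used everywhere (real chains today mix RSA and ECDSA; the sizes come from the signature comparison table above):</p>

```python
def handshake_auth_bytes(pk: int, sig: int) -> int:
    """Six transmitted signatures plus two public keys, as counted above."""
    return 6 * sig + 2 * pk

today_ed25519 = handshake_auth_bytes(pk=32, sig=64)
pq_ml_dsa_44 = handshake_auth_bytes(pk=1312, sig=2420)

assert today_ed25519 == 448
assert pq_ml_dsa_44 == 17144   # ~17 kB of authentication data per handshake
assert pq_ml_dsa_44 // today_ed25519 == 38  # roughly a 38x blow-up
```

That 17 kB has to cross the wire before the first byte of the page does, which is exactly why the tailoring tricks discussed next matter.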
    <div>
      <h3>Tailoring signature schemes</h3>
      <a href="#tailoring-signature-schemes">
        
      </a>
    </div>
    <p>There are two aspects of how a signature is used that are worth highlighting: whether the <b>public key is included</b> with the signature, and whether the signature is created <b>online</b> or <b>offline</b>.</p><p>For the SCTs and the signature of the root on the intermediate, the public key is not transmitted during the handshake. Thus, for those, a signature scheme with smaller signatures but larger public keys, such as MAYO or UOV, would be particularly well-suited. For the other signatures, the public key is included, and it’s more important to minimize the combined size of public key and signature.</p><p>The handshake signature is the only signature that is created online — all the other signatures are created ahead of time. The handshake signature is created and verified only once, whereas the other signatures are typically verified many times by different clients. This means that for the handshake signature, it’s advantageous to balance signing and verification time, which are both in the <i>hot path</i>, whereas for the other signatures, trading slower signing for faster verification is worthwhile. This is one of the advantages RSA still enjoys over elliptic curve signatures today.</p><p>Putting together different signature schemes is a fun puzzle, but it also comes with drawbacks. Using multiple schemes increases the attack surface, because an algorithmic or implementation vulnerability in any one of them compromises the whole. Also, the whole ecosystem needs to implement and optimize multiple algorithms, which is a significant burden.</p>
    <div>
      <h2>Putting it together</h2>
      <a href="#putting-it-together">
        
      </a>
    </div>
    <p>So, what are some reasonable combinations to try?</p>
    <div>
      <h3>With NIST’s current picks</h3>
      <a href="#with-nists-current-picks">
        
      </a>
    </div>
    <p>With the draft standards available today, we do not have a lot of options.</p><p>If we simply switch to ML-DSA-44 for all signatures, we’re adding 17kB of data that needs to be transmitted from the server to the client during the TLS handshake. Is that a lot? Probably. We will address that later on.</p><p>If we wait a bit and replace all but the handshake signature with FN-DSA-512, we’re looking at adding only 8kB. That’s much better, but I have to repeat that it’s difficult to implement FN-DSA-512 signing safely without timing side channels, and there is a good chance we’ll shoot ourselves in the foot if we’re not careful.</p><p>Another way to shoot ourselves in the foot <i>today</i> is with stateful hash-based signatures.</p>
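<p>To make these totals concrete, here is a back-of-the-envelope calculator — a sketch using the published parameter sizes (ML-DSA-44: 1,312-byte public keys, 2,420-byte signatures; FN-DSA-512: 897 and 666 bytes) and the six-signatures-plus-two-public-keys tally from earlier in this post:</p>

```python
# Rough tally of extra authentication bytes in a TLS handshake: five offline
# signatures (root on intermediate, two SCTs, intermediate on leaf, OCSP
# staple) plus the intermediate's public key in one scheme, and the online
# handshake signature plus the leaf's public key in another. Sizes are the
# published parameter sets; real chains add some encoding overhead.
SIZES = {  # scheme: (public key bytes, signature bytes)
    "ml-dsa-44":  (1312, 2420),
    "fn-dsa-512": (897, 666),
}

def handshake_bytes(offline, handshake):
    off_pk, off_sig = SIZES[offline]
    hs_pk, hs_sig = SIZES[handshake]
    return 5 * off_sig + off_pk + hs_sig + hs_pk

# Everything ML-DSA-44: ~17kB extra. FN-DSA-512 for all offline signatures
# and the intermediate's key, with ML-DSA-44 for the leaf key and handshake
# signature: ~8kB extra.
print(handshake_bytes("ml-dsa-44", "ml-dsa-44"))   # 17144
print(handshake_bytes("fn-dsa-512", "ml-dsa-44"))  # 7959
```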
    <div>
      <h3>Stateful hash-based signatures</h3>
      <a href="#stateful-hash-based-signatures">
        
      </a>
    </div>
    <p>Apart from symmetric cryptography, there are already post-quantum signature schemes standardized today: LMS/HSS and XMSS(MT). Just like SLH-DSA, these are hash-based signature schemes, and thus algorithmically very conservative.</p><p>But they come with a major drawback: you need to <i>remember the state</i>. What is this state? When generating a keypair, you prepare a fixed number of one-time-use slots, and you need to remember which ones you’ve used. If you use the same prepared slot <a href="https://eprint.iacr.org/2016/1042">twice</a>, then anyone who sees both signatures can create a forgery. Managing this state is not impossible, but it is quite tricky. What if the server was restored from a backup? The state can be distributed over multiple servers, but that changes the usual signature flow quite a bit, and it’s unclear whether regulators will allow this approach, as the state is typically considered part of the private key.</p><p>So, how do they perform? It’s hard to give a definitive answer. These hash-based signature schemes have a lot of knobs to turn and can be fine-tuned to their use case. You can see for yourself, and play around with the parameters, on this <a href="https://westerbaan.name/~bas/hashcalc/">website</a>. With standardized variants (with security parameter n=24) for the offline signatures, we can beat ML-DSA-44 in data on the wire, but can’t outperform FN-DSA-512. With security parameter n=16, which has not been standardized, stateful hash-based signatures are competitive with FN-DSA-512, and can even beat it on size. However, n=16 comes with yet another footgun: it allows the signer to create a single signature that validates two different messages, so there is no <i>non-repudiation</i>.</p><p>All in all, FN-DSA-512 and stateful hash-based signatures tempt us with a similar and clear performance benefit over ML-DSA-44, but both are difficult to use safely.</p>
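<p>To illustrate how delicate the state handling is, here is a toy sketch (not production code, and signing itself is omitted) of a state manager that persists the next unused slot index <i>before</i> handing it out, so a crash can at worst waste slots, never reuse one:</p>

```python
# Toy slot-state manager for a stateful hash-based signature key. The one
# invariant: the advanced index is durably persisted *before* a slot is
# used, so the same one-time slot is never handed out twice.
import json
import os
import tempfile

class SlotState:
    def __init__(self, path, num_slots):
        self.path = path
        self.num_slots = num_slots
        if not os.path.exists(path):
            self._persist(0)

    def _load(self):
        with open(self.path) as f:
            return json.load(f)["next_free"]

    def _persist(self, next_free):
        # Write atomically (temp file + rename) so we never store a torn index.
        fd, tmp = tempfile.mkstemp(dir=os.path.dirname(self.path) or ".")
        with os.fdopen(fd, "w") as f:
            json.dump({"next_free": next_free}, f)
            f.flush()
            os.fsync(f.fileno())
        os.replace(tmp, self.path)

    def reserve_slot(self):
        idx = self._load()
        if idx >= self.num_slots:
            raise RuntimeError("key exhausted: all one-time slots used")
        self._persist(idx + 1)  # commit before the slot is ever used
        return idx
```

<p>Note that this still fails in exactly the scenario above: restoring the state file from a backup, or running two servers against copies of it, silently reintroduces reuse, which is why distributed designs coordinate the state centrally instead.</p>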
    <div>
      <h3>Signatures on the horizon</h3>
      <a href="#signatures-on-the-horizon">
        
      </a>
    </div>
    <p>There are some very promising new signature schemes submitted to the NIST onramp.</p><p><a href="https://link.springer.com/chapter/10.1007/3-540-48910-X_15">UOV (unbalanced oil and vinegar)</a> is an old multivariate scheme with a large public key (66.5kB), but small signatures (96 bytes). If we combine UOV for the root and SCTs with ML-DSA-44 for the others, we’re looking at only 10kB — close to FN-DSA-512.</p><p>Over the decades, there have been many attempts to add structure to UOV public keys, to get a better balance between public key and signature size. Many of these so-called <i>structured multivariate</i> schemes, which include Rainbow and GeMMS, have unfortunately been broken.</p><p><a href="https://pqmayo.org/">MAYO</a> is the latest proposal for a structured multivariate scheme, designed by the cryptographer who broke <a href="https://eprint.iacr.org/2022/214.pdf">Rainbow</a>. As a structured multivariate scheme, its security requires careful scrutiny, but its utility (provided it is not broken) is very appealing.</p><p>MAYO allows for a fine-grained tradeoff between signature and public key size. For the submission, to keep things simple, the authors proposed two concrete variants: MAYO<sub>one</sub> with balanced signature (321 bytes) and public key (1.1kB) sizes, and MAYO<sub>two</sub> with signatures of 180 bytes, while keeping the public key manageable at 5.4kB. Verification times are excellent, while signing times are somewhat slower than ECDSA, but far better than RSA. Combining both variants in the obvious way, we’re only looking at 3.3kB.</p><p>Purely looking at sizes, SQISign I is the clear winner, even beating RSA-2048. Unfortunately, the computation required for signing, and crucially for verification, is far too high. For niche applications, SQISign might be useful, but for general adoption its verification times need to improve significantly, even if that requires a larger signature.</p><p>Finally, I would like to mention HAWK512. HAWK is a lattice-based scheme similar to FN-DSA-512, but it does not require floating-point arithmetic. This makes HAWK an appealing alternative to FN-DSA. However, NIST has repeatedly stated that the main purpose of the onramp is to standardize a signature scheme that is not based on lattices — a description HAWK does not fit. We might see <a href="https://groups.google.com/a/list.nist.gov/g/pqc-forum/c/bjVkrZmI9VM/m/Tj9goDJCAAAJ">some innovations</a> of HAWK included in the final version of FN-DSA, but it is unclear whether that would resolve all of FN-DSA’s implementation concerns.</p><p>There are more promising submissions in the onramp, but those discussed are a fairly representative sample of the ones interesting for TLS. For instance, <a href="https://snova.pqclab.org/">SNOVA</a> is similar to MAYO, and <a href="https://www.tuovsig.org/">TUOV</a> is similar to UOV. Explore the submissions for yourself on Thom’s <a href="https://github.com/PQShield/nist-sigs-zoo">webpage</a>.</p>
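<p>The mixed UOV chain mentioned above tallies up as follows — a sketch using the submission sizes (UOV: 96-byte signatures; ML-DSA-44: 2,420-byte signatures, 1,312-byte public keys). UOV’s 66.5kB public keys never cross the wire: they would live in the trust store alongside the root and CT log keys.</p>

```python
# Extra handshake bytes for a mixed chain: UOV wherever the verifier already
# holds the public key (root's signature and the two SCTs), ML-DSA-44 for
# everything else.
UOV_SIG = 96
MLDSA_SIG, MLDSA_PK = 2420, 1312

mixed_total = (
    3 * UOV_SIG       # root's signature on the intermediate + two SCTs
    + 3 * MLDSA_SIG   # leaf certificate signature, OCSP staple, handshake
    + 2 * MLDSA_PK    # intermediate and leaf public keys
)
print(mixed_total)  # 10172 bytes: the "only 10kB" above
```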
    <div>
      <h2>Do we really care about the extra bytes?</h2>
      <a href="#do-we-really-care-about-the-extra-bytes">
        
      </a>
    </div>
    <p>It will take 17kB extra to swap in ML-DSA-44. That’s a lot compared to the typical handshake today, but it’s not a lot compared to the JavaScript and images served on many web pages. The key point is that the change we must make here affects every single TLS connection, whether it’s used for a bloated website, or a time-critical API call. Also, it’s not just about waiting a bit longer: if you have spotty cellular reception, that extra data can make the difference between being able to load a page, and having the connection time out. (As an aside, talking about bloat: many apps perform a <a href="https://thomwiggers.nl/publication/tls-on-android/tls-on-android.pdf">surprisingly high number of TLS handshakes</a>.)</p><p>Just like with key agreement, performance isn’t our only concern: we also want the connection to succeed in the first place. Back in 2021, <a href="/sizing-up-post-quantum-signatures/">we ran an experiment</a> artificially enlarging the certificate chain to simulate larger post-quantum certificates. We give a short summary of the key result below, but for the details, check out the full <a href="/sizing-up-post-quantum-signatures/">blog post</a>.</p><p>Initially, we wanted to run the experiment on a small sample of regular traffic, in order to get unbiased data. Unfortunately, we found that large certificate chains broke some connections. Thus, to avoid breaking customer connections, we set up the experiment to use background connections launched from our challenge pages. For each participant, we launched two background connections: one with a larger certificate chain (live) and one with a normal chain (control). The graph below shows the number of control connections that are missing a corresponding live connection. There are jumps around 10kB and 30kB, suggesting that there are clients or middleboxes that break when certificate chains grow by more than 10kB or 30kB.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2sU1gCujSAuhaRfScGhoS1/18569ea438ffb0c029f8bcd2b1a53650/image9.png" />
            
            </figure><p>Missing requests when artificially inflating certificate chain size to simulate post-quantum certificates.</p><p>This does not mean that the ML-DSA-44-only route is necessarily unviable. Just like with key agreement, browsers can slowly turn on support for post-quantum certificates. As we hit issues with middleboxes, we can work with vendors to fix what is broken. It is crucial here that servers are configured to be able to serve either a small traditional chain, or a larger post-quantum chain.</p><p>These issues <i>are</i> problematic for a <a href="https://eprint.iacr.org/2018/063.pdf">single-certificate migration</a> strategy. In this approach, the server installs a single traditional certificate that contains a separate post-quantum certificate in a so-called non-critical extension. A client that does not support post-quantum certificates will ignore the extension. In this approach, installing the single certificate will immediately break all clients with compatibility issues, making it a non-starter.</p><p>What about performance? We saw the following impact on TLS handshake time.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/66AlqTNxx8h6oKMZSBqqsD/53e71fcf675af6818fc63eb7ab40a339/image13.png" />
            
</figure><p>Performance when artificially inflating certificate chain size to simulate post-quantum certificates.</p><p>The jump at around 40kB is caused by an extra round-trip due to a full congestion window. In the 2021 blog post <a href="/sizing-up-post-quantum-signatures/#intermezzo-tcp-s-congestion-window">we go into detail</a> on what that is all about. There is an important caveat: at Cloudflare, because we’re close to the client, we use a larger congestion window. With a typical congestion window, the jump would move to around 10kB. Also, the jump would be larger, as typical round-trip times are higher.</p><p>Thus, when adding 9kB, we’re looking at a slowdown of about 15%. Crossing the 10kB boundary, we are likely to incur an extra round-trip, which could well lead to a slowdown of more than 60%. That completely negates the much-touted performance benefit that TLS 1.3 has over TLS 1.2, and it’s too high a cost to be enabled by default.</p><p>Is 9kB too much? Enabling post-quantum key agreement wasn’t free either, but it was cheaper, and it actually gets us a tangible security benefit today. However, this line of thinking is dangerous: if we wait too long before enabling post-quantum certificates by default, we might find ourselves out of time when the quantum computer arrives.</p>
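<p>The round-trip cliff can be sketched with a simple slow-start model. The numbers below are illustrative: an initial congestion window of 10 segments of 1,460 bytes (a common default, smaller than the enlarged window we use), ignoring TLS record and TCP/IP header overhead:</p>

```python
# How many flights (round trips) the server needs to deliver its handshake
# messages under TCP slow start: the first flight carries at most
# initcwnd * MSS bytes, and the window roughly doubles each round trip.
MSS = 1460  # typical TCP payload bytes per segment

def server_flights(handshake_bytes, initcwnd=10):
    cwnd, sent, flights = initcwnd * MSS, 0, 0
    while sent < handshake_bytes:
        sent += cwnd
        cwnd *= 2
        flights += 1
    return flights

# A ~4kB classical handshake fits in the first flight; adding ~17kB of
# ML-DSA-44 authentication data overflows a typical window into a second
# flight, i.e. one extra round trip.
print(server_flights(4_000), server_flights(4_000 + 17_144))
```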
    <div>
      <h2>Way forward</h2>
      <a href="#way-forward">
        
      </a>
    </div>
    <p>Over the coming years, we’ll be working with browsers to test the viability and performance impact of post-quantum authentication in TLS. We expect to add support for post-quantum certificates as soon as they arrive (probably around 2026), but not enable them by default.</p><p>At the same time, we’re exploring various ideas to reduce the number of signatures.</p>
    <div>
      <h2>Reducing number of signatures</h2>
      <a href="#reducing-number-of-signatures">
        
      </a>
    </div>
    <p>Over the last few years, there have been several proposals to reduce the number of signatures used.</p>
    <div>
      <h3>Leaving out intermediate certificates</h3>
      <a href="#leaving-out-intermediate-certificates">
        
      </a>
    </div>
    <p>CAs report the intermediate certificates they use in the <a href="https://www.ccadb.org/cas/intermediates">CCADB</a>. Most browsers ship with the list of intermediates (of the CAs they trust). Using that list, a browser is able to establish a connection with a server that forgot to install the intermediate. If a server can leave out the intermediate, then why bother sending it at all?</p><p>There are three competing proposals for leaving out the intermediate certificate. The original 2019 proposal is by Martin Thomson, who <a href="https://datatracker.ietf.org/doc/html/draft-kampanakis-tls-scas-latest">suggests simply</a> having the browser send a single bit to indicate that it has an up-to-date list of all intermediates. In that case, the server will leave out the intermediates. This works well in the majority of cases, but could lead to some hard-to-debug issues in corner cases. For one, not all intermediates are listed in the CCADB, and these missing intermediates aren’t even from custom CAs. Another reason is that the browser could be mistaken about whether it’s up-to-date. A more esoteric issue is that the browser could reconstruct a different chain of certificates than the server had in mind.</p><p>To address these issues, in 2023, Dennis Jackson put forward a more <a href="https://datatracker.ietf.org/doc/draft-jackson-tls-cert-abridge/">robust proposal</a>. In this proposal, every year a fixed list of intermediates is compiled from the CCADB. Instead of a single flag, the browser sends the names of the lists of intermediates it has. The server does not simply leave out matching intermediates, but rather replaces each of them by the sequence number at which it appears in the list. 
He also did a survey of the most popular websites, and found that just by leaving out the intermediates today, we can save <a href="https://www.ietf.org/archive/id/draft-jackson-tls-cert-abridge-00.html#name-preliminary-evaluation">more than 2kB</a> compared to certificate compression for half of them. That’s with today’s certificates: yes, X509 certificates are somewhat bloated.</p><p>Finally, there is the more general <a href="https://datatracker.ietf.org/doc/draft-davidben-tls-trust-expr/">TLS trust expressions</a> proposal, which allows a browser to signal in a more fine-grained manner which CAs and intermediates it trusts.</p><p>It’s likely that some form of intermediate suppression will be adopted in the coming years. This would push the cost of an ML-DSA-44-only deployment down to less than 13kB.</p>
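<p>The substitution at the heart of the abridged-certificates idea can be sketched in a few lines. The list name and the tuple encoding here are illustrative, not the draft’s actual wire format:</p>

```python
# Abridged chains: both sides hold versioned lists of well-known
# intermediates; the server replaces a matching certificate by a short
# (list id, sequence number) reference, and the client re-expands it.
KNOWN_LISTS = {
    "ccadb-2023": [b"<intermediate A, DER>", b"<intermediate B, DER>"],
}

def abridge(chain, list_id):
    known = KNOWN_LISTS[list_id]
    return [("ref", list_id, known.index(cert)) if cert in known
            else ("raw", cert) for cert in chain]

def expand(abridged_chain):
    return [KNOWN_LISTS[e[1]][e[2]] if e[0] == "ref" else e[1]
            for e in abridged_chain]

chain = [b"<leaf, DER>", b"<intermediate A, DER>"]
wire = abridge(chain, "ccadb-2023")  # intermediate shrinks to a reference
assert expand(wire) == chain         # round-trips losslessly
```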
    <div>
      <h3>KEMTLS</h3>
      <a href="#kemtls">
        
      </a>
    </div>
    <p>Another approach is to change TLS more rigorously, by replacing the signature algorithm in the leaf certificate with a KEM. This is called <a href="/kemtls-post-quantum-tls-without-signatures">KEMTLS</a> (or <a href="https://datatracker.ietf.org/doc/draft-celi-wiggers-tls-authkem/">AuthKEM</a> at the IETF). The server proves it controls the leaf certificate by being able to decrypt a challenge sent by the client. This is not an outlandishly new idea, as older versions of TLS would encrypt a shared key to an RSA certificate.</p><p>KEMTLS does add quite a bit of complexity to TLS 1.3, which was purposely designed to simplify TLS 1.2. Adding complexity adds security concerns, but we <a href="/post-quantum-formal-analysis">soften that</a> by extending the machine-checked security proof of TLS 1.3 to KEMTLS. Nonetheless, adopting KEMTLS will be a significant engineering effort, so its gains should be worthwhile.</p><p>If we replace an ML-DSA-44 handshake signature of 2,420 bytes by KEMTLS using ML-KEM-512, we save 852 bytes in the total bytes transmitted by client and server. Looking just at the server, we save 1,620 bytes. Saving 1.6kB on 17kB is not very impressive. Also, KEMTLS is of little benefit if small post-quantum signatures such as MAYO<sub>one</sub> are available for the handshake.</p><p>KEMTLS shines in the case where that 1.6kB saving pushes the server within the congestion window, such as when UOV is used for all but the handshake and leaf signatures. Another advantage of KEMTLS, especially for embedded devices, is that it could reduce the number of algorithms that need to be implemented: you need a KEM for the key agreement anyway, and that could replace the signature scheme you would’ve only used for the handshake signature.</p><p>At the moment, deploying KEMTLS isn’t the lowest-hanging fruit, but it could well come into its own, depending on which signature schemes are standardized, and which other protocol changes are made.</p>
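<p>The shape of the exchange can be illustrated with any KEM. The sketch below uses a toy Diffie-Hellman KEM over a deliberately small, insecure group purely to show the flow; KEMTLS itself would use a post-quantum KEM such as ML-KEM:</p>

```python
# KEMTLS in miniature: the certified server key is a KEM key, and the
# server authenticates implicitly by decapsulating the client's challenge
# rather than by signing the transcript.
import os

P = 2**127 - 1  # Mersenne prime; toy-sized group, NOT cryptographically safe
G = 3

def kem_keygen():
    sk = int.from_bytes(os.urandom(16), "big") % P
    return pow(G, sk, P), sk            # (public key, private key)

def kem_encap(pk):
    r = int.from_bytes(os.urandom(16), "big") % P
    return pow(G, r, P), pow(pk, r, P)  # (ciphertext, shared secret)

def kem_decap(sk, ct):
    return pow(ct, sk, P)               # recovers the same shared secret

# The client encapsulates to the KEM public key from the server's
# certificate; only the holder of the private key derives the matching
# secret, so keys derived from it implicitly authenticate the server.
server_pk, server_sk = kem_keygen()
ct, client_secret = kem_encap(server_pk)
assert kem_decap(server_sk, ct) == client_secret
```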
    <div>
      <h3>Merkle tree certificates</h3>
      <a href="#merkle-tree-certificates">
        
      </a>
    </div>
    <p>An even more ambitious and involved proposal is <a href="https://datatracker.ietf.org/doc/draft-davidben-tls-merkle-tree-certs/">Merkle tree certificates</a> (MTC). In this proposal, all signatures except the handshake signature are replaced by a short &lt;800 byte Merkle tree certificate. This sounds too good to be true, and there is indeed a catch. MTC doesn’t work in all situations, and for those you will need to fall back to old-fashioned X509 certificates and certificate transparency. So, what’s assumed?</p><ul><li><p>No direct certificate issuance. You can’t get a Merkle tree certificate immediately: you will have to ask for one, and then wait for at least a day before you can use it.</p></li><li><p>Clients (in MTC parlance <i>relying parties</i>) can only check a Merkle tree certificate if they stay up to date with a <i>transparency service</i>. Browsers have an update-mechanism that can be used for this, but a browser that hasn’t been used in a while might be stale.</p></li></ul><p>MTC should be seen as an optimisation for the vast majority of cases.</p>
    <div>
      <h3>Summary</h3>
      <a href="#summary">
        
      </a>
    </div>
    <p>So, how does it actually work? I’ll try to give a short summary — for a longer introduction check out <a href="https://youtu.be/u_sFyz4F7dc?si=inG4bgBwKLzrBuvY&amp;t=2566">David Benjamin’s IETF presentation</a>, or get your hands dirty by <a href="https://github.com/bwesterb/mtc">setting up your own MTC CA</a>.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2pAmfsKWArXUPEZtgXezao/f40906c5f85ec7a141920b3d49fba0af/image1-9.png" />
            
</figure><p>An overview of a Merkle Tree certificate deployment</p><p>In MTC, CAs issue <i>assertions</i> in batches in a fixed rhythm, say once every hour. An example of an assertion is “you can trust P-256 public key ab....23 when connecting to example.com”. Basically, an assertion is a certificate without the signature. If a subscriber wants to get a certificate, it sends the assertion to the CA, which vets it, and then queues it for issuance.</p><p>On this <i>batch</i> of assertions, the CA computes a <a href="https://en.wikipedia.org/wiki/Merkle_tree">Merkle tree</a>. We have an <a href="/introducing-certificate-transparency-and-nimbus#buildingaverifiablegloballyconsistentlog">explainer of Merkle trees</a> in our blog post introducing certificate transparency. The short of it is that you can summarize a batch into a single hash by hashing pairwise up a tree; the root is the summary. The nice thing about Merkle trees is that you can prove to someone who only has the root that something was in the batch, by revealing just a few hashes up the tree: this proof is what constitutes the <i>Merkle tree certificate</i>.</p><p>Each assertion is valid for a fixed number of batches, say 336 batches for a validity of two weeks. This is called the <i>validity window</i>. When issuing a batch, the CA not only publishes the assertions, but also a signature on the roots of all batches that are currently valid, called the <i>signed validity window</i>.</p><p>After the MTC CA has issued the new batch, the <i>subscriber</i> that asked for the certificate to be issued can pull the Merkle tree certificate from the CA. The subscriber can then install it next to its X509 certificate, but will have to wait a bit before it’s useful.</p><p>Every hour, the <i>transparency services</i>, including those run by browser vendors, pull the new assertions and signed validity window from the CAs they trust. They check whether everything is consistent, including whether the new signed validity window matches the old one. When satisfied, they republish the batches and signed validity window themselves.</p><p>Every hour, browsers download the latest roots from their trusted <i>transparency service</i>. Now, when connecting to a server, the client essentially advertises which CAs it trusts, and the sequence number of the latest batch for which it has the roots. The server can then send either a new MTC, an older MTC (if the client is a bit stale), or fall back to an X509 certificate.</p>
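<p>The Merkle tree mechanics are compact enough to sketch. This toy version (with simplified domain separation and odd-node handling; not the draft’s actual encoding) builds a batch, produces an inclusion proof for one assertion, and verifies it against the root:</p>

```python
# A minimal Merkle tree over a batch of assertions: the CA publishes the
# root; an assertion plus its inclusion proof (the sibling hashes up the
# tree) plays the role of the short Merkle tree certificate.
import hashlib

def h(data):
    return hashlib.sha256(data).digest()

def _level_up(level):
    if len(level) % 2:  # duplicate the last node on odd-sized levels
        level = level + [level[-1]]
    return [h(b"node:" + level[i] + level[i + 1])
            for i in range(0, len(level), 2)]

def merkle_root(batch):
    level = [h(b"leaf:" + a) for a in batch]
    while len(level) > 1:
        level = _level_up(level)
    return level[0]

def inclusion_proof(batch, index):
    proof, level = [], [h(b"leaf:" + a) for a in batch]
    while len(level) > 1:
        if len(level) % 2:
            level = level + [level[-1]]
        sib = index ^ 1
        proof.append((sib < index, level[sib]))  # (is sibling on the left?)
        level = _level_up(level)
        index //= 2
    return proof

def verify(root, assertion, proof):
    node = h(b"leaf:" + assertion)
    for sibling_is_left, sib in proof:
        node = h(b"node:" + (sib + node if sibling_is_left else node + sib))
    return node == root

batch = [b"trust key A for example.com", b"trust key B for example.org",
         b"trust key C for example.net"]
root = merkle_root(batch)
proof = inclusion_proof(batch, 1)
assert verify(root, batch[1], proof)      # valid short "certificate"
assert not verify(root, batch[0], proof)  # proof is bound to its assertion
```

<p>Note how short the proof is: it grows with the logarithm of the batch size, which is why the resulting certificates stay under 800 bytes even for large batches.</p>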
    <div>
      <h2>Outlook</h2>
      <a href="#outlook">
        
      </a>
    </div>
    <p>The path for migrating the Internet to post-quantum authentication is much less clear than it was for key agreement. In the short term, we expect early adoption of post-quantum authentication across the Internet around 2026, but few will turn it on by default. Unless we can get its performance much closer to that of today’s authentication, we expect the vast majority to keep post-quantum authentication disabled until motivated by regulation.</p>
    <div>
      <h3>Not just TLS, authentication, and key agreement</h3>
      <a href="#not-just-tls-authentication-and-key-agreement">
        
      </a>
    </div>
    <p>Despite its length, this blog post has only really touched upon migrating TLS. And even TLS we did not cover completely, as we have not discussed <a href="/announcing-encrypted-client-hello">Encrypted ClientHello</a> (we didn’t forget about it). Although important, TLS is not the only protocol that is key to the security of the Internet. We want to briefly mention a few other challenges, but cannot go into detail. One particular challenge is DNSSEC, which is responsible for securing the resolution of domain names.</p><p>Although key agreement and signatures are the most widely used cryptographic primitives, over the last few years we have seen the adoption of more esoteric cryptography to serve more advanced use cases, such as unlinkable tokens with <a href="/privacy-pass-standard">Privacy Pass</a> / <a href="/eliminating-captchas-on-iphones-and-macs-using-new-standard">PAT</a>, anonymous credentials, and <a href="/inside-geo-key-manager-v2">attribute-based encryption</a>, to name a few. For most of these advanced cryptographic schemes, there is no known practical post-quantum alternative yet.</p>
    <div>
      <h2>What you can do today</h2>
      <a href="#what-you-can-do-today">
        
      </a>
    </div>
    <p>To finish, let’s review what you can do today. For most organizations the brunt of the work is in the preparation. Where is cryptography used in the first place? What software libraries / what hardware? What are the timelines of your vendors? Do you need to hire expertise? What’s at risk, and how should it be prioritized? Even before you can answer all those, create engagement within the organization. All this work can be started before NIST finishes their standards or software starts shipping with post-quantum cryptography.</p><p>You can also start testing right now since the performance characteristics of the final standards will not be meaningfully different from the preliminary ones available today. If it works with the preliminary ones today in your test environment, the final standards will most likely work just fine in production. We’ve collected a list of software and forks that already support preliminary post-quantum key agreement <a href="https://pq.cloudflareresearch.com/">here</a>.</p><p>Also on <a href="https://pq.cloudflareresearch.com/">that page</a>, we collected instructions on how to turn on post-quantum key agreement in your browser today. (For Chrome it’s <code>enable-tls13-kyber</code> in <code>chrome://flags</code>.)</p><p>If you’re a Cloudflare customer, you can check out how to <a href="/post-quantum-to-origins/">enable post-quantum key agreement to your origin</a>, and <a href="/post-quantum-cryptography-ga">our products</a> that are secured against store-now/decrypt-later today.</p><p>Good luck with your migration, and if you hit any issues, do reach out: <a>ask-research@cloudflare.com</a></p> ]]></content:encoded>
            <category><![CDATA[Security Week]]></category>
            <category><![CDATA[Post-Quantum]]></category>
            <category><![CDATA[Research]]></category>
            <guid isPermaLink="false">2Qs8QVZDDBbXeE8CoXklQr</guid>
            <dc:creator>Bas Westerbaan</dc:creator>
        </item>
        <item>
            <title><![CDATA[Post-quantum cryptography goes GA]]></title>
            <link>https://blog.cloudflare.com/post-quantum-cryptography-ga/</link>
            <pubDate>Fri, 29 Sep 2023 13:05:00 GMT</pubDate>
            <description><![CDATA[ Cloudflare announces Post-Quantum Cryptography as a Generally Available system ]]></description>
            <content:encoded><![CDATA[
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/CftKQnuBYwGI69XmAFsVq/d0577dd4455257096f07d478ddaf5bdd/image2-28.png" />
            
            </figure><p>Over the last twelve months, we have been talking about the new baseline of encryption on the Internet: <a href="https://www.cloudflare.com/learning/ssl/quantum/what-is-post-quantum-cryptography/">post-quantum cryptography</a>. During Birthday Week last year we announced that our <a href="/post-quantum-for-all/">beta of Kyber was available for testing,</a> and that <a href="/post-quantum-tunnel/">Cloudflare Tunnel</a> could be enabled with post-quantum cryptography. Earlier this year, we made our stance clear that this foundational technology should be available to <a href="/post-quantum-crypto-should-be-free/">everyone for free, forever</a>.</p><p>Today, we have hit a milestone after six years and <a href="/searchresults/#q=post%20quantum%20crypto&amp;sort=relevancy&amp;f:@customer_facing_source=%5BBlog%5D&amp;f:@language=%5BEnglish%5D">31 blog posts</a> in the making: we’re starting to roll out <a href="/post-quantum-to-origins/">General Availability</a><sup>1</sup> of post-quantum cryptography support to our customers, services, and internal systems as described more fully below. This includes products like <a href="/post-quantum-to-origins/">Pingora</a> for origin connectivity, 1.1.1.1, <a href="https://www.cloudflare.com/developer-platform/r2/">R2</a>, Argo Smart Routing, Snippets, and many more.</p><p>This is a milestone for the Internet. We don't yet know when quantum computers will have enough scale to break today's cryptography, but the benefits of upgrading to post-quantum cryptography now are clear. <a href="/the-tls-post-quantum-experiment/">Fast connections and future-proofed</a> security are all possible today because of the advances made by Cloudflare, Google, Mozilla, the National Institute of Standards and Technology in the United States, the Internet Engineering Task Force, and numerous academic institutions.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6oRJ1q0ib6rCSfgBJPJyuU/8e2847b56b058eb6267d1b9303a74883/image1-40.png" />
            
            </figure><p>What does General Availability mean? In October 2022 <a href="/post-quantum-for-all/">we enabled <i>X25519+Kyber</i> as a beta for all websites and APIs</a> served through Cloudflare. However, it takes two to tango: the connection is only secured if the browser also supports post-quantum cryptography. Starting August 2023, <a href="https://blog.chromium.org/2023/08/protecting-chrome-traffic-with-hybrid.html">Chrome</a> is slowly enabling <i>X25519+Kyber</i> by default.</p><p>The user’s request is routed through Cloudflare’s network (2). We have upgraded many of these internal connections to use post-quantum cryptography, and expect to be done upgrading all of our internal connections by the end of 2024. That leaves the final link: the connection (3) between us and the <i>origin server</i>.</p><p>We are happy to announce that <b>we are rolling out support for X25519+Kyber for most inbound and outbound connections</b> <b>as Generally Available</b> for use including <i>origin servers</i> and <a href="https://workers.cloudflare.com/">Cloudflare Workers</a> <code>fetch()</code>es.</p>
<table>
<thead>
  <tr>
    <th><span>Plan</span></th>
    <th><span>Support for post-quantum outbound connections</span></th>
  </tr>
</thead>
<tbody>
  <tr>
    <td><span>Free</span></td>
    <td><span>Started roll-out. Aiming for 100% by the end of October.</span></td>
  </tr>
  <tr>
    <td><span>Pro and business</span></td>
    <td><span>Aiming for 100% by the end of the year.</span></td>
  </tr>
  <tr>
    <td><span>Enterprise</span></td>
    <td><span>Roll-out begins February 2024. 100% by March 2024.</span></td>
  </tr>
</tbody>
</table><p>For our Enterprise customers, we will be sending out additional information regularly over the course of the next six months to help prepare you for the roll-out. Pro, Business, and Enterprise customers can skip the roll-out and opt in for their zones today, or opt out ahead of time using an API described in our companion blog post on <a href="/post-quantum-to-origins/">post-quantum cryptography</a>. Before rolling out for Enterprise in February 2024, we will add a toggle on the dashboard to opt out.</p><p>If you're excited to get started now, <a href="/post-quantum-to-origins/">check out our blog with the technical details and flip on post-quantum cryptography support via the API</a>!</p>
    <div>
      <h3>What’s included and what is next?</h3>
      <a href="#whats-included-and-what-is-next">
        
      </a>
    </div>
    <p>With an upgrade of this magnitude, we wanted to focus on our most used products first and then expand outward to cover our edge cases. This process has led us to include the following products and systems in this roll out:</p><table>
<tbody>
  <tr>
    <td>1.1.1.1</td>
  </tr>
  <tr>
    <td>AMP</td>
  </tr>
  <tr>
    <td>API Gateway</td>
  </tr>
  <tr>
    <td>Argo Smart Routing</td>
  </tr>
  <tr>
    <td>Auto Minify</td>
  </tr>
  <tr>
    <td>Automatic Platform Optimization</td>
  </tr>
  <tr>
    <td>Automatic Signed Exchange</td>
  </tr>
  <tr>
    <td>Cloudflare Egress</td>
  </tr>
  <tr>
    <td>Cloudflare Images</td>
  </tr>
  <tr>
    <td>Cloudflare Rulesets</td>
  </tr>
  <tr>
    <td>Cloudflare Snippets</td>
  </tr>
  <tr>
    <td>Cloudflare Tunnel</td>
  </tr>
  <tr>
    <td>Custom Error Pages</td>
  </tr>
  <tr>
    <td>Flow Based Monitoring</td>
  </tr>
  <tr>
    <td>Health checks</td>
  </tr>
  <tr>
    <td>Hermes</td>
  </tr>
  <tr>
    <td>Host Head Checker</td>
  </tr>
  <tr>
    <td>Magic Firewall</td>
  </tr>
  <tr>
    <td>Magic Network Monitoring</td>
  </tr>
  <tr>
    <td>Network Error Logging</td>
  </tr>
  <tr>
    <td>Project Flame</td>
  </tr>
  <tr>
    <td>Quicksilver</td>
  </tr>
  <tr>
    <td>R2 Storage</td>
  </tr>
  <tr>
    <td>Request Tracer</td>
  </tr>
  <tr>
    <td>Rocket Loader</td>
  </tr>
  <tr>
    <td>Speed on Cloudflare Dash</td>
  </tr>
  <tr>
    <td>SSL/TLS</td>
  </tr>
  <tr>
    <td>Traffic Manager</td>
  </tr>
  <tr>
    <td>WAF, Managed Rules</td>
  </tr>
  <tr>
    <td>Waiting Room</td>
  </tr>
  <tr>
    <td>Web Analytics</td>
  </tr>
</tbody>
</table><p>If a product or service you use is not listed here, we have not started rolling out post-quantum cryptography to it yet. We are actively working on rolling out post-quantum cryptography to all products and services including our Zero Trust products. Until we have achieved post-quantum cryptography support in all of our systems, we will publish an update blog in every Innovation Week that covers which products we have rolled out post-quantum cryptography to, the products that will be getting it next, and what is still on the horizon.</p><p>Products we are working on bringing post-quantum cryptography support to soon:</p><table>
<tbody>
  <tr>
    <td>Cloudflare Gateway</td>
  </tr>
  <tr>
    <td>Cloudflare DNS</td>
  </tr>
  <tr>
    <td>Cloudflare Load Balancer</td>
  </tr>
  <tr>
    <td>Cloudflare Access</td>
  </tr>
  <tr>
    <td>Always Online</td>
  </tr>
  <tr>
    <td>Zaraz</td>
  </tr>
  <tr>
    <td>Logging</td>
  </tr>
  <tr>
    <td>D1</td>
  </tr>
  <tr>
    <td>Cloudflare Workers</td>
  </tr>
  <tr>
    <td>Cloudflare WARP</td>
  </tr>
  <tr>
    <td>Bot Management</td>
  </tr>
</tbody>
</table>
    <div>
      <h3>Why now?</h3>
      <a href="#why-now">
        
      </a>
    </div>
    <p>As we announced earlier this year, post-quantum cryptography will be included for free in all Cloudflare products and services that can support it. The best encryption technology should be accessible to everyone - free of charge - to help support privacy and human rights globally.</p><p>As we <a href="/post-quantum-crypto-should-be-free/">mentioned</a> in March:</p><p><i>“What was once an experimental frontier has turned into the underlying fabric of modern society. It runs in our most critical infrastructure like power systems, hospitals, airports, and banks. We trust it with our most precious memories. We trust it with our secrets. That’s why the Internet needs to be private by default. It needs to be secure by default.”</i></p><p>Our work on post-quantum cryptography is driven by the thesis that quantum computers that can break conventional cryptography create a similar problem to the Year 2000 bug. We know there is going to be a problem in the future that could have catastrophic consequences for users, businesses, and even nation states. The difference this time is that we don’t know the date and time at which this break in the computational paradigm will occur. Worse, any traffic captured today could be decrypted in the future. We need to prepare today to be ready for this threat.</p><p>We are excited for everyone to adopt post-quantum cryptography into their systems. To follow the latest developments of our deployment of post-quantum cryptography and third-party client/server support, check out <a href="https://pq.cloudflareresearch.com/">pq.cloudflareresearch.com</a> and keep an eye on this blog.</p><p>***</p><p><sup>1</sup>We are using a <a href="https://datatracker.ietf.org/doc/draft-tls-westerbaan-xyber768d00/">preliminary version</a> of Kyber, NIST’s pick for post-quantum key agreement. Kyber has not been finalized. 
We expect a final standard to be published in 2024 under the name ML-KEM, which we will then adopt promptly while deprecating support for X25519Kyber768Draft00.</p> ]]></content:encoded>
            <category><![CDATA[Birthday Week]]></category>
            <category><![CDATA[General Availability]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[Research]]></category>
            <category><![CDATA[Post-Quantum]]></category>
            <guid isPermaLink="false">6BFLGzTX8jguAgFnyAFCib</guid>
            <dc:creator>Wesley Evans</dc:creator>
            <dc:creator>Bas Westerbaan</dc:creator>
            <dc:creator>Christopher Patton</dc:creator>
            <dc:creator>Peter Wu</dc:creator>
            <dc:creator>Vânia Gonçalves</dc:creator>
        </item>
        <item>
            <title><![CDATA[Cloudflare now uses post-quantum cryptography to talk to your origin server]]></title>
            <link>https://blog.cloudflare.com/post-quantum-to-origins/</link>
            <pubDate>Fri, 29 Sep 2023 13:00:45 GMT</pubDate>
            <description><![CDATA[ Starting today, you can secure the connection between Cloudflare and your origin server with post-quantum cryptography ]]></description>
            <content:encoded><![CDATA[ <p>Quantum computers pose a <a href="/the-quantum-menace/">serious threat</a> to the security and privacy of the Internet: encrypted communication intercepted today can be decrypted <a href="https://en.wikipedia.org/wiki/Harvest_now,_decrypt_later">in the future</a> by a sufficiently advanced quantum computer. To counter this <a href="https://en.wikipedia.org/wiki/Harvest_now,_decrypt_later">store-now/decrypt-later</a> threat, cryptographers have been hard at work over the last decades proposing and vetting <a href="https://www.cloudflare.com/learning/ssl/quantum/what-is-post-quantum-cryptography/">post-quantum cryptography (PQC)</a>: cryptography designed to withstand attacks by quantum computers. After a six-year public competition, in July 2022, the US National Institute of Standards and Technology (NIST), known for standardizing AES and SHA, announced <a href="https://pq-crystals.org/kyber/index.shtml">Kyber</a> as <a href="/nist-post-quantum-surprise/">their pick</a> for post-quantum key agreement. Now the baton has been handed to industry to deploy post-quantum key agreement to protect today’s communications from the threat of future decryption by a quantum computer.</p><p>Cloudflare operates as a reverse proxy between clients (“visitors”) and customers’ web servers (“origins”), so that we can protect origin sites from attacks and improve site performance. In this post we explain how we secure the connection from Cloudflare to <i>origin servers</i>. To put that in context, let’s have a look at the connections involved when visiting an uncached page on a website served through Cloudflare.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2WZQZByAjMmuu53BzxjNik/170ebefe3aec6f8277f4c2e4e34b76f1/Connections-involved-when-user-visits-an-uncached-page-on-a-website-served-through-Cloudflare.png" />
            
            </figure><p>The first connection is from the visitor’s browser to Cloudflare. In October 2022, <a href="/post-quantum-for-all/">we enabled <i>X25519+Kyber</i> as a beta for all websites and APIs</a> served through Cloudflare. However, it takes two to tango: the connection is only secured if the browser also supports post-quantum cryptography. As of August 2023, <a href="https://blog.chromium.org/2023/08/protecting-chrome-traffic-with-hybrid.html">Chrome</a> is slowly enabling <i>X25519+Kyber</i> by default.</p><p>The visitor’s request is routed through Cloudflare’s network (2). We have <a href="/post-quantum-cryptography-ga">upgraded</a> many of these internal connections to use post-quantum cryptography, and expect to be done upgrading all of our internal connections by the end of 2024. That leaves as the final link the connection (3) between us and the <i>origin server</i>.</p><p>We are happy to announce that <b>we are rolling out support for X25519+Kyber for most outbound connections</b>, including <i>origin servers</i> and <a href="https://workers.cloudflare.com/">Cloudflare Workers</a> <code>fetch()</code> calls.</p>
<table>
<thead>
  <tr>
    <th><span>Plan</span></th>
    <th><span>Support for post-quantum outbound connections</span></th>
  </tr>
</thead>
<tbody>
  <tr>
    <td><span>Free</span></td>
    <td><span>Started roll-out. Aiming for 100% by the end of October.</span></td>
  </tr>
  <tr>
    <td><span>Pro and Business</span></td>
    <td><span>Started roll-out. Aiming for 100% by the end of the year.</span></td>
  </tr>
  <tr>
    <td><span>Enterprise</span></td>
    <td><span>Roll-out starts February 2024. 100% by March 2024.</span></td>
  </tr>
</tbody>
</table><p>You can skip the roll-out and opt in your zone today, or opt out ahead of time, using an API described below. Before rolling out this support for Enterprise customers in February 2024, we will add a toggle on the dashboard to opt out.</p><p>In this post we will dive into the nitty-gritty of what we enabled; how we have to be a bit subtle to prevent breaking connections to origins that are not ready yet; and how you can add support to your (origin) server.</p><p>But before we dive in, for the impatient:</p>
    <div>
      <h3>Quick start</h3>
      <a href="#quick-start">
        
      </a>
    </div>
    <p>To enable a post-quantum connection between Cloudflare and your origin server today, opt in your zone to skip the gradual roll-out:</p>
            <pre><code>curl --request PUT \
  --url https://api.cloudflare.com/client/v4/zones/(zone_id)/cache/origin_post_quantum_encryption \
  --header 'Content-Type: application/json' \
  --header 'Authorization: Bearer (API token)' \
  --data '{"value": "preferred"}'</code></pre>
            <p>Replace <a href="https://developers.cloudflare.com/fundamentals/setup/find-account-and-zone-ids/"><code>(zone_id)</code></a> and <a href="https://developers.cloudflare.com/fundamentals/api/get-started/create-token/"><code>(API token)</code></a> appropriately. Then, make sure your server supports TLS 1.3; enable and prefer the key agreement <code>X25519Kyber768Draft00</code>; and ensure it’s configured with <i>server cipher preference</i>. For example, to configure <a href="https://www.nginx.com/">nginx</a> (compiled against a recent <a href="https://boringssl.googlesource.com/boringssl">BoringSSL</a>) accordingly, use:</p>
            <pre><code>http {
    # [...]
    ssl_ecdh_curve X25519Kyber768Draft00:X25519;
    ssl_prefer_server_ciphers on;
    ssl_protocols TLSv1.3;
}</code></pre>
            <p>To check your server is properly configured, you can use the <code>bssl</code> tool of <a href="https://github.com/google/boringssl">BoringSSL</a>:</p>
            <pre><code>$ bssl client -connect (your server):443 -curves X25519:X25519Kyber768Draft00
[...]
  ECDHE curve: X25519Kyber768Draft00
[...]</code></pre>
            <p>We’re looking for <code>X25519Kyber768Draft00</code>, as shown above, which indicates a post-quantum connection, instead of merely <code>X25519</code>. For more client and server support, check out <a href="https://pq.cloudflareresearch.com/">pq.cloudflareresearch.com</a>. Now, let’s dive in.</p>
    <div>
      <h2>Overview of a TLS 1.3 handshake</h2>
      <a href="#overview-of-a-tls-1-3-handshake">
        
      </a>
    </div>
    <p>To understand how a smooth upgrade is possible, and where it might go wrong, we need to understand a few basics of the TLS 1.3 protocol, which is used to protect traffic on the Internet. A TLS connection starts with a <b>handshake</b>, which is used to authenticate the server and derive a shared key. The browser (client) starts by sending a <i>ClientHello</i> message that contains, among other things, the hostname (SNI) and the list of key agreement methods it supports.</p><p>To remove a round trip, the client is allowed to guess what the server supports and start the key agreement by sending one or more <i>client keyshares</i>. That guess might be correct (on the left in the diagram below) or the client has to retry (on the right). By the way, this guessing of keyshares is a <a href="/rfc-8446-aka-tls-1-3/">new feature of TLS 1.3</a>, and it is the main reason why it’s faster than TLS 1.2.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/QbpgdMdMdt9aW2nmrBSnT/97fee7c97d8c726e29fbf7b72666bfb6/image2-30.png" />
            
            </figure><p><i>Protocol flow for server-authenticated TLS 1.3 with a supported client keyshare on the left and a</i> HelloRetryRequest <i>on the right.</i></p><p>In both cases the client sends a <i>client keyshare</i> to the server. From this client keyshare the server generates the <i>shared key</i>. The server then returns a <i>server keyshare</i> with which the client can also compute the shared key. This shared key is used to protect the rest of the connection using symmetric cryptography, such as AES.</p><p>Today <a href="https://cr.yp.to/ecdh.html">X25519</a> is used as the key agreement in the vast majority of connections. To secure the connection against store-now/decrypt-later in the post-quantum world, a client can simply send a <a href="https://datatracker.ietf.org/doc/draft-tls-westerbaan-xyber768d00/">X25519+Kyber</a> keyshare.</p>
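<p>To make the hybrid construction concrete, here is a schematic Python sketch. The stub functions below only stand in for the real cryptographic operations; what matters is the combining step, which (per the draft) simply concatenates the two shared secrets, so the result stays secure as long as either component remains unbroken.</p>

```python
import hashlib

# Stand-in operations: the real X25519 and Kyber768 each produce a 32-byte
# shared secret; these stubs only mimic the shapes, not the cryptography.
def x25519_shared_secret(our_secret: bytes, peer_public: bytes) -> bytes:
    return hashlib.sha256(b"x25519" + our_secret + peer_public).digest()

def kyber768_decapsulate(secret_key: bytes, ciphertext: bytes) -> bytes:
    return hashlib.sha256(b"kyber768" + secret_key + ciphertext).digest()

def hybrid_shared_secret(our_x_secret, peer_x_public, kyber_sk, kyber_ct):
    # The hybrid secret is the concatenation of both parts: an attacker
    # must break X25519 *and* Kyber768 to recover it.
    return (x25519_shared_secret(our_x_secret, peer_x_public)
            + kyber768_decapsulate(kyber_sk, kyber_ct))

secret = hybrid_shared_secret(b"a" * 32, b"b" * 32, b"c" * 32, b"d" * 32)
assert len(secret) == 64  # 32 bytes from each component
```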
    <div>
      <h3>Hello! Retry Request? (HRR)</h3>
      <a href="#hello-retry-request-hrr">
        
      </a>
    </div>
    <p>What we just described is the happy flow, where the client guessed correctly which key agreement the server supports. If the server does not support the keyshare that the client sent, then the server picks one of the supported key agreements that the client advertised, and asks for it in a <i>HelloRetryRequest</i> message.</p><p>This is not the only case where a server can use a HelloRetryRequest: even if the client sent keyshares that the server supports, the server is allowed to prefer a different key agreement the client advertised, and ask for it with a HelloRetryRequest. This will turn out to be very useful.</p><p><i>HelloRetryRequest</i>s are mostly undesirable: they add an extra round trip, and bring us back to the performance of TLS 1.2. We already had a transition of key agreement methods: back in the day P-256 was the de facto standard. When browsers couldn’t assume support for the newer X25519, some would send two keyshares, both X25519 and P-256, to prevent a <i>HelloRetryRequest</i>.</p><p>Also today, when enabling <a href="https://blog.chromium.org/2023/08/protecting-chrome-traffic-with-hybrid.html">Kyber in Chrome</a>, Chrome will send two keyshares, X25519 and X25519+Kyber, to prevent a <i>HelloRetryRequest</i>. Sending two keyshares is not ideal: it requires the client to compute more, and it takes more space on the wire. This becomes more problematic when we want to send two post-quantum keyshares, as post-quantum keyshares are much larger. Talking about post-quantum keyshares, let’s have a look at X25519+Kyber.</p>
    <div>
      <h2>The nitty-gritty of X25519+Kyber</h2>
      <a href="#the-nitty-gritty-of-x25519-kyber">
        
      </a>
    </div>
    <p>The full name of the post-quantum key agreement we have enabled is <a href="https://datatracker.ietf.org/doc/draft-tls-westerbaan-xyber768d00/">X25519Kyber768Draft00</a>, which has become the industry standard for early deployment. It is the combination (a so-called <i>hybrid</i>, more about that later) of two key agreements: <a href="https://cr.yp.to/ecdh.html">X25519</a> and a <a href="https://datatracker.ietf.org/doc/draft-cfrg-schwabe-kyber/">preliminary version</a> of NIST’s pick Kyber. Preliminary, because standardization of Kyber is not complete: NIST has released a <a href="https://csrc.nist.gov/pubs/fips/203/ipd">draft standard</a> for which it has requested public input. The final standard might change a little, but we do not expect any radical changes in security or performance. One notable change is the name: the NIST standard is set to be called <a href="https://csrc.nist.gov/pubs/fips/203/ipd"><i>ML-KEM</i></a>. Once ML-KEM is released in 2024, we will promptly adopt support for the corresponding hybrid, and deprecate support for X25519Kyber768Draft00. We will announce deprecation on this blog and <a href="https://pq.cloudflareresearch.com/">pq.cloudflareresearch.com</a>.</p>
    <div>
      <h3>Picking security level: 512 vs 768</h3>
      <a href="#picking-security-level-512-vs-768">
        
      </a>
    </div>
    <p>Back in 2022, for incoming connections, <a href="/post-quantum-for-all/">we enabled</a> hybrids with both Kyber512 and Kyber768. The difference is target security level: Kyber512 aims for the same security as AES-128, whereas Kyber768 matches up with AES-192. Contrary to popular belief, AES-128 is <a href="/nist-post-quantum-surprise/#grover-s-algorithm">not broken</a> in practice by quantum computers.</p><p>So why go with Kyber768? After years of analysis, there is no indication that Kyber512 fails to live up to its target security level. The designers of Kyber feel more comfortable, though, with the wider security margin of Kyber768, and we follow their advice.</p>
    <div>
      <h3>Hybrid</h3>
      <a href="#hybrid">
        
      </a>
    </div>
    <p>It is not inconceivable, though, that an unexpected improvement in cryptanalysis will completely break Kyber768. Notably, <a href="https://eprint.iacr.org/2022/214.pdf">Rainbow</a>, GeMSS, and <a href="https://eprint.iacr.org/2022/975">SIDH</a> survived several rounds of public review before being broken. We do have to add nuance here. For a big break you need some mathematical trick, and compared to other schemes, SIDH had a lot of <i>mathematical attack surface</i>. Secondly, the fact that a scheme participated in many rounds of review doesn’t mean it saw a lot of attention. Because of their performance characteristics, these three schemes have more niche applications, and therefore received much less scrutiny from cryptanalysts. In contrast, Kyber is the big prize: breaking it would ensure fame.</p><p>Notwithstanding, for the moment, we feel it’s wiser to stick with hybrid key agreement. We combine Kyber together with X25519, which is currently the de facto standard key agreement, so that if Kyber turns out to be broken, we retain the non-post-quantum security of X25519.</p>
    <div>
      <h3>Performance</h3>
      <a href="#performance">
        
      </a>
    </div>
    <p>Kyber is fast. Very fast. It easily beats X25519, which is already known for its speed:</p><table>
	<thead>
		<tr>
			<th><span>Algorithm</span></th>
			<th><span>PQ</span></th>
			<th colspan="2"><span>Size keyshares (in bytes)</span></th>
			<th colspan="2"><span>Ops/sec (higher is better)</span></th>
		</tr>
		<tr>
			<th> </th>
			<th> </th>
			<th><strong>Client</strong></th>
			<th><strong>Server</strong></th>
			<th><strong>Client</strong></th>
			<th><strong>Server</strong></th>
		</tr>
	</thead>
	<tbody>
		<tr>
			<td><strong>X25519</strong></td>
			<td><span>❌</span></td>
			<td><span>32</span></td>
			<td><span>32</span></td>
			<td><span>17,000</span></td>
			<td><span>17,000</span></td>
		</tr>
		<tr>
			<td><strong>Kyber768</strong></td>
			<td><span>✅</span></td>
			<td><span>1,184</span></td>
			<td><span>1,088</span></td>
			<td><span>31,000</span></td>
			<td><span>70,000</span></td>
		</tr>
		<tr>
			<td><strong>X25519Kyber768Draft00</strong></td>
			<td><span>✅</span></td>
			<td><span>1,216</span></td>
			<td><span>1,120</span></td>
			<td><span>11,000</span></td>
			<td><span>14,000</span></td>
		</tr>
	</tbody>
</table><p>Combined, X25519Kyber768Draft00 is slower than X25519, but not by much. The big difference is its size: when connecting, the client has to send 1,184 extra bytes for Kyber in the first message. That brings us to the next topic.</p>
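<p>As a quick sanity check of the numbers above, the hybrid client keyshare is simply the two component keyshares side by side:</p>

```python
# Client keyshare sizes in bytes, taken from the table above.
X25519 = 32
KYBER768 = 1184  # Kyber768 public key

hybrid = X25519 + KYBER768  # X25519Kyber768Draft00 client keyshare
extra = hybrid - X25519     # overhead versus a plain X25519 keyshare

print(hybrid)  # 1216
print(extra)   # 1184
```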
    <div>
      <h2>When things break, and how to move forward</h2>
      <a href="#when-things-break-and-how-to-move-forward">
        
      </a>
    </div>
    
    <div>
      <h3>Split ClientHello</h3>
      <a href="#split-clienthello">
        
      </a>
    </div>
    <p>As we saw, the <i>ClientHello</i> is the first message that is sent by the client when setting up a TLS connection. With X25519, the ClientHello almost always fits within one network packet. With Kyber, the ClientHello no longer fits within typical packet sizes and needs to be split over two network packets.</p><p>The TLS standard allows for the ClientHello to be split in this way. However, it used to be so exceedingly rare to see a split ClientHello that there is plenty of software and hardware out there that falsely assumes it never happens.</p><p>This so-called <b>protocol ossification</b> is the major challenge in rolling out post-quantum key agreement. Back in 2019, during <a href="/the-tls-post-quantum-experiment/">earlier post-quantum experiments</a>, middleboxes of a particular vendor dropped connections with a split ClientHello. Chrome is currently <a href="https://blog.chromium.org/2023/08/protecting-chrome-traffic-with-hybrid.html">slowly ramping up</a> the number of post-quantum connections to catch these issues early. Several reports are listed <a href="https://twitter.com/davidcadrian/status/1692572405368078816">here</a>, and luckily most vendors seem to fix issues promptly.</p><p>Over time, with the slow ramp-up of browsers, many of these implementation bugs will be found and corrected. However, we cannot completely rely on this for our outbound connections since in many cases Cloudflare is the sole client to connect directly to the origin server. Thus, we must exercise caution when deploying post-quantum cryptography to ensure that we are still able to reach origin servers even in the presence of buggy implementations.</p>
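<p>A back-of-the-envelope estimate shows why this happens; the packet and ClientHello sizes below are illustrative assumptions, not measurements:</p>

```python
# Rough packet budget; all sizes here are assumptions for illustration.
MTU = 1500              # typical Ethernet MTU
IP_TCP_HEADERS = 40     # IPv4 (20) + TCP (20), no options
BUDGET = MTU - IP_TCP_HEADERS   # bytes left for the TLS ClientHello

classical_hello = 300   # rough ClientHello size with an X25519 keyshare
pq_hello = classical_hello + 1184   # plus the Kyber768 public key

print(classical_hello <= BUDGET)  # True: fits in one packet
print(pq_hello <= BUDGET)         # False: must be split over two
```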
    <div>
      <h3>HelloRetryRequest to the rescue</h3>
      <a href="#helloretryrequest-to-the-rescue">
        
      </a>
    </div>
    <p>To enable support for post-quantum key agreement on all outbound connections, without risking issues with split ClientHello for those servers that are not ready yet, we make clever use of HelloRetryRequest. Instead of sending an X25519+Kyber keyshare, we will only advertise support for it, and send a non-post-quantum-secure X25519 keyshare in the first ClientHello.</p><p>If the origin does not support X25519+Kyber, then nothing changes. One might wonder: could merely advertising support for it trip up any origins? This used to be a real concern in the past, but luckily browsers have adopted a clever mechanism called <a href="https://datatracker.ietf.org/doc/html/rfc8701">GREASE</a>: they will send codepoints selected from unpredictable regions to make it hard to implement any software that could trip up on unknown codepoints.</p><p>If the origin does support X25519+Kyber, then it can use the HelloRetryRequest to request a post-quantum key agreement from us.</p><p>Things might still break then: for instance, a malfunctioning middlebox, load-balancer, or the server software itself might still trip over the large ClientHello with X25519+Kyber sent in response to the HelloRetryRequest.</p><p>Frankly, the HRR trick kicks the can down the road: we as an industry will need to fix broken hardware and software before we can enable post-quantum on every last connection. The important thing though is that those past mistakes will not hold us back from securing the majority of connections. Luckily, from our experience, breakage will not be common.</p><p>So, when you have flipped the switch on your origin server, and things do break against expectation, what could be the root cause?</p>
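<p>The behavior above can be summarized with a toy model; this is an illustrative sketch of the negotiation outcomes, not Cloudflare's actual implementation, and it assumes an origin that supports X25519+Kyber also prefers it:</p>

```python
PQ = "X25519Kyber768Draft00"
CLASSICAL = "X25519"

def negotiate(setting, origin_supports):
    """Toy model: returns (negotiated group, round trips of key agreement)."""
    if setting == "off" or PQ not in origin_supports:
        # Either we never offer PQ, or the origin ignores the advertised
        # codepoint and uses the X25519 keyshare from the first ClientHello.
        return CLASSICAL, 1
    if setting == "preferred":
        # The PQ keyshare is already in the first ClientHello.
        return PQ, 1
    # setting == "supported": we advertised PQ but sent an X25519 keyshare,
    # so a PQ-preferring origin must ask again via HelloRetryRequest.
    return PQ, 2

assert negotiate("supported", {PQ, CLASSICAL}) == (PQ, 2)
assert negotiate("preferred", {PQ, CLASSICAL}) == (PQ, 1)
assert negotiate("preferred", {CLASSICAL}) == (CLASSICAL, 1)
```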
    <div>
      <h3>Debugging and examples</h3>
      <a href="#debugging-and-examples">
        
      </a>
    </div>
    <p>It’s impossible to exhaustively list all bugs that could interfere with the post-quantum connection, but we’d like to share a few we’ve seen.</p><p>The first step is to figure out what pieces of hardware and software are involved in the connection. Rarely is it just the server: there could be a load-balancer, and even a humble router could be at fault.</p><p>One straightforward mistake is to conveniently assume the ClientHello is small by reserving only a small, say, 1,000-byte buffer.</p><p>A variation of this is where a server uses a single call to <a href="https://man7.org/linux/man-pages/man2/recv.2.html"><code>recv()</code></a> to read the ClientHello from the TCP connection. This works perfectly fine if it fits within one packet, but when the ClientHello is split over multiple packets, more calls are required.</p><p>Not all issues that we encountered relate directly to split ClientHello. For instance, servers using the Rust TLS library <a href="https://github.com/rustls/rustls">rustls</a> did <a href="https://github.com/rustls/rustls/issues/1373">not implement HelloRetryRequest correctly</a> before 0.21.7.</p><p>If you turned on post-quantum support for your origin, and hit issues, please do reach out: email <a>ask-research@cloudflare.com</a>.</p>
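<p>To illustrate the <code>recv()</code> pitfall, here is a minimal Python sketch of the correct read-loop pattern (our own illustration, not any particular vendor's code): keep reading until the full TLS record, whose payload length is in the 5-byte record header, has arrived.</p>

```python
import socket

def read_exact(sock, n):
    """Loop over recv(): a single call may return only part of the data
    when a large ClientHello is split across several TCP segments."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed before full record arrived")
        buf += chunk
    return buf

def read_tls_record(sock):
    # TLS record header: type (1 byte), version (2), payload length (2).
    header = read_exact(sock, 5)
    length = int.from_bytes(header[3:5], "big")
    return header + read_exact(sock, length)

# Demo: a 2,000-byte payload arrives whole even if the kernel hands it
# to us in pieces.
a, b = socket.socketpair()
record = b"\x16\x03\x01" + (2000).to_bytes(2, "big") + b"\x00" * 2000
b.sendall(record)
assert read_tls_record(a) == record
```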
    <div>
      <h2>Opting in and opting out</h2>
      <a href="#opting-in-and-opting-out">
        
      </a>
    </div>
    <p>Now that you know what might lie in wait for you, let’s cover how to configure the outbound connections of your zone. There are three settings, plus a default. The setting affects all outbound connections for your zone: to the origin server, but also for <code>fetch()</code> requests made by Workers on your zone.</p><table>
	<thead>
		<tr>
			<th><strong>Setting</strong></th>
			<th><strong>Meaning</strong></th>
		</tr>
	</thead>
	<tbody>
		<tr>
			<td><span><span>supported</span></span></td>
			<td><span>Advertise support for post-quantum key agreement, but send a classical keyshare in the first </span><em>ClientHello</em><span>. When the origin supports and prefers X25519+Kyber, a post-quantum connection will be established, but it incurs an extra roundtrip. This is the most compatible way to enable post-quantum.</span></td>
		</tr>
		<tr>
			<td><span><span>preferred</span></span></td>
			<td><span>Send a post-quantum keyshare in the first </span><em>ClientHello</em><span>. When the origin supports X25519+Kyber, a post-quantum connection will be established without an extra roundtrip. We continue advertising support for classical keyshares as well, so that origins that do not support X25519+Kyber will continue to function. This is the most performant way to enable post-quantum.</span></td>
		</tr>
		<tr>
			<td><span><span>off</span></span></td>
			<td><span>Do not send or advertise support for post-quantum key agreement to the origin.</span></td>
		</tr>
		<tr>
			<td><span>(default)</span></td>
			<td><span>Allow us to determine the best behavior for your zone. (More about that later.)</span></td>
		</tr>
	</tbody>
</table><p>The setting can be adjusted using the following API call:</p>
            <pre><code>curl --request PUT \
  --url https://api.cloudflare.com/client/v4/zones/(zone_id)/cache/origin_post_quantum_encryption \
  --header 'Content-Type: application/json' \
  --header 'Authorization: Bearer (API token)' \
  --data '{"value": "(setting)"}'</code></pre>
            <p>Here, the parameters are as follows.</p><table>
	<thead>
		<tr>
			<th><strong>Parameter</strong></th>
			<th><strong>Value</strong></th>
		</tr>
	</thead>
	<tbody>
		<tr>
			<td><span>setting</span></td>
			<td><span><span>supported</span>, <span>preferred</span>, or <span>off</span>, with meaning as described above</span></td>
		</tr>
		<tr>
			<td><span>zone_id</span></td>
			<td><span>Identifier of the zone to control. You can </span><a href="https://developers.cloudflare.com/fundamentals/setup/find-account-and-zone-ids/"><u>look up the zone_id</u></a><span> in the dashboard.</span></td>
		</tr>
		<tr>
			<td><span>API token</span></td>
			<td><span>Token used to authenticate you. You can </span><a href="https://developers.cloudflare.com/fundamentals/api/get-started/create-token/"><u>create one in the dashboard</u></a><span>. Use </span><em>create custom token</em><span> and under permissions select </span><em>zone → zone settings → edit</em><span>.</span></td>
		</tr>
	</tbody>
</table>
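<p>For scripting, an equivalent of the curl call can be built with Python's standard library; this sketch constructs the request without sending it (fill in your own zone ID and API token):</p>

```python
import json
import urllib.request

def pq_setting_request(zone_id: str, api_token: str, setting: str):
    """Build the PUT request equivalent to the curl call above."""
    url = (f"https://api.cloudflare.com/client/v4/zones/{zone_id}"
           "/cache/origin_post_quantum_encryption")
    return urllib.request.Request(
        url,
        data=json.dumps({"value": setting}).encode(),
        method="PUT",
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_token}",
        },
    )

req = pq_setting_request("your_zone_id", "your_api_token", "preferred")
# To actually apply the setting: urllib.request.urlopen(req)
```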
    <div>
      <h3>Testing whether your origin server is configured correctly</h3>
      <a href="#testing-whether-your-origin-server-is-configured-correctly">
        
      </a>
    </div>
    <p>If you set your zone to <code>preferred</code> mode, you only need to check support for the proper post-quantum key agreement with your origin server. This can be done with the <code>bssl</code> tool of <a href="https://github.com/google/boringssl">BoringSSL</a>:</p>
            <pre><code>$ bssl client -connect (your server):443 -curves X25519:X25519Kyber768Draft00
[...]
  ECDHE curve: X25519Kyber768Draft00
[...]</code></pre>
            <p>If you set your zone to <code>supported</code> mode, or if you wait for the gradual roll-out, you will need to make sure that your origin server prefers post-quantum key agreement even if we sent a classical keyshare in the initial <i>ClientHello</i>. This can be done with <a href="https://github.com/cloudflare/boringssl-pq">our fork of BoringSSL</a>:</p>
            <pre><code>$ git clone https://github.com/cloudflare/boringssl-pq
[...]
$ cd boringssl-pq &amp;&amp; cmake -B build &amp;&amp; make -C build
$ build/bssl client -connect (your server):443 -curves X25519:X25519Kyber768Draft00 -disable-second-keyshare
[...]
  ECDHE curve: X25519Kyber768Draft00
[...]</code></pre>
            
    <div>
      <h2>Scanning ahead to remove the extra roundtrip</h2>
      <a href="#scanning-ahead-to-remove-the-extra-roundtrip">
        
      </a>
    </div>
    <p>With the <i>HelloRetryRequest</i> trick today, we can safely advertise support for post-quantum key agreement to all origins. The downside is that for those origins that do support post-quantum key agreement, we’re incurring an extra roundtrip for the <i>HelloRetryRequest</i>, which hurts performance.</p><p>You can remove the roundtrip by configuring your zone as <code>preferred</code>, but we can do better: the best setting is the one you shouldn’t have to touch.</p><p>We have started scanning all active origins for support of post-quantum key agreement. Roughly every 24 hours, we will attempt a series of about ten TLS connections to your origin, to test support and preferences for the various key agreements.</p><p>Our preliminary results show that 0.5% of origins support a post-quantum connection. As expected, we found that a small fraction (&lt;0.34%) of all origins do not properly establish a connection when we send a post-quantum keyshare in the first ClientHello, which corresponds to the <code>preferred</code> setting. Unexpectedly, the vast majority of these origins do return a <i>HelloRetryRequest</i>, but fail after receiving the second ClientHello with a classical keyshare. We are investigating the exact causes of these failures, and will reach out to vendors to help resolve them.</p><p>Later this year, we will start using these scan results to determine the best setting for zones that haven’t been configured yet. That means that for those zones whose origins support it reliably, we will send a post-quantum keyshare directly, without the extra roundtrip.</p>
    <div>
      <h3>Also speeding up non post-quantum origins</h3>
      <a href="#also-speeding-up-non-post-quantum-origins">
        
      </a>
    </div>
    <p>The scanner pipeline we built will benefit more than just post-quantum origins. By default we send X25519, but not every origin supports or prefers X25519: we find that 4% of origin servers send us a <i>HelloRetryRequest</i> for other key agreements, such as P-384.</p>
<table>
<thead>
  <tr>
    <th><span>Key agreement</span></th>
    <th><span>Fraction supported</span></th>
    <th><span>Fraction preferred</span></th>
  </tr>
</thead>
<tbody>
  <tr>
    <td><span>X25519</span></td>
    <td><span>96%</span></td>
    <td><span>96%</span></td>
  </tr>
  <tr>
    <td><span>P-256</span></td>
    <td><span>97%</span></td>
    <td><span>0.6%</span></td>
  </tr>
  <tr>
    <td><span>P-384</span></td>
    <td><span>89%</span></td>
    <td><span>2.3%</span></td>
  </tr>
  <tr>
    <td><span>P-521</span></td>
    <td><span>82%</span></td>
    <td><span>0.1%</span></td>
  </tr>
  <tr>
    <td><span>X25519Kyber768Draft00</span></td>
    <td><span>0.5%</span></td>
    <td><span>0.5%</span></td>
  </tr>
</tbody>
</table><p>Also, later this year, we will use these scan results to send the most preferred keyshare directly to your origin, removing the extra roundtrip caused by a <i>HelloRetryRequest</i>.</p>
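As a hypothetical sketch (the names and data shapes here are ours for illustration, not Cloudflare’s actual pipeline), the per-origin decision could be as simple as: send the origin’s most-preferred group directly when the scan says it handles that reliably, and fall back to the default X25519 keyshare otherwise.

```python
# Hypothetical sketch of turning scan results into a first keyshare;
# the result format and names are illustrative, not Cloudflare's pipeline.
DEFAULT_KEYSHARE = "X25519"

def first_keyshare(scan_result):
    """scan_result: {"reliable": bool, "preferred": str}, or None if unscanned."""
    if scan_result and scan_result.get("reliable"):
        # The origin reliably accepts its preferred group in the first
        # ClientHello, so send that directly and skip the HelloRetryRequest.
        return scan_result["preferred"]
    return DEFAULT_KEYSHARE  # safe default for unscanned or flaky origins

print(first_keyshare({"reliable": True, "preferred": "P-384"}))  # P-384
print(first_keyshare(None))                                      # X25519
```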
    <div>
      <h2>Wrapping up</h2>
      <a href="#wrapping-up">
        
      </a>
    </div>
    <p>To mitigate the <i>store-now/decrypt-later</i> threat and ensure the Internet stays encrypted, the IT industry needs to work together to roll out post-quantum cryptography. We’re excited that today we’re rolling out support for post-quantum secure outbound connections: the connections between Cloudflare and origin servers.</p><p>We would love it if you would try enabling post-quantum key agreement on your origin. Please share your experiences, or reach out with any questions: <a href="mailto:ask-research@cloudflare.com">ask-research@cloudflare.com</a>.</p><p>To follow the latest developments in our deployment of post-quantum cryptography, and in client/server support, check out <a href="https://pq.cloudflareresearch.com/">pq.cloudflareresearch.com</a> and keep an eye on this blog.</p> ]]></content:encoded>
            <category><![CDATA[Birthday Week]]></category>
            <category><![CDATA[Post-Quantum]]></category>
            <category><![CDATA[Research]]></category>
            <category><![CDATA[Cryptography]]></category>
            <category><![CDATA[Security]]></category>
            <guid isPermaLink="false">7D9GZLWGiSDKHz84NFp54d</guid>
            <dc:creator>Suleman Ahmad</dc:creator>
            <dc:creator>Luke Valenta</dc:creator>
            <dc:creator>Bas Westerbaan</dc:creator>
        </item>
        <item>
            <title><![CDATA[Cloudflare’s commitment to the 2023 Summit for Democracy]]></title>
            <link>https://blog.cloudflare.com/cloudflare-commitment-to-the-2023-summit-for-democracy/</link>
            <pubDate>Tue, 28 Mar 2023 13:00:00 GMT</pubDate>
            <description><![CDATA[ Cloudflare is proud to participate in and contribute commitments to the 2023 Summit for Democracy because we believe that everyone should have access to an Internet that is faster, ]]></description>
            <content:encoded><![CDATA[ <p></p><p>On Tuesday, March 28, 2023, the US Government will launch the <a href="https://www.state.gov/summit-for-democracy-2023/">Summit for Democracy 2023</a>, following up on the inaugural <a href="https://www.state.gov/summit-for-democracy-2021/">Summit for Democracy 2021</a>. The Summit is co-hosted by the United States, Costa Rica, Zambia, the Netherlands, and South Korea. Cloudflare is proud to participate in and contribute commitments to the Summit because we believe that everyone should have access to an Internet that is faster, more reliable, more private, and more secure.  We work to ensure that the responsibility to respect human rights is embedded throughout our business functions. Cloudflare’s mission — to help build a better Internet — reflects a long-standing belief that we can help make the Internet better for everyone.</p><p>Our mission and core values dovetail with the Summit’s goals of strengthening democratic governance, respect for human rights and human rights defenders, and working in partnership to strengthen respect for these values. As we have <a href="/applying-human-rights-frameworks-to-our-approach-to-abuse/">written about before</a>, access to the Internet allows activists and human rights defenders to expose abuses across the globe, allows collective causes to grow into global movements, and provides the foundation for large-scale organizing for political and social change in ways that have never been possible before.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5rSJ6ucWgBy2LkLs1AU6uJ/6622ab0532e0a40defb39f373a3afda6/Screenshot-2023-03-28-at-11.34.32.png" />
            
            </figure>
    <div>
      <h3>What is the Summit for Democracy?</h3>
      <a href="#what-is-the-summit-for-democracy">
        
      </a>
    </div>
    <p>In December 2021, in an effort to respond to challenges to democracy worldwide, the United States held the first ever global Summit for Democracy. The Summit provided an opportunity to strengthen collaboration between democracies around the world and address common challenges from authoritarian threats.  The United States invited over 100 countries plus the President of the European Commission and the United Nations Secretary-General. The Summit focused on three key themes: (1) defending against authoritarianism; (2) addressing and fighting corruption; and (3) promoting respect for human rights, and gave participants an opportunity to announce commitments, reforms, and initiatives to defend democracy and human rights. The Summit was followed by a Year of Action, during which governments implemented their commitments to the Summit.</p><p>The 2023 Summit will focus more directly on partnering with the private sector to promote an affirmative vision for technology by countering the misuse of technology and shaping emerging technologies so that they strengthen democracy and human rights, which Cloudflare supports in theory and in practice.</p><p>The three-day Summit will highlight the importance of the private sector’s role in responding to challenges to democracy. The first day of the Summit is the <a href="https://www.state.gov/summit-for-democracy-2023/#March28">Thematic Day</a>, where Cabinet-level officials, the private sector and civil society organizations will spotlight key Summit themes. On the second day of the Summit, the <a href="https://www.state.gov/summit-for-democracy-2023/#March29">Plenary Day</a>, the five co-hosts will each host a high-level plenary session. 
On the final day of the Summit, <a href="https://www.state.gov/summit-for-democracy-2023/#March30">Co-Host Event Day</a>, each of the co-hosts will lead high-level regional conversations with partners from government, civil society, and the private sector.</p><p>Cloudflare will be participating in the Thematic Day and the Co-Host Event Day in Washington, DC, in addition to other related events.</p>
    <div>
      <h3>Cloudflare commitments</h3>
      <a href="#cloudflare-commitments">
        
      </a>
    </div>
    <p>In advance of the 2023 Summit, the United States issued a <a href="https://www.state.gov/wp-content/uploads/2023/02/Private-Sector-Call-to-Advance-Democracy-1.pdf">Call to Action</a> to the private sector to consider commitments that advance an affirmative agenda for democratic renewal. The United States encouraged the private sector to make commitments that align with the <a href="https://www.state.gov/presidential-initiative-for-democratic-renewal-drl-office-of-global-programs-efforts/">Presidential Initiative on Democratic Renewal</a>, the <a href="https://www.state.gov/declaration-for-the-future-of-the-internet">Declaration on the Future of the Internet</a>, and the Summit’s four objectives:</p><ul><li><p>Countering the misuse of technology</p></li><li><p>Fighting corruption</p></li><li><p>Protecting civic space</p></li><li><p>Advancing labor rights</p></li></ul><p>Cloudflare answered the United States’s call to action and made commitments to (1) help democratize post-quantum cryptography; (2) work with researchers to share data on Internet censorship and shutdowns; and (3) engage with civil society on Internet protocols and the application of privacy-enhancing technologies.</p>
    <div>
      <h3>Democratizing post-quantum cryptography by including it for free, by default</h3>
      <a href="#democratizing-post-quantum-cryptography-by-including-it-for-free-by-default">
        
      </a>
    </div>
    <p>At Cloudflare, we believe that to enhance privacy as a human right, the most advanced cryptography needs to be available to everyone, free of charge, forever. Cloudflare has committed to including post-quantum cryptography for free by default for all customers – including individual web developers, small businesses, non-profits, and governments. In particular, this will benefit at-risk groups using Cloudflare services, like humanitarian organizations, human rights defenders, and journalists through <a href="https://www.cloudflare.com/galileo/">Project Galileo</a>, as well as state and local government election websites through the <a href="https://www.cloudflare.com/athenian/">Athenian Project</a>, helping to secure their websites, APIs, cloud tools and remote employees against future threats.</p><p>We believe everyone should have access to the next era of <a href="https://www.cloudflare.com/learning/security/what-is-cyber-security/">cybersecurity standards</a> – instantly and for free. To that end, Cloudflare will also publish vendor-neutral roadmaps based on NIST standards to help businesses secure any connections that are not protected by Cloudflare. We hope that others will follow us in making their implementations of post-quantum cryptography free so that we can create a secure and private Internet without a “quantum” up-charge. More details about our commitment are available <a href="https://www.cloudflare.com/press-releases/2023/cloudflare-democratizes-post-quantum-cryptography-by-delivering-it-for-free/">here</a> and <a href="/post-quantum-crypto-should-be-free/">here</a>.</p>
    <div>
      <h3>Working with researchers to better document Internet censorship and shutdowns</h3>
      <a href="#working-with-researchers-to-better-document-internet-censorship-and-shutdowns">
        
      </a>
    </div>
    <p>Cloudflare commits to working with researchers to share data about Internet shutdowns and selective Internet traffic interference and to make the results of the analysis of this data public and accessible. The Cloudflare Network includes 285 locations in over 100 countries, interconnects with over 11,500 networks globally, and serves a significant portion of global Internet traffic. Cloudflare shares aggregated data on the Internet's patterns, insights, threats and trends with the public through <a href="https://radar.cloudflare.com/">Cloudflare Radar</a>, including providing alerts and data to help organizations like <a href="https://www.accessnow.org/">Access Now's</a> <a href="https://www.accessnow.org/campaign/keepiton/">KeepItOn</a> coalition, the <a href="https://freedomonlinecoalition.com/">Freedom Online Coalition</a>, the <a href="https://www.internetsociety.org/">Internet Society</a>, and <a href="https://ooni.org/">Open Observatory of Network Interference</a> (OONI) monitor Internet censorship and shutdowns around the world. Cloudflare commits to working with research partners to identify signatures associated with connection tampering and failures, which are believed to be caused primarily by active censorship and blocking. Cloudflare is well-positioned to observe and report on these signatures from a global perspective, and will provide access to its findings to support additional tampering detection efforts.</p>
    <div>
      <h3>Engaging with civil society on Internet protocols and the development and application of privacy-enhancing technologies</h3>
      <a href="#engaging-with-civil-society-on-internet-protocols-and-the-development-and-application-of-privacy-enhancing-technologies">
        
      </a>
    </div>
    <p>Cloudflare believes that meaningful consultation with civil society is a fundamental part of building an Internet that advances human rights. As Cloudflare works with Internet standards bodies and other Internet providers on the next generation of privacy-enhancing technologies and protocols, like protocols to <a href="/dns-encryption-explained/">encrypt Domain Name Service</a> records and <a href="/handshake-encryption-endgame-an-ech-update/">Encrypted Client Hello</a> (ECH), and privacy-enhancing technologies like OHTTP, we commit to direct engagement with civil society and human rights experts on standards and technologies that might have implications for human rights.</p><p>Cloudflare has long worked with industry partners, stakeholders, and international standards organizations to build a more private, secure, and resilient Internet for everyone. For example, Cloudflare has built privacy technologies into its network infrastructure, helped develop and deploy TLS 1.3 while helping lead QUIC and other Internet protocols, worked to improve transparency around routing and public key infrastructure (PKI), and operates a public DNS resolver that supports encryption protocols. Ensuring civil society and human rights experts are able to contribute and provide feedback as part of those efforts will make certain that future development and application of privacy-enhancing technologies and protocols are consistent with human rights principles and account for human rights impacts.</p><p>Our commitments to democratizing post-quantum cryptography, working with researchers on Internet censorship and shutdowns, and engaging with civil society on Internet protocols and the development and application of privacy-preserving technologies will help to secure access to a free, open, and interconnected Internet.</p>
    <div>
      <h3>Partnering to make the Summit a success</h3>
      <a href="#partnering-to-make-the-summit-a-success">
        
      </a>
    </div>
    <p>In the lead-up to the Summit, Cloudflare has been working in partnership with the US Department of State, the National Security Council, the US Agency for International Development (USAID), and various private sector and civil society partners to prepare for the Summit. As part of our involvement, we have also contributed to roundtables and discussions with the Center for Strategic and International Studies, GNI, the Design 4 Democracy Coalition, and the Freedom Online Coalition. Cloudflare is also participating in official meetings and side events, including at the Carnegie Endowment for International Peace and the Council on Foreign Relations.</p><p>In addition to the official Summit events, there is a wide range of events organized by civil society, which the <a href="https://accountabilitylab.org/">Accountability Lab</a> has created a <a href="https://summit4democracy.org/">website</a> to highlight. Separately, on Monday, March 27, the <a href="https://globaldemocracycoalition.org/">Global Democracy Coalition</a> convened a <a href="https://globaldemocracycoalition.org/event/partners-for-democracy-day/">Partners Day</a> to organize civil society and other non-governmental events. Many of these events are being held by some of our Galileo partners, like the National Democratic Institute, the International Republican Institute, Freedom House, and the Council of Europe.</p><p>Cloudflare is grateful for all of the hard work that our partners in government, civil society, and the private sector have done over the past few months to make this Summit a success. At a time when we are seeing increasing challenges to democracy and the struggle for human rights around the world, maintaining a secure, open Internet is critical. Cloudflare is proud of our participation in the Summit and of the commitments we are making to help advance human rights. We look forward to continuing our engagement in the Summit partnership to fulfill our mission to help build a better Internet.</p> ]]></content:encoded>
            <category><![CDATA[USA]]></category>
            <category><![CDATA[Human Rights]]></category>
            <category><![CDATA[Post-Quantum]]></category>
            <category><![CDATA[Internet Shutdown]]></category>
            <category><![CDATA[Protocols]]></category>
            <category><![CDATA[Privacy]]></category>
            <category><![CDATA[Project Galileo]]></category>
            <category><![CDATA[Policy & Legal]]></category>
            <guid isPermaLink="false">5NzVC7zRmPw0EB11aLk3Ou</guid>
            <dc:creator>Zaid Zaid</dc:creator>
            <dc:creator>Patrick Day</dc:creator>
        </item>
        <item>
            <title><![CDATA[Post-quantum crypto should be free, so we’re including it for free, forever]]></title>
            <link>https://blog.cloudflare.com/post-quantum-crypto-should-be-free/</link>
            <pubDate>Thu, 16 Mar 2023 13:00:00 GMT</pubDate>
            <description><![CDATA[ Cloudflare makes the most advanced cryptography free for everyone, and it’s in beta today ]]></description>
            <content:encoded><![CDATA[ <p></p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2hf3qqJJpxyJXlzymfKi9Y/5131867168fc9c607b435eb5d5dc0eb3/image1-31.png" />
            
            </figure><p>At Cloudflare, helping to build a better Internet is not just a catchy saying. We are committed to the long-term process of standards development. We love the work of pushing the fundamental technology of the Internet forward in ways that are accessible to everyone. Today we are adding even more substance to that commitment. One of our core beliefs is that privacy is a human right. We believe that to achieve that right, the most advanced cryptography needs to be available to everyone, free of charge, forever. Today, we are announcing that our implementations of <a href="https://www.cloudflare.com/learning/ssl/quantum/what-is-post-quantum-cryptography/">post-quantum cryptography</a> will meet that standard: available to everyone, and included free of charge, forever.</p><p>We have a proud history of taking paid encryption products and launching them to the Internet at scale for free, even at the cost of short- and long-term revenue, because it’s the right thing to do. <a href="/introducing-universal-ssl/">In 2014, we made SSL free for every Cloudflare customer with Universal SSL</a>. As we make our implementations of post-quantum cryptography free forever today, we do it in the spirit of that first major announcement:</p><blockquote><p><i>“Having cutting-edge encryption may not seem important to a small blog, but it is critical to advancing the encrypted-by-default future of the Internet. Every byte, however seemingly mundane, that flows encrypted across the Internet makes it more difficult for those who wish to intercept, throttle, or censor the web. In other words, ensuring your personal blog is available over HTTPS makes it more likely that a human rights organization or social media service or independent journalist will be accessible around the world. 
Together we can do great things.”</i></p></blockquote><p>We hope that others will follow us in making their implementations of <a href="https://www.cloudflare.com/learning/ssl/quantum/what-is-post-quantum-cryptography/">PQC</a> free as well so that we can create a secure and private Internet without a “quantum” up-charge.</p><p>The Internet has matured since the 1990s and the launch of SSL. What was once an experimental frontier has turned into the underlying fabric of modern society. It runs in our most critical infrastructure like power systems, hospitals, airports, and banks. We trust it with our most precious memories. We trust it with our secrets. That’s why the Internet needs to be private by default. It needs to be secure by default. It’s why we’re committed to ensuring that anyone and everyone can achieve post-quantum security for free, as well as start deploying it at scale today.</p><p>Our work on post-quantum crypto is driven by the thesis that quantum computers that can break conventional cryptography create a similar problem to the Year 2000 bug. We know there is going to be a problem in the future that could have catastrophic consequences for users, businesses, and even nation states. The difference this time is that we don’t know the date and time that this break in the paradigm of how computers operate will occur. We need to prepare today to be ready for this threat.</p><p>To that end, we have been preparing for this transition since 2018. At that time we were concerned about the implementation problems other large protocol transitions, like the move to TLS 1.3, had caused our customers, and wanted to get ahead of it. Cloudflare Research over the last few years has become a leader and champion of the idea that PQC security wasn’t an afterthought for tomorrow, but a real problem that needed to be solved today. 
We have collaborated with industry partners like <a href="/the-tls-post-quantum-experiment/">Google</a> and Mozilla, contributed to development through participation in the IETF, and even launched an <a href="https://github.com/cloudflare/circl">open source experimental cryptography suite</a> to help move the needle. We have tried hard to work with everyone who wanted to be a part of the process and to show our work along the way.</p><p>As we have worked with our partners in both industry and academia to help prepare us and the Internet for a post-quantum future, we have become dismayed by an emerging trend. There are a growing number of vendors out there that want to cash in on the legitimate fear that nervous executives, privacy advocates, and government leaders have about quantum computing breaking traditional encryption. These vendors offer vague solutions based on unproven technologies like “Quantum Key Distribution” or “Post Quantum Security” libraries that package non-standard algorithms that haven’t been through public review, sold with exorbitant price tags, much as RSA was in the 1990s. They often love to throw around phrases like “AI” and “Post Quantum” without really showing their work on how any of their systems actually function. Security and privacy are table stakes in the modern Internet, and no one should be charged just to get the baseline protection needed in our contemporary world.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7Bs4exeLNWYuSKvgC9FoAl/91a3a5284a5cf93c79649a2440e0b473/image3-24.png" />
            
            </figure>
    <div>
      <h3>Launch your PQC transition today</h3>
      <a href="#launch-your-pqc-transition-today">
        
      </a>
    </div>
    <p>Testing and adopting post-quantum cryptography in modern networks doesn’t have to be hard! In fact, Cloudflare customers can test PQC in their systems today, as we describe later in this post.</p><p>Currently, we support <a href="https://pq-crystals.org/kyber/index.shtml">Kyber</a> for key agreement on any traffic that uses TLS 1.3, including <a href="https://www.cloudflare.com/learning/performance/what-is-http3/">HTTP/3</a>. (<a href="/post-quantum-for-all/">If you want a deep dive on our implementation, check out our blog from last fall announcing the beta.</a>) To help you test your traffic to Cloudflare domains with these new key agreement methods, we have open-sourced forks of <a href="https://github.com/cloudflare/boringssl-pq">BoringSSL</a>, <a href="https://github.com/cloudflare/go">Go</a> and <a href="https://github.com/cloudflare/qtls-pq">quic-go</a>. For BoringSSL and Go, check out <a href="/experiment-with-pq/#boringssl">the sample code here</a>.</p><p>If you use <a href="/post-quantum-tunnel/">Tunnels with cloudflared</a>, then upgrading to PQC is super simple. Make sure you’re on at least version <a href="https://developers.cloudflare.com/cloudflare-one/connections/connect-apps/install-and-setup/tunnel-guide/local/">2022.9.1</a> and simply run <code>cloudflared --post-quantum</code>.</p><p>After testing out how Cloudflare can help you implement PQC in your networks, it’s time to start preparing for the transition to PQC in all of your systems. This first step of inventorying and identifying is critical to a smooth rollout. We know firsthand, since we have undertaken an extensive evaluation of all of our systems to earn our <a href="https://www.cloudflare.com/learning/privacy/what-is-fedramp/">FedRAMP Authorization</a> certifications, and we are doing a similar evaluation again to transition all of our internal systems to PQC.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7nudyLXlDPKH38ajGe8gcz/b72ea0c5f415840fe9d7246dee00da0d/image2-16.png" />
            
            </figure>
    <div>
      <h3>How we are setting ourselves up for the future of quantum computing</h3>
      <a href="#how-we-are-setting-ourselves-up-for-the-future-of-quantum-computing">
        
      </a>
    </div>
    <p>Here’s a sneak preview of the path that we are developing right now to fully secure Cloudflare itself against the cryptographic threat of quantum computers. We can break that path down into three parts: internal systems, <a href="https://www.cloudflare.com/learning/security/glossary/what-is-zero-trust/">zero trust</a>, and open source contributions.</p><p>The first part of our path to full PQC adoption at Cloudflare is around all of our connections. The connection between you and Cloudflare is just one part of the larger path of the connection. Inside our internal systems we are implementing two significant upgrades in 2023 to ensure that they are PQC secure as well.</p><p>The first is that we use BoringSSL for a substantial number of connections. We currently use our fork, and we are excited that upstream support for Kyber is underway. Any additional internal connections that use a different cryptographic system are being upgraded as well. The second major upgrade we are making is to shift the remaining internal connections that use TLS 1.2 to TLS 1.3. This combination of Kyber and TLS 1.3 will make our internal connections faster and more secure, even though we use a hybrid of classical and post-quantum secure cryptography. It’s a speed and security win-win. <a href="/the-tls-post-quantum-experiment/">And we proved this powerhouse combination would provide that speed and security over three and a half years ago thanks to the groundbreaking work of Cloudflare Research and Google.</a></p><p>The next part of that path is all about using PQC and zero trust as allies together. As we think about the security posture of tomorrow being based around post-quantum cryptography, we have to look at the other critical security paradigm being implemented today: zero trust. 
Today, the zero trust vendor landscape is littered with products that fail to support common protocols like IPv6 and TLS 1.2, let alone the next generation of protocols like TLS 1.3 and QUIC that enable PQC. So many middleboxes struggle under the load of today’s modern protocols. They artificially <a href="/monsters-in-the-middleboxes/">downgrade connections and break end user security</a> all in the name of inspecting traffic because they don’t have a better solution. Organizations big and small struggled to support customers who wanted the highest possible performance and security, while also keeping their businesses safe, because of the resistance of these vendors to adapt to modern standards. We do not want to repeat the mistakes of the past. We are planning and evaluating the needed upgrades to all of our zero trust products to support PQC out of the box. We believe that zero trust and post-quantum cryptography are not at odds with one another, but rather together are the future standard of security.</p><p>Finally, it’s not enough for us to do this for ourselves and for our customers. The Internet is only as strong as the weakest links in the connection chains that network us all together. Every connection on the Internet needs the strongest possible encryption so that businesses can be secure, and everyday users can be assured of their privacy. We believe that this core technology should be vendor agnostic and open to everyone. To help make that happen, the final part of the path is all about contributing to open source projects. We have already been focusing on releases of <a href="https://github.com/cloudflare/circl">CIRCL</a> (Cloudflare Interoperable, Reusable Cryptographic Library), a collection of cryptographic primitives written in Go. 
The goal of this library is to be used as a tool for experimental deployment of post-quantum cryptographic algorithms.</p><p>Later this year, we will publish as open source a set of easy-to-adopt, vendor-neutral roadmaps to help you upgrade your own systems to be secure against the future today. We want the security and privacy created by post-quantum crypto to be accessible and free for everyone. We will also keep writing extensively about our post-quantum journey. To learn more about how you can turn on PQC today, and how we have been building post-quantum cryptography at Cloudflare, please check out these resources:</p><ul><li><p><a href="/post-quantum-for-all/">Defending against future threats: Cloudflare goes post-quantum</a></p></li><li><p><a href="/post-quantum-tunnel/">Introducing post-quantum Cloudflare Tunnel</a></p></li><li><p><a href="/nist-post-quantum-surprise/">NIST’s pleasant post-quantum surprise</a></p></li><li><p><a href="/post-quantumify-cloudflare/">Post-quantumify internal services: Logfwrdr, Tunnel, and gokeyless</a></p></li><li><p><a href="/post-quantum-taxonomy/">The post-quantum state: a taxonomy of challenges</a></p></li><li><p><a href="/the-tls-post-quantum-experiment/">The TLS Post-Quantum Experiment</a></p></li></ul><p></p> ]]></content:encoded>
            <category><![CDATA[Security Week]]></category>
            <category><![CDATA[Post-Quantum]]></category>
            <category><![CDATA[Research]]></category>
            <guid isPermaLink="false">07cQYHrLiDpSMFd2bEEzf</guid>
            <dc:creator>Wesley Evans</dc:creator>
            <dc:creator>Bas Westerbaan</dc:creator>
        </item>
        <item>
            <title><![CDATA[No, AI did not break post-quantum cryptography]]></title>
            <link>https://blog.cloudflare.com/kyber-isnt-broken/</link>
            <pubDate>Thu, 16 Mar 2023 13:00:00 GMT</pubDate>
            <description><![CDATA[ The recent news reports of AI cracking post-quantum cryptography are greatly exaggerated. In this blog, we take a deep dive into the world of side-channel attacks and how AI has been used for more than a decade already to aid it ]]></description>
            <content:encoded><![CDATA[ <p></p><p><a href="https://www.securityweek.com/ai-helps-crack-a-nist-recommended-post-quantum-encryption-algorithm/">News coverage</a> of a recent paper caused a bit of a stir with this headline: “<a href="https://www.securityweek.com/ai-helps-crack-a-nist-recommended-post-quantum-encryption-algorithm/">AI Helps Crack NIST-Recommended Post-Quantum Encryption Algorithm</a>”. The news article claimed that <a href="https://pq-crystals.org/kyber/index.shtml">Kyber</a>, the encryption algorithm in question, <a href="/post-quantum-for-all/">which we have deployed world-wide</a>, had been “broken.” Even more dramatically, the news article claimed that “the revolutionary aspect of the research was to apply deep learning analysis to side-channel differential analysis”, which seems aimed at scaring the reader into wondering what <a href="https://www.cloudflare.com/learning/ai/what-is-artificial-intelligence/">Artificial Intelligence (AI)</a> will break next.</p><p>Reporting on the paper has been wildly inaccurate: <b>Kyber is not broken</b>, and AI has been used for more than a decade now to aid side-channel attacks. To be crystal clear: our concern is with the news reporting around the paper, not the quality of the paper itself. In this blog post, we will explain how AI is actually helpful in cryptanalysis and dive into the <a href="https://eprint.iacr.org/2022/1713">paper</a> by Dubrova, Ngo, and Gärtner (DNG), which has been misrepresented by the news coverage. We’re honored to have Prof. Dr. <a href="https://www.cs.ru.nl/~lejla/">Lejla Batina</a> and Dr. <a href="https://www.ru.nl/en/people/picek-s">Stjepan Picek</a>, world-renowned experts in the field of applying AI to side-channel attacks, join us on this blog.</p><p>We start with some background, first on side-channel attacks and then on Kyber, before we dive into the paper.</p>
    <div>
      <h2>Breaking cryptography</h2>
      <a href="#breaking-cryptography">
        
      </a>
    </div>
    <p>When one thinks of breaking cryptography, one imagines a room full of mathematicians puzzling over minute patterns in intercepted messages, aided by giant computers, until they figure out the key. Famously, in World War II the Nazis’ Enigma cipher was completely broken in this way, allowing the Allied forces to read their communications.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/oJxQ8srTTh2HafYmNbKaP/c07d942fdda653c963fd047756e8b6f4/image7-5.png" />
            
            </figure><p>It’s exceedingly rare for modern established cryptography to get broken head-on in this way. The last catastrophically broken cipher was RC4, designed in 1987, while AES, designed in 1998, stands proud with barely a scratch. The last big break of a cryptographic hash was on SHA-1, designed in 1995, while SHA-2, published in 2001, remains untouched in practice.</p><p>So what to do if you can’t break the cryptography head-on? Well, you get clever.</p>
    <div>
      <h2>Side-channel attacks</h2>
      <a href="#side-channel-attacks">
        
      </a>
    </div>
    <p>Can you guess the pin code for this gate?</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1UhJuQgQaoWjZ4JHSAHff6/0bebc7553f411c4246a750e39b1f627f/image4-1.jpg" />
            
            </figure><p>You can clearly see that some of the keys are more worn than the others, suggesting heavy use. This observation gives us some insight into the correct pin, namely which digits it contains. But the correct order is not immediately clear. The code might be 1580, 8510, or even 115085, but guessing is still a lot easier than trying every possible pin code. This is an example of a <i>side-channel attack</i>. Using the security feature (entering the PIN) has an unintended consequence (abrading the paint) that leaks information.</p><p>There are many different types of side channels, and which one you should worry about depends on the context. For instance, the sounds your keyboard makes as you type <a href="https://dl.acm.org/doi/abs/10.1145/1609956.1609959">leaks what you write</a>, but you should not worry about that if no one is listening in.</p>
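<p>To get a feel for how much the worn keys help, here is a quick back-of-the-envelope calculation in Python (assuming, for illustration, a four-digit PIN that uses each worn digit exactly once):</p>

```python
from itertools import permutations

worn = "1580"  # the digits revealed by the worn paint

# Candidate codes if each worn digit is used exactly once.
candidates = {"".join(p) for p in permutations(worn)}

print(len(candidates))  # 24 orderings to try
print(10 ** 4)          # versus 10,000 possible four-digit codes
```

<p>Longer codes or repeated digits grow the search, but it stays far smaller than the full key space.</p>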
    <div>
      <h3>Remote timing side channel</h3>
      <a href="#remote-timing-side-channel">
        
      </a>
    </div>
    <p>When writing cryptography in software, one of the best-known side channels is the time it takes for an algorithm to run. For example, let’s take the classic example of creating an RSA signature. Grossly simplified, to sign a message <i>m</i> with private key <i>d</i>, we compute the signature <i>s</i> as m<sup>d</sup> (mod n). Computing such a large power of a big number directly is expensive, but luckily, because we’re doing modular arithmetic, there is the <a href="https://youtu.be/cbGB__V8MNk">square-and-multiply</a> trick. Here is a naive implementation in pseudocode:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4ugGnB4EmViLt8e9E97YyJ/d204b867a50f5470d0ae680f4eea47fd/Screenshot-2023-03-16-at-12.35.40.png" />
            
            </figure><p>The algorithm loops over the bits of the secret key, and does a <i>multiply</i> step only if the current bit is a 1. Clearly, the runtime depends on the secret key. Not great, but if the attacker can only time the full run, then they only learn the number of 1s in the secret key. The typical catastrophic timing attack against RSA instead hides behind the “<b>mod</b> n”. In a naive implementation, this modular reduction is slower if the number being reduced is larger than or equal to <i>n</i>. This <a href="https://www.cs.sjsu.edu/faculty/stamp/students/article.html#:~:text=Timing%20attacks%20are%20a%20form,performed%20or%20the%20media%20used.">allows</a> an attacker to send specially crafted messages to tease out the secret key bit by bit, and similar attacks are <a href="https://crypto.stanford.edu/~dabo/papers/ssl-timing.pdf">surprisingly practical</a>.</p><p>Because of this, the mantra is: cryptography should run in “constant time”. This means that the runtime does not depend on any secret information. In our example, to remove the first timing issue, one would replace the if-statement with something equivalent to:</p>
            <pre><code>	s = ((s * powerOfM) mod n) * bit(d, i) + s * (1 - bit(d, i))</code></pre>
            <p>This ensures that the multiplication is always done. Similar countermeasures prevent practically all remote timing attacks.</p>
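<p>To make this concrete, here is a minimal Python sketch of both variants (our own illustrative code, not a production implementation; Python itself makes no constant-time guarantees, so this only shows the structure of the countermeasure):</p>

```python
def sign_naive(m, d, n):
    """Naive square-and-multiply: runtime depends on the bits of d."""
    s = 1
    power_of_m = m % n
    while d:
        if d & 1:                # leaks: multiply happens only for 1-bits
            s = (s * power_of_m) % n
        power_of_m = (power_of_m * power_of_m) % n
        d >>= 1
    return s

def sign_selected(m, d, n):
    """Same computation, but the multiply is always performed and the
    result is chosen arithmetically, mirroring the snippet above.
    (The loop count still depends on the bit length of d; a hardened
    implementation iterates a fixed number of times.)"""
    s = 1
    power_of_m = m % n
    while d:
        bit = d & 1
        s = ((s * power_of_m) % n) * bit + s * (1 - bit)
        power_of_m = (power_of_m * power_of_m) % n
        d >>= 1
    return s
```

<p>Real libraries go further still, with fixed iteration counts and ladder-style exponentiation, but the arithmetic select captures the core idea.</p>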
    <div>
      <h3>Power side-channel</h3>
      <a href="#power-side-channel">
        
      </a>
    </div>
    <p>The story is quite different for power side-channel attacks. Again, the classic example is RSA signatures. If we hook up an oscilloscope to a smartcard that uses the naive algorithm from before, and measure the power usage while it signs, we can read off the private key by eye:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5BgQG934ghuxV9D6JoSF0H/14c882de669ff23990f96a1281304e3f/image1-33.png" />
            
            </figure><p>Even if we use a constant-time implementation, there are still minute changes in power usage that can be detected. The underlying issue is that hardware gates that switch use more power than those that don’t. For instance, computing 127 + 64 takes more energy than 64 + 64.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3U58Xc7isXGpdg2bHxhTRA/c8b3ea9d01d38f9af644a2cd51079caf/image3-25.png" />
            
            </figure><p><i>127+64 and 64+64 in binary. There are more switched bits in the first.</i></p><p><b>Masking</b></p><p>A common countermeasure against power side-channel leakage is <i>masking</i>. This means that before using the secret information, it is split randomly into <i>shares</i>. Then, the brunt of the computation is done on the shares, which are finally recombined.</p><p>In the case of RSA, before creating a new signature, one can generate a random <i>r</i> and compute m<sup>d-r</sup> (mod n) and m<sup>r</sup> (mod n) separately. From these, the final signature m<sup>d</sup> (mod n) can be computed with some extra care.</p><p>Masking is not a perfect defense. The parts where shares are created or recombined into the final value are especially vulnerable. It does make it harder for the attacker: they will need to collect more power traces to cut through the noise. In our example we used two shares, but we could bump that up even higher. There is a trade-off between power side-channel resistance and implementation cost.</p><p>One of the challenging parts in the field is to estimate how much secret information is actually leaked through the traces, and how to extract it. Here machine learning enters the picture.</p>
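<p>As a sketch of the two-share idea in Python (our own illustrative code; a hardened implementation would also randomize how the shares are recombined):</p>

```python
import secrets

def masked_rsa_sign(m, d, n):
    """Compute m^d (mod n) via two additive shares of the exponent,
    so the full private exponent d is never used in one operation."""
    r = secrets.randbelow(d) if d > 1 else 0  # illustrative share choice
    part1 = pow(m, d - r, n)                  # share 1: m^(d-r)
    part2 = pow(m, r, n)                      # share 2: m^r
    return (part1 * part2) % n                # m^(d-r) * m^r = m^d (mod n)
```

<p>Each signature uses a fresh <i>r</i>, so an attacker averaging traces across many signatures no longer sees one fixed exponent bit pattern.</p>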
    <div>
      <h3>Machine learning: extracting the key from the traces</h3>
      <a href="#machine-learning-extracting-the-key-from-the-traces">
        
      </a>
    </div>
    <p><a href="https://www.cloudflare.com/learning/ai/what-is-machine-learning/">Machine learning</a>, of which deep learning is a part, represents the capability of a system to acquire its knowledge by extracting patterns from data —  in this case, the secrets from the power traces. Machine learning algorithms can be divided into several categories based on their learning style. The most popular machine learning algorithms in side-channel attacks follow the supervised learning approach. In supervised learning, there are two phases: 1) training, where a machine learning model is trained based on known labeled examples (e.g., side-channel measurements where we know the key) and 2) testing, where, based on the trained model and additional side-channel measurements (now, with an unknown key), the attacker guesses the secret key. A common depiction of such attacks is given in the figure below.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4V2bcSsbSJtCmorq8HaUzc/1545a94c79aef5d72b6acd8573cf5f07/image5-3.png" />
            
            </figure><p>While the threat model may sound counterintuitive, it is actually not difficult to imagine that the attacker will have access to (and control of) a device similar to the one being attacked.</p><p>In side-channel analysis, the attacks following those two phases (training and testing) are called profiling attacks.</p><p>Profiling attacks are not new. The first such attack, called the <a href="https://link.springer.com/chapter/10.1007/3-540-36400-5_3">template attack</a>, appeared in 2002. Diverse <a href="https://link.springer.com/article/10.1007/s13389-011-0023-x">machine learning techniques</a> have been used since around 2010, all reporting good results and the ability to break various targets. The big breakthrough came in 2016, when the side-channel community started using <a href="https://www.cloudflare.com/learning/ai/what-is-deep-learning/">deep learning</a>. It greatly increased the effectiveness of power side-channel attacks against both symmetric-key and public-key cryptography, even if the targets were protected with, for instance, masking or some other countermeasures. To be clear: it doesn’t magically figure out the key, but it gets much better at extracting the leaked bits from a smaller number of power traces.</p><p>While machine learning-based side-channel attacks are powerful, they have limitations. Carefully implemented countermeasures make the attacks more difficult to conduct. Finding a good machine learning model that can break a target can be far from trivial: this phase, commonly called tuning, can last weeks on powerful clusters.</p><p>What will the future bring for machine learning/AI in side-channel analysis? Counterintuitively, we would like to see more powerful and easier-to-use attacks. You’d think that would make us worse off, but on the contrary, it will allow us to better estimate how much actual information is leaked by a device. We also hope that we will be able to better understand why certain attacks work (or not), so that more cost-effective countermeasures can be developed. As such, the future for AI in side-channel analysis is bright, especially for security evaluators, but we are still far from being able to break most of the targets in real-world applications.</p>
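<p>The two-phase profiling idea can be made concrete with a toy example in Python (entirely our own construction, not from the paper): a simulated device leaks the Hamming weight of a secret nibble plus noise, and the simplest profiled attack, a template of per-value mean leakages, narrows the secret down. Much like the worn keypad, a pure Hamming-weight leak only reveals the weight class, not the exact value; real template attacks model whole traces with covariance:</p>

```python
import random

def leak(value, rng):
    """Toy power model: Hamming weight of the value plus Gaussian noise."""
    return bin(value).count("1") + rng.gauss(0, 0.3)

rng = random.Random(1)

# Phase 1 (training): profile a device we control, with known values.
templates = {v: sum(leak(v, rng) for _ in range(200)) / 200
             for v in range(16)}

# Phase 2 (attack): observe traces of an unknown secret, then pick the
# value whose template mean is closest to the observed mean.
secret = 11  # 0b1011, Hamming weight 3
mean_obs = sum(leak(secret, rng) for _ in range(50)) / 50
guess = min(range(16), key=lambda v: abs(templates[v] - mean_obs))

# The guess lands in the correct Hamming-weight class of the secret.
```

<p>Deep learning replaces the hand-built template with a learned model, which is what makes it so effective against noisy, masked implementations.</p>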
    <div>
      <h2>Kyber</h2>
      <a href="#kyber">
        
      </a>
    </div>
    <p>Kyber is a <a href="/the-quantum-menace/">post-quantum</a> (PQ) key encapsulation mechanism (KEM). After a six-year worldwide competition, the <a href="https://www.nist.gov/">National Institute of Standards and Technology</a> (NIST) <a href="/nist-post-quantum-surprise/">selected</a> Kyber as the post-quantum key agreement they will standardize. The goal of a key agreement is for two parties that haven’t talked to each other before to agree securely on a <i>shared key</i> they can use for symmetric encryption (such as <a href="/it-takes-two-to-chacha-poly/">Chacha20Poly1305</a>). As a KEM, it works slightly differently, and with different terminology, than a traditional <a href="https://developers.cloudflare.com/internet/protocols/tls#ephemeral-diffie-hellman-handshake">Diffie–Hellman</a> key agreement (such as X25519):</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/Dbor76rFhzvg1PhuCI0xT/9bbd95bd542a065a679b5b27e2c188cb/image9-3.png" />
            
            </figure><p>When connecting to a website, the client first generates a new <i>ephemeral</i> keypair that consists of a <i>private</i> and <i>public key</i>. It sends the public key to the server. The server then <i>encapsulates</i> to that public key, which gives it a random <i>shared key</i>, which it keeps, and a ciphertext (in which the shared key is hidden), which it returns to the client. The client can then use its private key to <i>decapsulate</i> the shared key from the ciphertext. Now the server and client can communicate with each other using the shared key.</p><p>Key agreement is particularly important to secure against attacks by quantum computers. The reason is that an attacker can store traffic today, and crack the key agreement in the future, revealing the shared key and all communication subsequently encrypted with it. That is why we have already <a href="/post-quantum-for-all/">deployed support</a> for Kyber across our network.</p>
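<p>The flow above maps onto a three-function API: keygen, encapsulate, decapsulate. Here is an illustrative Python sketch using plain Diffie–Hellman over a toy prime as a stand-in for Kyber (our own construction with deliberately simplistic parameters; it demonstrates the KEM shape, not a secure instantiation):</p>

```python
import hashlib
import secrets

# Toy parameters: a Mersenne prime and a small base. A real deployment
# uses Kyber itself (or at least a standardized group); this only shows
# the keygen / encapsulate / decapsulate interface.
P = 2 ** 127 - 1
G = 5

def keygen():
    """Client: create an ephemeral keypair."""
    sk = secrets.randbelow(P - 2) + 1
    return sk, pow(G, sk, P)

def encapsulate(pk):
    """Server: derive a random shared key plus a ciphertext hiding it."""
    eph = secrets.randbelow(P - 2) + 1
    ciphertext = pow(G, eph, P)
    shared = hashlib.sha256(pow(pk, eph, P).to_bytes(16, "big")).digest()
    return shared, ciphertext

def decapsulate(sk, ciphertext):
    """Client: recover the same shared key from the ciphertext."""
    return hashlib.sha256(pow(ciphertext, sk, P).to_bytes(16, "big")).digest()
```

<p>Unlike this toy, Kyber’s decapsulation additionally re-encrypts the recovered value and compares the result against the received ciphertext (the Fujisaki–Okamoto transform) to detect tampering; that re-encryption step is precisely where the DNG paper mounts its attack.</p>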
    <div>
      <h2>The DNG paper</h2>
      <a href="#the-dng-paper">
        
      </a>
    </div>
    <p>With all the background under our belt, we’re ready to take a look at the <a href="https://eprint.iacr.org/2022/1713.pdf">DNG paper</a>. The authors perform a power side-channel attack on their own masked implementation of Kyber with six shares.</p>
    <div>
      <h3>Point of attack</h3>
      <a href="#point-of-attack">
        
      </a>
    </div>
    <p>They attack the <i>decapsulation</i> step. In the decapsulation step, after the shared key is extracted, it’s encrypted again, and the result is compared against the original ciphertext to detect tampering. For this <i>re-encryption</i> step, the precursor of the shared key—let’s call it the secret—is encoded bit-by-bit into a polynomial. To be precise, the 256-bit <i>secret</i> needs to be converted to a polynomial with 256 coefficients <i>modulo</i> q=3329, where the i<sup>th</sup> coefficient is (q+1)/2 if the i<sup>th</sup> bit of the secret is 1 and zero otherwise.</p><p>This function sounds simple enough, but creating a masked version is tricky. The rub is that the natural way to create shares of the <i>secret</i> is to have shares that <i>xor</i> together to the secret, while the natural way to share polynomials is to have shares that <i>add</i> together to the intended polynomial.</p><p>This is the two-shares implementation of the conversion that the DNG paper attacks:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7GjnFQyhXsLRqHJqgypOpx/c93d159488b4ba8ef043e1049eb21649/image8.png" />
            
            </figure><p>The code loops over the bits of the two shares. For each bit, it creates a mask that’s <code>0xffff</code> if the bit was 1 and 0 otherwise. Then this mask is used to add (q+1)/2 to the polynomial share where appropriate. Processing a 1 will use a bit more power. It doesn’t take an AI to figure out that this will be a leaky function. In fact, this pattern was pointed out to be weak <a href="https://eprint.iacr.org/2016/923">back in 2016</a>, and explicitly mentioned to be a risk for masked Kyber <a href="https://eprint.iacr.org/2016/923">in 2020</a>. Apropos, one way to mitigate this is to process multiple bits at once — for the state of the art, tune into <a href="https://csrc.nist.gov/Projects/post-quantum-cryptography/workshops-and-timeline/pqc-seminars">April 2023’s NIST PQC seminar</a>. For the moment, let’s allow the paper its weak target.</p><p>The authors do not claim any fundamentally new attack here. Instead, they improve the effectiveness of the attack in two ways: in how they train the <a href="https://www.cloudflare.com/learning/ai/what-is-neural-network/">neural network</a>, and in how they use multiple traces more effectively by changing the ciphertext sent. So, what did they achieve?</p>
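<p>For intuition before looking at the results, the unmasked version of this conversion, with the same mask trick, can be transcribed into a few lines of Python (our own illustrative transcription, not the paper’s code):</p>

```python
Q = 3329  # Kyber's modulus

def encode_bits(secret_bits):
    """Encode bits into polynomial coefficients: (Q+1)//2 for a 1-bit,
    0 for a 0-bit. The all-ones/all-zeros mask makes the code branch-free,
    but a 1-bit still switches more gates, and so draws more power."""
    poly = []
    for bit in secret_bits:
        mask = -bit & 0xFFFF               # 0xffff if bit == 1, else 0
        poly.append(((Q + 1) // 2) & mask)
    return poly
```

<p>That per-bit power difference is the signal the paper’s neural networks learn to read out of the traces, even when the computation is spread across shares.</p>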
    <div>
      <h3>Effectiveness</h3>
      <a href="#effectiveness">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1nVauArEr9f940eJstkDu5/8c793320acb617a3ae0436908196e546/image6-7.png" />
            
            </figure><p>To test the attack, they use a <a href="https://www.newae.com/products/NAE-CW1173#:~:text=The%20ChipWhisperer%2DLite%20integrates%20hardware,all%20into%20a%20single%20board.">ChipWhisperer-Lite</a> board with a Cortex-M4 CPU, which they downclock to 24 MHz. Power usage is sampled at 24 MHz with 10-bit precision.</p><p>To train the neural networks, 150,000 power traces are collected for decapsulation of different ciphertexts (with known shared key) for the same KEM keypair. This is already a somewhat unusual situation for a real-world attack: for key agreement, KEM keypairs are ephemeral: generated and used only once. Still, there are certainly legitimate use cases for long-term KEM keypairs, such as <a href="/kemtls-post-quantum-tls-without-signatures/">for authentication</a>, <a href="/hybrid-public-key-encryption/">HPKE</a>, and in particular <a href="/encrypted-client-hello/">ECH</a>.</p><p>The training is a key step: different devices, even of the same model from the same manufacturer, can have significantly different power traces when running the same code.</p><p>The main contribution highlighted by the authors is that they train their neural networks to attack an implementation with 6 shares, by starting with a neural network trained to attack an implementation with 5 shares. That one can be trained from a model to attack 4 shares, and so on. Thus to apply their method, of these 150,000 power traces, one-fifth must be from an implementation with 6 shares, another one-fifth from one with 5 shares, et cetera. It seems unlikely that anyone will deploy a device where an attacker can switch between the number of shares used in the masking on demand.</p><p>Given these affordances, the attack proper can commence. 
The authors report that, from a single power trace of a two-share decapsulation, they could recover the shared key under these ideal circumstances with probability… 0.12%. They do not report the numbers for single trace attacks on more than two shares.</p><p>When we’re allowed multiple traces of the same decapsulation, side-channel attacks become much more effective. The second trick is a clever twist on this: instead of creating a trace of decapsulation of exactly the same message, the authors <i>rotate</i> the ciphertext to move bits of the shared key in more favorable positions. With 4 traces that are rotations of the same message, the success probability against the two-shares implementation goes up to 78%. The six-share implementation stands firm at 0.5%. When allowing 20 traces from the six-share implementation, the shared key can be recovered with an 87% chance.</p>
    <div>
      <h3>In practice</h3>
      <a href="#in-practice">
        
      </a>
    </div>
    <p>The hardware used in the demonstration might be somewhat comparable to a smart card, but it is very different from high-end devices such as smartphones, desktop computers and servers. Simple power analysis side-channel attacks on even just embedded 1 GHz processors are much more challenging, requiring <a href="https://eprint.iacr.org/2015/727.pdf">tens of thousands of traces</a> using a high-end oscilloscope connected close to the processor. There are much better avenues for attack with this kind of physical access to a server: just connect the oscilloscope to the memory bus.</p><p>Except for especially vulnerable applications, such as smart cards and HSMs, power side-channel attacks are widely considered infeasible. Although sometimes, when the planets align, an especially potent power side-channel attack can be turned into a remote timing attack due to throttling, as demonstrated by <a href="/hertzbleed-explained/">Hertzbleed</a>. To be clear: the present attack does not even come close.</p><p>And even for these vulnerable applications, such as smart cards, this attack is not particularly potent or surprising. In the field, it is not a question of whether a masked implementation leaks its secrets, because it always does. It’s a question of how hard the leak is to actually exploit. Papers such as the DNG paper contribute by helping manufacturers estimate how many countermeasures to put in place to make attacks too costly. It is not the first paper studying power side-channel attacks on Kyber, and it will not be the last.</p>
    <div>
      <h2>Wrapping up</h2>
      <a href="#wrapping-up">
        
      </a>
    </div>
    <p>AI has not undermined a new wave of cryptography; instead, it is a helpful tool for dealing with noisy data and discovering the vulnerabilities within it. There is a big difference between a direct break of cryptography and a power side-channel attack. Kyber is not broken, and the presented power side-channel attack is not cause for alarm.</p> ]]></content:encoded>
            <category><![CDATA[Security Week]]></category>
            <category><![CDATA[Post-Quantum]]></category>
            <category><![CDATA[Cryptography]]></category>
            <category><![CDATA[Research]]></category>
            <category><![CDATA[Guest Post]]></category>
            <guid isPermaLink="false">5nkxnUeSeZlhciKkqTAbah</guid>
            <dc:creator>Lejla Batina (Guest Author)</dc:creator>
            <dc:creator>Stjepan Picek (Guest Author)</dc:creator>
            <dc:creator>Bas Westerbaan</dc:creator>
        </item>
    </channel>
</rss>