
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/">
    <channel>
        <title><![CDATA[ The Cloudflare Blog ]]></title>
        <description><![CDATA[ Get the latest news on how products at Cloudflare are built, technologies used, and join the teams helping to build a better Internet. ]]></description>
        <link>https://blog.cloudflare.com</link>
        <atom:link href="https://blog.cloudflare.com/" rel="self" type="application/rss+xml"/>
        <language>en-us</language>
        <image>
            <url>https://blog.cloudflare.com/favicon.png</url>
            <title>The Cloudflare Blog</title>
            <link>https://blog.cloudflare.com</link>
        </image>
        <lastBuildDate>Sun, 05 Apr 2026 17:39:53 GMT</lastBuildDate>
        <item>
            <title><![CDATA[Keeping the Internet fast and secure: introducing Merkle Tree Certificates]]></title>
            <link>https://blog.cloudflare.com/bootstrap-mtc/</link>
            <pubDate>Tue, 28 Oct 2025 13:00:00 GMT</pubDate>
            <description><![CDATA[ Cloudflare is launching an experiment with Chrome to evaluate fast, scalable, and quantum-ready Merkle Tree Certificates, all without degrading performance or changing WebPKI trust relationships. ]]></description>
            <content:encoded><![CDATA[ <p>The world is in a race to build its first quantum computer capable of solving practical problems not feasible on even the largest conventional supercomputers. While the quantum computing paradigm promises many benefits, it also threatens the security of the Internet by breaking much of the cryptography we have come to rely on.</p><p>To mitigate this threat, Cloudflare is helping to migrate the Internet to Post-Quantum (PQ) cryptography. Today, <a href="https://radar.cloudflare.com/adoption-and-usage#post-quantum-encryption"><u>about 50%</u></a> of traffic to Cloudflare's edge network is protected against the most urgent threat: an attacker who can intercept and store encrypted traffic today and then decrypt it in the future with the help of a quantum computer. This is referred to as the <a href="https://en.wikipedia.org/wiki/Harvest_now,_decrypt_later"><u>harvest now, decrypt later</u></a> threat.</p><p>However, this is just one of the threats we need to address. A quantum computer can also be used to crack a server's <a href="https://www.cloudflare.com/application-services/products/ssl/">TLS certificate</a>, allowing an attacker to impersonate the server to unsuspecting clients. The good news is that we already have PQ algorithms we can use for quantum-safe authentication. The bad news is that adoption of these algorithms in TLS will require significant changes to one of the most complex and security-critical systems on the Internet: the Web Public-Key Infrastructure (WebPKI).</p><p>The central problem is the sheer size of these new algorithms: signatures for ML-DSA-44, one of the most performant PQ algorithms standardized by NIST, are 2,420 bytes long, compared to just 64 bytes for ECDSA-P256, the most popular non-PQ signature in use today; and its public keys are 1,312 bytes long, compared to just 64 bytes for ECDSA. That's roughly a 38-fold size increase for signatures and a 20-fold increase for public keys. 
Worse yet, the average TLS handshake includes a number of public keys and signatures, adding up to tens of kilobytes of overhead per handshake. This is enough to have a <a href="https://blog.cloudflare.com/another-look-at-pq-signatures/#how-many-added-bytes-are-too-many-for-tls"><u>noticeable impact</u></a> on the performance of TLS.</p><p>That makes drop-in PQ certificates a tough sell to enable today: they don’t bring any security benefit before Q-day — the day a cryptographically relevant quantum computer arrives — but they do degrade performance. We could sit and wait until Q-day is a year away, but that’s playing with fire. Migrations always take longer than expected, and by waiting we risk the security and privacy of the Internet, which is <a href="https://developers.cloudflare.com/ssl/edge-certificates/universal-ssl/"><u>dear to us</u></a>.</p><p>It's clear that we must find a way to make post-quantum certificates cheap enough to deploy today by default for everyone — not just those who can afford it. In this post, we'll introduce you to the plan we’ve brought to the <a href="https://datatracker.ietf.org/group/plants/about/"><u>IETF</u></a>, together with industry partners, to redesign the WebPKI and allow a smooth transition to PQ authentication with no performance impact (and perhaps a performance improvement!). We'll provide an overview of one concrete proposal, called <a href="https://datatracker.ietf.org/doc/draft-davidben-tls-merkle-tree-certs/"><u>Merkle Tree Certificates (MTCs)</u></a>, whose goal is to whittle down the number of public keys and signatures in the TLS handshake to the bare minimum required.</p><p>But talk is cheap. 
We <a href="https://blog.cloudflare.com/experiment-with-pq/"><u>know</u></a> <a href="https://blog.cloudflare.com/announcing-encrypted-client-hello/"><u>from</u></a> <a href="https://blog.cloudflare.com/why-tls-1-3-isnt-in-browsers-yet/"><u>experience</u></a> that, as with any change to the Internet, it's crucial to test early and often. <b>Today we're announcing our intent to deploy MTCs on an experimental basis in collaboration with Chrome Security.</b> In this post, we'll describe the scope of this experiment, what we hope to learn from it, and how we'll make sure it's done safely.</p>
    <div>
      <h2>The WebPKI today — an old system with many patches</h2>
      <a href="#the-webpki-today-an-old-system-with-many-patches">
        
      </a>
    </div>
    <p>Why does the TLS handshake have so many public keys and signatures?</p><p>Let's start with Cryptography 101. When your browser connects to a website, it asks the server to <b>authenticate</b> itself to make sure it's talking to the real server and not an impersonator. This is usually achieved with a cryptographic primitive known as a digital signature scheme (e.g., ECDSA or ML-DSA). In TLS, the server signs the messages exchanged between the client and server using its <b>secret key</b>, and the client verifies the signature using the server's <b>public key</b>. In this way, the server confirms to the client that they've had the same conversation, since only the server could have produced a valid signature.</p><p>If the client already knows the server's public key, then only <b>1 signature</b> is required to authenticate the server. In practice, however, this is not really an option. The web today is made up of around a billion TLS servers, so it would be unrealistic to provision every client with the public key of every server. What's more, the set of public keys will change over time as new servers come online and existing ones rotate their keys, so we would need some way of pushing these changes to clients.</p><p>This scaling problem is at the heart of the design of all PKIs.</p>
    <div>
      <h3>Trust is transitive</h3>
      <a href="#trust-is-transitive">
        
      </a>
    </div>
    <p>Instead of expecting the client to know the server's public key in advance, the server might just send its public key during the TLS handshake. But how does the client know that the public key actually belongs to the server? This is the job of a <b>certificate</b>.</p><p>A certificate binds a public key to the identity of the server — usually its DNS name, e.g., <code>cloudflareresearch.com</code>. The certificate is signed by a Certification Authority (CA) whose public key is known to the client. In addition to verifying the server's handshake signature, the client verifies the signature of this certificate. This establishes a chain of trust: by accepting the certificate, the client is trusting that the CA verified that the public key actually belongs to the server with that identity.</p><p>Clients are typically configured to trust many CAs and must be provisioned with a public key for each. This is much more manageable, however, since there are only hundreds of CAs instead of billions of servers. In addition, new certificates can be created without having to update clients.</p><p>These efficiencies come at a relatively low cost: for those counting at home, that's <b>+1</b> signature and <b>+1</b> public key, for a total of <b>2 signatures and 1 public key</b> per TLS handshake.</p><p>That's not the end of the story, however. As the WebPKI has evolved, these chains of trust have grown longer. These days it's common for a chain to consist of two or more certificates rather than just one. This is because CAs sometimes need to rotate their keys, just as servers do. But before they can start using the new key, they must distribute the corresponding public key to clients. This takes time, since it requires billions of clients to update their trust stores. 
To bridge the gap, the CA will sometimes use the old key to issue a certificate for the new one and append this certificate to the end of the chain.</p><p>That's <b>+1</b> signature and <b>+1</b> public key, which brings us to <b>3 signatures and 2 public keys</b>. And we still have a little way to go.</p>
    <div>
      <h3>Trust but verify</h3>
      <a href="#trust-but-verify">
        
      </a>
    </div>
    <p>The main job of a CA is to verify that a server has control over the domain for which it’s requesting a certificate. This process has evolved over the years from a high-touch, CA-specific process to a standardized, <a href="https://datatracker.ietf.org/doc/html/rfc8555/"><u>mostly automated process</u></a> used for issuing most certificates on the web. (Not all CAs fully support automation, however.) This evolution has been punctuated by a number of security incidents in which a certificate was <b>mis-issued</b> to a party other than the server, allowing that party to impersonate the server to any client that trusts the CA.</p><p>Automation helps, but <a href="https://en.wikipedia.org/wiki/DigiNotar#Issuance_of_fraudulent_certificates"><u>attacks</u></a> are still possible, and mistakes are almost inevitable. <a href="https://blog.cloudflare.com/unauthorized-issuance-of-certificates-for-1-1-1-1/"><u>Earlier this year</u></a>, several certificates for Cloudflare's encrypted 1.1.1.1 resolver were issued without our involvement or authorization. This apparently occurred by accident, but it nonetheless put users of 1.1.1.1 at risk. (The mis-issued certificates have since been revoked.)</p><p>Ensuring mis-issuance is detectable is the job of the Certificate Transparency (CT) ecosystem. The basic idea is that each certificate issued by a CA gets added to a public <b>log</b>. Servers can audit these logs for certificates issued in their name. If ever a certificate is issued that they didn't request themselves, the server operator can prove that the issuance happened, and the PKI ecosystem can take action to prevent the certificate from being trusted by clients.</p><p>Major browsers, including Chrome (and its derivatives), Safari, and Firefox, require certificates to be logged before they can be trusted: they will only accept the server's certificate if it appears in at least two logs the browser is configured to trust. 
This policy is easy to state, but tricky to implement in practice:</p><ol><li><p>Operating a CT log has historically been fairly expensive. Logs ingest billions of certificates over their lifetimes: when an incident happens, or even just under high load, it can take some time for a log to make a new entry available for auditors.</p></li><li><p>Clients can't really audit logs themselves, since this would expose their browsing history (i.e., the servers they wanted to connect to) to the log operators.</p></li></ol><p>The solution to both problems is to include a signature from the CT log along with the certificate. The signature is produced immediately in response to a request to log a certificate, and attests to the log's intent to include the certificate in the log within 24 hours.</p><p>Per browser policy, certificate transparency adds <b>+2</b> signatures to the TLS handshake, one for each log. This brings us to a total of <b>5 signatures and 2 public keys</b> in a typical handshake on the public web.</p>
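<p>The running tally above can be checked with some quick arithmetic. The sizes below are the approximate per-signature and per-key figures quoted earlier in the post; this is a rough illustration, not an exact wire-format accounting.</p>

```python
# Rough per-handshake authentication overhead: 5 signatures + 2 public
# keys, per the tally above. Byte sizes are the approximate figures
# quoted earlier in the post.
SIZES = {
    "ECDSA-P256": {"sig": 64, "pub": 64},
    "ML-DSA-44": {"sig": 2420, "pub": 1312},
}

def handshake_auth_bytes(alg, n_sigs=5, n_keys=2):
    s = SIZES[alg]
    return n_sigs * s["sig"] + n_keys * s["pub"]

print(handshake_auth_bytes("ECDSA-P256"))  # 448 bytes today
print(handshake_auth_bytes("ML-DSA-44"))   # 14724 bytes with a drop-in PQ swap
```

<p>A drop-in swap to ML-DSA-44 thus adds roughly 14 kB of authentication data to every handshake — the "tens of kilobytes" of overhead the rest of this post is about eliminating.</p>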
    <div>
      <h3>The future WebPKI</h3>
      <a href="#the-future-webpki">
        
      </a>
    </div>
    <p>The WebPKI is a living, breathing, and highly distributed system. We've had to patch it a number of times over the years to keep it going, but on balance it has served our needs quite well — until now.</p><p>Previously, whenever we needed to update something in the WebPKI, we would tack on another signature. This strategy has worked because conventional cryptography is so cheap. But <b>5 signatures and 2 public keys </b>on average for each TLS handshake is simply too much to cope with for the larger PQ signatures that are coming.</p><p>The good news is that by moving what we already have around in clever ways, we can drastically reduce the number of signatures we need.</p>
    <div>
      <h3>Crash course on Merkle Tree Certificates</h3>
      <a href="#crash-course-on-merkle-tree-certificates">
        
      </a>
    </div>
    <p><a href="https://datatracker.ietf.org/doc/draft-davidben-tls-merkle-tree-certs/"><u>Merkle Tree Certificates (MTCs)</u></a> is a proposal for the next generation of the WebPKI that we are implementing and plan to deploy on an experimental basis. Its key features are as follows:</p><ol><li><p>All the information a client needs to validate a Merkle Tree Certificate can be disseminated out-of-band. If the client is sufficiently up-to-date, then the TLS handshake needs just <b>1 signature, 1 public key, and 1 Merkle tree inclusion proof</b>. This is quite small, even if we use post-quantum algorithms.</p></li><li><p>The MTC specification makes certificate transparency a first class feature of the PKI by having each CA run its own log of exactly the certificates they issue.</p></li></ol><p>Let's poke our head under the hood a little. Below we have an MTC generated by one of our internal tests. This would be transmitted from the server to the client in the TLS handshake:</p>
            <pre><code>-----BEGIN CERTIFICATE-----
MIICSzCCAUGgAwIBAgICAhMwDAYKKwYBBAGC2ksvADAcMRowGAYKKwYBBAGC2ksv
AQwKNDQzNjMuNDguMzAeFw0yNTEwMjExNTMzMjZaFw0yNTEwMjgxNTMzMjZaMCEx
HzAdBgNVBAMTFmNsb3VkZmxhcmVyZXNlYXJjaC5jb20wWTATBgcqhkjOPQIBBggq
hkjOPQMBBwNCAARw7eGWh7Qi7/vcqc2cXO8enqsbbdcRdHt2yDyhX5Q3RZnYgONc
JE8oRrW/hGDY/OuCWsROM5DHszZRDJJtv4gno2wwajAOBgNVHQ8BAf8EBAMCB4Aw
EwYDVR0lBAwwCgYIKwYBBQUHAwEwQwYDVR0RBDwwOoIWY2xvdWRmbGFyZXJlc2Vh
cmNoLmNvbYIgc3RhdGljLWN0LmNsb3VkZmxhcmVyZXNlYXJjaC5jb20wDAYKKwYB
BAGC2ksvAAOB9QAAAAAAAAACAAAAAAAAAAJYAOBEvgOlvWq38p45d0wWTPgG5eFV
wJMhxnmDPN1b5leJwHWzTOx1igtToMocBwwakt3HfKIjXYMO5CNDOK9DIKhmRDSV
h+or8A8WUrvqZ2ceiTZPkNQFVYlG8be2aITTVzGuK8N5MYaFnSTtzyWkXP2P9nYU
Vd1nLt/WjCUNUkjI4/75fOalMFKltcc6iaXB9ktble9wuJH8YQ9tFt456aBZSSs0
cXwqFtrHr973AZQQxGLR9QCHveii9N87NXknDvzMQ+dgWt/fBujTfuuzv3slQw80
mibA021dDCi8h1hYFQAA
-----END CERTIFICATE-----</code></pre>
            <p>Looks like your average PEM-encoded certificate. Let's decode it and look at the parameters:</p>
            <pre><code>$ openssl x509 -in merkle-tree-cert.pem -noout -text
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number: 531 (0x213)
        Signature Algorithm: 1.3.6.1.4.1.44363.47.0
        Issuer: 1.3.6.1.4.1.44363.47.1=44363.48.3
        Validity
            Not Before: Oct 21 15:33:26 2025 GMT
            Not After : Oct 28 15:33:26 2025 GMT
        Subject: CN=cloudflareresearch.com
        Subject Public Key Info:
            Public Key Algorithm: id-ecPublicKey
                Public-Key: (256 bit)
                pub:
                    04:70:ed:e1:96:87:b4:22:ef:fb:dc:a9:cd:9c:5c:
                    ef:1e:9e:ab:1b:6d:d7:11:74:7b:76:c8:3c:a1:5f:
                    94:37:45:99:d8:80:e3:5c:24:4f:28:46:b5:bf:84:
                    60:d8:fc:eb:82:5a:c4:4e:33:90:c7:b3:36:51:0c:
                    92:6d:bf:88:27
                ASN1 OID: prime256v1
                NIST CURVE: P-256
        X509v3 extensions:
            X509v3 Key Usage: critical
                Digital Signature
            X509v3 Extended Key Usage:
                TLS Web Server Authentication
            X509v3 Subject Alternative Name:
                DNS:cloudflareresearch.com, DNS:static-ct.cloudflareresearch.com
    Signature Algorithm: 1.3.6.1.4.1.44363.47.0
    Signature Value:
        00:00:00:00:00:00:02:00:00:00:00:00:00:00:02:58:00:e0:
        44:be:03:a5:bd:6a:b7:f2:9e:39:77:4c:16:4c:f8:06:e5:e1:
        55:c0:93:21:c6:79:83:3c:dd:5b:e6:57:89:c0:75:b3:4c:ec:
        75:8a:0b:53:a0:ca:1c:07:0c:1a:92:dd:c7:7c:a2:23:5d:83:
        0e:e4:23:43:38:af:43:20:a8:66:44:34:95:87:ea:2b:f0:0f:
        16:52:bb:ea:67:67:1e:89:36:4f:90:d4:05:55:89:46:f1:b7:
        b6:68:84:d3:57:31:ae:2b:c3:79:31:86:85:9d:24:ed:cf:25:
        a4:5c:fd:8f:f6:76:14:55:dd:67:2e:df:d6:8c:25:0d:52:48:
        c8:e3:fe:f9:7c:e6:a5:30:52:a5:b5:c7:3a:89:a5:c1:f6:4b:
        5b:95:ef:70:b8:91:fc:61:0f:6d:16:de:39:e9:a0:59:49:2b:
        34:71:7c:2a:16:da:c7:af:de:f7:01:94:10:c4:62:d1:f5:00:
        87:bd:e8:a2:f4:df:3b:35:79:27:0e:fc:cc:43:e7:60:5a:df:
        df:06:e8:d3:7e:eb:b3:bf:7b:25:43:0f:34:9a:26:c0:d3:6d:
        5d:0c:28:bc:87:58:58:15:00:00</code></pre>
            <p>While some of the parameters probably look familiar, others will look unusual. On the familiar side, the subject and public key are exactly what we might expect: the DNS name is <code>cloudflareresearch.com</code> and the public key is for a familiar signature algorithm, ECDSA-P256. This algorithm is not PQ, of course — in the future we would put ML-DSA-44 there instead.</p><p>On the unusual side, OpenSSL appears to not recognize the signature algorithm of the issuer and just prints the raw OID and bytes of the signature. There's a good reason for this: the MTC does not have a signature in it at all! So what exactly are we looking at?</p><p>The trick to leave out signatures is that a Merkle Tree Certification Authority (MTCA) produces its <i>signatureless</i> certificates <i>in batches</i> rather than individually. In place of a signature, the certificate has an <b>inclusion proof</b> of the certificate in a batch of certificates signed by the MTCA.</p><p>To understand how inclusion proofs work, let's think about a slightly simplified version of the MTC specification. To issue a batch, the MTCA arranges the unsigned certificates into a data structure called a <b>Merkle tree</b> that looks like this:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4LGhISsS07kbpSgDkqx8p2/68e3b36deeca7f97139654d2c769df68/image3.png" />
          </figure><p>Each leaf of the tree corresponds to a certificate, and each inner node is equal to the hash of its children. To sign the batch, the MTCA uses its secret key to sign the head of the tree. The structure of the tree guarantees that each certificate in the batch was signed by the MTCA: if we tried to tweak the bits of any one of the certificates, the treehead would end up having a different value, which would cause signature verification to fail.</p><p>An inclusion proof for a certificate consists of the hash of each sibling node along the path from the certificate to the treehead:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4UZZHkRwsBLWXRYeop4rXv/8598cde48c27c112bc4992889f3d5799/image1.gif" />
          </figure><p>Given a validated treehead, this sequence of hashes is sufficient to prove inclusion of the certificate in the tree. This means that, in order to validate an MTC, the client also needs to obtain the signed treehead from the MTCA.</p><p>This is the key to MTC's efficiency:</p><ol><li><p>Signed treeheads can be disseminated to clients out-of-band and validated offline. Each validated treehead can then be used to validate any certificate in the corresponding batch, eliminating the need to obtain a signature for each server certificate.</p></li><li><p>During the TLS handshake, the client tells the server which treeheads it has. If the server has a signatureless certificate covered by one of those treeheads, then it can use that certificate to authenticate itself. That's <b>1 signature, 1 public key, and 1 inclusion proof</b> per handshake, all for the server being authenticated.</p></li></ol><p>Now, that's the simplified version. MTC proper has some more bells and whistles. To start, it doesn’t create a separate Merkle tree for each batch; instead, it grows a single large tree, which also improves transparency. As this tree grows, (sub)tree heads are periodically selected to be shipped to browsers; we call these <b>landmarks</b>. In the common case browsers will be able to fetch the most recent landmarks, and servers can wait for batch issuance, but we need a fallback: MTC also supports certificates that can be issued immediately and don’t require landmarks to be validated, but these are not as small. A server would provision both types of Merkle tree certificates, so that the common case is fast, and the exceptional case is slower but still works.</p>
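<p>The simplified scheme described above — hash the certificates into a tree, sign only the head, and prove inclusion with sibling hashes — can be sketched in a few lines. This toy version assumes a power-of-two batch size and is not the MTC wire format; it only illustrates the mechanism.</p>

```python
import hashlib

# Toy Merkle tree for a batch of certificates: leaves hold certificate
# hashes, each inner node is the hash of its children, and an inclusion
# proof is the list of sibling hashes on the path up to the tree head.

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_tree(leaves):
    """Return all levels, from the leaf hashes up to the tree head."""
    level = [h(leaf) for leaf in leaves]      # assumes len(leaves) is a power of 2
    levels = [level]
    while len(level) > 1:
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return levels

def inclusion_proof(levels, index):
    """Sibling hashes from leaf `index` up to (but not including) the head."""
    proof = []
    for level in levels[:-1]:
        proof.append(level[index ^ 1])        # sibling: flip the lowest bit
        index //= 2
    return proof

def verify(leaf, index, proof, head):
    """Recompute the path to the head; any tweak to `leaf` changes it."""
    node = h(leaf)
    for sibling in proof:
        node = h(sibling + node) if index & 1 else h(node + sibling)
        index //= 2
    return node == head

certs = [b"cert-0", b"cert-1", b"cert-2", b"cert-3"]
levels = build_tree(certs)
head = levels[-1][0]                          # in MTC, the CA signs this value
proof = inclusion_proof(levels, 2)
print(verify(b"cert-2", 2, proof, head))      # True
print(verify(b"cert-X", 2, proof, head))      # False
```

<p>Note that the proof grows only logarithmically with the batch size: a batch of a million certificates needs about 20 sibling hashes, or around 640 bytes with SHA-256, far smaller than even one ML-DSA signature.</p>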
    <div>
      <h2>Experimental deployment</h2>
      <a href="#experimental-deployment">
        
      </a>
    </div>
    <p>Ever since early designs for MTCs emerged, we’ve been eager to experiment with the idea. In line with the IETF principle of “<a href="https://www.ietf.org/runningcode/"><u>running code</u></a>”, it often takes implementing a protocol to work out kinks in the design. At the same time, we cannot risk the security of users. In this section, we describe our approach to experimenting with aspects of the Merkle Tree Certificates design <i>without</i> changing any trust relationships.</p><p>Let’s start with what we hope to learn. We have lots of questions whose answers can help to either validate the approach, or uncover pitfalls that require reshaping the protocol — in fact, an implementation of an early MTC draft by <a href="https://www.cs.ru.nl/masters-theses/2025/M_Pohl___Implementation_and_Analysis_of_Merkle_Tree_Certificates_for_Post-Quantum_Secure_Authentication_in_TLS.pdf"><u>Maximilian Pohl</u></a> and <a href="https://www.ietf.org/archive/id/draft-davidben-tls-merkle-tree-certs-07.html#name-acknowledgements"><u>Mia Celeste</u></a> did exactly this. We’d like to know:</p><p><b>What breaks?</b> Protocol ossification (the tendency of implementation bugs to make it harder to change a protocol) is an ever-present issue with deploying protocol changes. For TLS in particular, despite having built-in flexibility, time after time we’ve found that if that flexibility is not regularly used, there will be buggy implementations and middleboxes that break when they see things they don’t recognize. TLS 1.3 deployment <a href="https://blog.cloudflare.com/why-tls-1-3-isnt-in-browsers-yet/"><u>took years longer</u></a> than we hoped for this very reason. 
And more recently, the rollout of PQ key exchange in TLS caused the Client Hello to be split over multiple TCP packets, something that many middleboxes <a href="https://tldr.fail/"><u>weren't ready for</u></a>.</p><p><b>What is the performance impact?</b> In fact, we expect MTCs to <i>reduce </i>the size of the handshake, even compared to today's non-PQ certificates. They will also reduce CPU cost: ML-DSA signature verification is about as fast as ECDSA, and there will be far fewer signatures to verify. We therefore expect to see a <i>reduction in latency</i>. We would like to see if there is a measurable performance improvement.</p><p><b>What fraction of clients will stay up to date? </b>Getting the performance benefit of MTCs requires clients and servers to be roughly in sync with one another. We expect MTCs to have fairly short lifetimes, a week or so. This means that if the client's latest landmark is older than a week, the server would have to fall back to a larger certificate. Knowing how often this fallback happens will help us tune the parameters of the protocol to make fallbacks less likely.</p><p>In order to answer these questions, we are implementing MTC support in our TLS stack and in our certificate issuance infrastructure. For their part, Chrome is implementing MTC support in their own TLS stack and will stand up infrastructure to disseminate landmarks to their users.</p><p>As we've done in past experiments, we plan to enable MTCs for a subset of our free customers with enough traffic that we will be able to get useful measurements. Chrome will control the experimental rollout: they can ramp up slowly, measuring as they go and rolling back if and when bugs are found.</p><p>Which leaves us with one last question: who will run the Merkle Tree CA?</p>
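<p>The landmark-based fallback discussed above amounts to a simple selection rule on the server. Everything in this sketch — the field names, the landmark identifiers — is illustrative, not taken from the MTC specification.</p>

```python
# Hypothetical server-side certificate selection: serve the small,
# signatureless MTC when the client advertises a landmark that covers it;
# otherwise fall back to the larger certificate that needs no landmark.
def select_certificate(client_landmarks, landmark_certs, fallback_cert):
    for cert in landmark_certs:
        if cert["landmark"] in client_landmarks:
            return cert        # common case: compact cert + inclusion proof
    return fallback_cert       # exceptional case: bigger, but always valid

certs = [{"landmark": "landmark-1042", "kind": "mtc"}]
fallback = {"landmark": None, "kind": "fallback"}

print(select_certificate({"landmark-1041", "landmark-1042"}, certs, fallback)["kind"])
print(select_certificate({"landmark-0990"}, certs, fallback)["kind"])
```

<p>Measuring how often the second branch is taken in practice is exactly the "fraction of clients up to date" question the experiment is designed to answer.</p>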
    <div>
      <h3>Bootstrapping trust from the existing WebPKI</h3>
      <a href="#bootstrapping-trust-from-the-existing-webpki">
        
      </a>
    </div>
    <p>Standing up a proper CA is no small task: it takes years to be trusted by major browsers. That’s why Cloudflare isn’t going to become a “real” CA for this experiment, and Chrome isn’t going to trust us directly.</p><p>Instead, to make progress within a reasonable timeframe, without sacrificing due diligence, we plan to "mock" the role of the MTCA. We will run an MTCA (on <a href="https://github.com/cloudflare/azul/"><u>Workers</u></a> based on our <a href="https://blog.cloudflare.com/azul-certificate-transparency-log/"><u>StaticCT logs</u></a>), but for each MTC we issue, we also publish an existing certificate from a trusted CA that agrees with it. We call this the <b>bootstrap certificate</b>. When Chrome’s infrastructure pulls updates from our MTCA log, they will also pull these bootstrap certificates and check whether they agree. Only if they do will they proceed to push the corresponding landmarks to Chrome clients. In other words, Cloudflare is effectively just “re-encoding” an existing certificate (with domain validation performed by a trusted CA) as an MTC, and Chrome is using certificate transparency to keep us honest.</p>
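<p>At its core, the "agreement" check described above asks whether two certificates bind the same key to the same identities. The following highly simplified sketch stands certificates in for plain dictionaries; a real check would operate on parsed X.509 and MTC structures.</p>

```python
# Simplified sketch of checking that an MTC "agrees" with its bootstrap
# certificate: the same set of DNS names bound to the same public key.
# The dict representation and field names are illustrative only.
def agrees(mtc, bootstrap):
    return (set(mtc["dns_names"]) == set(bootstrap["dns_names"])
            and mtc["public_key"] == bootstrap["public_key"])

mtc = {"dns_names": ["cloudflareresearch.com"], "public_key": "spki-bytes"}
boot = {"dns_names": ["cloudflareresearch.com"], "public_key": "spki-bytes"}
print(agrees(mtc, boot))                             # True
print(agrees({**mtc, "public_key": "other"}, boot))  # False
```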
    <div>
      <h2>Conclusion</h2>
      <a href="#conclusion">
        
      </a>
    </div>
    <p>With almost 50% of our traffic already protected by post-quantum encryption, we’re halfway to a fully post-quantum secure Internet. The second part of our journey, post-quantum certificates, is the harder one, though. A simple drop-in upgrade has a noticeable performance impact and no security benefit before Q-day. This means it’s a hard sell to enable today by default. But here we are playing with fire: migrations always take longer than expected. If we want to keep a ubiquitously private and secure Internet, we need a post-quantum solution that’s performant enough to be enabled by default <b>today</b>.</p><p>Merkle Tree Certificates (MTCs) solve this problem by reducing the number of signatures and public keys to the bare minimum while maintaining the WebPKI's essential properties. We plan to roll out MTCs to a fraction of free accounts by early next year. This does not affect any visitors that are not part of the Chrome experiment. For those that are, thanks to the bootstrap certificates, there is no impact on security.</p><p>We’re excited to keep the Internet fast <i>and</i> secure, and will report back soon on the results of this experiment: watch this space! MTC is evolving as we speak; if you want to get involved, please join the IETF <a href="https://mailman3.ietf.org/mailman3/lists/plants@ietf.org/"><u>PLANTS mailing list</u></a>.</p> ]]></content:encoded>
            <category><![CDATA[Post-Quantum]]></category>
            <category><![CDATA[Research]]></category>
            <category><![CDATA[Cryptography]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[TLS]]></category>
            <category><![CDATA[Chrome]]></category>
            <category><![CDATA[Google]]></category>
            <category><![CDATA[IETF]]></category>
            <category><![CDATA[Transparency]]></category>
            <category><![CDATA[Rust]]></category>
            <category><![CDATA[Open Source]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <guid isPermaLink="false">4jURWdZzyjdrcurJ4LlJ1z</guid>
            <dc:creator>Luke Valenta</dc:creator>
            <dc:creator>Christopher Patton</dc:creator>
            <dc:creator>Vânia Gonçalves</dc:creator>
            <dc:creator>Bas Westerbaan</dc:creator>
        </item>
        <item>
            <title><![CDATA[Eliminating Cold Starts 2: shard and conquer]]></title>
            <link>https://blog.cloudflare.com/eliminating-cold-starts-2-shard-and-conquer/</link>
            <pubDate>Fri, 26 Sep 2025 13:00:00 GMT</pubDate>
            <description><![CDATA[ We reduced Cloudflare Workers cold starts by 10x by optimistically routing to servers with already-loaded Workers. Learn how we did it here. ]]></description>
            <content:encoded><![CDATA[ <p>Five years ago, we announced that we were <a href="https://blog.cloudflare.com/eliminating-cold-starts-with-cloudflare-workers/"><u>Eliminating Cold Starts with Cloudflare Workers</u></a>. In that episode, we introduced a technique to pre-warm Workers during the <a href="https://www.cloudflare.com/learning/ssl/what-happens-in-a-tls-handshake/"><u>TLS handshake</u></a> of their first request. That technique takes advantage of the fact that the <a href="https://www.cloudflare.com/learning/ssl/what-is-sni/"><u>TLS Server Name Indication (SNI)</u></a> is sent in the very first message of the TLS handshake. Armed with that SNI, we often have enough information to pre-warm the request’s target Worker.</p><p>Eliminating cold starts by pre-warming <a href="https://www.cloudflare.com/developer-platform/products/workers/"><u>Workers</u></a> during TLS handshakes was a huge step forward for us, but “eliminate” is a strong word. Back then, Workers were still relatively small, and had cold starts constrained by limits explained later in this post. We’ve relaxed those limits, and users routinely deploy complex applications on Workers, often replacing origin servers. Simultaneously, TLS handshakes haven’t gotten any slower. In fact, <a href="https://www.cloudflare.com/learning/ssl/why-use-tls-1.3/"><u>TLS 1.3</u></a> only requires a single round trip for a handshake – compared to two round trips for TLS 1.2 – and is more widely used than it was in 2021.</p><p>Earlier this month, we finished deploying a new technique intended to keep pushing the boundary on cold start reduction. The new technique (or old, depending on your perspective) uses a consistent hash ring to take advantage of our global <a href="https://www.cloudflare.com/network"><u>network</u></a>. We call this mechanism “Worker sharding”.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3wfKoSIUzu20UtrJ3thfLh/c5821aa90f72a344962b83dbbfbb4508/image12.png" />
          </figure>
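<p>For readers unfamiliar with the structure, a consistent hash ring can be sketched generically as below. How Cloudflare actually keys the ring and routes requests is their own design; this sketch only shows the core property: each Worker maps deterministically to a server, and adding or removing a server only remaps a small fraction of Workers.</p>

```python
import bisect
import hashlib

# Generic consistent hash ring. Each server contributes many points on a
# ring of hash values; a Worker is routed to the server owning the first
# point at or after the Worker's own hash (wrapping around at the end).
class HashRing:
    def __init__(self, servers, points_per_server=64):
        self.ring = []  # sorted list of (hash, server) pairs
        for s in servers:
            for i in range(points_per_server):
                self.ring.append((self._h(f"{s}:{i}"), s))
        self.ring.sort()
        self.keys = [k for k, _ in self.ring]

    @staticmethod
    def _h(s: str) -> int:
        return int.from_bytes(hashlib.sha256(s.encode()).digest()[:8], "big")

    def server_for(self, worker_id: str) -> str:
        i = bisect.bisect_right(self.keys, self._h(worker_id)) % len(self.ring)
        return self.ring[i][1]

ring = HashRing([f"server-{n}" for n in range(8)])
print(ring.server_for("my-worker"))  # same Worker always maps to the same server
```

<p>The payoff for routing is that a given Worker's requests concentrate on the same machine, where the Worker is likely already warm — while the ring keeps that assignment stable as servers come and go.</p>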
    <div>
      <h3>What’s in a cold start?</h3>
      <a href="#whats-in-a-cold-start">
        
      </a>
    </div>
    <p>A Worker is the basic unit of compute in our <a href="https://www.cloudflare.com/learning/serverless/what-is-serverless/"><u>serverless computing</u></a> platform. It has a simple lifecycle. We instantiate it from source code (typically JavaScript), make it serve a bunch of requests (often HTTP, but not always), and eventually shut it down some time after it stops receiving traffic, to re-use its resources for other Workers. We call that shutdown process “eviction”.</p><p>The most expensive part of the Worker’s lifecycle is the initial instantiation and first request invocation. We call this part a “cold start”. Cold starts have several phases: fetching the script source code, compiling the source code, performing a top-level execution of the resulting JavaScript module, and finally, performing the initial invocation to serve the incoming HTTP request that triggered the whole sequence of events in the first place.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4dsRI0IS9GmJFRzCWeaQDw/db362a6962e20976565d6ed0fc11cdf2/image11.png" />
          </figure>
    <div>
      <h3>Cold starts have become longer than TLS handshakes</h3>
      <a href="#cold-starts-have-become-longer-than-tls-handshakes">
        
      </a>
    </div>
    <p>Fundamentally, our TLS handshake technique depends on the handshake lasting longer than the cold start. This is because the duration of the TLS handshake is time that the visitor must spend waiting, regardless, so it’s beneficial to everyone if we do as much work during that time as possible. If we can run the Worker’s cold start in the background while the handshake is still taking place, and if that cold start finishes <i>before</i> the handshake, then the request will ultimately see zero cold start delay. If, on the other hand, the cold start takes <i>longer</i> than the TLS handshake, then the request will see some part of the cold start delay – though the technique still helps reduce that visible delay.</p>
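The race described above reduces to a simple relation: the delay a visitor actually sees is only the part of the cold start that outlasts the handshake. Here is a minimal sketch of that relation; the function name and timings are illustrative, not production values.

```javascript
// If we kick off a Worker's cold start as soon as the SNI arrives, the
// cold start runs concurrently with the rest of the TLS handshake. The
// visitor only experiences whatever part of the cold start outlasts the
// handshake.
function visibleColdStartMs(coldStartMs, handshakeMs) {
  return Math.max(0, coldStartMs - handshakeMs);
}

// Hypothetical timings: a 5 ms cold start hides entirely inside a 30 ms
// handshake, while a 400 ms cold start leaves 370 ms of visible delay.
const hidden = visibleColdStartMs(5, 30);   // -> 0
const leaked = visibleColdStartMs(400, 30); // -> 370
```

As cold starts grew past typical handshake durations, the second case became the common one, which is what motivated reducing the *number* of cold starts rather than just their length.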
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/54wEBpC1hCRXN97zMiAXu1/f1434aac21b40093bc12d8931c7bc1ec/image7.png" />
          </figure><p>In the early days, it was a safe bet that TLS handshakes would last longer than Worker cold starts, and cold starts typically won the race. One of our <a href="https://blog.cloudflare.com/cloud-computing-without-containers/#cold-starts"><u>early blog posts explaining how our platform works</u></a> mentions 5 millisecond cold start times – and that was correct, at the time!</p><p>For every limit we have, our users have challenged us to relax it. Cold start times are no different. </p><p>There are two crucial limits which affect cold start time: Worker script size and the startup CPU time limit. While we didn’t make big announcements at the time, we have quietly raised both of those limits since our last <i>Eliminating Cold Starts</i> blog post:</p><ul><li><p>Worker script size (compressed) increased from <a href="https://github.com/cloudflare/cloudflare-docs/pull/6613"><u>1 MB to 5 MB</u></a>, then again from <a href="https://github.com/cloudflare/cloudflare-docs/pull/9083"><u>5 MB to 10 MB</u></a>, for paying users.</p></li><li><p>Worker script size (compressed) increased from <a href="https://github.com/cloudflare/cloudflare-docs/pull/18400"><u>1 MB to 3 MB</u></a> for free users.</p></li><li><p>Startup CPU time increased from <a href="https://github.com/cloudflare/cloudflare-docs/pull/9154"><u>200ms to 400ms</u></a>.</p></li></ul><p>We relaxed these limits because our users wanted to deploy increasingly complex applications to our platform. And deploy they did! But the increases have a cost:</p><ul><li><p>Increasing script size increases the amount of data we must transfer from script storage to the Workers runtime.</p></li><li><p>Increasing script size also increases the time complexity of the script compilation phase.</p></li><li><p>Increasing the startup CPU time limit increases the maximum top-level execution time.</p></li></ul><p>Taken together, cold starts for complex applications began to lose the TLS handshake race.</p>
    <div>
      <h3>Routing requests to an existing Worker</h3>
      <a href="#routing-requests-to-an-existing-worker">
        
      </a>
    </div>
    <p>With relaxed script size and startup time limits, optimizing cold start time directly was a losing battle. Instead, we needed to figure out how to reduce the absolute <i>number</i> of cold starts, so that requests are simply less likely to incur one.</p><p>One option is to route requests to existing Worker instances, where before we might have chosen to start a new instance.</p><p>Previously, we weren’t particularly good at routing requests to existing Worker instances. We could trivially coalesce requests to a single Worker instance if they happened to land on a machine which already hosted the target Worker, because in that case it’s not a distributed systems problem. But what if a Worker already existed in our data center on a different server, and some other server received a request for the Worker? We would always choose to cold start a new Worker on the machine which received the request, rather than forward the request to the machine with the already-existing Worker, even though forwarding the request would avoid the cold start.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5Yf9jgJcUGIppzhobkbG3j/4113c3a1910961b014f4f4ff9a1866be/image8.png" />
          </figure><p>To drive the point home: Imagine a visitor sends one request per minute to a data center with 300 servers, and that the traffic is load balanced evenly across all servers. On average, each server will receive one request every five hours. In particularly busy data centers, this span of time could be long enough that we need to evict the Worker to re-use its resources, resulting in a 100% cold start rate. That’s a terrible experience for the visitor.</p><p>Consequently, we found ourselves explaining to users, who saw high latency while prototyping their applications, that their latency would counterintuitively <i>decrease</i> once they put sufficient traffic on our network. This highlighted the inefficiency in our original, simple design.</p><p>If, instead, those requests were all coalesced onto one single server, we would notice multiple benefits. The Worker would receive one request per minute, which is short enough to virtually guarantee that it won’t be evicted. This would mean the visitor may experience a single cold start, and then have a 100% “warm request rate.” We would also use 99.7% (299 / 300) less memory serving this traffic. This makes room for other Workers, decreasing their eviction rate, and increasing <i>their</i> warm request rates, too – a virtuous cycle!</p><p>There’s a cost to coalescing requests to a single instance, though, right? After all, we’re adding latency to requests if we have to proxy them around the data center to a different server.</p><p>In practice, the added time-to-first-byte is less than one millisecond, and is the subject of continual optimization by our IPC and performance teams. One millisecond is far less than a typical cold start, meaning it’s always better, in every measurable way, to proxy a request to a warm Worker than it is to cold start a new one.</p>
    <div>
      <h3>The consistent hash ring</h3>
      <a href="#the-consistent-hash-ring">
        
      </a>
    </div>
    <p>A solution to this very problem lies at the heart of many of our products, including one of our oldest: the HTTP cache in our <a href="https://www.cloudflare.com/application-services/products/cdn/"><u>Content Delivery Network</u></a>.</p><p>When a visitor requests a cacheable web asset through Cloudflare, the request gets routed through a pipeline of proxies. One of those proxies is a caching proxy, which stores the asset for later, so we can serve it to future requests without having to request it from the origin again.</p><p>A Worker cold start is analogous to an HTTP cache miss, and a request to a warm Worker is analogous to an HTTP cache hit.</p><p>When our standard HTTP proxy pipeline routes requests to the caching layer, it chooses a cache server based on the request's cache key to optimize the HTTP cache hit rate. <a href="https://developers.cloudflare.com/cache/how-to/cache-keys/"><u>The cache key is the request’s URL, plus some other details</u></a>. This technique is often called “sharding”. The servers are considered to be individual shards of a larger, logical system – in this case a data center’s HTTP cache. So, we can say things like, “Each data center contains one logical HTTP cache, and that cache is sharded across every server in the data center.”</p><p>Until recently, we could not make the same claim about the set of Workers in a data center. Instead, each server contained its own standalone set of Workers, and they could easily duplicate effort.</p><p>We borrow the cache’s trick to solve that. In fact, we even use the same type of data structure used by our HTTP cache to choose servers: a <a href="https://en.wikipedia.org/wiki/Consistent_hashing"><u>consistent hash ring</u></a>. A naive sharding implementation might use a classic hash table mapping Worker script IDs to server addresses. That would work fine for a set of servers which never changes. But servers are actually ephemeral and have their own lifecycle. 
They can crash, get rebooted, taken out for maintenance, or decommissioned. New ones can come online. When these events occur, the size of the hash table would change, necessitating a re-hashing of the whole table. Every Worker’s home server would change, and all sharded Workers would be cold started again!</p><p>A consistent hash ring improves this scenario significantly. Instead of establishing a direct correspondence between script IDs and server addresses, we map them both to a number line whose end wraps around to its beginning, also known as a ring. To look up the home server of a Worker, first we hash its script, and then we find where it lies on the ring. Next, we take the server address which comes directly on or after that position on the ring, and consider that the Worker’s home.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3UJuyYc8D1CYDrRKgBmz7e/ce1da2614c2a5d712a3ab471a0ec7675/image2.png" />
          </figure><p>If a new server appears for some reason, the Workers that lie between it and the preceding server on the ring get re-homed to it, but none of the other Workers are disturbed. Similarly, if a server disappears, the Workers that lay between it and the preceding server get re-homed to the next server on the ring.</p>
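The lookup described above can be sketched in a few lines. This is a toy illustration, not our production implementation: it uses a simple FNV-1a string hash and gives each server a single point on the ring, whereas real deployments typically use stronger hashes and many virtual points per server.

```javascript
// Toy 32-bit FNV-1a hash: maps script IDs and server names onto the ring.
function hash(s) {
  let h = 2166136261;
  for (let i = 0; i < s.length; i++) {
    h ^= s.charCodeAt(i);
    h = Math.imul(h, 16777619);
  }
  return h >>> 0;
}

class HashRing {
  constructor(servers) {
    // Each server occupies one point on the ring, sorted by position.
    this.ring = servers
      .map((name) => ({ pos: hash(name), name }))
      .sort((a, b) => a.pos - b.pos);
  }
  // Home server = first server at or after the key's position, wrapping
  // around to the start of the ring if the key hashes past the last one.
  lookup(scriptId) {
    const p = hash(scriptId);
    const entry = this.ring.find((e) => e.pos >= p) ?? this.ring[0];
    return entry.name;
  }
}
```

The key property is the one the paragraph above describes: removing or adding a server only re-homes the Workers adjacent to it on the ring, so most Workers keep their home server across topology changes.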
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/61yVG9Cd7z9R7pCbDQKUXT/42a1d5f6f02b33b4c321dc3788a8d724/image6.png" />
          </figure><p>We refer to the Worker’s home server as the “shard server”. In request flows involving sharding, there is also a “shard client”. It’s also a server! The shard client initially receives a request, and, using its consistent hash ring, looks up which shard server it should send the request to. I’ll be using these two terms – shard client and shard server – in the rest of this post.</p>
    <div>
      <h3>Handling overload</h3>
      <a href="#handling-overload">
        
      </a>
    </div>
    <p>HTTP assets naturally lend themselves to sharding. If they are cacheable, they are static, at least for their cache <a href="https://www.cloudflare.com/learning/cdn/glossary/time-to-live-ttl/"><u>Time to Live (TTL)</u></a> duration. So, serving them requires time and space complexity which scales linearly with their size.</p><p>But Workers aren’t JPEGs. They are live units of compute which can use up to five minutes of CPU time per request. Their time and space complexity do not necessarily scale with their input size, and can vastly outstrip the amount of computing power we must dedicate to serving even a huge file from cache.</p><p>This means that individual Workers can easily get overloaded when given sufficient traffic. So, no matter what we do, we need to keep in mind that we must still be able to scale out, effectively without limit. We will never be able to guarantee that a data center has only one instance of a Worker, and we must always be able to horizontally scale at the drop of a hat to support burst traffic. Ideally this is all done without producing any errors.</p><p>This means that a shard server must have the ability to refuse requests to invoke Workers on it, and shard clients must always gracefully handle this scenario.</p>
    <div>
      <h3>Two load shedding options</h3>
      <a href="#two-load-shedding-options">
        
      </a>
    </div>
    <p>I am aware of two general solutions to shedding load gracefully, without serving errors.</p><p>In the first solution, the client asks politely if it may issue the request. It then sends the request if it  receives a positive response. If it instead receives a “go away” response, it handles the request differently, like serving it locally. In HTTP, this pattern can be found in <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Status/100"><u>Expect: 100-continue semantics</u></a>. The main downside is that this introduces one round-trip of latency to set the expectation of success before the request can be sent. (Note that a common naive solution is to just retry requests. This works for some kinds of requests, but is not a general solution, as requests may carry arbitrarily large bodies.)</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4e7OUD3T8kT5ct6OiETfwb/595c92d02312956921d379ceb4268139/image4.png" />
          </figure><p>The second general solution is to send the request without confirming that it can be handled by the server, then count on the server to forward the request elsewhere if it needs to. This could even be back to the client. This avoids the round-trip of latency that the first solution incurs, but there is a tradeoff: It puts the shard server in the request path, pumping bytes back to the client. Fortunately, we have a trick to minimize the amount of bytes we actually have to send back in this fashion, which I’ll describe in the next section.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1Cb7zWYrMfmQnMBooe8g2o/2a3d2b906b85cf8eff7666630ed78b9e/image9.png" />
          </figure>
    <div>
      <h3>Optimistically sending sharded requests</h3>
      <a href="#optimistically-sending-sharded-requests">
        
      </a>
    </div>
    <p>There are a couple of reasons why we chose to optimistically send sharded requests without waiting for permission.</p><p>The first reason of note is that we expect to see very few of these refused requests in practice. The reason is simple: If a shard client receives a refusal for a Worker, then it must cold start the Worker locally. As a consequence, it can serve all future requests locally without incurring another cold start. So, after a single refusal, the shard client won’t shard that Worker any more (until traffic for the Worker tapers off enough for an eviction, at least).</p><p>Generally, this means we expect that if a request gets sharded to a different server, the shard server will most likely accept the request for invocation. Since we expect success, it makes a lot more sense to optimistically send the entire request to the shard server than it does to incur a round-trip penalty to establish permission first.</p><p>The second reason is that we have a trick to avoid paying too high a cost for proxying the request back to the client, as I mentioned above.</p><p>We implement our cross-instance communication in the Workers runtime using <a href="https://capnproto.org/"><u>Cap’n Proto RPC</u></a>, whose distributed object model enables some incredible features, like <a href="https://blog.cloudflare.com/javascript-native-rpc/"><u>JavaScript-native RPC</u></a>. It is also the elder, spiritual sibling to the just-released <a href="https://blog.cloudflare.com/capnweb-javascript-rpc-library/"><u>Cap’n Web</u></a>.</p><p>In the case of sharding, Cap’n Proto makes it very easy to implement an optimal request refusal mechanism. When the shard client assembles the sharded request, it includes a handle (<a href="https://capnproto.org/rpc.html#distributed-objects"><u>called a </u><i><u>capability</u></i></a> in Cap’n Proto) to a lazily-loaded local instance of the Worker. 
This lazily-loaded instance has the <a href="https://github.com/cloudflare/workerd/blob/f93fd2625f8d4131d9d50762c09deddb01bb4c70/src/workerd/io/worker-interface.capnp#L582"><u>same exact interface as any other Worker exposed over RPC</u></a>. The difference is just that it’s lazy – it doesn’t get cold started until invoked. In the event the shard server decides it must refuse the request, it does not return a “go away” response, but instead returns the shard client’s own lazy capability!</p><p>The shard client’s application code only sees that it received a capability from the shard server. It doesn’t know where that capability is actually implemented. But the shard client’s <i>RPC system</i> does know where the capability lives! Specifically, it recognizes that the returned capability is actually a local capability – the same one that it passed to the shard server. Once it realizes this, it also realizes that any request bytes it continues to send to the shard server will just come looping back. So, it stops sending more request bytes, waits to receive back from the shard server all the bytes it already sent, and shortens the request path as soon as possible. This takes the shard server entirely out of the loop, preventing a “trombone effect.”</p>
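The refusal mechanism can be illustrated with a toy rendering of the capability trick. This is a plain-JavaScript simulation of the idea, not Cap'n Proto's actual RPC machinery: a real RPC system recognizes "this is my own capability" at the protocol layer, while here a simple identity check stands in for that, and all the helper names are invented for illustration.

```javascript
// A lazily-loaded local Worker capability: nothing is instantiated until
// the first invocation, mirroring the lazy capability the shard client
// includes in its sharded request.
function makeLazyWorker(coldStart) {
  let instance = null;
  return {
    invoke(request) {
      if (instance === null) instance = coldStart(); // local cold start
      return instance.handle(request);
    },
  };
}

// The shard server either grants the request to its warm instance, or
// refuses by returning the client's own lazy capability -- not an error.
function shardServer(clientCapability, overloaded, warmInstance) {
  return overloaded ? clientCapability : warmInstance;
}

// Shard client side:
const lazyLocal = makeLazyWorker(() => ({ handle: (req) => `local:${req}` }));
const warmRemote = { invoke: (req) => `remote:${req}` };

const granted = shardServer(lazyLocal, /* overloaded= */ true, warmRemote);
// The client's RPC layer notices it got its own capability back, stops
// proxying bytes to the shard server, and serves this Worker locally.
const servedLocally = granted === lazyLocal; // true when refused
```

The point of the identity check is the path-shortening described above: once the client sees its own capability come back, there is no reason to keep the shard server in the loop.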
    <div>
      <h3>Workers invoking Workers</h3>
      <a href="#workers-invoking-workers">
        
      </a>
    </div>
    <p>With load shedding behavior figured out, we thought the hard part was over.</p><p>But, of course, Workers may invoke other Workers. There are many ways this could occur, most obviously via <a href="https://developers.cloudflare.com/workers/runtime-apis/bindings/service-bindings/"><u>Service Bindings</u></a>. Less obviously, many of our favorite features, such as <a href="https://developers.cloudflare.com/kv/"><u>Workers KV</u></a>, are actually cross-Worker invocations. But there is one product, in particular, that stands out for its powerful ability to invoke other Workers: <a href="https://developers.cloudflare.com/cloudflare-for-platforms/workers-for-platforms/"><u>Workers for Platforms</u></a>.</p><p>Workers for Platforms allows you to run your own functions-as-a-service on Cloudflare infrastructure. To use the product, you deploy three special types of Workers:</p><ul><li><p>a dynamic dispatch Worker</p></li><li><p>any number of user Workers</p></li><li><p>an optional, parameterized <a href="https://developers.cloudflare.com/cloudflare-for-platforms/workers-for-platforms/configuration/outbound-workers/"><u>outbound Worker</u></a></p></li></ul><p>A typical request flow for Workers for Platforms goes like so: First, we invoke the dynamic dispatch Worker. The dynamic dispatch Worker chooses and invokes a user Worker. Then, the user Worker invokes the outbound Worker to intercept its subrequests. The dynamic dispatch Worker chose the outbound Worker's arguments prior to invoking the user Worker.</p><p>To really amp up the fun, the dynamic dispatch Worker could have a <a href="https://developers.cloudflare.com/workers/observability/logs/tail-workers/"><u>tail Worker</u></a> attached to it. This tail Worker would need to be invoked with traces related to all the preceding invocations. 
Importantly, it should be invoked one single time with all events related to the request flow, not invoked multiple times for different fragments of the request flow.</p><p>You might further ask, can you nest Workers for Platforms? I don’t know the official answer, but I can tell you that the code paths do exist, and they do get exercised.</p><p>To support this nesting doll of Workers, we keep a context stack during invocations. This context includes things like ownership overrides, resource limit overrides, trust levels, tail Worker configurations, outbound Worker configurations, feature flags, and so on. This context stack was manageable-ish when everything was executed on a single thread. For sharding to be truly useful, though, we needed to be able to move this context stack around to other machines.</p><p>Our choice of Cap’n Proto RPC as our primary communications medium helped us make sense of it all. To shard Workers deep within a stack of invocations, we serialize the context stack into a Cap’n Proto data structure and send it to the shard server. The shard server deserializes it into native objects, and continues the execution where things left off.</p><p>As with load shedding, Cap’n Proto’s distributed object model provides us simple answers to otherwise difficult questions. Take the tail Worker question – how do we coalesce tracing data from invocations which got fanned out across any number of other servers back to one single place? Easy: create a capability (a live Cap’n Proto object) for a reportTraces() callback on the dynamic dispatch Worker’s home server, and put that in the serialized context stack. Now, that context stack can be passed around at will. That context stack will end up in multiple places: At a minimum, it will end up on the user Worker’s shard server and the outbound Worker’s shard server. It may also find its way to other shard servers if any of those Workers invoked service bindings! 
Each of those shard servers can call the reportTraces() callback, and be confident that the data will make its way back to the right place: the dynamic dispatch Worker’s home server. None of those shard servers need to actually know <i>where</i> that home server is. Phew!</p>
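The reportTraces() pattern can be sketched in plain JavaScript. The names follow the description above, but this is a simplified local simulation: in production the callback is a Cap'n Proto capability carried inside the serialized context stack, whereas here a closure captures the same essential property, that callers need not know where the trace data actually lives.

```javascript
// Simplified sketch of the tail Worker trace callback.
function makeDispatchContext() {
  const traces = []; // lives on the dispatch Worker's home server
  return {
    // The "capability": any shard server can call this to route its
    // fragment of tracing data back home.
    reportTraces(events) { traces.push(...events); },
    // Called once on the home server to invoke the tail Worker with all
    // fragments coalesced, rather than once per fragment.
    drain() { return traces.splice(0); },
  };
}

const ctx = makeDispatchContext();
// The context stack travels to every server involved in the request.
const contextStack = { owner: "dispatch-worker", reportTraces: ctx.reportTraces };

// Different shard servers report fragments through the same callback.
contextStack.reportTraces([{ worker: "user-worker", event: "fetch" }]);
contextStack.reportTraces([{ worker: "outbound-worker", event: "subrequest" }]);

const allEvents = ctx.drain(); // both fragments, coalesced in one place
```

However far the context stack travels, every fragment lands back in one array on the home server, which is exactly what the tail Worker needs.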
    <div>
      <h3>Eviction rates down, warm request rates up</h3>
      <a href="#eviction-rates-down-warm-request-rates-up">
        
      </a>
    </div>
    <p>Features like this are always satisfying to roll out, because they produce graphs showing huge efficiency gains.</p><p>Once fully rolled out, only about 4% of total requests from enterprise traffic ended up being sharded. To put that another way, 96% of all enterprise requests are to Workers which are sufficiently loaded that we <i>must</i> run multiple instances of them in a data center.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3BsNColqVqUYNpfUv1wz8W/d80edb0c93cf741eab3603a9b2ef57e8/image5.png" />
          </figure><p>Despite that low total rate of sharding, we reduced our global Worker eviction rate by 10x. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1w3sQH8iQRP6oQFCCTm9BQ/815c49d51c473aafebfb001f58762797/image10.png" />
          </figure><p>Our eviction rate is a measure of memory pressure within our system. You can think of it like garbage collection at a macro level, and it has the same implications. Fewer evictions mean our system uses memory more efficiently. This has the happy consequence of using less CPU to clean up our memory. More relevant to Workers users, the increased efficiency means we can keep Workers in memory for an order of magnitude longer, improving their warm request rate and reducing their latency.</p><p>The high leverage shown – sharding just 4% of our traffic to improve memory efficiency by 10x – is a consequence of the power-law distribution of Internet traffic.</p><p>A <a href="https://en.wikipedia.org/wiki/Power_law"><u>power law distribution</u></a> is a phenomenon which occurs across many fields of science, including linguistics, sociology, physics, and, of course, computer science. Events which follow power law distributions typically see a huge amount clustered in some small number of “buckets”, and the rest spread out across a large number of those “buckets”. Word frequency is a classic example: A small handful of words like “the”, “and”, and “it” occur in texts with extremely high frequency, while other words like “eviction” or “trombone” might occur only once or twice in a text.</p><p>In our case, the majority of Workers requests go to a small handful of high-traffic Workers, while a very long tail goes to a huge number of low-traffic Workers. The 4% of requests which were sharded are all to low-traffic Workers, which are the ones that benefit the most from sharding.</p><p>So did we eliminate cold starts? Or will there be an <i>Eliminating Cold Starts 3</i> in our future?</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/419NdQonMOOAsirFmMl885/28b0266ee311ebffb342002e5e1cf54e/image3.png" />
          </figure><p>For enterprise traffic, our warm request rate increased from 99.9% to 99.99% – that’s three 9’s to four 9’s. Conversely, this means that the cold start rate went from 0.1% to 0.01% of requests, a 10x decrease. A moment’s thought, and you’ll realize that this is coherent with the eviction rate graph I shared above: A 10x decrease in the number of Workers we destroy over time must imply we’re creating 10x fewer to begin with.</p><p>Simultaneously, our warm request rate became less volatile throughout the course of the day.</p><p>Hmm.</p><p>I hate to admit this to you, but I still notice a little bit of space at the top of the graph. 😟</p><p><a href="https://www.cloudflare.com/careers/"><u>Can you help us get to five 9’s?</u></a></p> ]]></content:encoded>
            <category><![CDATA[Birthday Week]]></category>
            <category><![CDATA[Cap'n Proto]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Engineering]]></category>
            <category><![CDATA[TLS]]></category>
            <guid isPermaLink="false">1mLzSCaF2U02LD3DDd97Mn</guid>
            <dc:creator>Harris Hancock</dc:creator>
        </item>
        <item>
            <title><![CDATA[Automatically Secure: how we upgraded 6,000,000 domains by default to get ready for the Quantum Future]]></title>
            <link>https://blog.cloudflare.com/automatically-secure/</link>
            <pubDate>Wed, 24 Sep 2025 14:00:00 GMT</pubDate>
            <description><![CDATA[ After a year since we started enabling Automatic SSL/TLS, we want to talk about these results, why they matter, and how we’re preparing for the next leap in Internet security. ]]></description>
            <content:encoded><![CDATA[ <p>The Internet is in constant motion. Sites scale, traffic shifts, and attackers adapt. Security that worked yesterday may not be enough tomorrow. That’s why the technologies that protect the web — such as Transport Layer Security (TLS) and emerging post-quantum cryptography (PQC) — must also continue to evolve. We want to make sure that everyone benefits from this evolution automatically, so we enabled the strongest protections by default.</p><p>During <a href="https://blog.cloudflare.com/introducing-automatic-ssl-tls-securing-and-simplifying-origin-connectivity/"><u>Birthday Week 2024</u></a>, we announced Automatic SSL/TLS: a service that scans origin server configurations of domains behind Cloudflare, and automatically upgrades them to the most secure encryption mode they support. In the past year, <b>this system has quietly strengthened security for more than 6 million domains </b>— ensuring Cloudflare can always connect to origin servers over the safest possible channel, without customers lifting a finger.</p><p>Now, a year after we started enabling Automatic SSL/TLS, we want to talk about these results, why they matter, and how we’re preparing for the next leap in Internet security.</p>
    <div>
      <h2>The Basics: TLS protocol</h2>
      <a href="#the-basics-tls-protocol">
        
      </a>
    </div>
    <p>Before diving in, let’s review the basics of Transport Layer Security (<a href="https://www.cloudflare.com/learning/ssl/transport-layer-security-tls/"><u>TLS</u></a>). The protocol allows two strangers (like a client and server) to communicate securely.</p><p>Every secure web session begins with a TLS handshake. Before a single byte of your data moves across the Internet, servers and clients need to agree on a shared secret key that will protect the confidentiality and integrity of your data. The key agreement handshake kicks off with a TLS <i>ClientHello</i> message. This message is the browser/client announcing, “Here’s who I want to talk to (via <a href="https://www.cloudflare.com/learning/ssl/what-is-sni/"><u>SNI</u></a>), and here are the key agreement methods I understand.” The server then proves who it is with its own credentials in the form of a certificate, and together they establish a shared secret key that will protect everything that follows. </p><p>TLS 1.3 added a clever shortcut: instead of waiting to be told which method to use for the shared key agreement, the browser can guess what key agreement the server supports, and include one or more <a href="https://blog.cloudflare.com/rfc-8446-aka-tls-1-3/"><u>keyshares</u></a> right away. If the guess is correct, the handshake skips an extra round trip and the secure connection is established more quickly. If the guess is wrong, the server responds with a <i>HelloRetryRequest</i> (HRR), telling the browser which key agreement method to retry with. This speculative guessing is a major reason TLS 1.3 is so much faster than TLS 1.2.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/W2t0sZjiliwZ0FGfFFG6k/113c44b54da1c0355d5bf76fba3080fa/1-2.png" />
          </figure><p>Once both sides agree, the chosen keyshare is used to create a shared secret that encrypts the messages they exchange and allows only the right parties to decrypt them.</p>
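The guess-and-retry flow can be written down as a small negotiation function. The group names mirror real TLS groups, but this is an illustrative model of the logic only, not a TLS implementation, and it omits details like the separate supported_groups list a real ClientHello carries.

```javascript
// Illustrative model of TLS 1.3 keyshare guessing. The client includes
// keyshares for groups it guesses the server supports. If one matches,
// the handshake completes in a single round trip; otherwise the server
// answers with HelloRetryRequest naming a group to retry with.
function negotiate(clientKeyshares, serverGroups) {
  const match = clientKeyshares.find((g) => serverGroups.includes(g));
  if (match !== undefined) return { type: "accept", group: match };
  // No usable keyshare: one extra round trip to retry.
  return { type: "HelloRetryRequest", group: serverGroups[0] };
}

// Good guess: one round trip.
negotiate(["X25519MLKEM768", "x25519"], ["X25519MLKEM768", "secp256r1"]);
// -> { type: "accept", group: "X25519MLKEM768" }

// Bad guess: the client must retry with a group the server named.
negotiate(["x25519"], ["secp256r1"]);
// -> { type: "HelloRetryRequest", group: "secp256r1" }
```

The asymmetry between the two cases is the whole performance story: a correct guess costs nothing, and a wrong one costs exactly one round trip.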
    <div>
      <h3>The nitty-gritty details of key agreement</h3>
      <a href="#the-nitty-gritty-details-of-key-agreement">
        
      </a>
    </div>
    <p>Up until recently, most of these handshakes have relied on <a href="https://blog.cloudflare.com/a-relatively-easy-to-understand-primer-on-elliptic-curve-cryptography/"><u>elliptic curve cryptography</u></a> (ECC) using a curve known as X25519. But looming on the horizon are quantum computers, which could one day break ECC algorithms like X25519 and others. To prepare, the industry is shifting toward post-quantum key agreement with <a href="https://nvlpubs.nist.gov/nistpubs/fips/nist.fips.203.pdf"><u>ML-KEM</u></a>, deployed in a hybrid mode (<a href="https://datatracker.ietf.org/doc/draft-ietf-tls-ecdhe-mlkem/"><u>X25519 + ML-KEM</u></a>). This ensures that even if quantum machines arrive, <a href="https://en.wikipedia.org/wiki/Harvest_now,_decrypt_later"><u>harvested traffic today</u></a> can’t be decrypted tomorrow. X25519 + ML-KEM is <a href="https://radar.cloudflare.com/adoption-and-usage#post-quantum-encryption-adoption"><u>steadily rising to become the most popular</u></a> key agreement for connections to Cloudflare.</p><p>The TLS handshake model is the foundation for how we encrypt web communications today. The history of TLS is really the story of <i>iteration under pressure</i>. It’s a protocol that had to keep evolving, so trust on the web could keep pace with how Internet traffic has changed. It’s also what makes technologies like <b>Cloudflare’s Automatic SSL/TLS</b> possible, by abstracting decades of protocol battles and crypto engineering into a single click, so customer websites can be secured by default without requiring every operator to be a cryptography expert.</p>
    <div>
      <h2>History Lesson: Stumbles and Standards</h2>
      <a href="#history-lesson-stumbles-and-standards">
        
      </a>
    </div>
    <p>Early versions of TLS (then called SSL) in the 1990s suffered from weak keys, limited protection against attacks like <a href="https://en.wikipedia.org/wiki/Man-in-the-middle_attack"><u>man-in-the-middle</u></a>, and low adoption on the Internet. To stabilize things, the <a href="https://www.ietf.org/"><u>IETF</u></a> stepped in and released <a href="https://www.ietf.org/rfc/rfc2246.txt"><u>TLS 1.0</u></a>, followed by TLS <a href="https://datatracker.ietf.org/doc/html/rfc4346"><u>1.1</u></a> and <a href="https://datatracker.ietf.org/doc/html/rfc5246"><u>1.2</u></a> through the 2000s. These versions added stronger ciphers and patched new attack vectors, but years of fixes and extensions left the protocol bloated and hard to evolve.</p><p>The early 2010s marked a turning point. After the <a href="https://iapp.org/news/a/the-snowden-disclosures-10-years-on"><u>Snowden disclosures</u></a>, the Internet doubled down on encryption by default. Initiatives like <a href="https://en.wikipedia.org/wiki/Let%27s_Encrypt"><u>Let’s Encrypt</u></a>, the mass adoption of <a href="https://en.wikipedia.org/wiki/HTTPS"><u>HTTPS</u></a>, and Cloudflare’s own commitment to offer <a href="https://www.cloudflare.com/application-services/products/ssl/"><u>SSL/TLS for free</u></a> turned encryption from optional, expensive, and complex into an easy baseline requirement for a safer Internet.</p><p>All of this momentum led to <a href="https://datatracker.ietf.org/doc/html/rfc8446"><u>TLS 1.3</u></a> (2018), which cut away legacy baggage, locked in modern cipher suites, and made encrypted connections nearly as fast as the underlying transport protocols like TCP—and sometimes even faster with <a href="https://en.wikipedia.org/wiki/QUIC"><u>QUIC</u></a>.</p>
    <div>
      <h2>The CDN Twist</h2>
      <a href="#the-cdn-twist">
        
      </a>
    </div>
    <p>As Content Delivery Networks (CDNs) rose to prominence, they reshaped how TLS was deployed. Instead of a browser talking directly to a distant server hosting content (what Cloudflare calls an origin), it now spoke to the nearest edge data center, which may in turn speak to an origin server on the client’s behalf.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7CTywdNaDxUXcGHVg5i1MP/975f9b0a74b2b5c5fb59ecb64d3268bb/2.png" />
          </figure><p>This created <b>two distinct TLS layers</b>:</p><ul><li><p><b>Edge ↔ Browser TLS:</b> The front door, built to quickly take on new improvements in security and performance. Edges and browsers adopt modern protocols (TLS 1.3, QUIC, session resumption) to cut down on latency.</p></li><li><p><b>Edge ↔ Origin TLS:</b> The backhaul, which must be more flexible. Origins might be older, more poorly maintained, run legacy TLS stacks, or require custom certificate handling.</p></li></ul><p>In practice, CDNs became <i>translators</i>: modernizing encryption at the edge while still bridging to legacy origins. It’s why you can have a blazing-fast TLS 1.3 session from your phone, even if the origin server behind the CDN hasn’t been upgraded in years. </p><p>This is where Automatic SSL/TLS sits in the story of how we secure Internet communications. </p>
    <div>
      <h2>Automatic SSL/TLS </h2>
      <a href="#automatic-ssl-tls">
        
      </a>
    </div>
    <p>Automatic SSL/TLS grew out of Cloudflare’s mission to ensure the web was as encrypted as possible. While we had moved quickly to develop secure connections for the “front door” (from browsers to Cloudflare’s edge) with <a href="https://blog.cloudflare.com/introducing-universal-ssl/"><u>Universal SSL</u></a>, we knew that the “back door” (from Cloudflare’s edge to origin servers) would be slower and harder to upgrade. </p><p>One option we offered was <a href="https://developers.cloudflare.com/cloudflare-one/connections/connect-networks/"><u>Cloudflare Tunnel</u></a>, where a lightweight agent runs near the origin server and tunnels traffic securely back to Cloudflare. This approach ensures the connection always uses modern encryption, without requiring changes on the origin itself.</p><p>But not every customer uses Tunnel. Many connect origins directly to Cloudflare’s edge, where encryption depends on the origin server’s configuration. Traditionally this meant customers had to either manually select an <a href="https://developers.cloudflare.com/ssl/origin-configuration/ssl-modes/"><u>encryption mode</u></a> that worked for their origin server or rely on the default chosen by Cloudflare. </p><p>To improve the experience of choosing an encryption mode, we introduced our <a href="https://blog.cloudflare.com/ssl-tls-recommender/"><u>SSL/TLS Recommender</u></a> in 2021.</p><p>The Recommender scanned customer origin servers and then provided recommendations for their most secure encryption mode. 
For example, if the Recommender detected that an origin server was using a certificate signed by a trusted Certificate Authority (CA) such as Let’s Encrypt, rather than a self-signed certificate, it would recommend upgrading from <a href="https://developers.cloudflare.com/ssl/origin-configuration/ssl-modes/full/"><b><u>Full</u></b><u> encryption mode</u></a> to <a href="https://developers.cloudflare.com/ssl/origin-configuration/ssl-modes/full-strict/"><b><u>Full (Strict)</u></b><u> encryption mode</u></a>.</p><p>Based on how the origin responded, Recommender would tell customers if they could improve their SSL/TLS encryption mode to be more secure. The following encryption modes represent what the SSL/TLS Recommender could recommend to customers based on their origin responses: </p><table><tr><td><p><b>SSL/TLS mode</b></p></td><td><p><b>HTTP from visitor</b></p></td><td><p><b>HTTPS from visitor</b></p></td></tr><tr><td><p><a href="https://developers.cloudflare.com/ssl/origin-configuration/ssl-modes/off/"><u>Off</u></a></p></td><td><p>HTTP to Origin</p></td><td><p>HTTP to Origin</p></td></tr><tr><td><p><a href="https://developers.cloudflare.com/ssl/origin-configuration/ssl-modes/flexible/"><u>Flexible</u></a></p></td><td><p>HTTP to Origin</p></td><td><p>HTTP to Origin</p></td></tr><tr><td><p><a href="https://developers.cloudflare.com/ssl/origin-configuration/ssl-modes/full/"><u>Full</u></a></p></td><td><p>HTTP to Origin</p></td><td><p>HTTPS to Origin without certificate validation check</p></td></tr><tr><td><p><a href="https://developers.cloudflare.com/ssl/origin-configuration/ssl-modes/full-strict/"><u>Full (strict)</u></a></p></td><td><p>HTTP to Origin</p></td><td><p>HTTPS to Origin with certificate validation check</p></td></tr><tr><td><p><a href="https://developers.cloudflare.com/ssl/origin-configuration/ssl-modes/ssl-only-origin-pull/"><u>Strict (SSL-only origin pull)</u></a></p></td><td><p>HTTPS to Origin with certificate validation 
check</p></td><td><p>HTTPS to Origin with certificate validation check</p></td></tr></table><p>However, in the three years after launching our Recommender we discovered something troubling: of the over two million domains using Recommender, <b>only 30% of the recommendations that the system provided were followed</b>. A significant number of users would not complete the next step of pushing the button to inform Cloudflare that we could communicate with their origin over a more secure setting. </p><p>We were seeing sub-optimal settings that our customers could upgrade from without risk of breaking their site, but for various reasons, our users did not follow through with the recommendations. So we pushed forward by building a system that worked with Recommender and actioned the recommendations by default. </p>
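<p>The encryption modes in the table above boil down to a simple decision: given a mode and the scheme the visitor used, which scheme does Cloudflare use toward the origin? A hypothetical sketch of that lookup (mode names paraphrased from the table):</p>

```python
# (scheme used for HTTP visitors, scheme used for HTTPS visitors)
ORIGIN_SCHEME = {
    "off":         ("http",  "http"),
    "flexible":    ("http",  "http"),
    "full":        ("http",  "https"),   # HTTPS, but certificate not validated
    "full_strict": ("http",  "https"),   # HTTPS with certificate validation
    "strict":      ("https", "https"),   # SSL-only origin pull: always HTTPS
}

def origin_scheme(mode: str, visitor_scheme: str) -> str:
    on_http, on_https = ORIGIN_SCHEME[mode]
    return on_http if visitor_scheme == "http" else on_https

# Flexible looks encrypted to the visitor, but the origin leg is plaintext.
assert origin_scheme("flexible", "https") == "http"
# Strict upgrades even plaintext visitors to HTTPS on the origin leg.
assert origin_scheme("strict", "http") == "https"
```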
    <div>
      <h2>How does Automatic SSL/TLS work? </h2>
      <a href="#how-does-automatic-ssl-tls-work">
        
      </a>
    </div>
    <p>Automatic SSL/TLS works by crawling websites, looking for content over both HTTP and HTTPS, then comparing the results for compatibility. It also performs checks against the TLS certificate presented by the origin and looks at the type of content that is served to ensure it matches. If the downloaded content matches, Automatic SSL/TLS elevates the encryption level for the domain to the compatible and stronger mode, without risk of breaking the site.</p>
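<p>The comparison step can be sketched as follows (a hypothetical simplification: as described above, the real scan also validates the origin certificate and checks content types):</p>

```python
import hashlib

def responses_match(http_status: int, http_body: bytes,
                    https_status: int, https_body: bytes) -> bool:
    # Only treat the origin as safe to upgrade if the HTTPS response is
    # equivalent to the HTTP one: same status and identical content.
    if http_status != https_status:
        return False
    return hashlib.sha256(http_body).digest() == hashlib.sha256(https_body).digest()

assert responses_match(200, b"<html>hello</html>", 200, b"<html>hello</html>")
assert not responses_match(200, b"<html>hello</html>", 500, b"TLS error")
```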
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/49AaKdddEOgWXk1Oxlg2Qp/be44b863e2f4c797fa58c8b81f93f51a/3.png" />
          </figure><p>More specifically, these are the steps that Automatic SSL/TLS takes to upgrade a domain’s security: </p><ol><li><p>Each domain is scheduled for a scan <b>once per month</b>, until it reaches the maximum supported encryption mode.</p></li><li><p>The scan evaluates the current <a href="https://developers.cloudflare.com/ssl/origin-configuration/ssl-modes/"><u>encryption mode</u></a> for the domain. If it’s lower than what the Recommender thinks the domain can support based on the <a href="https://blog.cloudflare.com/introducing-automatic-ssl-tls-securing-and-simplifying-origin-connectivity/#:~:text=When%20the%20Recommender,recommendation%20is%20followed."><u>results</u></a> of its probes and content scans, the system begins a gradual upgrade.</p></li><li><p>Automatic SSL/TLS begins to upgrade the domain by connecting with origins over the more secure mode starting with just 1% of its traffic.</p></li><li><p>If connections to the origin succeed, the result is logged as successful.</p><ol><li><p>If they fail, the system records the failure to Cloudflare’s control plane and aborts the upgrade. 
Traffic is immediately downgraded back to the previous SSL/TLS setting to ensure seamless operation.</p></li></ol></li><li><p>If no issues are found, the new SSL/TLS encryption mode is applied to traffic in 10% increments until 100% of traffic uses the recommended mode.</p></li><li><p>Once 100% of traffic has been successfully upgraded with no TLS-related errors, the domain’s SSL/TLS setting is permanently updated.</p></li><li><p><b>Special handling for Flexible → Full/Strict:</b> These upgrades are more cautious because customers’ <a href="https://developers.cloudflare.com/cache/how-to/cache-keys/"><u>cache keys</u></a> are changed (from <code>http</code> to <code>https</code> origin scheme).</p><ol><li><p>In this situation, traffic ramps up from 1% to 10% in 1% increments, allowing customers’ cache to warm up.</p></li><li><p>After 10%, the system resumes the standard 10% increments until 100%.</p></li></ol></li></ol><p>We know that transparency and visibility are critical, especially when automated systems make changes. To keep customers informed, Automatic SSL/TLS sends a weekly digest to account <a href="https://developers.cloudflare.com/fundamentals/manage-members/roles/"><u>Super Administrators</u></a> whenever updates are made to domain encryption modes. This way, you always have visibility into what changed and when.  </p><p>In short, Automatic SSL/TLS automates what used to be trial and error: finding the strongest SSL/TLS mode your site can support while keeping everything working smoothly.</p>
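<p>The ramp described in the steps above can be sketched as a schedule of traffic percentages (an illustrative model, not Cloudflare’s rollout code):</p>

```python
def ramp_schedule(from_flexible: bool) -> list:
    # Flexible -> Full/Strict changes cache keys, so warm the cache with
    # 1% steps up to 10% before switching to the standard 10% steps.
    if from_flexible:
        steps = list(range(1, 11)) + list(range(20, 101, 10))
    else:
        # Standard upgrades: a 1% canary, then 10% increments to 100%.
        steps = [1] + list(range(10, 101, 10))
    return steps

assert ramp_schedule(False) == [1, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100]
assert ramp_schedule(True)[:10] == [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
assert ramp_schedule(True)[-1] == 100
```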
    <div>
      <h2>How are we doing so far?  </h2>
      <a href="#how-are-we-doing-so-far">
        
      </a>
    </div>
    <p>So far we have onboarded <b>all Free, Pro, and Business domains to use Automatic SSL/TLS</b>. We also have enabled this for <b>all new domains</b> that will onboard onto Cloudflare regardless of plan type. Soon, we will start onboarding Enterprise customers as well. If you already have an Enterprise domain and want to try out Automatic SSL/TLS we encourage you to <a href="https://developers.cloudflare.com/ssl/origin-configuration/ssl-modes/#automatic-ssltls-default"><u>enable it in the SSL/TLS section</u></a> of the dashboard or via the API. </p><p>As of the publishing of this blog, we’ve upgraded over <b>6 million domains</b> to be more secure without the website operators needing to manually configure anything on Cloudflare. </p><table><tr><td><p><b>Previous Encryption Mode</b></p></td><td><p><b>Upgraded Encryption Mode</b></p></td><td><p><b>Number of domains</b></p></td></tr><tr><td><p>Flexible</p></td><td><p>Full</p></td><td><p>~ 2,200,000</p></td></tr><tr><td><p>Flexible</p></td><td><p>Full (strict)</p></td><td><p>~ 2,000,000</p></td></tr><tr><td><p>Full </p></td><td><p>Full (strict)</p></td><td><p>~ 1,800,000</p></td></tr><tr><td><p>Off</p></td><td><p>Full</p></td><td><p>~ 7,000</p></td></tr><tr><td><p>Off</p></td><td><p>Full (strict)</p></td><td><p>~ 5,000</p></td></tr></table><p>We’re most excited about the over 4 million domains that moved from <a href="https://developers.cloudflare.com/ssl/origin-configuration/ssl-modes/flexible/"><u>Flexible</u></a> or <a href="https://developers.cloudflare.com/ssl/origin-configuration/ssl-modes/off/"><u>Off</u></a>, which use HTTP to origin servers, to <a href="https://developers.cloudflare.com/ssl/origin-configuration/ssl-modes/full/"><u>Full</u></a> or <a href="https://developers.cloudflare.com/ssl/origin-configuration/ssl-modes/full-strict/"><u>Strict</u></a>, which use HTTPS. 
</p><p>If you have a reason to use a particular encryption mode (e.g., on a test domain that isn’t production ready) you can always disable Automatic SSL/TLS and manually set the encryption mode that works best for your use case.</p><p>Today, SSL/TLS mode works on a domain-wide level, which can feel blunt. This means that one suboptimal subdomain can keep the entire domain in a less secure TLS setting, to ensure availability. Our long-term goal is to make these controls more precise, so that Automatic SSL/TLS and encryption modes can optimize security per origin or subdomain, rather than treating every hostname the same.</p>
    <div>
      <h2>Impact on origin-facing connections</h2>
      <a href="#impact-on-origin-facing-connections">
        
      </a>
    </div>
    <p>Since we began onboarding domains to <b>Automatic SSL/TLS</b> in late 2024 and early 2025, we’ve been able to measure how origin connections across our network are shifting toward stronger security. Looking at the ratios across all origin requests, the trends are clear:</p><ul><li><p><b>Encryption is rising.</b> Plaintext connections are steadily declining, a reflection of Automatic SSL/TLS helping millions of domains move to HTTPS by default. We’ve seen <b>a correlated 7-8% reduction in plaintext origin-bound connections.</b> Still, some origins remain on outdated configurations, and these should be upgraded to keep pace with modern security expectations.</p></li><li><p><b>TLS 1.3 is surging.</b> Since late 2024, TLS 1.3 adoption has climbed sharply, now making up the majority of encrypted origin traffic (almost 60%). While Automatic SSL/TLS doesn’t control which TLS version an origin supports, this shift is an encouraging sign for both performance and security.</p></li><li><p><b>Older versions are fading.</b> Month after month, TLS 1.2 continues to shrink, while TLS 1.0 and 1.1 are now so rare they barely register.</p></li></ul><p>The decline in plaintext connections is encouraging, but it also highlights a long tail of servers still relying on outdated packages or configurations. Sites like <a href="https://www.ssllabs.com/ssltest/"><u>SSL Labs</u></a> can be used, for instance, to check a server’s TLS configuration. However, simply copy-pasting settings to achieve a high rating can be risky, so we encourage customers to review their origin TLS configurations carefully. In addition, <a href="https://developers.cloudflare.com/ssl/origin-configuration/origin-ca/"><u>Cloudflare origin CA</u></a> or <a href="https://developers.cloudflare.com/cloudflare-one/connections/connect-networks/"><u>Cloudflare Tunnel</u></a> can help provide guidance for upgrading origin security.</p>
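<p>If you want to check which protocol version your own origin negotiates, Python’s standard <code>ssl</code> module is enough for a quick probe (a sketch that assumes the host is reachable on port 443; tools like SSL Labs perform a far deeper analysis):</p>

```python
import socket
import ssl

def classify(version: str) -> str:
    # Bucket a negotiated protocol version roughly as the stats above do.
    if version == "TLSv1.3":
        return "modern"
    if version == "TLSv1.2":
        return "acceptable"
    return "deprecated"  # TLSv1 / TLSv1.1

def probe_tls_version(host: str, port: int = 443) -> str:
    # Complete a handshake with default settings and report the version
    # the server picked, e.g. "TLSv1.3". Requires network access.
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.version()

assert classify("TLSv1.3") == "modern"
assert classify("TLSv1.1") == "deprecated"
```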
    <div>
      <h3>Upgraded domain results</h3>
      <a href="#upgraded-domain-results">
        
      </a>
    </div>
    <p>Instead of focusing on the entire network of origin-facing connections from Cloudflare, we’re now going to drill into specific changes that we’ve seen from domains that have been upgraded by <b>Automatic SSL/TLS</b>. </p><p>By January 2025, most domains had been enrolled in Automatic SSL/TLS, and the results were dramatic: a near 180-degree shift from plaintext to encrypted communication with origins. After that milestone, traffic patterns leveled off into a steady plateau, reflecting a more stable baseline of secure connections across the network. There is a slight drop in encrypted traffic, which may reflect some of the originally upgraded domains manually turning off Automatic SSL/TLS.</p><p>But the story doesn’t end there. In the past two months (July and August 2025), we’ve observed another noticeable uptick in encrypted origin traffic. This likely reflects customers upgrading outdated origin packages and enabling stronger TLS support—evidence that Automatic SSL/TLS not only raised the floor on encryption but continues nudging the long tail of domains toward better security.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6nJe12swMSMXBQsgzEhXtq/78debf8e0c3efbaf66bce8cf6e623c80/4.png" />
          </figure><p>To further explore the “encrypted” line above, we wanted to see the split between TLS 1.2 and 1.3. We originally wanted to include all TLS versions we support, but the levels of 1.0 and 1.1 were so small that they skewed the graph, so they were left out. We see a noticeable rise in support for both TLS 1.2 and 1.3 between Cloudflare and origin servers. Also interesting to note: while TLS 1.2 is decreasing network-wide, it shows a general increase among the domains that have been automatically upgraded, potentially signifying origin TLS stacks that could be updated further.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/BoRlq4irKWuvuXs5E4e8l/3971165f5029a03ae64dac79235a8671/5.png" />
          </figure><p>Finally, for Full (Strict) mode, we wanted to investigate the number of successful certificate validations we performed. This line shows a dramatic increase of approximately 40% in successful certificate validations performed for customers upgraded by Automatic SSL/TLS. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5nNyiMNQ4xtOubbrhnDuRY/af16c0792a73de71fa0176e6c1cfeb0b/6.png" />
          </figure><p>We’ve seen a largely successful rollout of Automatic SSL/TLS so far, with millions of domains upgraded to stronger encryption by default. We’ve seen Automatic SSL/TLS help improve origin-facing security, safely pushing connections to stronger modes whenever possible, without risking site breakage. Looking ahead, we’ll continue to expand this capability to more customer use cases as we help to build a more encrypted Internet.</p>
    <div>
      <h2>What will we build next for Automatic SSL/TLS? </h2>
      <a href="#what-will-we-build-next-for-automatic-ssl-tls">
        
      </a>
    </div>
    <p>We’re expanding Automatic SSL/TLS with new features that give customers more visibility and control, while keeping the system safe by default. First, we’re building an <b>ad-hoc scan</b> option that lets you rescan your origin earlier than the standard monthly cadence. This means if you’ve just rotated certificates, upgraded your origin’s TLS configuration, or otherwise changed how your server handles encryption, you won’t need to wait for the next scheduled pass—Cloudflare will be able to re-evaluate and move you to a stronger mode right away.</p><p>In addition, we’re working on <b>error surfacing</b> that will highlight origin connection problems directly in the dashboard and provide actionable guidance for remediation. Instead of discovering after the fact that an upgrade failed, or a change on the origin resulted in a less secure setting than what was set previously, customers will be able to see where the issue lies and how to fix it. </p><p>Finally, for <b>newly onboarded domains</b>, we plan to add clearer guidance on when to finish configuring the origin before Cloudflare runs its first scan and sets an encryption mode. Together, these improvements are designed to reduce surprises, give customers more agency, and ensure smoother upgrades. We expect all three features to roll out by June 2026.</p>
    <div>
      <h2>Post Quantum Era</h2>
      <a href="#post-quantum-era">
        
      </a>
    </div>
    <p>Looking ahead, quantum computers introduce a serious risk: data encrypted today can be harvested and decrypted years later once quantum attacks become practical. To counter this <a href="https://en.wikipedia.org/wiki/Harvest_now,_decrypt_later"><u>harvest-now, decrypt-later</u></a> threat, the industry is moving towards post-quantum cryptography (PQC)—algorithms designed to withstand quantum attacks. We have extensively written on this subject <a href="https://blog.cloudflare.com/tag/post-quantum/"><u>in our previous blogs</u></a>.</p><p>In August 2024, NIST finalized its PQC standards: <a href="https://nvlpubs.nist.gov/nistpubs/fips/nist.fips.203.pdf"><u>ML-KEM</u></a> for key agreement, and <a href="https://nvlpubs.nist.gov/nistpubs/fips/nist.fips.204.pdf"><u>ML-DSA</u></a> and <a href="https://nvlpubs.nist.gov/nistpubs/fips/nist.fips.205.pdf"><u>SLH-DSA</u></a> for digital signatures. In collaboration with industry partners, Cloudflare has helped drive the development and deployment of PQC. We have deployed the hybrid key agreement, combining ML-KEM (post-quantum secure) and X25519 (classical), to secure TLS 1.3 traffic to our servers and internal systems. As of mid-September 2025, <a href="https://radar.cloudflare.com/adoption-and-usage#post-quantum-encryption-adoption"><u>around 43%</u></a> of human-generated connections to Cloudflare are already protected with the hybrid post-quantum secure key agreement – a huge milestone in preparing the Internet for the quantum era.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2hgIUNO8TM50kvAOvzQ8rg/cdbe5b3d64390fc4b946036e2f37471d/6.png" />
          </figure><p>But things look different on the other side of the network. When Cloudflare connects to origins, we act as the client, navigating a fragmented landscape of hosting providers, software stacks, and middleboxes. Each origin may support a different set of cryptographic features, and not all are ready for hybrid post-quantum handshakes.</p><p>To manage this diversity without the risk of breaking connections, we relied on <i>HelloRetryRequest</i>. Instead of sending a post-quantum keyshare immediately in the <i>ClientHello</i>, we only advertise support for it. If the origin server supports the post-quantum key agreement, it uses <i>HelloRetryRequest</i> to request it from Cloudflare, and the post-quantum connection is created. The downside is that this extra round trip (from the retry) cancels out the performance gains of TLS 1.3 and makes the connection feel closer to TLS 1.2 for uncached requests.</p><p>Back in 2023, <a href="https://developers.cloudflare.com/ssl/post-quantum-cryptography/pqc-to-origin/"><u>we launched an API endpoint</u></a>, so customers could manually opt their origins into preferring post-quantum connections. If set, we avoid the extra round trip and try to create a post-quantum connection at the start of the TLS session. Similarly, we extended post-quantum protection to <a href="https://blog.cloudflare.com/post-quantum-tunnel/"><u>Cloudflare Tunnel</u></a>, making it one of the easiest ways to get origin-facing PQ today.</p><p><b>Starting Q4 2025, we’re taking the next step – making it </b><b><i>automatic</i></b><b>. </b>Just as we’ve done with SSL/TLS upgrades, Automatic SSL/TLS will begin testing, ramping, and enabling post-quantum handshakes with origins—without requiring customers to change a thing, as long as their origins support post-quantum key agreement.</p><p>Behind the scenes, we’re already scanning active origins about every 24 hours to test support and preferences for both classical and post-quantum key agreements. 
We’ve worked directly with vendors and customers to identify compatibility issues, and this new scanning system will be fully integrated into <a href="https://developers.cloudflare.com/ssl/origin-configuration/ssl-modes/#automatic-ssltls-default"><u>Automatic SSL/TLS</u></a>.</p><p>And the benefits won't stop at post-quantum. Even for classical handshakes, optimization matters. Today, the X25519 algorithm is used by default, but <b>our scanning data shows that more than 6% of origins currently prefer a different key agreement algorithm, </b>which leads to unnecessary <i>HelloRetryRequests </i>and wasted round trips<b>.</b> By folding this scanning data into Automatic SSL/TLS, we’ll improve connection establishment for classical TLS as well—squeezing out extra speed and reliability across the board.</p><p>As enterprises and hosting providers adopt PQC, our preliminary scanning pipeline has already found that <b>around 4% of origins could benefit from a post-quantum-preferred key agreement even today</b>, as shown below. This is an 8x increase since <a href="https://blog.cloudflare.com/post-quantum-to-origins/"><u>we started our scans in 2023</u></a>. We expect this number to grow at a steady pace as the industry continues to migrate to post-quantum protocols.</p>
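<p>The cost of the <i>HelloRetryRequest</i> dance described above can be modeled with a toy negotiation (group names here are illustrative labels, not wire-format identifiers):</p>

```python
def handshake_round_trips(client_shares, client_supported, server_preference):
    # The client sends key shares for `client_shares` but advertises support
    # for everything in `client_supported`. If the server's preferred group
    # arrives without a share attached, the server must send a
    # HelloRetryRequest, adding a full round trip before the handshake completes.
    preferred = next((g for g in server_preference if g in client_supported), None)
    if preferred is None:
        raise ValueError("no common group: handshake fails")
    return 1 if preferred in client_shares else 2

supported = ["x25519", "x25519_mlkem768"]
server = ["x25519_mlkem768", "x25519"]

# Conservative client: only an X25519 share, hybrid merely advertised.
assert handshake_round_trips(["x25519"], supported, server) == 2
# With scan data, send the origin's preferred share up front: no retry.
assert handshake_round_trips(["x25519_mlkem768"], supported, server) == 1
```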
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3g2Um1vTz6cuCtoYWtMU4C/7551fb50305a8512fa7cc22844024b99/8.png" />
          </figure><p>As part of this change, we will also <b>phase out</b> support for the pre-standard version X25519Kyber768 in favor of the final ML-KEM standard, again used in a hybrid, for edge-to-origin connections.</p><p>With Automatic SSL/TLS, we will soon proactively scan your origins by default and send the most preferred keyshare directly to your origin, removing the need for any extra round trip and improving both the security and performance of your origin connections.</p><p>At Cloudflare, we’ve always believed security is a right, not a privilege. From Universal SSL to post-quantum cryptography, our mission has been to make the strongest protections free and available to everyone. <b>Automatic SSL/TLS</b> is the next step—upgrading every domain to the best protocols automatically. Check the SSL/TLS section of your dashboard to ensure it’s enabled and join the millions of sites already secured for today and ready for tomorrow.</p> ]]></content:encoded>
            <category><![CDATA[Birthday Week]]></category>
            <category><![CDATA[TLS]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[Encryption]]></category>
            <category><![CDATA[Post-Quantum]]></category>
            <category><![CDATA[Research]]></category>
            <category><![CDATA[CDN]]></category>
            <guid isPermaLink="false">7nO4wFW304Eh2r48934ugz</guid>
            <dc:creator>Alex Krivit</dc:creator>
            <dc:creator>Suleman Ahmad</dc:creator>
            <dc:creator>Yawar Jamal</dc:creator>
        </item>
        <item>
            <title><![CDATA[Addressing the unauthorized issuance of multiple TLS certificates for 1.1.1.1]]></title>
            <link>https://blog.cloudflare.com/unauthorized-issuance-of-certificates-for-1-1-1-1/</link>
            <pubDate>Thu, 04 Sep 2025 17:30:00 GMT</pubDate>
            <description><![CDATA[ Unauthorized TLS certificates were issued for 1.1.1.1 by a Certification Authority without permission from Cloudflare. These rogue certificates have now been revoked. ]]></description>
            <content:encoded><![CDATA[ <p>Over the past few days Cloudflare has been notified through our vulnerability disclosure program and the <a href="https://groups.google.com/g/certificate-transparency/c/we_8SNGqA3w/m/ILXqa0hzAgAJ?utm_medium=email&amp;utm_source=footer"><u>certificate transparency mailing list</u></a> that unauthorized certificates were issued by <a href="https://www.fina.hr/"><u>Fina CA</u></a> for 1.1.1.1, one of the IP addresses used by our <a href="https://one.one.one.one/"><u>public DNS resolver service</u></a>. From February 2024 to August 2025, Fina CA <a href="https://crt.sh/?iPAddress=1.1.1.1&amp;match=="><u>issued</u></a> twelve certificates for 1.1.1.1 without our permission. We did not observe unauthorized issuance for any properties managed by Cloudflare other than 1.1.1.1.</p><p>We have no evidence that bad actors took advantage of this error. To impersonate Cloudflare's public DNS resolver 1.1.1.1, an attacker would not only require an unauthorized certificate and its corresponding private key, but targeted users would also need to trust the Fina CA. Furthermore, traffic between the client and 1.1.1.1 would have to be intercepted.</p><p>While this unauthorized issuance is an unacceptable lapse in security by Fina CA, we should have caught and responded to it earlier. After speaking with Fina CA, it appears that they issued these certificates for the purposes of internal testing. However, no CA should be issuing certificates for domains and IP addresses without validating control of them. At present all certificates have been <a href="http://rdc.fina.hr/RDC2020/FinaRDCCA2020partc1.crl"><u>revoked</u></a>. We are awaiting a full post-mortem from Fina.</p><p>While we regret this situation, we believe it is a useful opportunity to walk through how trust works on the Internet between networks like ourselves, destinations like 1.1.1.1, CAs like Fina, and devices like the one you are using to read this. 
To learn more about the mechanics, please keep reading.</p>
    <div>
      <h3>Background</h3>
      <a href="#background">
        
      </a>
    </div>
    <p>Cloudflare operates a <a href="https://one.one.one.one/"><u>public DNS resolver 1.1.1.1 service</u></a> that millions of devices use to resolve domain names from a human-readable format such as example.com to an IP address like 192.0.2.42 or 2001:db8::2a.</p><p>The 1.1.1.1 service is accessible using various methods, across multiple domain names, such as <a href="https://cloudflare-dns.com"><u>cloudflare-dns.com</u></a> and <a href="https://one.one.one.one"><u>one.one.one.one</u></a>, and also using various IP addresses, such as 1.1.1.1, 1.0.0.1, 2606:4700:4700::1111, and 2606:4700:4700::1001. <a href="https://one.one.one.one/family/"><u>1.1.1.1 for Families</u></a> also provides public DNS resolver services and is hosted on different IP addresses — 1.1.1.2, 1.1.1.3, 1.0.0.2, 1.0.0.3, 2606:4700:4700::1112, 2606:4700:4700::1113, 2606:4700:4700::1002, 2606:4700:4700::1003.</p><p>As originally specified in <a href="https://datatracker.ietf.org/doc/html/rfc1034"><u>RFC 1034</u></a> and <a href="https://datatracker.ietf.org/doc/html/rfc1035"><u>RFC 1035</u></a>, the DNS protocol includes no privacy or authenticity protections. DNS queries and responses are exchanged between client and server in plain text over UDP or TCP. These represent around 60% of queries received by the Cloudflare 1.1.1.1 service. The lack of privacy or authenticity protection means that any intermediary can potentially read the DNS query and response and modify them without the client or the server being aware.</p>
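<p>To make the “plain text” point concrete, here is a minimal sketch of a query in the RFC 1035 wire format. Every byte, including the name being looked up, is visible to (and modifiable by) any on-path party:</p>

```python
import struct

def build_query(name: str, txn_id: int = 0x1234) -> bytes:
    # 12-byte header: ID, flags (recursion desired), QDCOUNT=1, zero counts.
    header = struct.pack("!HHHHHH", txn_id, 0x0100, 1, 0, 0, 0)
    # QNAME: length-prefixed labels, terminated by a zero byte.
    qname = b"".join(bytes([len(label)]) + label.encode() for label in name.split("."))
    # QTYPE=1 (A record), QCLASS=1 (IN).
    return header + qname + b"\x00" + struct.pack("!HH", 1, 1)

query = build_query("example.com")
assert query[:2] == b"\x12\x34"                  # transaction ID, in the clear
assert b"example" in query and b"com" in query   # queried name is plaintext
```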
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6zbEvgOCwZomZTbgSGFuEo/d638f36eebdbf2577ea0b8172dff843e/image1.png" />
          </figure><p>To address these shortcomings, we have helped develop and deploy multiple solutions at the IETF. The two of interest to this post are DNS over TLS (DoT, <a href="https://datatracker.ietf.org/doc/html/rfc7858"><u>RFC 7858</u></a>) and DNS over HTTPS (DoH, <a href="https://datatracker.ietf.org/doc/html/rfc8484"><u>RFC 8484</u></a>). In both cases the DNS protocol itself is mainly unchanged, and the desirable security properties are implemented in a lower layer, replacing the simple use of plain-text in UDP and TCP in the original specification. Both DoH and DoT use TLS to establish an authenticated, private, and encrypted channel over which DNS messages can be exchanged. To learn more you can read <a href="https://blog.cloudflare.com/dns-encryption-explained/"><u>DNS Encryption Explained</u></a>.</p><p>During the <a href="https://www.cloudflare.com/learning/ssl/transport-layer-security-tls/"><u>TLS handshake</u></a>, the server proves its identity to the client by presenting a certificate. The client validates this certificate by verifying that it is signed by a Certification Authority that it already trusts. Only then does it establish a connection with the server. Once connected, TLS provides encryption and integrity for the DNS messages exchanged between client and server. This protects DoH and DoT against eavesdropping and tampering between the client and server.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/21YeKS2nYIFDI9uC3uClXE/6115e00945458d42627d973aef75076c/image2.png" />
          </figure><p>The TLS certificates used in DoT and DoH are the same kinds of certificates HTTPS websites serve. Most website certificates are issued for domain names like <a href="http://example.com"><u>example.com</u></a>. When a client connects to that website, it resolves the name <a href="http://example.com"><u>example.com</u></a> to an IP like 192.0.2.42, then connects to the domain on that IP address. The server responds with a TLS certificate containing <a href="http://example.com"><u>example.com</u></a>, which the device validates.</p><p>However, DNS server certificates tend to be used slightly differently. Certificates used for DoT and DoH have to contain the service IP addresses, not just domain names. This is because a client cannot resolve a domain name, such as <a href="https://cloudflare-dns.com"><u>cloudflare-dns.com</u></a>, until it already has a resolver to ask. Instead, devices are first set up by connecting to their resolver via a known IP address, such as 1.1.1.1 in the case of Cloudflare public DNS resolver. When this connection uses DoT or DoH, the resolver responds with a TLS certificate issued for that IP address, which the client validates. If the certificate is valid, the client believes that it is talking to the owner of 1.1.1.1 and starts sending DNS queries.</p><p>You can see that the IP addresses are included in the certificate Cloudflare’s public resolver uses for DoT/DoH:</p>
            <pre><code>Certificate:
  Data:
      Version: 3 (0x2)
      Serial Number:
          02:7d:c8:c5:e1:72:94:ae:c9:ed:3f:67:72:8e:8a:08
      Signature Algorithm: sha256WithRSAEncryption
      Issuer: C=US, O=DigiCert Inc, CN=DigiCert Global G2 TLS RSA SHA256 2020 CA1
      Validity
          Not Before: Jan  2 00:00:00 2025 GMT
          Not After : Jan 21 23:59:59 2026 GMT
      Subject: C=US, ST=California, L=San Francisco, O=Cloudflare, Inc., CN=cloudflare-dns.com
      X509v3 extensions:
          X509v3 Subject Alternative Name:
              DNS:cloudflare-dns.com, DNS:*.cloudflare-dns.com, DNS:one.one.one.one, IP Address:1.0.0.1, IP Address:1.1.1.1, IP Address:162.159.36.1, IP Address:162.159.46.1, IP Address:2606:4700:4700:0:0:0:0:1001, IP Address:2606:4700:4700:0:0:0:0:1111, IP Address:2606:4700:4700:0:0:0:0:64, IP Address:2606:4700:4700:0:0:0:0:6400</code></pre>
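<p>A DoT/DoH client accepting this certificate must check that the IP address it dialed appears among the certificate's IP SANs. The Python sketch below illustrates that comparison against the SAN list shown above; note that IP SANs match as binary addresses, so an abbreviated IPv6 spelling compares equal to its expanded form. This illustrates the matching rule only; it is not a substitute for a TLS library's certificate verification.</p>

```python
import ipaddress

# IP SAN entries from the certificate shown above (IPv6 abbreviated).
SAN_IPS = [
    "1.0.0.1", "1.1.1.1", "162.159.36.1", "162.159.46.1",
    "2606:4700:4700::1001", "2606:4700:4700::1111",
    "2606:4700:4700::64", "2606:4700:4700::6400",
]

def ip_matches_san(ip: str, san_ips=SAN_IPS) -> bool:
    """Return True if `ip` appears among the certificate's IP SANs.

    IP SANs are compared as parsed addresses, not strings, so
    '2606:4700:4700:0:0:0:0:1111' matches '2606:4700:4700::1111'.
    """
    addr = ipaddress.ip_address(ip)
    return any(ipaddress.ip_address(s) == addr for s in san_ips)

print(ip_matches_san("1.1.1.1"))                      # True
print(ip_matches_san("2606:4700:4700:0:0:0:0:1111"))  # True
print(ip_matches_san("8.8.8.8"))                      # False
```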
            
    <div>
      <h3>Rogue certificate issuance</h3>
      <a href="#rogue-certificate-issuance">
        
      </a>
    </div>
    <p>The section above describes normal, expected use of Cloudflare’s public DNS resolver 1.1.1.1 service, using certificates managed by Cloudflare. However, Cloudflare has been made aware of other, unauthorized certificates being issued for 1.1.1.1. Since certificate validation is the mechanism by which DoH and DoT clients establish the authenticity of a DNS resolver, this is a concern. Let’s now dive a little further into the security model provided by DoH and DoT.</p><p>Consider a client that is preconfigured to use the 1.1.1.1 resolver service using DoT. The client must establish a TLS session with the configured server before it can send any DNS queries. To be trusted, the server needs to present a certificate issued by a CA that the client trusts. The collection of certificates trusted by the client is also called the root store.</p>
            <pre><code>Certificate:
  Data:
      Version: 3 (0x2)
      Serial Number:
          02:7d:c8:c5:e1:72:94:ae:c9:ed:3f:67:72:8e:8a:08
      Signature Algorithm: sha256WithRSAEncryption
      Issuer: C=US, O=DigiCert Inc, CN=DigiCert Global G2 TLS RSA SHA256 2020 CA1</code></pre>
            <p>A Certification Authority (CA) is an organisation, such as DigiCert in the section above, whose role is to receive requests to sign certificates and verify that the requester has control of the domain. In this incident, Fina CA issued certificates for 1.1.1.1 without Cloudflare's involvement. This means that Fina CA did not properly check whether the requestor had legitimate control over 1.1.1.1. According to Fina CA:</p><blockquote><p>“They were issued for the purpose of internal testing of certificate issuance in the production environment. An error occurred during the issuance of the test certificates when entering the IP addresses and as such they were published on Certificate Transparency log servers.”</p></blockquote><p>Although it’s not clear whether Fina CA sees it as an error, we emphasize that it is not an error to publish test certificates on Certificate Transparency (more about what that is later on). Instead, the error at hand is Fina CA using their production keys to sign a certificate for an IP address without permission of the controller. We have <a href="https://ripe84.ripe.net/archives/video/747/"><u>talked about</u></a> misuse of 1.1.1.1 in documentation, lab, and testing environments at length. Instead of the Cloudflare public DNS resolver 1.1.1.1 IP address, Fina should have used an IP address it controls itself.</p><p>Unauthorized certificates are unfortunately not uncommon, whether due to negligence — such as <a href="https://bugzilla.mozilla.org/show_bug.cgi?id=1930029"><u>IdenTrust</u></a> in November 2024 — or compromise. Famously in 2011, the Dutch CA DigiNotar was <a href="https://threatpost.com/final-report-diginotar-hack-shows-total-compromise-ca-servers-103112/77170/"><u>hacked</u></a>, and its keys were used to issue hundreds of certificates. 
This hack was a wake-up call and motivated the introduction of Certificate Transparency (CT), later formalised in <a href="https://datatracker.ietf.org/doc/html/rfc6962"><u>RFC 6962</u></a>. The goal of Certificate Transparency is not to directly prevent misissuance, but to be able to detect any misissuance once it has happened, by making sure every certificate issued by a CA is publicly available for inspection.</p><p>In Certificate Transparency, several independent parties, including <a href="https://blog.cloudflare.com/introducing-certificate-transparency-monitoring/"><u>Cloudflare</u></a>, operate public logs of issued certificates. Many modern browsers do not accept certificates unless they provide proof in the form of signed certificate timestamps (<a href="https://certificate.transparency.dev/howctworks/"><u>SCT</u></a>s) that the certificate has been logged in at least two logs. Domain owners can therefore monitor all public CT logs for any certificate containing domains they care about. If they see a certificate for their domains that they did not authorize, they can raise the alarm. CT is also the data source for public services such as <a href="https://crt.sh"><u>crt.sh</u></a> and Cloudflare Radar’s <a href="https://radar.cloudflare.com/certificate-transparency"><u>certificate transparency page</u></a>.</p><p>Not all clients require proof of inclusion in certificate transparency. Browsers do, but most DNS clients don’t. We were fortunate that Fina CA did submit the unauthorized certificates to the CT logs, which allowed them to be discovered.</p>
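<p>CT logs are append-only Merkle trees, so an SCT can later be backed by a proof that the certificate is included under the log's published root hash. The sketch below shows the RFC 6962 hashing rules, simplified to a power-of-two number of entries; real logs also handle unbalanced trees, inclusion proofs, and consistency proofs. The entry byte strings are placeholders.</p>

```python
import hashlib

def leaf_hash(entry: bytes) -> bytes:
    # RFC 6962: leaves are hashed with a 0x00 prefix ...
    return hashlib.sha256(b"\x00" + entry).digest()

def node_hash(left: bytes, right: bytes) -> bytes:
    # ... and interior nodes with a 0x01 prefix, so a leaf can never
    # be reinterpreted as an interior node (second-preimage defense).
    return hashlib.sha256(b"\x01" + left + right).digest()

def merkle_root(entries):
    """Merkle Tree Hash for a power-of-two number of entries."""
    level = [leaf_hash(e) for e in entries]
    while len(level) > 1:
        level = [node_hash(level[i], level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

certs = [b"cert-a", b"cert-b", b"cert-c", b"cert-d"]
print(merkle_root(certs).hex())
```

<p>Because any change to a logged certificate changes the root, a log cannot quietly drop or alter an entry without breaking the proofs it has already handed out.</p>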
    <div>
      <h3>Investigation into potential malicious use</h3>
      <a href="#investigation-into-potential-malicious-use">
        
      </a>
    </div>
    <p>Our immediate concern was that someone had maliciously used the certificates to impersonate the 1.1.1.1 service. Such an attack would require all the following:</p><ol><li><p>An attacker would require a rogue certificate and its corresponding private key.</p></li><li><p>Attacked clients would need to trust the Fina CA.</p></li><li><p>Traffic between the client and 1.1.1.1 would have to be intercepted.</p></li></ol><p>In light of this incident, we have reviewed these requirements one by one:</p><p>1. We know that a certificate was issued without Cloudflare's involvement. We must assume that a corresponding private key exists, which is not under Cloudflare's control. This could be used by an attacker. Fina CA wrote to us that the private keys were exclusively in Fina’s controlled environment and were immediately destroyed even before the certificates were revoked. As we have no way to verify this, we have taken, and continue to take, steps to detect malicious use as described in point 3.</p><p>2. For these certificates to be used nefariously, the client’s root store must include the Certification Authority (CA) that issued them. Some clients do trust Fina CA: it is included by default in <a href="https://learn.microsoft.com/en-us/security/trusted-root/participants-list"><u>Microsoft’s root store</u></a> and listed as an <a href="https://eidas.ec.europa.eu/efda/trust-services/browse/eidas/tls"><u>EU Trust Service provider</u></a>. We can exclude other clients, as the CA certificate is not included by default in the root stores of <a href="https://android.googlesource.com/platform/system/ca-certificates/+/master/files/"><u>Android</u></a>, <a href="https://support.apple.com/en-us/HT209143"><u>Apple</u></a>, <a href="https://wiki.mozilla.org/CA/Included_Certificates"><u>Mozilla</u></a>, or <a href="https://g.co/chrome/root-policy"><u>Chrome</u></a>; with these default settings, those users cannot have been affected. 
Upon discovering the problem, we immediately reached out to Fina CA, Microsoft, and the <a href="https://eidas.ec.europa.eu/efda/trust-services/browse/eidas/tls/tl/HR"><u>EU Trust Service provider</u></a>. Microsoft responded quickly, and started rolling out an update to their <i>disallowed list</i>, which should cause clients that use it to stop trusting the certificate.</p><p>3. Finally, we have launched an investigation into possible interception between users and 1.1.1.1. The first way this could happen is when the attacker is on-path of the client request. Such man-in-the-middle attacks are likely to be invisible to us. Clients will get responses from their on-path middlebox and we have no reliable way of telling that is happening. On-path interference has been a persistent problem for 1.1.1.1, which we’ve been <a href="https://blog.cloudflare.com/fixing-reachability-to-1-1-1-1-globally/"><u>working on</u></a> ever since we announced 1.1.1.1.</p><p>A second scenario can occur when a malicious actor is off-path, but is able to hijack 1.1.1.1 routing via BGP. These are scenarios we have discussed in a<a href="https://blog.cloudflare.com/bgp-hijack-detection/"> <u>previous blog post</u></a>, and <a href="https://manrs.org/2024/05/rpki-rov-deployment-reaches-major-milestone/"><u>increasing adoption of RPKI route origin validation (ROV)</u></a> makes BGP hijacks with high penetration harder. We looked at the historical BGP announcements involving 1.1.1.1, and have found no evidence that such routing hijacks took place.</p><p>Although we cannot be certain, so far we have seen no evidence that these certificates have been used to impersonate Cloudflare public DNS resolver 1.1.1.1 traffic. In later sections we discuss the steps we have taken to prevent such impersonation in the future, as well as concrete actions you can take to protect your own systems and users.</p>
    <div>
      <h3>A closer look at the unauthorized certificates’ attributes</h3>
      <a href="#a-closer-look-at-the-unauthorized-certificates-attributes">
        
      </a>
    </div>
    <p>All unauthorized certificates for 1.1.1.1 were valid for exactly one year and included other domain names. Most of these domain names are not registered, which indicates that the certificates were issued without proper domain control validation. This violates sections 3.2.2.4 and 3.2.2.5 of the CA/Browser Forum’s <a href="https://cabforum.org/working-groups/server/baseline-requirements/requirements/#3224-validation-of-domain-authorization-or-control"><u>Baseline Requirements</u></a>, and sections 3.2.2.3 and 3.2.2.4 of the <a href="https://rdc.fina.hr/RDC2015/CPWSA1-12-en.pdf"><u>Fina CA Certificate Policy</u></a>.</p><p>The full list of domain names we identified on the unauthorized certificates is as follows:</p>
            <pre><code>fina.hr
ssltest5
test.fina.hr
test.hr
test1.hr
test11.hr
test12.hr
test5.hr
test6
test6.hr
testssl.fina.hr
testssl.finatest.hr
testssl.hr
testssl1.finatest.hr
testssl2.finatest.hr</code></pre>
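<p>Some of these names could not have passed domain control validation even in principle: a single-label name has no registrable public suffix, so there is nothing for a CA to validate control of. A quick check over the list above picks those out (a heuristic only; it says nothing about whether the remaining names are registered):</p>

```python
# SAN entries from the unauthorized certificates, as listed above.
names = [
    "fina.hr", "ssltest5", "test.fina.hr", "test.hr", "test1.hr",
    "test11.hr", "test12.hr", "test5.hr", "test6", "test6.hr",
    "testssl.fina.hr", "testssl.finatest.hr", "testssl.hr",
    "testssl1.finatest.hr", "testssl2.finatest.hr",
]

# A name with no dot has no public suffix and cannot be a
# publicly registrable FQDN.
single_label = [n for n in names if "." not in n]
print(single_label)  # ['ssltest5', 'test6']
```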
            <p>It’s also worth noting that the Subject attribute points to a fictional organisation <b>TEST D.D.</b>, as can be seen on this unauthorized certificate:</p>
            <pre><code>        Serial Number:
            a5:30:a2:9c:c1:a5:da:40:00:00:00:00:56:71:f2:4c
        Signature Algorithm: sha256WithRSAEncryption
        Issuer: C=HR, O=Financijska agencija, CN=Fina RDC 2015
        Validity
            Not Before: Nov  2 23:45:15 2024 GMT
            Not After : Nov  2 23:45:15 2025 GMT
        Subject: C=HR, O=TEST D.D., L=ZAGREB, CN=testssl.finatest.hr, serialNumber=VATHR-32343828408.306
        X509v3 extensions:
            X509v3 Subject Alternative Name:
                DNS:testssl.finatest.hr, DNS:testssl2.finatest.hr, IP Address:1.1.1.1</code></pre>
            
    <div>
      <h3>Incident timeline and impact</h3>
      <a href="#incident-timeline-and-impact">
        
      </a>
    </div>
    <p><i>All timestamps are UTC. All certificates are identified by their date of validity.</i></p><p>The <a href="https://crt.sh/?id=12116084225"><u>first certificate</u></a> was issued to be valid starting February 2024, and revoked 33 minutes later. Eleven certificate issuances with common name 1.1.1.1 followed from February 2024 to August 2025. Public reports were made on <a href="https://news.ycombinator.com/item?id=45089708"><u>Hacker News</u></a> and on the <a href="https://groups.google.com/g/certificate-transparency/c/we_8SNGqA3w/m/ILXqa0hzAgAJ"><u>certificate-transparency mailing list</u></a> in early September 2025, to which Cloudflare responded.</p><p>While responding to the incident, we identified the full list of misissued certificates, their revocation status, and which clients trust them.</p><p>The full timeline for the incident is as follows.</p><table><tr><td><p><b>Date &amp; Time (UTC)</b></p></td><td><p><b>Event Description</b></p></td></tr><tr><td><p>2024-02-18 11:07:33</p></td><td><p><a href="https://crt.sh/?id=12116084225"><u>First certificate issuance</u></a> revoked on 2024-02-18 11:40:00</p></td></tr><tr><td><p>2024-09-25 08:04:03</p></td><td><p><a href="https://crt.sh/?id=14681939427"><u>Issuance</u></a> revoked on 2024-11-06 07:36:05</p></td></tr><tr><td><p>2024-10-04 07:55:38</p></td><td><p><a href="https://crt.sh/?id=14793030836"><u>Issuance</u></a> revoked on 2024-10-04 07:56:56</p></td></tr><tr><td><p>2024-10-04 08:05:48</p></td><td><p><a href="https://crt.sh/?id=14793121895"><u>Issuance</u></a> revoked on 2024-11-06 07:39:55</p></td></tr><tr><td><p>2024-10-15 06:28:48</p></td><td><p><a href="https://crt.sh/?id=14939369380"><u>Issuance</u></a> revoked on 2024-11-06 07:35:36</p></td></tr><tr><td><p>2024-11-02 23:45:15</p></td><td><p><a href="https://crt.sh/?id=15190039061"><u>Issuance</u></a> revoked on 2024-11-02 23:48:42</p></td></tr><tr><td><p>2025-03-05 09:12:23</p></td><td><p><a 
href="https://crt.sh/?id=16939550348"><u>Issuance</u></a> revoked on 2025-03-05 09:13:22</p></td></tr><tr><td><p>2025-05-24 22:56:21</p></td><td><p><a href="https://crt.sh/?id=18603461241"><u>Issuance</u></a> revoked on 2025-09-04 06:13:27</p></td></tr><tr><td><p>2025-06-28 23:05:32</p></td><td><p><a href="https://crt.sh/?id=19318694206"><u>Issuance</u></a> revoked on 2025-07-18 07:01:27</p></td></tr><tr><td><p>2025-07-18 07:05:23</p></td><td><p><a href="https://crt.sh/?id=19749594221"><u>Issuance</u></a> revoked on 2025-07-18 07:09:45</p></td></tr><tr><td><p>2025-07-18 07:13:14</p></td><td><p><a href="https://crt.sh/?id=19749721864"><u>Issuance</u></a> revoked on 2025-09-04 06:30:36</p></td></tr><tr><td><p>2025-08-26 07:49:00</p></td><td><p><a href="https://crt.sh/?id=20582951233"><u>Last certificate issuance</u></a> revoked on 2025-09-04 06:33:20</p></td></tr><tr><td><p>2025-09-01 05:23:00</p></td><td><p><a href="https://news.ycombinator.com/item?id=45089708"><u>Hacker News submission</u></a> about a possible unauthorized issuance</p></td></tr><tr><td><p>2025-09-02 04:50:00</p></td><td><p>Report shared with us on HackerOne, but was mistriaged.</p></td></tr><tr><td><p>2025-09-03 02:35:00</p></td><td><p>Second report shared with us on HackerOne, but also mistriaged.</p></td></tr><tr><td><p>2025-09-03 10:59:00</p></td><td><p><a href="https://groups.google.com/g/certificate-transparency/c/we_8SNGqA3w/m/ILXqa0hzAgAJ?utm_medium=email&amp;utm_source=footer"><u>Report sent</u></a> on the public <a><u>certificate-transparency@googlegroups.com</u></a> mailing list, picked up by the team.</p></td></tr><tr><td><p>2025-09-03 11:33:00</p></td><td><p>First response by Cloudflare on the mailing list about starting the investigation</p></td></tr><tr><td><p>2025-09-03 12:08:00</p></td><td><p>Incident declared</p></td></tr><tr><td><p>2025-09-03 12:16:00</p></td><td><p>Notification of an unauthorised issuance sent to Fina CA, Microsoft Root Store, and EU Trust service 
provider</p></td></tr><tr><td><p>2025-09-03 12:23:00</p></td><td><p>Cloudflare identifies an initial list of nine rogue certificates</p></td></tr><tr><td><p>2025-09-03 12:24:00</p></td><td><p>Outreach to Fina CA to inform them about the unauthorized issuance, requesting revocation</p></td></tr><tr><td><p>2025-09-03 12:26:00</p></td><td><p>Identified the number of requests served on the 1.1.1.1 IP address, and associated names/services</p></td></tr><tr><td><p>2025-09-03 12:42:00</p></td><td><p>As a precautionary measure, began investigation to rule out the possibility of a BGP hijack for 1.1.1.1</p></td></tr><tr><td><p>2025-09-03 18:48:00</p></td><td><p>Second notification of the incident to Fina CA</p></td></tr><tr><td><p>2025-09-03 21:27:00</p></td><td><p>Microsoft Root Store notifies us that they are preventing further use of the identified unauthorized certificates by using their quick-revocation mechanism.</p></td></tr><tr><td><p>2025-09-04 06:13:27</p></td><td><p>Fina revoked all certificates.</p></td></tr><tr><td><p>2025-09-04 12:44:00</p></td><td><p>Cloudflare receives a response from Fina indicating “an error occurred during the issuance of the test certificates when entering the IP addresses and as such they were published on Certificate Transparency log servers. [...] Fina will eliminate the possibility of such an error recurring.”</p></td></tr></table>
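<p>One way to read this table is by time-to-revocation, which varied from minutes to months. A short Python sketch over a few (issued, revoked) pairs taken from the table makes the spread explicit:</p>

```python
from datetime import datetime

FMT = "%Y-%m-%d %H:%M:%S"

# (issued, revoked) pairs copied from the timeline above (UTC):
# the first, a quick same-day revocation, and two longer-lived ones.
events = [
    ("2024-02-18 11:07:33", "2024-02-18 11:40:00"),
    ("2024-09-25 08:04:03", "2024-11-06 07:36:05"),
    ("2025-05-24 22:56:21", "2025-09-04 06:13:27"),
]

for issued, revoked in events:
    delta = datetime.strptime(revoked, FMT) - datetime.strptime(issued, FMT)
    print(f"{issued} -> revoked after {delta}")
```

<p>The first pair comes out at roughly 33 minutes, while the 2025-05-24 certificate remained unrevoked for over three months, until after the public reports.</p>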
    <div>
      <h3>Remediation and follow-up steps</h3>
      <a href="#remediation-and-follow-up-steps">
        
      </a>
    </div>
    <p>Cloudflare has invested from the very start in the Certificate Transparency ecosystem. Not only do we operate CT logs ourselves, we also run a CT monitor that we use to <a href="https://developers.cloudflare.com/ssl/edge-certificates/additional-options/certificate-transparency-monitoring/"><u>alert customers when certificates are mis-issued for their domains</u></a>.</p><p>It is therefore disappointing that we failed to properly monitor certificates for our own domain. We failed three times. The first time because 1.1.1.1 is an IP certificate and our system failed to alert on these. The second time because even if we were to receive certificate issuance alerts, as any of our customers can, we did not implement sufficient filtering. With the sheer number of names and issuances we manage, it has not been possible for us to keep up with manual reviews. Finally, because of this noisy monitoring, we did not enable alerting for all of our domains. We are addressing all three shortcomings.</p><p>We double-checked all certificates issued for our names, including but not limited to 1.1.1.1, using certificate transparency, and confirmed that as of 3 September, the Fina CA issued certificates are the only unauthorized issuances. We contacted Fina, and the root programs we know that trust them, to ask for revocation and investigation. The certificates have been revoked.</p><p>Despite no indication of usage of these certificates so far, we take this incident extremely seriously. We have identified several steps we can take to address the risk of these sorts of problems occurring in the future, and we plan to start working on them immediately:</p><p><b>Alerting</b>: Cloudflare will improve alerts and escalation for issuance of certificates for Cloudflare-owned domains, including 1.1.1.1 certificates.</p><p><b>Transparency</b>: The issuance of these unauthorised 1.1.1.1 certificates was detected because Fina CA used Certificate Transparency. 
Transparency inclusion is not enforced by most DNS clients, which means this detection was fortunate. We are working on bringing transparency to non-browser clients, in particular DNS clients that rely on TLS.</p><p><b>Bug Bounty</b>: Our procedure for triaging reports made through our vulnerability disclosure program was the cause of our delayed response. We are working to revise our triaging process to ensure such reports get the right visibility.</p><p><b>Monitoring</b>: During this incident, our team relied on <a href="https://crt.sh"><u>crt.sh</u></a> to provide us a convenient UI to explore CA issued certificates. We’d like to give a shout-out to the <a href="https://www.sectigo.com/"><u>Sectigo team</u></a> for maintaining this tool. Given Cloudflare is an active CT Monitor, we have started to build a dedicated UI to explore our data <a href="https://radar.cloudflare.com/certificate-transparency"><u>in Radar</u></a>. We are also looking to enable exploration of certificates with IP addresses as common names in Radar.</p>
    <div>
      <h3>What steps should you take?</h3>
      <a href="#what-steps-should-you-take">
        
      </a>
    </div>
    <p>This incident demonstrates the disproportionate impact that the current root store model can have. It takes only a single certification authority going rogue to put everyone at risk.</p><p>If you are an IT manager with a fleet of managed devices, you should consider whether you need to take direct action to revoke these unauthorized certificates. We provide the list in the timeline section above. As the certificates have since been revoked, it is possible that no direct intervention is required; however, system-wide revocation is neither instantaneous nor automatic, so we recommend checking.</p><p>If you are tasked to review the policy of a root store that includes Fina CA, you should take immediate action to review their inclusion in your program. The issue that has been identified through the course of this investigation raises concerns and requires a clear report and follow-up from the CA. In addition, to make it possible to detect future such incidents, you should consider having a requirement for all CAs in your root store to participate in Certificate Transparency. Without CT logs, problems such as the one we describe here are impossible to address before they result in impact to end users.</p><p>We are not suggesting that you should stop using DoH or DoT. DNS over UDP and TCP are unencrypted, which puts every single query and response at risk of tampering and unauthorised surveillance. However, we believe that DoH and DoT client security could be improved if clients required that server certificates be included in a certificate transparency log.</p>
    <div>
      <h3>Conclusion</h3>
      <a href="#conclusion">
        
      </a>
    </div>
    <p>This event is the first time we have observed a rogue issuance of a certificate for our public DNS resolver 1.1.1.1 service. While we have no evidence this was malicious, we know that there might be future attempts that are.</p><p>We plan to accelerate how quickly we discover and alert on these types of issues ourselves. We know that we can catch these earlier, and we plan to do so.</p><p>The identification of these kinds of issues relies on an ecosystem of partners working together to support Certificate Transparency. We are grateful for the monitors who noticed and reported this issue.</p> ]]></content:encoded>
            <category><![CDATA[1.1.1.1]]></category>
            <category><![CDATA[DNS]]></category>
            <category><![CDATA[Resolver]]></category>
            <category><![CDATA[DoH]]></category>
            <category><![CDATA[TLS]]></category>
            <category><![CDATA[Certificate Authority]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[Certificate Transparency]]></category>
            <guid isPermaLink="false">6dgQ2aH6eirkXOANX0QikR</guid>
            <dc:creator>Joe Abley</dc:creator>
            <dc:creator>Thibault Meunier</dc:creator>
            <dc:creator>Vicky Shrestha</dc:creator>
            <dc:creator>Bas Westerbaan</dc:creator>
        </item>
        <item>
            <title><![CDATA[Resolving a Mutual TLS session resumption vulnerability]]></title>
            <link>https://blog.cloudflare.com/resolving-a-mutual-tls-session-resumption-vulnerability/</link>
            <pubDate>Fri, 07 Feb 2025 20:13:14 GMT</pubDate>
            <description><![CDATA[ Cloudflare patched a Mutual TLS (mTLS) vulnerability (CVE-2025-23419) reported via its Bug Bounty Program. The flaw in session resumption allowed client certificates to authenticate across different zones. ]]></description>
            <content:encoded><![CDATA[ <p>On January 23, 2025, Cloudflare was notified via its <a href="https://www.cloudflare.com/en-gb/disclosure/"><u>Bug Bounty Program</u></a> of a vulnerability in Cloudflare’s <a href="https://www.cloudflare.com/en-gb/learning/access-management/what-is-mutual-tls/"><u>Mutual TLS</u></a> (mTLS) implementation. </p><p>The vulnerability affected customers who were using mTLS and involved a flaw in our session resumption handling. Cloudflare’s investigation revealed <b>no</b> evidence that the vulnerability was being actively exploited. Tracked as <a href="https://nvd.nist.gov/vuln/detail/CVE-2025-23419"><u>CVE-2025-23419</u></a>, the vulnerability was mitigated within 32 hours of Cloudflare being notified. Customers who were using Cloudflare’s API Shield in conjunction with <a href="https://developers.cloudflare.com/waf/custom-rules/"><u>WAF custom rules</u></a> that validated the issuer's Subject Key Identifier (<a href="https://developers.cloudflare.com/ruleset-engine/rules-language/fields/reference/cf.tls_client_auth.cert_issuer_ski/"><u>SKI</u></a>) were not vulnerable. Access policies such as identity verification, IP address restrictions, and device posture assessments were also not vulnerable.</p>
    <div>
      <h2>Background</h2>
      <a href="#background">
        
      </a>
    </div>
    <p>The bug bounty report detailed that a client with a valid mTLS certificate for one Cloudflare zone could use the same certificate to resume a TLS session with another Cloudflare zone using mTLS, without having to authenticate the certificate with the second zone.</p><p>Cloudflare customers can implement mTLS through Cloudflare <a href="https://developers.cloudflare.com/api-shield/security/mtls/"><u>API Shield</u></a> with Custom Firewall Rules and the <a href="https://developers.cloudflare.com/cloudflare-one/identity/devices/access-integrations/mutual-tls-authentication/"><u>Cloudflare Zero Trust</u></a> product suite. Cloudflare establishes the TLS session with the client and forwards the client certificate to Cloudflare’s Firewall or Zero Trust products, where customer policies are enforced.</p><p>mTLS operates by extending the standard TLS handshake to require authentication from both sides of a connection - the client and the server. In a typical TLS session, a client connects to a server, which presents its <a href="https://www.cloudflare.com/application-services/products/ssl/">TLS certificate</a>. The client verifies the certificate, and upon successful validation, an encrypted session is established. However, with mTLS, the client also presents its own TLS certificate, which the server verifies before the connection is fully established. Only if both certificates are validated does the session proceed, ensuring bidirectional trust.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2FXDaK0R6cpH4IZwSlCyXk/e8f6764656d2672f9eadf4e60851614f/BLOG-2667_2.png" />
          </figure><p>mTLS is useful for <a href="https://developers.cloudflare.com/api-shield/security/mtls/"><u>securing API communications</u></a>, as it ensures that only legitimate and authenticated clients can interact with backend services. Unlike traditional authentication mechanisms that rely on credentials or <a href="https://www.cloudflare.com/en-gb/learning/access-management/token-based-authentication/"><u>tokens</u></a>, mTLS requires possession of a valid certificate and its corresponding private key.</p><p>To improve TLS connection performance, Cloudflare employs <a href="https://blog.cloudflare.com/tls-session-resumption-full-speed-and-secure/"><u>session resumption</u></a>. Session resumption speeds up the handshake process, reducing both latency and resource consumption. The core idea is that once a client and server have successfully completed a TLS handshake, future handshakes should be streamlined — assuming that fundamental parameters such as the cipher suite or TLS version remain unchanged.</p><p>There are two primary mechanisms for session resumption: session IDs and session tickets. With session IDs, the server stores the session context and associates it with a unique session ID. When a client reconnects and presents this session ID in its ClientHello message, the server checks its cache. If the session is still valid, the handshake is resumed using the cached state.</p><p>Session tickets function in a stateless manner. Instead of storing session data, the server encrypts the session context and sends it to the client as a session ticket. In future connections, the client includes this ticket in its ClientHello, which the server can then decrypt to restore the session, eliminating the need for the server to maintain session state.</p><p>A resumed mTLS session leverages previously established trust, allowing clients to reconnect to a protected application without needing to re-initiate an mTLS handshake.</p>
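<p>The stateless-ticket idea can be sketched in a few lines: the server serializes session state and binds it with a keyed tag, so any state it later receives back must be state it previously issued. Real TLS session tickets are also encrypted and use rotating keys; the Python below (with invented field names) only illustrates the integrity and statelessness aspects.</p>

```python
import base64
import hashlib
import hmac
import json

# Server-side ticket key; real deployments rotate these regularly.
TICKET_KEY = b"server-secret-ticket-key"

def issue_ticket(session_state: dict) -> bytes:
    """Serialize session state and bind it with an HMAC tag."""
    blob = base64.b64encode(json.dumps(session_state, sort_keys=True).encode())
    tag = hmac.new(TICKET_KEY, blob, hashlib.sha256).hexdigest().encode()
    return blob + b"." + tag

def resume_from_ticket(ticket: bytes):
    """Return the session state if the tag verifies, else None."""
    blob, _, tag = ticket.rpartition(b".")
    expected = hmac.new(TICKET_KEY, blob, hashlib.sha256).hexdigest().encode()
    if not hmac.compare_digest(tag, expected):
        return None  # tampered or foreign ticket: full handshake required
    return json.loads(base64.b64decode(blob))

ticket = issue_ticket({"cipher": "TLS_AES_128_GCM_SHA256", "resumable": True})
print(resume_from_ticket(ticket))               # original state restored
print(resume_from_ticket(ticket[:-1] + b"X"))   # None: tag no longer verifies
```

<p>The server keeps no per-session state at all; everything it needs to resume is carried by the client, protected by the ticket key.</p>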
    <div>
      <h3>The mTLS resumption vulnerability</h3>
      <a href="#the-mtls-resumption-vulnerability">
        
      </a>
    </div>
    <p>In Cloudflare’s mTLS implementation, however, session resumption introduced an unintended behavior. <a href="https://boringssl.googlesource.com/boringssl"><u>BoringSSL</u></a>, the TLS library that Cloudflare uses, will store the client certificate from the originating, full TLS handshake in the session. Upon resuming that session, the client certificate is not revalidated against the full chain of trust, and the original handshake's verification status is respected. To avoid this situation, BoringSSL provides an API to partition session caches/tickets between different “contexts” defined by the application. Unfortunately, Cloudflare’s use of this API was not correct, which allowed TLS sessions to be resumed when they shouldn’t have been. </p><p>To exploit this vulnerability, the security researcher first set up two zones on Cloudflare and configured them behind Cloudflare’s proxy with mTLS enabled. Once their domains were configured, the researcher authenticated to the first zone using a valid client certificate, allowing Cloudflare to issue a TLS session ticket against that zone. </p><p>The researcher then changed the TLS Server Name Indication (SNI) and HTTP Host header from the first zone (which they had authenticated with) to target the second zone (which they had <i>not</i> authenticated with). The researcher then presented the session ticket when handshaking with the second Cloudflare-protected mTLS zone. This resulted in Cloudflare resuming the session with the second zone and reporting verification status for the cached client certificate as successful, bypassing the mTLS authentication that would normally be required to initiate a session.</p><p>If you were using additional validation methods in your API Shield or Access policies – for example, checking the issuer's SKI, identity verification, IP address restrictions, or device posture assessments – these controls continued to function as intended. 
However, due to the issue with TLS session resumption, the mTLS checks mistakenly returned a passing result without re-evaluating the full certificate chain.</p>
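<p>The partitioning that BoringSSL's context API enables amounts to keying resumption state by zone, not by session identifier alone. The toy cache below (zone names, ticket IDs, and field names are all invented) contrasts a flawed global cache with a partitioned one:</p>

```python
# Toy model of why resumption state must be partitioned per zone
# ("context"). This is an illustration, not Cloudflare's implementation.

class SessionCache:
    def __init__(self, partition_by_zone: bool):
        self.partition_by_zone = partition_by_zone
        self.store = {}

    def put(self, zone: str, session_id: str, cert_verified: bool):
        key = (zone, session_id) if self.partition_by_zone else session_id
        self.store[key] = cert_verified

    def resume(self, zone: str, session_id: str):
        key = (zone, session_id) if self.partition_by_zone else session_id
        return self.store.get(key)  # None forces a full handshake

# Flawed: a global cache lets zone-a's verification leak to zone-b.
flawed = SessionCache(partition_by_zone=False)
flawed.put("zone-a.example", "ticket123", cert_verified=True)
print(flawed.resume("zone-b.example", "ticket123"))  # True -- mTLS bypass

# Fixed: the partitioned cache misses, so zone-b re-runs full mTLS.
fixed = SessionCache(partition_by_zone=True)
fixed.put("zone-a.example", "ticket123", cert_verified=True)
print(fixed.resume("zone-b.example", "ticket123"))   # None
```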
    <div>
      <h2>Remediation and next steps</h2>
      <a href="#remediation-and-next-steps">
        
      </a>
    </div>
    <p>We have disabled TLS session resumption for all customers that have mTLS enabled. As a result, Cloudflare will no longer allow resuming sessions that cache client certificates and their verification status.</p><p>We are exploring ways to bring back the performance improvements from TLS session resumption for mTLS customers.</p>
    <div>
      <h2>Further hardening</h2>
      <a href="#further-hardening">
        
      </a>
    </div>
    <p>Customers can further harden their mTLS configuration and add enhanced logging to detect future issues by using Cloudflare's <a href="https://developers.cloudflare.com/rules/transform/"><u>Transform Rules</u></a>, logging, and firewall features.</p><p>While Cloudflare has mitigated the issue by disabling session resumption for mTLS connections, customers may want to implement additional monitoring at their origin to enforce stricter authentication policies. All customers using mTLS can also enable additional request headers using our <a href="https://developers.cloudflare.com/rules/transform/managed-transforms/reference/#add-tls-client-auth-headers"><u>Managed Transforms</u></a> product. Enabling this feature allows us to pass additional metadata to your origin with the details of the client certificate that was used for the connection.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7eYFaZUrBYTESAZEQHsnHS/8bdb9135ab58648529cb8339c48ebb2b/BLOG-2667_3.png" />
          </figure><p>With the feature enabled, you will see the following headers on requests where mTLS is used:</p>
            <pre><code>{
  "headers": {
    "Cf-Cert-Issuer-Dn": "CN=Taskstar Root CA,OU=Taskstar\\, Inc.,L=London,ST=London,C=UK",
    "Cf-Cert-Issuer-Dn-Legacy": "/C=UK/ST=London/L=London/OU=Taskstar, Inc./CN=Taskstar Root CA",
    "Cf-Cert-Issuer-Dn-Rfc2253": "CN=Taskstar Root CA,OU=Taskstar\\, Inc.,L=London,ST=London,C=UK",
    "Cf-Cert-Issuer-Serial": "7AB07CC0D10C38A1B554C728F230C7AF0FF12345",
    "Cf-Cert-Issuer-Ski": "A5AC554235DBA6D963B9CDE0185CFAD6E3F55E8F",
    "Cf-Cert-Not-After": "Jul 29 10:26:00 2025 GMT",
    "Cf-Cert-Not-Before": "Jul 29 10:26:00 2024 GMT",
    "Cf-Cert-Presented": "true",
    "Cf-Cert-Revoked": "false",
    "Cf-Cert-Serial": "0A62670673BFBB5C9CA8EB686FA578FA111111B1B",
    "Cf-Cert-Sha1": "64baa4691c061cd7a43b24bccb25545bf28f1111",
    "Cf-Cert-Sha256": "528a65ce428287e91077e4a79ed788015b598deedd53f17099c313e6dfbc87ea",
    "Cf-Cert-Ski": "8249CDB4EE69BEF35B80DA3448CB074B993A12A3",
    "Cf-Cert-Subject-Dn": "CN=MB,OU=Taskstar Admins,O=Taskstar,L=London,ST=Essex,C=UK",
    "Cf-Cert-Subject-Dn-Legacy": "/C=UK/ST=Essex/L=London/O=Taskstar/OU=Taskstar Admins/CN=MB ",
    "Cf-Cert-Subject-Dn-Rfc2253": "CN=MB,OU=Taskstar Admins,O=Taskstar,L=London,ST=London,C=UK",
    "Cf-Cert-Verified": "true",
    "Cf-Client-Cert-Sha256": "083129c545d7311cd5c7a26aabe3b0fc76818495595cea92efe111150fd2da2",
    }
}
</code></pre>
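<p>Customers who log or inspect these headers at their origin can enforce the same expectations there as a defense in depth. Here is a minimal sketch in Python, using the header names shown above (the pinned issuer SKI is the example value from this post, not a real CA’s):</p>

```python
def check_mtls_headers(headers: dict, pinned_issuer_ski: str) -> bool:
    """Accept a request only if Cloudflare reports a verified, non-revoked
    client certificate from the expected issuer."""
    return (
        headers.get("Cf-Cert-Presented") == "true"
        and headers.get("Cf-Cert-Verified") == "true"
        and headers.get("Cf-Cert-Revoked") == "false"
        and headers.get("Cf-Cert-Issuer-Ski") == pinned_issuer_ski
    )

PINNED_SKI = "A5AC554235DBA6D963B9CDE0185CFAD6E3F55E8F"  # example issuer SKI

assert check_mtls_headers(
    {
        "Cf-Cert-Presented": "true",
        "Cf-Cert-Verified": "true",
        "Cf-Cert-Revoked": "false",
        "Cf-Cert-Issuer-Ski": PINNED_SKI,
    },
    PINNED_SKI,
)
assert not check_mtls_headers({}, PINNED_SKI)  # no certificate metadata: reject
```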
            <p>Enterprise customers can also use our <a href="https://developers.cloudflare.com/logs/"><u>Cloudflare Log</u></a> products to add these headers via the Logs <a href="https://developers.cloudflare.com/logs/reference/custom-fields/"><u>Custom Fields</u></a> feature. For example:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3D864CsepB5U2wM1AWhYVu/ca7d3d1ca144bc4fb7ac7edddfdf5987/BLOG-2667_4.png" />
          </figure><p>This will add the following information to Cloudflare Logs.</p>
            <pre><code>"RequestHeaders": {
    "cf-cert-issuer-ski": "A5AC554235DBA6D963B9CDE0185CFAD6E3F55E8F",
    "cf-cert-sha256": "528a65ce428287e91077e4a79ed788015b598deedd53f17099c313e6dfbc87ea"
  },
</code></pre>
            <p>Customers already logging this information — either at their origin or via Cloudflare Logs — can retroactively check for unexpected certificate hashes or issuers that did not trigger any security policy.</p><p>Users are also able to use this information within their <a href="https://developers.cloudflare.com/learning-paths/application-security/firewall/custom-rules/"><u>WAF custom rules</u></a> to conduct additional checks. For example, checking the <a href="https://developers.cloudflare.com/ruleset-engine/rules-language/fields/reference/cf.tls_client_auth.cert_issuer_ski/"><u>Issuer's SKI</u></a> can provide an extra layer of security.</p>
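<p>For example, a custom rule along the following lines blocks any request whose client certificate either failed verification or was issued by an unexpected CA. The <code>cert_issuer_ski</code> field is documented at the link above; <code>cf.tls_client_auth.cert_verified</code> is its companion field in Cloudflare’s fields reference, and the SKI below is this post’s example issuer, which you would replace with your own:</p>

```
(not cf.tls_client_auth.cert_verified)
or (cf.tls_client_auth.cert_issuer_ski ne "A5AC554235DBA6D963B9CDE0185CFAD6E3F55E8F")
```

<p>Pair the expression with a Block action so that any request failing either check never reaches your origin.</p>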
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1YWZe9P1hhYEPJrWH4gpqi/b0a6f3c70a203032404c1ca0e2fc517c/BLOG-2667_5.png" />
          </figure><p>Customers who enabled this <a href="https://developers.cloudflare.com/api-shield/security/mtls/configure/#expression-builder"><u>additional check</u></a> were not vulnerable.</p>
    <div>
      <h2><b>Conclusion</b></h2>
      <a href="#conclusion">
        
      </a>
    </div>
    <p>We would like to thank Sven Hebrok, Felix Cramer, Tim Storm, Maximilian Radoy, and Juraj Somorovsky of Paderborn University who responsibly disclosed this issue via our <a href="https://hackerone.com/cloudflare?type=team"><u>HackerOne Bug Bounty Program</u></a>, allowing us to identify and mitigate the vulnerability. We welcome further submissions from our community of researchers to continually improve our products' security.</p><p>Finally, we want to apologize to our mTLS customers. Security is at the core of everything we do at Cloudflare, and we deeply regret any concerns this issue may have caused. We have taken immediate steps to resolve the vulnerability and have implemented additional safeguards to prevent similar issues in the future. </p>
    <div>
      <h2><b>Timeline </b></h2>
      <a href="#timeline">
        
      </a>
    </div>
    <p><i>All timestamps are in UTC</i></p><ul><li><p><b>2025-01-23 15:40</b> – Cloudflare is notified of a vulnerability in Mutual TLS and the use of session resumption.</p></li><li><p><b>2025-01-23 16:02 to 21:06</b> – Cloudflare validates Mutual TLS vulnerability and prepares a release to disable session resumption for Mutual TLS.</p></li><li><p><b>2025-01-23 21:26</b> – Cloudflare begins rollout of remediation.</p></li><li><p><b>2025-01-24 20:15</b> – Rollout completed. Vulnerability is remediated.</p></li></ul><p></p> ]]></content:encoded>
            <category><![CDATA[Vulnerabilities]]></category>
            <category><![CDATA[WAF]]></category>
            <category><![CDATA[Zero Trust]]></category>
            <category><![CDATA[SASE]]></category>
            <category><![CDATA[TLS]]></category>
            <category><![CDATA[Network Services]]></category>
            <guid isPermaLink="false">4gJhafUsmUjkevKu55304a</guid>
            <dc:creator>Matt Bullock</dc:creator>
            <dc:creator>Rushil Mehra</dc:creator>
            <dc:creator>Alessandro Ghedini</dc:creator>
        </item>
        <item>
            <title><![CDATA[A look at the latest post-quantum signature standardization candidates]]></title>
            <link>https://blog.cloudflare.com/another-look-at-pq-signatures/</link>
            <pubDate>Thu, 07 Nov 2024 14:00:00 GMT</pubDate>
            <description><![CDATA[ NIST has standardized four post-quantum signature schemes so far, and they’re not done yet: there are fourteen new candidates in the running for standardization. ]]></description>
            <content:encoded><![CDATA[ <p>On October 24, 2024, the National Institute of Standards and Technology (NIST) <a href="https://groups.google.com/a/list.nist.gov/g/pqc-forum/c/khAfIZPktRE/m/bBZWmET-AAAJ"><u>announced</u></a> that they’re advancing fourteen post-quantum signature schemes to the second round of the “<a href="https://csrc.nist.gov/projects/pqc-dig-sig"><u>signatures on ramp</u></a>” competition. “Post-quantum” means that these algorithms are designed to resist <a href="https://blog.cloudflare.com/the-quantum-menace/"><u>the attack of quantum computers</u></a>. NIST already standardized four post-quantum signature schemes (<a href="https://blog.cloudflare.com/nists-first-post-quantum-standards/"><u>ML-DSA, SLH-DSA</u></a>, <a href="https://csrc.nist.gov/News/2020/stateful-hash-based-signature-schemes-sp-800-208"><u>XMSS, and LMS</u></a>) and they are drafting a standard for a fifth (<a href="https://falcon-sign.info/"><u>Falcon</u></a>). Why do we need even more, you might ask? We’ll get to that.</p><p>A regular reader of the blog will know that this is not the first time we’ve taken the measure of post-quantum signatures. In <a href="https://blog.cloudflare.com/sizing-up-post-quantum-signatures/"><u>2021</u></a> we took a first hard look, and reported on the performance impact we expect, based on large-scale measurements. Since then, dozens of new post-quantum algorithms have been proposed. Many of them have been submitted to this new NIST competition. We discussed some of the more promising ones in our <a href="https://blog.cloudflare.com/pq-2024/"><u>early 2024 blog post</u></a>.</p><p>In this blog post, we will go over the fourteen schemes advanced to the second round of the on ramp and discuss their feasibility for use in TLS — the protocol that secures browsing the Internet. The defining feature of practically all of them is that they require many more bytes on the wire. 
Back in 2021 we shared <a href="https://blog.cloudflare.com/sizing-up-post-quantum-signatures/"><u>experimental results</u></a> on the impact of these extra bytes. Today, we will share some surprising statistics on how TLS is used in practice. One is that, on more than half of today’s QUIC connections, almost half the data sent is just for the certificates.</p><p>For a broader context and introduction to the post-quantum migration, check out our <a href="https://blog.cloudflare.com/pq-2024"><u>early 2024 blog post</u></a>. One take-away to mention here: there will be two migrations for TLS. First, we urgently need to migrate key agreement to <a href="https://www.cloudflare.com/learning/ssl/quantum/what-is-post-quantum-cryptography/">post-quantum cryptography</a> to protect against <a href="https://en.wikipedia.org/wiki/Harvest_now,_decrypt_later"><u>attackers that store encrypted communication today</u></a> in order to decrypt it in the future when a quantum computer is available. The industry is making good progress here: <a href="https://radar.cloudflare.com/adoption-and-usage#post-quantum-encryption-adoption"><u>18% of human requests</u></a> to websites using Cloudflare are <a href="https://blog.cloudflare.com/post-quantum-for-all/"><u>secured</u></a> using post-quantum key agreement. The second migration, to post-quantum signatures (certificates), is not as urgent: we will need to have this sorted by the time the quantum computer arrives. However, it will be a bigger challenge.</p>
    <div>
      <h2>The signatures in TLS</h2>
      <a href="#the-signatures-in-tls">
        
      </a>
    </div>
    <p>Before we have a look at the long list of post-quantum signature algorithms and their performance characteristics, let’s go through the signatures involved when browsing the Internet and their particular constraints.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/415VcZzABkhZjT60GRkQZM/f30ae24bd14e86534efd3e74d15eb5b5/image3.png" />
          </figure><p>When you visit a website, the browser establishes a TLS connection with the server for that website. The connection starts with a cryptographic handshake. During this handshake, to authenticate the connection, the server signs the transcript so far, and presents the browser with a TLS <i>leaf</i> certificate to prove that it’s allowed to serve the website. This <i>leaf</i> certificate is signed by a certification authority (CA). Typically, it’s not signed by the CA’s <i>root</i> certificate, but by an <i>intermediate</i> CA certificate, which in turn is signed by the root CA, or another intermediate. That’s not all: a leaf certificate has to include at least two <i>signed certificate timestamps</i> (SCTs). These SCTs are signatures created by <a href="https://blog.cloudflare.com/introducing-certificate-transparency-and-nimbus/"><u>certificate transparency (CT) logs</u></a> to attest they’ve been publicly logged. <a href="https://certificate.transparency.dev/howctworks/"><u>Certificate Transparency</u></a> is what enables you to look up a certificate on websites such as <a href="http://crt.sh"><u>crt.sh</u></a> and <a href="https://www.merklemap.com/"><u>merklemap</u></a>. In the future, three or more SCTs might be required. Finally, servers may also send an <a href="https://blog.cloudflare.com/high-reliability-ocsp-stapling/"><u>OCSP staple</u></a> to demonstrate a certificate hasn’t been revoked.</p><p>Thus, we’re looking at a minimum of five signatures (not counting the OCSP staple) and two public keys transmitted across the network to establish a new TLS connection.</p>
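<p>As a back-of-the-envelope tally in Python, using the Ed25519 and ML-DSA-44 sizes from the table below:</p>

```python
# A fresh TLS connection carries at least five signatures (handshake
# transcript, leaf, intermediate, and two SCTs) and two public keys
# (leaf and intermediate), not counting the OCSP staple.
SIGNATURES, PUBLIC_KEYS = 5, 2

def auth_bytes(pk_size: int, sig_size: int) -> int:
    """Authentication-related bytes on the wire per fresh connection."""
    return PUBLIC_KEYS * pk_size + SIGNATURES * sig_size

assert auth_bytes(pk_size=32, sig_size=64) == 384        # Ed25519 today
assert auth_bytes(pk_size=1312, sig_size=2420) == 14724  # ML-DSA-44: ~14.7kB
```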
    <div>
      <h3>Tailoring</h3>
      <a href="#tailoring">
        
      </a>
    </div>
    <p>Only the handshake transcript signature is created <i>online</i>; the other signatures are “offline”. That is, they are created ahead of time. For these offline signatures, fast verification is much more important than fast signing. On the other hand, for the handshake signature, we want to minimize the sum of signing and verification time.</p><p>Only the public keys of the leaf and intermediate certificates are transmitted on the wire during the handshake, and for those we want to minimize the combined size of the signature and the public key. For the other signatures, the public key is not transmitted during the handshake, and thus a scheme with larger public keys would be tolerable, and preferable if it trades larger public keys for smaller signatures.</p>
    <div>
      <h2>The algorithms</h2>
      <a href="#the-algorithms">
        
      </a>
    </div>
    <p>Now that we’re up to speed, let’s have a look at the candidates that progressed (marked by 🤔 below), compared to the classical algorithms vulnerable to quantum attack (marked by ❌), and the post-quantum algorithms that are already standardized (✅) or soon will be (📝). Each submission proposes several variants. We list the variants most relevant to TLS from each submission. To explore all variants, check out <a href="https://research.cloudflare.com/outreach/academic-programs/interns/thom-wiggers/"><u>Thom Wiggers</u></a>’ <a href="https://pqshield.github.io/nist-sigs-zoo/"><u>signatures zoo</u></a>.</p>
<div><table><thead>
  <tr>
    <th></th>
    <th></th>
    <th></th>
    <th colspan="2"><span>Sizes (bytes)</span></th>
    <th colspan="2"><span>CPU time (lower is better)</span></th>
  </tr></thead>
<tbody>
  <tr>
    <td><span>Family</span></td>
    <td><span>Name variant</span></td>
    <td></td>
    <td><span>Public key</span></td>
    <td><span>Signature</span></td>
    <td><span>Signing</span></td>
    <td><span>Verification</span></td>
  </tr>
  <tr>
    <td><span>Elliptic curves</span></td>
    <td><span>Ed25519</span></td>
    <td><span>❌</span></td>
    <td><span>32</span></td>
    <td><span>64</span></td>
    <td><span>0.15</span></td>
    <td><span>1.3</span></td>
  </tr>
  <tr>
    <td><span>Factoring</span></td>
    <td><span>RSA<small> 2048</small></span></td>
    <td><span>❌</span></td>
    <td><span>256</span></td>
    <td><span>256</span></td>
    <td><span>80</span></td>
    <td><span>0.4</span></td>
  </tr>
  <tr>
    <td><span>Lattices</span></td>
    <td><span>ML-DSA <small>44</small></span></td>
    <td><span>✅</span></td>
    <td><span>1,312</span></td>
    <td><span>2,420</span></td>
    <td><span>1 (baseline)</span></td>
    <td><span>1 (baseline)</span></td>
  </tr>
  <tr>
    <td rowspan="3"><span>Symmetric</span></td>
    <td><span>SLH-DSA <small>128s</small></span></td>
    <td><span>✅</span></td>
    <td><span>32</span></td>
    <td><span>7,856</span></td>
    <td><span>14,000</span></td>
    <td><span>40</span></td>
  </tr>
  <tr>
    <td><span>SLH-DSA <small>128f</small></span></td>
    <td><span>✅</span></td>
    <td><span>32</span></td>
    <td><span>17,088</span></td>
    <td><span>720</span></td>
    <td><span>110</span></td>
  </tr>
  <tr>
    <td><span>LMS <small>M4_H20_W8</small></span></td>
    <td><span>✅</span></td>
    <td><span>48</span></td>
    <td><span>1,112</span></td>
    <td><span>2.9</span> ⚠️</td>
    <td><span>8.4</span></td>
  </tr>
  <tr>
    <td><span>Lattices</span></td>
    <td><span>Falcon <small>512</small></span></td>
    <td><span>📝</span></td>
    <td><span>897</span></td>
    <td><span>666</span></td>
    <td><span>3 ⚠️</span></td>
    <td><span>0.7</span></td>
  </tr>
  <tr>
    <td rowspan="2"><span>Code-based</span></td>
    <td><span>CROSS <small>R-SDP(G)1 small</small></span></td>
    <td><span>🤔</span></td>
    <td><span>38</span></td>
    <td><span>7,956</span></td>
    <td><span>20</span></td>
    <td><span>35</span></td>
  </tr>
  <tr>
    <td><span>LESS <small>1s</small></span></td>
    <td><span>🤔</span></td>
    <td><span>97,484</span></td>
    <td><span>5,120</span></td>
    <td><span>620</span></td>
    <td><span>1800</span></td>
  </tr>
  <tr>
    <td rowspan="5"><span>MPC in the head</span></td>
    <td><span>Mirath <small>Mirith Ia fast</small></span></td>
    <td><span>🤔</span></td>
    <td><span>129</span></td>
    <td><span>7,877</span></td>
    <td><span>25</span></td>
    <td><span>60</span></td>
  </tr>
  <tr>
    <td><span>MQOM <small>L1-gf251-fast</small></span></td>
    <td><span>🤔</span></td>
    <td><span>59</span></td>
    <td><span>7,850</span></td>
    <td><span>35</span></td>
    <td><span>85</span></td>
  </tr>
  <tr>
    <td><span>PERK <small>I-fast5</small></span></td>
    <td><span>🤔</span></td>
    <td><span>240</span></td>
    <td><span>8,030</span></td>
    <td><span>20</span></td>
    <td><span>40</span></td>
  </tr>
  <tr>
    <td><span>RYDE <small>128F</small></span></td>
    <td><span>🤔</span></td>
    <td><span>86</span></td>
    <td><span>7,446</span></td>
    <td><span>15</span></td>
    <td><span>40</span></td>
  </tr>
  <tr>
    <td><span>SDitH <small>gf251-L1-hyp</small></span></td>
    <td><span>🤔</span></td>
    <td><span>132</span></td>
    <td><span>8,496</span></td>
    <td><span>30</span></td>
    <td><span>80</span></td>
  </tr>
  <tr>
    <td><span>VOLE in the head</span></td>
    <td><span>FAEST <small>EM-128f</small></span></td>
    <td><span>🤔</span></td>
    <td><span>32</span></td>
    <td><span>5,696</span></td>
    <td><span>6</span></td>
    <td><span>18</span></td>
  </tr>
  <tr>
    <td><span>Lattices</span></td>
    <td><span>HAWK <small>512</small></span></td>
    <td><span>🤔</span></td>
    <td><span>1,024</span></td>
    <td><span>555</span></td>
    <td><span>0.25</span></td>
    <td><span>1.2</span></td>
  </tr>
  <tr>
    <td><span>Isogeny</span></td>
    <td><span>SQISign <small>I</small></span></td>
    <td><span>🤔</span></td>
    <td><span>64</span></td>
    <td><span>177</span></td>
    <td><span>17,000</span></td>
    <td><span>900</span></td>
  </tr>
  <tr>
    <td rowspan="8"><span>Multivariate</span></td>
    <td><span>MAYO <small>one</small></span></td>
    <td><span>🤔</span></td>
    <td><span>1,168</span></td>
    <td><span>321</span></td>
    <td><span>1.4</span></td>
    <td><span>1.4</span></td>
  </tr>
  <tr>
    <td><span>MAYO <small>two</small></span></td>
    <td><span>🤔</span></td>
    <td><span>5,488</span></td>
    <td><span>180</span></td>
    <td><span>1.7</span></td>
    <td><span>0.8</span></td>
  </tr>
  <tr>
    <td><span>QR-UOV <small>I-(31,165,60,3)</small></span></td>
    <td><span>🤔</span></td>
    <td><span>23,657</span></td>
    <td><span>157</span></td>
    <td><span>75</span></td>
    <td><span>125</span></td>
  </tr>
  <tr>
    <td><span>SNOVA <small>(24,5,4)</small></span></td>
    <td><span>🤔</span></td>
    <td><span>1,016</span></td>
    <td><span>248</span></td>
    <td><span>0.9</span></td>
    <td><span>1.4</span></td>
  </tr>
  <tr>
    <td><span>SNOVA <small>(25,8,3)</small></span></td>
    <td><span>🤔</span></td>
    <td><span>2,320</span></td>
    <td><span>165</span></td>
    <td><span>0.9</span></td>
    <td><span>1.8</span></td>
  </tr>
  <tr>
    <td><span>SNOVA <small>(37,17,2)</small></span></td>
    <td><span>🤔</span></td>
    <td><span>9,842</span></td>
    <td><span>106</span></td>
    <td><span>1</span></td>
    <td><span>1.2</span></td>
  </tr>
  <tr>
    <td><span>UOV <small>Is-pkc</small></span></td>
    <td><span>🤔</span></td>
    <td><span>66,576</span></td>
    <td><span>96</span></td>
    <td><span>0.3</span></td>
    <td><span>2.3</span></td>
  </tr>
  <tr>
    <td><span>UOV <small>Ip-pkc</small></span></td>
    <td><span>🤔</span></td>
    <td><span>43,576</span></td>
    <td><span>128</span></td>
    <td><span>0.3</span></td>
    <td><span>0.8</span></td>
  </tr>
</tbody></table></div><p>Some notes about the table. It compares selected variants of the submissions progressed to the second round of the NIST PQC signature on ramp with earlier existing traditional and post-quantum schemes at the security level of AES-128. CPU times are taken from the <a href="https://pqshield.github.io/nist-sigs-zoo/"><u>signatures zoo</u></a>, which collected them from the submission documents and some later advances. CPU performance varies significantly by platform and implementation, and should only be taken as a rough indication. We are early in the competition, and the on-ramp schemes will evolve: some will improve drastically (both in compute and size), whereas others will regress to counter new attacks. Check out <a href="https://pqshield.github.io/nist-sigs-zoo/"><u>the zoo</u></a> for the latest numbers. We marked Falcon signing with a <i>⚠️</i>, as Falcon signing is hard to implement in a fast and timing side-channel secure manner. LMS signing has a ⚠️, as secure LMS signing requires keeping a state and the listed signing time assumes a 32MB cache. This will be discussed later on.</p><p>These are a lot of algorithms, and we didn’t even list all variants. One thing is clear: none of them perform as well as classical elliptic curve signatures across the board. Let’s start with NIST’s 2022 picks.</p>
    <div>
      <h3>ML-DSA, SLH-DSA, and Falcon</h3>
      <a href="#ml-dsa-slh-dsa-and-falcon">
        
      </a>
    </div>
    <p>The most viable general-purpose post-quantum signature scheme standardized today is the lattice-based <b>ML-DSA</b> (<a href="https://nvlpubs.nist.gov/nistpubs/fips/nist.fips.204.pdf"><u>FIPS 204</u></a>), which started its life as <a href="https://pq-crystals.org/dilithium/index.shtml"><u>Dilithium</u></a>. It’s light on the CPU and reasonably straightforward to implement. The big downside is that its signatures and public keys are large: 2.4kB and 1.3kB respectively. Here and for the balance of the blog post, we will only consider the variants at the AES-128 security level unless stated otherwise. Adding ML-DSA adds 14.7kB to the TLS handshake (two 1312-byte public keys plus five 2420-byte signatures).</p><p><b>SLH-DSA</b> (<a href="https://nvlpubs.nist.gov/nistpubs/fips/nist.fips.205.pdf"><u>FIPS 205</u></a>, née <a href="https://sphincs.org/"><u>SPHINCS</u><u><sup>+</sup></u></a>) looks strictly worse, adding 39kB and significant computational overhead for both signing and verification. The advantage of SLH-DSA, being solely based on hashes, is that its security is much better understood than ML-DSA’s. The lowest security level of SLH-DSA is generally more trusted than the highest security levels of many other schemes.</p><p><a href="https://falcon-sign.info/"><b><u>Falcon</u></b></a> (to be renamed <a href="https://www.nist.gov/news-events/news/2024/08/nist-releases-first-3-finalized-post-quantum-encryption-standards"><u>FN-DSA</u></a>) seems much better than SLH-DSA and ML-DSA if you look only at the numbers in the table. There is a catch, though. For fast signing, Falcon requires fast floating-point arithmetic, which turns out to be <a href="https://blog.cloudflare.com/nist-post-quantum-surprise/#digital-signatures"><u>difficult to implement securely</u></a>. Signing can be performed securely with emulated floating-point arithmetic, but that makes it roughly twenty times slower. This makes Falcon ill-suited for online signatures. 
Furthermore, the signing procedure of Falcon is complicated to implement. On the other hand, Falcon verification is simple and doesn’t require floating-point arithmetic.</p><p>Leaning into Falcon’s strengths by using ML-DSA for the handshake signature and Falcon for the rest, we’re only adding 7.3kB (at the AES-128 security level).</p><p>There is one more difficulty with Falcon worth mentioning: it’s missing a middle security level. That means that if Falcon-512 (which we have considered so far) turns out to be weaker than expected, then the next one up is Falcon-1024, which has double the signature and public key sizes. That amounts to adding about 11kB.</p>
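<p>To check the arithmetic behind these figures, assume ML-DSA-44 for the handshake signature and leaf public key, and Falcon-512 for the four offline signatures and the intermediate public key:</p>

```python
# Sizes in bytes, from the table above.
ML_DSA_PK, ML_DSA_SIG = 1312, 2420   # ML-DSA-44
FALCON_PK, FALCON_SIG = 897, 666     # Falcon-512

# ML-DSA signs the handshake transcript (so the leaf key is ML-DSA);
# Falcon covers the four offline signatures and the intermediate key.
mixed = (ML_DSA_PK + ML_DSA_SIG) + FALCON_PK + 4 * FALCON_SIG
all_ml_dsa = 2 * ML_DSA_PK + 5 * ML_DSA_SIG

assert mixed == 7293        # ~7.3kB, as above
assert all_ml_dsa == 14724  # versus ~14.7kB for ML-DSA everywhere
```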
    <div>
      <h3>Stateful hash-based signatures</h3>
      <a href="#stateful-hash-based-signatures">
        
      </a>
    </div>
    <p>The very first post-quantum signature algorithms standardized are the stateful hash-based <a href="https://datatracker.ietf.org/doc/html/rfc8391"><u>XMSS</u><u><sup>(MT)</sup></u></a> and <a href="https://datatracker.ietf.org/doc/html/rfc8554#page-45"><u>LMS/HSS</u></a>. These are hash-based signatures, similar to SLH-DSA, and so we have a lot of trust in their security. They come with a big drawback: when creating a keypair, you prepare a finite number of <i>signature slots</i>. For the variant listed in the table, there are about one million slots. Each slot can only be used once. If a slot is accidentally used twice, then anyone can (<a href="https://eprint.iacr.org/2016/1042"><u>probably</u></a>) use those two signatures to forge any new signature from that slot and break into the connection the certificate is supposed to protect. Remembering which slots have been used is the <i>state</i> in <i>stateful</i> hash-based signatures. Certificate authorities might be able to keep the state, but for general use, Adam Langley calls keeping the state a <a href="https://www.imperialviolet.org/2013/07/18/hashsig.html"><u>huge foot-cannon</u></a>.</p><p>There are more quirks to keep in mind for stateful hash-based signatures. To start, during key generation, each slot needs to be prepared. Preparing each slot takes approximately the same amount of time as verifying a signature. Preparing all million takes a couple of hours on a single core. For intermediate certificates of a popular certificate authority, a million slots are not enough. Indeed, Let’s Encrypt issues more than <a href="https://letsencrypt.org/stats/"><u>four million certificates per day</u></a>. Instead of increasing the number of slots directly, we can use an extra intermediate. This is what XMSS<sup>MT</sup> and HSS do internally. 
A final quirk of stateful hash-based signatures is that their security is bottlenecked on non-repudiation: the listed LMS instance has 192 bits of security against forgery, but only 96 bits against the signer themselves creating a single signature that verifies two different messages.</p><p>Even when stateful hash-based signatures or Falcon can be used, we are still adding a lot of bytes on the wire. From <a href="https://blog.cloudflare.com/sizing-up-post-quantum-signatures/"><u>earlier experiments</u></a> we know that this will impact performance significantly. We summarize those findings later in this blog post, and share some new data. The short of it: it would be nice to have a post-quantum signature scheme that outperforms Falcon, or at least outperforms ML-DSA and is easier to deploy. This is one of the reasons NIST is running the second competition.</p><p>With that in mind, let’s have a look at the candidates.</p>
    <div>
      <h3>Structured lattice alternatives</h3>
      <a href="#structured-lattice-alternatives">
        
      </a>
    </div>
    <p>With only performance in mind, it is surprising that half of the candidates do worse than ML-DSA. There is a good reason for it: NIST is worried that we’re putting all our eggs in the structured lattices basket. SLH-DSA is an alternative to lattices today, but it doesn’t perform well enough for many applications. As such, NIST <a href="https://csrc.nist.gov/csrc/media/Projects/pqc-dig-sig/documents/call-for-proposals-dig-sig-sept-2022.pdf"><u>would primarily like to standardize</u></a> another general purpose signature algorithm that is not based on structured lattices, and that outperforms SLH-DSA. We will briefly touch upon these schemes here.</p>
    <div>
      <h4>Code-based</h4>
      <a href="#code-based">
        
      </a>
    </div>
    <p><a href="https://www.cross-crypto.com/"><u>CROSS</u></a> and <a href="https://www.less-project.com/#:~:text=LESS%20(Linear%20Equivalence%20Signature%20Scheme,the%20Linear%20Code%20Equivalence%20problem."><u>LESS</u></a> are two<b> code-based signature</b> schemes. <b>CROSS</b> is based on a variant of the traditional syndrome decoding problem. Its signatures are about as large as SLH-DSA, but its edge over SLH-DSA is the much better signing times. <b>LESS</b> is based on the novel <a href="https://eprint.iacr.org/2023/847"><u>linear equivalence problem</u></a>. It only outperforms SLH-DSA on signature size, requiring larger public keys in return. For use in TLS, the high verification times of LESS are especially problematic. Given that LESS is based on a new approach, it will be interesting to see how much it can improve going forward.</p>
    <div>
      <h4>Multi-party computation in the head</h4>
      <a href="#multi-party-computation-in-the-head">
        
      </a>
    </div>
    <p>Five of the submissions (<a href="https://pqc-mira.org/"><u>Mira</u></a><a href="https://pqc-mirith.org/"><u>th</u></a>, <a href="https://mqom.org/"><u>MQOM</u></a>, <a href="https://pqc-perk.org/"><u>PERK</u></a>, <a href="https://pqc-ryde.org/"><u>RYDE</u></a>, <a href="https://sdith.org/"><u>SDitH</u></a>) use the <b>Multi-Party Computation in the Head</b> (MPCitH) paradigm.</p><p>It has been exciting to see the developments in this field. To explain a bit about it, let’s go back to <a href="https://microsoft.github.io/Picnic/"><u>Picnic</u></a>. Picnic was an MPCitH submission to the previous NIST PQC competition. In essence, its private key is a random key <i>x</i>, and its public key is the hash <i>H(x)</i>. A signature is a zero-knowledge proof demonstrating that the signer knows <i>x</i>. So far, it’s pretty similar in shape to other signature schemes that use zero knowledge proofs. The difference is in how that proof is created. We have to talk about multi-party computation (MPC) first. MPC starts with splitting the key <i>x</i> into shares, using <a href="https://en.wikipedia.org/wiki/Shamir%27s_secret_sharing"><u>Shamir secret sharing</u></a> for instance, and giving each party one share. No single party knows the value of <i>x</i> itself, but they can recover it by recombining. The insight of MPC is that these parties (with some communication) can perform arbitrary computation on the data they shared. In particular, they can compute a secret share of <i>H(x)</i>. Now, we can use that to make a zero-knowledge proof as follows. The signer simulates all parties in the multi-party protocol to compute and recombine <i>H(x)</i>. 
The signer then reveals part of the intermediate values of the computation using <a href="https://en.wikipedia.org/wiki/Fiat%E2%80%93Shamir_heuristic"><u>Fiat–Shamir</u></a>: enough so that none of the parties could have cheated on any of the steps, but not enough that it allows the verifier to figure out <i>x</i> themselves.</p><p>For <i>H</i>, Picnic uses <a href="https://lowmc.github.io/"><u>LowMC</u></a>, a block cipher for which it’s easy to do the multi-party computation. The initial submission of Picnic performed poorly compared to SLH-DSA with 32kB signatures. For the second round, Picnic was improved considerably, boasting 12kB signatures. SLH-DSA won out with smaller signatures, and more conservative security assumptions: Picnic relies on LowMC, which didn’t receive as much study as the hashes on which SLH-DSA is based.</p><p>Back to the MPCitH candidates that progressed. All of them have variants (listed in the table) with signature sizes similar to or smaller than SLH-DSA’s, while outperforming SLH-DSA considerably in signing time. There are variants with even smaller signatures, but their verification times are significantly higher. The difference between the MPCitH candidates is the underlying <a href="https://en.wikipedia.org/wiki/Trapdoor_function"><u>trapdoor</u></a> they use. In Picnic the trapdoor was LowMC. For both RYDE and SDitH, the trapdoors used are based on variants of <a href="https://en.wikipedia.org/wiki/Decoding_methods#Syndrome_decoding"><u>syndrome decoding</u></a>, and could be classified as code-based cryptography.</p><p>Over the years, MPCitH schemes have seen remarkable improvements in performance, and we don’t seem to have reached the end of it yet. There is still some way to go before these schemes would be competitive in TLS: signature size needs to be reduced without sacrificing the currently borderline acceptable verification performance. 
On top of that, not all underlying trapdoors of the various schemes have seen enough scrutiny.</p>
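<p>To make the secret-sharing step concrete, here is a toy sketch in Python. It is purely illustrative (a 2-of-3 Shamir sharing over a small prime field, not the actual MPC protocol any of these schemes use): each party holds one point on a random line, a single share reveals nothing about <i>x</i>, and any two shares recover it.</p>

```python
# Toy 2-of-3 Shamir secret sharing over a prime field (illustration only;
# not the actual MPC used by the MPCitH schemes above).
P = 2**61 - 1  # field modulus (a Mersenne prime)

def share(secret, coeff):
    # Line f(i) = secret + coeff*i mod P; party i receives the point (i, f(i)).
    return [(i, (secret + coeff * i) % P) for i in (1, 2, 3)]

def reconstruct(two_points):
    # Lagrange interpolation at 0 recovers f(0) = secret from any two points.
    (x1, y1), (x2, y2) = two_points
    l1 = (-x2) * pow(x1 - x2, -1, P)  # Lagrange basis value at 0 for x1
    l2 = (-x1) * pow(x2 - x1, -1, P)  # Lagrange basis value at 0 for x2
    return (y1 * l1 + y2 * l2) % P

secret = 123456789
shares = share(secret, coeff=987654321)  # coeff must be random in practice
assert reconstruct(shares[:2]) == secret
assert reconstruct([shares[0], shares[2]]) == secret
```

Real MPCitH schemes build on this idea: the signer runs all the parties in its head and, via Fiat–Shamir, reveals just enough of the transcript to rule out cheating without leaking the key.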
    <div>
      <h4>FAEST</h4>
      <a href="#faest">
        
      </a>
    </div>
    <p><a href="https://faest.info/"><u>FAEST</u></a> is a peek into the future. It’s similar to the MPCitH candidates in that its security reduces to an underlying trapdoor. It is quite different from those in that FAEST’s underlying trapdoor is AES. That means that, provided the security analysis of FAEST is correct, it’s on the same footing as SLH-DSA. Despite the conservative trapdoor, FAEST beats the MPCitH candidates in performance. It also beats SLH-DSA on all metrics.</p><p>At the AES-128 security level, FAEST’s signatures are larger than ML-DSA’s. For those who want to hedge against improvements in lattice attacks, and would only consider higher security levels of ML-DSA, FAEST becomes an attractive alternative. ML-DSA-65 has a combined public key and signature size of 5.2kB, which is similar to FAEST EM-128f. ML-DSA-65 still has a slight edge in performance.</p><p>FAEST is based on the 2023 <a href="https://eprint.iacr.org/2023/996.pdf"><u>VOLE in the Head</u></a> paradigm. These are new ideas, and their full potential has likely not been realized yet, so FAEST will probably see further improvements.</p><p>The VOLE in the Head techniques can and probably will be adopted by some of the MPCitH submissions. It will be interesting to see how far VOLEitH can be pushed when applied to less conservative trapdoors. Surpassing ML-DSA seems in reach, but Falcon? We will see.</p><p>Now, let’s move on to the submissions that surpass ML-DSA today.</p>
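<p>The sizes quoted in these comparisons are easy to sanity-check against the published parameter sets. A quick check in Python, using the ECDSA-P256 sizes quoted at the top of this post and the ML-DSA sizes from FIPS 204:</p>

```python
# Public key and signature sizes in bytes: ECDSA-P256 as quoted earlier in
# this post, ML-DSA parameter sets per FIPS 204.
sizes = {
    "ECDSA-P256": (64, 64),
    "ML-DSA-44": (1312, 2420),
    "ML-DSA-65": (1952, 3309),
}
for name, (pk, sig) in sizes.items():
    print(f"{name}: pk + sig = {pk + sig} bytes")
# ML-DSA-65 totals 5,261 bytes, the ~5.2kB figure compared with FAEST above.
```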
    <div>
      <h3>HAWK</h3>
      <a href="#hawk">
        
      </a>
    </div>
    <p><a href="https://hawk-sign.info/"><u>HAWK</u></a> is similar to Falcon, but improves upon it in a few key ways. Most importantly, it doesn’t rely on floating point arithmetic. Furthermore, its signing procedure is simpler and much faster. This makes HAWK suitable for online signatures. Using HAWK everywhere adds 4.8kB. Apart from size and speed, it’s beneficial to rely on only a single scheme: using multiple schemes increases the attack surface for algorithmic weaknesses and implementation mistakes.</p><p>Similar to Falcon, HAWK is missing a middle security level. Using HAWK-1024 doubles sizes (9.6kB).</p><p>There is one downside to HAWK compared to Falcon: HAWK relies on a new security assumption, the <a href="https://eprint.iacr.org/2021/1332.pdf"><u>lattice isomorphism problem</u></a>.</p>
    <div>
      <h3>SQISign</h3>
      <a href="#sqisign">
        
      </a>
    </div>
    <p><a href="https://sqisign.org/"><u>SQISign</u></a> is based on <a href="https://blog.cloudflare.com/sidh-go/"><u>isogenies</u></a>. Famously, SIKE, another isogeny-based scheme in the previous competition, was <a href="https://eprint.iacr.org/2022/975.pdf"><u>badly broken</u></a> late in the competition. SQISign is based on a different problem, though. SQISign is remarkable for having very small signatures and public keys: it even beats RSA-2048. The glaring downside is that it is computationally very expensive to compute and verify a signature. Isogeny-based signatures are a very active area of research, with many advances over the years.</p><p>It seems unlikely that any future SQISign variant will sign fast enough for the TLS handshake signature. Furthermore, SQISign signing seems to be hard to implement in a timing side-channel secure manner. What about the other signatures of TLS? There, the bottleneck is verification time. It would be acceptable for SQISign to have larger signatures if that allowed it faster verification.</p>
    <div>
      <h3>UOV</h3>
      <a href="#uov">
        
      </a>
    </div>
    <p><a href="https://www.uovsig.org/"><u>UOV</u></a> (unbalanced oil and vinegar) is an old multivariate scheme with large public keys (67kB), but small signatures (96 bytes). Furthermore, it has excellent signing and verification performance. These interesting size tradeoffs make it quite suited for use cases where the public key is known in advance.</p><p>If we use UOV in TLS for the SCTs and root CA, whose public keys are not transmitted when setting up the connection, together with ML-DSA for the others, we’re looking at 7.2kB. That’s a clear improvement over using ML-DSA everywhere, and a tad better than combining ML-DSA with Falcon.</p><p>When combining UOV with HAWK instead of ML-DSA, we’re looking at adding only 3.4kB. That’s better again, but only a marginal improvement over using HAWK everywhere (4.8kB). The relative advantage of UOV improves if the certificate transparency ecosystem moves towards requiring more SCTs.</p><p>For SCTs, the size of UOV public keys seems acceptable, as there are not that many certificate transparency logs at the moment. Shipping a UOV public key for hundreds of root CAs is more painful, but within reason. Even with <a href="https://blog.cloudflare.com/pq-2024/#leaving-out-intermediate-certificates"><u>intermediate suppression</u></a>, using UOV in each of the thousands of intermediate certificates does not make sense.</p>
    <div>
      <h3>Structured multivariate</h3>
      <a href="#structured-multivariate">
        
      </a>
    </div>
    <p>Since the original UOV, over the decades, many attempts have been made to add additional structure to UOV, to get a better balance between the size of the signature and public key. Unfortunately, many of these <i>structured multivariate</i> schemes, which include GeMMS and Rainbow, have been broken.</p><p>Let’s have a look at the multivariate candidates. The most interesting variant of <b>QR-UOV</b> for TLS has 24kB public keys and 157 byte signatures. The current verification times are unacceptably high, but there seems to be plenty of room for an improved implementation. There is also a variant with a 12kB public key, but its verification time needs to come down even further. In any case, the combined size of QR-UOV’s public key and signatures remains large enough that it’s not a competitor to ML-DSA or Falcon. Instead, QR-UOV competes with UOV in the cases where UOV’s public keys are unwieldy. Although QR-UOV hasn’t seen a direct attack yet, a similar scheme has recently been <a href="https://link.springer.com/chapter/10.1007/978-3-031-62746-0_9"><u>weakened</u></a> and another <a href="https://link.springer.com/chapter/10.1007/978-3-030-44223-1_18"><u>broken</u></a>.</p><p>Finally, we get to <a href="https://snova.pqclab.org/"><b><u>SNOVA</u></b></a> and <a href="https://pqmayo.org/"><b><u>MAYO</u></b></a>. Although they’re based on a different technique, they have a lot of properties in common. To start, they have the useful property that they allow for a granular tradeoff between public key and signature size. This allows us to use a different variant optimized for whether or not we’re transmitting the public key in the connection. Using MAYO<sub>one</sub> for the leaf and intermediate, and MAYO<sub>two</sub> for the others, adds 3.5kB. Similarly with SNOVA, we add 2.8kB. On top of that, both schemes have excellent signing and verification performance.</p><p>The elephant in the room is the security. 
Towards the end of the first round, a new <a href="https://www.jstage.jst.go.jp/article/jsiaml/15/0/15_53/_article"><u>generic attack</u></a> on underdefined multivariate systems prompted the MAYO team to <a href="https://groups.google.com/a/list.nist.gov/g/pqc-forum/c/jEKfDYUgdec/m/0UP_GNKSAwAJ"><u>tweak their parameters</u></a> slightly. SNOVA has been hit a bit harder by three attacks (<a href="https://dl.acm.org/doi/10.1145/3659467.3659900"><u>1</u></a>, <a href="https://eprint.iacr.org/2024/1297"><u>2</u></a>, <a href="https://eprint.iacr.org/2024/1770.pdf"><u>3</u></a>), but so far it seems that SNOVA’s parameters can be adjusted to compensate.</p><p>OK, we’ve had a look at all the candidates. What did we learn? There are some very promising algorithms that will reduce the number of bytes required on the wire compared to ML-DSA and Falcon. None of the practical ones, however, will spare us from adding extra bytes to TLS. So, given that we must add some bytes: how many extra bytes are too many?</p>
    <div>
      <h2>How many added bytes are too many for TLS?</h2>
      <a href="#how-many-added-bytes-are-too-many-for-tls">
        
      </a>
    </div>
    <p>On average, around 15 million TLS connections are established with Cloudflare per second. Upgrading each to ML-DSA would take 1.8Tbps, which is 0.6% of our current total network capacity. No problem so far. The question is how these extra bytes affect performance.</p><p>Back in 2021, we <a href="https://blog.cloudflare.com/sizing-up-post-quantum-signatures/"><u>ran a large-scale experiment</u></a> to measure the impact of big post-quantum certificate chains on connections to Cloudflare’s network over the open Internet. There were two important results. First, we saw a steep increase in the rate of client and middlebox failures when we added more than 10kB to existing certificate chains. Secondly, when adding less than 9kB, the slowdown in TLS handshake time would be approximately 15%. We felt the latter was workable, but far from ideal: such a slowdown is noticeable, and people might hold off on deploying post-quantum certificates until it’s too late.</p><p>Chrome is more cautious and set 10% as their target for maximum TLS handshake time regression. They <a href="https://dadrian.io/blog/posts/pqc-signatures-2024/#fnref:3"><u>report</u></a> that deploying post-quantum key agreement has already incurred a 4% slowdown in TLS handshake time, for the extra 1.1kB from server-to-client and 1.2kB from client-to-server. That slowdown is proportionally larger than the 15% we found for 9kB, but that could be explained by upload speeds being slower than download speeds.</p><p>There has been pushback against the focus on TLS handshake times. One argument is that session resumption alleviates the need for sending the certificates again. A second argument is that the data required to visit a typical website dwarfs the additional bytes for post-quantum certificates. 
One example is this <a href="https://www.amazon.science/publications/the-impact-of-data-heavy-post-quantum-tls-1-3-on-the-time-to-last-byte-of-real-world-connections"><u>2024 publication</u></a>, in which Amazon researchers simulated the impact of large post-quantum certificates on data-heavy TLS connections. They argue that typical connections transfer multiple requests and hundreds of kilobytes, and for those the TLS handshake slowdown disappears in the margin.</p><p>Are session resumption and hundreds of kilobytes over a connection typical, though? We’d like to share what we see. We focus on QUIC connections, which are likely initiated by browsers or browser-like clients. Of all QUIC connections with Cloudflare that carry at least one HTTP request, 37% are <a href="https://blog.cloudflare.com/even-faster-connection-establishment-with-quic-0-rtt-resumption/"><u>resumptions</u></a>, meaning that key material from a previous TLS connection is reused, avoiding the need to transmit certificates. The median number of bytes transferred from server to client over a resumed QUIC connection is 4.4kB, while the average is 395kB. For non-resumptions, the median is 7.8kB and the average is 551kB. This vast difference between median and average indicates that a small fraction of data-heavy connections skews the average. In fact, only 15.8% of all QUIC connections transfer more than 100kB.</p><p>The median certificate chain today (with compression) is <a href="https://datatracker.ietf.org/doc/html/draft-ietf-tls-cert-abridge-02#section-4"><u>3.2kB</u></a>. That means that on more than half of the non-resumed QUIC connections, almost 40% of all data transferred from server to client is just for the certificates, and this only gets worse with post-quantum algorithms. 
For the majority of QUIC connections, using ML-DSA as a drop-in replacement for classical signatures would more than double the number of transmitted bytes over the lifetime of the connection.</p><p>It sounds quite bad if the vast majority of data transferred for a typical connection is just for the post-quantum certificates. It’s still only a proxy for what is actually important: the effect on metrics relevant to the end-user, such as the browsing experience (e.g. <a href="https://web.dev/articles/optimize-lcp"><u>largest contentful paint</u></a>) and the amount of data those certificates take from a user’s monthly data cap. We will continue to investigate and get a better understanding of the impact.</p>
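<p>The back-of-the-envelope bandwidth figure at the start of this section is easy to reproduce. A sketch, assuming roughly 15kB of extra handshake data per connection for drop-in ML-DSA certificates (an assumed round figure consistent with the numbers above):</p>

```python
# Estimated extra bandwidth from drop-in ML-DSA certificates, assuming
# ~15,000 extra bytes per TLS handshake (an illustrative assumption).
connections_per_second = 15_000_000
extra_bytes_per_handshake = 15_000

extra_bps = connections_per_second * extra_bytes_per_handshake * 8
print(f"{extra_bps / 1e12:.1f} Tbps")  # → 1.8 Tbps

# At 0.6% of total capacity, this implies a network capacity of ~300 Tbps.
print(f"{extra_bps / 0.006 / 1e12:.0f} Tbps")
```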
    <div>
      <h2>Zooming out</h2>
      <a href="#zooming-out">
        
      </a>
    </div>
    <p>That was a lot — let’s step back.</p><p>It’s great to see how much better the post-quantum signature algorithms in almost every family are today than they were in <a href="https://blog.cloudflare.com/sizing-up-post-quantum-signatures/"><u>2021</u></a>. The improvements haven’t slowed down either. Many of the algorithms that do not improve over ML-DSA for TLS today could still do so in the third round. Looking back, we are also cautioned: several algorithms considered in 2021 have since been broken.</p><p>From an implementation and performance perspective for TLS today, HAWK, SNOVA, and MAYO are all clear improvements over ML-DSA and Falcon. They are also very new, and presently we cannot depend on them without a <a href="https://blog.cloudflare.com/pq-2024/#way-forward"><u>plan B</u></a>. UOV has been around a lot longer. Due to its large public key, it will not work on its own, but it can be a very useful complement to another general-purpose signature scheme.</p><p>Even with the best performers from the competition, the way we see TLS connections used today suggests that drop-in post-quantum certificates will have a big impact on at least half of them.</p><p>In the meantime, we can also make plan B our plan A: there are several ways in which we can reduce the number of signatures used in TLS. One is to leave out intermediate certificates (<a href="https://datatracker.ietf.org/doc/html/draft-kampanakis-tls-scas-latest"><u>1</u></a>, <a href="https://datatracker.ietf.org/doc/draft-ietf-tls-cert-abridge/"><u>2</u></a>, <a href="https://datatracker.ietf.org/doc/html/draft-davidben-tls-trust-expr-04#name-intermediate-elision"><u>3</u></a>). Another is to use a KEM <a href="https://kemtls.org/"><u>instead of a signature</u></a> for handshake authentication. 
We can even get rid of all the offline signatures with a more <a href="https://datatracker.ietf.org/doc/html/draft-davidben-tls-merkle-tree-certs-03"><u>ambitious redesign</u></a> for the <a href="https://www.youtube.com/watch?v=f8unMB2Qjho"><u>vast majority</u></a> of visits: a post-quantum Internet with fewer bytes on the wire! We’ve discussed these ideas at more length in a <a href="https://blog.cloudflare.com/pq-2024/#way-forward"><u>previous blog post</u></a>.</p><p>So what does this mean for the coming years? We will continue to work with browsers to understand the end-user impact of large drop-in post-quantum certificates. When certificate authorities support them (our guess: 2026), we will add support for ML-DSA certificates <a href="https://blog.cloudflare.com/post-quantum-crypto-should-be-free/"><u>for free</u></a>. This will be opt-in until cryptographically relevant quantum computers are imminent, to prevent undue performance regression. In the meantime, we will continue to pursue larger changes to the WebPKI, so that we can bring full post-quantum security to the Internet without performance compromise.</p><p>We’ve talked a lot about certificates, but what we need to care about today is encryption. Along with many others across the industry, including the major browsers, we have deployed the post-quantum key agreement X25519MLKEM768 across the board, and you can make sure your connections with Cloudflare are already secured against harvest-now/decrypt-later. Visit <a href="http://pq.cloudflareresearch.com"><u>pq.cloudflareresearch.com</u></a> to learn how.</p> ]]></content:encoded>
            <category><![CDATA[Post-Quantum]]></category>
            <category><![CDATA[Research]]></category>
            <category><![CDATA[Cryptography]]></category>
            <category><![CDATA[TLS]]></category>
            <guid isPermaLink="false">3mOPXbiTgeQHBChx4vUuMs</guid>
            <dc:creator>Bas Westerbaan</dc:creator>
            <dc:creator>Luke Valenta</dc:creator>
        </item>
        <item>
            <title><![CDATA[New standards for a faster and more private Internet]]></title>
            <link>https://blog.cloudflare.com/new-standards/</link>
            <pubDate>Wed, 25 Sep 2024 13:00:00 GMT</pubDate>
            <description><![CDATA[ Cloudflare's customers can now take advantage of Zstandard (zstd) compression, offering 42% faster compression than Brotli and 11.3% more efficiency than GZIP. We're further optimizing performance for our customers with HTTP/3 prioritization and BBR congestion control, and enhancing privacy through Encrypted Client Hello (ECH). ]]></description>
            <content:encoded><![CDATA[ <p>As the Internet grows, so do the demands for speed and security. At Cloudflare, we’ve spent the last 14 years simplifying the adoption of the latest web technologies, ensuring that our users stay ahead without the complexity. From being the first to offer <a href="https://www.cloudflare.com/application-services/products/ssl/">free SSL certificates</a> through <a href="https://blog.cloudflare.com/introducing-universal-ssl/"><u>Universal SSL</u></a> to quickly supporting innovations like <a href="https://blog.cloudflare.com/introducing-tls-1-3"><u>TLS 1.3</u></a>, <a href="https://blog.cloudflare.com/introducing-cloudflares-automatic-ipv6-gatewa/"><u>IPv6</u></a>, and <a href="https://blog.cloudflare.com/http-3-from-root-to-tip/"><u>HTTP/3</u></a>, we've consistently made it easy for everyone to harness cutting-edge advancements.</p><p>One of the most exciting recent developments in web performance is Zstandard (zstd) — a new compression algorithm that we have found compresses data 42% faster than Brotli while maintaining almost the same compression levels. Not only that, but Zstandard reduces file sizes by 11.3% compared to GZIP, all while maintaining comparable speeds. As compression speed and efficiency directly impact latency, this is a game changer for improving user experiences across the web.</p><p>We’re also re-starting the <a href="https://blog.cloudflare.com/announcing-encrypted-client-hello/"><u>rollout of Encrypted Client Hello (ECH)</u></a>, a <a href="https://datatracker.ietf.org/doc/draft-ietf-tls-esni/"><u>new proposed standard </u></a>that prevents networks from snooping on which websites a user is visiting. <a href="https://blog.cloudflare.com/encrypted-client-hello/"><u>Encrypted Client Hello (ECH) is a successor to ESNI</u></a> and masks the <a href="https://www.cloudflare.com/en-gb/learning/ssl/what-is-sni/"><u>Server Name Indication (SNI)</u></a> that is used to negotiate a TLS handshake. 
This means that whenever a user visits a website on Cloudflare that has ECH enabled, no one except for the user, Cloudflare, and the website owner will be able to determine which website was visited. Cloudflare is a big proponent of privacy for everyone and is excited about the prospects of bringing this technology to life.</p><p>In this post, we also further explore our work measuring the impact of HTTP/3 prioritization, and the development of Bottleneck Bandwidth and Round-trip propagation time (BBR) congestion control to further optimize network performance.</p>
    <div>
      <h2>Introducing Zstandard compression</h2>
      <a href="#introducing-zstandard-compression">
        
      </a>
    </div>
    <p><a href="https://github.com/facebook/zstd"><u>Zstandard</u></a>, an advanced compression algorithm, was developed by <a href="https://engineering.fb.com/2018/12/19/core-infra/zstandard/"><u>Yann Collet at Facebook</u></a> and open sourced in August 2016 to manage large-scale data processing. It has gained popularity in recent years due to its impressive compression ratios and speed. The algorithm was included in <a href="https://chromestatus.com/feature/6186023867908096"><u>Chromium-based browsers</u></a> and <a href="https://connect.mozilla.org/t5/ideas/add-support-for-zstd-compression/idi-p/52155"><u>Firefox</u></a> in March 2024 as a <a href="https://caniuse.com/zstd"><u>supported</u></a> compression algorithm.</p><p>Today, we are excited to announce that Zstandard compression between Cloudflare and browsers is now available to everyone.</p><p>Our testing shows that Zstandard compresses data up to 42% faster than <a href="https://github.com/google/brotli"><u>Brotli</u></a> while achieving nearly equivalent data compression. Additionally, Zstandard outperforms <a href="https://datatracker.ietf.org/doc/html/rfc1952"><u>GZIP</u></a> by approximately 11.3% in compression efficiency, all while maintaining similar compression speeds. This means Zstandard can compress files to nearly the same size as Brotli in about half the time, speeding up your website without sacrificing compression efficiency.

This is exciting because compression speed and file size directly impact latency. When a browser requests a resource from the origin server, the server needs time to compress the data before it’s sent over the network. A faster compression algorithm, like Zstandard, reduces this initial processing time. By also reducing the size of files transmitted over the Internet, better compression means downloads take less time to complete, websites load quicker, and users ultimately get a better experience.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6HSjbMJGtBI4GJlBp2Jf35/e2f971157f078636c6702f40f2c03a70/image2.png" />
            
            </figure>
    <div>
      <h3>Why is compression so important?</h3>
      <a href="#why-is-compression-so-important">
        
      </a>
    </div>
    <p>Website performance is crucial to the success of online businesses. <a href="https://www2.deloitte.com/content/dam/Deloitte/ie/Documents/Consulting/Milliseconds_Make_Millions_report.pdf"><u>Study</u></a> after <a href="https://www.thinkwithgoogle.com/_qs/documents/4290/c676a_Google_MobileSiteSpeed_Playbook_v2.1_digital_4JWkGQT.pdf"><u>study</u></a> has shown that an increased load time <a href="https://www.cloudflare.com/learning/performance/more/website-performance-conversion-rates/"><u>directly affects sales</u></a>. Just like a physical shop situated in a remote area faces challenges in attracting customers, a slow website encounters similar difficulties in attracting traffic. 

Think about buying a piece of flat-pack furniture such as a bookshelf. Instead of receiving the bookshelf fully assembled, which would be expensive and cumbersome to transport, you receive it in a compact, flat box with all the components neatly organized, ready for assembly. The parts are carefully arranged to take up the least amount of space, making the package much smaller and easier to handle. When you get the item, you simply follow the instructions to assemble it into its proper state.</p><p>This is similar to how data compression works. The data is “disassembled” and packed tightly to reduce its size before being transmitted. Once it reaches its destination, it’s “reassembled” to its original form. This compression process reduces the amount of data that needs to be sent, saving bandwidth, reducing costs, and speeding up the transfer, just as flat-pack furniture reduces shipping costs and simplifies delivery logistics.</p><p>However, with compression, there is a tradeoff: time to compress versus the overall compression ratio. A compression ratio is a measure of how much a file's size is reduced during compression. For example, a 10:1 compression ratio means that the compressed file is one-tenth the size of the original. Just like assembling flat-pack furniture takes time and effort, achieving higher compression ratios often requires more processing time. While a higher compression ratio significantly reduces file size — making data transmission faster and more efficient — it may take longer to compress and decompress the data. Conversely, quicker compression methods might produce larger files, leading to faster processing but at the cost of greater bandwidth usage. 
Balancing these factors is key to optimizing performance in data transmission.</p><p><a href="https://w3techs.com/technologies/details/ce-compression"><u>W3 Technologies</u></a> reports that as of September 12, 2024, 88.6% of websites rely on compression to optimize speed and reduce bandwidth usage. <a href="https://datatracker.ietf.org/doc/html/rfc1952"><u>GZIP</u></a>, introduced in 1996, remains the default algorithm for many, used by 57.0% of sites due to its reasonable compression ratios and fast compression speeds. <a href="https://datatracker.ietf.org/doc/html/rfc7932"><u>Brotli</u></a>, released by Google in 2016, delivers better <a href="https://blog.cloudflare.com/results-experimenting-brotli/"><u>compression ratios</u></a>, leading to smaller file sizes, especially for static assets like JavaScript and CSS, and is used by 45.5% of websites. However, this also means that 11.4% of websites still operate without any compression, missing out on crucial performance improvements.</p><p>As the Internet and its supporting infrastructure have evolved, so have user demands for faster, more efficient performance. This growing need for higher efficiency without compromising speed is where Zstandard comes into play.</p>
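<p>The tradeoff between compression time and ratio is easy to observe with any codec. A minimal sketch using Python’s built-in zlib (standing in for DEFLATE/GZIP; exact numbers depend on the input and machine):</p>

```python
import time
import zlib

# A repetitive, HTML-like payload that compresses well.
data = b"<div class='item'>hello compression</div>\n" * 4000

for level in (1, 6, 9):  # fastest, default, best ratio
    t0 = time.perf_counter()
    out = zlib.compress(data, level)
    ms = (time.perf_counter() - t0) * 1000
    print(f"level {level}: {len(data) / len(out):.0f}:1 ratio in {ms:.2f} ms")
# Higher levels squeeze out a better ratio but take longer: the same
# tradeoff the flat-pack analogy above describes.
```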
    <div>
      <h3>Enter Zstandard</h3>
      <a href="#enter-zstandard">
        
      </a>
    </div>
    <p>Zstandard offers higher compression ratios than GZIP, with significantly faster compression and decompression speeds than Brotli. This makes it ideal for real-time applications that require both speed and relatively high compression ratios.</p><p>To understand Zstandard's advantages, it's helpful to know about <a href="https://blog.cloudflare.com/cloudflare-fights-cancer/"><u>Zlib</u></a>. Zlib was developed in the mid-1990s based on the <a href="https://en.wikipedia.org/wiki/DEFLATE"><u>DEFLATE</u></a> compression algorithm, which combines <a href="https://www.cloudflare.com/en-gb/learning/performance/glossary/what-is-image-compression/"><u>LZ77 and Huffman coding</u></a> to reduce file sizes. While Zlib has been a compression standard since the mid-1990s and is used in Cloudflare’s <a href="https://blog.cloudflare.com/cloudflare-fights-cancer/"><u>open-source</u></a> GZIP implementation, its design is limited by a 32 KB sliding window — a constraint from the memory limitations of that era. This makes Zlib less efficient on modern hardware, which can access far more memory.</p><p>Zstandard enhances Zlib by leveraging modern innovations and hardware capabilities. Unlike Zlib’s fixed 32 KB window, Zstandard has no strict memory constraints and can theoretically address terabytes of memory. However, in practice, it typically uses much less, around 1 MB at lower compression levels. This flexibility allows Zstandard to buffer large amounts of data, enabling it to identify and compress repeating patterns more effectively. 
Zstandard also employs <a href="https://engineering.fb.com/2016/08/31/core-infra/smaller-and-faster-data-compression-with-zstandard/#:~:text=Repcode%20modeling,within%20zlib/gzip."><u>repcode modeling</u></a> to efficiently compress structured data with repetitive sequences, further reducing file sizes and enhancing its suitability for modern compression needs.</p><p>Zstandard is optimized for modern CPUs, which can execute multiple tasks simultaneously using multiple Arithmetic Logic Units (ALUs). Zstandard achieves this by processing data in parallel streams, dividing it into multiple parts that are processed concurrently. <a href="https://chromium.googlesource.com/external/github.com/klauspost/compress/+/refs/heads/master/huff0/"><u>The Huffman decoder, Huff0</u></a>, can decode multiple symbols in parallel on a single CPU core, and when combined with multi-threading, this leads to substantial speed improvements during both compression and decompression.</p><p>Zstandard’s branchless design is a crucial innovation that enhances CPU efficiency, especially in modern processors. To understand its significance, consider how CPUs execute instructions.</p><p>Modern CPUs use pipelining, where different stages of an instruction are processed simultaneously—like a production line—keeping all parts of the processor busy. However, when CPUs encounter a branch, such as an 'if-else' decision, they must make a <a href="https://blog.cloudflare.com/branch-predictor/"><u>branch prediction</u></a> to guess the next step. If the prediction is wrong, the pipeline must be cleared and restarted, causing slowdowns.</p><p>Zstandard avoids this issue by eliminating conditional branching. 
Without relying on branch predictions, it ensures the CPU can execute instructions continuously, keeping the pipeline full and avoiding performance bottlenecks.</p><p>A key feature of Zstandard is its use of <a href="https://www.rfc-editor.org/rfc/rfc8478.html#section-4.1"><u>Finite State Entropy (FSE)</u></a>, an advanced compression method that encodes data more efficiently based on probability. FSE, built on the <a href="https://en.wikipedia.org/wiki/Asymmetric_numeral_systems"><u>Asymmetric Numeral System (ANS)</u></a>, allows Zstandard to use fractional bits for encoding, unlike traditional Huffman coding, which only uses whole bits. This allows heavily repeated data to be compressed more tightly without sacrificing efficiency.</p>
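<p>The impact of the sliding window size can be seen directly through zlib’s <i>wbits</i> parameter, which sets the window as a power of two (up to 2<sup>15</sup> = 32 KB). In this sketch, a roughly 20 KB incompressible block is repeated: the repeats fit inside a 32 KB window but not a 1 KB one, so only the larger window can exploit them.</p>

```python
import random
import zlib

random.seed(0)
block = random.randbytes(20_000)  # ~20 KB of incompressible data
data = block * 8                  # the same block repeated 8 times

for wbits in (10, 15):  # 1 KB window vs. the full 32 KB window
    c = zlib.compressobj(6, zlib.DEFLATED, wbits)
    out = c.compress(data) + c.flush()
    print(f"window 2^{wbits} bytes: compressed to {len(out)} bytes")
# Only the 32 KB window can "see" back to the previous copy of the block,
# so it compresses the 160 KB input down to roughly the size of one block.
```

Zstandard’s much larger window generalizes this effect: repetitions that are megabytes apart, invisible to DEFLATE, become compressible matches.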
    <div>
      <h3>Zstandard findings</h3>
      <a href="#zstandard-findings">
        
      </a>
    </div>
    <p>In the third quarter of 2024, we conducted extensive tests on our new Zstandard compression module, focusing on a 24-hour period where we switched the default compression algorithm from Brotli to Zstandard across our Free plan traffic. This experiment spanned billions of requests, covering a wide range of file types and sizes, including HTML, CSS, and JavaScript. The results were very promising, with significant improvements in both compression speed and file size reduction, leading to faster load times and more efficient bandwidth usage.</p>
    <div>
      <h4>Compression ratios</h4>
      <a href="#compression-ratios">
        
      </a>
    </div>
    <p>In terms of compression efficiency, Zstandard delivers impressive results. Below are the average compression ratios we observed during our testing.</p><table><tr><td><p><b>Compression Algorithm</b></p></td><td><p><b>Average Compression Ratio</b></p></td></tr><tr><td><p>GZIP</p></td><td><p>2.56</p></td></tr><tr><td><p>Zstandard</p></td><td><p>2.86</p></td></tr><tr><td><p>Brotli</p></td><td><p>3.08</p></td></tr></table>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2EA8KkP7M3j4KiEArzmXVT/2a93b972f531f02f6b253e231f73ff40/image5.png" />
            
            </figure><p>As the table shows, Zstandard achieves an average compression ratio of <b>2.86:1</b>, which is notably higher than gzip's <b>2.56:1</b> and close to Brotli’s <b>3.08:1</b>. While Brotli slightly edges out Zstandard in terms of pure compression ratio, what is particularly exciting is that we are only using Zstandard’s default compression level of 3 (out of 22) on our traffic. In the fourth quarter of 2024, we plan to experiment with higher compression levels and multithreading capabilities to further enhance Zstandard’s performance and optimize results even more.</p>
    <div>
      <h4>Compression speeds</h4>
      <a href="#compression-speeds">
        
      </a>
    </div>
    <p>What truly sets Zstandard apart is its speed. Below are the average times to compress data from our traffic-based tests measured in milliseconds:</p><table><tr><td><p><b>Compression Algorithm</b></p></td><td><p><b>Average Time to Compress (ms)</b></p></td></tr><tr><td><p>GZIP</p></td><td><p>0.872</p></td></tr><tr><td><p>Zstandard</p></td><td><p>0.848</p></td></tr><tr><td><p>Brotli</p></td><td><p>1.544</p></td></tr></table>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/15AxPyO1PmyV6hDRRjBznu/9cf16cfdc146afddbf9a3332da29629a/image10.png" />
            
</figure><p>Zstandard not only compresses data efficiently, but it also does so roughly <b>45% faster</b> than Brotli, with an average compression time of <b>0.848 ms</b> compared to Brotli’s <b>1.544 ms</b>. It even outperforms gzip, which compresses at <b>0.872 ms</b> on average.</p><p>From our results, we found that Zstandard strikes an excellent balance between achieving a high compression ratio and maintaining fast compression speed, making it particularly well-suited for dynamic content such as HTML and non-cacheable sensitive data. Zstandard can compress these responses from the origin quickly and efficiently, saving time compared to Brotli while providing better compression ratios than GZIP.</p>
    <div>
      <h3>Implementing Zstandard at Cloudflare</h3>
      <a href="#implementing-zstandard-at-cloudflare">
        
      </a>
    </div>
    <p>To implement Zstandard compression at Cloudflare, we needed to build it into our Nginx-based service which already handles GZIP and Brotli compression. Nginx is modular by design, with each module performing a specific function, such as compressing a response. Our custom Nginx module leverages Nginx's function 'hooks' — specifically, the header filter and body filter — to implement Zstandard compression.</p>
    <div>
      <h4>Header filter</h4>
      <a href="#header-filter">
        
      </a>
    </div>
    <p>The header filter allows us to access and modify response headers. For example, Cloudflare only compresses responses above a certain size (50 bytes for Zstandard), which is enforced with this code:</p>
            <pre><code>if (r-&gt;headers_out.content_length_n != -1 &amp;&amp;
    r-&gt;headers_out.content_length_n &lt; conf-&gt;min_length) {
    return ngx_http_next_header_filter(r);
}</code></pre>
            <p>Here, we check the "Content-Length" header. If the content length is less than our minimum threshold, we skip compression and let Nginx execute the next module.</p><p>We also need to ensure the content is not already compressed by checking the "Content-Encoding" header:</p>
            <pre><code>if (r-&gt;headers_out.content_encoding &amp;&amp;
    r-&gt;headers_out.content_encoding-&gt;value.len) {
    return ngx_http_next_header_filter(r);
}</code></pre>
            <p>If the content is already compressed, the module is bypassed, and Nginx proceeds to the next header filter.</p>
    <div>
      <h4>Body filter</h4>
      <a href="#body-filter">
        
      </a>
    </div>
    <p>The body filter hook is where the actual processing of the response body occurs. In our case, this involves compressing the data with the Zstandard encoder and streaming the compressed data back to the client. Since responses can be very large, it's not feasible to buffer the entire response in memory, so we manage internal memory buffers carefully to avoid running out of memory.</p><p>The Zstandard library is well-suited for streaming compression and provides the <code>ZSTD_compressStream2</code> function:</p>
            <pre><code>ZSTDLIB_API size_t ZSTD_compressStream2(ZSTD_CCtx* cctx,
                                        ZSTD_outBuffer* output,
                                        ZSTD_inBuffer* input,
                                        ZSTD_EndDirective endOp);</code></pre>
            <p>This function can be called repeatedly with chunks of input data to be compressed. It accepts input and output buffers and an "operation" parameter (<code>ZSTD_EndDirective endOp</code>) that controls whether to continue feeding data, flush the data, or finalize the compression process.</p><p>Nginx uses a "flush" flag on memory buffers to indicate when data can be sent. Our module uses this flag to set the appropriate Zstandard operation:</p>
<pre><code>switch (zstd_operation) {
    case ZSTD_e_continue:
        if (flush) {
            zstd_operation = ZSTD_e_flush;
        }
        break;
}
</code></pre>
            <p>This logic allows us to switch from the "ZSTD_e_continue" operation, which feeds more input data into the encoder, to "ZSTD_e_flush", which extracts compressed data from the encoder.</p>
    <div>
      <h4>Compression cycle</h4>
      <a href="#compression-cycle">
        
      </a>
    </div>
    <p>The compression module operates in the following cycle:</p><ol><li><p>Receive uncompressed data.</p></li><li><p>Locate an internal buffer to store compressed data.</p></li><li><p>Compress the data with Zstandard.</p></li><li><p>Send the compressed data back to the client.</p></li></ol><p>Once a buffer is filled with compressed data, it’s passed to the next Nginx module and eventually sent to the client. When the buffer is no longer in use, it can be recycled, avoiding unnecessary memory allocation. This process is managed as follows:</p>
            <pre><code>if (free) {
    // A free buffer is available, so use it
    buffer = free;
} else if (buffers_used &lt; maximum_buffers) {
    // No free buffers, but we're under the limit, so allocate a new one
    buffer = create_buf();
} else {
    // No free buffers and can't allocate more
    err = no_memory;
}
</code></pre>
            
    <div>
      <h4>Handling backpressure</h4>
      <a href="#handling-backpressure">
        
      </a>
    </div>
    <p>If no buffers are available, it can lead to backpressure — a situation where the Zstandard module generates compressed data faster than the client can receive it. This causes data to become "stuck" inside Nginx, halting further compression due to memory constraints. In such cases, we stop compression and send an empty buffer to the next Nginx module, allowing Nginx to attempt to send the data to the client again. When successful, this frees up memory buffers that our module can reuse, enabling continued streaming of the compressed response without buffering the entire response in memory.</p>
    <div>
      <h3>What's next? Compression dictionaries</h3>
      <a href="#whats-next-compression-dictionaries">
        
      </a>
    </div>
<p>The future of Internet compression lies in the use of <a href="https://datatracker.ietf.org/doc/draft-ietf-httpbis-compression-dictionary/"><u>compression dictionaries</u></a>. Both Brotli and Zstandard support dictionaries, offering up to a <a href="https://developer.chrome.com/blog/shared-dictionary-compression"><u>90% improvement</u></a> in compression compared to using the default static dictionaries. </p><p>Compression dictionaries store common patterns or sequences of data, allowing algorithms to compress information more efficiently by referencing these patterns rather than repeating them. This concept is akin to how an iPhone's predictive text feature works. For example, if you frequently use the phrase "On My Way," you can customize your iPhone’s dictionary to recognize the abbreviation "OMW" and automatically expand it to "On My Way" when you type it, saving you from typing six extra characters.</p><table><tr><td><p>O</p></td><td><p>M</p></td><td><p>W</p></td><td><p></p></td><td><p></p></td><td><p></p></td><td><p></p></td><td><p></p></td><td><p></p></td></tr><tr><td><p>O</p></td><td><p>n</p></td><td><p>
</p></td><td><p>M</p></td><td><p>y</p></td><td><p>
</p></td><td><p>W</p></td><td><p>a</p></td><td><p>y</p></td></tr></table><p>Traditionally, compression algorithms use a static dictionary defined in its RFC and shared between clients and origin servers. This static dictionary is designed to be broadly applicable, balancing size and compression effectiveness for general use. However, Zstandard and Brotli support custom dictionaries, tailored specifically to the content being sent to the client. For example, Cloudflare could create a specialized dictionary that focuses on frequently used terms like “Cloudflare”. This custom dictionary would compress these terms more efficiently, and a browser using the same dictionary could decode them accurately, leading to significant improvements in compression and performance.</p><p>In the future, we will enable users to leverage origin-generated dictionaries for Zstandard and Brotli to enhance compression. Another exciting area we're exploring is the use of AI to create these dictionaries dynamically without them needing to be generated at the origin. By analyzing data streams in real-time, Cloudflare could develop context-aware dictionaries tailored to the specific characteristics of the data being processed. This approach would allow users to significantly improve both compression ratios and processing speed for their applications.</p>
    <div>
      <h3>Compression Rules for everyone</h3>
      <a href="#compression-rules-for-everyone">
        
      </a>
    </div>
    <p>Today we’re also excited to announce the introduction of <a href="https://developers.cloudflare.com/rules/compression-rules/"><u>Compression Rules</u></a> for all our customers. By default, Cloudflare will automatically compress certain content types based on their <a href="https://developers.cloudflare.com/speed/optimization/content/brotli/content-compression"><u>Content-Type headers</u></a>. Customers can use compression rules to optimize how and what Cloudflare compresses. This feature was previously exclusive to our Enterprise plans.

Compression Rules is built on the same robust framework as our other rules products, such as Origin Rules, Custom Firewall Rules, and Cache Rules, with additional fields for Media Type and Extension Type. This allows you to easily specify the content you wish to compress, providing granular control over your site’s performance optimization.</p><p>Compression Rules are now available on all our pay-as-you-go plans and will be added to free plans in October 2024. In the table below, you’ll find the updated limits, including an increase to 125 Compression Rules for Enterprise plans, aligning with our other rule products' quotas.</p><table><tr><td><p><b>Plan Type</b></p></td><td><p><b>Free*</b></p></td><td><p><b>Pro</b></p></td><td><p><b>Business</b></p></td><td><p><b>Enterprise</b></p></td></tr><tr><td><p>Available Compression Rules</p></td><td><p>10</p></td><td><p>25</p></td><td><p>50</p></td><td><p>125</p></td></tr></table>
    <div>
      <h3>Using Compression Rules to enable Zstandard</h3>
      <a href="#using-compression-rules-to-enable-zstandard">
        
      </a>
    </div>
    <p>To integrate our Zstandard module into our platform, we also added support for it within our Compression Rules framework. This means that customers can now specify Zstandard as their preferred compression method, and our systems will automatically enable the Zstandard module in Nginx, disabling other compression modules when necessary.</p><p>The <code>Accept-Encoding</code> header determines which compression algorithms a client supports. If a browser supports Zstandard (<code>zstd</code>), and both Cloudflare and the zone have enabled the feature, then Cloudflare will return a Zstandard compressed response.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/tRIwu8JItGU0zyVmeTp2e/232af3ea43893022e5879e8361ed42b7/image4.png" />
            
            </figure><p>If the client does not support Zstandard, then Cloudflare will automatically fall back to Brotli, GZIP, or serve the content uncompressed where no compression algorithm is supported, ensuring compatibility. 

To enable Zstandard for your entire site or specifically filter on certain file types, all Cloudflare users can deploy a simple compression rule.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2WiodFBfD02mlPASLABkz5/e9a66b552ba46e423f352e502792b398/image3.png" />
            
</figure><p>Further details and examples of what can be accomplished with Compression Rules can be found in our <a href="https://developers.cloudflare.com/rules/compression-rules/"><u>developer documentation</u></a>.</p><p>Currently, we support Zstandard, Brotli, and GZIP as compression algorithms for traffic sent to clients, and we accept GZIP- and Brotli-compressed data from the origin (the latter since <a href="https://blog.cloudflare.com/this-is-brotli-from-origin/"><u>2023</u></a>). We plan to implement full end-to-end support for Zstandard in 2025, offering customers another effective way to reduce their egress costs.</p><p>Once Zstandard is enabled, you can view your browser’s <a href="https://developer.chrome.com/docs/devtools/network"><u>Network Activity</u></a> log to check the Content-Encoding header of the response.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1CujaUsXiEFee79Ny27Zks/1f7ef23910d4bad47c203ad311866951/image11.png" />
            
            </figure><p></p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5JIvK4tNjlxd7uFlWyEJoc/e426053d2c895c980f4c1370379c7b2e/image1.png" />
            
            </figure>
    <div>
      <h3>Enable Zstandard now!</h3>
      <a href="#enable-zstandard-now">
        
      </a>
    </div>
    <p>Zstandard is now available to all Cloudflare customers through <a href="https://dash.cloudflare.com/?to=/:account/:zone/rules/compression-rules"><u>Compression Rules</u></a> on our Enterprise and pay as you go plans, with free plans gaining access in October 2024. Whether you're optimizing for speed or aiming to reduce bandwidth, <a href="https://dash.cloudflare.com/?to=/:account/:zone/rules/compression-rules"><u>Compression Rules</u></a> give all customers granular control over their site's performance.</p>
    <div>
      <h2>Encrypted Client Hello (ECH)</h2>
      <a href="#encrypted-client-hello-ech">
        
      </a>
    </div>
    
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4BFVDlJK6Mxn0P1yk5UdHK/e348084642480eeaec9861a76e32d5f2/image9.png" />
</figure><p>While performance is crucial for delivering a fast user experience, ensuring privacy is equally important in today’s Internet landscape. As we optimize for speed with Zstandard, Cloudflare is also working to protect users' sensitive information from being exposed during data transmission. With web traffic growing more complex and interconnected, it's critical to keep both performance and privacy in balance. This is where technologies like Encrypted Client Hello (ECH) come into play, securing connections without sacrificing speed.</p><p>Ten years ago, we embarked on a mission to create a more secure and encrypted web. At the time, much of the Internet remained unencrypted, leaving user data vulnerable to interception. On September 27, 2014, we took a major step forward by enabling HTTPS for free for all Cloudflare customers. Overnight, we doubled the size of the encrypted web. This set the stage for a more secure Internet, ensuring that encryption was not a privilege limited by budget but a right accessible to everyone.</p><p>Since then, both Cloudflare and the broader community have helped encrypt more of the Internet. Projects like <a href="https://letsencrypt.org/"><u>Let's Encrypt</u></a> launched to make certificates free for everyone. Cloudflare invested to encrypt more of the connection, and to future-proof that encryption against coming technologies like <a href="https://blog.cloudflare.com/post-quantum-for-all/"><u>quantum computers</u></a>. We've always believed that it is everyone's right, regardless of budget, to have an encrypted Internet at no cost.</p><p>One of the last major challenges has been securing the SNI (Server Name Indication), which remains exposed in plaintext during the TLS handshake. This is where Encrypted Client Hello (ECH) comes in, and today, we are proud to announce that we're closing that gap. 
</p><p>Cloudflare announced support for <a href="https://blog.cloudflare.com/announcing-encrypted-client-hello/"><u>Encrypted Client Hello (ECH)</u></a> in 2023 and has continued to enhance its implementation in collaboration with our Internet browser partners. During a TLS handshake, one of the key pieces of information exchanged is the <a href="https://www.cloudflare.com/learning/ssl/what-is-sni/"><u>Server Name Indication (SNI)</u></a>, which is used to initiate a secure connection. Unfortunately, the SNI is sent in plaintext, meaning anyone can read it. Imagine hand-delivering a letter: anyone following you can see where you're delivering it, even if they don’t know the contents. With ECH, it is like sending the same confidential letter to a P.O. Box. You place your sensitive letter in a sealed inner envelope with the actual address. Then, you put that envelope into a larger, standard envelope addressed to a public P.O. Box that is trusted to securely forward it to your intended recipient. The larger envelope containing the non-sensitive information is visible to everyone, while the inner envelope holds the confidential details, such as the actual address and recipient. Just as the P.O. Box maintains the anonymity of the true recipient’s address, ECH ensures that the SNI remains protected. </p><p>While encrypting the SNI is a primary motivation for ECH, its benefits extend further. ECH encrypts the entire Client Hello, ensuring user privacy and enabling TLS to evolve without exposing sensitive connection data. By securing the full handshake, ECH allows for flexible, future-proof encryption designs that safeguard privacy as the Internet continues to grow.</p>
    <div>
      <h3>How ECH works</h3>
      <a href="#how-ech-works">
        
      </a>
    </div>
    <p>Encrypted Client Hello (ECH) introduces a layer of privacy by dividing the ClientHello message into two distinct parts: a ClientHelloOuter and a ClientHelloInner. </p><ul><li><p><b>ClientHelloOuter</b>: This part remains unencrypted and contains innocuous values for sensitive TLS extensions. It sets the SNI to Cloudflare’s public name, currently set to cloudflare-ech.com. Cloudflare manages this domain and possesses the necessary certificates to handle TLS negotiations for it.</p></li><li><p><b>ClientHelloInner</b>: This part is encrypted with a public key and includes the actual server name the client wants to visit, along with other sensitive TLS extensions. The encryption scheme ensures that this sensitive data can only be decrypted by the client-facing server, which in our case is Cloudflare.</p></li></ul><p>During the TLS handshake, the ClientHelloOuter reveals only the public name (e.g., cloudflare-ech.com), while the encrypted ClientHelloInner carries the real server name. As a result, intermediaries observing the traffic will only see cloudflare-ech.com in plaintext, concealing the actual destination.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5C7EUMmlYp3gmvfvnbeKul/ea62a3c648cd859bbb46bb6fe0761645/image13.png" />
          </figure><p>The design of ECH effectively addresses many challenges in securely deploying handshake encryption, thanks to the collaborative efforts within the <a href="https://datatracker.ietf.org/doc/draft-ietf-tls-esni/"><u>IETF community</u></a>. The key to ECH’s success is its integration with other IETF standards, including the new <a href="https://datatracker.ietf.org/doc/html/rfc9460"><u>HTTPS DNS resource record</u></a>, which enables HTTPS endpoints to advertise different TLS capabilities and simplifies key distribution. By using <a href="https://blog.cloudflare.com/dns-encryption-explained/"><u>Encrypted DNS</u></a> methods, browsers and clients can anonymously query these HTTPS records. These records contain the ECH parameters needed to initiate a secure connection. </p><p>ECH leverages the <a href="https://blog.cloudflare.com/hybrid-public-key-encryption/"><u>Hybrid Public Key Encryption (HPKE)</u></a> standard, which streamlines the handshake encryption process, making it more secure and easier to implement. Before initiating a layer 4 connection, the user’s browser makes a DNS request for an HTTPS record, and zones with ECH enabled will include an ECH configuration in the HTTPS record containing an encryption public key and some associated metadata. For example, looking at the zone cloudflare-ech.com, you can see the following record returned:</p>
<pre><code>dig cloudflare-ech.com https +short

1 . alpn="h3,h2" ipv4hint=104.18.10.118,104.18.11.118 ech=AEX+DQBB2gAgACD1W1B+GxY3nZ53Rigpsp0xlL6+80qcvZtgwjsIs4YoOwAEAAEAAQASY2xvdWRmbGFyZS1lY2guY29tAAA= ipv6hint=2606:4700::6812:a76,2606:4700::6812:b76</code></pre>
            <p>Aside from the public key used by the client to encrypt ClientHelloInner and other <a href="https://www.ietf.org/archive/id/draft-ietf-tls-esni-20.html#name-encrypted-clienthello-confi"><u>parameters</u></a> that specify the ECH configuration, the configured public name is also present.</p>
            <pre><code>Y2xvdWRmbGFyZS1lY2guY29t</code></pre>
            <p>When the string is decoded it reveals:</p>
            <pre><code>cloudflare-ech.com</code></pre>
            <p>This public name is then used by the client in the ClientHelloOuter.</p>
    <div>
      <h3>Practical implications</h3>
      <a href="#practical-implications">
        
      </a>
    </div>
    <p>With ECH, any observer monitoring the traffic between the client and Cloudflare will see only uniform TLS handshakes that appear to be directed towards <code>cloudflare-ech.com</code>, regardless of the actual website being accessed. For instance, if a user visits <code>example.com</code>, intermediaries will not discern this specific destination but will only see <code>cloudflare-ech.com</code> in the visible handshake data. </p>
    <div>
      <h3>The problem with middleboxes</h3>
      <a href="#the-problem-with-middleboxes">
        
      </a>
    </div>
    <p>In a basic HTTPS connection, a browser (client) establishes a TLS connection directly with an origin server to send requests and download content. However, many connections on the Internet do not go directly from a browser to the server but instead pass through some form of proxy or middlebox (often referred to as a "monster-in-the-middle" or MITM). This routing through intermediaries can occur for various reasons, both benign and malicious.</p><p>One common type of HTTPS interceptor is the TLS-terminating forward proxy. This proxy sits between the client and the destination server, transparently forwarding and potentially modifying traffic. To perform this task, the proxy terminates the TLS connection from the client, decrypts the traffic, and then re-encrypts and forwards it to the destination server over a new TLS connection. To avoid browser certificate validation errors, these forward proxies typically require users to install a root certificate on their devices. This root certificate allows the proxy to generate and present a trusted certificate for the destination server, a process often managed by network administrators in corporate environments, as seen with <a href="https://developers.cloudflare.com/cloudflare-one/connections/connect-devices/warp/"><u>Cloudflare WARP</u></a>. These services can help prevent sensitive company data from being transmitted to unauthorized destinations, safeguarding confidentiality.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2xfA7PB61KxnbZ96XRLahh/975153cfcd831f67f47f638ec9578cdb/image8.png" />
</figure><p>However, TLS-terminating forward proxies may not be equipped to handle Encrypted Client Hello (ECH) correctly, especially if the MITM proxy and the client-facing ECH server belong to different entities. Because the MITM proxy will terminate the TLS connection without being ECH-aware, it may provide a valid certificate for the public name (in our case, cloudflare-ech.com) without being able to decrypt the ClientHelloInner or provide a new public key for the client to use. In this case, the client considers ECH to be disabled, which means the client both loses the privacy benefit of ECH and pays the cost of an extra round trip. </p><p>We also observed that specific Cloudflare setups, such as <a href="https://developers.cloudflare.com/dns/cname-flattening/"><u>CNAME Flattening</u></a> and <a href="https://developers.cloudflare.com/cloudflare-for-platforms/cloudflare-for-saas/saas-customers/how-it-works/"><u>Orange-to-Orange configurations</u></a>, could cause ECH to break. This issue arose because the end destination for these connections did not support TLS 1.3, preventing ECH from being processed correctly. Fortunately, in close collaboration with our browser partners, we implemented a fallback in our <a href="https://boringssl.googlesource.com/boringssl/+/d274b1bacdca36f3941bf78e43dc38acf676a1a8"><u>BoringSSL</u></a> implementation that handles TLS terminations. This fallback allows browsers to retry connections over TLS 1.2 without ECH, ensuring that a connection can still be established.</p><p>As a result of these improvements, we have enabled ECH by default for all Free plans, while all other plan types can manually enable it through their <a href="https://dash.cloudflare.com/?to=/:account/:zone/ssl-tls/edge-certificates#ech-card"><u>Cloudflare dashboard</u></a> or via the API. We are excited to support ECH at scale, enhancing the privacy and security of users' browsing activities. ECH plays a crucial role in safeguarding online interactions from potential eavesdroppers and maintaining the confidentiality of web activities.</p>
    <div>
      <h2>HTTP/3 Prioritization and QUIC congestion control</h2>
      <a href="#http-3-prioritization-and-quic-congestion-control">
        
      </a>
    </div>
    <p>Two other areas we are investing in to improve performance for all our customers are <a href="https://blog.cloudflare.com/better-http-3-prioritization-for-a-faster-web/"><u>HTTP/3 Prioritization</u></a> and QUIC congestion control. </p><p>HTTP/3 Prioritization focuses on efficiently managing the order in which web assets are loaded, thereby improving web performance by ensuring critical assets are delivered faster. HTTP/3 Prioritization uses Extensible Priorities to simplify prioritization with two parameters: urgency (ranging from 0-7) and a true/false value indicating whether the resource can be processed progressively. This allows resources like HTML, CSS, and images to be prioritized based on importance.</p><p>On the other hand, QUIC congestion control aims to optimize the flow of data, preventing network bottlenecks and ensuring smooth, reliable transmission even under heavy traffic conditions. </p><p>Both of these improvements significantly impact how Cloudflare’s network serves requests to clients. Before deploying these technologies across our global network, which handles peak traffic volumes of over 80 million requests per second, we first developed a reliable method to measure their impact through rigorous experimentation.</p>
    <div>
      <h3>Measuring impact</h3>
      <a href="#measuring-impact">
        
      </a>
    </div>
    <p>Accurately measuring the impact of features implemented by Cloudflare for our customers is crucial for several reasons. These measurements ensure that optimizations related to performance, security, or reliability deliver the intended benefits without introducing new issues. Precise measurement validates the effectiveness of these changes, allowing Cloudflare to assess improvements in metrics such as load times, user experience, and overall site security. One of the best ways to measure performance changes is through aggregated real-world data.</p><p><a href="https://developers.cloudflare.com/pages/how-to/web-analytics/"><u>Cloudflare Web Analytics</u></a> offers free, privacy-first analytics for your website, helping you understand the performance of your web pages as experienced by your visitors. Real User Metrics (RUM) is a vital tool in web performance optimization, capturing data from real users interacting with a website, providing insights into site performance under real-world conditions. RUM tracks various metrics directly from the user's device, including load times, resource usage, and user interactions. This data is essential for understanding the actual user experience, as it reflects the diverse environments and conditions under which the site is accessed.</p><p>A key performance indicator measured through RUM is <a href="https://web.dev/articles/vitals#core-web-vitals"><u>Core Web Vitals (CWV)</u></a>, a set of metrics defined by Google that quantify crucial aspects of user experience on the web. CWV focuses on three main areas: loading performance, interactivity, and visual stability. The specific metrics include Largest Contentful Paint (LCP), which measures loading performance; First Input Delay (FID), which gauges interactivity; and Cumulative Layout Shift (CLS), which assesses visual stability. 
By using the CWV measurement in RUM, developers can monitor and optimize their applications to ensure a smoother, faster, and more stable user experience and track the impact of any changes they release.</p><p>Over the last three months we have developed the capability to include valuable information in Server-Timing response headers. When a page that uses Cloudflare Web Analytics is loaded in a browser, the privacy-first client-side script from Web Analytics collects browser metrics and server-timing headers, then sends back this performance data. This data is ingested, aggregated, and made available for querying. The server-timing header includes Layer 4 information, such as Round-Trip Time (RTT) and protocol type (TCP or QUIC). Combined with Core Web Vitals data, this allows us to determine whether an optimization has positively impacted a request compared to a control sample. This capability enables us to release large-scale changes such as HTTP/3 Prioritization or BBR with a clear understanding of their impact across our global network.</p><p>An example of this header contains several key properties that provide valuable information about the network performance as observed by the server:</p>
<pre><code>server-timing: cfL4;desc="?proto=TCP&amp;rtt=7337&amp;sent=8&amp;recv=8&amp;lost=0&amp;retrans=0&amp;sent_bytes=3419&amp;recv_bytes=832&amp;delivery_rate=548023&amp;cwnd=25&amp;unsent_bytes=0&amp;cid=94dae6b578f91145&amp;ts=225"</code></pre>
            <ul><li><p><b>proto</b>: Indicates the transport protocol used</p></li><li><p><b>rtt</b>: Round-Trip Time (RTT), representing the duration of the network round trip as measured by the layer 4 connection using a smoothing algorithm.</p></li><li><p><b>sent</b>: Number of packets sent.</p></li><li><p><b>recv</b>: Number of packets received.</p></li><li><p><b>lost</b>: Number of packets lost.</p></li><li><p><b>retrans</b>: Number of retransmitted packets.</p></li><li><p><b>sent_bytes</b>: Total number of bytes sent.</p></li><li><p><b>recv_bytes</b>: Total number of bytes received.</p></li><li><p><b>delivery_rate</b>: Rate of data delivery, an instantaneous measurement in bytes per second.</p></li><li><p><b>cwnd</b>: Congestion Window, an instantaneous measurement of packet or byte count depending on the protocol.</p></li><li><p><b>unsent_bytes</b>: Number of bytes not yet sent.</p></li><li><p><b>cid</b>: A 16-byte hexadecimal opaque connection ID.</p></li><li><p><b>ts</b>: Timestamp in milliseconds, representing when the data was captured.</p></li></ul><p>This real-time collection of performance data via RUM and Server-Timing headers allows Cloudflare to make data-driven decisions that directly enhance user experience. By continuously analyzing these detailed network and performance insights, we can ensure that future optimizations, such as HTTP/3 Prioritization or BBR deployment, are delivering tangible benefits for our customers.</p>
    <div>
      <h3>Enabling HTTP/3 Prioritization for all plans</h3>
      <a href="#enabling-http-3-prioritization-for-all-plans">
        
      </a>
    </div>
    <p>As part of our focus on improving <a href="https://www.cloudflare.com/learning/performance/what-is-observability/">observability</a> through the integration of the server-timing header, we implemented several minor changes to optimize QUIC handshakes. Notably, we observed positive improvements in our telemetry due to the Layer 4 observability enhancements provided by the server-timing header. These internal findings coincided with third-party measurements, which showed similar improvements in handshake performance.</p><p>In the fourth quarter of 2024, we will apply the same experimental methodology to the HTTP/3 Prioritization support announced during Speed Week 2023. <a href="https://blog.cloudflare.com/better-http-3-prioritization-for-a-faster-web/"><u>HTTP/3 Prioritization</u></a> is designed to enhance the efficiency and speed of loading web pages by intelligently managing the order in which web assets are delivered to users. This is crucial because modern web pages are composed of numerous elements — such as images, scripts, and stylesheets — that vary in importance. Proper prioritization ensures that critical elements, like primary content and layout, load first, delivering a faster and more seamless browsing experience.</p><p>We will use this testing framework to measure performance improvements before enabling the feature across all plan types. This process allows us not only to quantify the benefits but, most importantly, to ensure there are no performance regressions.</p>
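<p>For context, the prioritization signal itself travels via the Extensible Priorities scheme (RFC 9218), which HTTP/3 Prioritization builds on: each resource carries an urgency <code>u</code> (0 is most urgent, 7 least, default 3) and an incremental flag <code>i</code>. The toy scheduler below is our own illustration of that ordering, not Cloudflare's implementation:</p>

```python
from dataclasses import dataclass

@dataclass
class Response:
    """A queued response with RFC 9218-style priority parameters."""
    path: str
    urgency: int = 3          # "u" parameter, 0 (highest) .. 7 (lowest)
    incremental: bool = False  # "i" parameter: usable as it streams in

def send_order(responses):
    # Serve lower urgency values first; Python's sort is stable, so
    # resources with equal urgency keep their arrival order.
    return sorted(responses, key=lambda r: r.urgency)

queue = [
    Response("/analytics.js", urgency=5),
    Response("/app.css", urgency=1),
    Response("/hero.jpg", urgency=3, incremental=True),
]
print([r.path for r in send_order(queue)])
# → ['/app.css', '/hero.jpg', '/analytics.js']
```

<p>The stylesheet, which blocks rendering, goes out before the image, and the analytics script goes last, which is the intuition behind loading "critical elements first".</p>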
    <div>
      <h3>Congestion control</h3>
      <a href="#congestion-control">
        
      </a>
    </div>
    <p>Following the completion of the HTTP/3 Prioritization experiments, we will begin testing different congestion control algorithms, specifically focusing on <a href="https://cloud.google.com/blog/products/networking/tcp-bbr-congestion-control-comes-to-gcp-your-internet-just-got-faster"><u>BBR</u></a> (Bottleneck Bandwidth and Round-trip propagation time) version 3. Congestion control is a crucial mechanism in network communication that aims to optimize data transfer rates while avoiding network congestion. When too much data is sent too quickly over a network, it can lead to congestion, causing packet loss, delays, and reduced overall performance. Think of a busy highway during rush hour. If too many cars (data packets) flood the highway at once, traffic jams occur, slowing everyone down.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1kR4Ekkg2eUPzO4Lj8CrjY/d002875b8f77f782d13bc0ef199ba931/image6.png" />
          </figure><p>Congestion control algorithms act like traffic managers, regulating the flow of data to prevent these “traffic jams,” ensuring that data moves smoothly and efficiently across the network. Each side of a connection runs an algorithm in real time, dynamically adjusting the flow of data based on the current and predicted network conditions.</p><p>BBR is an advanced congestion control algorithm, initially developed by Google. BBR seeks to estimate the actual available bandwidth and the minimum round-trip time (RTT) to determine the optimal data flow. This approach allows BBR to maintain high throughput while minimizing latency, leading to more efficient and stable network performance.</p><p><a href="https://github.com/google/bbr/blob/v3/README.md"><u>BBR v3</u></a>, the latest iteration, builds on the strengths of its predecessors BBRv1 and BBRv2 by further refining its bandwidth estimation techniques and enhancing its adaptability to varying network conditions. We found BBR v3 to be faster in several cases compared to our previous implementation of <a href="https://datatracker.ietf.org/doc/html/rfc8312"><u>CUBIC</u></a>. Most importantly, it reduced loss and retransmission rates in our <a href="https://blog.cloudflare.com/introducing-oxy/"><u>Oxy</u></a> proxy implementation.</p>
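<p>To make the idea concrete, here is a minimal sketch of BBR's central calculation, the bandwidth-delay product (BDP). This is our simplification of the published BBR design, not Cloudflare's quiche code:</p>

```python
# BBR tracks the recent maximum delivery rate and minimum RTT, and sizes
# the amount of in-flight data around their product, the BDP.

def bdp_bytes(bottleneck_bw_bps: float, min_rtt_s: float) -> float:
    """Bandwidth-delay product: bytes 'in the pipe' at full utilization."""
    return bottleneck_bw_bps / 8 * min_rtt_s

# Example: a 100 Mbit/s bottleneck with a 20 ms round trip.
bdp = bdp_bytes(100e6, 0.020)
print(f"BDP = {bdp:.0f} bytes")  # BDP = 250000 bytes

# BBR caps in-flight data at a small multiple of the BDP (a cwnd gain of
# roughly 2) rather than growing the window until packet loss occurs,
# which is how loss-based algorithms like CUBIC probe for capacity.
cwnd_gain = 2.0
inflight_cap = cwnd_gain * bdp
print(f"in-flight cap = {inflight_cap:.0f} bytes")
```

<p>Keeping in-flight data near the BDP is what lets BBR fill the link without building the deep queues (and the loss and retransmissions) that loss-based probing causes.</p>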
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/61VpPQXJWTHnrnlb7dz2um/f1aecca8fea1bbeced3074d46389c3db/image7.png" />
            
            </figure><p>With these promising results, we are excited to test various congestion control algorithms including BBRv3 for <a href="https://github.com/cloudflare/quiche"><u>quiche</u></a>, our QUIC implementation, across our HTTP/3 traffic. Combining the layer 4 server-timing information with experiments in this area will enable us to explicitly control and measure the impact on real-world metrics.</p>
    <div>
      <h2>The future</h2>
      <a href="#the-future">
        
      </a>
    </div>
    <p>The future of the Internet relies on continuous innovation to meet the growing demands for speed, security, and scalability. Technologies like Zstandard for compression, BBR for congestion control, HTTP/3 prioritization, and Encrypted Client Hello are setting new standards for performance and privacy. By implementing these protocols, web services can achieve faster page load times, more efficient bandwidth usage, and stronger protections for user data.</p><p>These advancements don't just offer incremental improvements, they provide a significant leap forward in optimizing the user experience and safeguarding online interactions. At Cloudflare, we are committed to making these technologies accessible to everyone, empowering businesses to deliver better, faster, and more secure services. </p><p>Stay tuned for more developments as we continue to push the boundaries of what's possible on the web and if you’re passionate about building and implementing the latest Internet innovations, we’re <a href="https://www.cloudflare.com/careers/jobs/"><u>hiring</u></a>!</p> ]]></content:encoded>
            <category><![CDATA[Birthday Week]]></category>
            <category><![CDATA[Privacy]]></category>
            <category><![CDATA[Compression]]></category>
            <category><![CDATA[TLS]]></category>
            <guid isPermaLink="false">6YLU1FUKD4lioSrbpOnb5r</guid>
            <dc:creator>Matt Bullock</dc:creator>
            <dc:creator>Maciej Lechowski</dc:creator>
            <dc:creator>Rushil Mehra</dc:creator>
        </item>
        <item>
            <title><![CDATA[Introducing Automatic SSL/TLS: securing and simplifying origin connectivity]]></title>
            <link>https://blog.cloudflare.com/introducing-automatic-ssl-tls-securing-and-simplifying-origin-connectivity/</link>
            <pubDate>Thu, 08 Aug 2024 14:05:00 GMT</pubDate>
            <description><![CDATA[ This new Automatic SSL/TLS setting will maximize and simplify the encryption modes Cloudflare uses to communicate with origin servers by using the SSL/TLS Recommender. ]]></description>
            <content:encoded><![CDATA[ 
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4YIQCIdM9Td1RJfdCkg3o5/6fc5cd824f819658e00007c61f69ce71/1885-1-Hero.png" />
          </figure><p>During Birthday Week 2022, we <a href="https://blog.cloudflare.com/securing-origin-connectivity"><u>pledged</u></a> to provide our customers with the most secure connection possible from Cloudflare to their origin servers automatically. I’m thrilled to announce we will begin rolling this experience out to customers who have the <a href="https://blog.cloudflare.com/ssl-tls-recommender"><u>SSL/TLS Recommender</u></a> enabled on <b>August 8, 2024</b>. Following this, remaining Free and Pro customers can use this feature beginning <b>September 16, 2024</b>, with Business and Enterprise customers to follow.</p><p>Although it took longer than anticipated to roll out, our priority was to achieve an automatic configuration both transparently and without risking any site downtime. Taking this additional time allowed us to balance enhanced security with seamless site functionality, especially since origin server security configuration and capabilities are beyond Cloudflare's direct control. The new Automatic SSL/TLS setting will maximize and simplify the <a href="https://developers.cloudflare.com/ssl/origin-configuration/ssl-modes/"><u>encryption modes</u></a> Cloudflare uses to communicate with origin servers by using the <a href="https://blog.cloudflare.com/ssl-tls-recommender"><u>SSL/TLS Recommender</u></a>. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/53WNT2fwr0HuN2L0M5PKnJ/c005f100b455fd699d32d2f602ebf447/1885-2.png" />
          </figure><p>We first talked about this process in <a href="https://blog.cloudflare.com/introducing-universal-ssl"><u>2014</u></a>: at that time, securing connections was hard to configure, prohibitively expensive, and required specialized knowledge to set up correctly. To help alleviate these pains, Cloudflare introduced Universal SSL, which allowed web properties to obtain a <a href="https://www.cloudflare.com/application-services/products/ssl/"><u>free SSL/TLS certificate</u></a> to enhance the security of connections between browsers and Cloudflare. </p><p>This worked well and was easy because Cloudflare could manage the certificates and connection security from incoming browsers. As a result of that work, the number of encrypted HTTPS connections on the entire Internet <a href="https://blog.cloudflare.com/introducing-universal-ssl#:~:text=we%27ll%20have%20doubled%20that"><u>doubled</u></a> at that time. However, the connections made from Cloudflare to origin servers still required <i>manual</i> configuration of the encryption <a href="https://developers.cloudflare.com/ssl/origin-configuration/ssl-modes/"><u>modes</u></a> to let Cloudflare know the capabilities of the origin. </p><p>Today we’re excited to begin the sequel to Universal SSL and make security between Cloudflare and origins automatic and easy for everyone.</p>
    <div>
      <h2>History of securing origin-facing connections</h2>
      <a href="#history-of-securing-origin-facing-connections">
        
      </a>
    </div>
    <p>Ensuring that more bytes flowing across the Internet are automatically encrypted strengthens the barrier against interception, throttling, and censorship of Internet traffic by third parties. </p><p>Generally, two communicating parties (often a client and server) establish a secure connection using the <a href="https://www.cloudflare.com/learning/ssl/what-happens-in-a-tls-handshake/"><u>TLS</u></a> protocol. For a simplified breakdown: </p><ul><li><p>The client advertises the list of encryption parameters it supports (along with some metadata) to the server.</p></li><li><p>The server responds back with its own preference of the chosen encryption parameters. It also sends a digital certificate so that the client can authenticate its identity.</p></li><li><p>The client validates the server identity, confirming that the server is who it says it is.</p></li><li><p>Both sides agree on a <a href="https://www.cloudflare.com/learning/ssl/what-is-asymmetric-encryption/#:~:text=What%20is-,symmetric,-encryption%3F"><u>symmetric</u></a> secret key for the session that is used to encrypt and decrypt all transmitted content over the connection.</p></li></ul><p>Because Cloudflare acts as an intermediary between the client and our customer’s origin server, two separate TLS connections are established. One between the user’s browser and our network, and the other from our network to the origin server. This allows us to manage and optimize the security and performance of both connections independently.</p>
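<p>The four steps above can be modeled with a toy exchange (illustrative only; real TLS performs (EC)DHE key agreement and HKDF-based key derivation, and the helper names below are made up for the example):</p>

```python
# Toy model of TLS negotiation: the client offers cipher suites, the
# server picks its preference among them, and both sides derive the
# same symmetric session key from a shared secret plus the transcript.
import hashlib
import secrets

def server_choose(client_offer, server_prefs):
    # The server responds with its own preference among the suites the
    # client advertised.
    for suite in server_prefs:
        if suite in client_offer:
            return suite
    raise ValueError("no common cipher suite")

client_offer = ["TLS_AES_128_GCM_SHA256", "TLS_CHACHA20_POLY1305_SHA256"]
server_prefs = ["TLS_CHACHA20_POLY1305_SHA256", "TLS_AES_256_GCM_SHA384"]
suite = server_choose(client_offer, server_prefs)

# Both sides mix the same handshake transcript and shared secret into
# the session key (real TLS uses HKDF over the (EC)DHE output).
shared_secret = secrets.token_bytes(32)  # stands in for the ECDHE result
transcript = (",".join(client_offer) + suite).encode()
session_key = hashlib.sha256(shared_secret + transcript).digest()

print(suite)  # TLS_CHACHA20_POLY1305_SHA256
```

<p>Certificate validation (step three) is deliberately omitted here; it is exactly the part that differs between Cloudflare's encryption modes described below.</p>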
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6s0NxfVR5tCXuAzhI8pYdw/f1f48be437de48bf1b60495647016fbb/1885-3.png" />
          </figure><p>Unlike securing connections between clients and Cloudflare, the security capabilities of origin servers are not under our direct control. For example, we can manage the <a href="https://www.cloudflare.com/en-gb/learning/ssl/what-is-an-ssl-certificate/"><u>certificate</u></a> (the file used to verify identity and provide context on establishing encrypted connections) between clients and Cloudflare because it’s our job in that connection to provide it to clients, but when talking to origin servers, Cloudflare <i>is</i> the client.</p><p>Customers need to <a href="https://developers.cloudflare.com/ssl/origin-configuration/origin-ca/"><u>acquire and provision</u></a> an origin certificate on their host. They then have to configure Cloudflare to expect the new certificate from the origin when opening a connection. Needing to manually configure connection security across multiple different places requires effort and is prone to human error. </p><p>This issue was discussed in the original <a href="https://blog.cloudflare.com/introducing-universal-ssl"><u>Universal SSL blog</u></a>:</p><blockquote><p><i>For a site that did not have SSL before, we will default to our </i><a href="https://support.cloudflare.com/hc/en-us/articles/200170416-What-do-the-SSL-options-Off-Flexible-SSL-Full-SSL-Full-SSL-Strict-mean-"><i><u>Flexible SSL mode</u></i></a><i>, which means traffic from browsers to Cloudflare will be encrypted, but traffic from Cloudflare to a site's origin server will not. 
We strongly recommend site owners install a certificate on their web servers so we can encrypt traffic to the origin … Once you've installed a certificate on your web server, you can enable the </i><a href="https://support.cloudflare.com/hc/en-us/articles/200170416-What-do-the-SSL-options-Off-Flexible-SSL-Full-SSL-Full-SSL-Strict-mean-"><i><u>Full or Strict SSL modes</u></i></a><i> which encrypt origin traffic and provide a higher level of security.</i></p></blockquote><p>Over the years Cloudflare has introduced numerous products to help customers configure how Cloudflare should talk to their origin. These products include a <a href="https://blog.cloudflare.com/universal-ssl-encryption-all-the-way-to-the-origin-for-free/"><u>certificate authority</u></a> to help customers obtain a certificate to verify their origin server’s identity and encryption capabilities, <a href="https://developers.cloudflare.com/ssl/origin-configuration/authenticated-origin-pull/"><u>Authenticated Origin Pulls</u></a> that ensures only HTTPS (encrypted) requests from Cloudflare will receive a response from the origin server, and <a href="https://developers.cloudflare.com/cloudflare-one/connections/connect-networks/"><u>Cloudflare Tunnels</u></a> that can be configured to proactively establish secure and private tunnels to the nearest Cloudflare data center. Additionally, the <a href="https://datatracker.ietf.org/doc/html/rfc8555/"><u>ACME</u></a> protocol and its corresponding <a href="https://certbot.eff.org/"><u>Certbot</u></a> tooling make it easier than ever to obtain and manage publicly-trusted certificates on customer origins. While these technologies help customers configure how Cloudflare should communicate with their origin server, they still require manual configuration changes on the origin and to Cloudflare settings. 
</p><p>Ensuring certificates are configured appropriately on origin servers and informing Cloudflare about how we should communicate with origins can be anxiety-inducing because misconfiguration can lead to downtime if something isn’t deployed or configured correctly. </p><p>To simplify this process and help identify the most secure options that customers could be using without any misconfiguration risk, <b>Cloudflare introduced the </b><a href="https://blog.cloudflare.com/ssl-tls-recommender"><b><u>SSL/TLS Recommender</u></b></a><b> in 2021.</b> The Recommender works by probing customer origins with different SSL/TLS settings to provide a recommendation whether the SSL/TLS <a href="https://developers.cloudflare.com/ssl/origin-configuration/ssl-modes/"><u>encryption mode</u></a> for the web property can be improved. The Recommender has been in production for three years and has consistently managed to provide high quality origin-security recommendations for Cloudflare’s customers. </p><p>The SSL/TLS Recommender system serves as the brain of the automatic origin connection service that we are announcing today. </p>
    <div>
      <h2>How does SSL/TLS Recommendation work?</h2>
      <a href="#how-does-ssl-tls-recommendation-work">
        
      </a>
    </div>
    <p>The Recommender works by actively comparing content on web pages that have been downloaded using different SSL/TLS modes to see if it is safe and risk-free to update the <a href="https://developers.cloudflare.com/ssl/origin-configuration/ssl-modes/"><u>mode</u></a> Cloudflare uses to connect to origin servers.</p><p>Cloudflare currently offers five SSL/TLS modes:</p><ol><li><p><b>Off</b>: No encryption is used for traffic between browsers and Cloudflare or between Cloudflare and origins. Everything is cleartext HTTP.</p></li><li><p><b>Flexible</b>: Traffic from browsers to Cloudflare can be encrypted via HTTPS, but traffic from Cloudflare to the origin server is not. This mode is common for origins that do not support TLS, though upgrading the origin configuration is recommended whenever possible. A guide for upgrading is available <a href="https://developers.cloudflare.com/ssl/origin-configuration/ssl-modes/full/#required-setup"><u>here</u></a>.</p></li><li><p><b>Full</b>: Cloudflare matches the browser request protocol when connecting to the origin. If the browser uses HTTP, Cloudflare connects to the origin via HTTP; if HTTPS, Cloudflare uses HTTPS without validating the origin’s certificate. This mode is common for origins that use self-signed or otherwise invalid certificates.</p></li><li><p><b>Full (Strict)</b>: Similar to Full Mode, but with added validation of the origin server’s certificate, which can be issued by a public CA like Let’s Encrypt or by Cloudflare Origin CA.</p></li><li><p><b>Strict (SSL-only origin pull)</b>: Regardless of whether the browser-to-Cloudflare connection uses HTTP or HTTPS, Cloudflare always connects to the origin over HTTPS with certificate validation.</p></li></ol><table><tr><th><p>
</p></th><th><p><b>HTTP from visitor</b></p></th><th><p><b>HTTPS from visitor</b></p></th></tr><tr><td><p><b>Off</b></p></td><td><p>HTTP to origin</p></td><td><p>HTTP to origin</p></td></tr><tr><td><p><b>Flexible</b></p></td><td><p>HTTP to origin</p></td><td><p>HTTP to origin</p></td></tr><tr><td><p><b>Full</b></p></td><td><p>HTTP to origin</p></td><td><p>HTTPS without cert validation to origin</p></td></tr><tr><td><p><b>Full (strict)</b></p></td><td><p>HTTP to origin</p></td><td><p>HTTPS with cert validation to origin</p></td></tr><tr><td><p><b>Strict (SSL-only origin pull)</b></p></td><td><p>HTTPS with cert validation to origin</p></td><td><p>HTTPS with cert validation to origin</p></td></tr></table><p>
The SSL/TLS Recommender works by crawling customer sites and collecting links on the page (like any web crawler). The Recommender downloads content over both HTTP and HTTPS, making GET requests to avoid modifying server resources. It then uses a content similarity algorithm, adapted from the research paper "<a href="https://www.cs.umd.edu/~dml/papers/https_tma20.pdf"><u>A Deeper Look at Web Content Availability and Consistency over HTTP/S</u></a>" (TMA Conference 2020), to determine if content matches. If the content does match, the Recommender determines whether the <a href="https://developers.cloudflare.com/ssl/origin-configuration/ssl-modes/"><u>SSL/TLS mode</u></a> can be increased without misconfiguration risk. </p><p>The recommendations are currently delivered to customers via email. </p><p>When the Recommender is making security recommendations, it errs on the side of maintaining current site functionality to avoid breakage and usability issues. If a website is non-functional, blocks all bots, or has SSL/TLS-specific Page Rules or Configuration Rules, the Recommender may not complete its scans and provide a recommendation. It was designed to maximize <a href="https://www.cloudflare.com/application-services/solutions/domain-protection-services/">domain security</a>, but will not help resolve website or domain functionality issues.</p><p>The crawler uses the user agent "<code>Cloudflare-SSLDetector</code>" and is included in Cloudflare’s list of known <a href="https://bots-directory.cfdata.org/bot/cloudflare-ssl-detector"><u>good bots</u></a>. 
It ignores <code>robots.txt</code> (except for rules specifically targeting its user agent) to ensure accurate recommendations.</p><p>When downloading content from your origin server over both HTTP and HTTPS and comparing the content, the Recommender understands the current SSL/TLS encryption mode that your website uses and what risk there might be to the site functionality if the recommendation is followed.</p>
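<p>A minimal sketch of the core comparison, assuming a similarity check along the lines of the cited paper (the helper and threshold below are illustrative, not the Recommender's actual algorithm):</p>

```python
# Fetch the same page over HTTP and HTTPS, then only recommend a
# stricter SSL/TLS mode when the two bodies are effectively identical.
import difflib

def similar_enough(http_body: str, https_body: str,
                   threshold: float = 0.98) -> bool:
    # Ratio of matching characters: 1.0 means identical content.
    ratio = difflib.SequenceMatcher(None, http_body, https_body).ratio()
    return ratio >= threshold

# Matching content: safe to suggest an upgraded mode.
print(similar_enough("<html>hello world</html>",
                     "<html>hello world</html>"))  # True

# Divergent content (e.g. HTTPS serves an error page): no recommendation,
# since upgrading the mode would risk breaking the site.
print(similar_enough("<html>hello world</html>",
                     "<html>502 Bad Gateway</html>"))  # False
```

<p>Erring on the side of a high threshold mirrors the Recommender's bias toward preserving site functionality over maximizing the mode.</p>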
    <div>
      <h2>Using SSL/TLS Recommender to automatically manage SSL/TLS settings </h2>
      <a href="#using-ssl-tls-recommender-to-automatically-manage-ssl-tls-settings">
        
      </a>
    </div>
    <p>Previously, signing up for the SSL/TLS Recommender provided a good experience for customers, but only resulted in an email recommendation in the event that a zone’s current SSL/TLS modes could be updated. To Cloudflare, this was a positive signal that customers wanted their websites to have more secure connections to their origin servers – over 2 million domains have enabled the SSL/TLS Recommender. However, we found that a significant number of users would not complete the next step of pushing the button to inform Cloudflare that we could communicate over the upgraded settings. <b>Only 30% of the recommendations that the system provided were followed. </b></p><p>With the system designed to increase security while avoiding any breaking changes, we wanted to provide an option for customers to allow the Recommender to help upgrade their site security, without requiring further manual action from the customer. <b>Therefore, we are introducing a new option for managing SSL/TLS configuration on Cloudflare: Automatic SSL/TLS. </b></p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/21q0D6rvhXHQxRe2ko4ITA/d5ca2f9a7139a2f55a16ca8bcf783ee0/1885-4.png" />
          </figure><p></p><p>Automatic SSL/TLS uses the SSL/TLS Recommender to determine the most secure and safest encryption mode for a website. If there is a <b>more secure</b> option for your website (based on your origin certificate or capabilities), Automatic SSL/TLS will find it and apply it for your domain. The other option, <b>Custom SSL/TLS,</b> works exactly like setting the encryption mode does today. If you know what setting you want, just select it using Custom SSL/TLS, and we’ll use it. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/jFTSsmxG2WH0FqTklAJwb/eff9f692cdec3d199d32996bb0111441/1885-5.png" />
          </figure><p></p><p>Automatic SSL/TLS is currently meant to service an entire website, which typically works well for those with a single origin. For those concerned that they have more complex setups which use multiple origin servers with different security capabilities, don’t worry. Automatic SSL/TLS will still avoid breaking site functionality by looking for the best setting that works for all origins serving a part of the site’s traffic. </p><p>If customers want to segment the SSL/TLS mode used to communicate with the numerous origins that service their domain, they can achieve this by using <a href="https://developers.cloudflare.com/rules/configuration-rules/"><u>Configuration Rules</u></a>. These rules allow you to set more precise modes that Cloudflare should respect (based on path or subdomain or even IP address) to maximize the security of the domain based on your desired Rules criteria. If your site uses SSL/TLS-specific settings in a Configuration Rule or Page rule, those settings will <b>override the zone-wide Automatic and Custom settings.</b></p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6PCXOjFBtEucRUOP3BoMGQ/6ba2700c18cf4c49782bdf2d0ee33435/1885-6.png" />
          </figure><p></p><p>The goal of Automatic SSL/TLS is to simplify and maximize the origin-facing security for customers on Cloudflare. We want this to be the new default for all websites on Cloudflare, but we understand that not everyone wants this new default, and we will respect your decision for how Cloudflare should communicate with your origin server. If you block the Recommender from completing its crawls, if the origin server is non-functional or can’t be crawled, or if you want to opt out of this default and just continue using the same encryption mode you are using today, we will make it easy for you to tell us what you prefer. </p>
    <div>
      <h2>How to onboard to Automatic SSL/TLS</h2>
      <a href="#how-to-onboard-to-automatic-ssl-tls">
        
      </a>
    </div>
    <p>To improve the security settings for everyone by default, we are making the following default changes to how Cloudflare configures the SSL/TLS level for all zones: </p><p>Starting on <b>August 8, 2024</b>, websites with the <b>SSL/TLS Recommender currently enabled</b> will have the Automatic SSL/TLS setting enabled by default. Enabling does not mean that the Recommender will begin scanning and applying new settings immediately, though. There will be a <b><u>one-month grace period</u></b> before the first scans begin and the recommended settings are applied. Enterprise (ENT) customers will get a <b><u>six-week grace period</u></b>. Origin scans will start getting scheduled by <b>September 9, 2024</b> for non-Enterprise customers and <b>September 23, 2024</b> for ENT customers with the SSL/TLS Recommender enabled. This will give customers the ability to opt out by removing Automatic SSL/TLS and selecting the Custom mode that they want to use instead.</p><p>Further, during the second week of September, <b>all new zones signing up for Cloudflare</b> will start seeing the Automatic SSL/TLS setting enabled by default.</p><p>Beginning <b>September 16, 2024, </b>remaining <b>Free and Pro</b> customers will start to see the new Automatic SSL/TLS setting. They will also have a one-month grace period to opt out before the scans start taking effect. </p><p>Customers in the cohort having the new Automatic SSL/TLS setting applied will receive an email communication regarding the date that they are slated for this migration, as well as a banner on the dashboard that mentions this transition. If they do not wish for Cloudflare to change anything in their configurations, the process for opting out of this migration is outlined below. </p><p>Following the successful migration of Free and Pro customers, we will proceed to Business and Enterprise customers with a similar cadence. 
These customers will get email notifications and information in the dashboard when they are in the migration cohort.</p><p>The Automatic SSL/TLS setting will not impact users that are already in Strict or Full (strict) mode nor will it impact websites that have opted-out. </p>
    <div>
      <h2>Opting out</h2>
      <a href="#opting-out">
        
      </a>
    </div>
    <p>There are a number of reasons why someone might want to configure a lower-than-optimal security setting for their website. Some may want to set a lower security setting for testing purposes or to debug some behavior. Whatever the reason, the options to opt out of the Automatic SSL/TLS setting during the migration process are available in the dashboard and API.</p><p>To opt out, simply select <b>Custom SSL/TLS</b> in the dashboard (instead of the enabled Automatic SSL/TLS) and we will continue to use the encryption mode that you were using prior to the migration. Automatic and Custom SSL/TLS modes can be found in the <b>Overview</b> tab of the SSL/TLS section of the dashboard. To enable your preferred mode, select <b>configure</b>.  </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4meNmREGaXd1FJfxUKr5NN/bdbe1e07a2121d2f9ec2a11e64c77b7f/1885-7.png" />
          </figure><p></p><p>If you want to opt out via the API, you can make this API call on or before the grace period expiration date. </p>
            <pre><code>    curl --request PATCH \
        --url https://api.cloudflare.com/client/v4/zones/&lt;insert_zone_tag_here&gt;/settings/ssl_automatic_mode \
        --header 'Authorization: Bearer &lt;insert_api_token_here&gt;' \
        --header 'Content-Type: application/json' \
        --data '{"value":"custom"}'
</code></pre>
            <p></p><p>If an opt-out is triggered, there will not be a change to the currently configured SSL/TLS setting. You are also able to change the security level at any time by going to the SSL/TLS section of the dashboard and choosing the Custom setting you want (similar to how this is accomplished today). </p><p>If at a later point you’d like to opt-in to Automatic SSL/TLS, that option is available by changing your setting from Custom to Automatic.</p>
    <div>
      <h2>What if I want to be more secure now?</h2>
      <a href="#what-if-i-want-to-be-more-secure-now">
        
      </a>
    </div>
    <p>We will begin to roll out this change to customers with the SSL/TLS Recommender enabled on <b>August 8, 2024</b>. If you want to enroll in that group, we recommend enabling the Recommender as soon as possible. </p><p>If you read this and want to make sure you’re at the highest level of backend security already, we recommend <a href="https://developers.cloudflare.com/ssl/origin-configuration/ssl-modes/full-strict/"><u>Full (strict)</u></a> or <a href="https://developers.cloudflare.com/ssl/origin-configuration/ssl-modes/ssl-only-origin-pull/"><u>Strict mode</u></a>. Directions on how to make sure you’re correctly configured in either of those settings are available <a href="https://developers.cloudflare.com/ssl/origin-configuration/ssl-modes/full-strict/#required-setup"><u>here</u></a> and <a href="https://developers.cloudflare.com/ssl/origin-configuration/ssl-modes/ssl-only-origin-pull/#required-setup"><u>here</u></a>. </p><p>If you prefer to wait for us to automatically upgrade your connection to the maximum encryption mode your origin supports, please watch your inbox for the date we will begin rolling out this change for you.</p> ]]></content:encoded>
            <category><![CDATA[SSL]]></category>
            <category><![CDATA[TLS]]></category>
            <category><![CDATA[Network Services]]></category>
            <category><![CDATA[Encryption]]></category>
            <category><![CDATA[Research]]></category>
            <guid isPermaLink="false">2lhAhlWMei6M2NkhzAuULC</guid>
            <dc:creator>Alex Krivit</dc:creator>
            <dc:creator>Suleman Ahmad</dc:creator>
            <dc:creator>J Evans</dc:creator>
            <dc:creator>Yawar Jamal</dc:creator>
        </item>
        <item>
            <title><![CDATA[Avoiding downtime: modern alternatives to outdated certificate pinning practices]]></title>
            <link>https://blog.cloudflare.com/why-certificate-pinning-is-outdated/</link>
            <pubDate>Mon, 29 Jul 2024 13:00:37 GMT</pubDate>
            <description><![CDATA[ Outages caused by certificate pinning are increasing. Learn why certificate pinning hasn’t kept up with modern standards, and find alternatives that improve security while reducing management overhead. ]]></description>
            <content:encoded><![CDATA[ <p>In today’s world, technology is quickly evolving and some practices that were once considered the gold standard are quickly becoming outdated. At Cloudflare, we stay close to industry changes to ensure that we can provide the best solutions to our customers. One practice that we’re continuing to see in use that no longer serves its original purpose is certificate pinning. In this post, we’ll dive into certificate pinning, the consequences of using it in today’s <a href="https://www.cloudflare.com/en-gb/learning/ssl/how-does-public-key-encryption-work/">Public Key Infrastructure (PKI)</a> world, and alternatives to pinning that offer the same level of security without the management overhead.   </p><p>PKI exists to help issue and manage <a href="https://www.cloudflare.com/learning/ssl/what-is-an-ssl-certificate/">TLS certificates</a>, which are vital to keeping the Internet secure – they ensure that users access the correct applications or servers and that data between two parties stays encrypted. The mis-issuance of a certificate can pose great risk. For example, if a malicious party is able to issue a TLS certificate for your bank’s website, then they can potentially impersonate your bank and intercept that traffic to get access to your bank account. To prevent a mis-issued certificate from intercepting traffic, the server can give a certificate to the client and say “only trust connections if you see this certificate and drop any responses that present a different certificate” – this practice is called certificate pinning. </p><p>In the early 2000s, it was common for banks and other organizations that handle sensitive data to pin certificates to clients. 
However, over the last 20 years, TLS certificate issuance has evolved and changed, and new solutions have been developed to help customers achieve the security benefit they receive through certificate pinning, with simpler management, and without the risk of disruption.</p><p>Cloudflare’s mission is to help build a better Internet, which is why our teams are always focused on <a href="https://www.cloudflare.com/application-services/solutions/domain-protection-services/">keeping domains secure and online</a>.</p>
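The exact-match check described above can be sketched in a few lines of Python’s standard library. This is an illustrative leaf-certificate pin, not a production implementation; the pinned fingerprint would be your own, distributed out of band:

```python
import hashlib
import socket
import ssl

def fingerprint(der_cert: bytes) -> str:
    """SHA-256 fingerprint of a DER-encoded certificate."""
    return hashlib.sha256(der_cert).hexdigest()

def connect_with_pin(host: str, pinned: str, port: int = 443) -> None:
    """Open a TLS connection, then abort unless the server's leaf
    certificate matches the pinned fingerprint exactly."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            der = tls.getpeercert(binary_form=True)
            if fingerprint(der) != pinned:
                raise ssl.SSLError("certificate does not match pin")
```

Because the fingerprint covers the whole certificate, any renewal produces a new fingerprint, and `connect_with_pin` starts rejecting a perfectly valid certificate until the pin is updated — the failure mode this post describes.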
    <div>
      <h2>Why certificate pinning is causing more outages now than it did before</h2>
      <a href="#why-certificate-pinning-is-causing-more-outages-now-than-it-did-before">
        
      </a>
    </div>
    <p>Certificate pinning is not a new practice, so why are we emphasizing the need to stop using it <i>now</i>? The reason is that the PKI ecosystem is moving towards becoming more agile, flexible, and secure. As a part of this change, <a href="https://www.ssl.com/article/what-is-a-certificate-authority-ca/">certificate authorities (CAs)</a> are starting to rotate certificates and their intermediates, certificates that bridge the root certificate and the domain certificate, more frequently to <a href="https://www.cloudflare.com/application-services/solutions/certificate-lifecycle-management/">improve security and encourage automation</a>.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6bcPbw10tESD1qHK6cP9yE/ac3f3df18f10a8a4595386a49a7ecd07/image3-12.png" />
            
            </figure><p>These more frequent certificate rotations are problematic from a pinning perspective because certificate pinning relies on the exact matching of certificates. When a certificate is rotated, the new certificate might not match the pinned certificate on the client side. If the pinned certificate is not updated to reflect the contents of the rotated certificate, the client will reject the new certificate, even if it’s valid and issued by the same CA. This mismatch leads to a failure in establishing a secure connection, causing the domain or application to become inaccessible until the pinned certificate is updated.</p><p>Since the start of 2024, we have seen the number of customer-reported outages caused by certificate pinning significantly increase. (As of this writing, we are partway through July and Q3 has already seen as many outages as the last three quarters of 2023 combined.)</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1U4AFw29oL1GCcen93rnYr/5dea495d82170011725de472cfd7f98a/image4-4.png" />
            
            </figure><p>We can attribute this rise to two major events: Cloudflare’s migration away from DigiCert as a certificate authority, and intermediate CA rotations at Google and Let’s Encrypt.</p><p>Before migrating customers’ certificates away from DigiCert as the CA, Cloudflare sent multiple notifications to customers to inform them that they should update or remove their certificate pins, so that the migration would not impact their domain’s availability.</p><p>However, what we’ve learned is that almost all customers who were impacted by the change were unaware that they had a certificate pin in place. One of the consequences of using certificate pinning is that the “set and forget” mentality doesn’t work, now that certificates are being rotated more frequently. Instead, changes need to be closely monitored to ensure that a regular certificate renewal doesn't cause an outage. This goes to show that to implement certificate pinning successfully, customers need a robust system in place to track certificate changes and implement them.</p><p>We built our <a href="https://developers.cloudflare.com/ssl/edge-certificates/universal-ssl/">Universal SSL</a> pipeline to be resilient and redundant, ensuring that we can always <a href="https://www.cloudflare.com/application-services/products/ssl/">issue and renew a TLS certificate</a> on behalf of our customers, <a href="/introducing-backup-certificates">even in the event of a compromise or revocation</a>. CAs are starting to make changes like more frequent certificate rotations to encourage a move towards a more secure ecosystem. Now, it’s up to domain owners to stop implementing legacy practices like certificate pinning, which cause breaking changes, and instead start adopting modern standards that aim to provide the same level of security, but without the management overhead or risk.</p>
    <div>
      <h2>Modern standards &amp; practices are making the need for certificate pinning obsolete</h2>
      <a href="#modern-standards-practices-are-making-the-need-for-certificate-pinning-obsolete">
        
      </a>
    </div>
    
    <div>
      <h3>Shorter certificate lifetimes</h3>
      <a href="#shorter-certificate-lifetimes">
        
      </a>
    </div>
    <p>Over the last few years, we have seen certificate lifetimes quickly decrease. Before 2011, a certificate could be valid for up to 96 months (eight years) and now, in 2024, the maximum validity period for a certificate is 1 year. We’re seeing this trend continue to develop, with Google Chrome <a href="https://www.chromium.org/Home/chromium-security/root-ca-policy/moving-forward-together/">pushing</a> for shorter CA, intermediate, and certificate lifetimes, advocating for 3 month certificate validity as the new standard.</p><p>This push improves security and redundancy of the entire PKI ecosystem in several ways. First, it reduces the scope of a compromise by limiting the amount of time that a malicious party could control a TLS certificate or private key. Second, it reduces reliance on certificate revocation, a process that lacks standardization and enforcement by clients, browsers, and CAs. Lastly, it encourages automation as a replacement for legacy certificate practices that are time-consuming and error-prone.</p><p>Cloudflare is moving towards only using CAs that follow the <a href="https://datatracker.ietf.org/doc/html/rfc8555">ACME</a> (Automated Certificate Management Environment) protocol, which by default, issues certificates with 90 day validity periods. We have already started to roll this out to Universal SSL certificates and have removed the ability to issue 1-year certificates as a part of our <a href="https://developers.cloudflare.com/ssl/reference/migration-guides/digicert-update/">reduced usage of DigiCert</a>.</p>
    <div>
      <h3>Regular rotation of intermediate certificates</h3>
      <a href="#regular-rotation-of-intermediate-certificates">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/38pDF476Nnjx7acXBN5eTM/ece6376485bb5232e4574d05c7294564/image2-8.png" />
            
            </figure><p>The CAs that Cloudflare partners with, <a href="https://letsencrypt.org/">Let’s Encrypt</a> and <a href="https://pki.goog/">Google Trust Services</a>, are starting to rotate their intermediate CAs more frequently. This increased rotation is beneficial from a security perspective because it limits the lifespan of intermediate certificates, reducing the window of opportunity for attackers to exploit a compromised intermediate. Additionally, regular rotations make it easier to revoke an intermediate certificate if it becomes compromised, enhancing the overall security and resiliency of the PKI ecosystem.</p><p>Both Let’s Encrypt and Google Trust Services changed their intermediate chains in June 2024. In addition, <a href="https://community.letsencrypt.org/t/lets-encrypt-new-intermediate-certificates/209498">Let’s Encrypt has started to balance their certificate issuance across 10 intermediates</a> (5 RSA and 5 ECDSA).</p><p>Cloudflare customers using <a href="https://www.cloudflare.com/advanced-certificate-manager/">Advanced Certificate Manager</a> have the ability to choose their issuing CA. The issue is that even if Cloudflare uses the same CA for a certificate renewal, there is no guarantee that the same certificate chain (root or intermediate) will be used to issue the renewed certificate. As a result, if pinning is used, a successful renewal could cause a full outage for a domain or application.</p><p>This happens because certificate pinning relies on the exact matching of certificates. When an intermediate certificate is rotated or changed, the new certificate might not match the pinned certificate on the client side. As a result, the client will reject the renewed certificate, even if it’s a valid certificate issued by the same CA. This mismatch leads to a failure on the client side, causing the domain to become inaccessible until the pinned certificate is updated to reflect the new intermediate certificate. 
This risk of an unexpected outage is a major downside of continuing to use certificate pinning, especially as CAs increasingly update their intermediates as a security measure.</p>
    <div>
      <h3>Increased use of certificate transparency</h3>
      <a href="#increased-use-of-certificate-transparency">
        
      </a>
    </div>
    <p><a href="/introducing-certificate-transparency-and-nimbus">Certificate transparency (CT) logs</a> provide a standardized framework for monitoring and auditing the issuance of TLS certificates. CT logs help detect misissued or malicious certificates and Cloudflare customers can set up <a href="/introducing-certificate-transparency-monitoring">CT monitoring</a> to receive notifications about any certificates issued for their domain. This provides a better mechanism for detecting certificate mis-issuance, reducing the need for pinning.</p>
    <div>
      <h3>Why modern standards make certificate pinning obsolete</h3>
      <a href="#why-modern-standards-make-certificate-pinning-obsolete">
        
      </a>
    </div>
    <p>Together, these practices – shorter certificate lifetimes, regular rotations of intermediate certificates, and increased use of certificate transparency – address the core security concerns that certificate pinning was initially designed to mitigate. Shorter lifetimes and frequent rotations limit the impact of compromised certificates, while certificate transparency allows for real time monitoring and detection of misissued certificates. These advancements are automated, scalable, and robust and eliminate the need for the manual and error-prone process of certificate pinning. By adopting these modern standards, organizations can achieve a higher level of security and resiliency without the management overhead and risk associated with certificate pinning.</p>
    <div>
      <h2>Reasons behind continued use of certificate pinning</h2>
      <a href="#reasons-behind-continued-use-of-certificate-pinning">
        
      </a>
    </div>
    <p>Originally, certificate pinning was designed to prevent <a href="/monsters-in-the-middleboxes/">monster-in-the-middle (MITM)</a> attacks by associating a hostname with a specific TLS certificate, ensuring that a client could only access an application if the server presented a certificate issued by the domain owner.</p><p>Certificate pinning was traditionally used to secure IoT (Internet of Things) devices, mobile applications, and APIs. IoT devices are usually equipped with more limited processing power and are oftentimes unable to perform software updates. As a result, they’re less likely to perform things like certificate revocation checks to ensure that they’re using a valid certificate. It’s therefore common for these devices to come pre-installed with a set of trusted certificate pins to ensure that they can maintain a high level of security. However, the increased frequency of certificate changes poses a significant risk, as many devices have immutable software, preventing timely updates to certificate pins during renewals.</p><p>Similarly, certificate pinning has been employed to secure Android and iOS applications, ensuring that only the correct certificates are served. Despite this, both <a href="https://developer.apple.com/news/?id=g9ejcf8y">Apple</a> and <a href="https://developer.android.com/privacy-and-security/security-ssl">Google</a> warn developers against the use of certificate pinning due to the potential for failures if not implemented correctly.</p>
    <div>
      <h2>Understanding the trade-offs of different certificate pinning implementations</h2>
      <a href="#understanding-the-trade-offs-of-different-certificate-pinning-implementations">
        
      </a>
    </div>
    <p>While certificate pinning can be applied at various levels of the certificate chain, offering different levels of granularity and security, we don’t recommend it due to the challenges and risks associated with its use.</p>
    <div>
      <h3>Pinning certificates at the root certificate</h3>
      <a href="#pinning-certificates-at-the-root-certificate">
        
      </a>
    </div>
    <p>Pinning the root certificate instructs a client to only trust certificates issued by that specific Certificate Authority (CA).</p><p><b>Advantages:</b></p><ul><li><p><b>Simplified management:</b> Since root certificates have long lifetimes (&gt;10 years) and rarely change, pinning at the root reduces the need to frequently update certificate pins, making this the easiest option in terms of management overhead.</p></li></ul><p><b>Disadvantages:</b></p><ul><li><p><b>Broader trust:</b> Most Certificate Authorities (CAs) issue their certificates from one root, so pinning a root CA certificate enables the client to trust every certificate issued from that CA. However, this broader trust can be problematic. If the root CA is compromised, every certificate issued by that CA is also compromised, which significantly increases the potential scope and impact of a security breach. This broad trust can therefore create a single point of failure, making the entire ecosystem more vulnerable to attacks.</p></li><li><p><b>Neglected maintenance:</b> Root certificates are infrequently rotated, which can lead to a “set and forget” mentality when pinning them. Although it’s rare, CAs do <a href="https://letsencrypt.org/2023/07/10/cross-sign-expiration.html">change their roots</a> and when this happens, a correctly renewed certificate will break the pin, causing an outage. Since these pins are rarely updated, resolving such outages can be time-consuming as engineering teams try to identify and locate where the outdated pins have been set.</p></li></ul>
    <div>
      <h3>Pinning certificates at the intermediate certificate</h3>
      <a href="#pinning-certificates-at-the-intermediate-certificate">
        
      </a>
    </div>
    <p>Pinning an intermediate certificate instructs a client to only trust certificates issued by a specific intermediate CA, issued from a trusted root CA. With lifetimes ranging from 3 to 10 years, intermediate certificates offer a better balance between security and flexibility.</p><p><b>Advantages:</b></p><ul><li><p><b>Better security:</b> Narrows down the trust to certificates issued by a specific intermediate CA.</p></li><li><p><b>Simpler management:</b> With intermediate CA lifetimes spanning a few years, certificate pins require less frequent updates, reducing the maintenance burden.</p></li></ul><p><b>Disadvantages:</b></p><ul><li><p><b>Broad trust:</b> While pinning on an intermediate certificate is more restrictive than pinning on a root certificate, it still allows for a broad range of certificates to be trusted.</p></li><li><p><b>Maintenance:</b> Intermediate certificates are rotated more frequently than root certificates, requiring more regular updates to pinned certificates.</p></li><li><p><b>Less predictability:</b> With CAs like Let’s Encrypt issuing their certificates from varying intermediates, it’s no longer possible to predict which certificate chain will be used during a renewal, making it more likely for a certificate renewal to break a certificate pin and cause an outage.</p></li></ul>
    <div>
      <h3>Pinning certificates at the leaf certificate</h3>
      <a href="#pinning-certificates-at-the-leaf-certificate">
        
      </a>
    </div>
    <p>Pinning the leaf certificate instructs a client to only trust that specific certificate. Although this option offers the highest level of security, it also poses the greatest risk of causing an outage during a certificate renewal.</p><p><b>Advantages:</b></p><ul><li><p><b>High security:</b> Provides the highest level of security by ensuring that only a specific certificate is trusted, minimizing the risk of a monster-in-the-middle attack.</p></li></ul><p><b>Disadvantages:</b></p><ul><li><p><b>Risky:</b> Requires careful management of certificate renewals to prevent outages.</p></li><li><p><b>Management burden:</b> Leaf certificates have shorter lifetimes, with 90 days becoming the standard, requiring constant updates to the certificate pin to avoid a breaking change during a renewal.</p></li></ul>
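The trade-off between these levels can be seen in a small sketch. The byte strings below are stand-ins for DER certificates (real pins usually hash the public key, the SPKI, rather than the full certificate): a pin set is satisfied if any certificate in the presented chain matches, so a root pin survives a leaf renewal while a leaf pin does not.

```python
import hashlib

def sha256(der: bytes) -> str:
    return hashlib.sha256(der).hexdigest()

def chain_matches_pins(chain: list[bytes], pins: set[str]) -> bool:
    """HPKP-style check: accept if any certificate in the
    chain (leaf -> root) hashes to a pinned value."""
    return any(sha256(cert) in pins for cert in chain)

# Hypothetical chains; a renewal replaces the leaf (and here
# also the intermediate) while the root stays the same.
old_chain = [b"leaf-v1", b"intermediate-a", b"root"]
new_chain = [b"leaf-v2", b"intermediate-b", b"root"]

root_pin = {sha256(b"root")}
leaf_pin = {sha256(b"leaf-v1")}

assert chain_matches_pins(new_chain, root_pin)      # root pin survives renewal
assert not chain_matches_pins(new_chain, leaf_pin)  # leaf pin breaks on renewal
```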
    <div>
      <h2>Alternatives to certificate pinning</h2>
      <a href="#alternatives-to-certificate-pinning">
        
      </a>
    </div>
    <p>Given the risks and challenges associated with certificate pinning, we recommend the following as more effective and modern alternatives to achieve the same level of security (preventing a mis-issued certificate from intercepting traffic) that certificate pinning aims to provide.</p>
    <div>
      <h3>Shorter certificate lifetimes</h3>
      <a href="#shorter-certificate-lifetimes">
        
      </a>
    </div>
    <p>Using shorter certificate lifetimes ensures that certificates are regularly renewed and replaced, reducing the risk of misuse of a compromised certificate by limiting the window of opportunity for attackers.</p><p>By default, Cloudflare issues 90-day certificates for customers. Customers using Advanced Certificate Manager can request TLS certificates with lifetimes as short as 14 days.</p>
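To make the cadence concrete: ACME clients commonly renew at roughly two-thirds of a certificate’s lifetime (for example, certbot renews 90-day certificates 30 days before expiry). The helper below is an illustrative sketch of that rule of thumb, not Cloudflare’s actual renewal scheduler:

```python
from datetime import datetime, timedelta

def renewal_date(not_before: datetime, lifetime_days: int) -> datetime:
    """Schedule renewal at roughly two-thirds of the certificate's
    lifetime, a common ACME-client default (day 60 of a 90-day cert)."""
    return not_before + timedelta(days=lifetime_days * 2 // 3)

issued = datetime(2024, 7, 1)
print(renewal_date(issued, 90))  # 60 days in, for 90-day certs
print(renewal_date(issued, 14))  # 9 days in, for 14-day certs
```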
    <div>
      <h3>CAA records to restrict which CAs can issue certificates for a domain</h3>
      <a href="#caa-records-to-restrict-which-cas-can-issue-certificates-for-a-domain">
        
      </a>
    </div>
    <p><a href="https://developers.cloudflare.com/ssl/edge-certificates/caa-records/">Certification Authority Authorization</a> (CAA) DNS records allow domain owners to specify which CAs are allowed to issue certificates for their domain. This adds an extra layer of security by restricting issuance to trusted authorities, providing a similar benefit as pinning a root CA certificate. For example, if you’re using Google Trust Services as your CA, you can add the following CAA DNS record to ensure that only that CA issues certificates for your domain:</p>
            <pre><code>example.com         CAA 0 issue "pki.goog"</code></pre>
            <p>By default, <a href="https://developers.cloudflare.com/ssl/edge-certificates/caa-records/#caa-records-added-by-cloudflare">Cloudflare sets CAA records</a> on behalf of customers to ensure that certificates can be issued from the CAs that Cloudflare partners with. Customers can choose to further restrict that list of CAs by adding their own CAA records.</p>
    <div>
      <h3>Certificate Transparency &amp; monitoring</h3>
      <a href="#certificate-transparency-monitoring">
        
      </a>
    </div>
    <p>Certificate Transparency (CT) provides the ability to monitor and audit certificate issuances. By logging certificates in publicly accessible CT logs, organizations are able to monitor, detect, and respond to misissued certificates at the time they are issued.</p><p>Cloudflare customers can set up <a href="https://developers.cloudflare.com/ssl/edge-certificates/additional-options/certificate-transparency-monitoring/">CT Monitoring</a> to receive notifications when certificates are issued for their domain. Today, we notify customers using the product about all certificates issued for their domains. In the future, we will allow customers to filter those notifications, so that they are only notified when an external party that isn’t Cloudflare issues a certificate for the owner’s domain. This product is available to all customers and can be enabled with the click of a button.</p>
    <div>
      <h3>Multi-vantage point Domain Control Validation (DCV) checks to prevent mis-issuances</h3>
      <a href="#multi-vantage-point-domain-control-validation-dcv-checks-to-prevent-mis-issuances">
        
      </a>
    </div>
    <p>For a CA to issue a certificate, the domain owner needs to prove ownership of the domain by serving Domain Control Validation (DCV) records. While uncommon, one attack vector against DCV allows an actor to use BGP hijacking to spoof the DNS response and trick a CA into mis-issuing a certificate. To prevent this form of attack, CAs have started to perform DCV checks from multiple locations to ensure that a certificate is only issued when a full quorum is met.</p><p>Cloudflare has developed <a href="/secure-certificate-issuance">its own solution</a> that CAs can use to perform multi-vantage point DCV checks. In addition, Cloudflare partners with CAs like Let’s Encrypt who continue to develop these checks to support <a href="https://community.letsencrypt.org/t/lets-encrypt-is-adding-two-new-remote-perspectives-for-domain-validation/214123/3">new locations</a>, reducing the risk of a certificate mis-issuance.</p>
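The full-quorum rule can be sketched as a check over per-vantage-point results. The vantage point names and quorum policy here are illustrative, not Cloudflare’s or any CA’s actual implementation:

```python
def dcv_passes(results: dict[str, bool], quorum: float = 1.0) -> bool:
    """Return True only if the fraction of vantage points that
    validated the DCV record meets the required quorum. With
    quorum=1.0 (full quorum), a BGP hijack that spoofs the DNS
    answer seen by one vantage point cannot force issuance,
    because the other vantage points will disagree."""
    if not results:
        return False
    return sum(results.values()) / len(results) >= quorum

checks = {"us-east": True, "eu-west": True, "ap-south": True}
assert dcv_passes(checks)        # all vantage points agree
checks["eu-west"] = False
assert not dcv_passes(checks)    # full quorum not met: no issuance
```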
    <div>
      <h3>Specifying ACME account URI in CAA records</h3>
      <a href="#specifying-acme-account-uri-in-caa-records">
        
      </a>
    </div>
    <p>A new enhancement to the <a href="https://datatracker.ietf.org/doc/html/rfc8555">ACME protocol</a> allows certificate requesting parties to specify an ACME account URI, the ID of the ACME account that will be requesting the certificates, in CAA records to tighten control over the certificate issuance process. This ensures that only certificates issued through an authorized ACME account are trusted, adding another layer of verification to certificate issuance. Let’s Encrypt <a href="https://letsencrypt.org/docs/caa/">supports</a> this extension to CAA records which allows users with a Let’s Encrypt certificate to set a CAA DNS record, such as the following:</p>
            <pre><code>example.com         CAA 0 issue "letsencrypt.org;accounturi=https://acme-v02.api.letsencrypt.org/acme/acct/&lt;account_id&gt;"</code></pre>
            <p>With this record, Let’s Encrypt subscribers can ensure that only Let’s Encrypt can issue certificates for their domain and that these certificates were only issued through their ACME account.</p><p>Cloudflare will look to support this enhancement automatically for customers in the future.</p>
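A CAA `issue` value is a CA domain optionally followed by semicolon-separated `name=value` parameters (the `accounturi` parameter is defined in RFC 8657). A hedged sketch of parsing such a value, with a made-up account ID standing in for the placeholder above:

```python
def parse_issue_value(value: str) -> tuple[str, dict[str, str]]:
    """Split a CAA 'issue' value such as
    'letsencrypt.org;accounturi=https://...' into the CA domain
    and its name=value parameters."""
    domain, _, rest = value.partition(";")
    params = {}
    for part in rest.split(";"):
        if "=" in part:
            name, _, val = part.strip().partition("=")
            params[name.strip()] = val.strip()
    return domain.strip(), params

# '12345' is a hypothetical ACME account ID.
ca, params = parse_issue_value(
    "letsencrypt.org;accounturi=https://acme-v02.api.letsencrypt.org/acme/acct/12345"
)
```

A validating CA would compare `ca` against its own identifier and `params["accounturi"]` against the ACME account requesting issuance.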
    <div>
      <h2>Conclusion</h2>
      <a href="#conclusion">
        
      </a>
    </div>
    <p>Years ago, certificate pinning was a valuable tool for enhancing security, but over the last 20 years, it has failed to keep up with new advancements in the certificate ecosystem. As a result, instead of providing the intended security benefit, it now causes outages during otherwise successful certificate renewals. With new enhancements in certificate issuance standards and certificate transparency, we’re encouraging our customers and the industry to move towards adopting those new standards and deprecating old ones.</p><p>If you’re a Cloudflare customer and are required to pin your certificate, the only way to ensure a zero-downtime renewal is to upload your own custom certificates. We recommend using the <a href="/staging-tls-certificate-every-deployment-safe-deployment">staging network</a> to test your certificate renewal to ensure you have updated your certificate pin.</p> ]]></content:encoded>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[Network Services]]></category>
            <category><![CDATA[Certificate Pinning]]></category>
            <category><![CDATA[TLS]]></category>
            <category><![CDATA[SSL]]></category>
            <category><![CDATA[Certificate Transparency]]></category>
            <guid isPermaLink="false">60UVrfwDr6RkKdtneMF0ph</guid>
            <dc:creator>Dina Kozlov</dc:creator>
        </item>
        <item>
            <title><![CDATA[How we ensure Cloudflare customers aren't affected by Let's Encrypt's certificate chain change]]></title>
            <link>https://blog.cloudflare.com/shortening-lets-encrypt-change-of-trust-no-impact-to-cloudflare-customers/</link>
            <pubDate>Fri, 12 Apr 2024 13:00:09 GMT</pubDate>
            <description><![CDATA[ Let’s Encrypt’s cross-signed chain will be expiring in September. This will affect legacy devices with outdated trust stores (Android versions 7.1.1 or older). To prevent this change from impacting customers, Cloudflare will shift Let’s Encrypt certificates upon renewal to use a different CA ]]></description>
            <content:encoded><![CDATA[ <p></p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7xrPBUpEjsmaBkRYhxiRIr/c95f42fffef36c7d595422b4724c8665/Untitled--3--1.png" />
            
            </figure><p><a href="https://letsencrypt.org/">Let’s Encrypt</a>, a publicly trusted certificate authority (CA) that Cloudflare uses to issue TLS certificates, has been relying on two distinct certificate chains. One is cross-signed with <a href="https://www.identrust.com/">IdenTrust</a>, a globally trusted CA that has been around since 2000, and the other is Let’s Encrypt’s own root CA, ISRG Root X1. Since Let’s Encrypt launched, <a href="https://letsencrypt.org/certificates/#root-certificates">ISRG Root X1</a> has been steadily gaining its own device compatibility.</p><p>On September 30, 2024, Let’s Encrypt’s certificate chain cross-signed with IdenTrust will <a href="https://letsencrypt.org/2023/07/10/cross-sign-expiration.html">expire</a>. After the cross-sign expires, servers will no longer be able to serve certificates signed by the cross-signed chain. Instead, all Let’s Encrypt certificates will use the ISRG Root X1 CA.</p><p>Most devices and browser versions released after 2016 will not experience any issues as a result of the change since the ISRG Root X1 will already be installed in those clients’ trust stores. That's because these modern browsers and operating systems were built to be agile and flexible, with upgradeable trust stores that can be updated to include new certificate authorities.</p><p>The change in the certificate chain will impact legacy devices and systems, such as devices running Android version 7.1.1 (released in 2016) or older, as those exclusively rely on the cross-signed chain and lack the ISRG X1 root in their trust store. These clients will encounter TLS errors or warnings when accessing domains secured by a Let’s Encrypt certificate. We took a look at the data ourselves and found that, of all Android requests, 2.96% of them come from devices that will be affected by the change. That’s a substantial portion of traffic that will lose access to the Internet. 
We’re committed to keeping those users online and will modify our certificate pipeline so that we can continue to serve users on older devices without requiring any manual modifications from our customers.</p>
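The affected population boils down to a version cutoff: Android releases newer than 7.1.1 ship ISRG Root X1 in their trust stores, while 7.1.1 and older rely on the expiring cross-sign. A hypothetical helper expressing that cutoff (Cloudflare’s 2.96% figure comes from request telemetry, not a function like this):

```python
def trusts_isrg_root_x1(android_version: str) -> bool:
    """Android releases newer than 7.1.1 include ISRG Root X1 in
    the system trust store; 7.1.1 and older depend on the
    IdenTrust cross-sign that expires on September 30, 2024."""
    parts = tuple(int(p) for p in android_version.split("."))
    # Pad to three components so "8.0" compares like (8, 0, 0).
    parts = (parts + (0, 0, 0))[:3]
    return parts > (7, 1, 1)

assert not trusts_isrg_root_x1("7.1.1")  # affected by the expiry
assert trusts_isrg_root_x1("8.0")        # unaffected
```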
    <div>
      <h3>A better Internet, for everyone</h3>
      <a href="#a-better-internet-for-everyone">
        
      </a>
    </div>
    <p>In the past, we invested in efforts like <a href="/sha-1-deprecation-no-browser-left-behind/">“No Browsers Left Behind”</a> to help ensure that we could continue to support clients as SHA-1 based algorithms were being deprecated. Now, we’re applying the same approach for the upcoming Let’s Encrypt change.</p><p>We have made the decision to remove Let’s Encrypt as a certificate authority from all flows where Cloudflare dictates the CA, impacting Universal SSL customers and those using SSL for SaaS with the “default CA” choice.</p><p>Starting in June 2024, one certificate lifecycle (90 days) before the cross-sign chain expires, we’ll begin migrating Let’s Encrypt certificates that are up for renewal to use a different CA, one that ensures compatibility with older devices affected by the change. That means that going forward, customers will only receive Let’s Encrypt certificates if they explicitly request Let’s Encrypt as the CA.</p><p>The change that Let's Encrypt is making is a necessary one. For us to move forward in supporting new standards and protocols, we need to make the Public Key Infrastructure (PKI) ecosystem more agile. By retiring the cross-signed chain, Let’s Encrypt is pushing devices, browsers, and clients to support adaptable trust stores.</p><p>However, we’ve observed <a href="/sha-1-deprecation-no-browser-left-behind/">changes like this in the past</a> and while they push the adoption of new standards, they disproportionately impact users in economically disadvantaged regions, where access to new technology is limited.</p><p>Our mission is to help build a better Internet and that means supporting users worldwide. We previously published a <a href="/upcoming-lets-encrypt-certificate-chain-change-and-impact-for-cloudflare-customers">blog post about the Let’s Encrypt change</a>, asking customers to switch their certificate authority if they expected any impact. However, determining the impact of the change is challenging. 
Error rates due to trust store incompatibility are primarily logged on clients, reducing the visibility that domain owners have. In addition, while there might be no requests incoming from incompatible devices today, it doesn’t guarantee uninterrupted access for a user tomorrow.</p><p>Cloudflare’s certificate pipeline has evolved over the years to be resilient and flexible, allowing us to seamlessly adapt to changes like this without any negative impact to our customers.  </p>
    <div>
      <h3>How Cloudflare has built a robust TLS certificate pipeline</h3>
      <a href="#how-cloudflare-has-built-a-robust-tls-certificate-pipeline">
        
      </a>
    </div>
    <p>Today, Cloudflare manages tens of millions of certificates on behalf of customers. For us, a successful pipeline means:</p><ol><li><p>Customers can always obtain a TLS certificate for their domain</p></li><li><p>CA-related issues have zero impact on our customers’ ability to obtain a certificate</p></li><li><p>The best security practices and modern standards are utilized</p></li><li><p>The pipeline is optimized for future scale</p></li><li><p>A wide range of clients and devices is supported</p></li></ol><p>Every year, we introduce new optimizations into our certificate pipeline to maintain the highest level of service. Here’s how we do it…</p>
    <div>
      <h3>Ensuring customers can always obtain a TLS certificate for their domain</h3>
      <a href="#ensuring-customers-can-always-obtain-a-tls-certificate-for-their-domain">
        
      </a>
    </div>
    <p>Since the launch of Universal SSL in 2014, Cloudflare has been responsible for issuing and serving a TLS certificate for every domain that’s protected by our network. That might seem trivial, but there are a few steps that have to successfully execute in order for a domain to receive a certificate:</p><ol><li><p>Domain owners need to complete <a href="https://developers.cloudflare.com/ssl/edge-certificates/changing-dcv-method/">Domain Control Validation</a> for every certificate issuance and renewal.</p></li><li><p>The certificate authority needs to verify the Domain Control Validation tokens to issue the certificate.</p></li><li><p><a href="https://developers.cloudflare.com/ssl/edge-certificates/troubleshooting/caa-records/">CAA records</a>, which dictate which CAs can be used for a domain, need to be checked to ensure only authorized parties can issue the certificate.</p></li><li><p>The certificate authority must be available to issue the certificate.</p></li></ol><p>Each of these steps requires coordination across a number of parties — domain owners, CDNs, and certificate authorities. At Cloudflare, we like to be in control when it comes to the success of our platform. That’s why we make it our job to ensure each of these steps can be successfully completed.</p><p>We ensure that every certificate issuance and renewal requires minimal effort from our customers. To get a certificate, a domain owner has to complete Domain Control Validation (DCV) to prove that it does in fact own the domain. Once the certificate request is initiated, the CA will return DCV tokens which the domain owner will need to place in a DNS record or an HTTP token. If you’re using Cloudflare as your DNS provider, Cloudflare completes DCV on your behalf by automatically placing the TXT token returned from the CA into your DNS records. 
Alternatively, if you use an external DNS provider, we offer the option to <a href="/introducing-dcv-delegation/">Delegate DCV</a> to Cloudflare for automatic renewals without any customer intervention.</p><p>Once DCV tokens are placed, Certificate Authorities (CAs) verify them. CAs conduct this <a href="/secure-certificate-issuance">verification from multiple vantage points</a> to prevent spoofing attempts. However, since these checks are done from multiple countries and ASNs (Autonomous Systems), they may trigger a Cloudflare WAF rule which can cause the DCV check to get blocked. We made sure to update our WAF and security engine to recognize that these requests are coming from a CA to ensure they're never blocked so DCV can be successfully completed.</p><p>Some customers have CA preferences, due to internal requirements or compliance regulations. To prevent an unauthorized CA from issuing a certificate for a domain, the domain owner can create a Certification Authority Authorization (CAA) DNS record, specifying which CAs are allowed to issue a certificate for that domain. To ensure that customers can always obtain a certificate, we check the CAA records before requesting a certificate to know which CAs we should use. If the CAA records block all of the <a href="https://developers.cloudflare.com/ssl/reference/certificate-authorities/">CAs that are available</a> in Cloudflare’s pipeline and the customer has not uploaded a certificate from the CA of their choice, then we add CAA records on our customers’ behalf to ensure that they can get a certificate issued. Where we can, we optimize for preference. Otherwise, it’s our job to prevent an outage by ensuring that there’s always a TLS certificate available for the domain, even if it does not come from a preferred CA.</p><p>Today, Cloudflare is not a publicly trusted certificate authority, so we rely on the CAs that we use to be highly available. But, 100% uptime is an unrealistic expectation. 
Instead, our pipeline needs to be prepared in case our CAs become unavailable.</p>
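<p>The CAA handling described above can be sketched as a small selection function. This is an illustrative model only: the CA identifiers, the pipeline list, and the return shape are assumptions made for the sketch, not Cloudflare’s actual implementation.</p>

```python
# Hypothetical sketch of CAA-aware CA selection. The CA names below are
# illustrative stand-ins for the CAs available in an issuance pipeline.
PIPELINE_CAS = ["letsencrypt.org", "pki.goog", "ssl.com"]

def select_ca(caa_issue_tags, preferred=None):
    """Pick an issuing CA while honoring the domain's CAA records.

    caa_issue_tags: the domains listed in the zone's CAA "issue" records;
    an empty list means any CA may issue (per RFC 8659).
    Returns (ca, caa_record_needed).
    """
    if not caa_issue_tags:
        allowed = list(PIPELINE_CAS)  # no CAA restriction: any CA may issue
    else:
        allowed = [ca for ca in PIPELINE_CAS if ca in caa_issue_tags]
    if not allowed:
        # Every pipeline CA is blocked: add a CAA record on the customer's
        # behalf so a certificate can still be issued.
        return PIPELINE_CAS[0], True
    if preferred in allowed:
        return preferred, False  # optimize for preference where we can
    return allowed[0], False
```

<p>For example, a zone whose only CAA <code>issue</code> tag names a CA outside the pipeline would take the second branch, matching the behavior described above of adding CAA records on the customer’s behalf rather than failing issuance.</p>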
    <div>
      <h4>Ensuring that CA-related issues have zero impact on our customers’ ability to obtain a certificate</h4>
      <a href="#ensuring-that-ca-related-issues-have-zero-impact-on-our-customers-ability-to-obtain-a-certificate">
        
      </a>
    </div>
    <p>At Cloudflare, we like to think ahead, which means preventing incidents before they happen. It’s not uncommon for CAs to become unavailable — sometimes this happens because of an outage, but more commonly, CAs have maintenance periods every so often where they become unavailable for some period of time.</p><p>It’s our job to ensure CA redundancy, which is why we always have multiple CAs ready to issue a certificate, ensuring high availability at all times. If you've noticed different CAs issuing your Universal SSL certificates, that's intentional. We evenly distribute the load across our CAs to avoid any single point of failure. Plus, we keep a close eye on latency and error rates to detect any issues and automatically switch to a different CA that's available and performant. You may not know this, but one of our CAs has around 4 scheduled maintenance periods every month. When this happens, our automated systems kick in seamlessly, keeping everything running smoothly. This works so well that our internal teams don’t get paged anymore because everything <i>just works.</i></p>
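<p>The failover behavior described above can be sketched as a round-robin pool with per-CA health tracking. The threshold, CA names, and health model here are assumptions for illustration, not Cloudflare’s internal design.</p>

```python
# Illustrative sketch of CA redundancy with automatic failover.
import itertools

class CaPool:
    def __init__(self, cas, error_threshold=0.05):
        self.cas = list(cas)
        self.error_threshold = error_threshold
        self.stats = {ca: {"ok": 0, "err": 0} for ca in self.cas}
        self._rr = itertools.cycle(self.cas)  # evenly distribute the load

    def record(self, ca, ok):
        """Track issuance outcomes so error spikes surface quickly."""
        self.stats[ca]["ok" if ok else "err"] += 1

    def error_rate(self, ca):
        s = self.stats[ca]
        total = s["ok"] + s["err"]
        return s["err"] / total if total else 0.0

    def healthy(self, ca):
        return self.error_rate(ca) < self.error_threshold

    def pick(self):
        """Round-robin across CAs, skipping any that looks unavailable
        (an outage or a scheduled maintenance window)."""
        for _ in range(len(self.cas)):
            ca = next(self._rr)
            if self.healthy(ca):
                return ca
        raise RuntimeError("no healthy CA available")
```

<p>The key property is that a CA entering maintenance simply stops being picked: issuance continues against the remaining CAs with no operator intervention, which is why, as noted above, nobody gets paged.</p>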
    <div>
      <h4>Adopting best security practices and modern standards  </h4>
      <a href="#adopting-best-security-practices-and-modern-standards">
        
      </a>
    </div>
    <p>Security has always been, and will continue to be, Cloudflare’s top priority, and so maintaining the highest security standards to safeguard our customers’ data and private keys is crucial.</p><p>Over the past decade, the <a href="https://cabforum.org/">CA/Browser Forum</a> has advocated for reducing certificate lifetimes from 5 years to 90 days as the industry norm. This shift helps minimize the risk of a key compromise. When certificates are renewed every 90 days, their private keys remain valid for only that period, reducing the window of time that a bad actor can make use of the compromised material.</p><p>We fully embrace this change and have made 90 days the default certificate validity period. This enhances our security posture by ensuring regular key rotations, and has pushed us to develop tools like DCV Delegation that promote <a href="https://www.cloudflare.com/application-services/solutions/certificate-lifecycle-management/">automation</a> around frequent certificate renewals, without the added overhead. It’s what enables us to offer certificates with validity periods as low as two weeks, for customers that want to rotate their private keys at a high frequency without any concern that it will lead to certificate renewal failures.</p><p>Cloudflare has always been at the forefront of new protocols and standards. <a href="https://www.cloudflare.com/press-releases/2014/cloudflare-offers-the-industrys-first-universal-ssl-for-free/">It’s no secret</a> that when we support a new protocol, adoption skyrockets. This month, we will be adding <a href="/ecdsa-the-digital-signature-algorithm-of-a-better-internet">ECDSA</a> support for certificates issued from <a href="https://pki.goog/">Google Trust Services</a>. With <a href="https://www.cloudflare.com/learning/dns/dnssec/ecdsa-and-dnssec/">ECDSA</a>, you get the same level of security as RSA but with smaller keys. 
Smaller keys mean smaller certificates and less data passed around to establish a TLS connection, which results in quicker connections and faster loading times.</p>
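<p>The renewal automation described above (90-day defaults, down to two-week lifetimes) can be modeled as a simple scheduling rule. Renewing at two-thirds of the lifetime is a common industry heuristic, assumed here for illustration; it is not taken from Cloudflare’s pipeline.</p>

```python
# Sketch of automated renewal scheduling for short-lived certificates.
from datetime import datetime, timedelta

def renewal_time(not_before, not_after, fraction=2 / 3):
    """Schedule renewal partway through the validity window, leaving the
    rest as headroom for retries if a CA is temporarily unavailable."""
    return not_before + (not_after - not_before) * fraction

# A 90-day certificate issued January 1 is renewed 60 days in,
# leaving 30 days of headroom before expiry.
issued = datetime(2024, 1, 1)
renew_at = renewal_time(issued, issued + timedelta(days=90))
```

<p>The shorter the lifetime, the less slack this rule leaves, which is why frequent rotation is only practical once renewals are fully automated (for example, via DCV Delegation).</p>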
    <div>
      <h4>Optimizing for future scale</h4>
      <a href="#optimizing-for-future-scale">
        
      </a>
    </div>
    <p>Today, Cloudflare issues almost 1 million certificates per day. With the recent shift towards shorter certificate lifetimes, we continue to improve our pipeline to be more robust. But even if our pipeline can handle the significant load, we still need to rely on our CAs to be able to scale with us. With every CA that we integrate, we instantly become one of their biggest consumers. We hold our CAs to high standards and push them to improve their infrastructure to scale. This doesn’t just benefit Cloudflare’s customers, but it helps the Internet by requiring CAs to handle higher volumes of issuance.</p><p>And now, with Let’s Encrypt shortening their chain of trust, we’re going to add an additional improvement to our pipeline — one that will ensure the best device compatibility for all.</p>
    <div>
      <h4>Supporting all clients — legacy and modern</h4>
      <a href="#supporting-all-clients-legacy-and-modern">
        
      </a>
    </div>
    <p>The upcoming Let’s Encrypt change will prevent legacy devices from making requests to domains or applications that are protected by a Let’s Encrypt certificate. We don’t want to cut off Internet access from any part of the world, which means that we’re going to continue to provide the best device compatibility to our customers, despite the change.</p><p>Because of all the recent enhancements, we are able to reduce our reliance on Let’s Encrypt without impacting the reliability or quality of service of our certificate pipeline. One certificate lifecycle (90 days) before the change, we are going to start shifting certificates to use a different CA, one that’s compatible with the devices that will be impacted. By doing this, we’ll mitigate any impact without any action required from our customers. The only customers that will continue to use Let’s Encrypt are ones that have specifically chosen Let’s Encrypt as the CA.</p>
    <div>
      <h3>What to expect of the upcoming Let’s Encrypt change</h3>
      <a href="#what-to-expect-of-the-upcoming-lets-encrypt-change">
        
      </a>
    </div>
    <p>Let’s Encrypt’s cross-signed chain will <a href="https://letsencrypt.org/2023/07/10/cross-sign-expiration.html">expire</a> on September 30th, 2024. Although Let’s Encrypt plans to stop issuing certificates from this chain on June 6th, 2024, Cloudflare will continue to serve the cross-signed chain for all Let’s Encrypt certificates until September 9th, 2024.</p><p>90 days or one certificate lifecycle before the change, we are going to start shifting Let’s Encrypt certificates to use a different certificate authority. We’ll make this change for all products where Cloudflare is responsible for the CA selection, meaning this will be automatically done for customers using Universal SSL and SSL for SaaS with the “default CA” choice.</p><p>Any customers that have specifically chosen Let’s Encrypt as their CA will receive an email notification with a list of their Let’s Encrypt certificates and information on whether or not we’re seeing requests on those hostnames coming from legacy devices.</p><p>After September 9th, 2024, Cloudflare will serve all Let’s Encrypt certificates using the ISRG Root X1 chain. Here is what you should expect based on the certificate product that you’re using:</p>
    <div>
      <h4>Universal SSL</h4>
      <a href="#universal-ssl">
        
      </a>
    </div>
    <p>With <a href="https://developers.cloudflare.com/ssl/edge-certificates/universal-ssl/">Universal SSL</a>, Cloudflare chooses the CA that is used for the domain’s certificate. This gives us the power to choose the best certificate for our customers. <b>If you are using Universal SSL, there are no changes for you to make to prepare for this change</b>. Cloudflare will automatically shift your certificate to use a more compatible CA.</p>
    <div>
      <h4>Advanced Certificates</h4>
      <a href="#advanced-certificates">
        
      </a>
    </div>
    <p>With <a href="https://developers.cloudflare.com/ssl/edge-certificates/advanced-certificate-manager/">Advanced Certificate Manager</a>, customers specifically choose which CA they want to use. If Let’s Encrypt was chosen as the CA for a certificate, we will respect that choice, because customers may have selected it due to internal requirements, or because they have implemented certificate pinning, which we highly discourage.</p><p>If we see that a domain using an Advanced certificate issued from Let’s Encrypt will be impacted by the change, then we will send out email notifications to inform those customers which certificates are using Let’s Encrypt as their CA and whether or not those domains are receiving requests from clients that will be impacted by the change. Customers will be responsible for changing the CA to another provider if they choose to do so.</p>
    <div>
      <h4>SSL for SaaS</h4>
      <a href="#ssl-for-saas">
        
      </a>
    </div>
    <p>With <a href="https://developers.cloudflare.com/cloudflare-for-platforms/cloudflare-for-saas/security/certificate-management/">SSL for SaaS</a>, customers have two options: using a default CA, meaning Cloudflare will choose the issuing authority, or specifying which CA to use.</p><p>If you’re leaving the CA choice up to Cloudflare, then we will automatically use a CA with higher device compatibility.</p><p>If you’re specifying a certain CA for your custom hostnames, then we will respect that choice. We will send an email out to SaaS providers and platforms to inform them which custom hostnames are receiving requests from legacy devices. Customers will be responsible for changing the CA to another provider if they choose to do so.</p>
    <div>
      <h4>Custom Certificates</h4>
      <a href="#custom-certificates">
        
      </a>
    </div>
    <p>If you directly integrate with Let’s Encrypt and use <a href="https://developers.cloudflare.com/ssl/edge-certificates/custom-certificates/">Custom Certificates</a> to upload your Let’s Encrypt certs to Cloudflare then your certificates will be bundled with the cross-signed chain, as long as you choose the bundle method “<a href="https://developers.cloudflare.com/ssl/edge-certificates/custom-certificates/bundling-methodologies/#compatible">compatible</a>” or “<a href="https://developers.cloudflare.com/ssl/edge-certificates/custom-certificates/bundling-methodologies/#modern">modern</a>” and upload those certificates before September 9th, 2024. After September 9th, we will bundle all Let’s Encrypt certificates with the ISRG Root X1 chain. With the “<a href="https://developers.cloudflare.com/ssl/edge-certificates/custom-certificates/bundling-methodologies/#user-defined">user-defined</a>” bundle method, we always serve the chain that’s uploaded to Cloudflare. If you upload Let’s Encrypt certificates using this method, you will need to ensure that certificates uploaded after September 30th, 2024, the date of the CA expiration, contain the right certificate chain.</p><p>In addition, if you control the clients that are connecting to your application, we recommend updating the trust store to include the <a href="https://letsencrypt.org/certificates/#root-certificates">ISRG Root X1</a>. If you use certificate pinning, remove or update your pin. In general, we discourage all customers from pinning their certificates, as this usually leads to issues during certificate renewals or CA changes.</p>
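<p>To make the bundling methods concrete, here is a minimal sketch of the selection logic described above. The chain contents are placeholders standing in for real PEM chains, and “compatible” and “modern” are collapsed into one branch because, for Let’s Encrypt certificates, both let Cloudflare pick the chain; real bundling differs in details not modeled here.</p>

```python
# Placeholder chains: these lists stand in for real certificate chains.
CROSS_SIGNED = ["leaf", "R3", "ISRG Root X1 (cross-signed by DST Root CA X3)"]
ISRG_X1 = ["leaf", "R3", "ISRG Root X1"]

def chain_for(method, uploaded=None, after_sept_9_2024=False):
    """Select the chain served for a Let's Encrypt Custom Certificate."""
    if method == "user-defined":
        return uploaded  # always serve exactly what the customer uploaded
    # "compatible" / "modern": Cloudflare chooses the chain; cross-signed
    # until the September 9, 2024 cutover, ISRG Root X1 afterwards.
    return ISRG_X1 if after_sept_9_2024 else CROSS_SIGNED
```

<p>The “user-defined” branch is why customers on that bundle method must themselves upload a chain that stays valid past the September 30, 2024 expiration.</p>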
    <div>
      <h2>Conclusion</h2>
      <a href="#conclusion">
        
      </a>
    </div>
    <p>Internet standards will continue to evolve and improve. As we support and embrace those changes, we also need to recognize that it’s our responsibility to keep users online and to maintain Internet access in the parts of the world where new technology is not readily available. By using Cloudflare, you always have the option to choose the setup that’s best for your application.</p><p>For additional information regarding the change, please refer to our <a href="https://developers.cloudflare.com/ssl/reference/migration-guides/lets-encrypt-chain/">developer documentation</a>.</p> ]]></content:encoded>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[Application Services]]></category>
            <category><![CDATA[TLS]]></category>
            <category><![CDATA[SSL]]></category>
            <category><![CDATA[Certificate Authority]]></category>
            <category><![CDATA[Deep Dive]]></category>
            <guid isPermaLink="false">ymep6DaFvevM4m2AIdw5F</guid>
            <dc:creator>Dina Kozlov</dc:creator>
        </item>
        <item>
            <title><![CDATA[Upcoming Let’s Encrypt certificate chain change and impact for Cloudflare customers]]></title>
            <link>https://blog.cloudflare.com/upcoming-lets-encrypt-certificate-chain-change-and-impact-for-cloudflare-customers/</link>
            <pubDate>Thu, 14 Mar 2024 14:00:23 GMT</pubDate>
            <description><![CDATA[ Let’s Encrypt’s cross-signed chain will be expiring. To prepare, Cloudflare will issue certificates from Let’s Encrypt’s ISRG Root X1 chain. This change impacts legacy devices with outdated trust stores. ]]></description>
            <content:encoded><![CDATA[ <p>Let’s Encrypt, a publicly trusted certificate authority (CA) that Cloudflare uses to issue TLS certificates, has been relying on two distinct certificate chains. One is cross-signed with IdenTrust, a globally trusted CA that has been around since 2000, and the other is Let’s Encrypt’s own root CA, ISRG Root X1. Since Let’s Encrypt launched, ISRG Root X1 has been steadily gaining its own device compatibility.</p><p>On September 30, 2024, Let’s Encrypt’s certificate chain cross-signed with IdenTrust will <a href="https://letsencrypt.org/2023/07/10/cross-sign-expiration.html">expire</a>. To proactively prepare for this change, on May 15, 2024, Cloudflare will stop issuing certificates from the cross-signed chain and will instead use Let’s Encrypt’s ISRG Root X1 chain for all future Let’s Encrypt certificates.</p><p>The change in the certificate chain will impact legacy devices and systems, such as Android devices version 7.1.1 or older, as those exclusively rely on the cross-signed chain and lack the ISRG Root X1 in their trust store. These clients may encounter TLS errors or warnings when accessing domains secured by a Let’s Encrypt certificate.</p><p>According to Let’s Encrypt, more than 93.9% of Android devices already trust the ISRG Root X1 and this number is expected to increase in 2024, especially as Android releases version 14, which makes the Android trust store easily and automatically upgradable.</p><p>We took a look at the data ourselves and found that, of all Android requests, 2.96% come from devices that will be affected by the change. In addition, only 1.13% of all requests from Firefox come from affected Android versions, which means that most (98.87%) of Firefox requests on Android will not be impacted.</p>
    <div>
      <h3>Preparing for the change</h3>
      <a href="#preparing-for-the-change">
        
      </a>
    </div>
    <p>If you’re worried about the change impacting your clients, there are a few things that you can do to reduce the impact of the change. If you control the clients that are connecting to your application, we recommend updating the trust store to include the <a href="https://letsencrypt.org/certificates/#root-certificates">ISRG Root X1</a>. If you use certificate pinning, remove or update your pin. In general, we discourage all customers from pinning their certificates, as this usually leads to issues during certificate renewals or CA changes.</p><p>If you experience issues with the Let’s Encrypt chain change, and you’re using <a href="https://developers.cloudflare.com/ssl/edge-certificates/advanced-certificate-manager/">Advanced Certificate Manager</a> or <a href="https://developers.cloudflare.com/cloudflare-for-platforms/cloudflare-for-saas/security/certificate-management/">SSL for SaaS</a> on the Enterprise plan, you can choose to switch your certificate to use Google Trust Services as the certificate authority instead.</p><p>For more information, please refer to our <a href="https://developers.cloudflare.com/ssl/reference/migration-guides/lets-encrypt-chain/">developer documentation</a>.</p><p>While this change will impact a very small portion of clients, we support the shift that Let’s Encrypt is making as it supports a more secure and agile Internet.</p>
    <div>
      <h3>Embracing change to move towards a better Internet</h3>
      <a href="#embracing-change-to-move-towards-a-better-internet">
        
      </a>
    </div>
    <p>Looking back, there were a number of challenges that slowed down the adoption of new technologies and standards that helped make the Internet faster, more secure, and more reliable.</p><p>For starters, before Cloudflare <a href="/introducing-universal-ssl">launched Universal SSL</a>, free certificates were not attainable. Instead, domain owners had to pay around $100 to get a TLS certificate. For a <a href="https://www.cloudflare.com/small-business/">small business</a>, this is a big cost, and without browsers enforcing <a href="https://www.cloudflare.com/learning/ssl/transport-layer-security-tls/">TLS</a>, it significantly hindered TLS adoption for years. Insecure algorithms have taken decades to deprecate due to lack of support for new algorithms in browsers or devices. We learned this lesson while <a href="/sha-1-deprecation-no-browser-left-behind/">deprecating SHA-1</a>.</p><p>Supporting new security standards and protocols is vital for us to continue improving the Internet. Over the years, big and sometimes risky changes were made in order for us to move forward. The launch of Let’s Encrypt in 2015 was monumental. Let’s Encrypt allowed every domain to <a href="https://www.cloudflare.com/application-services/products/ssl/">get a TLS certificate for free</a>, which paved the way to a more secure Internet, with now <a href="https://radar.cloudflare.com/adoption-and-usage#http-vs-https">around 98%</a> of traffic using <a href="https://www.cloudflare.com/learning/ssl/what-is-https/">HTTPS</a>.</p><p>In 2014, Cloudflare launched <a href="https://www.cloudflare.com/learning/dns/dnssec/ecdsa-and-dnssec/">elliptic curve digital signature algorithm (ECDSA)</a> support for Cloudflare-issued certificates and made the decision to issue ECDSA-only certificates to free customers. 
This boosted ECDSA adoption by pressing clients and web operators to make changes to support the <a href="https://www.cloudflare.com/dns/dnssec/ecdsa-and-dnssec/">new algorithm</a>, which provided the same (if not better) security as RSA while also improving performance. In addition to that, modern browsers and operating systems are now being built in a way that allows them to constantly support new standards, so that they can deprecate old ones.</p><p>For us to move forward in supporting new standards and protocols, we need to make the Public Key Infrastructure (PKI) ecosystem more agile. By retiring the cross-signed chain, Let’s Encrypt is pushing devices, browsers, and clients to support adaptable trust stores. This allows clients to support new standards without causing a breaking change. It also lays the groundwork for new certificate authorities to emerge.</p><p>Today, one of the main reasons why there’s a limited number of CAs available is that it takes years for them to become widely trusted, that is, without cross-signing with another CA. In 2017, Google launched a new publicly trusted CA, <a href="https://pki.goog/">Google Trust Services</a>, that issued free TLS certificates. Even though they launched a few years after Let’s Encrypt, they faced the same challenges with device compatibility and adoption, which caused them to cross-sign with GlobalSign’s CA. We hope that, by the time GlobalSign’s CA comes up for expiration, almost all traffic is coming from a modern client and browser, meaning the change impact should be minimal.</p> ]]></content:encoded>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <category><![CDATA[SSL]]></category>
            <category><![CDATA[TLS]]></category>
            <category><![CDATA[Application Security]]></category>
            <guid isPermaLink="false">m2vQRUxG0AW4dcsV1vanp</guid>
            <dc:creator>Dina Kozlov</dc:creator>
        </item>
        <item>
            <title><![CDATA[Connection coalescing with ORIGIN Frames: fewer DNS queries, fewer connections]]></title>
            <link>https://blog.cloudflare.com/connection-coalescing-with-origin-frames-fewer-dns-queries-fewer-connections/</link>
            <pubDate>Mon, 04 Sep 2023 13:00:51 GMT</pubDate>
            <description><![CDATA[ In this blog we’re going to take a closer look at “connection coalescing”, with a specific focus on managing it at large scale ]]></description>
            <content:encoded><![CDATA[ <p><i>This blog reports and summarizes the contents of a Cloudflare </i><a href="https://research.cloudflare.com/publications/Singanamalla2022/"><i>research paper</i></a><i> which appeared at the ACM </i><a href="https://conferences.sigcomm.org/imc/2022/program/"><i>Internet Measurement Conference</i></a><i>, that measures and prototypes connection coalescing with ORIGIN Frames.</i></p><p>Some readers might be surprised to hear that a single visit to a web page can cause a browser to make tens, sometimes even hundreds, of web connections. Take this very blog as an example. If it is your first visit to the Cloudflare blog, or it has been a while since your last visit, your browser will make multiple connections to render the page. The browser will make DNS queries to find IP addresses corresponding to blog.cloudflare.com and then subsequent requests to retrieve any necessary subresources on the web page needed to successfully render the complete page. How many? Looking below, at the time of writing, there are 32 different hostnames used to load the Cloudflare Blog. That means 32 DNS queries and <i>at least</i> 32 TCP (or QUIC) connections, unless the client is able to reuse (or coalesce) some of those connections.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5iVEUq8ZQ8HsPbg1FAP5jq/de5899ea338a7628e732558caf2f8710/Screenshot-2023-09-03-at-18.34.41.png" />
            
            </figure><p>Each new web connection not only introduces additional load on a server's processing capabilities – potentially leading to scalability challenges during peak usage hours – but also exposes client metadata to the network, such as the plaintext hostnames being accessed by an individual. Such meta information can potentially reveal a user’s online activities and browsing behaviors to on-path network adversaries and eavesdroppers!</p><p>In this blog we’re going to take a closer look at “connection coalescing”. Since our initial look at <a href="/connection-coalescing-experiments/">IP-based coalescing in 2021</a>, we have done further large-scale measurements and modeling across the Internet, to understand and predict if and where coalescing would work best. Since IP coalescing is difficult to manage at large scale, last year we implemented and experimented with a promising standard called the <a href="https://datatracker.ietf.org/doc/rfc8336/">HTTP/2 ORIGIN Frame extension</a> that we leveraged to coalesce connections to our edge without worrying about managing IP addresses.</p><p>All told, there are opportunities being missed by many large providers. We hope that this blog (and our <a href="https://research.cloudflare.com/publications/Singanamalla2022/">publication</a> at ACM IMC 2022 with full details) offers a first step that helps servers and clients take advantage of the ORIGIN Frame standard.</p>
    <div>
      <h3>Setting the stage</h3>
      <a href="#setting-the-stage">
        
      </a>
    </div>
    <p>At a high level, as a user navigates the web, the browser renders web pages by retrieving dependent subresources to construct the complete web page. This process bears a striking resemblance to the way physical products are assembled in a factory. In this sense, a modern web page can be considered like an assembly plant. It relies on a ‘supply chain’ of resources that are needed to produce the final product.</p><p>An assembly plant in the physical world can place a single order for different parts and get a single shipment from the supplier (similar to the <a href="https://www.sciencedirect.com/science/article/abs/pii/092552739290109K">kitting process</a> for maximizing value and minimizing response time); no matter the manufacturer of those parts or where they are made -- one ‘connection’ to the supplier is all that is needed. Any single truck from a supplier to an assembly plant can be filled with parts from multiple manufacturers.</p><p>The design of the web causes browsers to typically do the opposite in nature. To retrieve the images, JavaScript, and other resources on a web page (the parts), web clients (assembly plants) have to make <i>at least</i> one connection to every hostname (the manufacturers) defined in the HTML that is returned by the server (the supplier). It makes no difference if the connections to those hostnames go to the same server or not, for example they could go to a <a href="https://www.cloudflare.com/learning/cdn/glossary/reverse-proxy/">reverse proxy</a> like Cloudflare. For each manufacturer a ‘new’ truck would be needed to transfer the materials to the assembly plant from the same supplier, or more formally, a new connection would need to be made to request a subresource from a hostname on the same web page.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3jdJ2PTbkKJKMFxbWD6RgD/40b366c9315d9c55e2409e1654bc0a3d/pasted-image-0--4-.png" />
            
            </figure><p>Without connection coalescing</p><p>The number of connections used to load a web page can be surprisingly high. It is also common for the subresources to need yet other sub-subresources, and so new connections emerge as a result of earlier ones. Remember, too, that HTTP connections to hostnames are often preceded by DNS queries! Connection coalescing allows us to use fewer connections, or <i>‘reuse’ the same set of trucks to carry parts from multiple manufacturers from a single supplier</i>.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/jJx6aGfQCngN6w6D4jpeE/1dcfe2b5e3f8128d8837512ec1a959c9/pasted-image-0--5-.png" />
            
            </figure><p>With connection coalescing </p>
    <div>
      <h3>Connection coalescing in principle</h3>
      <a href="#connection-coalescing-in-principle">
        
      </a>
    </div>
    <p>Connection coalescing was <a href="https://datatracker.ietf.org/doc/html/rfc7540">introduced in HTTP/2</a>, and carried over into <a href="https://www.rfc-editor.org/rfc/rfc9114.html#name-connection-reuse">HTTP/3</a>. We’ve blogged about connection coalescing <a href="/connection-coalescing-experiments/">previously</a> (for a detailed primer we encourage going over that blog). While the idea is simple, implementing it can present a number of engineering challenges. For example, recall from above that there are 32 hostnames (at the time of writing) to load the web page you are reading right now. Among the 32 hostnames are 16 unique domains (defined as “Effective <a href="https://www.cloudflare.com/learning/dns/top-level-domain/">TLD+1</a>”). Can we create fewer connections or ‘coalesce’ existing connections for each unique domain? The answer is ‘<i>Yes, but it depends</i>’.</p><p>The exact number of connections to load the blog page is not at all obvious, and hard to know. There may be 32 hostnames attached to 16 domains but, counter-intuitively, this does not mean the answer to “how many unique connections?” is 16. The true answer could be as few as <i>one</i> connection if all the hostnames are reachable at a single server; or as many as 32 independent connections if a different and distinct server is needed to access each individual hostname.</p><p>Connection reuse comes in many forms, so it’s important to define “connection coalescing” in the HTTP space. For example, the reuse of an existing <a href="https://www.cloudflare.com/learning/ddos/glossary/tcp-ip/">TCP</a> or TLS connection to a hostname to make multiple requests for subresources from that <b><i>same</i></b> hostname is connection reuse, but not coalescing.</p><p>Coalescing occurs when an existing TLS channel for some hostname can be repurposed or used for connecting to a <b><i>different</i></b> hostname. 
For example, upon visiting blog.cloudflare.com, the HTML points to subresources at cdnjs.cloudflare.com. To reuse the same TLS connection for the subresources, it is necessary for both hostnames to appear together in the TLS certificate's <a href="https://en.wikipedia.org/wiki/Subject_Alternative_Name">“Subject Alternative Name (SAN)”</a> list, but this step alone is not sufficient to convince browsers to coalesce. After all, the cdnjs.cloudflare.com service may or may not be reachable at the same server as blog.cloudflare.com, despite being on the same certificate. So how can the browser know? Coalescing only works if servers set up the right conditions, but clients have to decide whether to coalesce or not – thus, browsers require a signal to coalesce beyond the SANs list on the certificate. Revisiting our analogy, the assembly plant may order a part from a manufacturer directly, not knowing that the supplier already has the same part in its warehouse.</p><p>There are two explicit signals a browser can use to decide whether connections can be coalesced: one is IP-based, the other ORIGIN Frame-based. The former requires the server operators to tightly bind DNS records to the HTTP resources available on the server. This is difficult to manage and deploy, and actually creates a risky dependency, because you have to place all the resources behind a specific set of IP addresses, or even a single one. The way IP addresses influence coalescing decisions <a href="https://daniel.haxx.se/blog/2016/08/18/http2-connection-coalescing/">varies among browsers</a>, with some choosing to be more conservative and others more permissive. 
Alternatively, the HTTP ORIGIN Frame is an easier signal for servers to orchestrate; it’s also flexible and fails gracefully, with no interruption to service (for a specification-compliant implementation).</p><p>A foundational difference between these two coalescing signals: IP-based coalescing signals are implicit, even accidental, and force clients to infer coalescing possibilities that may or may not exist. None of this is surprising, since IP addresses are designed to <a href="/addressing-agility/">have no real relationship with names!</a> In contrast, the ORIGIN Frame is an explicit signal from servers to clients that coalescing is available, no matter what DNS says for any particular hostname.</p><p>We have experimented with <a href="/connection-coalescing-experiments/">IP-based coalescing previously</a>; for the purposes of this blog we will take a deeper look at ORIGIN Frame-based coalescing.</p>
    <div>
      <h3>What is the ORIGIN Frame standard?</h3>
      <a href="#what-is-the-origin-frame-standard">
        
      </a>
    </div>
    <p>The ORIGIN Frame is an extension to the <a href="https://www.rfc-editor.org/rfc/rfc8336">HTTP/2</a> and <a href="https://www.rfc-editor.org/rfc/rfc9412">HTTP/3</a> specifications, a special frame sent on stream 0 (HTTP/2) or the control stream (HTTP/3) of the connection. The frame allows a server to send an ‘origin-set’ to clients on an <i>existing</i> established TLS connection: the hostnames the server is authorized to serve, for which requests will not incur any <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/421">HTTP 421 errors</a>. Hostnames in the origin-set MUST also appear in the certificate SAN list for the server, even if those hostnames are announced on different IP addresses via DNS.</p><p>Specifically, two different steps are required:</p><ol><li><p>Web servers must send a list enumerating the Origin Set (the hostnames that a given connection might be used for) in the ORIGIN Frame extension.</p></li><li><p>The TLS certificate returned by the web server must cover the additional hostnames being returned in the ORIGIN Frame in the DNS names SAN entries.</p></li></ol><p>At a high level, ORIGIN Frames are a supplement to the TLS certificate that operators can attach to say, “Psst! Hey, client, here are the names in the SANs that are available on this connection -- you can coalesce!” Since the ORIGIN Frame is not part of the certificate itself, its contents can be changed independently. No new certificate is required. There is also no dependency on IP addresses. For a coalesceable hostname, existing TCP/QUIC+TLS connections can be reused without requiring new connections or DNS queries.</p><p><a href="https://w3techs.com/technologies/overview/proxy">Many websites today</a> rely on content which is served by CDNs, like Cloudflare’s CDN service. 
The practice of using external CDN services offers websites speed and reliability while reducing the load on their <a href="https://www.cloudflare.com/learning/cdn/glossary/origin-server/">origin servers</a>. When both the website and its resources are served by the same CDN, despite being different hostnames owned by different entities, some very interesting opportunities open up for CDN operators to allow connections to be reused and coalesced, since they control both the certificate management and the connection handling needed to send ORIGIN Frames on behalf of the real origin server.</p><p>Unfortunately, there has been no way to turn the possibilities enabled by the ORIGIN Frame into practice. To the best of our knowledge, until today, there has been no server implementation that supports ORIGIN Frames. Among browsers, only Firefox supports ORIGIN Frames. Since IP coalescing is challenging and the ORIGIN Frame has no deployed support, is the engineering time and energy to better support coalescing worth the investment? We decided to find out with a large-scale Internet-wide measurement to understand the opportunities and predict the possibilities, and then implemented the ORIGIN Frame to experiment on production traffic.</p>
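<p>To make the mechanism concrete, here is a minimal sketch, in Go (the language of the open-sourced server patches described later in this post), of how an HTTP/2 ORIGIN frame could be serialized per RFC 8336: the frame type is 0xC, it is sent on stream 0 with no flags, and its payload is a sequence of 16-bit-length-prefixed ASCII origins. This is an illustrative encoder, not Cloudflare's implementation.</p>

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// originFrameType is the HTTP/2 ORIGIN frame type registered by RFC 8336.
const originFrameType = 0xC

// encodeOriginFrame builds a complete HTTP/2 ORIGIN frame (9-byte frame
// header plus payload) carrying the given origin-set. Each payload entry is
// a 16-bit big-endian length followed by the ASCII serialization of the
// origin. The frame is always sent on stream 0 with no flags.
func encodeOriginFrame(origins []string) []byte {
	var payload []byte
	for _, o := range origins {
		var l [2]byte
		binary.BigEndian.PutUint16(l[:], uint16(len(o)))
		payload = append(payload, l[:]...)
		payload = append(payload, o...)
	}

	// HTTP/2 frame header: 24-bit length, 8-bit type, 8-bit flags,
	// 32 bits of reserved bit + stream identifier (stream 0 here).
	header := []byte{
		byte(len(payload) >> 16), byte(len(payload) >> 8), byte(len(payload)),
		originFrameType,
		0x00,                   // no flags are defined for ORIGIN
		0x00, 0x00, 0x00, 0x00, // stream 0
	}
	return append(header, payload...)
}

func main() {
	frame := encodeOriginFrame([]string{
		"https://blog.cloudflare.com",
		"https://cdnjs.cloudflare.com",
	})
	fmt.Printf("frame length: %d bytes, type: 0x%X\n", len(frame), frame[3])
}
```

<p>Running the encoder over the two hostnames from the example above yields a 68-byte frame: a 9-byte header followed by two length-prefixed origins.</p>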
    <div>
      <h3>Experiment #1: What is the scale of required changes?</h3>
      <a href="#experiment-1-what-is-the-scale-of-required-changes">
        
      </a>
    </div>
    <p>In February 2021, <a href="/connection-coalescing-experiments/">we collected data</a> for 500K of the <a href="https://radar.cloudflare.com/domains">most popular websites</a> on the Internet, using a modified <a href="https://github.com/WPO-Foundation/webpagetest">Web Page Test</a> on 100 virtual machines. An automated Chrome (v88) browser instance was launched for every visit to a web page to eliminate caching effects (because we wanted to understand coalescing, not caching). On successful completion of each session, Chrome developer tools were used to retrieve and write the page load data as an HTTP Archive (HAR) file with a full timeline of events, as well as additional information about certificates and their validation. Additionally, we parsed the certificate chains for the root web page and new TLS connections triggered by subresource requests to (i) identify certificate issuers for the hostnames, (ii) inspect the presence of the Subject Alternative Name (SAN) extension, and (iii) validate that DNS names resolve to the IP address used. Further details about our methodology and results can be found in the technical <a href="https://research.cloudflare.com/publications/Singanamalla2022/">paper</a>.</p><p>The first step was to understand what resources are requested by web pages to successfully render the page contents, and where these resources are present on the Internet. Connection coalescing becomes possible when subresource domains are co-located. We approximated the location of a domain by finding its corresponding autonomous system (AS). For example, the domain attached to <a href="https://cdnjs.cloudflare.com/">cdnjs</a> is reachable via AS 13335 in the BGP routing table, and that AS number belongs to Cloudflare. The figure below describes the percentage of web pages and the number of unique ASes needed to fully load a web page.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3H188I6GyYxiqZEYxBPMPp/4bde6b40a523d0e66207c87cee40755c/Screenshot-2023-08-31-at-1.39.16-PM.png" />
            
            </figure><p>Around 14% of the web pages need two ASes to fully load, i.e., pages that depend on one additional AS for subresources. More than 50% of the web pages need to contact no more than six ASes to obtain all the necessary subresources. This finding, as shown in the plot above, implies that a relatively small number of operators serve the subresource content necessary for a majority (~50%) of the websites, and any usage of ORIGIN Frames would need only a few changes to have its intended impact. The potential for connection coalescing can therefore be optimistically approximated by the number of unique ASes needed to retrieve all subresources in a web page. In practice, however, this may be superseded by operational factors such as SLAs, or helped by flexible mappings between sockets, names, and IP addresses, which we worked on <a href="https://research.cloudflare.com/publications/Fayed2021/">previously at Cloudflare</a>.</p><p>We then tried to understand the impact of coalescing on connection metrics. The measured and ideal numbers of DNS queries and TLS connections needed to load a web page are summarized by their CDFs in the figure below.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7aCaLsYdjNqeuQAuVSfQ6c/beae2aeaf3e96830f9562c09aeb0a2cd/Screenshot-2023-08-31-at-1.39.02-PM.png" />
            
            </figure><p>Through modeling and extensive analysis, we identified that connection coalescing through ORIGIN Frames could reduce the number of DNS queries and TLS connections made by browsers by over 60% at the median. We performed this modeling by identifying the number of times the clients requested DNS records, and combined them with the ideal ORIGIN Frames to serve.</p><p>Many multi-origin servers, such as those operated by <a href="https://www.cloudflare.com/learning/cdn/what-is-a-cdn/">CDNs</a>, tend to reuse certificates and serve the same certificate with multiple DNS SAN entries. This allows the operators to manage fewer certificates through their creation and renewal cycles. While a certificate could in theory carry millions of names, such certificates are impractical to create and a challenge to manage effectively. By continuing to rely on existing certificates, our modeling brings to light the scale of changes required to enable perfect coalescing, as highlighted in the figure below.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5rIcRVeD6UEGNCW9dj3MYM/187f0bc53e9afedf3baca072410eb4db/Screenshot-2023-08-31-at-1.38.35-PM.png" />
            
            </figure><p>We identified that over 60% of the certificates served by websites do not need any modifications and could benefit from ORIGIN Frames as-is, and that with no more than 10 additions to the DNS SAN names in certificates we could successfully coalesce connections for over 92% of the websites in our measurement. The most effective changes could be made by CDN providers, by adding three or four of their most popular requested hostnames to each certificate.</p>
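<p>The AS-based approximation used in this section can be sketched in a few lines of Go: group a page's subresource hostnames by the AS that announces them, and count the distinct ASes as an optimistic lower bound on the number of connections needed. The hostname-to-ASN mapping below is illustrative only (AS 13335 is Cloudflare; AS 64496 is an ASN reserved for documentation, standing in for an arbitrary third party).</p>

```go
package main

import "fmt"

// uniqueASes approximates the minimum number of connections needed to load a
// page by counting the distinct autonomous systems serving its subresource
// hostnames. In the real measurement, the hostname-to-ASN mapping would come
// from resolving each hostname and consulting a BGP routing table snapshot.
func uniqueASes(hostToASN map[string]int) int {
	seen := make(map[int]bool)
	for _, asn := range hostToASN {
		seen[asn] = true
	}
	return len(seen)
}

func main() {
	// Illustrative page: two hostnames co-located on one AS, one elsewhere.
	page := map[string]int{
		"blog.cloudflare.com":  13335, // Cloudflare
		"cdnjs.cloudflare.com": 13335, // Cloudflare
		"fonts.example.net":    64496, // hypothetical third party
	}
	fmt.Println(uniqueASes(page)) // prints 2
}
```

<p>Here two of the three hostnames are co-located, so an ideal coalescing deployment could serve the page over two connections rather than three.</p>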
    <div>
      <h3>Experiment #2: ORIGIN Frames in action</h3>
      <a href="#experiment-2-origin-frames-in-action">
        
      </a>
    </div>
    <p>In order to validate our modeling expectations, we then took a more active approach in early 2022. Our next experiment focused on 5,000 websites that make extensive use of <i>cdnjs.cloudflare.com</i> as a subresource. By modifying our experimental TLS termination endpoint, we deployed HTTP/2 ORIGIN Frame support as defined in the <a href="https://datatracker.ietf.org/doc/rfc8336/">RFC standard</a>. This involved changing the internal forks of the <i>net</i> and <i>http</i> dependency modules of Golang, which we have open sourced (<a href="https://github.com/cloudflare/go-originframe">see here</a> and <a href="https://github.com/cloudflare/net-originframe">here</a>).</p><p>During the experiments, connecting to a website in the experiment set would return <i>cdnjs.cloudflare.com</i> in the ORIGIN Frame, while the control set returned an arbitrary (unused) hostname. All existing edge certificates for the 5,000 websites were also modified. For the experimental group, the corresponding certificates were renewed with <i>cdnjs.cloudflare.com</i> added to the SAN. To ensure integrity between the control and experimental sets, the control group's certificates were also renewed, with a valid third-party domain of identical length (used by none of the control domains) added instead. This kept the relative size change of the certificates constant, avoiding potential bias from differing certificate sizes. Our results were striking!</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2jGJXHliykd9CGh3lNujpq/bc1b9faf871d7d4c6ecf7041966078cb/Screenshot-2023-08-31-at-1.38.47-PM.png" />
            
            </figure><p>Sampling 1% of the requests we received from Firefox to the websites in the experiment, we identified an over <b>50% reduction in new TLS connections per second</b>, meaning fewer cryptographic verification operations for clients and reduced compute overheads for servers. As expected, there were no differences in the control set, confirming that the reduction in the experimental set came from connection reuse as seen by the CDN or server operators.</p>
    <div>
      <h3>Discussion and insights</h3>
      <a href="#discussion-and-insights">
        
      </a>
    </div>
    <p>While our modeling measurements indicated that we could anticipate some performance improvements, in practice performance was not significantly better, suggesting that ‘no worse’ is the appropriate mental model regarding performance. The subtle interplay between resource object sizes, competing connections, and congestion control is subject to network conditions. Bottleneck-share capacity, for example, diminishes as fewer connections compete for bottleneck resources on network links. It would be interesting to revisit these measurements as more operators deploy support for ORIGIN Frames on their servers.</p><p>Apart from performance, one major benefit of ORIGIN Frames is privacy. How? Each coalesced connection hides client metadata that is otherwise leaked by non-coalesced connections. Certain resources on a web page are loaded depending on how one interacts with the website. This means that for every new connection to retrieve some resource from the server, TLS plaintext metadata like the <a href="https://www.cloudflare.com/learning/ssl/what-is-sni/">SNI</a> (in the absence of <a href="/encrypted-client-hello/">Encrypted Client Hello</a>), and at least one plaintext DNS query if transmitted over UDP or TCP on port 53, is exposed to the network. Coalescing connections removes the need for browsers to open new TLS connections and to issue extra DNS queries. This prevents metadata leakage to anyone listening on the network. 
ORIGIN Frames thus improve privacy by reducing the amount of cleartext information leaked to on-path eavesdroppers.</p><p>While browsers benefit from fewer cryptographic computations to verify multiple certificates, a major advantage is that coalescing opens up very interesting future opportunities for resource scheduling at the endpoints (the browsers and the origin servers), such as <a href="/better-http-3-prioritization-for-a-faster-web/">prioritization</a> or recent proposals like <a href="/early-hints/">HTTP Early Hints</a>, providing clients experiences where connections are not overloaded or competing for resources. When coupled with the <a href="https://datatracker.ietf.org/doc/html/draft-ietf-httpbis-http2-secondary-certs-06#section-3.4">CERTIFICATE Frames</a> IETF draft, we could further eliminate the need for manual certificate modifications, as a server could prove its authority over hostnames after connection establishment without any additional SAN entries on the website’s TLS certificate.</p>
    <div>
      <h3>Conclusion and call to action</h3>
      <a href="#conclusion-and-call-to-action">
        
      </a>
    </div>
    <p>In summary, the current Internet ecosystem has a lot of opportunities for connection coalescing with only a few changes to certificates and server infrastructure. Servers can reduce the number of TLS handshakes by roughly 50%, while reducing the number of render-blocking DNS queries by over 60%. Clients additionally reap privacy benefits from reduced cleartext DNS exposure to network onlookers.</p><p>To help make this a reality, we are currently planning to add support for both HTTP/2 and HTTP/3 ORIGIN Frames for our customers. We also encourage other operators that manage third-party resources to adopt ORIGIN Frame support to improve the Internet ecosystem. Our paper was accepted to the ACM Internet Measurement Conference 2022 and is <a href="https://research.cloudflare.com/publications/Singanamalla2022/">available for download</a>. If you’d like to work on projects like this, where you get to see the rubber meet the road for new standards, visit our <a href="https://www.cloudflare.com/careers/">careers page</a>!</p> ]]></content:encoded>
            <category><![CDATA[DNS]]></category>
            <category><![CDATA[Internship Experience]]></category>
            <category><![CDATA[Speed & Reliability]]></category>
            <category><![CDATA[HTTP2]]></category>
            <category><![CDATA[TLS]]></category>
            <category><![CDATA[Research]]></category>
            <guid isPermaLink="false">QjYiQB1Bf6uRL71yURBMi</guid>
            <dc:creator>Suleman Ahmad</dc:creator>
            <dc:creator>Jonathan Hoyland</dc:creator>
            <dc:creator>Sudheesh Singanamalla</dc:creator>
        </item>
        <item>
            <title><![CDATA[Introducing per hostname TLS settings — security fit to your needs]]></title>
            <link>https://blog.cloudflare.com/introducing-per-hostname-tls-settings/</link>
            <pubDate>Wed, 09 Aug 2023 13:00:41 GMT</pubDate>
            <description><![CDATA[ Starting today, customers that use Cloudflare’s Advanced Certificate Manager can configure TLS settings on individual hostnames within a domain ]]></description>
            <content:encoded><![CDATA[ <p></p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/335NNxJPBqafpqdRHUa7D6/24c8f5b2dca27353fc617c61f9f44ded/image1-6.png" />
            
            </figure><p>One of the goals of Cloudflare is to give our customers the necessary knobs to enable security in a way that fits their needs. In the realm of SSL/TLS, we offer two key controls: setting the minimum TLS version, and restricting the list of supported cipher suites. Previously, these settings applied to the entire domain, resulting in an “all or nothing” effect. While having uniform settings across the entire domain is ideal for some users, it sometimes lacks the necessary granularity for those with diverse requirements across their subdomains.</p><p>It is for that reason that we’re excited to announce that as of today, customers will be able to set their TLS settings on a per-hostname basis.</p>
    <div>
      <h3>The trade-off with using modern protocols</h3>
      <a href="#the-trade-off-with-using-modern-protocols">
        
      </a>
    </div>
    <p>In an ideal world, every domain could be updated to use the most secure and modern protocols without any setbacks. Unfortunately, that's not the case. New standards and protocols require adoption in order to be effective. TLS 1.3 was standardized by the IETF in April 2018. It removed the vulnerable cryptographic algorithms that TLS 1.2 supported and provided a performance boost by requiring only one round trip, as opposed to two. For a user to benefit from TLS 1.3, they need their browser or device to support the new TLS version. For modern browsers and operating systems this isn’t a problem - they are built to dynamically update to support new protocols. But legacy clients and devices were, obviously, not built with the same mindset. Before 2015, new protocols and standards were developed over decades, not months or years, so clients shipped with support for a single standard - the one in use at the time.</p><p>If we look at <a href="https://radar.cloudflare.com/adoption-and-usage/">Cloudflare Radar</a>, we can see that about 62.9% of traffic uses TLS 1.3. That’s quite significant for a protocol that was only standardized five years ago. But it also means that a significant portion of the Internet continues to use TLS 1.2 or lower.</p><p>The same trade-off applies to <a href="https://www.cloudflare.com/learning/dns/dnssec/ecdsa-and-dnssec/">digital signature algorithms</a>. ECDSA for TLS was standardized in 2006, years after RSA. It offers a higher level of security than RSA and uses shorter key lengths, which adds a performance boost to every request. To use ECDSA, a domain owner needs to obtain and serve an ECDSA certificate, and the connecting client needs to support cipher suites that use elliptic curve cryptography (ECC). 
While most publicly trusted certificate authorities now support ECDSA-based certificates, the slow rate of adoption has led many legacy systems to only support RSA, which means that restricting applications to only support ECC-based algorithms could prevent access from those that use older clients and devices.</p>
    <div>
      <h3>Balancing the trade-offs</h3>
      <a href="#balancing-the-trade-offs">
        
      </a>
    </div>
    <p>When it comes to security and accessibility, it’s important to find the right middle ground for your business.</p><p>To maintain brand, most companies deploy all of their assets under one domain. It’s common for the root domain (e.g. example.com) to be used as a marketing website to provide information about the company, its mission, and the products and services it offers. Then, under the same domain, you might have your company blog (e.g. blog.example.com), your management portal (e.g. dash.example.com), and your <a href="https://www.cloudflare.com/application-services/products/api-gateway/">API gateway</a> (e.g. api.example.com).</p><p>The marketing website and the blog are similar in that they’re static sites that don’t collect information from the accessing users. On the other hand, the management portal and API gateway collect and present sensitive data that needs to be protected.</p><p>When you’re thinking about which settings to deploy, you want to consider the data that’s exchanged and the user base. The marketing website and blog should be accessible to all users. You can set them up to support modern protocols for the clients that support them, but you don’t necessarily want to restrict access for users that are accessing these pages from old devices.</p><p>The management portal and API gateway should be set up in a manner that provides the best protection for the data exchanged. That means dropping support for less secure standards with known vulnerabilities and requiring new, secure protocols to be used.</p><p>To be able to achieve this setup, you need to be able to configure settings for every subdomain within your domain individually.</p>
    <div>
      <h3>Per hostname TLS settings - now available!</h3>
      <a href="#per-hostname-tls-settings-now-available">
        
      </a>
    </div>
    <p>Customers that use Cloudflare’s Advanced Certificate Manager can configure TLS settings on individual hostnames within a domain. Customers can use this to enable HTTP/2, or to configure the minimum TLS version and the supported cipher suites on a particular hostname. Any settings applied to a specific hostname supersede the zone-level setting. The new capability also allows you to have different settings on a hostname and its wildcard record, which means you can configure example.com to use one setting and *.example.com to use another.</p><p>Let’s say that you want the default minimum TLS version for your domain to be TLS 1.2, but for your dashboard and API subdomains, you want the minimum TLS version to be TLS 1.3. In the Cloudflare dashboard, you can set the zone-level minimum TLS version to 1.2 as shown below. Then, to make the minimum TLS version for the dashboard and API subdomains TLS 1.3, make a call to the per-hostname TLS settings <a href="https://developers.cloudflare.com/api/operations/per-hostname-tls-settings-put">API endpoint</a> with the specific hostname and setting.</p>
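<p>As a rough illustration, the Go sketch below builds such a call as a PUT request against the per-hostname TLS settings endpoint. The exact path and payload shape should be taken from the linked API documentation; the zone ID and token here are placeholders.</p>

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// newMinTLSRequest builds a PUT request that sets the minimum TLS version for
// a single hostname. The path shape follows the per-hostname TLS settings
// endpoint referenced above; treat it as illustrative and consult the API
// docs for the authoritative form. zoneID and token are placeholders.
func newMinTLSRequest(zoneID, hostname, version, token string) (*http.Request, error) {
	body, err := json.Marshal(map[string]string{"value": version})
	if err != nil {
		return nil, err
	}
	url := fmt.Sprintf(
		"https://api.cloudflare.com/client/v4/zones/%s/hostnames/settings/min_tls_version/%s",
		zoneID, hostname)
	req, err := http.NewRequest(http.MethodPut, url, bytes.NewReader(body))
	if err != nil {
		return nil, err
	}
	req.Header.Set("Authorization", "Bearer "+token)
	req.Header.Set("Content-Type", "application/json")
	return req, nil
}

func main() {
	req, err := newMinTLSRequest("ZONE_ID", "api.example.com", "1.3", "API_TOKEN")
	if err != nil {
		panic(err)
	}
	fmt.Println(req.Method, req.URL.Path)
}
```

<p>An HTTP client would then send the request; repeating it for each sensitive subdomain (dash.example.com, api.example.com) leaves the zone-level TLS 1.2 default untouched everywhere else.</p>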
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2dKeWGy6efhlXffUsYRPRN/49865908b6ef64bfbf77c453eca68d41/Untitled.png" />
            
            </figure><p>This is all available, starting today, through the <a href="https://developers.cloudflare.com/api/operations/per-hostname-tls-settings-put">API endpoint</a>! And if you’d like to learn more about how to use our per-hostname TLS settings, please jump on over to our <a href="https://developers.cloudflare.com/ssl/edge-certificates/additional-options/minimum-tls/">developer documentation</a>.</p> ]]></content:encoded>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[SSL]]></category>
            <category><![CDATA[TLS]]></category>
            <category><![CDATA[Product News]]></category>
            <guid isPermaLink="false">5Ur3aAZPj8DIsf4qnHteG8</guid>
            <dc:creator>Dina Kozlov</dc:creator>
        </item>
        <item>
            <title><![CDATA[Bring your own CA for client certificate validation with API Shield]]></title>
            <link>https://blog.cloudflare.com/bring-your-own-ca-for-client-certificate-validation-with-api-shield/</link>
            <pubDate>Tue, 11 Jul 2023 13:00:21 GMT</pubDate>
            <description><![CDATA[ API shield customers can now upload their own CA to use for client certificate validation. This ensures that only authorized clients and devices can make requests to your API endpoint or application.  ]]></description>
            <content:encoded><![CDATA[ <p></p><p><a href="https://www.cloudflare.com/learning/security/api/what-is-an-api/">APIs</a> account for more than half of the total traffic of the Internet. They are the building blocks of many modern web applications. As API usage grows, so does the number of <a href="https://www.cloudflare.com/learning/insights-waap-api-security/">API attacks</a>. And so now, more than ever, it’s important to keep these API endpoints secure. Cloudflare’s API Shield solution offers a comprehensive suite of products to safeguard your API endpoints and now we’re excited to give our customers one more tool to keep their endpoints safe. We’re excited to announce that customers can now bring their own Certificate Authority (CA) to use for mutual TLS client authentication. This gives customers more security, while allowing them to maintain control around their Mutual TLS configuration.</p>
    <div>
      <h3>The power of Mutual TLS (mTLS)</h3>
      <a href="#the-power-of-mutual-tls-mtls">
        
      </a>
    </div>
    <p>Traditionally, when we refer to <a href="https://www.cloudflare.com/application-services/products/ssl/">TLS certificates</a>, we talk about the publicly trusted certificates that are presented by servers to prove their identity to the connecting client. With Mutual TLS, both the client and the server present a certificate to establish a two-way channel of trust. Doing this allows the server to check who the connecting client is and whether or not they’re allowed to make a request. The certificate presented by the client - the client certificate - doesn’t need to come from a publicly trusted CA. In fact, it usually comes from a private or self-signed CA. That’s because the only party that needs to be able to trust it is the connecting server. As long as the connecting server has the client certificate and can check its validity, it doesn’t need to be public.</p>
    <div>
      <h3>Securing API endpoints with Mutual TLS</h3>
      <a href="#securing-api-endpoints-with-mutual-tls">
        
      </a>
    </div>
    <p>Mutual TLS plays a crucial role in protecting API endpoints. When it comes to safeguarding these endpoints, it's important to have a security model in place that only allows authorized clients to make requests and keeps everyone else out.</p><p>That’s why when we launched API Shield in 2020 - a product that’s centered around securing API endpoints - we included mutual TLS client certificate validation as part of the offering. We knew that mTLS was the best way for our customers to identify and authorize their connecting clients.</p><p>When we launched mutual TLS for API Shield, we gave each of our customers a dedicated self-signed CA that they could use to issue client certificates. Once the certificates are installed on devices and mTLS is set up, administrators can enforce that connections are only made if they present a client certificate issued from that self-signed CA.</p><p>This feature has been paramount in securing thousands of endpoints, but it does require our customers to install new client certificates on their devices, which isn’t always possible. Some customers have been using mutual TLS for years with their own CA, which means that the client certificates are already in the wild. Unless the application owner has direct control over the clients, it’s usually arduous, if not impossible, to replace the client certificates with ones issued from Cloudflare’s CA. Other customers may be required to use a CA issued from an approved third party in order to meet regulatory requirements.</p><p>To help all of our customers keep their endpoints secure, we’re extending API Shield’s mTLS capability to allow customers to bring their own CA.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1r7GjCcka2i39OMoxNMQvy/250261c4cdae4a84d9f09f6479893748/image2-3.png" />
            
            </figure>
    <div>
      <h3>Get started today</h3>
      <a href="#get-started-today">
        
      </a>
    </div>
    <p>To simplify the management of private PKI at Cloudflare, we created one account-level endpoint that enables customers to upload self-signed CAs to use across different Cloudflare products. Today, this endpoint can be used for API Shield CAs and for <a href="/bring-your-certificates-cloudflare-gateway/">Gateway</a> CAs that are used for traffic inspection.</p><p>If you’re an Enterprise customer, you can upload up to five CAs to your account. Once you’ve uploaded a CA, you can use the API Shield hostname association API to associate the CA with your mTLS-enabled hostnames. That tells Cloudflare to start validating client certificates against the uploaded CA for requests that come in on those hostnames. Before you enforce client certificate validation, you can create a Firewall rule that logs an event when a valid or invalid certificate is served. That will help you determine whether you’ve set things up correctly before you enforce validation and drop unauthorized requests.</p><p>To learn more about how you can use this, refer to our <a href="https://developers.cloudflare.com/ssl/client-certificates/byo-ca-api-shield/">developer documentation</a>.</p><p>If you’re interested in using mutual TLS to secure your corporate network, talk to an account representative about using our Access product to do so.</p> ]]></content:encoded>
            <category><![CDATA[API Shield]]></category>
            <category><![CDATA[SSL]]></category>
            <category><![CDATA[TLS]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[API Security]]></category>
            <guid isPermaLink="false">1TRKVmXwQ3GahPiC428E89</guid>
            <dc:creator>Dina Kozlov</dc:creator>
        </item>
        <item>
            <title><![CDATA[mTLS client certificate revocation vulnerability with TLS Session Resumption]]></title>
            <link>https://blog.cloudflare.com/mtls-client-certificate-revocation-vulnerability-with-tls-session-resumption/</link>
            <pubDate>Mon, 03 Apr 2023 13:00:00 GMT</pubDate>
            <description><![CDATA[ This blog post outlines the root cause analysis and solution for a bug found in Cloudflare’s mTLS implementation ]]></description>
            <content:encoded><![CDATA[ <p></p><p>On December 16, 2022, Cloudflare discovered a bug where, in limited circumstances, some users with revoked certificates may not have been blocked by Cloudflare firewall settings. Specifically, Cloudflare’s <a href="https://developers.cloudflare.com/firewall/">Firewall Rules solution</a> did not block some users with revoked certificates from resuming a session via <a href="https://www.cloudflare.com/learning/access-management/what-is-mutual-tls/">mutual transport layer security (mTLS)</a>, even if the customer had configured Firewall Rules to do so. This bug has been mitigated, and we have no evidence of this being exploited. We notified any customers that may have been impacted in an abundance of caution, so they can check their own logs to determine if an mTLS protected resource was accessed by entities holding a revoked certificate.</p>
    <div>
      <h3>What happened?</h3>
      <a href="#what-happened">
        
      </a>
    </div>
    <p>One of Cloudflare Firewall Rules’ features, introduced in March 2021, lets customers revoke or block a client certificate, preventing it from being used to authenticate and establish a session. For example, a customer may use Firewall Rules to protect a service by requiring clients to provide a client certificate through the mTLS authentication protocol. Customers could also revoke or disable a client certificate, after which it would no longer be able to be used to authenticate a party initiating an encrypted session via mTLS.</p><p>When Cloudflare receives traffic from an end user, a service at the edge is responsible for terminating the incoming TLS connection. From there, this service is <a href="https://www.cloudflare.com/en-gb/learning/cdn/glossary/reverse-proxy/">a reverse proxy</a>, and it is responsible for acting as a bridge between the end user and various upstreams. Upstreams might include other services within Cloudflare such as Workers or Caching, or may travel through Cloudflare to an external server such as an origin hosting content. Sometimes, you may want to restrict access to an endpoint, ensuring that only authorized actors can access it. Using client certificates is a common way of authenticating users. This is referred to as <i>mutual TLS</i>, because both the server and client provide a certificate. When mTLS is enabled for a specific hostname, this service at the edge is responsible for parsing the incoming client certificate and converting that into metadata that is attached to HTTP requests that are forwarded to upstreams. The upstreams can process this metadata and make the decision whether the client is authorized or not.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2TvrmqDG2SgLoVONCkuao6/37eae4dafac44d112dcc4ca8bcfb4baf/image1-12.png" />
            
            </figure><p>Customers can use the Cloudflare dashboard to <a href="https://developers.cloudflare.com/ssl/client-certificates/revoke-client-certificate/">revoke existing client certificates</a>. Instead of immediately failing handshakes involving revoked client certificates, revocation is optionally enforced via Firewall Rules, which take effect at the HTTP request level. This leaves the decision to enforce revocation with the customer.</p><p>So how exactly does this service determine whether a client certificate is revoked?</p><p>When we see a client certificate presented as part of the TLS handshake, we store the entire certificate chain on the TLS connection. This means that for every HTTP request that is sent on the connection, the client certificate chain is available to the application. When we receive a request, we look at the following fields related to a client certificate chain:</p><ol><li><p>Leaf certificate Subject Key Identifier (SKI)</p></li><li><p>Leaf certificate Serial Number (SN)</p></li><li><p>Issuer certificate SKI</p></li><li><p>Issuer certificate SN</p></li></ol><p>Some of these values are used for upstream processing, but the issuer SKI and leaf certificate SN are used to query our internal data stores for revocation status. The data store indexes on an issuer SKI, and stores a collection of revoked leaf certificate serial numbers. If we find the leaf certificate in this collection, we set the relevant metadata for consumption in Firewall Rules.</p><p>But what does this have to do with TLS session resumption?</p><p>To explain this, let’s first discuss how session resumption works. At a high level, session resumption grants the ability for clients and servers to expedite the handshake process, saving both time and resources. The idea is that if a client and server successfully handshake, then future handshakes are more or less redundant, assuming nothing about the handshake needs to change at a fundamental level (e.g. 
cipher suite or TLS version).</p><p>Traditionally, there are two mechanisms for session resumption - session IDs and session tickets. In both cases, the TLS server will handle encrypting the context of the session, which is basically a snapshot of the acquired TLS state that is built up during the handshake process. <b>Session IDs</b> work in a stateful fashion, meaning that the server is responsible for saving this state, somewhere, and keying against the session ID. When a client provides a session ID in the client hello, the server checks to see if it has a corresponding session cached. If it does, then the handshake process is expedited and the cached session is restored. In contrast, <b>session tickets</b> work in a stateless fashion, meaning that the server has no need to store the encrypted session context. Instead, the server sends the client the encrypted session context (AKA a session ticket). In future handshakes, the client can send the session ticket in the client hello, which the server can decrypt in order to restore the session and expedite the handshake.</p><p>Recall that when a client presents a certificate, we store the certificate chain on the TLS connection. It was discovered that when sessions were resumed, the code to store the client certificate chain in application data did not run. As a result, we were left with an empty certificate chain, meaning we were unable to check the revocation status and pass this information to firewall rules for further processing.</p><p>To illustrate this, let's use an example where mTLS is used for api.example.com. Firewall Rules are configured to block revoked certificates, and all certificates are revoked. We can reconstruct the client certificate checking behavior using a two-step process. 
First we use OpenSSL's s_client to perform a handshake using the revoked certificate (recall that revocation has nothing to do with the success of the handshake - it only affects HTTP requests <i>on</i> the connection), and dump the session’s context into a "session.txt" file. We then issue an HTTP request on the connection, which fails with a 403 status code response because the certificate is revoked.</p>
            <pre><code>❯ echo -e "GET / HTTP/1.1\r\nHost:api.example.com\r\n\r\n" | openssl s_client -connect api.example.com:443 -cert cert2.pem -key key2.pem -ign_eof  -sess_out session.txt | grep 'HTTP/1.1'
depth=2 C=IE, O=Baltimore, OU=CyberTrust, CN=Baltimore CyberTrust Root
verify return:1
depth=1 C=US, O=Cloudflare, Inc., CN=Cloudflare Inc ECC CA-3
verify return:1
depth=0 C=US, ST=California, L=San Francisco, O=Cloudflare, Inc., CN=sni.cloudflaressl.com
verify return:1
HTTP/1.1 403 Forbidden
^C</code></pre>
            <p>Now, if we reuse "session.txt" to perform session resumption and then issue an identical HTTP request, the request succeeds. This shouldn't happen. We should fail both requests because they both use the same revoked client certificate.</p>
            <pre><code>❯ echo -e "GET / HTTP/1.1\r\nHost:api.example.com\r\n\r\n" | openssl s_client -connect api.example.com:443 -cert cert2.pem -key key2.pem -ign_eof -sess_in session.txt | grep 'HTTP/1.1'
HTTP/1.1 200 OK</code></pre>
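            <p>To make the failure mode concrete, here is a small JavaScript sketch of the lookup described above (a simplified model with illustrative names, not Cloudflare’s actual edge code): the store is keyed by issuer SKI and holds the set of revoked leaf serial numbers, and an empty certificate chain, as left behind by the resumption bug, silently skips the check.</p>
            <pre><code>// Hypothetical revocation store: issuer SKI mapped to the set of
// revoked leaf certificate serial numbers.
const revocationStore = new Map([
  ["issuer-ski-abc", new Set(["03:5f:9a"])],
]);

function isRevoked(chain) {
  if (chain.length === 0) {
    // The bug in a nutshell: on resumed sessions the chain was never
    // re-populated, so this branch was taken and revoked certificates
    // were treated as valid.
    return false;
  }
  const leaf = chain[0];
  const revoked = revocationStore.get(leaf.issuerSki);
  return revoked !== undefined ? revoked.has(leaf.serial) : false;
}</code></pre>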
            
    <div>
      <h3>How we addressed the problem</h3>
      <a href="#how-we-addressed-the-problem">
        
      </a>
    </div>
    <p>Upon realizing that session resumption led to the inability to properly check revocation status, our first reaction was to disable session resumption for all mTLS connections. This blocked the vulnerability immediately.</p><p>The next step was to figure out how to safely re-enable resumption for mTLS. To do so, we needed to stop depending on data stored within the TLS connection state. Instead, we can use an API call that grants us access to the leaf certificate in both the resumed and non-resumed cases. Two pieces of information are necessary: the leaf certificate serial number and the issuer SKI. The issuer SKI is actually included in the leaf certificate as the Authority Key Identifier (AKI). Similar to how one would obtain a certificate’s SKI with <code>X509_get0_subject_key_id</code>, we can use <code>X509_get0_authority_key_id</code> to get the AKI.</p>
    <div>
      <h3>Detailed timeline</h3>
      <a href="#detailed-timeline">
        
      </a>
    </div>
    <p><i>All timestamps are in UTC</i></p><p>In March 2021 we introduced a new feature in Firewall Rules that allows customers to block traffic from revoked mTLS certificates.</p><p>2022-12-16 21:53 - Cloudflare discovers that the vulnerability resulted from a bug whereby certificate revocation status was not checked for session resumptions. Cloudflare begins working on a fix to disable session resumption for all mTLS connections to the edge.</p><p>2022-12-17 02:20 - Cloudflare validates the fix and starts to roll it out globally.</p><p>2022-12-17 21:07 - Rollout is complete, mitigating the vulnerability.</p><p>2023-01-12 16:40 - Cloudflare starts to roll out a fix that supports both session resumption and revocation.</p><p>2023-01-18 14:07 - Rollout is complete.</p><p>In conclusion: once Cloudflare identified the vulnerability, a remediation was put into place quickly. A fix that correctly supports session resumption and revocation has been fully rolled out as of 2023-01-18. After reviewing the logs, Cloudflare has not seen any evidence that this vulnerability has been exploited in the wild.</p> ]]></content:encoded>
            <category><![CDATA[TLS]]></category>
            <category><![CDATA[Bugs]]></category>
            <guid isPermaLink="false">1c0CkT5im0RxAvfTT5PMiA</guid>
            <dc:creator>Rushil Mehra</dc:creator>
        </item>
        <item>
            <title><![CDATA[Out now! Auto-renew TLS certificates with DCV Delegation]]></title>
            <link>https://blog.cloudflare.com/introducing-dcv-delegation/</link>
            <pubDate>Thu, 23 Mar 2023 13:00:00 GMT</pubDate>
            <description><![CDATA[ Cloudflare will now allow customers that are managing DNS externally to auto-renew certificates through DCV Delegation ]]></description>
            <content:encoded><![CDATA[ <p>To get a <a href="https://www.cloudflare.com/application-services/products/ssl/">TLS certificate</a> issued, the requesting party must prove that they own the domain through a process called Domain Control Validation (DCV). As industry-wide standards have evolved to enhance security measures, this process has become manual for Cloudflare customers that manage their DNS externally. Today, we’re excited to announce DCV Delegation — a feature that gives all customers the ability to offload the DCV process to Cloudflare, so that all certificates can be <a href="https://www.cloudflare.com/application-services/solutions/certificate-lifecycle-management/">auto-renewed</a> without the management overhead.</p><p>Security is of utmost importance when it comes to managing web traffic, and one of the most critical aspects of security is ensuring that your application always has a TLS certificate that’s valid and up-to-date. Renewing TLS certificates can be an arduous and time-consuming task, especially as the recommended certificate lifecycle continues to gradually decrease, causing certificates to be renewed more frequently. Failure to get a certificate renewed can result in downtime or insecure connections, which can lead to lost revenue, mistrust from your customers, and a management nightmare for your Ops team.</p><p>Every time a certificate is renewed with a Certificate Authority (CA), the certificate needs to pass a check called Domain Control Validation (DCV). This is a process that a CA goes through to verify that the party requesting the certificate does in fact own or control the domain for which the certificate is being requested. One of the benefits of using Cloudflare as your Authoritative DNS provider is that we can always prove ownership of your domain and therefore auto-renew your certificates. However, a big chunk of our customers manage their DNS externally. 
Before today, certificate renewals required these customers to make manual changes every time the certificate came up for renewal. Now, with DCV Delegation - you can let Cloudflare do all the heavy lifting.</p>
    <div>
      <h3>DCV primer</h3>
      <a href="#dcv-primer">
        
      </a>
    </div>
    <p>Before we dive into how DCV Delegation works, let’s quickly recap what DCV is: the process of verifying that the party requesting a certificate owns or controls the domain for which the certificate is being requested.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5L3DANFKTaTScKScK8hdsA/c406eda2b34d4e8aa06e9c1892d7acdc/image4-18.png" />
            
            </figure><p>When a subscriber requests a certificate from a CA, the CA returns validation tokens that the domain owner needs to place. The token can be an HTTP file that the domain owner needs to serve from a specific endpoint or it can be a DNS TXT record that they can place at their Authoritative DNS provider. Once the tokens are placed, ownership has been proved, and the CA can proceed with the certificate issuance.</p>
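            <p>The DNS-based flow above can be sketched in a few lines of JavaScript (an illustrative model, not a real CA implementation; in practice this corresponds to the dns-01 challenge defined in RFC 8555): the CA looks for its token in a TXT record at a well-known name under the domain.</p>
            <pre><code>// Name where the CA expects the DNS validation token to be placed.
function dcvRecordName(domain) {
  return "_acme-challenge." + domain;
}

// lookupTxt is injected so the check can be modelled without real DNS;
// a real CA would query the domain's authoritative nameservers.
function dcvPassed(domain, expectedToken, lookupTxt) {
  return lookupTxt(dcvRecordName(domain)).includes(expectedToken);
}</code></pre>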
    <div>
      <h3>Better security practices for certificate issuance</h3>
      <a href="#better-security-practices-for-certificate-issuance">
        
      </a>
    </div>
    <p>Certificate issuance is a serious process. Any shortcomings can lead to a malicious actor issuing a certificate for a domain they do not own. What this means is that the actor could serve the certificate from a spoofed domain that looks exactly like yours and hijack and decrypt the incoming traffic. Because of this, over the last few years, changes have been put in place to ensure higher security standards for certificate issuances.</p>
    <div>
      <h3>Shorter certificate lifetimes</h3>
      <a href="#shorter-certificate-lifetimes">
        
      </a>
    </div>
    <p>The first change is the move to shorter-lived certificates. Before 2011, a certificate could be valid for up to 96 months (eight years). Over the last few years, the accepted validity period has been significantly shortened. In 2012, certificate validity went down to 60 months (5 years), in 2015 the lifespan was shortened to 39 months (about 3 years), in 2018 to 24 months (2 years), and in 2020, the lifetime was dropped to 13 months. Following the trend, we’re going to continue to see certificate lifetimes decrease even further, with three-month certificates becoming the standard. We’re already seeing this in action with Certificate Authorities like Let’s Encrypt and Google Trust Services offering a maximum validity period of 90 days (3 months). Shorter-lived certificates aim to reduce the compromise window in a situation where a malicious party has gained control over a TLS certificate or private key. The shorter the lifetime, the less time the bad actor can make use of the compromised material. At Cloudflare, we even give customers the ability to issue two-week certificates to reduce the impact window even further.</p><p>While this provides a better security posture, it does require more management overhead for the domain owner, as they’ll now be responsible for completing the DCV process every time the certificate is up for renewal, which can be every 90 days. In the past, CAs would allow the re-use of validation tokens, meaning even if the certificate was renewed more frequently, the validation tokens could be re-used so that the domain owner wouldn’t need to complete DCV again. Now, more and more CAs are requiring unique tokens to be placed for every renewal, meaning shorter certificate lifetimes now result in additional management overhead.</p>
    <div>
      <h3>Wildcard certificates now require DNS-based DCV</h3>
      <a href="#wildcard-certificates-now-require-dns-based-dcv">
        
      </a>
    </div>
    <p>Aside from certificate lifetimes, the process required to get a certificate issued has developed stricter requirements over the last few years. The Certificate Authority/Browser Forum (CA/B Forum), the governing body that sets the rules and standards for certificates, has enforced stricter requirements around certificate issuance to ensure that certificates are issued in a secure manner that prevents a malicious actor from obtaining certificates for domains they do not own.</p><p>In May 2021, the CA/B Forum <a href="https://cabforum.org/2021/06/03/ballot-sc45-wildcard-domain-validation/">voted</a> to require DNS-based validation for any certificate with a wildcard hostname on it. This means that if you would like to get a TLS certificate that covers example.com and *.example.com, you can no longer use HTTP-based validation; instead, you will need to add TXT validation tokens to your DNS provider to get that certificate issued. This is because a wildcard certificate covers a large portion of the domain’s namespace. If a malicious actor receives a certificate for a wildcard hostname, they now have control over all of the subdomains under the domain. Since HTTP validation only proves ownership of a hostname and not the whole domain, it’s better to use DNS-based validation for a certificate with broader coverage.</p><p>All of these changes are great from a security standpoint - we should be adopting these processes! However, this also requires domain owners to adapt to the changes. Failure to do so can lead to a failed certificate renewal and downtime for your application. If you’re managing more than 10 domains, these new processes become a management nightmare fairly quickly.</p><p>At Cloudflare, we’re here to help. We don’t think that security should come at the cost of reliability or the time that your team spends managing new standards and requirements. 
Instead, we want to make it as easy as possible for you to have the best security posture for your certificates, without the management overhead.</p>
    <div>
      <h3>How Cloudflare helps customers auto-renew certificates</h3>
      <a href="#how-cloudflare-helps-customers-auto-renew-certificates">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6qoJcTgOrzUqy8EdRg89TF/aa8c224880919cab69b226552c94e6c3/image3-30.png" />
            
            </figure><p>For years, Cloudflare has been managing TLS certificates for tens of millions of domains. One of the reasons customers choose to manage their TLS certificates with Cloudflare is that we keep up with all the changes in standards, so you don’t have to.</p><p>One of the superpowers of having Cloudflare as your Authoritative DNS provider is that Cloudflare can add necessary DNS records on your behalf to ensure successful certificate issuances. If you’re using Cloudflare for your DNS, you probably haven’t thought about certificate renewals, because you never had to. We do all the work for you.</p><p>When the CA/B Forum announced that wildcard certificates would now require TXT-based validation to be used, customers that use our Authoritative DNS didn’t even notice any difference - we continued to do the auto-renewals for them, without any additional work on their part.</p><p>While this provides a reliability and management boost to some customers, it still leaves out a large portion of our customer base — customers who use Cloudflare for certificate issuance with an external DNS provider.</p><p>There are two groups of customers that were impacted by the wildcard DCV change: customers with domains that host DNS externally - we call these “partial” zones - and SaaS providers that use Cloudflare’s SSL for SaaS product to provide wildcard certificates for their customers’ domains.</p><p>Customers with “partial” domains that use wildcard certificates on Cloudflare are now required to fetch the TXT DCV tokens every time the certificate is up for renewal and manually place those tokens at their DNS provider. 
With Cloudflare deprecating DigiCert as a Certificate Authority, certificates will now have a lifetime of 90 days, meaning this manual process will need to occur every 90 days for any certificate with a wildcard hostname.</p><p>Customers that use our <a href="https://www.cloudflare.com/application-services/products/ssl-for-saas-providers/">SSL for SaaS product </a>can request that Cloudflare issues a certificate for their customer’s domain - called a custom hostname. SaaS providers on the Enterprise plan have the ability to extend this support to wildcard custom hostnames, meaning we’ll issue a certificate for the domain (example.com) and for a wildcard (*.example.com). The issue with that is that SaaS providers will now be required to fetch the TXT DCV tokens, return them to their customers so that they can place them at their DNS provider, and do this process every 90 days. Supporting this requires a big change to our SaaS provider’s management system.</p><p>At Cloudflare, we want to help every customer choose security, reliability, and ease of use — all three! And that’s where DCV Delegation comes in.</p>
    <div>
      <h3>Enter DCV Delegation: certificate auto-renewal for every Cloudflare customer</h3>
      <a href="#enter-dcv-delegation-certificate-auto-renewal-for-every-cloudflare-customer">
        
      </a>
    </div>
    <p>DCV Delegation is a new feature that allows customers who manage their DNS externally to delegate the DCV process to Cloudflare. DCV Delegation requires customers to place a one-time record that allows Cloudflare to auto-renew all future certificate orders, so that there’s no manual intervention from the customer at the time of the renewal.</p>
    <div>
      <h3>How does it work?</h3>
      <a href="#how-does-it-work">
        
      </a>
    </div>
    <p>Customers will now be able to place a CNAME record at their Authoritative DNS provider at the _acme-challenge endpoint - where the DCV records are currently placed - to point to a domain on Cloudflare.</p><p>This record will have the following syntax:</p><p><code>_acme-challenge.&lt;domain.TLD&gt; CNAME &lt;domain.TLD&gt;.&lt;UUID&gt;.dcv.cloudflare.com</code></p><p>Let’s say I own example.com and need to get a certificate issued for it that covers the apex and wildcard record. I would place the following record at my DNS provider: <code>_acme-challenge.example.com CNAME example.com.&lt;UUID&gt;.dcv.cloudflare.com</code>. Then, Cloudflare would place the two TXT DNS records required to issue the certificate at <code>example.com.&lt;UUID&gt;.dcv.cloudflare.com</code>.</p><p>As long as the partial zone or custom hostname remains Active on Cloudflare, Cloudflare will add the DCV tokens on every renewal. All you have to do is keep the CNAME record in place.</p><p>If you’re a “partial” zone customer or an SSL for SaaS customer, you will now see this card in the dashboard with more information on how to use DCV Delegation, or you can read our <a href="https://developers.cloudflare.com/ssl/edge-certificates/changing-dcv-method/methods/delegated-dcv/">documentation</a> to learn more.</p><p>DCV Delegation for Partial Zones:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3SApOCdK4niR4kkXWci8nb/586abd158ebc79c1da6ffab7ae2d9ffa/pasted-image-0-12.png" />
            
            </figure><p>DCV Delegation for Custom Hostnames:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6JmwTdJCJkKecERcxhoe2T/899f8f9fce371e9eb8f5d3893608f132/pasted-image-0--1--10.png" />
            
            </figure><p>The UUID in the CNAME target is a unique identifier. Each partial domain will have its own UUID that corresponds to all of the DCV delegation records created under that domain. Similarly, each SaaS zone will have one UUID that all custom hostnames under that domain will use. Keep in mind that if the same domain is moved to another account, the UUID value will change and the corresponding DCV delegation records will need to be updated.</p><p>If you’re using Cloudflare as your Authoritative DNS provider, you don’t need to worry about this! We already add the DCV tokens on your behalf to ensure successful certificate renewals.</p>
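            <p>Putting the pieces together, the one-time record a customer places can be modelled with a small JavaScript helper following the syntax described above (the function name is illustrative, and the UUID value in practice comes from the Cloudflare dashboard):</p>
            <pre><code>// Build the delegation CNAME record for a domain and its account UUID.
function delegationRecord(domain, uuid) {
  return {
    name: "_acme-challenge." + domain,
    type: "CNAME",
    target: domain + "." + uuid + ".dcv.cloudflare.com",
  };
}</code></pre>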
    <div>
      <h3>What’s next?</h3>
      <a href="#whats-next">
        
      </a>
    </div>
    <p>Right now, DCV Delegation only allows delegation to one provider. That means that if you’re using multiple CDN providers, or if you’re using Cloudflare to manage your certificates but also issuing certificates for the same hostname for your origin server, then DCV Delegation won’t work for you. This is because once that CNAME record is pointed to Cloudflare, only Cloudflare will be able to add DCV tokens at that endpoint, blocking you or an external CDN provider from doing the same.</p><p>However, an RFC <a href="https://datatracker.ietf.org/doc/draft-todo-chariton-dns-account-01/">draft</a> is in progress that will allow each provider to have a separate "acme-challenge" endpoint, based on the ACME account used to issue the certs. Once this becomes standardized and CAs and CDNs support it, customers will be able to use multiple providers for DCV delegation.</p><p>In conclusion, DCV Delegation is a powerful feature that simplifies certificate management for all Cloudflare customers. It eliminates the headache of manual renewals, ensures that certificates are always up-to-date, and most importantly, ensures that your web traffic is always secure. Try DCV Delegation today and see the difference it can make for your web traffic!</p> ]]></content:encoded>
            <category><![CDATA[TLS]]></category>
            <category><![CDATA[Product News]]></category>
            <guid isPermaLink="false">L9LCjIwBm08kcPcfgdQ0y</guid>
            <dc:creator>Dina Kozlov</dc:creator>
        </item>
        <item>
            <title><![CDATA[Mutual TLS now available for Workers]]></title>
            <link>https://blog.cloudflare.com/mtls-workers/</link>
            <pubDate>Mon, 13 Mar 2023 13:00:00 GMT</pubDate>
            <description><![CDATA[ Mutual TLS is used to secure a range of network services and applications: APIs, web applications, microservices, databases and IoT devices. With mTLS support for Workers you can use Workers to authenticate to any service secured by mTLS directly! ]]></description>
            <content:encoded><![CDATA[ <p></p><p>In today’s digital world, security is a top priority for businesses. Whether you’re a Fortune 500 company or a startup just taking off, it’s essential to implement security measures in order to protect sensitive information. Security starts inside an organization; it starts with having <a href="https://www.cloudflare.com/learning/security/glossary/what-is-zero-trust/">Zero Trust principles</a> that protect access to resources.</p><p>Mutual TLS (mTLS) is useful in a Zero Trust world to secure a wide range of network services and applications: APIs, web applications, microservices, databases and IoT devices. Cloudflare has products that enforce mTLS: <a href="https://www.cloudflare.com/products/api-gateway/">API Shield</a> uses it to secure API endpoints and <a href="https://www.cloudflare.com/products/zero-trust/access/">Cloudflare Access</a> uses it to secure applications. Now, with mTLS support for Workers you can use Workers to authenticate to services secured by mTLS directly. mTLS for Workers is now generally available for all Workers customers!</p>
    <div>
      <h3>A recap on how TLS works</h3>
      <a href="#a-recap-on-how-tls-works">
        
      </a>
    </div>
    <p>Before diving into mTLS, let’s first understand what TLS (Transport Layer Security) is. Any website that uses HTTPS, like the one you’re reading this blog on, uses TLS encryption. TLS is used to create private communications on the Internet – it gives users assurance that the website you’re connecting to is legitimate and any information passed to it is encrypted.</p><p>TLS is enforced through a TLS handshake where the client and server exchange messages in order to create a secure connection. The handshake consists of several steps outlined below:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7KkcVehx0zunn4wVQmFsat/525b79d21264f64000f5a88086dd1edf/pasted-image-0-4.png" />
            
            </figure><p>The client and server exchange cryptographic keys, the client authenticates the server’s identity, and the two generate a shared secret key that’s used to create an encrypted connection.</p><p>For most connections over the public Internet, TLS is sufficient. If you have a website, for example a landing page, storefront or documentation, you want any user to be able to navigate to it – validating the identity of the client isn’t necessary. But if you have services that run on a corporate network, or private/sensitive applications, you may want access to be locked down to only authorized clients.</p>
    <div>
      <h3>Enter mTLS</h3>
      <a href="#enter-mtls">
        
      </a>
    </div>
    <p>With mTLS, both the client and server present a digital certificate during the TLS handshake to mutually verify each other’s credentials and prove identities. mTLS adds additional steps to the TLS handshake in order to authenticate the client as well as the server.</p><p>In comparison to TLS, with mTLS the server also sends a ‘Certificate Request’ message that contains a list of certificate authorities it trusts, telling the client to respond with its certificate. The mTLS authentication process is outlined below:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7eacWWZ0IBh5Cwa76YTvKt/f48f8880ca74436227e98d87f28ab9b7/pasted-image-0--1--1.png" />
            
            </figure><p>mTLS is typically used when a managed list of clients (eg. users, devices) need access to a server. It uses cryptographically signed certificates for authentication, which are harder to spoof than passwords or tokens. Some common use cases include: communications between APIs or microservices, database connections from authorized hosts, and machine-to-machine IoT connections.</p>
    <div>
      <h3>Introducing mTLS on Workers</h3>
      <a href="#introducing-mtls-on-workers">
        
      </a>
    </div>
    <p>With mTLS support on Workers, your Workers can now communicate with resources that enforce an mTLS connection. mTLS through Workers can be configured in just a few easy steps:</p><p>1. Using wrangler, upload a client certificate and private key obtained from the service that enforces mTLS:</p>
            <pre><code>wrangler mtls-certificate upload --cert cert.pem --key key.pem --name my-client-cert</code></pre>
            <p>2. Add an <code>mtls_certificates</code> binding in your project’s wrangler.toml file. The <code>CERTIFICATE_ID</code> is returned after your certificate is uploaded in step 1:</p>
            <pre><code>mtls_certificates = [
 { binding = "MY_CERT", certificate_id = "&lt;CERTIFICATE_ID&gt;" }
]</code></pre>
            <p>3. Add the mTLS binding to any fetch() requests. This will present the client certificate when establishing the TLS connection.</p>
            <pre><code>// index.js
export default {
  async fetch(request, environment) {
    // The MY_CERT binding presents the uploaded client certificate
    // when establishing the TLS connection.
    return await environment.MY_CERT.fetch("https://example-secure-origin.com")
  }
}</code></pre>
            <p>To get per-request granularity, you can configure multiple mTLS certificates if you’re connecting to multiple hosts within the same Worker. There’s a limit of 1,000 certificates that can be uploaded per account. Contact your account team or reach out through the <a href="https://discord.gg/cloudflaredev">Cloudflare Developer Discord</a> if you need more certificates.</p>
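    <p>For example, a Worker that talks to two different mTLS-protected hosts could declare one binding per host in wrangler.toml, extending the snippet from step 2 (the binding names and IDs below are placeholders):</p>
            <pre><code>mtls_certificates = [
 { binding = "CERT_HOST_A", certificate_id = "&lt;CERTIFICATE_ID_A&gt;" },
 { binding = "CERT_HOST_B", certificate_id = "&lt;CERTIFICATE_ID_B&gt;" }
]</code></pre>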
    <div>
      <h3>Try it yourself</h3>
      <a href="#try-it-yourself">
        
      </a>
    </div>
    <p>It’s that easy! For more information and to try it yourself, refer to our developer documentation – <a href="https://developers.cloudflare.com/workers/runtime-apis/mtls/">client authentication with mTLS</a>.</p><p>We love to see what you’re developing on Workers. Join our <a href="https://discord.gg/cloudflaredev">Discord community</a> and show us what you’re building!</p> ]]></content:encoded>
            <category><![CDATA[Security Week]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[TLS]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <guid isPermaLink="false">6H38WT9SkzKXQsQRivBXFz</guid>
            <dc:creator>Tanushree Sharma</dc:creator>
            <dc:creator>Dina Kozlov</dc:creator>
        </item>
        <item>
            <title><![CDATA[A new, configurable and scalable version of Geo Key Manager, now available in Closed Beta]]></title>
            <link>https://blog.cloudflare.com/configurable-and-scalable-geo-key-manager-closed-beta/</link>
            <pubDate>Thu, 15 Dec 2022 14:00:00 GMT</pubDate>
            <description><![CDATA[ We’re excited to announce a new version of Geo Key Manager — one that allows customers to define boundaries by country, by a region, or by a standard, such as “only store my private keys in FIPS compliant data centers” — now available in Closed Beta. ]]></description>
            <content:encoded><![CDATA[
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5MaRNMB7y16ivTdqoGenT8/140190c6e91d3de30cd51f728b4d852a/image2-35.png" />
            
            </figure><p>Today, traffic on the Internet stays encrypted through the use of public and private keys that encrypt the data as it's being transmitted. Cloudflare helps secure millions of websites by managing the encryption keys that keep this data protected. To provide lightning-fast services, Cloudflare stores these keys on our fleet of data centers that spans more than 150 countries. However, some compliance regulations require that private keys are only stored in specific geographic locations.</p><p>In 2017, we <a href="/introducing-cloudflare-geo-key-manager/">introduced</a> Geo Key Manager, a product that allows customers to store and manage the encryption keys for their domains in different geographic locations, so that they can meet compliance regulations while keeping their data secure. We launched the product a few months before the General Data Protection Regulation (GDPR) went into effect and built it to support three regions: the US, the European Union (EU), and a set of our top tier data centers that employ the highest security measures. Since then, GDPR-like laws have quickly expanded, and now more than 15 countries have comparable data protection laws or regulations that restrict data transfer across, or require data localization within, a certain boundary.</p><p>At Cloudflare, we like to be prepared for the future. We want to give our customers tools that allow them to maintain compliance in this ever-changing environment. That’s why we’re excited to announce a new version of Geo Key Manager — one that allows customers to define boundaries by country (“only store my private keys in India”), by region (“only store my private keys in the European Union”), or by a standard, such as “only store my private keys in FIPS compliant data centers” — now available in Closed Beta. Sign up <a href="https://www.cloudflare.com/lp/geo-key-manager/">here</a>!</p>
    <div>
      <h3>Learnings from Geo Key Manager v1</h3>
      <a href="#learnings-from-geo-key-manager-v1">
        
      </a>
    </div>
    <p>Geo Key Manager has been around for a few years now, and we’ve used this time to gather feedback from our customers. As the demand for a more flexible system grew, we decided to go back to the drawing board and create a new version of Geo Key Manager that would better meet our customers’ needs.</p><p>We initially launched Geo Key Manager with support for US, EU, and Highest Security Data centers. Those regions were sufficient at the time, but customers wrestling with data localization obligations in other jurisdictions need more flexibility when it comes to selecting countries and regions. Some customers want to be able to set restrictions to maintain their private keys in one country, some want the keys stored everywhere except in certain countries, and some may want to mix and match rules and say “store them in X and Y, but not in Z”. What we learned from our customers is that they need flexibility, something that will allow them to keep up with the ever-changing rules and policies — and that’s what we set out to build.</p><p>The next issue we faced was scalability. When we built the initial regions, we included a hard-coded list of data centers that met our criteria for the US, EU, and “high security” data center regions. However, this list was static because the underlying cryptography did not support dynamic changes to our list of data centers. In order to distribute private keys to new data centers that met our criteria, we would have had to completely overhaul the system. On top of that, our network significantly expands every year, with more than 100 new data centers added since the initial launch. That means new locations that could store private keys never enter the pool, degrading the performance and reliability of customers using this feature.</p><p>At our current scale, automation is a must-have: the new system needs to dynamically scale every time we onboard or remove a data center from our network, without any human intervention or a large overhaul.</p><p>Finally, one of our biggest learnings was that customers make mistakes, such as defining a region that’s so small that availability becomes a concern. Our job is to prevent our customers from making changes that we know will negatively impact them.</p>
    <div>
      <h3>Define your own geo-restrictions with the new version of Geo Key Manager</h3>
      <a href="#define-your-own-geo-restrictions-with-the-new-version-of-geo-key-manager">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/32wIGpiYX9XBNE6IqIo4qW/53a0340bfbd4c175a1ced8177df8cdfe/image3-21.png" />
            
            </figure><p>Cloudflare has significantly grown in the last few years and so has our international customer base. Customers need to keep their traffic regionalized. This region can be as broad as a continent — Asia, for example. Or, it can be a specific country, like Japan.</p><p>From our conversations with our customers, we’ve heard that they want to be able to define these regions themselves. This is why today we’re excited to announce that customers will be able to use Geo Key Manager to create what we call “policies”.</p><p>A policy can be a single country, defined by its two-letter ISO 3166 country code. It can be a region, such as “EU” for the European Union or Oceania. It can be a mix and match of the two: “country: US or region: EU”.</p><p>Our new policy-based Geo Key Manager allows you to create allowlists or blocklists of countries and supported regions, giving you control over the boundary in which your private key will be stored. If you’d like to store your private keys globally and omit a few countries, you can do that.</p><p>If you would like to store your private keys in the EU and US, you would make the following <a href="https://api.cloudflare.com/#custom-ssl-for-a-zone-create-ssl-configuration">API</a> call:</p>
            <pre><code>curl -X POST "https://api.cloudflare.com/client/v4/zones/zone_id/custom_certificates" \
     -H "X-Auth-Email: user@example.com" \
     -H "X-Auth-Key: auth-key" \
     -H "Content-Type: application/json" \
     --data '{"certificate":"certificate","private_key":"private_key","policy":"(country: US) or (region: EU)", "type": "sni_custom"}'</code></pre>
            <p>If you would like to store your private keys in the EU, but not in France, here is how you can define that:</p>
            <pre><code>curl -X POST "https://api.cloudflare.com/client/v4/zones/zone_id/custom_certificates" \
     -H "X-Auth-Email: user@example.com" \
     -H "X-Auth-Key: auth-key" \
     -H "Content-Type: application/json" \
     --data '{"certificate":"certificate","private_key":"private_key","policy": "region: EU and (not country: FR)", "type": "sni_custom"}'</code></pre>
            <p>Geo Key Manager can now support more than 30 countries and regions. But that’s not all! The superpower of our Geo Key Manager technology is that it doesn’t actually have to be “geo”-based, but instead, it’s attribute-based. In the future, we’ll have a policy that will allow our customers to define where their private keys are stored based on a compliance standard like <a href="https://www.cloudflare.com/learning/privacy/what-is-fedramp/">FedRAMP</a> or ISO 27001.</p>
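            <p>To illustrate how a policy like “region: EU and (not country: FR)” might be evaluated against a data center, here’s a small sketch using an expression tree. The tree shape and attribute names are our own illustration; Cloudflare’s actual policy engine is not public:</p>

```javascript
// Sketch: evaluate a Geo Key Manager-style policy against a data center's
// attributes. The expression shape and field names are illustrative only.
function evaluate(policy, dc) {
  switch (policy.op) {
    case "country": return dc.country === policy.value;
    case "region":  return dc.region === policy.value;
    case "not":     return !evaluate(policy.expr, dc);
    case "and":     return policy.exprs.every(e => evaluate(e, dc));
    case "or":      return policy.exprs.some(e => evaluate(e, dc));
    default:        throw new Error(`unknown op: ${policy.op}`);
  }
}

// "region: EU and (not country: FR)" written as an expression tree:
const euButNotFrance = {
  op: "and",
  exprs: [
    { op: "region", value: "EU" },
    { op: "not", expr: { op: "country", value: "FR" } },
  ],
};
```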
    <div>
      <h3>Reliability, resiliency, and redundancy</h3>
      <a href="#reliability-resiliency-and-redundancy">
        
      </a>
    </div>
    <p>By putting our customers in control of Geo Key Manager, we want to make sure they understand the impact of their changes on both redundancy and latency.</p><p>On the redundancy side, one of our biggest concerns is that a customer might choose a region so small that if a data center is removed for maintenance, for example, availability is drastically impacted. To protect our customers, we’ve added redundancy restrictions. These prevent our customers from setting regions with too few data centers, ensuring that all the data centers within a policy can offer high availability and redundancy.</p><p>Not just that, but in the last few years, we’ve significantly improved the underlying networking that powers Geo Key Manager. For more information on how we did that, keep an eye out for a technical deep dive into Geo Key Manager.</p>
    <div>
      <h3>Performance matters</h3>
      <a href="#performance-matters">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/77nOyNlNPEyaWkHPZ1xfpM/aac3e0d99f380dbc97d244f99d89aeec/image1-39.png" />
            
            </figure><p>With the original regions (US, EU, and Highest Security Data Centers), we learned customers may overlook possible latency impacts that occur when restricting the key manager to a certain region. Imagine your keys are stored in the US. For your Asia-based customers, there’s going to be some latency impact for the requests that go around the world. Now, with customers being able to define more granular regions, we want to make sure that before customers make that change, they see the impact of it.</p><p>If you’re an <a href="https://www.cloudflare.com/ecommerce/">E-Commerce platform</a> then <a href="https://www.cloudflare.com/solutions/ecommerce/optimization/">performance</a> is always top-of-mind. One thing that we’re working on right now is performance metrics for Geo Key Manager policies, both from a regional point of view — “what’s the latency impact for Asia-based customers?” — and from a global point of view — “for anyone in the world, what is the average impact of this policy?”.</p><p>If you see that the latency impact is unacceptable, you may want to create a separate domain for your service that’s specific to the region that it’s serving.</p>
    <div>
      <h3>Closed Beta, now available!</h3>
      <a href="#closed-beta-now-available">
        
      </a>
    </div>
    <p>Interested in trying out the latest version of Geo Key Manager? Fill out this <a href="https://www.cloudflare.com/lp/geo-key-manager/">form</a>.</p>
    <div>
      <h3>Coming soon!</h3>
      <a href="#coming-soon">
        
      </a>
    </div>
    <p>Geo Key Manager is only available via API at the moment. But, we are working on creating an easy-to-use UI for it, so that customers can easily manage their policies and regions. In addition, we’ll surface performance measurements and warnings when we see any degraded impact in terms of performance or redundancy, to ensure that customers are mindful when setting policies.</p><p>We’re also excited to extend our Geo Key Manager product beyond custom uploaded certificates. In the future, certificates issued through Advanced Certificate Manager or <a href="https://www.cloudflare.com/application-services/products/ssl-for-saas-providers/">SSL for SaaS</a> will also support policy-based restrictions on key storage.</p><p>Finally, we’re looking to add more default regions to make the selection process simple for our customers. If you have any regions that you’d like us to support, or just general feedback or feature requests related to Geo Key Manager, make a note of it on the <a href="https://www.cloudflare.com/lp/geo-key-manager/">form</a>. We love hearing from our customers!</p>
            <category><![CDATA[Impact Week]]></category>
            <category><![CDATA[SSL]]></category>
            <category><![CDATA[TLS]]></category>
            <category><![CDATA[Regional Services]]></category>
            <category><![CDATA[Geo Key Manager]]></category>
            <guid isPermaLink="false">2OgNkneECLDxlFcB0j9my4</guid>
            <dc:creator>Dina Kozlov</dc:creator>
        </item>
        <item>
            <title><![CDATA[Bringing authentication and identification to Workers through Mutual TLS]]></title>
            <link>https://blog.cloudflare.com/mutual-tls-for-workers/</link>
            <pubDate>Wed, 16 Nov 2022 14:00:06 GMT</pubDate>
            <description><![CDATA[ We’re excited to announce that Workers will soon be able to send outbound requests through a mutually authenticated channel via mutual TLS authentication! ]]></description>
            <content:encoded><![CDATA[ <p>We’re excited to announce that Workers will soon be able to send outbound requests through a mutually authenticated channel via mutual TLS authentication!</p><p>When making outbound requests from a Worker, TLS is always used on the server side, so that the client can validate that the information is being sent to the right destination. But in the same way, the server may want to authenticate the client to ensure that the request is coming from an authorized client. This two-way street of authentication is called Mutual TLS. In this blog, we’re going to talk through the importance of mutual TLS authentication, what it means to use mutual TLS within Workers, and how in a few months you’ll be able to use it to send information through an authenticated channel — adding a layer of security to your application!</p>
    <div>
      <h3>mTLS between Cloudflare and an Origin</h3>
      <a href="#mtls-between-cloudflare-and-an-origin">
        
      </a>
    </div>
    <p>Mutual TLS authentication works by having a server validate the client certificate against a CA. If the validation passes then the server knows that it’s the right client and will let the request go through. If the validation fails or if a client certificate is not presented then the server can choose to drop the request.</p><p>Today, customers use mTLS to secure connections between Cloudflare and an origin — this is done through a product called Authenticated Origin Pull. Once a customer enables it, Cloudflare starts serving a client certificate on all outgoing requests. This is either a Cloudflare managed client certificate or it can be one uploaded by the customer. When enabled, Cloudflare will present this certificate when connecting to an origin. The origin should then check the client certificate to see if it’s the one that it expects to see. If it is then the request will go through. If it’s the wrong client certificate or is not included then the origin can choose to drop the request.</p><p>Doing this brings a big security boost because it allows the origin to only accept traffic from Cloudflare and drop any unexpected external traffic.</p>
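    <p>The origin-side decision described above boils down to a small check. Here’s a hedged sketch of that logic, using simplified placeholder fields rather than a real TLS library’s certificate object:</p>

```javascript
// Sketch: an origin deciding whether to accept a request based on the
// client certificate presented during the TLS handshake.
// The certificate fields (issuer, subject) are simplified placeholders.
function authorize(peerCert, expected) {
  if (!peerCert) {
    // No client certificate presented: the origin may drop the request.
    return { ok: false, reason: "no client certificate presented" };
  }
  if (peerCert.issuer !== expected.issuer) {
    return { ok: false, reason: "certificate not signed by the trusted CA" };
  }
  if (peerCert.subject !== expected.subject) {
    return { ok: false, reason: "unexpected client identity" };
  }
  return { ok: true };
}
```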
    <div>
      <h3>Digging up problems with dogfooding</h3>
      <a href="#digging-up-problems-with-dogfooding">
        
      </a>
    </div>
    <p>Today, many Cloudflare services are built on Cloudflare Workers — it’s the secret sauce we use to continuously ship fast, reliable products to our customers. Internally, we might have one Cloudflare account that runs multiple services, with each service deployed on an individual Worker.</p><p>Whenever one service needs to talk to another, the <code>fetch()</code> function is used to request or send information. This can be object data that we send to upstream providers, it can be a read or write to a database, or service to service communication. In most cases, the information that’s going to the origin is sensitive and requires a layer of authentication. Without proper authentication, any client would be able to access the data, removing a layer of security.</p>
    <div>
      <h3>Implementing service to service authentication</h3>
      <a href="#implementing-service-to-service-authentication">
        
      </a>
    </div>
    <p>Today, there are a few ways that you can set up service to service authentication, if you’re building on Workers.</p><p>One way to set up service authentication is to use Authenticated Origin Pull. Authenticated Origin Pull allows customers to implement mutual TLS between Cloudflare and an origin by attaching a client certificate and private key to a domain or hostname, so that all outbound requests include a client certificate. The origin can then check this certificate to see whether the request came from Cloudflare. If there’s a valid certificate, then the origin can let the request through and if there’s an invalid certificate or no certificate then the origin can choose to drop the request. However, Authenticated Origin Pull has its limitations and isn’t ideal for some use-cases.</p><p>The first limitation is that an Authenticated Origin Pull certificate is tied to a publicly hosted hostname or domain. Some services that are built on Workers don’t necessarily need to be exposed to the public Internet. Therefore, tying it to a domain doesn’t really make sense.</p><p>The next limitation is that if you have multiple Workers services that are each writing to the same database, you may want to be able to distinguish them. What if at some point, you need to take the “write” power away from the Worker? Or, what if only Workers A and B are allowed to make writes but Worker C should only make “read” requests?</p><p>Today, if you use Authenticated Origin Pulls with Cloudflare’s client certificate then all requests will be accepted as valid. This is because for all outbound requests, we attach the same client certificate. So even though you're restricting your traffic to “Cloudflare-Only”, there’s no Worker-level granularity.</p><p>Now, there’s another solution that you can use. 
You can make use of Access and set up <a href="https://support.cloudflare.com/hc/en-us/articles/115001376488-Configuring-Token-Authentication">Token Authentication</a> by using a pre-shared key and <a href="https://developers.cloudflare.com/workers/examples/auth-with-headers/">configuring your Worker</a> to allow or deny access based on the pre-shared key presented in the header. While this does allow you to lock down authentication on a per-Worker or per-service basis, the feedback that we’ve gotten from our internal teams who have implemented this is that it’s 1) cumbersome to manage, 2) requires the two services to speak over HTTP, and 3) doesn’t expose the client’s identity. And so, with these limitations in mind, we’re excited to bring mutual TLS authentication to Workers — an easy, scalable way to manage authentication and identity for anyone building on Workers.</p>
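    <p>For comparison, the pre-shared-key approach amounts to a header check like the following sketch. The header name is illustrative, and in practice the key would be stored as a Workers secret rather than hard-coded:</p>

```javascript
// Sketch: allow or deny a request based on a pre-shared key in a header.
// "X-Custom-PSK" is an illustrative header name, not a required one.
function isAuthorized(request, presharedKey) {
  const provided = request.headers.get("X-Custom-PSK");
  // Reject when the header is absent or doesn't match the expected key.
  return provided != null && provided === presharedKey;
}
```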
    <div>
      <h3>Coming soon: Mutual TLS for Workers</h3>
      <a href="#coming-soon-mutual-tls-for-workers">
        
      </a>
    </div>
    <p>We’re excited to announce that in the next few months, we’re going to be bringing mutual TLS support to Workers. Customers will be able to upload client certificates to Cloudflare and attach them to <code>fetch()</code> requests within a Worker. That way, you can have per-Worker or even per-request granularity when it comes to authentication and identification.</p><p>When sending out the subrequest, Cloudflare will present the client certificate and the receiving server will be able to check:</p><ol><li><p>Is this client presenting a valid certificate?</p></li><li><p>Within Cloudflare, what service or Worker is this request coming from?</p></li></ol><p>This is one of our most highly requested features, both from customers and from internal teams, and we’re excited to launch it and make it a no-brainer for any developer to use Cloudflare as their platform for anything they want to build!</p>
            <category><![CDATA[Developer Week]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[TLS]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <guid isPermaLink="false">1FztjaPuk1E1iPJRY0Trkr</guid>
            <dc:creator>Dina Kozlov</dc:creator>
        </item>
        <item>
            <title><![CDATA[Total TLS: one-click TLS for every hostname you have]]></title>
            <link>https://blog.cloudflare.com/total-tls-one-click-tls-for-every-hostname/</link>
            <pubDate>Thu, 06 Oct 2022 18:00:00 GMT</pubDate>
            <description><![CDATA[ Today, we’re excited to announce Total TLS — a one-click feature that will issue individual TLS certificates for every subdomain in our customer’s domains ]]></description>
            <content:encoded><![CDATA[ <p>Today, we’re excited to announce Total TLS — a one-click feature that will issue individual TLS certificates for every subdomain in our customer’s domains.</p><p>By default, all Cloudflare customers get a free TLS certificate that covers the apex and wildcard (example.com, *.example.com) of their domain. Now, with Total TLS, customers can get additional coverage for all of their subdomains with just one click! Once enabled, customers will no longer have to worry about insecure connection errors to subdomains not covered by their default TLS certificate, because Total TLS will keep all the traffic bound to the subdomains encrypted.</p>
    <div>
      <h2>A primer on Cloudflare’s TLS certificate offerings</h2>
      <a href="#a-primer-on-cloudflares-tls-certificate-offerings">
        
      </a>
    </div>
    
    <div>
      <h3>Universal SSL — the “easy” option</h3>
      <a href="#universal-ssl-the-easy-option">
        
      </a>
    </div>
    <p>In 2014, we announced <a href="/introducing-universal-ssl/">Universal SSL</a> — a <a href="https://www.cloudflare.com/application-services/products/ssl/">free TLS certificate</a> for every Cloudflare customer. Universal SSL was built to be a simple “one-size-fits-all” solution. For customers that use Cloudflare as their authoritative DNS provider, this certificate covers the apex and a wildcard, e.g. example.com and *.example.com. While a Universal SSL certificate provides sufficient coverage for most, some customers have deeper subdomains like a.b.example.com for which they’d like TLS coverage. For those customers, we built Advanced Certificate Manager — a customizable platform for certificate issuance that allows customers to issue certificates with the hostnames of their choice.</p>
    <div>
      <h3>Advanced certificates — the “customizable” option</h3>
      <a href="#advanced-certificates-the-customizable-option">
        
      </a>
    </div>
    <p>For customers that want flexibility and choice, we built Advanced certificates, which are available as a part of <a href="/advanced-certificate-manager/">Advanced Certificate Manager</a>. With Advanced certificates, customers can specify the exact hostnames that will be included on the certificate.</p><p>That means that if my Universal SSL certificate is insufficient, I can use the Advanced certificates UI or API to request a certificate that covers “a.b.example.com” and “a.b.c.example.com”. Today, we allow customers to place up to 50 hostnames on an Advanced certificate. The only caveat — customers have to tell us which hostnames to protect.</p><p>This may seem trivial, but some of our customers have thousands of subdomains that they want to <a href="https://www.cloudflare.com/application-services/solutions/domain-protection-services/">keep protected</a>. We have customers with subdomains that range from a.b.example.com to a.b.c.d.e.f.example.com, and for those to be covered, customers have to use the Advanced certificates <a href="https://api.cloudflare.com/#certificate-packs-order-advanced-certificate-manager-certificate-pack">API</a> to tell us the hostnames that they’d like us to protect. A process like this is error-prone, not easy to scale, and has been rejected as a solution by some of our largest customers.</p><p>Instead, customers want Cloudflare to issue the certificates for them. If Cloudflare is the DNS provider, then Cloudflare should know what subdomains need protection. Ideally, Cloudflare would issue a TLS certificate for every subdomain that’s proxying its traffic through the Cloudflare network… and that’s where Total TLS comes in.</p>
    <div>
      <h2>Enter Total TLS: easy, customizable, and scalable</h2>
      <a href="#enter-total-tls-easy-customizable-and-scalable">
        
      </a>
    </div>
    <p>Total TLS is a one-click button that signals Cloudflare to automatically issue TLS certificates for every proxied DNS record in your domain. Once enabled, Cloudflare will issue individual certificates for every proxied hostname. This way, you can add as many DNS records and subdomains as you need to, without worrying about whether they’ll be covered by a TLS certificate.</p><p>If you have a DNS record for a.b.example.com, we’ll issue a TLS certificate with the hostname a.b.example.com. If you have a wildcard record for *.a.b.example.com then we’ll issue a TLS certificate for “*.a.b.example.com”. Here’s an example of what this will look like in the Edge Certificates table of the dashboard:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5orvWZD4F1Qu1ekoHIuFch/b67ba45b1efbf4c75e674593ff23b478/image2-6.png" />
            
            </figure>
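    <p>The issuance rule above (one certificate per proxied DNS record, wildcards preserved) can be sketched as a simple mapping. The record shape here is our own simplification, not Cloudflare’s API schema:</p>

```javascript
// Sketch: derive Total TLS certificate hostnames from DNS records.
// Only proxied records get certificates; the cert hostname mirrors the
// record name, including wildcards. Record shape is illustrative.
function certificateHostnames(records) {
  return records
    .filter(r => r.proxied)  // unproxied records are skipped
    .map(r => r.name);       // e.g. "*.a.b.example.com" stays a wildcard
}
```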
    <div>
      <h2>Available now</h2>
      <a href="#available-now">
        
      </a>
    </div>
    <p>Total TLS is now available to use as a part of Advanced Certificate Manager for domains that use Cloudflare as an authoritative DNS provider. One of the superpowers of having Cloudflare as your DNS provider is that we’ll always add the proper Domain Control Validation (DCV) records on your behalf to ensure successful certificate issuance and renewal.</p><p>Enabling Total TLS is easy — you can do it through the Cloudflare dashboard or via the <a href="https://api.cloudflare.com/#total-tls-enable-or-disable-total-tls">API</a>. In the SSL/TLS tab of the Cloudflare dashboard, navigate to Total TLS. There, choose the issuing CA — Let’s Encrypt, Google Trust Services, or No Preference if you’d like Cloudflare to select the CA on your behalf — then click the toggle to enable the feature.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4GA2uiIYneXvEc4C09GtGe/e779432573a819444c7493ecad4487cf/image4-1.png" />
            
            </figure>
    <div>
      <h2>But that’s not all…</h2>
      <a href="#but-thats-not-all">
        
      </a>
    </div>
    <p>One pain point that we wanted to address for all customers was visibility. From looking at support tickets and talking to customers, one of the things that we realized was that customers don’t always know whether their domain is covered by a TLS certificate — a simple oversight that can result in downtime or errors.</p><p>To prevent this from happening, we are now going to warn every customer if we see that the proxied DNS record that they’re creating, viewing, or editing doesn’t have a TLS certificate covering it. This way, our customers can get a TLS certificate issued before the hostname becomes publicly available, preventing visitors from encountering this error:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/17OAtnH4osT3t91MeJaHUB/bc0dee60dcc9c17a8b1786b5ec6d9671/image3-2.png" />
            
            </figure>
    <div>
      <h2>Join the mission</h2>
      <a href="#join-the-mission">
        
      </a>
    </div>
    <p>At Cloudflare, we love building products that help secure all Internet properties. Interested in achieving this mission with us? <a href="https://www.cloudflare.com/careers/jobs/">Join the team</a>!</p> ]]></content:encoded>
            <category><![CDATA[SSL]]></category>
            <category><![CDATA[TLS]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[Advanced Certificate Manager]]></category>
            <guid isPermaLink="false">3u0rb4HzoGD9cOz1Kix4f9</guid>
            <dc:creator>Dina Kozlov</dc:creator>
        </item>
    </channel>
</rss>