
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/">
    <channel>
        <title><![CDATA[ The Cloudflare Blog ]]></title>
        <description><![CDATA[ Get the latest news on how products at Cloudflare are built, technologies used, and join the teams helping to build a better Internet. ]]></description>
        <link>https://blog.cloudflare.com</link>
        <atom:link href="https://blog.cloudflare.com/" rel="self" type="application/rss+xml"/>
        <language>en-us</language>
        <image>
            <url>https://blog.cloudflare.com/favicon.png</url>
            <title>The Cloudflare Blog</title>
            <link>https://blog.cloudflare.com</link>
        </image>
        <lastBuildDate>Tue, 07 Apr 2026 23:46:57 GMT</lastBuildDate>
        <item>
            <title><![CDATA[DIY BYOIP: a new way to Bring Your Own IP prefixes to Cloudflare]]></title>
            <link>https://blog.cloudflare.com/diy-byoip/</link>
            <pubDate>Fri, 07 Nov 2025 14:00:00 GMT</pubDate>
            <description><![CDATA[ Announcing a new self-serve API for Bring Your Own IP (BYOIP), giving customers unprecedented control and flexibility to onboard, manage, and use their own IP prefixes with Cloudflare's services. ]]></description>
            <content:encoded><![CDATA[ <p>When a customer wants to <a href="https://blog.cloudflare.com/bringing-your-own-ips-to-cloudflare-byoip/"><u>bring IP address space to</u></a> Cloudflare, they’ve always had to reach out to their account team to put in a request. This request would then be sent to various Cloudflare engineering teams such as addressing and network engineering — and then the team responsible for the particular service they wanted to use the prefix with (e.g., CDN, Magic Transit, Spectrum, Egress). In addition, they had to work with their own legal teams and potentially another organization if they did not have primary ownership of an IP prefix in order to get a <a href="https://developers.cloudflare.com/byoip/concepts/loa/"><u>Letter of Agency (LOA)</u></a> issued, often through multiple rounds of approvals. This process is complex, manual, and time-consuming for all parties involved — sometimes taking up to 4–6 weeks depending on various approvals. </p><p>Well, no longer! Today, we are pleased to announce the launch of our self-serve BYOIP API, which enables our customers to onboard and set up their BYOIP prefixes themselves.</p><p>With self-serve, we handle the bureaucracy for you. We have automated this process using the gold standard for routing security — the Resource Public Key Infrastructure (RPKI). All the while, we continue to ensure the best quality of service by generating LOAs on our customers’ behalf, based on the security guarantees of our new ownership validation process. This ensures that customer routes continue to be accepted in every corner of the Internet.</p><p>Cloudflare takes the security and stability of the whole Internet very seriously. RPKI is a cryptographically-strong authorization mechanism and is, we believe, substantially more reliable than the common practice of relying on human review of scanned documents. 
However, deployment and availability of some RPKI-signed artifacts like the Autonomous System Provider Authorization (ASPA) object remains limited, and for that reason we are limiting the initial scope of self-serve onboarding to BYOIP prefixes originated from Cloudflare's autonomous system number (ASN), AS13335. By doing this, we only need to rely on the publication of Route Origin Authorization (ROA) objects, which are widely available. This approach has the advantage of being safe for the Internet while also meeting the needs of most of our BYOIP customers. </p><p>Today, we take a major step forward in offering customers a more comprehensive IP address management (IPAM) platform. With the recent update to <a href="https://blog.cloudflare.com/your-ips-your-rules-enabling-more-efficient-address-space-usage/"><u>enable multiple services on a single BYOIP prefix</u></a> and this latest advancement to enable self-serve onboarding via our API, we hope customers feel empowered to take control of their IPs on our network.</p>
    <div>
      <h2>An evolution of Cloudflare BYOIP</h2>
      <a href="#an-evolution-of-cloudflare-byoip">
        
      </a>
    </div>
    <p>We want Cloudflare to feel like an extension of your infrastructure, which is why we <a href="https://blog.cloudflare.com/bringing-your-own-ips-to-cloudflare-byoip/"><u>originally launched Bring-Your-Own-IP (BYOIP) back in 2020</u></a>. </p><p>A quick refresher: Bring-your-own-IP is named for exactly what it does: it allows customers to bring their own IP space to Cloudflare. Customers choose BYOIP for a number of reasons, but the main reasons are control and configurability. An IP prefix is a range or block of IP addresses. Routers create a table of reachable prefixes, known as a routing table, to ensure that packets are delivered correctly across the Internet. When a customer's Cloudflare services are configured to use the customer's own addresses, onboarded to Cloudflare as BYOIP, a packet with a corresponding destination address will be routed across the Internet to Cloudflare's global edge network, where it will be received and processed. BYOIP can be used with our Layer 7 services, Spectrum, or Magic Transit. </p>
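    <p>As a toy illustration of prefixes and the longest-prefix-match lookup routers perform (this is not Cloudflare code, and it uses only documentation address space), the idea can be sketched in Python:</p>

```python
import ipaddress

# A prefix is a block of IP addresses: 192.0.2.0/24 covers 192.0.2.0-192.0.2.255.
prefix = ipaddress.ip_network("192.0.2.0/24")
print(prefix.num_addresses)  # 256

# A routing table maps prefixes to next hops; the most specific
# (longest) matching prefix wins.
routing_table = {
    ipaddress.ip_network("192.0.2.0/24"): "Cloudflare edge",
    ipaddress.ip_network("0.0.0.0/0"): "default upstream",
}

def lookup(addr: str) -> str:
    """Toy longest-prefix-match: pick the matching prefix with the longest mask."""
    ip = ipaddress.ip_address(addr)
    matches = [net for net in routing_table if ip in net]
    return routing_table[max(matches, key=lambda net: net.prefixlen)]

print(lookup("192.0.2.7"))    # Cloudflare edge
print(lookup("203.0.113.9"))  # default upstream
```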
    <div>
      <h2>A look under the hood: How it works</h2>
      <a href="#a-look-under-the-hood-how-it-works">
        
      </a>
    </div>
    
    <div>
      <h3>Today’s world of prefix validation</h3>
      <a href="#todays-world-of-prefix-validation">
        
      </a>
    </div>
    <p>Let’s take a step back and look at the state of the BYOIP world right now. Let’s say a customer has authority over a range of IP addresses, and they’d like to bring them to Cloudflare. We require customers to provide us with a Letter of Agency (LOA) and have an Internet Routing Registry (IRR) record matching their prefix and ASN. Once we have this, we require manual review by a Cloudflare engineer. There are a few issues with this process:</p><ul><li><p>Insecure: The LOA is just a document—a piece of paper. The security of this method rests entirely on the diligence of the engineer reviewing the document. If the reviewer fails to detect that a document is fraudulent or inaccurate, a prefix or ASN can be hijacked.</p></li><li><p>Time-consuming: Generating a single LOA is not always sufficient. If you are leasing IP space, we will ask you to provide documentation confirming that relationship as well, so that we can see a clear chain of authorization from the original assignment or allocation of addresses to you. Gathering all the paper documents needed to verify this chain of ownership, combined with waiting for manual review, can result in weeks of waiting to deploy a prefix!</p></li></ul>
    <div>
      <h3>Automating trust: How Cloudflare verifies your BYOIP prefix ownership in minutes</h3>
      <a href="#automating-trust-how-cloudflare-verifies-your-byoip-prefix-ownership-in-minutes">
        
      </a>
    </div>
    <p>Moving to a self-serve model allowed us to rethink how we conduct prefix ownership checks. We asked ourselves: How can we quickly, securely, and automatically prove you are authorized to use your IP prefix and intend to route it through Cloudflare?</p><p>We ended up killing two birds with one stone, thanks to our two-step process involving the creation of an RPKI ROA (verification of intent) and modification of IRR or rDNS records (verification of ownership). Self-serve unlocks the ability not only to onboard prefixes more quickly and without human intervention, but also to apply more rigorous ownership checks than a simple scanned document ever could. While not 100% foolproof, it is a significant improvement in the way we verify ownership.</p>
    <div>
      <h3>Tapping into the authorities</h3>
      <a href="#tapping-into-the-authorities">
        
      </a>
    </div>
    <p>Regional Internet Registries (RIRs) are the organizations responsible for distributing and managing Internet number resources like IP addresses. There are five <a href="https://developers.cloudflare.com/byoip/get-started/#:~:text=Your%20prefix%20must%20be%20registered%20under%20one%20of%20the%20Regional%20Internet%20Registries%20(RIRs)%3A"><u>RIRs</u></a>, each operating in a different region of the world. Originally allocated address space from the Internet Assigned Numbers Authority (IANA), they in turn assign and allocate that IP space to Local Internet Registries (LIRs) like ISPs.</p><p>This process is based on RIR policies which generally look at things like legal documentation, existing database/registry records, technical contacts, and BGP information. End-users can obtain addresses from an LIR, or in some cases through an RIR directly. As IPv4 addresses have become more scarce, brokerage services have been launched to allow addresses to be leased for fixed periods from their original assignees.</p><p>The Internet Routing Registry (IRR) is a separate system that focuses on routing rather than address assignment. Many organizations operate IRR instances and allow routing information to be published, including all five RIRs. While most IRR instances impose few barriers to the publication of routing data, those that are operated by RIRs are capable of linking the ability to publish routing information with the organizations to which the corresponding addresses have been assigned. We believe that being able to modify an IRR record protected in this way provides a good signal that a user has the rights to use a prefix.</p><p>Example of a route object containing a validation token (using the documentation-only address 192.0.2.0/24):</p>
            <pre><code>% whois -h rr.arin.net 192.0.2.0/24

route:          192.0.2.0/24
origin:         AS13335
descr:          Example Company, Inc.
                cf-validation: 9477b6c3-4344-4ceb-85c4-6463e7d2453f
admin-c:        ADMIN2521-ARIN
tech-c:         ADMIN2521-ARIN
tech-c:         CLOUD146-ARIN
mnt-by:         MNT-CLOUD14
created:        2025-07-29T10:52:27Z
last-modified:  2025-07-29T10:52:27Z
source:         ARIN</code></pre>
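            <p>A minimal sketch of what an automated check for that token might look like (the parsing logic and the token value here are illustrative, not our production implementation):</p>

```python
import re

# Illustrative only: scan IRR whois output for the expected cf-validation
# token. The token value below is an example, not a real credential.
EXPECTED_TOKEN = "9477b6c3-4344-4ceb-85c4-6463e7d2453f"

whois_output = """\
route:          192.0.2.0/24
origin:         AS13335
descr:          Example Company, Inc.
                cf-validation: 9477b6c3-4344-4ceb-85c4-6463e7d2453f
source:         ARIN
"""

def token_present(output: str, token: str) -> bool:
    # The token appears as a continuation line of an attribute, so match it
    # anywhere after a "cf-validation:" label.
    match = re.search(r"cf-validation:\s*([0-9a-f\-]+)", output)
    return match is not None and match.group(1) == token

print(token_present(whois_output, EXPECTED_TOKEN))  # True
```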
            <p>For those that don’t want to go through the process of IRR-based validation, reverse DNS (rDNS) is provided as another secure method of verification. To manage rDNS for a prefix — whether it's creating a PTR record or a security TXT record — you must be granted permission by the entity that allocated the IP block in the first place (usually your ISP or the RIR).</p><p>This permission is demonstrated in one of two ways:</p><ul><li><p>Directly through the IP owner’s authenticated customer portal (ISP/RIR).</p></li><li><p>By the IP owner delegating authority to your third-party DNS provider via an NS record for your reverse zone.</p></li></ul><p>Example of a reverse domain lookup using the dig command (using the documentation-only address 192.0.2.0/24):</p>
            <pre><code>% dig cf-validation.2.0.192.in-addr.arpa TXT

; &lt;&lt;&gt;&gt; DiG 9.10.6 &lt;&lt;&gt;&gt; cf-validation.2.0.192.in-addr.arpa TXT
;; global options: +cmd
;; Got answer:
;; -&gt;&gt;HEADER&lt;&lt;- opcode: QUERY, status: NOERROR, id: 16686
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;cf-validation.2.0.192.in-addr.arpa. IN TXT

;; ANSWER SECTION:
cf-validation.2.0.192.in-addr.arpa. 300 IN TXT "b2f8af96-d32d-4c46-a886-f97d925d7977"

;; Query time: 35 msec
;; SERVER: 127.0.2.2#53(127.0.2.2)
;; WHEN: Fri Oct 24 10:43:52 EDT 2025
;; MSG SIZE  rcvd: 150</code></pre>
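            <p>The name of the TXT record being queried above can be derived mechanically from the prefix. A small sketch, assuming an IPv4 /24 and using Python's standard library (the derivation is ours, for illustration only):</p>

```python
import ipaddress

def validation_record_name(prefix: str) -> str:
    """Derive the reverse-DNS name where the cf-validation TXT record
    would live for an IPv4 /24 (illustrative, not an official algorithm)."""
    net = ipaddress.ip_network(prefix)
    assert net.version == 4 and net.prefixlen == 24
    # reverse_pointer of the network address is e.g. "0.2.0.192.in-addr.arpa";
    # drop the leading label (the host octet) to get the /24 reverse zone.
    zone = net.network_address.reverse_pointer.split(".", 1)[1]
    return f"cf-validation.{zone}"

print(validation_record_name("192.0.2.0/24"))
# cf-validation.2.0.192.in-addr.arpa
```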
            <p>So how exactly is one supposed to modify these records? That’s where the validation token comes into play. Once you choose either the IRR or Reverse DNS method, we provide a unique, single-use validation token. You must add this token to the content of the relevant record, either in the IRR or in the DNS. Our system then looks for the presence of the token as evidence that the request is being made by someone with authorization to make the requested modification. If the token is found, verification is complete and your ownership is confirmed!</p>
    <div>
      <h3>The digital passport 🛂</h3>
      <a href="#the-digital-passport">
        
      </a>
    </div>
    <p>Ownership is only half the battle; we also need to confirm your intention that you authorize Cloudflare to advertise your prefix. For this, we rely on the gold standard for routing security: the Resource Public Key Infrastructure (RPKI), and in particular Route Origin Authorization (ROA) objects.</p><p>A ROA is a cryptographically-signed document that specifies which Autonomous System Number (ASN) is authorized to originate your IP prefix. You can think of a ROA as the digital equivalent of a certified, signed, and notarized contract from the owner of the prefix.</p><p>Relying parties can validate the signatures in a ROA using the RPKI. You simply create a ROA that specifies Cloudflare's ASN (AS13335) as an authorized originator and arrange for it to be signed. Many of our customers use hosted RPKI systems available through RIR portals for this. When our systems detect this signed authorization, your routing intention is instantly confirmed. </p><p>Many other companies that support BYOIP require a complex workflow involving creating self-signed certificates and manually modifying RDAP (Registration Data Access Protocol) records—a heavy administrative lift. By embracing a choice of IRR object modification and Reverse DNS TXT records, combined with RPKI, we offer a verification process that is much more familiar and straightforward for existing network operators.</p>
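    <p>The check relying parties apply against ROAs is standardized as route origin validation (RFC 6811). A simplified sketch, using an illustrative ROA for the documentation prefix:</p>

```python
import ipaddress
from typing import NamedTuple

class Roa(NamedTuple):
    prefix: ipaddress.IPv4Network
    max_length: int
    asn: int

# Illustrative ROA: the documentation prefix, authorized for AS13335.
roas = [Roa(ipaddress.ip_network("192.0.2.0/24"), 24, 13335)]

def rov_state(announced: str, origin_asn: int) -> str:
    """Simplified RFC 6811-style route origin validation."""
    net = ipaddress.ip_network(announced)
    covering = [r for r in roas if net.subnet_of(r.prefix)]
    if not covering:
        return "not-found"  # no ROA covers this prefix
    if any(r.asn == origin_asn and net.prefixlen <= r.max_length
           for r in covering):
        return "valid"
    return "invalid"        # covered, but origin ASN or length mismatch

print(rov_state("192.0.2.0/24", 13335))     # valid
print(rov_state("192.0.2.0/24", 64496))     # invalid
print(rov_state("198.51.100.0/24", 13335))  # not-found
```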
    <div>
      <h3>The global reach guarantee</h3>
      <a href="#the-global-reach-guarantee">
        
      </a>
    </div>
    <p>While the new self-serve flow ditches the need for the "dinosaur relic" that is the LOA, many network operators around the world still rely on it as part of the process of accepting prefixes from other networks.</p><p>To help ensure your prefix is accepted by adjacent networks globally, Cloudflare automatically generates a document on your behalf to be distributed in place of a LOA. This document provides information about the checks that we have carried out to confirm that we are authorized to originate the customer prefix, and confirms the presence of valid ROAs to authorize our origination of it. In this way we are able to support the workflows of network operators we connect to who rely upon LOAs, without our customers having the burden of generating them.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1GimIe80gJn5PrRUGkEMpF/130d2590e45088d58ac62ab2240f4d5c/image1.png" />
          </figure>
    <div>
      <h2>Staying away from black holes</h2>
      <a href="#staying-away-from-black-holes">
        
      </a>
    </div>
    <p>One concern in designing the Self-Serve API is the trade-off between giving customers flexibility and implementing the necessary safeguards so that an IP prefix is never advertised without a matching service binding. If this were to happen, Cloudflare would be advertising a prefix with no idea what to do with the traffic when we receive it! We call this “blackholing” traffic. To handle this, we introduced the requirement of a default service binding — i.e., a service binding that spans the entire range of the IP prefix onboarded. </p><p>A customer can later layer different service bindings on top of their default service binding via <a href="https://blog.cloudflare.com/your-ips-your-rules-enabling-more-efficient-address-space-usage/"><u>multiple service bindings</u></a>, like putting CDN on top of a default Spectrum service binding. This way, a prefix can never be advertised without a service binding, so our customers’ traffic is never blackholed.</p>
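    <p>The rule that bindings must leave no gap in an onboarded prefix can be illustrated with a simple coverage check (a sketch of the idea, not our actual validation code):</p>

```python
import ipaddress

def fully_covered(prefix: str, binding_cidrs: list[str]) -> bool:
    """Check that service bindings leave no unadvertisable gap in the
    prefix (a sketch of the 'default binding spans the range' rule)."""
    net = ipaddress.ip_network(prefix)
    bindings = [ipaddress.ip_network(c) for c in binding_cidrs]
    # Merge adjacent/overlapping bindings; full coverage collapses to the prefix.
    covered = ipaddress.collapse_addresses(
        b for b in bindings if b.subnet_of(net))
    return list(covered) == [net]

# A default binding spanning the whole /24 always satisfies the rule:
print(fully_covered("192.0.2.0/24", ["192.0.2.0/24"]))                    # True
# Two /25 halves also cover it; a single /25 would leave a gap:
print(fully_covered("192.0.2.0/24", ["192.0.2.0/25", "192.0.2.128/25"]))  # True
print(fully_covered("192.0.2.0/24", ["192.0.2.0/25"]))                    # False
```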
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/20QAM5GITJ5m5kYkNlh701/82812d202ffa7b9a4e46838aa6c04937/image2.png" />
          </figure>
    <div>
      <h2>Getting started</h2>
      <a href="#getting-started">
        
      </a>
    </div>
    <p>Check out our <a href="https://developers.cloudflare.com/byoip/get-started/"><u>developer docs</u></a> for the most up-to-date documentation on how to onboard, advertise, and add services to your IP prefixes via our API. Onboardings can be complex, so don’t hesitate to ask questions or reach out to our <a href="https://www.cloudflare.com/professional-services/"><u>professional services</u></a> team if you’d like us to do it for you.</p>
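    <p>As a hedged sketch of what driving onboarding from a script might look like: the endpoint path, payload fields, account ID, and token below are placeholders for illustration only; the developer docs are authoritative for the actual API shape.</p>

```python
import json
import urllib.request

# Placeholders, not real credentials or a guaranteed API contract:
ACCOUNT_ID = "023e105f4ecef8ad9ca31a8372d0c353"  # illustrative account ID
API_TOKEN = "YOUR_API_TOKEN"                     # illustrative token

# Build (but do not send) a request to an illustrative prefix-onboarding
# endpoint, using the documentation prefix and Cloudflare's AS13335.
req = urllib.request.Request(
    url=f"https://api.cloudflare.com/client/v4/accounts/{ACCOUNT_ID}"
        "/addressing/prefixes",
    data=json.dumps({"cidr": "192.0.2.0/24", "asn": 13335}).encode(),
    headers={
        "Authorization": f"Bearer {API_TOKEN}",
        "Content-Type": "application/json",
    },
    method="POST",
)
# urllib.request.urlopen(req) would submit the onboarding request.
print(req.get_method(), req.full_url)
```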
    <div>
      <h2>The future of network control</h2>
      <a href="#the-future-of-network-control">
        
      </a>
    </div>
    <p>The ability to script and integrate BYOIP management into existing workflows is a game-changer for modern network operations, and we’re only just getting started. In the months ahead, look for self-serve BYOIP in the dashboard, as well as self-serve BYOIP offboarding to give customers even more control.</p><p>Cloudflare's self-serve BYOIP API onboarding empowers customers with unprecedented control and flexibility over their IP assets. This move to automate onboarding enables a stronger security posture, moving away from manually-reviewed PDFs and driving <a href="https://rpki.cloudflare.com/"><u>RPKI adoption</u></a>. By using these API calls, organizations can automate complex network tasks, streamline migrations, and build more resilient and agile network infrastructures.</p> ]]></content:encoded>
            <category><![CDATA[API]]></category>
            <category><![CDATA[Addressing]]></category>
            <category><![CDATA[BYOIP]]></category>
            <category><![CDATA[IPv4]]></category>
            <category><![CDATA[IPv6]]></category>
            <category><![CDATA[Spectrum]]></category>
            <category><![CDATA[CDN]]></category>
            <category><![CDATA[Magic Transit]]></category>
            <category><![CDATA[Egress]]></category>
            <category><![CDATA[Cloudflare Gateway]]></category>
            <category><![CDATA[RPKI]]></category>
            <category><![CDATA[Aegis]]></category>
            <category><![CDATA[Smart Shield]]></category>
            <guid isPermaLink="false">4usaEaUwShJ04VKzlMV0V9</guid>
            <dc:creator>Ash Pallarito</dc:creator>
            <dc:creator>Lynsey Haynes</dc:creator>
            <dc:creator>Gokul Unnikrishnan</dc:creator>
        </item>
        <item>
            <title><![CDATA[Cloudflare 1.1.1.1 incident on July 14, 2025]]></title>
            <link>https://blog.cloudflare.com/cloudflare-1-1-1-1-incident-on-july-14-2025/</link>
            <pubDate>Tue, 15 Jul 2025 15:05:00 GMT</pubDate>
            <description><![CDATA[ On July 14, 2025, Cloudflare made a change to our service topologies that caused an outage for 1.1.1.1 at the edge, resulting in 62 minutes of downtime for customers using the 1.1.1.1 public DNS Resolver. ]]></description>
            <content:encoded><![CDATA[ <p>On 14 July 2025, Cloudflare made a change to our service topologies that caused an outage for 1.1.1.1 on the edge, resulting in downtime for 62 minutes for customers using the 1.1.1.1 public DNS Resolver as well as intermittent degradation of service for Gateway DNS.</p><p>Cloudflare's 1.1.1.1 Resolver service became unavailable to the Internet starting at 21:52 UTC and ending at 22:54 UTC. The majority of 1.1.1.1 users globally were affected. For many users, not being able to resolve names using the 1.1.1.1 Resolver meant that basically all Internet services were unavailable. This outage can be observed on <a href="https://radar.cloudflare.com/dns?dateStart=2025-07-14&amp;dateEnd=2025-07-15"><u>Cloudflare Radar</u></a>.</p><p>The outage occurred because of a misconfiguration of legacy systems used to maintain the infrastructure that advertises Cloudflare’s IP addresses to the Internet.</p><p>This was a global outage. During the outage, Cloudflare's 1.1.1.1 Resolver was unavailable worldwide.</p><p>We’re very sorry for this outage. The root cause was an internal configuration error and <u>not</u> the result of an attack or a <a href="https://blog.cloudflare.com/cloudflare-1111-incident-on-june-27-2024/"><u>BGP hijack</u></a>. In this blog, we’re going to talk about what the failure was, why it occurred, and what we’re doing to make sure this doesn’t happen again.</p>
    <div>
      <h2><b>Background</b></h2>
      <a href="#background">
        
      </a>
    </div>
    <p>Cloudflare <a href="https://blog.cloudflare.com/announcing-1111"><u>introduced</u></a> the <a href="https://one.one.one.one/"><u>1.1.1.1</u></a> public DNS Resolver service in 2018. Since the announcement, 1.1.1.1 has become one of the most popular DNS Resolver IP addresses and it is free for anyone to use.</p><p>Almost all of Cloudflare's services are made available to the Internet using a routing method known as <a href="https://www.cloudflare.com/learning/cdn/glossary/anycast-network/"><u>anycast</u></a>, a well-known technique intended to allow traffic for popular services to be served in many different locations across the Internet, increasing capacity and performance. This is the best way to ensure we can globally manage our traffic, but also means that problems with the advertisement of this address space can result in a global outage.   </p><p>Cloudflare announces these anycast routes to the Internet in order for traffic to those addresses to be delivered to a Cloudflare data center, providing services from many different places. Most Cloudflare services are provided globally, like the 1.1.1.1 public DNS Resolver, but a subset of services are specifically constrained to particular regions. </p><p>These services are part of our <a href="https://developers.cloudflare.com/data-localization/"><u>Data Localization Suite</u></a> (DLS), which allows customers to configure Cloudflare in a variety of ways to meet their compliance needs across different countries and regions. One of the ways in which Cloudflare manages these different requirements is to make sure the right service's IP addresses are Internet-reachable only where they need to be, so your traffic is handled correctly worldwide. 
A particular service has a matching "service topology" – that is, traffic for a service should be routed only to a <a href="https://blog.cloudflare.com/introducing-the-cloudflare-data-localization-suite/"><u>particular set of locations</u></a>.</p><p>On June 6, during a release to prepare a service topology for a future DLS service, a configuration error was introduced: the prefixes associated with the 1.1.1.1 Resolver service were inadvertently included alongside the prefixes that were intended for the new DLS service. This configuration error sat dormant in the production network as the new DLS service was not yet in use, but it set the stage for the outage on July 14. Because there was no immediate change to the production network, there was no end-user impact, and no alerts fired.</p>
    <div>
      <h2><b>Incident Timeline</b></h2>
      <a href="#incident-timeline">
        
      </a>
    </div>
    <table><tr><td><p>Time (UTC)</p></td><td><p>Event</p></td></tr><tr><td><p>2025-06-06 17:38</p></td><td><p><b>ISSUE INTRODUCED - NO IMPACT</b></p><p>
</p><p>A configuration change was made for a DLS service that was not yet in production. This configuration change accidentally included a reference to the 1.1.1.1 Resolver service and, by extension, the prefixes associated with the 1.1.1.1 Resolver service.</p><p>
</p><p>This change did not result in a change of network configuration, and so routing for the 1.1.1.1 Resolver was not affected.</p><p>
</p><p>Since there was no change in traffic, no alerts fired, but the misconfiguration lay dormant for a future release. </p></td></tr><tr><td><p>2025-07-14 21:48</p></td><td><p><b>IMPACT START</b></p><p>
</p><p>A configuration change was made for the same DLS service. The change attached a test location to the non-production service; this location itself was not live, but the change triggered a refresh of network configuration globally.</p><p>
</p><p>Due to the earlier configuration error linking the 1.1.1.1 Resolver's IP addresses to our non-production service, those 1.1.1.1 IPs were inadvertently included when we changed how the non-production service was set up.</p><p>
</p><p>The 1.1.1.1 Resolver prefixes started to be withdrawn from production Cloudflare data centers globally.</p></td></tr><tr><td><p>2025-07-14 21:52</p></td><td><p>DNS traffic to 1.1.1.1 Resolver service begins to drop globally</p></td></tr><tr><td><p>2025-07-14 21:54</p></td><td><p>Related, non-causal event: BGP origin hijack of 1.1.1.0/24 exposed by withdrawal of routes from Cloudflare. This <b>was not</b> a cause of the service failure, but an unrelated issue that was suddenly visible as that prefix was withdrawn by Cloudflare. </p></td></tr><tr><td><p>2025-07-14 22:01</p><p>
</p></td><td><p><b>IMPACT DETECTED</b></p><p>
</p><p>Internal service health alerts begin to fire for the 1.1.1.1 Resolver</p></td></tr><tr><td><p>2025-07-14 22:01</p></td><td><p><b>INCIDENT DECLARED</b></p></td></tr><tr><td><p>2025-07-14 22:20</p></td><td><p><b>FIX DEPLOYED</b></p><p>
</p><p>A revert was initiated to restore the previous configuration. To accelerate full restoration of service, a manually triggered action was validated in testing locations before being executed.</p></td></tr><tr><td><p>2025-07-14 22:54</p></td><td><p><b>IMPACT ENDS</b></p><p>
</p><p>Resolver alerts cleared and DNS traffic on Resolver prefixes return to normal levels</p></td></tr><tr><td><p>2025-07-14 22:55</p></td><td><p><b>INCIDENT RESOLVED</b></p></td></tr></table>
    <div>
      <h2><b>Impact</b></h2>
      <a href="#impact">
        
      </a>
    </div>
    <p>Any traffic coming to Cloudflare via 1.1.1.1 Resolver services on these IPs was impacted. Traffic to each of these addresses was also impacted on the corresponding routes. </p>
            <pre><code>1.1.1.0/24
1.0.0.0/24 
2606:4700:4700::/48
162.159.36.0/24
162.159.46.0/24
172.64.36.0/24
172.64.37.0/24
172.64.100.0/24
172.64.101.0/24
2606:54c1:13::/48
2a06:98c1:54::/48</code></pre>
            <p>When the impact started we observed an immediate and significant drop in queries over UDP, TCP and <a href="https://www.rfc-editor.org/rfc/rfc7858"><u>DNS over TLS (DoT)</u></a>. Most users have 1.1.1.1, 1.0.0.1, 2606:4700:4700::1111, or 2606:4700:4700::1001 configured as their DNS server. Below you can see the query rate for each of the individual protocols and how they were impacted during the incident:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/XATlkx1Im1QhnBTJL3ER5/6cc65fce22bd66815c348dac555a1501/image1.png" />
          </figure><p>It’s worth noting that <a href="https://developers.cloudflare.com/1.1.1.1/encryption/dns-over-https/"><u>DoH (DNS-over-HTTPS)</u></a> traffic remained relatively stable as most DoH users use the domain <a href="http://cloudflare-dns.com"><u>cloudflare-dns.com</u></a>, configured manually or through their browser, to access the public DNS resolver, rather than by IP address. DoH remained available and traffic was mostly unaffected as <a href="http://cloudflare-dns.com"><u>cloudflare-dns.com</u></a> uses a different set of IP addresses. Some DNS traffic over UDP that also used different IP addresses remained mostly unaffected as well.</p><p>As the corresponding prefixes were withdrawn, no traffic sent to those addresses could reach Cloudflare. We can see this in the timeline for the BGP announcements for 1.1.1.0/24:
</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/c28k2YwaBVLevqmpV4cjG/f923ecef419b71e5b70cb6a6ca616bbd/image5.png" />
          </figure><p><sup><i>Pictured above is the timeline for BGP withdrawal and re-announcement of 1.1.1.0/24 globally</i></sup></p><p>When looking at the query rate of the withdrawn IPs it can be observed that almost no traffic arrives during the impact window. When the initial fix was applied at 22:20 UTC, a large spike in traffic can be seen before it drops off again. This spike is due to clients retrying their queries. When we started announcing the withdrawn prefixes again, queries were able to reach Cloudflare once more. It took until 22:54 UTC before routing was restored in all locations and traffic returned to mostly normal levels.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5vfTPQ6ndKXzsgphist0Mg/610477306f1f056b4cdf98fbbe274e5b/image6.png" />
          </figure>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/67oZjnT3jx272udhoA5hp7/8c41c972162f81d020cb5d189885882a/image3.png" />
          </figure>
    <div>
      <h2><b>Technical description of the error and how it happened</b></h2>
      <a href="#technical-description-of-the-error-and-how-it-happened">
        
      </a>
    </div>
    
    <div>
      <h3>Failure of 1.1.1.1 Resolver Service</h3>
      <a href="#failure-of-1-1-1-1-resolver-service">
        
      </a>
    </div>
    <p>As described above, a configuration change on June 6 introduced an error in the service topology for a pre-production DLS service. On July 14, a second change to that service was made: an offline data center location was added to the service topology for the pre-production DNS service in order to allow for some internal testing. This change triggered a refresh of the global configuration of the associated routes, and it was at this point that the impact from the earlier configuration error was felt. The service topology for the 1.1.1.1 Resolver's prefixes was reduced from all locations down to a single, offline location. The effect was to trigger the global and immediate withdrawal of all 1.1.1.1 prefixes.</p><p>As routes to 1.1.1.1 were withdrawn, the 1.1.1.1 service itself became unavailable. Alerts fired and an incident was declared.</p>
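    <p>A toy model of this failure mode (not our actual systems): each prefix is advertised from whatever locations its service's topology lists, so accidentally re-linking a prefix to a pre-production topology means the next refresh withdraws it from production everywhere.</p>

```python
# Illustrative location codes; the mapping and services are hypothetical.
ALL_LOCATIONS = {"ams", "sjc", "sin", "iad"}

topologies = {
    "resolver": set(ALL_LOCATIONS),  # 1.1.1.1: advertised globally
    "dls-preprod": set(),            # not yet live
}
prefix_to_service = {
    "1.1.1.0/24": "resolver",
    "203.0.113.0/24": "dls-preprod",
}

# June 6 error: resolver prefix accidentally linked to the pre-prod service.
# No refresh happens, so nothing changes yet and no alerts fire.
prefix_to_service["1.1.1.0/24"] = "dls-preprod"

def advertised(prefix: str) -> set:
    """Locations a prefix is advertised from, per its service topology."""
    return topologies[prefix_to_service[prefix]]

# July 14: an offline test location is attached to the pre-prod topology,
# and the subsequent refresh withdraws 1.1.1.0/24 from all production sites.
topologies["dls-preprod"] = {"offline-test"}
print(advertised("1.1.1.0/24"))  # {'offline-test'} — withdrawn from production
```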
    <div>
      <h3>Technical Investigation and Analysis</h3>
      <a href="#technical-investigation-and-analysis">
        
      </a>
    </div>
    <p>The way that Cloudflare manages service topologies has been refined over time and currently consists of a combination of a legacy system and a strategic system that are kept in sync. Cloudflare's IP ranges are currently bound and configured across these systems, which dictate where (in terms of data center location) an IP range should be announced on the edge network. The legacy approach of hard-coding explicit lists of data center locations and attaching them to particular prefixes has proved error-prone, since (for example) bringing a new data center online requires many different lists to be updated and synced consistently. This model also has a significant flaw in that updates to the configuration do not follow a progressive deployment methodology: even though this release was peer-reviewed by multiple engineers, the change didn’t go through a series of canary deployments before reaching every Cloudflare data center. Our newer approach is to describe service topologies without needing to hard-code IP addresses, which better accommodates expansion to new locations and customer scenarios while also allowing for a staged deployment model, so changes can propagate slowly with health monitoring. During the migration between these approaches, we need to maintain both systems and synchronize data between them, which looks like this:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1ofHPUKzoes5uJY7VluA0F/b39b729457ef62361443f7c83444d8fe/image2.png" />
          </figure><p>Initial alerts were triggered for the DNS Resolver at 22:01, indicating query, proxy, and data center failures. While investigating the alerts, we noted traffic toward the Resolver prefixes had drastically dropped and was no longer being received at our edge data centers. Internally, we use BGP to control route advertisements, and we found the Resolver routes from servers were completely missing.</p><p>Once our configuration error had been exposed and Cloudflare systems had withdrawn the routes from our routing table, all of the 1.1.1.1 routes should have disappeared entirely from the global Internet routing table. However, this isn’t what happened with the prefix 1.1.1.0/24. Instead, we got reports from <a href="https://radar.cloudflare.com/routing/anomalies/hijack-107469"><u>Cloudflare Radar</u></a> that Tata Communications India (AS4755) had started advertising 1.1.1.0/24: from the perspective of the routing system, this looked exactly like a prefix hijack. This was unexpected to see while we were troubleshooting the routing problem, but to be perfectly clear: <b>this BGP hijack was not the cause of the outage.</b> We are following up with Tata Communications.</p>
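<p>For illustration only, a pre-deployment check in the spirit of the remediations described later could refuse any change that leaves a prefix with no online locations in its topology. The data model and names below are hypothetical, not Cloudflare's tooling:</p>

```python
# Hypothetical sketch of a topology sanity check. A prefix whose service
# topology contains no online locations would be withdrawn everywhere, so
# such a change should be rejected before it propagates.
def validate_topology(prefixes, locations):
    """prefixes: {prefix: set of location names}; locations: {name: is_online}."""
    errors = []
    for prefix, locs in prefixes.items():
        live = {loc for loc in locs if locations.get(loc, False)}
        if not live:
            errors.append(f"{prefix}: no online locations in {sorted(locs)}")
    return errors

locations = {"dc-offline": False, "dc-a": True, "dc-b": True}
# The failure mode described above: a prefix reduced to one offline location.
bad = {"1.1.1.0/24": {"dc-offline"}}
ok = {"1.1.1.0/24": {"dc-a", "dc-b"}}
print(validate_topology(bad, locations))  # flags the prefix
print(validate_topology(ok, locations))   # []
```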
    <div>
      <h3>Restoring the 1.1.1.1 Service</h3>
      <a href="#restoring-the-1-1-1-1-service">
        
      </a>
    </div>
    <p>We reverted to the previous configuration at 22:20 UTC. Almost instantly, we began readvertising the BGP prefixes which were previously withdrawn from the routers, including 1.1.1.0/24. This restored 1.1.1.1 traffic levels to roughly 77% of what they were prior to the incident. However, during the period since withdrawal, approximately 23% of the fleet of edge servers had been automatically reconfigured to remove required IP bindings as a result of the topology change. To add the configurations back, these servers needed to be reconfigured through our change management system, which, for safety, is deliberately not instantaneous. </p><p>Restoring the IP bindings normally takes some time, as the network in individual locations is designed to be updated over the course of multiple hours. We roll changes out progressively, rather than to all nodes at once, to ensure we don’t introduce additional impact. However, given the severity of the incident, we accelerated the rollout of the fix after verifying the changes in testing locations to restore service as quickly and safely as possible. Normal traffic levels were observed at 22:54 UTC.</p>
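<p>The progressive rollout described above can be sketched as a series of exponentially growing waves, with a health check gating each wave before the next begins. The wave sizes and server names here are illustrative assumptions, not Cloudflare's actual parameters:</p>

```python
# Illustrative sketch of a staged rollout: start small, grow the batch size
# only after each wave passes health checks. Sizes here are made up.
def rollout_waves(servers, first=1, factor=4):
    """Yield successive batches: 1 server, then 4, then 16, ... until done."""
    i, size = 0, first
    while i < len(servers):
        yield servers[i:i + size]
        i += size
        size *= factor

servers = [f"edge-{n}" for n in range(25)]
waves = list(rollout_waves(servers))
print([len(w) for w in waves])  # [1, 4, 16, 4]
```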
    <div>
      <h2><b>Remediation and follow-up steps</b></h2>
      <a href="#remediation-and-follow-up-steps">
        
      </a>
    </div>
    <p>We take incidents like this seriously, and we recognise the impact that this incident had. Though this specific issue has been resolved, we have identified several steps we can take to mitigate the risk of a similar problem occurring in the future. We are implementing the following plan as a result of this incident:</p><p><b>Staging Addressing Deployments: </b>Legacy components do not leverage a gradual, staged deployment methodology. Cloudflare will deprecate these systems in favor of modern progressive, health-mediated deployment processes that surface problems early in a staged rollout and allow changes to be rolled back before they reach every data center.</p><p><b>Deprecating Legacy Systems:</b> We are currently in an intermediate state in which current and legacy components need to be updated concurrently, which is exactly the kind of risky deployment methodology this incident exposed. We will accelerate our deprecation of the legacy systems and hold their replacements to higher standards for documentation and test coverage.</p>
    <div>
      <h2><b>Conclusion</b></h2>
      <a href="#conclusion">
        
      </a>
    </div>
    <p>Cloudflare's 1.1.1.1 DNS Resolver service fell victim to an internal configuration error.</p><p>We are sorry for the disruption this incident caused for our customers. We are actively making these improvements to ensure improved stability moving forward and to prevent this problem from happening again.</p> ]]></content:encoded>
            <category><![CDATA[Outage]]></category>
            <category><![CDATA[IPv4]]></category>
            <category><![CDATA[1.1.1.1]]></category>
            <category><![CDATA[WARP]]></category>
            <guid isPermaLink="false">5rRaCTCC50CW9n2PKjL7xY</guid>
            <dc:creator>Ash Pallarito</dc:creator>
            <dc:creator>Joe Abley</dc:creator>
        </item>
        <item>
            <title><![CDATA[Your IPs, your rules: enabling more efficient address space usage]]></title>
            <link>https://blog.cloudflare.com/your-ips-your-rules-enabling-more-efficient-address-space-usage/</link>
            <pubDate>Mon, 19 May 2025 13:00:00 GMT</pubDate>
            <description><![CDATA[ IPv4 is expensive, and moving network resources around is hard. Previously, when customers wanted to use multiple Cloudflare services, they had to bring a new address range. ]]></description>
            <content:encoded><![CDATA[ <p>IPv4 addresses have become a costly commodity, driven by their growing scarcity. With the original pool of 4.3 billion addresses long exhausted, organizations must now rely on the secondary market to acquire them. Over the years, prices have surged, often exceeding $30–$50 USD per address, with <a href="https://auctions.ipv4.global/?cf_history_state=%7B%22guid%22%3A%22C255D9FF78CD46CDA4F76812EA68C350%22%2C%22historyId%22%3A6%2C%22targetId%22%3A%22B695D806845101070936062659E97ADD%22%7D"><u>costs</u></a> varying based on block size and demand. Given the scarcity, these prices are only going to rise, particularly for businesses that haven’t transitioned to <a href="https://blog.cloudflare.com/amazon-2bn-ipv4-tax-how-avoid-paying/"><u>IPv6</u></a>. This rising cost and limited availability have made efficient IP address management more critical than ever. In response, we’ve evolved how we handle BYOIP (<a href="https://blog.cloudflare.com/bringing-your-own-ips-to-cloudflare-byoip/"><u>Bring Your Own IP</u></a>) prefixes to give customers greater flexibility.</p><p>Historically, when customers onboarded a BYOIP prefix, they were required to assign it to a single service, binding all IP addresses within that prefix to one service before it was advertised. Once set, the prefix's destination was fixed — to direct traffic exclusively to that service. If a customer wanted to use a different service, they had to onboard a new prefix or go through the cumbersome process of offboarding and re-onboarding the existing one.</p><p>As a step towards addressing this limitation, we’ve introduced a new level of flexibility: customers can now use parts of any prefix — whether it’s bound to Cloudflare CDN, Spectrum, or Magic Transit — for additional use with CDN or Spectrum. This enhancement provides much-needed flexibility, enabling businesses to optimize their IP address usage while keeping costs under control. </p>
    <div>
      <h2>The challenges of moving onboarded BYOIP prefixes between services</h2>
      <a href="#the-challenges-of-moving-onboarded-byoip-prefixes-between-services">
        
      </a>
    </div>
    <p>Migrating BYOIP prefixes dynamically between Cloudflare services is no trivial task, especially with thousands of servers capable of accepting and processing connections. The problem required overcoming several technical challenges related to IP address management, kernel-level bindings, and orchestration. </p>
    <div>
      <h3>Dynamic reallocation of prefixes across services</h3>
      <a href="#dynamic-reallocation-of-prefixes-across-services">
        
      </a>
    </div>
    <p>When configuring an IP prefix for a service, we need to update IP address lists and firewall rules on each of our servers to allow only the traffic we expect for that service, such as opening ports 80 and 443 to allow HTTP and HTTPS traffic for the Cloudflare CDN. We use Linux <a href="https://en.wikipedia.org/wiki/Iptables#:~:text=iptables%20is%20a%20user%2Dspace,to%20treat%20network%20traffic%20packets."><u>iptables</u></a> and <a href="https://en.wikipedia.org/wiki/Iptables"><u>IP sets</u></a> for this.</p><p>Migrating IP prefixes to a different service involves dynamically reassigning them to different IP sets and iptables rules. This requires automated updates across a large-scale distributed environment.</p><p>As prefixes shift between services, it is critical that servers update their IP sets and iptables rules dynamically to ensure traffic is correctly routed. Failure to do so could lead to routing loops or dropped connections.</p>
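<p>As a rough sketch of this idea, one could derive the ipset and iptables commands for a service from its allowed ports. The set names, per-service port lists, and rule shapes below are invented for illustration, not our production configuration:</p>

```python
# Illustrative only: generate firewall commands from a service's port policy.
# Port 0 is used here to mean "any port" (e.g. Spectrum accepts arbitrary ports).
SERVICE_PORTS = {"cdn": [80, 443], "spectrum": [0]}

def firewall_commands(service, prefixes):
    cmds = [f"ipset create {service}-v4 hash:net -exist"]
    cmds += [f"ipset add {service}-v4 {p} -exist" for p in prefixes]
    for port in SERVICE_PORTS[service]:
        match = "" if port == 0 else f" --dport {port}"
        cmds.append(
            f"iptables -A INPUT -p tcp -m set --match-set {service}-v4 dst{match} -j ACCEPT"
        )
    return cmds

for line in firewall_commands("cdn", ["192.0.2.0/24"]):
    print(line)
```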
    <div>
      <h3>Updating Tubular – an eBPF-based IP and port binding service</h3>
      <a href="#updating-tubular-an-ebpf-based-ip-and-port-binding-service">
        
      </a>
    </div>
    <p>Most web applications bind to a list of IP addresses at startup, and listen on only those IPs until shutdown. To allow customers to change the IPs bound to each service dynamically, we needed a way to add and remove IPs from a running service, without restarting it. <a href="https://blog.cloudflare.com/tubular-fixing-the-socket-api-with-ebpf/"><u>Tubular</u></a> is a <a href="https://blog.cloudflare.com/cloudflare-architecture-and-how-bpf-eats-the-world/"><u>BPF</u></a> program we wrote that runs on Cloudflare servers and allows services to listen on a single socket, dynamically updating the list of addresses that are routed to that socket over the lifetime of the service, without requiring it to restart when those addresses change.</p><p>A significant engineering challenge was extending Tubular to support traffic destined for Cloudflare’s CDN. Without this enhancement, customers would be unable to leverage dynamic reassignment to bind prefixes onboarded through Spectrum to the Cloudflare CDN, limiting flexibility across services.</p><p>Cloudflare’s CDN depends on each server running an NGINX ingress proxy to terminate incoming connections. Due to the <a href="https://blog.cloudflare.com/how-we-built-pingora-the-proxy-that-connects-cloudflare-to-the-internet/"><u>scale and performance limitations of NGINX</u></a>, we are actively working to replace it by 2026. In the interim, however, we still depend on the current ingress proxy to reliably handle incoming connections.</p><p>One limitation is that this ingress proxy does not support <a href="https://systemd.io/"><u>systemd</u></a> socket activation, a mechanism Tubular relies on to integrate with other Cloudflare services on each server. 
For services that do support systemd socket activation, systemd independently starts the sockets for the owning service and passes them to Tubular, allowing Tubular to easily detect and route traffic to the correct terminating service.</p><p>Since this integration model is not feasible, an alternative solution was required. This was addressed by introducing a shared Unix domain socket between Tubular and the ingress proxy service on each server. Through this channel,  the ingress proxy service explicitly transmits socket information to Tubular, enabling it to correctly register the sockets in its datapath.</p><p>The final challenge was deploying the Tubular-ingress proxy integration across the fleet of servers without disrupting active connections. As of April 2025, Cloudflare handles an average of 71 million HTTP requests per second, peaking at 100 million. To safely deploy at this scale, the necessary Tubular and ingress proxy configuration changes were staged across all Cloudflare servers without disrupting existing connections. The final step involved adding bindings — IP addresses and ports corresponding to Cloudflare CDN prefixes — to the Tubular configuration. These bindings direct connections through Tubular via the Unix sockets registered during the previous integration step. To minimize risk, bindings were gradually enabled in a controlled rollout across the global fleet.</p>
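<p>The socket handoff described above relies on the standard SCM_RIGHTS mechanism for passing file descriptors over a Unix domain socket. This toy sketch, which is not Cloudflare's actual code, shows the idea using Python's socket.send_fds and socket.recv_fds: one process transmits a listening socket's file descriptor, and the other adopts it without ever having opened it:</p>

```python
import socket

# Illustrative sketch (not Cloudflare's code) of passing a listening socket
# between processes over a Unix domain socket, as in the ingress-proxy-to-
# Tubular channel described above. send_fds/recv_fds wrap SCM_RIGHTS.
parent, child = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)

# "Ingress proxy" side: create a listening TCP socket and transmit its fd.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))
listener.listen()
socket.send_fds(parent, [b"take this socket"], [listener.fileno()])

# "Tubular" side: receive the fd and reconstruct a usable socket object.
msg, fds, _flags, _addr = socket.recv_fds(child, 1024, 1)
adopted = socket.socket(fileno=fds[0])

adopted_name = adopted.getsockname()
listener_name = listener.getsockname()
print(msg, adopted_name == listener_name)  # same underlying socket
```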
    <div>
      <h4>Tubular data plane in action</h4>
      <a href="#tubular-data-plane-in-action">
        
      </a>
    </div>
    <p>This high-level representation of the Tubular data plane binds together the Layer 4 protocol (TCP), prefix (192.0.2.0/24 - which is 254 usable IP addresses), and port number 0 (any port). When incoming packets match this combination, they are directed to the correct socket of the service — in this case, Spectrum.​</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5yQpYeTxPM7B8DZwLsQATs/3f488c5b37ef2358eacf779a42ac59d5/image4.png" />
          </figure><p>In the following example, TCP 192.0.2.200/32 has been upgraded to the Cloudflare CDN via the edge <a href="https://developers.cloudflare.com/api/resources/addressing/subresources/prefixes/subresources/service_bindings/"><u>Service Bindings API</u></a>. Tubular dynamically consumes this information, adding a new entry to its data plane bindings and socket table. Using Longest Prefix Match, all packets within the 192.0.2.0/24 range port 0 will be routed to Spectrum, except for 192.0.2.200/32 port 443, which will be directed to the Cloudflare CDN.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6wWlR9gWb6JEoyZm4iOpgQ/4a59bcab4a6731a53ea235500596c7f5/image1.png" />
          </figure>
    <div>
      <h4>Coordination and orchestration at scale </h4>
      <a href="#coordination-and-orchestration-at-scale">
        
      </a>
    </div>
    <p>Our goal is to achieve a quick transition of IP address prefixes between services when initiated by customers, which requires a high level of coordination. We need to ensure that changes propagate correctly across all servers to maintain stability. Currently, when a customer migrates a prefix between services, there is a 4-6 hour window of uncertainty where incoming packets may be dropped due to a lack of guaranteed routing. To address this, we are actively implementing systems that will reduce this transition time from hours to just a matter of minutes, significantly improving reliability and minimizing disruptions.</p>
    <div>
      <h2>Smarter IP address management</h2>
      <a href="#smarter-ip-address-management">
        
      </a>
    </div>
    <p>Service Bindings are mappings that control whether traffic destined for a given IP address is routed to Magic Transit, the CDN pipeline, or the Spectrum pipeline.</p><p>Consider the example in the diagram below. One of our customers, a global finance infrastructure platform, is using BYOIP and has a /24 range bound to <a href="https://developers.cloudflare.com/spectrum/"><u>Spectrum</u></a> for DDoS protection of their TCP and UDP traffic. However, they are only using a few addresses in that range for their Spectrum applications, while the rest go unused. In addition, the customer is using Cloudflare’s CDN for their Layer 7 traffic and wants to set up <a href="https://developers.cloudflare.com/byoip/concepts/static-ips/"><u>Static IPs</u></a>, so that their customers can allowlist a consistent set of IP addresses owned and controlled by their own network infrastructure team. Instead of using up another block of address space, they asked us whether they could carve out those unused sub-ranges of the /24 prefix.</p><p>From there, we set out to determine how to selectively map sub-ranges of the onboarded prefix to different services using service bindings:</p><ul><li><p>192.0.2.0/24 is already bound to <b>Spectrum</b></p><ul><li><p>192.0.2.0/25 is updated and bound to <b>CDN</b></p></li><li><p>192.0.2.200/32 is also updated and bound to <b>CDN</b></p></li></ul></li></ul><p>Both the /25 and /32 are sub-ranges within the /24 prefix and will receive traffic directed to the CDN. All remaining IP addresses within the /24 prefix, unless explicitly bound, will continue to use the default Spectrum service binding.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/uwhMHBEuI1NHfp9qD9IFM/d2dcea59a8d9f962f03389831fd73851/image3.png" />
          </figure><p>As you can see in this example, this approach provides customers with greater control and agility over how their IP address space is allocated. Instead of rigidly assigning an entire prefix to a single service, users can now tailor their IP address usage to match specific workloads or deployment needs. Setting this up is straightforward — all it takes is a few HTTP requests to the <a href="https://developers.cloudflare.com/api/resources/addressing/subresources/prefixes/subresources/service_bindings/"><u>Cloudflare API</u></a>. You can define service bindings by specifying which IP addresses or subnets should be routed to CDN, Spectrum, or Magic Transit. This allows you to tailor traffic routing to match your architecture without needing to restructure your entire IP address allocation. The process remains consistent whether you're configuring a single IP address or splitting up larger subnets, making it easy to apply across different parts of your network. The foundational technical work addressing the underlying architectural challenges outlined above made it possible to streamline what could have been a complex setup into a straightforward series of API interactions.</p>
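<p>As a hedged sketch of those API interactions, the two sub-range bindings from the example could be constructed like this. The account and prefix identifiers are placeholders, and the exact request schema should be verified against the Service Bindings API documentation linked above before use:</p>

```python
import json

# Hypothetical sketch of preparing Service Bindings API requests.
# IDs are placeholders; check the linked API docs for the exact schema.
ACCOUNT_ID = "0123456789abcdef"     # placeholder account identifier
PREFIX_ID = "example-prefix-id"     # placeholder onboarded-prefix identifier
CDN_SERVICE_ID = "cdn-service-id"   # placeholder service identifier

def binding_request(cidr, service_id):
    url = (f"https://api.cloudflare.com/client/v4/accounts/{ACCOUNT_ID}"
           f"/addressing/prefixes/{PREFIX_ID}/bindings")
    body = {"cidr": cidr, "service_id": service_id}
    return url, json.dumps(body)

# Bind the two sub-ranges to the CDN; everything else in 192.0.2.0/24
# keeps the prefix's default Spectrum binding.
for cidr in ("192.0.2.0/25", "192.0.2.200/32"):
    url, body = binding_request(cidr, CDN_SERVICE_ID)
    print("POST", url, body)
```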
    <div>
      <h2>Conclusion</h2>
      <a href="#conclusion">
        
      </a>
    </div>
    <p>We envision a future where customers have granular control over how their traffic moves through Cloudflare’s global network, not just by service, but down to the port level. A single prefix could simultaneously power web applications on CDN, protect infrastructure through Magic Transit, and much more. This isn't just flexible routing, but programmable traffic orchestration across different services. What was once rigid and static becomes dynamic and fully programmable to meet each customer’s unique needs. </p><p>If you are an existing BYOIP customer using Magic Transit, CDN, or Spectrum, check out our <a href="https://developers.cloudflare.com/byoip/service-bindings/magic-transit-with-cdn/"><u>configuration guide here</u></a>. If you are interested in bringing your own IP address space and using multiple Cloudflare services on it, please reach out to your account team to enable setting up this configuration via <a href="https://developers.cloudflare.com/api/resources/addressing/subresources/prefixes/subresources/service_bindings/"><u>API</u></a> or reach out to sales@cloudflare.com if you’re new to Cloudflare.</p> ]]></content:encoded>
            <category><![CDATA[API]]></category>
            <category><![CDATA[Addressing]]></category>
            <category><![CDATA[IPv4]]></category>
            <category><![CDATA[BYOIP]]></category>
            <category><![CDATA[Spectrum]]></category>
            <category><![CDATA[CDN]]></category>
            <category><![CDATA[Magic Transit]]></category>
            <guid isPermaLink="false">7FAYMppkyZG4CEGdLEcLlR</guid>
            <dc:creator>Mark Rodgers</dc:creator>
            <dc:creator>Sphoorti Metri</dc:creator>
            <dc:creator>Ash Pallarito</dc:creator>
        </item>
        <item>
            <title><![CDATA[HTTPS-only for Cloudflare APIs: shutting the door on cleartext traffic]]></title>
            <link>https://blog.cloudflare.com/https-only-for-cloudflare-apis-shutting-the-door-on-cleartext-traffic/</link>
            <pubDate>Thu, 20 Mar 2025 13:00:00 GMT</pubDate>
            <description><![CDATA[ We are closing the cleartext HTTP ports entirely for Cloudflare API traffic. This prevents the risk of clients unintentionally leaking their secret API keys in cleartext during the initial request.  ]]></description>
            <content:encoded><![CDATA[ <p>Connections made over cleartext HTTP ports risk exposing sensitive information because the data is transmitted unencrypted and can be intercepted by network intermediaries, such as ISPs, Wi-Fi hotspot providers, or malicious actors on the same network. It’s common for servers to either <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Redirections"><u>redirect</u></a> or return a <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/403"><u>403 (Forbidden)</u></a> response to close the HTTP connection and enforce the use of HTTPS by clients. However, by the time this occurs, it may be too late, because sensitive information, such as an API token, may have already been <a href="https://jviide.iki.fi/http-redirects"><u>transmitted in cleartext</u></a> in the initial client request. This data is exposed before the server has a chance to redirect the client or reject the connection.</p><p>A better approach is to refuse the underlying cleartext connection by closing the <a href="https://developers.cloudflare.com/fundamentals/reference/network-ports/"><u>network ports</u></a> used for plaintext HTTP, and that’s exactly what we’re going to do for our customers.</p><p><b>Today we’re announcing that we’re closing all of the </b><a href="https://developers.cloudflare.com/fundamentals/reference/network-ports/#network-ports-compatible-with-cloudflares-proxy:~:text=HTTP%20ports%20supported%20by%20Cloudflare"><b><u>HTTP ports</u></b></a><b> on api.cloudflare.com.</b> We’re also making changes so that api.cloudflare.com can change IP addresses dynamically, in line with on-going efforts to <a href="https://blog.cloudflare.com/addressing-agility/"><u>decouple names from IP addresses</u></a>, and reliably <a href="https://blog.cloudflare.com/topaz-policy-engine-design/"><u>managing</u></a> addresses in our authoritative DNS. This will enhance the agility and flexibility of our API endpoint management. 
Customers relying on static IP addresses for our API endpoints will be notified in advance to prevent any potential availability issues.</p><p>In addition to taking this first step to secure Cloudflare API traffic, we’ll release the ability for customers to opt-in to safely disabling all HTTP port traffic for their websites on Cloudflare. We expect to make this free security feature available in the last quarter of 2025.</p><p>We have <a href="https://blog.cloudflare.com/introducing-universal-ssl/"><u>consistently</u></a> <a href="https://blog.cloudflare.com/enforce-web-policy-with-hypertext-strict-transport-security-hsts/"><u>advocated</u></a> for <a href="https://blog.cloudflare.com/post-quantum-for-all/"><u>strong encryption standards</u></a> to safeguard users’ data and privacy online. As part of our ongoing commitment to enhancing Internet security, this blog post details our efforts to <i>enforce</i> HTTPS-only connections across our global network. </p>
    <div>
      <h3>Understanding the problem</h3>
      <a href="#understanding-the-problem">
        
      </a>
    </div>
    <p>We already provide an “<a href="https://developers.cloudflare.com/ssl/edge-certificates/additional-options/always-use-https/"><u>Always Use HTTPS</u></a>” setting that can be used to redirect all visitor traffic on our customers’ domains (and subdomains) from HTTP (plaintext) to HTTPS (encrypted). For instance, when a user clicks on an HTTP version of the URL on the site (http://www.example.com), we issue an HTTP 3XX redirection status code to immediately redirect the request to the corresponding HTTPS version (https://www.example.com) of the page. While this works well for most scenarios, there’s a subtle but important risk factor: What happens if the initial plaintext HTTP request (before the redirection) contains sensitive user information?</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2mXYZL0JRZOb8J6Tqm4mCj/1b24b76335ad9cf3f3b630ef31868f6c/1.png" />
          </figure><p><sup><i>Initial plaintext HTTP request is exposed to the network before the server can redirect to the secure HTTPS connection.</i></sup></p><p>Third parties or intermediaries on shared networks could intercept sensitive data from the first plaintext HTTP request, or even carry out a <a href="https://blog.cloudflare.com/monsters-in-the-middleboxes/"><u>Monster-in-the-Middle (MITM)</u></a> attack by impersonating the web server.</p><p>One may ask if <a href="https://developers.cloudflare.com/ssl/edge-certificates/additional-options/http-strict-transport-security/"><u>HTTP Strict Transport Security (HSTS)</u></a> would partially alleviate this concern by ensuring that, after the first request, visitors can only access the website over HTTPS without needing a redirect. While this does reduce the window of opportunity for an adversary, the first request still remains exposed. Additionally, HSTS is not applicable by default for most non-user-facing use cases, such as API traffic from stateless clients. Many API clients don’t retain browser-like state or remember HSTS headers they've encountered. It is quite <a href="https://jviide.iki.fi/http-redirects"><u>common practice</u></a> for API calls to be redirected from HTTP to HTTPS, and hence have their initial request exposed to the network.</p><p>Therefore, in line with our <a href="https://blog.cloudflare.com/dogfooding-from-home/"><u>culture of dogfooding</u></a>, we evaluated the accessibility of the Cloudflare API (<a href="https://api.cloudflare.com"><u>api.cloudflare.com</u></a>) over <a href="https://developers.cloudflare.com/fundamentals/reference/network-ports/#:~:text=ports%20listed%20below.-,HTTP,-ports%20supported%20by"><u>HTTP ports (80, and others)</u></a>. In that regard, imagine a client making an initial request to our API endpoint that includes their <i>secret API key</i>. 
While we outright reject all plaintext connections with a 403 Forbidden response instead of redirecting for API traffic — clearly indicating that “<i>Cloudflare API is only accessible over TLS”</i> — this rejection still happens at the <a href="https://www.cloudflare.com/learning/ddos/what-is-layer-7/">application layer</a>. By that point, the API key may have already been exposed over the network before we can even reject the request. We do have a notification mechanism in place to alert customers and rotate their API keys accordingly, but a stronger approach would be to eliminate the exposure entirely. We have an opportunity to improve!</p>
    <div>
      <h3>A better approach to API security</h3>
      <a href="#a-better-approach-to-api-security">
        
      </a>
    </div>
    <p>Any API key or token exposed in plaintext on the public Internet should be considered compromised. We can either address exposure after it occurs or prevent it entirely. The reactive approach involves continuously tracking and revoking compromised credentials, requiring active management to rotate each one. For example, when a plaintext HTTP request is made to our API endpoints, we detect exposed tokens by scanning for 'Authorization' header values.</p><p>In contrast, a preventive approach is stronger and more effective, stopping exposure before it happens. Instead of relying on the API service application to react after receiving potentially sensitive cleartext data, we can preemptively refuse the underlying connection at the <a href="https://www.cloudflare.com/learning/ddos/glossary/open-systems-interconnection-model-osi/"><u>transport layer</u></a>, before any HTTP or application-layer data is exchanged. The <i>preventive</i> approach can be achieved by closing all plaintext HTTP ports for API traffic on our global network. The added benefit is that this is operationally much simpler: by eliminating cleartext traffic, there's no need for key rotation.</p>
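<p>The reactive detection mentioned above amounts to scanning a cleartext request for credential-bearing headers so the exposed token can be flagged for rotation. A minimal, purely illustrative sketch, not our internal pipeline:</p>

```python
# Illustrative sketch: find an Authorization header in a raw cleartext
# HTTP request so the exposed credential can be flagged for rotation.
def find_exposed_token(raw_request: bytes):
    for line in raw_request.split(b"\r\n"):
        name, _, value = line.partition(b":")
        if name.strip().lower() == b"authorization":
            return value.strip().decode()
    return None

req = (b"GET /client/v4/user HTTP/1.1\r\n"
       b"Host: api.cloudflare.com\r\n"
       b"Authorization: Bearer SECRET-TOKEN\r\n\r\n")
print(find_exposed_token(req))  # Bearer SECRET-TOKEN
```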
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1PYo3GMZjQ4LbfUHXNOVpj/2341da1d926e077624563358bd5034ef/2.png" />
          </figure><p><sup><i>The transport layer carries the application layer data on top.</i></sup></p><p>To explain why this works: an application-layer request requires an underlying transport connection, like TCP or QUIC, to be established first. The combination of a port number and an IP address serves as a transport layer identifier for creating the underlying transport channel. Ports direct network traffic to the correct application-layer process — for example, port 80 is designated for plaintext HTTP, while port 443 is used for encrypted HTTPS. By disabling the HTTP cleartext server-side port, we prevent that transport channel from being established during the initial "<a href="https://www.cloudflare.com/learning/ddos/glossary/tcp-ip/"><u>handshake</u></a>" phase of the connection — before any application data, such as a secret API key, leaves the client’s machine.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/13fKw8cHkHlsLOzlXJYenr/c9156f67ae99cfdc74dc5917ebc1e5bb/3.png" />
          </figure><p><sup><i>Both TCP and QUIC transport layer handshakes are a pre-requisite for HTTPS application data exchange on the web.</i></sup></p><p>Therefore, closing the HTTP interface entirely for API traffic gives a strong and visible <b>fast-failure</b> signal to developers that might be mistakenly accessing <code>http://… </code>instead of <code>https://…</code> with their secret API keys in the first request — a simple one-letter omission, but one with serious implications.</p><p>In theory, this is a simple change, but at Cloudflare’s global scale, implementing it required careful planning and execution. We’d like to share the steps we took to make this transition.</p>
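<p>This fast-failure behavior is easy to observe against any closed port: the TCP connection attempt is refused during the handshake, so no application data, and therefore no API key, ever leaves the client. A small local demonstration:</p>

```python
import socket

# Toy demonstration: with nothing listening on the cleartext port, the TCP
# handshake is refused before any application data can be transmitted.
probe = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
probe.bind(("127.0.0.1", 0))   # grab a free port number...
port = probe.getsockname()[1]
probe.close()                   # ...then free it, so connects are refused

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    client.connect(("127.0.0.1", port))
    outcome = "connected"
except ConnectionRefusedError:
    outcome = "refused before any bytes were sent"
finally:
    client.close()
print(outcome)
```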
    <div>
      <h3>Understanding the scope</h3>
      <a href="#understanding-the-scope">
        
      </a>
    </div>
    <p>In an ideal scenario, we could simply close all cleartext HTTP ports on our network. However, two key challenges prevent this. First, as shown in the <a href="https://radar.cloudflare.com/adoption-and-usage#http-vs-https"><u>Cloudflare Radar</u></a> figure below, about 2-3% of requests from “likely human” clients to our global network are over plaintext HTTP. While modern browsers prominently warn users about insecure HTTP connections and <a href="https://support.mozilla.org/en-US/kb/https-only-prefs"><u>offer features to silently upgrade to HTTPS</u></a>, this protection doesn't extend to the broader ecosystem of connected devices. IoT devices with limited processing power, automated API clients, or legacy software stacks often lack such safeguards entirely. In fact, when filtering on plaintext HTTP traffic that is “likely automated”, the share <a href="https://radar.cloudflare.com/explorer?dataSet=http&amp;groupBy=http_protocol&amp;filters=botClass%253DLikely_Automated"><u>rises to over 16%</u></a>! We continue to see a wide variety of legacy clients accessing resources over plaintext connections. This trend is not confined to specific networks, but is observable globally.</p><p>Closing HTTP ports, like port 80, across our entire IP address space would block such clients entirely, causing a major disruption in services. While we plan to cautiously start by implementing the change on Cloudflare's API IP addresses, it’s not enough. Therefore, our goal is to ensure all of our customers’ API traffic benefits from this change as well.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1OfUjwkP9iMdjymjtJX7tL/4cd278faf71f610c43239cc41d8f6fba/4.png" />
          </figure><p><sup><i>Breakdown of HTTP and HTTPS for ‘human’ connections</i></sup></p><p>The second challenge relates to limitations posed by the longstanding <a href="https://en.wikipedia.org/wiki/Berkeley_sockets"><u>BSD Sockets API</u></a> at the server-side, which we have addressed using <a href="https://blog.cloudflare.com/tubular-fixing-the-socket-api-with-ebpf/"><u>Tubular</u></a>, a tool that inspects every connection terminated by a server and decides which application should receive it. Operators historically have faced a challenging dilemma: either listen to the same ports across many IP addresses using a single socket (scalable but inflexible), or maintain individual sockets for each IP address (flexible but unscalable). Luckily, Tubular has allowed us to <a href="https://blog.cloudflare.com/its-crowded-in-here/"><u>resolve this using 'bindings'</u></a>, which decouples sockets from specific IP:port pairs. This creates efficient pathways for managing endpoints throughout our systems at scale, enabling us to handle both HTTP and HTTPS traffic intelligently without the traditional limitations of socket architecture.</p><p>Step 0, then, is about provisioning both IPv4 and IPv6 address space on our network that by default has all HTTP ports closed. Tubular enables us to configure and manage these IP addresses differently than others for our endpoints. Additionally, <a href="https://blog.cloudflare.com/addressing-agility/"><u>Addressing Agility</u></a> and <a href="https://blog.cloudflare.com/topaz-policy-engine-design/"><u>Topaz</u></a> enable us to assign these addresses dynamically, and safely, for opted-in domains.</p>
    <div>
      <h3>Moving from strategy to execution</h3>
      <a href="#moving-from-strategy-to-execution">
        
      </a>
    </div>
    <p>In the past, our legacy stack would have made this transition challenging, but today’s Cloudflare possesses the appropriate tools to deliver a scalable solution, rather than addressing it on a domain-by-domain basis.</p><p>Using Tubular, we were able to bind our new set of anycast IP prefixes to our TLS-terminating proxies across the globe. To ensure that no plaintext HTTP traffic is served on these IP addresses, we extended our global <a href="https://en.wikipedia.org/wiki/Iptables"><u>iptables</u></a> firewall configuration to reject any inbound packets on HTTP ports.</p>
<pre><code>iptables -A INPUT -p tcp -d &lt;IP_ADDRESS_BLOCK&gt; --dport &lt;HTTP_PORT&gt; -j REJECT \
    --reject-with tcp-reset

iptables -A INPUT -p udp -d &lt;IP_ADDRESS_BLOCK&gt; --dport &lt;HTTP_PORT&gt; -j REJECT \
    --reject-with icmp-port-unreachable</code></pre>
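<p>Conceptually, the rules above implement a simple per-packet decision. Here is a minimal Python model of that behavior; the address block and port set are examples, not our production values:</p><pre><code># Illustrative model of the firewall behavior: connections to HTTP
# ports on the HTTPS-only address block are rejected at the transport
# layer. TCP gets an RST, UDP an ICMP port-unreachable, mirroring the
# --reject-with options in the rules above.
from ipaddress import ip_address, ip_network

HTTPS_ONLY_BLOCK = ip_network("192.0.2.0/24")  # example block (RFC 5737)
HTTP_PORTS = {80, 8080}                        # example plaintext ports

def firewall_verdict(dst, proto, dport):
    """Return the action an inbound packet would receive."""
    if ip_address(dst) in HTTPS_ONLY_BLOCK and dport in HTTP_PORTS:
        return "reject-tcp-reset" if proto == "tcp" else "reject-icmp-port-unreachable"
    return "accept"</code></pre>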
            <p>As a result, any connections to these IP addresses on HTTP ports are filtered and rejected at the transport layer, eliminating the need for state management at the application layer by our web proxies.</p><p>The next logical step is to update the <a href="https://www.cloudflare.com/learning/dns/what-is-dns/"><u>DNS assignments</u></a> so that API traffic is routed over the <i>correct</i> IP addresses. In our case, we encoded a new DNS policy for API traffic for the HTTPS-only interface as a declarative <a href="https://blog.cloudflare.com/topaz-policy-engine-design/"><u>Topaz program</u></a> in our authoritative DNS server:</p>
<pre><code>- name: https_only
  exclusive: true
  config: |
    (config
      ([traffic_class "API"]
       [ipv4 (ipv4_address "192.0.2.1")] # Example IPv4 address
       [ipv6 (ipv6_address "2001:DB8::1:1")] # Example IPv6 address
       [t (ttl 300)]))
  match: |
    (= query_domain_class traffic_class)
  response: |
    (response (list ipv4) (list ipv6) t)</code></pre>
            <p>The above policy encodes that for any DNS query targeting the ‘API traffic’ class, we return the respective HTTPS-only interface IP addresses. Topaz’s safety guarantees ensure <i>exclusivity</i>, preventing other DNS policies from inadvertently matching the same queries and misrouting domains that still expect plaintext HTTP to HTTPS-only IPs.</p><p>api.cloudflare.com is the first domain to be added to our HTTPS-only API traffic class, with other applicable endpoints to follow.</p>
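<p>The matching behavior can be illustrated with a small Python sketch. This is a toy model of the policy semantics, not Topaz itself; the domain classification table and the returned addresses are assumed examples:</p><pre><code># Toy model of the exclusive policy: queries for domains classified as
# "API" traffic get the HTTPS-only addresses; anything else is left for
# other policies to answer. Classification and addresses are examples.
DOMAIN_CLASSES = {"api.cloudflare.com": "API"}  # assumed classification

def https_only_policy(query_domain):
    """Return the HTTPS-only answer for API-class domains, else None."""
    if DOMAIN_CLASSES.get(query_domain) == "API":
        return {"ipv4": ["192.0.2.1"], "ipv6": ["2001:DB8::1:1"], "ttl": 300}
    return None  # policy does not match; other policies may answer</code></pre>
<p>Exclusivity means that when this policy matches a query, no other policy is permitted to match it as well, so a domain can never receive a mixed set of answers.</p>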
    <div>
      <h3>Opting-in your API endpoints</h3>
      <a href="#opting-in-your-api-endpoints">
        
      </a>
    </div>
    <p>As we said above, we've started with api.cloudflare.com and our internal API endpoints to thoroughly monitor any side effects on our own systems before extending this feature to customer domains. We have deployed these changes gradually across all data centers, leveraging Topaz’s flexibility to target subsets of traffic, minimizing disruptions, and ensuring a smooth transition.</p><p>Before blocking access with this feature, you can monitor unencrypted connections for your domains by reviewing the relevant analytics on the Cloudflare dashboard. Log in, select your account and domain, and navigate to the "<a href="https://developers.cloudflare.com/analytics/types-of-analytics/#account-analytics-beta"><u>Analytics &amp; Logs</u></a>" section. There, under the "<i>Traffic Served Over SSL</i>" subsection, you will find a breakdown of encrypted and unencrypted traffic for your site. That data provides a baseline for assessing the volume of plaintext HTTP connections that will be blocked when you opt in. Once you opt in, no traffic for your site will be served over plaintext HTTP, so that number should drop to zero.</p>
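<p>If you also export request logs, a quick baseline can be computed along these lines. The record field name below is an assumption for illustration, not the dashboard or Logpush schema:</p><pre><code># Rough baseline check: given request records with a TLS-protocol
# field ("tls_protocol" is an assumed name), compute the share of
# requests that arrived over plaintext HTTP.
def plaintext_share(requests):
    """Fraction of requests served without TLS."""
    plaintext = sum(1 for r in requests if r.get("tls_protocol") in (None, "none"))
    return plaintext / len(requests) if requests else 0.0</code></pre>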
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4YjvOU3XQqj1Y2Kfv2jIL3/97178a99d17f8938bc3ec53704bbc4b8/5.png" />
          </figure><p><i>Snapshot of ‘Traffic Served Over SSL’ section on Cloudflare dashboard</i></p><p>Towards the last quarter of 2025, we will provide customers the ability to opt in their domains using the dashboard or API (similar to enabling the Always Use HTTPS feature). Stay tuned!</p>
    <div>
      <h3>Wrapping up</h3>
      <a href="#wrapping-up">
        
      </a>
    </div>
    <p>Starting today, any unencrypted connection to api.cloudflare.com will be completely rejected. Developers should <b>no longer</b> expect a 403 Forbidden response for HTTP connections: rather than returning an error at the application layer, we prevent the underlying connection from being established by closing the HTTP interface entirely. Only secure HTTPS connections can be established.</p><p>We are also making updates to transition api.cloudflare.com away from its static IP addresses in the future. As part of that change, we will be discontinuing support for <a href="https://developers.cloudflare.com/ssl/reference/browser-compatibility/#non-sni-support"><u>non-SNI</u></a> legacy clients for the Cloudflare API specifically — currently, an average of just 0.55% of TLS connections to the Cloudflare API do not include an <a href="https://www.cloudflare.com/learning/ssl/what-is-sni/">SNI</a> value. These non-SNI connections are initiated by a small number of accounts. We are committed to coordinating this transition and will work closely with the affected customers before implementing the change. This initiative aligns with our goal of enhancing the agility and reliability of our API endpoints.</p><p>Beyond the Cloudflare API use case, we're also exploring other areas where it's safe to close plaintext traffic ports. While the long tail of unencrypted traffic may persist for a while, it shouldn’t be forced on every site.</p><p>In the meantime, a small step like this can have a big impact in helping build a better Internet, and we are working hard to reliably bring this feature to your domains. We believe security should be free for all!</p> ]]></content:encoded>
            <category><![CDATA[Security Week]]></category>
            <category><![CDATA[HTTPS]]></category>
            <category><![CDATA[API]]></category>
            <category><![CDATA[Addressing]]></category>
            <category><![CDATA[Research]]></category>
            <guid isPermaLink="false">RqjV9vQoPNX8txlmrju6d</guid>
            <dc:creator>Suleman Ahmad</dc:creator>
            <dc:creator>Ash Pallarito</dc:creator>
            <dc:creator>Algin Martin</dc:creator>
        </item>
    </channel>
</rss>