
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/">
    <channel>
        <title><![CDATA[ The Cloudflare Blog ]]></title>
        <description><![CDATA[ Get the latest news on how products at Cloudflare are built, technologies used, and join the teams helping to build a better Internet. ]]></description>
        <link>https://blog.cloudflare.com</link>
        <atom:link href="https://blog.cloudflare.com/" rel="self" type="application/rss+xml"/>
        <language>en-us</language>
        <image>
            <url>https://blog.cloudflare.com/favicon.png</url>
            <title>The Cloudflare Blog</title>
            <link>https://blog.cloudflare.com</link>
        </image>
        <lastBuildDate>Tue, 07 Apr 2026 14:32:44 GMT</lastBuildDate>
        <item>
            <title><![CDATA[DIY BYOIP: a new way to Bring Your Own IP prefixes to Cloudflare]]></title>
            <link>https://blog.cloudflare.com/diy-byoip/</link>
            <pubDate>Fri, 07 Nov 2025 14:00:00 GMT</pubDate>
            <description><![CDATA[ Announcing a new self-serve API for Bring Your Own IP (BYOIP), giving customers unprecedented control and flexibility to onboard, manage, and use their own IP prefixes with Cloudflare's services. ]]></description>
            <content:encoded><![CDATA[ <p>When a customer wants to <a href="https://blog.cloudflare.com/bringing-your-own-ips-to-cloudflare-byoip/"><u>bring IP address space to</u></a> Cloudflare, they’ve always had to reach out to their account team to put in a request. This request would then be sent to various Cloudflare engineering teams such as addressing and network engineering — and then the team responsible for the particular service they wanted to use the prefix with (e.g., CDN, Magic Transit, Spectrum, Egress). In addition, they had to work with their own legal teams (and potentially another organization, if they did not have primary ownership of an IP prefix) to get a <a href="https://developers.cloudflare.com/byoip/concepts/loa/"><u>Letter of Agency (LOA)</u></a> issued through multiple rounds of approval. This process is complex, manual, and time-consuming for all parties involved, sometimes taking 4–6 weeks depending on various approvals.</p><p>Well, no longer! Today, we are pleased to announce the launch of our self-serve BYOIP API, which enables our customers to onboard and set up their BYOIP prefixes themselves.</p><p>With self-serve, we handle the bureaucracy for you. We have automated this process using the gold standard for routing security: the Resource Public Key Infrastructure (RPKI). All the while, we continue to ensure the best quality of service by generating LOAs on our customers’ behalf, based on the security guarantees of our new ownership validation process. This ensures that customer routes continue to be accepted in every corner of the Internet.</p><p>Cloudflare takes the security and stability of the whole Internet very seriously. RPKI is a cryptographically strong authorization mechanism and is, we believe, substantially more reliable than the common practice of relying on human review of scanned documents. However, deployment and availability of some RPKI-signed artifacts, like the Autonomous System Provider Authorization (ASPA) object, remains limited, and for that reason we are limiting the initial scope of self-serve onboarding to BYOIP prefixes originated from Cloudflare's autonomous system number (ASN), AS13335. By doing this, we only need to rely on the publication of Route Origin Authorization (ROA) objects, which are widely available. This approach has the advantage of being safe for the Internet while also meeting the needs of most of our BYOIP customers.</p><p>Today, we take a major step forward in offering customers a more comprehensive IP address management (IPAM) platform. With the recent update to <a href="https://blog.cloudflare.com/your-ips-your-rules-enabling-more-efficient-address-space-usage/"><u>enable multiple services on a single BYOIP prefix</u></a> and this latest advancement to enable self-serve onboarding via our API, we hope customers feel empowered to take control of their IPs on our network.</p>
    <div>
      <h2>An evolution of Cloudflare BYOIP</h2>
      <a href="#an-evolution-of-cloudflare-byoip">
        
      </a>
    </div>
    <p>We want Cloudflare to feel like an extension of your infrastructure, which is why we <a href="https://blog.cloudflare.com/bringing-your-own-ips-to-cloudflare-byoip/"><u>originally launched Bring Your Own IP (BYOIP) back in 2020</u></a>.</p><p>A quick refresher: Bring Your Own IP is named for exactly what it does, allowing customers to bring their own IP space to Cloudflare. Customers choose BYOIP for a number of reasons, but the main ones are control and configurability. An IP prefix is a range or block of IP addresses. Routers create a table of reachable prefixes, known as a routing table, to ensure that packets are delivered correctly across the Internet. When a customer's Cloudflare services are configured to use the customer's own addresses, onboarded to Cloudflare as BYOIP, a packet with a corresponding destination address will be routed across the Internet to Cloudflare's global edge network, where it will be received and processed. BYOIP can be used with our Layer 7 services, Spectrum, or Magic Transit.</p>
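<p>To make the routing-table idea concrete, here is a minimal longest-prefix-match lookup in Python. This is an illustrative sketch using documentation prefixes and made-up next-hop names, not anything from Cloudflare's network:</p>

```python
import ipaddress

# A routing table maps reachable prefixes to next hops; the most
# specific (longest) matching prefix wins. Prefixes and next-hop names
# here are purely illustrative documentation values.
TABLE = {
    ipaddress.ip_network("198.51.100.0/24"): "cloudflare-edge",
    ipaddress.ip_network("198.51.0.0/16"): "transit-provider",
    ipaddress.ip_network("0.0.0.0/0"): "default-gateway",
}

def lookup(destination: str) -> str:
    addr = ipaddress.ip_address(destination)
    matches = [net for net in TABLE if addr in net]
    # Longest-prefix match: the largest prefixlen is the most specific.
    return TABLE[max(matches, key=lambda net: net.prefixlen)]
```

<p>Here <code>lookup("198.51.100.7")</code> picks "cloudflare-edge", because the /24 is more specific than the covering /16 and the default route.</p>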
    <div>
      <h2>A look under the hood: How it works</h2>
      <a href="#a-look-under-the-hood-how-it-works">
        
      </a>
    </div>
    
    <div>
      <h3>Today’s world of prefix validation</h3>
      <a href="#todays-world-of-prefix-validation">
        
      </a>
    </div>
    <p>Let’s take a step back and look at the state of the BYOIP world right now. Let’s say a customer has authority over a range of IP addresses, and they’d like to bring them to Cloudflare. We require customers to provide us with a Letter of Agency (LOA) and to have an Internet Routing Registry (IRR) record matching their prefix and ASN. Once we have this, we require manual review by a Cloudflare engineer. There are a few issues with this process:</p><ul><li><p>Insecure: The LOA is just a document, a piece of paper. The security of this method rests entirely on the diligence of the engineer reviewing the document. If the reviewer fails to detect that a document is fraudulent or inaccurate, it is possible for a prefix or ASN to be hijacked.</p></li><li><p>Time-consuming: Generating a single LOA is not always sufficient. If you are leasing IP space, we will ask you to provide documentation confirming that relationship as well, so that we can see a clear chain of authorization from the original assignment or allocation of addresses to you. Gathering all the paper documents to verify this chain of ownership, combined with waiting for manual review, can mean weeks of delay before a prefix is deployed!</p></li></ul>
    <div>
      <h3>Automating trust: How Cloudflare verifies your BYOIP prefix ownership in minutes</h3>
      <a href="#automating-trust-how-cloudflare-verifies-your-byoip-prefix-ownership-in-minutes">
        
      </a>
    </div>
    <p>Moving to a self-serve model allowed us to rethink the manner in which we conduct prefix ownership checks. We asked ourselves: How can we quickly, securely, and automatically prove you are authorized to use your IP prefix and intend to route it through Cloudflare?</p><p>We ended up killing two birds with one stone, thanks to our two-step process involving the creation of an RPKI ROA (verification of intent) and modification of IRR or rDNS records (verification of ownership). Self-serve unlocks the ability not only to onboard prefixes more quickly and without human intervention, but also to apply more rigorous ownership checks than a simple scanned document ever could. While not 100% foolproof, it is a significant improvement in the way we verify ownership.</p>
    <div>
      <h3>Tapping into the authorities</h3>
      <a href="#tapping-into-the-authorities">
        
      </a>
    </div>
    <p>Regional Internet Registries (RIRs) are the organizations responsible for distributing and managing Internet number resources like IP addresses. There are five <a href="https://developers.cloudflare.com/byoip/get-started/#:~:text=Your%20prefix%20must%20be%20registered%20under%20one%20of%20the%20Regional%20Internet%20Registries%20(RIRs)%3A"><u>RIRs</u></a>, each operating in a different region of the world. Originally allocated address space from the Internet Assigned Numbers Authority (IANA), they in turn assign and allocate that IP space to Local Internet Registries (LIRs) like ISPs.</p><p>This process is based on RIR policies which generally look at things like legal documentation, existing database/registry records, technical contacts, and BGP information. End-users can obtain addresses from an LIR, or in some cases through an RIR directly. As IPv4 addresses have become more scarce, brokerage services have been launched to allow addresses to be leased for fixed periods from their original assignees.</p><p>The Internet Routing Registry (IRR) is a separate system that focuses on routing rather than address assignment. Many organizations operate IRR instances and allow routing information to be published, including all five RIRs. While most IRR instances impose few barriers to the publication of routing data, those that are operated by RIRs are capable of linking the ability to publish routing information with the organizations to which the corresponding addresses have been assigned. We believe that being able to modify an IRR record protected in this way provides a good signal that a user has the rights to use a prefix.</p><p>Example of a route object containing a validation token (using the documentation-only address 192.0.2.0/24):</p>
            <pre><code>% whois -h rr.arin.net 192.0.2.0/24

route:          192.0.2.0/24
origin:         AS13335
descr:          Example Company, Inc.
                cf-validation: 9477b6c3-4344-4ceb-85c4-6463e7d2453f
admin-c:        ADMIN2521-ARIN
tech-c:         ADMIN2521-ARIN
tech-c:         CLOUD146-ARIN
mnt-by:         MNT-CLOUD14
created:        2025-07-29T10:52:27Z
last-modified:  2025-07-29T10:52:27Z
source:         ARIN</code></pre>
            <p>For those that don’t want to go through the process of IRR-based validation, reverse DNS (rDNS) is provided as another secure method of verification. To manage rDNS for a prefix — whether it's creating a PTR record or a security TXT record — you must be granted permission by the entity that allocated the IP block in the first place (usually your ISP or the RIR).</p><p>This permission is demonstrated in one of two ways:</p><ul><li><p>Directly through the IP owner’s authenticated customer portal (ISP/RIR).</p></li><li><p>By the IP owner delegating authority to your third-party DNS provider via an NS record for your reverse zone.</p></li></ul><p>Example of a reverse domain lookup using dig command (using the documentation-only address 192.0.2.0/24):</p>
            <pre><code>% dig cf-validation.2.0.192.in-addr.arpa TXT

; &lt;&lt;&gt;&gt; DiG 9.10.6 &lt;&lt;&gt;&gt; cf-validation.2.0.192.in-addr.arpa TXT
;; global options: +cmd
;; Got answer:
;; -&gt;&gt;HEADER&lt;&lt;- opcode: QUERY, status: NOERROR, id: 16686
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;cf-validation.2.0.192.in-addr.arpa. IN TXT

;; ANSWER SECTION:
cf-validation.2.0.192.in-addr.arpa. 300 IN TXT "b2f8af96-d32d-4c46-a886-f97d925d7977"

;; Query time: 35 msec
;; SERVER: 127.0.2.2#53(127.0.2.2)
;; WHEN: Fri Oct 24 10:43:52 EDT 2025
;; MSG SIZE  rcvd: 150</code></pre>
            <p>So how exactly is one supposed to modify these records? That’s where the validation token comes into play. Once you choose either the IRR or Reverse DNS method, we provide a unique, single-use validation token. You must add this token to the content of the relevant record, either in the IRR or in the DNS. Our system then looks for the presence of the token as evidence that the request is being made by someone with authorization to make the requested modification. If the token is found, verification is complete and your ownership is confirmed!</p>
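<p>The record name in the dig example above can be derived mechanically from the prefix. The sketch below assumes a /24 and reuses the <code>cf-validation</code> label from that example; the helper names and handling of other prefix lengths are our illustrative assumptions, not Cloudflare's actual tooling:</p>

```python
import ipaddress

def rdns_validation_name(prefix: str) -> str:
    """Derive the TXT record name for a /24, following the dig example
    above. The cf-validation label comes from that example; handling of
    other prefix lengths is an assumption, not documented behaviour."""
    net = ipaddress.ip_network(prefix)
    # reverse_pointer of 192.0.2.0 is '0.2.0.192.in-addr.arpa';
    # dropping the host octet yields the /24 reverse zone.
    zone = net.network_address.reverse_pointer.split(".", 1)[1]
    return f"cf-validation.{zone}"

def token_matches(txt_value: str, expected_token: str) -> bool:
    # Verification is a simple comparison of the published TXT value
    # against the single-use token that was issued.
    return txt_value.strip('"') == expected_token
```

<p>For 192.0.2.0/24 this yields the name <code>cf-validation.2.0.192.in-addr.arpa</code>, matching the query in the dig output above.</p>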
    <div>
      <h3>The digital passport 🛂</h3>
      <a href="#the-digital-passport">
        
      </a>
    </div>
    <p>Ownership is only half the battle; we also need to confirm your intent: that you authorize Cloudflare to advertise your prefix. For this, we rely on the gold standard for routing security: the Resource Public Key Infrastructure (RPKI), and in particular Route Origin Authorization (ROA) objects.</p><p>A ROA is a cryptographically-signed document that specifies which Autonomous System Number (ASN) is authorized to originate your IP prefix. You can think of a ROA as the digital equivalent of a certified, signed, and notarized contract from the owner of the prefix.</p><p>Relying parties can validate the signatures in a ROA using the RPKI. You simply create a ROA that specifies Cloudflare's ASN (AS13335) as an authorized originator and arrange for it to be signed. Many of our customers use hosted RPKI systems available through RIR portals for this. When our systems detect this signed authorization, your routing intention is instantly confirmed.</p><p>Many other companies that support BYOIP require a complex workflow involving creating self-signed certificates and manually modifying RDAP (Registration Data Access Protocol) records, a heavy administrative lift. By offering a choice of IRR object modification or reverse DNS TXT records, combined with RPKI, we offer a verification process that is much more familiar and straightforward for existing network operators.</p>
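<p>As a rough sketch of the semantics, simplified from RFC 6811 and not Cloudflare's validator, route origin validation boils down to finding a covering ROA and comparing the origin ASN and announced prefix length:</p>

```python
import ipaddress

# Simplified route origin validation in the spirit of RFC 6811. Each
# ROA authorizes one origin ASN for a prefix, up to a maximum announced
# length. The ROA below uses documentation values for illustration.
ROAS = [
    # (ROA prefix, max length, authorized origin ASN)
    (ipaddress.ip_network("192.0.2.0/24"), 24, 13335),
]

def origin_validation(prefix: str, origin_asn: int) -> str:
    net = ipaddress.ip_network(prefix)
    covered = False
    for roa_prefix, max_length, asn in ROAS:
        if not net.subnet_of(roa_prefix):
            continue
        covered = True  # some ROA covers this prefix
        if asn == origin_asn and net.prefixlen <= max_length:
            return "valid"
    # Covered by a ROA but with the wrong origin (or too-specific a
    # prefix): invalid. No covering ROA at all: not found.
    return "invalid" if covered else "not-found"
```

<p>An announcement of 192.0.2.0/24 from AS13335 validates as "valid"; the same announcement from any other origin is "invalid", which is what lets relying parties reject hijacks.</p>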
    <div>
      <h3>The global reach guarantee</h3>
      <a href="#the-global-reach-guarantee">
        
      </a>
    </div>
    <p>While the new self-serve flow ditches the need for the "dinosaur relic" that is the LOA, many network operators around the world still rely on it as part of the process of accepting prefixes from other networks.</p><p>To help ensure your prefix is accepted by adjacent networks globally, Cloudflare automatically generates a document on your behalf to be distributed in place of a LOA. This document provides information about the checks that we have carried out to confirm that we are authorised to originate the customer prefix, and confirms the presence of valid ROAs to authorise our origination of it. In this way we are able to support the workflows of network operators we connect to who rely upon LOAs, without our customers having the burden of generating them.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1GimIe80gJn5PrRUGkEMpF/130d2590e45088d58ac62ab2240f4d5c/image1.png" />
          </figure>
    <div>
      <h2>Staying away from black holes</h2>
      <a href="#staying-away-from-black-holes">
        
      </a>
    </div>
    <p>One concern in designing the Self-Serve API is the trade-off between giving customers flexibility and implementing the necessary safeguards so that an IP prefix is never advertised without a matching service binding. If this were to happen, Cloudflare would be advertising a prefix with no idea what to do with the traffic when we receive it! We call this “blackholing” traffic. To handle this, we introduced the requirement of a default service binding, i.e., a service binding that spans the entire range of the onboarded IP prefix.</p><p>A customer can later layer different service bindings on top of their default service binding via <a href="https://blog.cloudflare.com/your-ips-your-rules-enabling-more-efficient-address-space-usage/"><u>multiple service bindings</u></a>, like putting CDN on top of a default Spectrum service binding. This way, a prefix can never be advertised without a service binding, and we never blackhole our customers’ traffic.</p>
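<p>The invariant can be sketched as a simple coverage check. The function below is hypothetical, illustrating the rule rather than our implementation:</p>

```python
import ipaddress

def safe_to_advertise(prefix, bindings):
    """Advertise a prefix only if some service binding spans its entire
    range, so no destination address is left with nowhere to send
    traffic. Hypothetical check illustrating the rule, not real code."""
    net = ipaddress.ip_network(prefix)
    # bindings: iterable of (cidr, service_name) pairs.
    return any(net.subnet_of(ipaddress.ip_network(cidr))
               for cidr, _service in bindings)
```

<p>A /25 CDN binding alone does not cover a /24 prefix, so advertisement would be refused until a default binding spanning the whole /24 exists.</p>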
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/20QAM5GITJ5m5kYkNlh701/82812d202ffa7b9a4e46838aa6c04937/image2.png" />
          </figure>
    <div>
      <h2>Getting started</h2>
      <a href="#getting-started">
        
      </a>
    </div>
    <p>Check out our <a href="https://developers.cloudflare.com/byoip/get-started/"><u>developer docs</u></a> for the most up-to-date documentation on how to onboard, advertise, and add services to your IP prefixes via our API. Remember that onboardings can be complex, and don’t hesitate to ask questions or reach out to our <a href="https://www.cloudflare.com/professional-services/"><u>professional services</u></a> team if you’d like us to do it for you.</p>
    <div>
      <h2>The future of network control</h2>
      <a href="#the-future-of-network-control">
        
      </a>
    </div>
    <p>The ability to script and integrate BYOIP management into existing workflows is a game-changer for modern network operations, and we’re only just getting started. In the months ahead, look for self-serve BYOIP in the dashboard, as well as self-serve BYOIP offboarding to give customers even more control.</p><p>Cloudflare's self-serve BYOIP API onboarding empowers customers with unprecedented control and flexibility over their IP assets. This move to automated onboarding also enables a stronger security posture, moving away from manually-reviewed PDFs and driving <a href="https://rpki.cloudflare.com/"><u>RPKI adoption</u></a>. By using these API calls, organizations can automate complex network tasks, streamline migrations, and build more resilient and agile network infrastructures.</p>
            <category><![CDATA[API]]></category>
            <category><![CDATA[Addressing]]></category>
            <category><![CDATA[BYOIP]]></category>
            <category><![CDATA[IPv4]]></category>
            <category><![CDATA[IPv6]]></category>
            <category><![CDATA[Spectrum]]></category>
            <category><![CDATA[CDN]]></category>
            <category><![CDATA[Magic Transit]]></category>
            <category><![CDATA[Egress]]></category>
            <category><![CDATA[Cloudflare Gateway]]></category>
            <category><![CDATA[RPKI]]></category>
            <category><![CDATA[Aegis]]></category>
            <category><![CDATA[Smart Shield]]></category>
            <guid isPermaLink="false">4usaEaUwShJ04VKzlMV0V9</guid>
            <dc:creator>Ash Pallarito</dc:creator>
            <dc:creator>Lynsey Haynes</dc:creator>
            <dc:creator>Gokul Unnikrishnan</dc:creator>
        </item>
        <item>
            <title><![CDATA[So long, and thanks for all the fish: how to escape the Linux networking stack]]></title>
            <link>https://blog.cloudflare.com/so-long-and-thanks-for-all-the-fish-how-to-escape-the-linux-networking-stack/</link>
            <pubDate>Wed, 29 Oct 2025 13:00:00 GMT</pubDate>
            <description><![CDATA[ Many products at Cloudflare aren’t possible without pushing the limits of network hardware and software to deliver improved performance, increased efficiency, or novel capabilities such as soft-unicast, our method for sharing IP subnets across data centers. Happily, most people do not need to know the intricacies of how your operating system handles network and Internet access in general. Yes, even most people within Cloudflare. But sometimes we try to push well beyond the design intentions of Linux’s networking stack. This is a story about one of those attempts. ]]></description>
            <content:encoded><![CDATA[ <p><a href="https://www.goodreads.com/quotes/2397-there-is-a-theory-which-states-that-if-ever-anyone"><u>There is a theory which states</u></a> that if ever anyone discovers exactly what the Linux networking stack does and why it does it, it will instantly disappear and be replaced by something even more bizarre and inexplicable.</p><p>There is another theory which states that Git was created to track how many times this has already happened.</p><p>Many products at Cloudflare aren’t possible without pushing the limits of network hardware and software to deliver improved performance, increased efficiency, or novel capabilities such as <a href="https://blog.cloudflare.com/cloudflare-servers-dont-own-ips-anymore/"><u>soft-unicast, our method for sharing IP subnets across data centers</u></a>. Happily, most people do not need to know the intricacies of how your operating system handles network and Internet access in general. Yes, even most people within Cloudflare.</p><p>But sometimes we try to push well beyond the design intentions of Linux’s networking stack. This is a story about one of those attempts.</p>
    <div>
      <h2>Hard solutions for soft problems</h2>
      <a href="#hard-solutions-for-soft-problems">
        
      </a>
    </div>
    <p>My previous blog post about the Linux networking stack teased a problem matching the ideal model of soft-unicast with the basic reality of IP packet forwarding rules. Soft-unicast is the name given to our method of sharing IP addresses between machines. <a href="https://blog.cloudflare.com/cloudflare-servers-dont-own-ips-anymore/"><u>You may learn about all the cool things we do with it</u></a>, but as far as a single machine is concerned, it has dozens to hundreds of combinations of IP address and source-port range, any of which may be chosen for use by outgoing connections.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1NsU3FdxgJ0FNL78SDCo9D/65a27e8fd4339d3318a1b55b5979e3c6/image3.png" />
          </figure><p>The SNAT target in iptables supports a source-port range option to restrict the ports selected during NAT. In theory, we could continue to use iptables for this purpose, and to support multiple IP/port combinations we could use separate packet marks or multiple TUN devices. In actual deployment we would have to overcome challenges such as managing large numbers of iptables rules and possibly network devices, interference with other uses of packet marks, and deployment and reallocation of existing IP ranges.</p><p>Rather than increase the workload on our firewall, we wrote a single-purpose service dedicated to egressing IP packets on soft-unicast address space. For reasons lost in the mists of time, we named it SLATFATF, or “fish” for short. This service’s sole responsibility is to proxy IP packets using soft-unicast address space and manage the lease of those addresses.</p><p>WARP is not the only user of soft-unicast IP space in our network. Many Cloudflare products and services make use of the soft-unicast capability, and many of them use it in scenarios where we create a TCP socket in order to proxy or carry HTTP connections and other TCP-based protocols. Fish therefore needs to lease addresses that are not used by open sockets, and ensure that sockets cannot be opened to addresses leased by fish.</p><p>Our first attempt was to use distinct per-client addresses in fish and continue to let Netfilter/conntrack apply SNAT rules. However, we discovered an unfortunate interaction between Linux’s socket subsystem and the Netfilter conntrack module that reveals itself starkly when you use packet rewriting.</p>
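<p>As a rough illustration of the bookkeeping involved (the class and its API are hypothetical, not fish's actual code), a lease manager must track which ports of a soft-unicast slice are free, leased, or claimed by local sockets:</p>

```python
# Rough sketch of the bookkeeping a service like fish needs: lease
# (ip, port) pairs from a soft-unicast slice while keeping clear of
# ports already claimed by open sockets. Class and API are hypothetical.
class SliceLeaser:
    def __init__(self, ip, port_start, port_end):
        self.ip = ip
        self.free = set(range(port_start, port_end + 1))
        self.leased = set()

    def reserve_for_socket(self, port):
        # A local socket owns this port; it must never be leased.
        self.free.discard(port)

    def lease(self):
        if not self.free:
            return None  # slice exhausted; caller must wait or fail
        port = self.free.pop()
        self.leased.add(port)
        return (self.ip, port)

    def release(self, port):
        # Return a leased port to the pool once its flow has ended.
        if port in self.leased:
            self.leased.discard(port)
            self.free.add(port)
```

<p>The real service must additionally keep this state consistent with the kernel's view of open sockets, which is exactly the interaction the rest of this post explores.</p>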
    <div>
      <h2>Collision avoidance</h2>
      <a href="#collision-avoidance">
        
      </a>
    </div>
    <p>Suppose we have a soft-unicast address slice, 198.51.100.10:9000-9009. Then, suppose we have two separate processes that want to bind a TCP socket at 198.51.100.10:9000 and connect it to 203.0.113.1:443. The first process can do this successfully, but the second process will receive an error when it attempts to connect, because there is already a socket matching the requested 5-tuple.</p>
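<p>You can reproduce this collision on a single machine with ordinary sockets. In this sketch, loopback addresses stand in for both the soft-unicast slice and the remote destination:</p>

```python
import socket

# A local listener stands in for the remote destination
# (203.0.113.1:443 in the example above); loopback replaces the
# soft-unicast source address. Demonstration sketch only.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen()
dst = listener.getsockname()

# First "process": bind a source address and connect. This succeeds.
first = socket.socket()
first.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
first.bind(("127.0.0.1", 0))
src = first.getsockname()
first.connect(dst)

# Second "process": same source, same destination -- the same 5-tuple.
# The kernel refuses the duplicate at bind() or connect() time.
second = socket.socket()
second.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
err = None
try:
    second.bind(src)
    second.connect(dst)
except OSError as exc:
    err = exc
print("second connection failed:", err is not None)
```

<p>The second attempt fails with an OSError, matching the behaviour described above for two processes sharing one soft-unicast slice.</p>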
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2eXmHlyC0pdDUkZ9OI3JI/b83286088b4efa6ddee897e8b5d3b191/image8.png" />
          </figure><p>Instead of creating sockets, what happens when we emit packets on a TUN device with the same destination IP but a unique source IP, and use source NAT to rewrite those packets to an address in this range?</p><p>If we add an nftables “snat” rule that rewrites the source address to 198.51.100.10:9000-9009, Netfilter will create an entry in the conntrack table for each new connection seen on fishtun, mapping the new source address to the original one. If we try to forward more connections on that TUN device to the same destination IP, new source ports will be selected in the requested range, until all ten available ports have been allocated; once this happens, new connections will be dropped until an existing connection expires, freeing an entry in the conntrack table.</p><p>Unlike when binding a socket, Netfilter will simply pick the first free space in the conntrack table. However, if you use up all the possible entries in the table <a href="https://blog.cloudflare.com/conntrack-tales-one-thousand-and-one-flows/"><u>you will get an EPERM error when writing an IP packet</u></a>. Either way, whether you bind kernel sockets or you rewrite packets with conntrack, errors will indicate when there isn’t a free entry matching your requirements.</p><p>Now suppose that you combine the two approaches: a first process emits an IP packet on the TUN device that is rewritten to a packet on our soft-unicast port range. Then, a second process binds and connects a TCP socket with the same addresses as that IP packet:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/57KuCP4vkp4TGPiLwDRPZv/c066279cd8a84a511f09ed5218488cec/image7.png" />
          </figure><p>The first problem is that there is no way for the second process to know that there is an active connection from 198.51.100.10:9000 to 203.0.113.1:443, at the time the <code>connect()</code> call is made. The second problem is that the connection is successful from the point of view of that second process.</p><p>It should not be possible for two connections to share the same 5-tuple. Indeed, they don’t. Instead, the source address of the TCP socket is <a href="https://github.com/torvalds/linux/blob/v6.15/net/netfilter/nf_nat_core.c#L734"><u>silently rewritten to the next free port</u></a>.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3DWpWJ5gBIDoEhimxIR8TT/fd3d8bd46353cd42ed09a527d4841da8/image6.png" />
          </figure><p>This behaviour is present even if you use conntrack without either SNAT or MASQUERADE rules. It usually happens that the lifetime of conntrack entries matches the lifetime of the sockets they’re related to, but this is not guaranteed, and you cannot depend on the source address of your socket matching the source address of the generated IP packets.</p><p>Crucially for soft-unicast, it means conntrack may rewrite our connection to have a source port outside of the port slice assigned to our machine. This will silently break the connection, causing unnecessary delays and false reports of connection timeouts. We need another solution.</p>
    <div>
      <h2>Taking a breather</h2>
      <a href="#taking-a-breather">
        
      </a>
    </div>
    <p>For WARP, the solution we chose was to stop rewriting and forwarding IP packets, and instead terminate all TCP connections within the server and proxy them to a locally-created TCP socket with the correct soft-unicast address. This was an easy and viable solution that we already employed for a portion of our connections, such as those directed at the CDN, or intercepted as part of the Zero Trust Secure Web Gateway. However, it does introduce additional resource usage and potentially increased latency compared to the status quo. We wanted to find another way (to) forward.</p>
    <div>
      <h2>An inefficient interface</h2>
      <a href="#an-inefficient-interface">
        
      </a>
    </div>
    <p>If you want to use both packet rewriting and bound sockets, you need to decide on a single source of truth. Netfilter is not aware of the socket subsystem, but most of the code that uses sockets and is also aware of soft-unicast is code that Cloudflare wrote and controls. A slightly younger version of myself therefore thought it made sense to change our code to work correctly in the face of Netfilter’s design.</p><p>Our first attempt was to use the Netlink interface to the conntrack module, to inspect and manipulate the connection tracking tables before sockets were created. <a href="https://docs.kernel.org/userspace-api/netlink/intro.html"><u>Netlink is an extensible interface to various Linux subsystems</u></a> and is used by many command-line tools like <a href="https://man7.org/linux/man-pages/man8/ip.8.html"><u>ip</u></a> and, in our case, <a href="https://conntrack-tools.netfilter.org/manual.html"><u>conntrack-tools</u></a>. By creating the conntrack entry for the socket we are about to bind, we can guarantee that conntrack won’t rewrite the connection to an invalid port number, and ensure success every time. Likewise, if creating the entry fails, then we can try another valid address. This approach works regardless of whether we are binding a socket or forwarding IP packets.</p><p>There is one problem with this — it’s not terribly efficient. Netlink is slow compared to the bind/connect socket dance, and when creating conntrack entries you have to specify a timeout for the flow and delete the entry if your connection attempt fails, to ensure that the connection table doesn’t fill up too quickly for a given 5-tuple. In other words, you have to manually reimplement the <a href="https://sysctl-explorer.net/net/ipv4/tcp_tw_reuse/"><u>tcp_tw_reuse</u></a> option to support high-traffic destinations with limited resources. In addition, a stray RST packet can erase your connection tracking entry. At our scale, anything like this that can happen, will happen. It is not a place for fragile solutions.</p>
    <div>
      <h2>Socket to ‘em</h2>
      <a href="#socket-to-em">
        
      </a>
    </div>
    <p>Instead of creating conntrack entries, we can abuse kernel features for our own benefit. Some time ago Linux added <a href="https://lwn.net/Articles/495304/"><u>the TCP_REPAIR socket option</u></a>, ostensibly to support connection migration between servers, e.g., to relocate a VM. The scope of this feature allows you to create a new TCP socket and specify its entire connection state by hand.</p><p>An alternative use of this is to create a “connected” socket that never performed the TCP three-way handshake needed to establish that connection. At least, the kernel didn’t do that — if you are forwarding the IP packet containing a TCP SYN, you have more certainty about the expected state of the world.</p><p>However, the introduction of <a href="https://en.wikipedia.org/wiki/TCP_Fast_Open"><u>TCP Fast Open</u></a> provides an even simpler way to do this: you can create a “connected” socket that doesn’t perform the traditional three-way handshake, on the assumption that the SYN packet — when sent with its initial payload — contains a valid cookie to immediately establish the connection. Since nothing is sent until you write to the socket, this serves our needs perfectly.</p><p>You can try this yourself:</p>
            <pre><code># TCP_FASTOPEN_CONNECT defers the handshake until the first write;
# TCP_FASTOPEN_NO_COOKIE skips the Fast Open cookie requirement.
# Neither constant is exposed by Python's socket module.
from socket import socket, AF_INET, SOCK_STREAM, SOL_TCP

TCP_FASTOPEN_CONNECT = 30
TCP_FASTOPEN_NO_COOKIE = 34

s = socket(AF_INET, SOCK_STREAM)
s.setsockopt(SOL_TCP, TCP_FASTOPEN_CONNECT, 1)
s.setsockopt(SOL_TCP, TCP_FASTOPEN_NO_COOKIE, 1)
s.bind(('198.51.100.10', 9000))
s.connect(('1.1.1.1', 53))  # returns immediately; no packets sent yet</code></pre>
            <p>Binding a “connected” socket that nevertheless corresponds to no actual connection has one important feature: if other processes attempt to bind to the same addresses as the socket, they will fail to do so. This solves the problem we started with: making packet forwarding coexist with socket usage.</p>
    <div>
      <h2>Jumping the queue</h2>
      <a href="#jumping-the-queue">
        
      </a>
    </div>
    <p>While this solves one problem, it creates another. By default, you can’t use an IP address for both locally-originated packets and forwarded packets.</p><p>For example, we assign the IP address 198.51.100.10 to a TUN device. This allows any program to create a TCP socket using the address 198.51.100.10:9000. We can also write packets to that TUN device with the address 198.51.100.10:9001, and Linux can be configured to forward those packets to a gateway, following the same route as the TCP socket. So far, so good.</p><p>On the inbound path, TCP packets addressed to 198.51.100.10:9000 will be accepted and data put into the TCP socket. TCP packets addressed to 198.51.100.10:9001, however, will be dropped. They are not forwarded to the TUN device at all.</p><p>Why is this the case? Local routing is special. If packets arrive for a local address, they are treated as “input” and not forwarded, regardless of any routing you think should apply. Behold the default routing rules:</p><p><code>cbranch@linux:~$ ip rule
0:        from all lookup local
32766:    from all lookup main
32767:    from all lookup default</code></p><p>The rule priority is a nonnegative integer; rules with the smallest priority value are evaluated first. This requires some slightly awkward rule manipulation to “insert” a lookup rule at the beginning that redirects marked packets to the packet forwarding service’s TUN device; you have to delete the existing rule, then create new rules in the right order. However, you don’t want to leave the routing rules without any route to the “local” table, in case you lose a packet while manipulating these rules. In the end, the result looks something like this:</p><p><code>ip rule add fwmark 42 table 100 priority 10
ip rule add lookup local priority 11
ip rule del priority 0
ip route add 0.0.0.0/0 proto static dev fishtun table 100</code></p><p>As with WARP, we simplify connection management by assigning a mark to packets coming from the “fishtun” interface, which we can use to route them back there. To prevent locally-originated TCP sockets from having this same mark applied, we assign the IP to the loopback interface instead of fishtun, leaving fishtun with no assigned address. But it doesn’t need one, as we have explicit routing rules now.</p>
    <div>
      <h2>Uncharted territory</h2>
      <a href="#uncharted-territory">
        
      </a>
    </div>
    <p>While testing this last fix, I ran into an unfortunate problem. It did not work in our production environment.</p><p>It is not simple to debug the path of a packet through Linux’s networking stack. There are a few tools you can use, such as setting nftrace in nftables or applying the LOG/TRACE targets in iptables, which help you understand which rules and tables are applied for a given packet.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7ofuljq2tDVVUyzyPOMYSp/3da5954ef254aa3aae5397b310f6dcad/image5.png" />
          </figure><p><a href="https://en.m.wikipedia.org/wiki/File:Netfilter-packet-flow.svg"><sup><u>Schematic for the packet flow paths through Linux networking and *tables</u></sup></a><sup> by </sup><a href="https://commons.wikimedia.org/wiki/User_talk:Jengelh"><sup>Jan Engelhardt</sup></a></p><p>Our expectation is that the packet will pass the prerouting hook, that a routing decision will send it to our TUN device, and that the packet will then traverse the forward table. By tracing packets originating from the IP of a test host, we could see the packets enter the prerouting phase, but disappear after the ‘routing decision’ block.</p><p>While there is a block in the diagram for “socket lookup”, this occurs after processing the input table. Our packet doesn’t ever enter the input table; the only change we made was to create a local socket. If we stop creating the socket, the packet passes to the forward table as before.</p><p>It turns out that part of the ‘routing decision’ involves some protocol-specific processing. For IP packets, <a href="https://github.com/torvalds/linux/blob/89be9a83ccf1f88522317ce02f854f30d6115c41/net/ipv4/ip_input.c#L317"><u>routing decisions can be cached</u></a>, and some basic address validation is performed. In 2012, an additional feature was added: <a href="https://lore.kernel.org/all/20120619.163911.2094057156011157978.davem@davemloft.net/"><u>early demux</u></a>. The rationale: at this point in packet processing we are already performing a lookup, and the majority of received packets are expected to be for local sockets, rather than unknown packets or ones that need to be forwarded somewhere. In that case, why not look up the socket directly here and save yourself an extra route lookup?</p>
    <div>
      <h2>The workaround at the end of the universe</h2>
      <a href="#the-workaround-at-the-end-of-the-universe">
        
      </a>
    </div>
    <p>Unfortunately for us, we just created a socket and didn’t want it to receive packets. Our adjustment to the routing table is ignored, because that routing lookup is skipped entirely when the socket is found. Raw sockets avoid this by receiving all packets regardless of the routing decision, but the packet rate is too high for this to be efficient. The only way around this is disabling the early demux feature. According to the patch’s claims, though, this feature improves performance: how far will performance regress on our existing workloads if we disable it?</p><p>This calls for a simple experiment: set the <a href="https://docs.kernel.org/6.16/networking/ip-sysctl.html"><u>net.ipv4.tcp_early_demux</u></a> sysctl to 0 on some machines in a datacenter, let them run for a while, then compare the CPU usage with machines using default settings and the same hardware configuration as the machines under test.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3ypZGWN811vIQu04YERP8m/709e115068bad3994c88ce899cdfba29/image4.png" />
          </figure>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5eF441OrGSDwvAFEFYWbtT/40c330d687bf7e30597d046274d959e1/image2.png" />
          </figure>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/34gBimlHXXvLLbGJpriVJA/39f7408dd6ef37aaff3f0fa50a37518f/image1.png" />
          </figure><p>The key metrics are CPU usage from /proc/stat. If there is a performance degradation, we would expect to see higher CPU usage allocated to “softirq” — the context in which Linux network processing occurs — with little change to either userspace (top) or kernel time (bottom). The observed difference is slight, and mostly appears to reduce efficiency during off-peak hours.</p>
    <div>
      <h2>Swimming upstream</h2>
      <a href="#swimming-upstream">
        
      </a>
    </div>
    <p>While we tested different solutions to IP packet forwarding, we continued to terminate TCP connections on our network. Despite our initial concerns, the performance impact was small, and the benefits of increased visibility into origin reachability, fast internal routing within our network, and simpler observability of soft-unicast address usage flipped the burden of proof: was it worth trying to implement pure IP forwarding and supporting two different layers of egress?</p><p>So far, the answer is no. Fish runs on our network today, but with the much smaller responsibility of handling ICMP packets. However, when we decide to tunnel all IP packets, we know exactly how to do it.</p><p>A typical engineering role at Cloudflare involves solving many strange and difficult problems at scale. If you are the kind of goal-focused engineer willing to try novel approaches and explore the capabilities of the Linux kernel despite minimal documentation, look at <a href="https://www.cloudflare.com/en-gb/careers/jobs/?department=Engineering"><u>our open positions</u></a> — we would love to hear from you!</p> ]]></content:encoded>
            <category><![CDATA[Research]]></category>
            <category><![CDATA[Linux]]></category>
            <category><![CDATA[Egress]]></category>
            <guid isPermaLink="false">x9Fb6GXRm3RObU5XezhnE</guid>
            <dc:creator>Chris Branch</dc:creator>
        </item>
        <item>
            <title><![CDATA[Connect and secure any private or public app by hostname, not IP — free for everyone in Cloudflare One]]></title>
            <link>https://blog.cloudflare.com/tunnel-hostname-routing/</link>
            <pubDate>Thu, 18 Sep 2025 14:00:00 GMT</pubDate>
            <description><![CDATA[ Tired of IP Lists? Securely connect private networks to any app by its hostname, not its IP address. This routing is now built into Cloudflare Tunnel and is free for all Cloudflare One customers. ]]></description>
            <content:encoded><![CDATA[ <p>Connecting to an application should be as simple as knowing its name. Yet, many security models still force us to rely on brittle, ever-changing IP addresses. And we heard from many of you that managing those ever-changing IP lists was a constant struggle. </p><p>Today, we’re taking a major step toward making that a relic of the past.</p><p>We're excited to announce that you can now route traffic to <a href="https://developers.cloudflare.com/cloudflare-one/connections/connect-networks/"><u>Cloudflare Tunnel</u></a> based on a hostname or a domain. This allows you to use Cloudflare Tunnel to build simple zero-trust and egress policies for your private and public web applications without ever needing to know their underlying IP. This is one more step on our <a href="https://blog.cloudflare.com/egress-policies-by-hostname/"><u>mission</u></a> to strengthen platform-wide support for hostname- and domain-based policies in the <a href="https://developers.cloudflare.com/cloudflare-one/"><u>Cloudflare One</u></a> <a href="https://www.cloudflare.com/learning/access-management/what-is-sase/">SASE</a> platform, simplifying complexity and improving security for our customers and end users. </p>
    <div>
      <h2>Grant access to applications, not networks</h2>
      <a href="#grant-access-to-applications-not-networks">
        
      </a>
    </div>
    <p>In August 2020, the National Institute of Standards and Technology (NIST) published <a href="https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-207.pdf"><u>Special Publication 800-207</u></a>, encouraging organizations to abandon the "castle-and-moat" model of security (where trust is established on the basis of network location) and move to a <a href="https://www.cloudflare.com/learning/security/glossary/what-is-zero-trust/">Zero Trust model </a>(where we “<a href="https://www.whitehouse.gov/wp-content/uploads/2022/01/M-22-09.pdf"><u>verify anything and everything attempting to establish access</u></a>”).</p><p>Now, instead of granting broad network permissions, you grant specific access to individual resources. This concept, known as per-resource authorization, is a cornerstone of the Zero Trust framework, and it represents a major change in how organizations have traditionally run networks. Per-resource authorization requires that access policies be configured on a per-resource basis. By applying the principle of least privilege, you give users access only to the resources they absolutely need to do their job. This tightens security and shrinks the potential attack surface for any given resource.</p><p>Instead of allowing your users to access an entire network segment, like <code><b>10.131.0.0/24</b></code>, your security policies become much more precise. For example:</p><ul><li><p>Only employees in the "SRE" group running a managed device can access <code><b>admin.core-router3-sjc.acme.local</b></code>.</p></li><li><p>Only employees in the "finance" group located in Canada can access <code><b>canada-payroll-server.acme.local</b></code>.</p></li><li><p>All employees located in New York can access <code><b>printer1.nyc.acme.local</b></code>.</p></li></ul><p>Notice what these powerful, granular rules have in common? They’re all based on the resource’s private <b>hostname</b>, not its IP address. That’s exactly what our new hostname routing enables. 
We’ve made it dramatically easier to write effective zero trust policies using stable hostnames, without ever needing to know the underlying IP address.</p>
    <div>
      <h2>Why IP-based rules break</h2>
      <a href="#why-ip-based-rules-break">
        
      </a>
    </div>
    <p>Let's imagine you need to secure an internal server, <code><b>canada-payroll-server.acme.local</b></code>. It’s hosted on internal IP <code><b>10.4.4.4</b></code> and its hostname is available in internal private DNS, but not in public DNS. In a modern cloud environment, its IP address is often the least stable thing about it. If your security policy is tied to that IP, it's built on a shaky foundation.</p><p>This happens for a few common reasons:</p><ul><li><p><b>Cloud instances</b>: When you launch a compute instance in a cloud environment like AWS, you're responsible for its <a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/hostname-types.html"><u>hostname</u></a>, but not always its IP address. As a result, you might only be tracking the hostname and may not even know the server's IP.</p></li><li><p><b>Load Balancers</b>: If the server is behind a load balancer in a cloud environment (like AWS ELB), its IP address could be changing dynamically in response to <a href="https://docs.aws.amazon.com/elasticloadbalancing/latest/application/application-load-balancers.html"><u>changes in traffic</u></a>.</p></li><li><p><b>Ephemeral infrastructure</b>: This is the "<a href="https://cloudscaling.com/blog/cloud-computing/the-history-of-pets-vs-cattle/"><u>cattle, not pets</u></a>" world of modern infrastructure. Resources like servers in an autoscaling group, containers in a Kubernetes cluster, or applications that spin down overnight are created and destroyed as needed. They keep a persistent hostname so users can find them, but their IP is ephemeral and changes every time they spin up.</p></li></ul><p>To cope with this, we've seen customers build complex scripts to maintain dynamic "IP Lists" — mappings from a hostname to its IPs that are updated every time the address changes. While this approach is clever, maintaining IP Lists is a chore. 
They are brittle, and a single error could cause employees to lose access to vital resources.</p><p>Fortunately, hostname-based routing makes this IP List workaround obsolete.</p>
    <div>
      <h2>How it works: secure a private server by hostname using Cloudflare One SASE platform</h2>
      <a href="#how-it-works-secure-a-private-server-by-hostname-using-cloudflare-one-sase-platform">
        
      </a>
    </div>
    <p>To see this in action, let's create a policy from our earlier example: we want to grant employees in the "finance" group located in Canada access to <code><b>canada-payroll-server.acme.local</b></code>. Here’s how you do it, without ever touching an IP address.</p><p><b>Step 1: Connect your private network</b></p><p>First, the server's network needs a secure connection to Cloudflare's global network. You do this by installing our lightweight agent, <a href="https://developers.cloudflare.com/cloudflare-one/connections/connect-networks/"><u>cloudflared</u></a>, in the same local area network as the server, which creates a secure Cloudflare Tunnel. You can create a new tunnel directly from cloudflared by running <code><b>cloudflared tunnel create &lt;TUNNEL-NAME&gt;</b></code> or using your Zero Trust dashboard.</p><div>
  
</div><p>
<b>Step 2: Route the hostname to the tunnel</b></p><p>This is where the new capability comes into play. In your Zero Trust dashboard, you now establish a route that binds the <i>hostname</i> <code>canada-payroll-server.acme.local</code> directly to that tunnel. In the past, you could only route an IP address (<code>10.4.4.4</code>) or its subnet (<code>10.4.4.0/24</code>). That old method required you to create and manage those brittle IP Lists we talked about. Now, you can even route entire domains, like <code>*.acme.local</code>, directly to the tunnel, simply by creating a hostname route to <code>acme.local</code>.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3mcoBAILYENIP6kGW4tw96/bb7ec6571ae7b4f04b5dc0456f694d59/1.png" />
          </figure><p>For this to work, you must delete your private network’s subnet (in this case <code>10.0.0.0/8</code>) and <code>100.64.0.0/10</code> from the <a href="https://developers.cloudflare.com/cloudflare-one/connections/connect-devices/warp/configure-warp/route-traffic/split-tunnels/"><u>Split Tunnels Exclude</u></a> list. You also need to remove <code>.local</code> from the <a href="https://developers.cloudflare.com/cloudflare-one/connections/connect-devices/warp/configure-warp/route-traffic/local-domains/"><u>Local Domain Fallback</u></a>.</p><p><b>Step 3: Write your zero trust policy</b></p><p>Now that Cloudflare knows <i>how</i> to reach your server by its name, you can write a policy to control <i>who</i> can access it. You have a couple of options:</p><ul><li><p><b>In Cloudflare Access (for HTTPS applications):</b> Write an <a href="https://developers.cloudflare.com/cloudflare-one/applications/non-http/self-hosted-private-app/"><u>Access policy</u></a> that grants employees in the “finance” group access to the private hostname <code>canada-payroll-server.acme.local</code>. This is ideal for applications accessible over HTTPS on port 443.
</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7lIZI9ThsAWtxFZZis3HtZ/08451586dbe373ff137bd9e91d23dea6/2.png" />
          </figure><p></p></li><li><p><b>In Cloudflare Gateway (for HTTPS applications):</b> Alternatively, write a <a href="https://developers.cloudflare.com/cloudflare-one/policies/gateway/"><u>Gateway policy</u></a> that grants employees in the “finance” group access to the <a href="https://developers.cloudflare.com/cloudflare-one/policies/gateway/network-policies/#sni"><u>SNI</u></a> <code>canada-payroll-server.acme.local</code>. This <a href="https://developers.cloudflare.com/cloudflare-one/policies/gateway/network-policies/protocol-detection/"><u>works</u></a> for services accessible over HTTPS on any port.
</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5GpwDZNmdzapOyjOgFFlKD/50e2d0df64d2230479ad8d0a013de24b/3.png" />
          </figure><p></p></li><li><p><b>In Cloudflare Gateway (for non-HTTP applications):</b> You can also write a <a href="https://developers.cloudflare.com/cloudflare-one/policies/gateway/"><u>Gateway policy</u></a> that blocks DNS resolution of <code>canada-payroll-server.acme.local</code> for all employees except the “finance” group.</p></li></ul>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3na5Mf6UMpBcKYm6JWmnzd/5791054c944300e667c3829e9bd8c6ec/4.png" />
          </figure><p>The principle of "trust nothing" means your security posture should start by denying traffic by default. For this setup to work in a true Zero Trust model, it should be paired with a default Gateway policy that blocks all access to your internal IP ranges. Think of this as ensuring all doors to your private network are locked by default. The specific <code>allow</code> policies you create for hostnames then act as the keycard, unlocking one specific door only for authorized users.</p><p>Without that foundational "deny" policy, creating a route to a private resource would make it accessible to everyone in your organization, defeating the purpose of a least-privilege model and creating significant security risks. This step ensures that only the traffic you explicitly permit can ever reach your corporate resources.</p><p>And there you have it. We’ve walked through the entire process of writing a per-resource policy using only the server’s private hostname. No IP Lists to be seen anywhere, simplifying life for your administrators.</p>
    <div>
      <h2>Secure egress traffic to third-party applications</h2>
      <a href="#secure-egress-traffic-to-third-party-applications">
        
      </a>
    </div>
    <p>Here's another powerful use case for hostname routing: controlling outbound connections from your users to the public Internet. Some third-party services, such as banking portals or partner APIs, use an IP allowlist for security. They will only accept connections that originate from a specific, dedicated public source IP address that belongs to your company.</p><p>This common practice creates a challenge. Let's say your banking portal at <code>bank.example.com</code> requires all traffic to come from a dedicated source IP <code>203.0.113.9</code> owned by your company. At the same time, you want to enforce a zero trust policy that <i>only</i> allows your finance team to access that portal. You can't build your policy based on the bank's destination IP — you don't control it, and it could change at any moment. You have to use its hostname.</p><p>There are two ways to solve this problem. First, if your dedicated source IP is purchased from Cloudflare, you can use the <a href="https://blog.cloudflare.com/egress-policies-by-hostname/"><u>“egress policy by hostname” feature</u></a> that we announced previously. By contrast, if your dedicated source IP belongs to your organization, or is leased from a cloud provider, then we can solve this problem with hostname-based routing, as shown in the figure below:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6wXu6FMiiVz4lXsESFrBTg/e1bb13e8eef0653ab311d0800d95f391/5.png" />
          </figure><p>Here’s how this works:</p><ol><li><p><b>Force traffic through your dedicated IP.</b> First, you deploy a <a href="https://developers.cloudflare.com/cloudflare-one/connections/connect-networks/"><u>Cloudflare Tunnel</u></a> in the network that owns your dedicated IP (for example, your primary VPC in a cloud provider). All traffic you send through this tunnel will exit to the Internet with <code>203.0.113.9</code> as its source IP.</p></li><li><p><b>Route the banking app to that tunnel.</b> Next, you create a hostname route in your Zero Trust dashboard. This rule tells Cloudflare: "Any traffic destined for <code>bank.example.com</code> must be sent through this specific tunnel."</p></li><li><p><b>Apply your user policies.</b> Finally, in Cloudflare Gateway, you create your granular access rules. A low-priority <a href="https://developers.cloudflare.com/cloudflare-one/policies/gateway/network-policies/"><u>network policy</u></a> blocks access to the <a href="https://developers.cloudflare.com/cloudflare-one/policies/gateway/network-policies/#sni"><u>SNI</u></a> <code>bank.example.com</code> for everyone. Then, a second, higher-priority policy explicitly allows users in the "finance" group to access the <a href="https://developers.cloudflare.com/cloudflare-one/policies/gateway/network-policies/#sni"><u>SNI</u></a> <code>bank.example.com</code>.</p></li></ol><p>Now, when a finance team member accesses the portal, their traffic is correctly routed through the tunnel and arrives with the source IP the bank expects. An employee from any other department is blocked by Gateway before their traffic even enters the tunnel. You've enforced a precise, user-based zero trust policy for a third-party service, all by using its public hostname.</p>
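<p>The layering in step 3 can be modeled as ordered rule evaluation. This toy model is purely illustrative (the policy shapes are invented, not the real Gateway API); it shows why the finance allow rule must take higher priority than the catch-all block:</p>

```python
# Toy model of priority-ordered policy evaluation for the egress example.
# The first matching rule wins; group None means "applies to everyone".
policies = [
    {"priority": 1, "action": "allow", "sni": "bank.example.com", "group": "finance"},
    {"priority": 2, "action": "block", "sni": "bank.example.com", "group": None},
]

def evaluate(sni, user_groups):
    for p in sorted(policies, key=lambda p: p["priority"]):
        if p["sni"] == sni and (p["group"] is None or p["group"] in user_groups):
            return p["action"]
    return "allow"  # no policy matched this SNI

print(evaluate("bank.example.com", {"finance"}))    # allow
print(evaluate("bank.example.com", {"marketing"}))  # block
```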
    <div>
      <h2>Under the hood: how hostname routing works</h2>
      <a href="#under-the-hood-how-hostname-routing-works">
        
      </a>
    </div>
    <p>To build this feature, we needed to solve a classic networking challenge. The routing mechanism for Cloudflare Tunnel is a core part of Cloudflare Gateway, which operates at both Layer 4 (TCP/UDP) and Layer 7 (HTTP/S) of the network stack.</p><p>Cloudflare Gateway must decide which Cloudflare Tunnel to send traffic to upon receipt of the very first IP packet in the connection. This means the decision must necessarily be made at Layer 4, where Gateway only sees the IP and TCP/UDP headers of a packet. IP and TCP/UDP headers contain the destination IP address, but do not contain the destination <i>hostname</i>. The hostname is only found in Layer 7 data (like a TLS SNI field or an HTTP Host header), which isn't even available until after the Layer 4 connection is already established.</p><p>This creates a dilemma: how can we route traffic based on a hostname before we've even seen the hostname? </p>
    <div>
      <h3>Synthetic IPs to the rescue</h3>
      <a href="#synthetic-ips-to-the-rescue">
        
      </a>
    </div>
    <p>The solution lies in the fact that Cloudflare Gateway also acts as a DNS resolver. This means we see the user's <i>intent </i>— the DNS query for a hostname — <i>before</i> we see the actual application traffic. We use this foresight to "tag" the traffic using a <a href="https://blog.cloudflare.com/egress-policies-by-hostname/"><u>synthetic IP address</u></a>.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7Kd3x5SppGp8G4KZeO34n/67b338ca8e81db63e110dc89c7596bf6/6.png" />
          </figure><p>Let’s walk through the flow:</p><ol><li><p><b>DNS Query</b>. A user's device sends a DNS query for
 <code>canada-payroll-server.acme.local </code>to the Gateway resolver.</p></li><li><p><b>Private Resolution</b>. Gateway asks the <code>cloudflared </code>agent running in your private network to resolve the real IP for that hostname. Since <code>cloudflared</code> has access to your internal DNS, it finds the real private IP <code>10.4.4.4</code>, and sends it back to the Gateway resolver.</p></li><li><p><b>Synthetic Response</b>. Here's the key step. Gateway resolver <b>does not</b> send the real IP (<code>10.4.4.4</code>) back to the user. Instead, it temporarily assigns an <i>initial resolved IP</i> from a reserved Carrier-Grade NAT (CGNAT) address space (e.g., <code>100.80.10.10</code>) and sends the initial resolved IP back to the user's device. The initial resolved IP acts as a tag that allows Gateway to identify network traffic destined to <code>canada-payroll-server.acme.local</code>. The initial resolved IP is randomly selected and temporarily assigned from one of the two IP address ranges:</p><ul><li><p>IPv4: <code>100.80.0.0/16</code></p></li><li><p>IPv6: <code>2606:4700:0cf1:4000::/64</code> </p></li></ul></li><li><p><b>Traffic Arrives</b>. The user's device sends its application traffic (e.g., an HTTPS request) to the destination IP it received from Gateway resolver: the initial resolved IP <code>100.80.10.10</code>.</p></li><li><p><b>Routing and Rewriting</b>. When Gateway sees an incoming packet destined for <code>100.80.10.10</code>, it knows this traffic is for <code>canada-payroll-server.acme.local</code> and must be sent through a specific Cloudflare Tunnel. It then rewrites the destination IP on the packet back to the <i>real</i> private destination IP (<code>10.4.4.4</code>) and sends it down the correct tunnel.</p></li></ol><p>The traffic goes down the tunnel and arrives at <code>canada-payroll-server.acme.local</code> at IP (<code>10.4.4.4)</code> and the user is connected to the server without noticing any of these mechanisms. 
By intercepting the DNS query, we effectively tag the network traffic stream, allowing our Layer 4 router to make the right decision without needing to see Layer 7 data.</p>
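<p>The tagging can be sketched in a few lines. This is a rough illustration of the idea, not Cloudflare’s implementation: it hands out synthetic IPs sequentially from <code>100.80.0.0/16</code> rather than randomly, and skips expiry of stale mappings:</p>

```python
import ipaddress

# Sketch of synthetic-IP tagging: map each resolved private hostname to a
# temporary IP from the reserved pool, then recover the hostname and the
# real destination when the first packet for that synthetic IP arrives.
class SyntheticResolver:
    def __init__(self):
        self._pool = ipaddress.ip_network("100.80.0.0/16").hosts()
        self._by_synthetic = {}  # synthetic IP -> (hostname, real private IP)

    def resolve(self, hostname, real_ip):
        synthetic = str(next(self._pool))
        self._by_synthetic[synthetic] = (hostname, real_ip)
        return synthetic  # returned to the client instead of real_ip

    def route(self, dst_ip):
        # Called on the first packet of a connection: which target was meant?
        return self._by_synthetic.get(dst_ip)

r = SyntheticResolver()
tag = r.resolve("canada-payroll-server.acme.local", "10.4.4.4")
print(tag)           # 100.80.0.1
print(r.route(tag))  # ('canada-payroll-server.acme.local', '10.4.4.4')
```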
    <div>
      <h2>Using Gateway Resolver Policies for fine grained control</h2>
      <a href="#using-gateway-resolver-policies-for-fine-grained-control">
        
      </a>
    </div>
    <p>The routing capabilities we've discussed provide simple, powerful ways to connect to private resources. But what happens when your network architecture is more complex? For example, what if your private DNS servers are in one part of your network, but the application itself is in another?</p><p>With Cloudflare One, you can solve this by creating policies that separate the path for DNS resolution from the path for application traffic for the very same hostname using <a href="https://developers.cloudflare.com/cloudflare-one/policies/gateway/resolver-policies"><u>Gateway Resolver Policies</u></a>. This gives you fine-grained control to match complex network topologies.</p><p>Let's walk through a scenario:</p><ul><li><p>Your private DNS resolvers, which can resolve <code><b>acme.local</b></code>, are located in your core datacenter, accessible only via <code><b>tunnel-1</b></code>.</p></li><li><p>The webserver for <code><b>canada-payroll-server.acme.local</b></code><b> </b>is hosted in a specific cloud VPC, accessible only via <code><b>tunnel-2</b></code>.</p></li></ul>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2sVMsS4DhuN2yoTlGWTK5X/e5a66330c951e7b65428f5c76b5c7b0a/7.png" />
          </figure><p>Here’s how to configure this split-path routing.</p><p><b>Step 1: Route DNS Queries via </b><code><b>tunnel-1</b></code></p><p>First, we need to tell Cloudflare Gateway how to reach your private DNS server</p><ol><li><p><b>Create an IP Route:</b> In the Networks &gt; Tunnels area of your Zero Trust dashboard, create a route for the IP address of your private DNS server (e.g., <code><b>10.131.0.5/32</b></code>) and point it to <code><b>tunnel-1</b></code><code>.</code> This ensures any traffic destined for that specific IP goes through the correct tunnel to your datacenter.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/32JcjFZXGuhDEHHlWJoF1C/4223a6f2e5b7b49015abfbfd9b4fd20f/8.png" />
          </figure><p></p></li><li><p><b>Create a Resolver Policy:</b> Go to <b>Gateway -&gt; Resolver Policies</b> and create a new policy with the following logic:</p><ul><li><p><b>If</b> the query is for the domain <code><b>acme.local</b></code> …</p></li><li><p><b>Then</b>... resolve it using a designated DNS server with the IP <code><b>10.131.0.5</b></code>.
</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2j8kYsD692tCRYcDKoDXvb/7dbb20f426ba47350fb0b2906046d5f0/9.png" />
          </figure><p></p></li></ul></li></ol><p>With these two rules, any DNS lookup for <code><b>acme.local</b></code> from a user's device will be sent through <code>tunnel-1</code> to your private DNS server for resolution.</p><p><b>Step 2: Route Application Traffic via </b><code><b>tunnel-2</b></code></p><p>Next, we'll tell Gateway where to send the actual traffic (for example, HTTP/S) for the application.</p><p><b>Create a Hostname Route:</b> In your Zero Trust dashboard, create a <b>hostname route</b> that binds <code><b>canada-payroll-server.acme.local </b></code>to <code><b>tunnel-2</b></code>.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3Ufzpsb1FUYrM39gMiyovs/c5d10828f58b0e7c854ff9fa721e1757/10.png" />
          </figure><p>This rule instructs Gateway that any application traffic (like HTTP, SSH, or any TCP/UDP traffic) for <code><b>canada-payroll-server.acme.local</b></code> must be sent through <code><b>tunnel-2</b></code><b> </b>leading to your cloud VPC.</p><p>Similarly to a setup without Gateway Resolver Policy, for this to work, you must delete your private network’s subnet (in this case <code>10.0.0.0/8</code>) and <code>100.64.0.0/10</code> from the <a href="https://developers.cloudflare.com/cloudflare-one/connections/connect-devices/warp/configure-warp/route-traffic/split-tunnels/"><u>Split Tunnels Exclude</u></a> list. You also need to remove <code>.local</code> from the <a href="https://developers.cloudflare.com/cloudflare-one/connections/connect-devices/warp/configure-warp/route-traffic/local-domains/"><u>Local Domain Fallback</u></a>.</p><p><b>Putting It All Together</b></p><p>With these two sets of policies, the "synthetic IP" mechanism handles the complex flow:</p><ol><li><p>A user tries to access <code>canada-payroll-server.acme.local</code>. Their device sends a DNS query to Cloudflare Gateway Resolver.</p></li><li><p>This DNS query matches a Gateway Resolver Policy, causing Gateway Resolver to forward the DNS query through <code>tunnel-1</code> to your private DNS server (<code>10.131.0.5</code>).</p></li><li><p>Your DNS server responds with the server’s actual private destination IP (<code>10.4.4.4</code>).</p></li><li><p>Gateway receives this IP and generates a “synthetic” initial resolved IP (<code>100.80.10.10</code>) which it sends back to the user's device.</p></li><li><p>The user's device now sends the HTTP/S request to the initial resolved IP (<code>100.80.10.10</code>).</p></li><li><p>Gateway sees the network traffic destined for the initial resolved IP (<code>100.80.10.10</code>) and, using the mapping, knows it's for <code>canada-payroll-server.acme.local</code>.</p></li><li><p>The Hostname Route now matches. 
Gateway sends the application traffic through <code>tunnel-2</code> and rewrites its destination IP to the webserver’s actual private IP (<code>10.4.4.4</code>).</p></li><li><p>The <code>cloudflared</code> agent at the end of <code>tunnel-2</code> forwards the traffic to the application's destination IP (<code>10.4.4.4</code>), which is on the same local network.</p></li></ol><p>The user is connected, without noticing that DNS and application traffic have been routed over totally separate private network paths. This approach allows you to support sophisticated split-horizon DNS environments and other advanced network architectures with simple, declarative policies.</p>
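The flow above can be modeled in a few lines. This is an illustrative sketch only, not Cloudflare's implementation: the route table, the mapping store, and the function names are all hypothetical.

```python
import ipaddress

# Hypothetical hostname-route table: private hostname -> tunnel.
HOSTNAME_ROUTES = {"canada-payroll-server.acme.local": "tunnel-2"}

# Pool of "synthetic" initial resolved IPs, drawn from 100.80.0.0/16.
_pool = ipaddress.ip_network("100.80.0.0/16").hosts()

# initial resolved IP -> (hostname, real private IP), populated at DNS time.
_mappings = {}

def resolve(hostname: str, real_ip: str) -> str:
    """Steps 2-4: the private DNS server answered with real_ip; hand the
    client a synthetic initial resolved IP instead and remember the pair."""
    synthetic = str(next(_pool))
    _mappings[synthetic] = (hostname, real_ip)
    return synthetic

def route_packet(dst_ip: str):
    """Steps 6-8: traffic to a synthetic IP is matched back to its hostname,
    sent down the hostname route's tunnel, and its destination rewritten to
    the real private IP."""
    hostname, real_ip = _mappings[dst_ip]
    return HOSTNAME_ROUTES[hostname], real_ip
```

Calling `resolve("canada-payroll-server.acme.local", "10.4.4.4")` hands back an address in `100.80.0.0/16`, and feeding that address to `route_packet` yields `("tunnel-2", "10.4.4.4")`.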
    <div>
      <h2>What onramps does this support?</h2>
      <a href="#what-onramps-does-this-support">
        
      </a>
    </div>
    <p>Our hostname routing capability is built on the "synthetic IP" (also known as <i>initially resolved IP</i>) mechanism detailed earlier, which requires specific Cloudflare One products to correctly handle both the DNS resolution and the subsequent application traffic. Here’s a breakdown of what’s currently supported for connecting your users (on-ramps) and your private applications (off-ramps).</p>
    <div>
      <h4><b>Connecting Your Users (On-Ramps)</b></h4>
      <a href="#connecting-your-users-on-ramps">
        
      </a>
    </div>
    <p>For end-users to connect to private hostnames, the feature currently works with <a href="https://developers.cloudflare.com/cloudflare-one/connections/connect-devices/warp/"><b><u>WARP Client</u></b></a>, agentless <a href="https://developers.cloudflare.com/cloudflare-one/connections/connect-devices/agentless/pac-files/"><b><u>PAC files</u></b></a> and <a href="https://developers.cloudflare.com/cloudflare-one/policies/browser-isolation/"><b><u>Browser Isolation</u></b></a>.</p><p>Connectivity is also possible when users are behind <a href="https://developers.cloudflare.com/magic-wan/"><b><u>Magic WAN</u></b></a> (in active-passive mode) or <a href="https://developers.cloudflare.com/cloudflare-one/connections/connect-networks/private-net/warp-connector/"><b><u>WARP Connector</u></b></a>, but it requires some additional configuration. To ensure traffic is routed correctly, you must update the routing table on your device or router to send traffic for the following destinations through Gateway:</p><ul><li><p>The initially resolved IP ranges: <code>100.80.0.0/16</code> (IPv4) and <code>2606:4700:0cf1:4000::/64</code> (IPv6).</p></li><li><p>The private network CIDR where your application is located (e.g., <code>10.0.0.0/8</code>).</p></li><li><p>The IP address of your internal DNS resolver.</p></li><li><p>The Gateway DNS resolver IPs: <code>172.64.36.1</code> and <code>172.64.36.2</code>.</p></li></ul><p>Magic WAN customers will also need to point their DNS resolver to these Gateway resolver IPs and ensure they are running Magic WAN tunnels in active-passive mode: for hostname routing to work, DNS queries and the resulting network traffic must reach Cloudflare over the same Magic WAN tunnel. Currently, hostname routing will not work if your end users are at a site that has more than one Magic WAN tunnel actively transiting traffic at the same time.</p>
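As a sanity check on that routing-table configuration, a small sketch can classify which destinations must transit Gateway. The private CIDR (`10.0.0.0/8`) and the internal resolver IP (`10.131.0.5`) below are example values from this post; the helper itself is hypothetical.

```python
import ipaddress

# Destinations that must be routed through Gateway, per the list above.
VIA_GATEWAY = [
    ipaddress.ip_network("100.80.0.0/16"),             # initial resolved IPv4 range
    ipaddress.ip_network("2606:4700:0cf1:4000::/64"),  # initial resolved IPv6 range
    ipaddress.ip_network("10.0.0.0/8"),                # private network CIDR (example)
    ipaddress.ip_network("10.131.0.5/32"),             # internal DNS resolver (example)
    ipaddress.ip_network("172.64.36.1/32"),            # Gateway DNS resolver
    ipaddress.ip_network("172.64.36.2/32"),            # Gateway DNS resolver
]

def routes_via_gateway(dst: str) -> bool:
    """True if traffic to dst should be sent through the Gateway tunnel."""
    addr = ipaddress.ip_address(dst)
    # Membership tests across mismatched IP versions are simply False.
    return any(addr in net for net in VIA_GATEWAY)
```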
    <div>
      <h4><b>Connecting Your Private Network (Off-Ramps)</b></h4>
      <a href="#connecting-your-private-network-off-ramps">
        
      </a>
    </div>
    <p>On the other side of the connection, hostname-based routing is designed specifically for applications connected via <a href="https://developers.cloudflare.com/cloudflare-one/connections/connect-networks/"><b><u>Cloudflare Tunnel</u></b></a> (<code>cloudflared</code>). This is currently the only supported off-ramp for routing by hostname.</p><p>Other traffic off-ramps, while fully supported for IP-based routing, are not yet compatible with this specific hostname-based feature. This includes using Magic WAN, WARP Connector, or WARP-to-WARP connections as the off-ramp to your private network. We are actively working to expand support for more on-ramps and off-ramps in the future, so stay tuned for more updates.</p>
    <div>
      <h2>Conclusion</h2>
      <a href="#conclusion">
        
      </a>
    </div>
    <p>By enabling routing by hostname directly within Cloudflare Tunnel, we’re making security policies simpler, more resilient, and more aligned with how modern applications are built. You no longer need to track ever-changing IP addresses. You can now build precise, per-resource authorization policies for HTTPS applications based on the one thing that should matter: the name of the service you want to connect to. This is a fundamental step in making a zero trust architecture intuitive and achievable for everyone.</p><p>This powerful capability is available today, built directly into Cloudflare Tunnel and free for all Cloudflare One customers.</p><p>Ready to leave IP Lists behind for good? Get started by exploring our <a href="https://developers.cloudflare.com/cloudflare-one/connections/connect-networks/private-net/cloudflared/connect-private-hostname/"><u>developer documentation</u></a> to configure your first hostname route. If you're new to <a href="https://developers.cloudflare.com/cloudflare-one/"><u>Cloudflare One</u></a>, you can sign up today and begin securing your applications and networks in minutes.</p> ]]></content:encoded>
            <category><![CDATA[Cloudflare Tunnel]]></category>
            <category><![CDATA[SASE]]></category>
            <category><![CDATA[Cloudflare One]]></category>
            <category><![CDATA[Cloudflare Network]]></category>
            <category><![CDATA[Cloudflare Gateway]]></category>
            <category><![CDATA[Egress]]></category>
            <category><![CDATA[Zero Trust]]></category>
            <category><![CDATA[Access Control Lists (ACLs)]]></category>
            <category><![CDATA[Hostnames]]></category>
            <guid isPermaLink="false">gnroEH7P2oE00Ba0wJLHT</guid>
            <dc:creator>Nikita Cano</dc:creator>
            <dc:creator>Sharon Goldberg</dc:creator>
        </item>
        <item>
            <title><![CDATA[Introducing simple and secure egress policies by hostname in Cloudflare’s SASE platform]]></title>
            <link>https://blog.cloudflare.com/egress-policies-by-hostname/</link>
            <pubDate>Mon, 07 Jul 2025 13:00:00 GMT</pubDate>
            <description><![CDATA[ Cloudflare's SASE platform now offers egress policies by hostname, domain, content category, and application in open beta. ]]></description>
            <content:encoded><![CDATA[ <p>Cloudflare’s <a href="https://www.cloudflare.com/learning/access-management/what-is-sase/"><u>SASE</u></a> platform is on a mission to strengthen our platform-wide support for hostname- and domain-based policies. This mission is being driven by enthusiastic demands from our customers, and boosted along the way by several interesting engineering challenges. Today, we’re taking a deep dive into the first milestone of this mission, which we recently released in open beta: <a href="https://developers.cloudflare.com/cloudflare-one/policies/gateway/egress-policies/"><u>egress policies</u></a> by hostname, domain, <a href="https://developers.cloudflare.com/cloudflare-one/policies/gateway/domain-categories/#content-categories"><u>content category</u></a>, and <a href="https://developers.cloudflare.com/cloudflare-one/policies/gateway/application-app-types/"><u>application</u></a>. Let’s dive right in! </p>
    <div>
      <h2>Egress policies and IP ACLs</h2>
      <a href="#egress-policies-and-ip-acls">
        
      </a>
    </div>
    <p>Customers use our <a href="https://developers.cloudflare.com/learning-paths/secure-internet-traffic/build-egress-policies/"><u>egress policies</u></a> to control how their organization's Internet traffic connects to external services. An egress policy allows a customer to control the source IP address their traffic uses, as well as the geographic location that their traffic uses to egress onto the public Internet. Control of the source IP address is especially useful when accessing external services that apply policies to traffic based on source IPs, using IP Access Control Lists (ACLs). Some services use IP ACLs because they improve security, while others use them because they are explicitly required by regulation or compliance frameworks. </p><p>(That said, it's important to clarify that we do not recommend relying on IP ACLs as the only security mechanism used to gate access to a resource. Instead, IP ACLs should be used in combination with strong authentication like <a href="https://www.cloudflare.com/learning/access-management/what-is-sso/"><u>Single Sign On (SSO)</u></a>, <a href="https://www.cloudflare.com/learning/access-management/what-is-multi-factor-authentication/"><u>Multi Factor Authentication (MFA)</u></a>, <a href="https://fidoalliance.org/passkeys/"><u>passkeys</u></a>, etc.).</p><p>Let’s make the use case for egress policies more concrete with an example. </p><p>Imagine that Acme Co is a company that has purchased its own <a href="https://developers.cloudflare.com/cloudflare-one/policies/gateway/egress-policies/dedicated-egress-ips/"><u>dedicated egress</u></a> IP address <code>203.0.113.9</code> from Cloudflare. Meanwhile, imagine a regulated banking application (<code>https://bank.example.com</code><u>)</u> that only grants access to the corporate account for Acme Co when traffic originates from source IP address <code>203.0.113.9</code>. Any traffic with a different source IP will be prevented from accessing Acme Co’s corporate account. 
In this way, the banking application uses IP ACLs to ensure that only employees from Acme Co can access Acme Co’s corporate account. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7KwZQRJksemP5QwXzT0S2P/2a45d3ac7581da31485a6d15c5ba6b03/image3.png" />
          </figure>
    <div>
      <h2>Egress policies by hostname</h2>
      <a href="#egress-policies-by-hostname">
        
      </a>
    </div>
    <p>Continuing our example, suppose that Acme Co wants to ensure that the banking application is off limits to all of its employees except those on its finance team. To accomplish this, Acme Co wants to write an egress policy that allows members of the finance team to egress from <code>203.0.113.9</code> when accessing <code>https://bank.example.com</code>, but employees outside of finance will not egress from <code>203.0.113.9</code> when attempting to access <code>https://bank.example.com</code>.  </p><p>As shown in the figure above, the combination of the banking application's IP ACLs and Acme Co’s egress policies ensures that <code>https://bank.example.com</code> is only accessible to its finance employees at Acme Co. </p><p>While this all sounds great, until now, this scenario was fairly difficult to achieve on <a href="https://www.cloudflare.com/zero-trust/products/"><u>Cloudflare’s SASE platform</u></a>. While we have long supported <a href="https://developers.cloudflare.com/cloudflare-one/policies/gateway/egress-policies/"><u>egress policies</u></a> by user groups and other user attributes, we did not support writing egress policies by hostname. Instead, customers had to resort to writing egress policies by destination IP addresses.</p><p>To understand why customers have been clamoring for egress policies by hostname, let’s return to our example: </p><p>In our example, Acme Co wants to write a policy that allows only the finance team to access <code>https://bank.example.com</code>. In the past, in the absence of egress policies by hostname, Acme Co would need to write its egress policy in terms of the destination IP address of the banking application. </p><p>But how does Acme Co know the destination IP address of this banking application? 
The first problem is that the destination IP address belongs to an external service that is not controlled by Acme Co, and the second problem is that this IP address could change frequently, especially if the banking application uses <a href="https://en.wikipedia.org/wiki/Ephemeral_architecture"><u>ephemeral infrastructure</u></a> or sits behind a <a href="https://www.cloudflare.com/learning/cdn/glossary/reverse-proxy/"><u>reverse proxy</u></a> or <a href="https://www.cloudflare.com/learning/cdn/what-is-a-cdn/"><u>CDN</u></a>. Keeping up with changes to the destination IP address of an external service led some of our customers to write their own homegrown scripts that continuously update destination <a href="https://developers.cloudflare.com/cloudflare-one/policies/gateway/lists/"><u>IP Lists</u></a> which are then fed to our egress policies using Cloudflare’s <a href="https://developers.cloudflare.com/api/resources/zero_trust/"><u>API</u></a>.</p><p>With this new feature, we do away with all these complications and simply allow our customers to write egress policies by hostname. </p>
    <div>
      <h2>Egress policies by domains, categories and applications too</h2>
      <a href="#egress-policies-by-domains-categories-and-applications-too">
        
      </a>
    </div>
    <p>Before we continue, we should note that this new feature also supports writing egress policies by:</p><ul><li><p>Domain: For example, we can now write an egress policy for <code>*.bank.example.com</code>, rather than an individual policy for each hostname (<code>bank.example.com</code>, <code>app.acmeco.bank.example.com</code>, <code>auth.bank.example.com</code>, etc.)</p></li><li><p>Category: For example, we can now write a single egress policy to control the egress IP address that employees use when accessing a site in the Cryptocurrency content category, rather than an individual policy for every Cryptocurrency website.</p></li><li><p>Application: For example, we can write a single egress policy for Slack, without needing to know all the different host and domain names (e.g. <code>app.slack.com</code>, <code>slack.com</code>, <code>acmeco.slack.com</code>, <code>slack-edge.com</code>) that Slack uses to serve its application.</p></li></ul><p>Here’s an example of writing an egress policy by application:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1so8jKsDbeWSfJAxh3yE9V/176e682aa1b0617fde4dd90732930460/image6.png" />
          </figure><p><sup><i>A view of the Cloudflare </i></sup><a href="https://dash.cloudflare.com/"><sup><i><u>dashboard</u></i></sup></a><sup><i> showing how to write an egress policy for a set of users and Applications. The policy is applied to users in the “finance” user group when accessing Applications including Microsoft Team, Slack, BambooHR and Progressive, and it determines the source IP that traffic uses when it egresses to the public Internet.</i></sup></p>
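Conceptually, matching a resolved hostname against a domain policy like <code>*.bank.example.com</code> works like a longest-suffix lookup. The sketch below is a simplified illustration with a hypothetical policy table, not Gateway's actual matching code:

```python
def match_policy(hostname: str, policies: dict):
    """Return the policy matching hostname: exact match first, then
    wildcard suffixes from most to least specific."""
    if hostname in policies:
        return policies[hostname]
    labels = hostname.split(".")
    for i in range(1, len(labels)):
        wildcard = "*." + ".".join(labels[i:])
        if wildcard in policies:
            return policies[wildcard]
    return None

# One wildcard rule covers every subdomain of bank.example.com.
policies = {"*.bank.example.com": "egress via 203.0.113.9"}
```

With this table, both `acmeco.bank.example.com` and `auth.bank.example.com` resolve to the single wildcard policy, while unrelated hostnames match nothing.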
    <div>
      <h2>Why was this so hard to build?</h2>
      <a href="#why-was-this-so-hard-to-build">
        
      </a>
    </div>
    <p>Now let’s get into the engineering challenges behind this feature.</p><p>Egress policies are part of <a href="https://www.cloudflare.com/products/zero-trust/gateway/"><u>Cloudflare Gateway</u></a>. Cloudflare Gateway is a <a href="https://www.cloudflare.com/learning/access-management/what-is-a-secure-web-gateway/"><u>Secure Web Gateway (SWG)</u></a> that operates as both a <a href="https://www.cloudflare.com/learning/ddos/glossary/open-systems-interconnection-model-osi/"><u>layer 4 (L4)</u></a> and <a href="https://www.cloudflare.com/learning/ddos/glossary/open-systems-interconnection-model-osi/"><u>layer 7 (L7)</u></a> <a href="https://developers.cloudflare.com/learning-paths/replace-vpn/configure-device-agent/enable-proxy/"><u>proxy</u></a>. In other words, Cloudflare Gateway intercepts traffic by inspecting it at the transport layer (layer 4, including TCP and UDP), as well as at the application layer (layer 7, including HTTP).</p><p>The problem is that egress policies must necessarily be evaluated at layer 4, rather than at layer 7. Why? Because egress policies are used to select the source IP address for network traffic, and Cloudflare Gateway must select the source IP address for traffic <i>before</i> it creates the connection to the external service <code>bank.example.com</code>. If Gateway changes the source IP address in the middle of the connection, the connection will be broken. Therefore, Gateway must apply egress policies before it sends the very first packet in the connection. For instance, Gateway must apply egress policies before it sends the TCP SYN, which of course happens well before it sends any layer 7 information (e.g. HTTP). 
(See <a href="https://developers.cloudflare.com/cloudflare-one/policies/gateway/order-of-enforcement/"><u>here</u></a> for more information on Gateway’s order of enforcement for its policies.)</p><p>The bottom line is that Gateway has no other information to use when applying the egress policy, other than the information in the IP header and the L4 (e.g. TCP) header of an IP packet. As you can see for the TCP/IPv4 packet below, a destination hostname is not part of the IP or TCP header in a packet. That's why we previously were not able to support egress policies by hostname, and instead required administrators to write egress policies by destination IP address.</p>
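To see concretely what is (and is not) available at layer 4, here is a small sketch that unpacks the IPv4 and TCP headers of a hand-built packet: source and destination addresses and ports are all there, but no hostname. The packet bytes and field values are fabricated for illustration.

```python
import struct

def parse_l3_l4(packet: bytes):
    """Unpack the fields an L4 egress decision can actually see in a
    TCP/IPv4 packet: addresses and ports. There is no hostname field."""
    # IPv4 header: first 20 bytes (IP options are ignored in this sketch).
    (_ver_ihl, _tos, _total_len, _ident, _frag,
     _ttl, _proto, _cksum, src, dst) = struct.unpack("!BBHHHBBH4s4s", packet[:20])
    # TCP header follows; only the port fields are needed here.
    sport, dport = struct.unpack("!HH", packet[20:24])
    dotted = lambda raw: ".".join(str(b) for b in raw)
    return dotted(src), sport, dotted(dst), dport

# A hand-built SYN-like packet from 192.168.1.10 to 96.7.128.198:443.
pkt = struct.pack("!BBHHHBBH4s4s", 0x45, 0, 44, 1, 0, 64, 6, 0,
                  bytes([192, 168, 1, 10]), bytes([96, 7, 128, 198]))
pkt += struct.pack("!HH", 51234, 443)
```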
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5LS9YhSoRJzwBG18wt55Fa/e0a7251ef8a7fe15c9c3ccc42b0e7fb6/image4.png" />
          </figure>
    <div>
      <h2>So how did we build the feature?</h2>
      <a href="#so-how-did-we-build-the-feature">
        
      </a>
    </div>
    <p>We took advantage of the fact that <a href="https://developers.cloudflare.com/cloudflare-one/policies/gateway/"><u>Cloudflare Gateway</u></a> also operates its own <a href="https://www.cloudflare.com/learning/access-management/what-is-dns-filtering/"><u>DNS resolver</u></a>. Every time an end user wants to access a destination hostname, the Gateway resolver first receives a DNS query for that hostname before the user’s device sends its network traffic to the destination. </p><p>To support egress policies by hostname, Gateway associates the DNS query for the hostname with the IP address and TCP/UDP information in the network connection to the hostname. Specifically, Gateway will map the destination IP address in the end-user’s network connection to the hostname in the DNS query using a “synthetic IP” mechanism that is best explained using a diagram:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4cyJ0nD9cpXDqmo7FQJ0Ew/f9d93a8721645845c2036021dad57c27/image2.png" />
          </figure><p>Let’s walk through the flow:</p><p>1. When the end user makes a DNS query for <code>bank.example.com</code>, that DNS query is sent to the Gateway resolver.</p><p>2. The Gateway resolver does a public DNS lookup to associate <code>bank.example.com</code> with its destination IP address, which is <code>96.7.128.198</code>.</p><p>3. However, the Gateway resolver will not respond to the DNS query using the real destination IP <code>96.7.128.198</code>. Instead, it responds with an <i>initial resolved IP address</i> of <code>100.80.10.10</code>. This is not the real IP address for <code>bank.example.com</code>; instead, it acts as a tag that allows Gateway to identify network traffic destined for <code>bank.example.com</code>. The initial resolved IP is randomly selected and temporarily assigned from one of the two IP address ranges below, which correspond to the Carrier Grade Network Address Translation (CGNAT) IP address spaces as defined in <a href="https://datatracker.ietf.org/doc/html/rfc6598"><u>RFC 6598</u></a> and <a href="https://datatracker.ietf.org/doc/rfc6264/"><u>RFC 6264</u></a>, respectively.</p><p>IPv4: <code>100.80.0.0/16</code></p><p>IPv6: <code>2606:4700:0cf1:4000::/64</code></p><p>4. Gateway has now associated the initial resolved IP address <code>100.80.10.10</code> with the hostname <code>bank.example.com</code>. Thus, when Gateway sees network traffic to destination IP address <code>100.80.10.10</code>, it recognizes the traffic and applies the egress policy for <code>bank.example.com</code>. </p><p>5. After applying the egress policy, Gateway rewrites the initial resolved IP address <code>100.80.10.10</code> on the network traffic to the actual IP address <code>96.7.128.198</code> for <code>bank.example.com</code>, and sends it out via the configured egress IP so that it can reach its destination.</p><p>The network traffic now has the correct destination IP address and egresses according to the policy for <code>bank.example.com</code>, and all is well! </p>
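The five steps above can be modeled in a few lines of Python. This is a toy illustration of the mechanism, not Cloudflare's code; the DNS table, the policy table, and the function names are hypothetical.

```python
import ipaddress
import random

SYNTHETIC_V4 = ipaddress.ip_network("100.80.0.0/16")

# Hypothetical stand-ins for the public DNS answer and the egress policy.
PUBLIC_DNS = {"bank.example.com": "96.7.128.198"}
EGRESS_POLICIES = {"bank.example.com": "203.0.113.9"}

_by_synthetic = {}  # initial resolved IP -> hostname

def answer_query(hostname: str) -> str:
    """Steps 1-3: answer the client's DNS query with a randomly chosen
    initial resolved IP rather than the real destination address."""
    index = random.randrange(1, SYNTHETIC_V4.num_addresses - 1)
    synthetic = str(SYNTHETIC_V4[index])
    _by_synthetic[synthetic] = hostname
    return synthetic

def egress(dst_ip: str):
    """Steps 4-5: recognize traffic to an initial resolved IP, select the
    egress source IP from policy, and rewrite to the real destination."""
    hostname = _by_synthetic[dst_ip]
    return EGRESS_POLICIES[hostname], PUBLIC_DNS[hostname]
```

In this model, `answer_query("bank.example.com")` returns an address inside `100.80.0.0/16`, and `egress` on that address returns the policy's source IP together with the real destination `96.7.128.198`.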
    <div>
      <h2>Making it work for domains, categories and applications</h2>
      <a href="#making-it-work-for-domains-categories-and-applications">
        
      </a>
    </div>
    <p>So far, we’ve seen how the mechanism works with individual hostnames (i.e. Fully Qualified Domain Names (FQDNs) like <code>bank.example.com</code>). What about egress policies for domains and subdomains like <code>*.bank.example.com</code>? What about <a href="https://developers.cloudflare.com/cloudflare-one/policies/gateway/domain-categories/#content-categories"><u>content categories</u></a> and <a href="https://developers.cloudflare.com/cloudflare-one/policies/gateway/application-app-types/"><u>applications</u></a>, which are essentially sets of hostnames grouped together?</p><p>We are able to support these use cases because (returning to our example above) Gateway temporarily assigns the initial resolved IP address <code>100.80.10.10</code> to the hostname <code>bank.example.com</code> for a short period of time. After this period, the initial resolved IP address is released and returned to the pool of available addresses (in <code>100.80.0.0/16</code>), where it can be assigned to another hostname in the future.</p><p>In other words, we use random, dynamic assignment of initial resolved IP addresses, rather than statically associating a single initial resolved IP address with a single hostname. The result is that we can apply IPv4 egress policies to a very large number of hostnames, rather than being limited by the 65,536 IP addresses available in the <code>100.80.0.0/16</code> IPv4 address block.</p><p>Randomly assigning the initial resolved IP address also means that we can apply a single egress policy for a wildcard like <code>*.bank.example.com</code> to any traffic we happen to come across, such as traffic for <code>acmeco.bank.example.com</code> or <code>auth.bank.example.com</code>. 
A static mapping would require the customer to write a different policy for each individual hostname, which is clunkier and more difficult to manage.</p><p>Thus, by using dynamic assignments of initial resolved IP addresses, we simplify our customers’ egress policies and all is well!</p><p>Actually, not quite. There’s one other problem we need to solve.</p>
    <div>
      <h2>Landing on the same server</h2>
      <a href="#landing-on-the-same-server">
        
      </a>
    </div>
    <p>Cloudflare has an <a href="https://www.cloudflare.com/network"><u>extensive global network</u></a>, with servers running our software stack in over 330 cities in 125 countries. Our architecture is such that sharing strongly-consistent storage across those servers (even within a single data center) comes with some performance and reliability costs. For this reason, we decided to build this feature under the assumption that state could not be shared between any Cloudflare servers, even servers in the same data center.</p><p>This assumption created an interesting challenge for us. Let’s see why.</p><p>Returning to our running example, suppose that the end user’s DNS traffic lands on one Cloudflare server while the end user’s network traffic lands on a different Cloudflare server. Those servers do not share state.  Thus, it’s not possible to associate the mapping from hostname to its actual destination IP address (<code>bank.example.com</code> = <code>96.7.128.198</code>) which was obtained from the DNS traffic, with the initial resolved IP that is used by the network traffic (i.e. <code>100.80.10.10</code>). Our mechanism would break down and traffic would be dropped, as shown in the figure below.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/76uOsvToz6PnHVjprOKYGy/d978576f2a1d8b6a246431035ecf7a30/Landing_on_the_same_server.png" />
          </figure><p>We solve this problem by ensuring that DNS traffic and network traffic land on the same Cloudflare server. In particular, we require DNS traffic to go into the same tunnel as network traffic so that both traffic flows land on the same Cloudflare server. For this reason, egress policies by hostname are only supported when end users connect to the Cloudflare network using one of the following on-ramps:</p><ul><li><p>The WARP client (which we recently <a href="https://developers.cloudflare.com/cloudflare-one/changelog/warp/#2025-05-14"><u>upgraded</u></a> to send <a href="https://developers.cloudflare.com/cloudflare-one/connections/connect-devices/warp/configure-warp/route-traffic/warp-architecture/#overview"><u>DNS traffic inside the WARP tunnel</u></a>)</p></li><li><p>PAC files</p></li><li><p>Browser Isolation</p></li></ul><p>We are actively working to expand support of this feature to more onramps. </p>
    <div>
      <h2>What’s next?</h2>
      <a href="#whats-next">
        
      </a>
    </div>
    <p>There’s a lot more coming. Besides expanding support for more onramps, we also plan to extend this support to hostname-based rulesets in more parts of Cloudflare’s SASE. Stand by for more updates from us on this topic. All of these new features will rely on the “initial resolved IP” mechanism that we described above, empowering our customers to simplify their rulesets and enforce tighter security policies in their Cloudflare SASE deployments.</p><p>Don't wait to gain granular control over your network traffic: log in to your Cloudflare <a href="https://dash.cloudflare.com/"><u>dashboard</u></a> to explore the beta release of <a href="https://developers.cloudflare.com/cloudflare-one/policies/gateway/egress-policies/"><u>egress policies</u></a> by hostname / domain / category / application and bolster your security strategy with <a href="https://developers.cloudflare.com/reference-architecture/diagrams/sase/"><u>Cloudflare SASE</u></a>.</p> ]]></content:encoded>
            <category><![CDATA[Cloudflare Gateway]]></category>
            <category><![CDATA[Cloudflare One]]></category>
            <category><![CDATA[Egress]]></category>
            <category><![CDATA[SASE]]></category>
            <category><![CDATA[Zero Trust]]></category>
            <category><![CDATA[Access Control Lists (ACLs)]]></category>
            <category><![CDATA[Categories]]></category>
            <category><![CDATA[Hostnames]]></category>
            <guid isPermaLink="false">1NxtPefzr7flsiIe8gZ43L</guid>
            <dc:creator>Ankur Aggarwal</dc:creator>
            <dc:creator>Sharon Goldberg</dc:creator>
            <dc:creator>João Paiva</dc:creator>
            <dc:creator>Alyssa Wang</dc:creator>
        </item>
        <item>
            <title><![CDATA[Simplify allowlist management and lock down origin access with Cloudflare Aegis]]></title>
            <link>https://blog.cloudflare.com/aegis-deep-dive/</link>
            <pubDate>Thu, 20 Mar 2025 13:00:00 GMT</pubDate>
            <description><![CDATA[ Cloudflare Aegis provides dedicated egress IPs for Zero Trust origin access strategies, now supporting BYOIP and customer-facing configurability, with observability of Aegis IP utilization soon. ]]></description>
            <content:encoded><![CDATA[ <p>Today, we’re taking a deep dive into <a href="https://developers.cloudflare.com/aegis/"><u>Aegis</u></a>, Cloudflare’s origin protection product, to help you understand what the product is, how it works, and how to take full advantage of it for locking down access to your origin. We’re excited to announce the availability of <a href="https://developers.cloudflare.com/byoip/"><u>Bring Your Own IPs (BYOIP)</u></a> for Aegis, a customer-accessible Aegis API, and a gradual rollout for observability of Aegis IP utilization.</p><p>If you are new to Cloudflare Aegis, let’s take a step back and understand the product’s purpose and security benefits, process, and how it came to be. </p>
    <div>
      <h3>Origin protection then…</h3>
      <a href="#origin-protection-then">
        
      </a>
    </div>
    <p>Allowlisting a specific set of IP addresses has long existed as one of the simplest ways of restricting access to a server. This firewall mechanism is a starting state that just about every server supports. As we built Cloudflare’s network, one of the first features that customers requested was the ability to restrict access to their origin, so only Cloudflare could make requests to it. Back then, the most natural way to support this was to tell our customers which IP addresses belong to us, so they could allowlist those in their origin firewall. To that end, we have published our <a href="https://www.cloudflare.com/ips/"><u>IP address ranges</u></a>, providing an easy configuration to ensure that all traffic accessing your origin comes from Cloudflare’s network.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4I2fP03AeszxuHL78Ap9lA/bf75dcb9259b8b97b73f55831b3c019f/BLOG-2609_2.png" />
          </figure><p>However, Cloudflare’s IP ranges are used across multiple Cloudflare services and customers, so restricting access to the full list doesn’t necessarily give customers the security benefit they need. Given the <a href="https://www.cloudflare.com/2024-api-security-management-report/#:~:text=Cloudflare%20serves%20over%2050%20million,billion%20cyber%20threats%20each%20day."><u>frequency</u></a> and <a href="https://blog.cloudflare.com/how-cloudflare-auto-mitigated-world-record-3-8-tbps-ddos-attack/"><u>scale</u></a> of IP-based and DDoS attacks every day, origin protection is absolutely paramount. Some customers need more stringent security precautions to ensure that traffic is only coming from Cloudflare’s network and, more specifically, only coming from their zones within Cloudflare.</p>
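The classic allowlist described above amounts to a source-IP membership check at the origin firewall, which a sketch makes concrete. Only a few of the published IPv4 ranges are shown here; the authoritative, current list lives at <code>cloudflare.com/ips</code> and should be synced from there.

```python
import ipaddress

# A small subset of Cloudflare's published IPv4 ranges (see cloudflare.com/ips).
CLOUDFLARE_RANGES = [
    ipaddress.ip_network("173.245.48.0/20"),
    ipaddress.ip_network("103.21.244.0/22"),
    ipaddress.ip_network("104.16.0.0/13"),
]

def allow_request(src_ip: str) -> bool:
    """Origin-firewall style check: accept a connection only when its
    source address falls inside one of Cloudflare's published ranges."""
    addr = ipaddress.ip_address(src_ip)
    return any(addr in net for net in CLOUDFLARE_RANGES)
```

Note that this check only proves the connection came from Cloudflare's network, not from your own zones, which is the gap Aegis closes.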
    <div>
      <h3>Origin protection now…</h3>
      <a href="#origin-protection-now">
        
      </a>
    </div>
    <p>Cloudflare has built out additional services to lock down access to your origin, like <a href="https://developers.cloudflare.com/ssl/origin-configuration/authenticated-origin-pull/"><u>authenticated origin pulls</u></a> (mTLS) and <a href="https://developers.cloudflare.com/cloudflare-one/connections/connect-networks/"><u>Cloudflare Tunnels</u></a>, which no longer rely on IP addresses as an indicator of identity. These are part of a global effort towards <a href="https://www.cloudflare.com/learning/security/glossary/what-is-zero-trust/#:~:text=Zero%20Trust%20security%20means%20that,shown%20to%20prevent%20data%20breaches."><u>Zero Trust security</u></a>: whereas the Internet used to operate under a trust-but-verify model, we aim to operate as if nothing is trusted and everything is verified. </p><p>Having non-ephemeral IP addresses — upon which the firewall allowlist mechanism relies — does not quite fit the Zero Trust system. Although mTLS and similar solutions present a more modern approach to origin security, they aren’t always feasible for customers, depending on their hardware or system architecture. </p><p>We launched <a href="https://blog.cloudflare.com/cloudflare-aegis/"><u>Cloudflare Aegis</u></a> in March 2023 for customers seeking an intermediary security solution. Aegis provides a dedicated IP address, or set of addresses, from which Cloudflare sends requests, allowing you to further lock down your origin’s layer 3 firewall. Aegis also simplifies management by only requiring you to allowlist a small number of IP addresses. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3KmeBCAqzygLJWR7oMf7Mw/5303403c266e80d271160b9c32cbd764/BLOG-2609_3.png" />
          </figure><p>Normally, Cloudflare’s <a href="https://www.cloudflare.com/ips/"><u>publicly listed IP ranges</u></a> are used to egress from Cloudflare’s network to the customer origin. With these IP addresses distributed across Cloudflare’s network, the customer traffic can egress from many servers to the customer origin.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3RbmfIwMajVSjnHdEBN1Y/f99b2f41cb289df93f90c1960bf6a497/BLOG-2609_4.png" />
          </figure><p>With Aegis, a customer does not necessarily have an Aegis IP address on every server if they are using IPv4. That means requests must be routed through Cloudflare’s network to a server where Aegis IP addresses are present before the traffic can egress to the customer origin. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2TdNsZJmckJZuVCmAyCGbA/d7ed52f94c8a6fc375363bf011e40229/BLOG-2609_5.png" />
          </figure>
    <div>
      <h3>How requests are routed with Aegis</h3>
      <a href="#how-requests-are-routed-with-aegis">
        
      </a>
    </div>
    <p>A few terms, before we begin:</p><ul><li><p>Anycast: a technology where each of our data centers “announces” and can handle the same IP address ranges</p></li><li><p>Unicast: a technology where each server is given its own, unique <i>unicast</i> IP address</p></li></ul><p>Dedicated egress Aegis IPs are located in a specific set of data centers. This list is handpicked by the customer, in conversation with Cloudflare, to be geographically close to their origin servers for optimal security and performance. </p><p>Aegis relies on a technology called <a href="https://blog.cloudflare.com/cloudflare-servers-dont-own-ips-anymore/#soft-unicast-is-indistinguishable-from-magic"><u>soft-unicasting</u></a>, which allows us to share a /32 egress IPv4 address amongst many servers, thereby enabling us to spread a single subnet across many data centers. Then, the traffic going back from the origin servers (the return path) is routed to the data center closest to the origin. Once in Cloudflare's network, our in-house <a href="https://blog.cloudflare.com/unimog-cloudflares-edge-load-balancer/"><u>L4 XDP-based load balancer, Unimog,</u></a> ensures that the return packets make it back to the machine that connected to the origin servers at the start.</p><p>This supports fast, local, and reliable egress from Cloudflare’s network. With this configuration, we essentially use <a href="https://www.cloudflare.com/learning/cdn/glossary/anycast-network/"><u>Anycast</u></a> at the <a href="https://www.cloudflare.com/learning/security/glossary/what-is-bgp/"><u>BGP layer</u></a> before using an IP and port range to reach a specific machine in the correct data center. Across Cloudflare’s network, we use a significant range of egress IPs to cover all data centers and machines. Since Aegis customers only have a few IPv4 addresses, the range is limited to a few data centers rather than Cloudflare’s entire egress IP range.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6Y2sZmlh9uy6tRAiJhCZOp/ae186c36f30ba80500b615451952dfd7/BLOG-2609_6.png" />
          </figure>
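<p>As a rough illustration of the soft-unicast idea above, the Python sketch below shows how a single shared egress IP's port range can be partitioned so that the destination port of a return packet alone identifies the right data center. The data center codes and slice layout are hypothetical, not Cloudflare's actual implementation:</p>

```python
# Hypothetical sketch of soft-unicast routing: one /32 egress IP is shared
# by slicing its port range across data centers, so the data center serving
# a return packet can be recovered from the destination port alone.

PORT_MIN, PORT_MAX = 1024, 65535  # assume only the non-reserved port range is sliced

def build_port_slices(datacenters):
    """Split the usable port range into one contiguous slice per data center."""
    total = PORT_MAX - PORT_MIN + 1
    width = total // len(datacenters)
    slices = {}
    for i, dc in enumerate(datacenters):
        lo = PORT_MIN + i * width
        hi = PORT_MAX if i == len(datacenters) - 1 else lo + width - 1
        slices[dc] = (lo, hi)
    return slices

def route_return_packet(dst_port, slices):
    """Pick the data center whose slice owns a return packet's destination port."""
    for dc, (lo, hi) in slices.items():
        if lo <= dst_port <= hi:
            return dc
    raise ValueError(f"port {dst_port} outside the sliced range")

slices = build_port_slices(["SJC", "IAD", "FRA", "SIN"])
print(route_return_packet(40000, slices))  # FRA
```

<p>In the real deployment the mapping is managed network-wide, and Unimog then steers the packet onward to the specific machine that opened the connection; the sketch only captures the port-to-data-center step.</p>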
    <div>
      <h3>The capacity issue</h3>
      <a href="#the-capacity-issue">
        
      </a>
    </div>
    <p>Every IP address has <a href="https://www.cloudflare.com/learning/network-layer/what-is-a-computer-port/#:~:text=There%20are%2065%2C535%20possible%20port,File%20Transfer%20Protocol%20(FTP)."><u>65,535 ports</u></a>. A request egresses from exactly one port on the Aegis IP address to exactly one port on the origin IP address. </p><p>Each TCP request consists of a 4-way tuple that contains:</p><ol><li><p>Source IP address</p></li><li><p>Source port</p></li><li><p>Destination IP address</p></li><li><p>Destination port</p></li></ol><p>A <a href="https://blog.cloudflare.com/everything-you-ever-wanted-to-know-about-udp-sockets-but-were-afraid-to-ask-part-1/"><u>UDP request</u></a> can also consist of a 4-way tuple (if it’s connected) or a 2-way tuple (if it’s unconnected), simply including a bind IP address and port. Aegis supports both TCP and UDP traffic — in either case, the requests rely upon IP:port pairings between the source and destination. </p><p>When a request reaches the origin, it opens a <i>connection</i>, through which data can pass between the source and destination. One source port can sustain multiple connections at a time, <i>only</i> if the destination IP:ports are different. </p><p>Normally at Cloudflare, an IP address establishes connections to a variety of different destination IP addresses and ports to support high traffic volumes. With Aegis, that is no longer the case. The challenge with Aegis IP capacity is exactly that: all the traffic is egressing to the same (or a small set of) origin IP address(es) from the same (or a small set of) source IP address(es). That means Aegis IP addresses have capacity constraints associated with them.</p><p>The number of <i>concurrent connections</i> is the number of simultaneous connections between a given source and destination. 
Between one client and one server, the volume of concurrent connections is inherently limited by the number of ports on an IP address to 65,535 — each source IP:port can only support a single outbound connection per destination IP:port. In practice, that maximum number of concurrent connections is often lower due to assignments of port ranges across many servers and imperfect load distribution. </p><p>For planning purposes, we use an estimate of ~80% of the IP capacity (the volume of concurrent connections a source IP address can support to a destination IP address) to protect against overload, in case of traffic spikes. If every port on an IP address is maintaining a concurrent connection, that address would reach and exceed capacity — its ports would be exhausted. Requests may then be dropped since no new connections can be established. To build in resiliency, we only plan to support 40,000 concurrent connections per Aegis IP address per origin.</p>
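<p>These figures lend themselves to a quick back-of-the-envelope sizing. The Python sketch below uses the 40,000-connections-per-IP-per-origin planning ceiling from this section; the helper name is illustrative, and this is an estimate rather than an official sizing tool:</p>

```python
# Back-of-the-envelope Aegis IPv4 sizing: each source IP:port carries one
# connection per destination IP:port, and planning assumes 40,000 concurrent
# connections per Aegis IP address per origin IP.
import math

PORTS_PER_IP = 65_535            # hard ceiling per source IP per destination IP:port
PLANNED_CONNS_PER_IP = 40_000    # conservative planning ceiling from the text

def aegis_ipv4_needed(peak_concurrent_connections: int, origin_ips: int = 1) -> int:
    """Estimate how many Aegis IPv4 addresses a peak connection load needs.

    Capacity scales with the number of distinct origin IPs, because the
    4-tuple (src IP, src port, dst IP, dst port) must be unique per connection.
    """
    capacity_per_ip = PLANNED_CONNS_PER_IP * origin_ips
    return math.ceil(peak_concurrent_connections / capacity_per_ip)

# 100k concurrent connections to one origin IP needs 3 Aegis addresses...
print(aegis_ipv4_needed(100_000))                # 3
# ...but only 2 if the load is spread across 2 origin IPs.
print(aegis_ipv4_needed(100_000, origin_ips=2))  # 2
```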
    <div>
      <h3>Aegis with IPv6</h3>
      <a href="#aegis-with-ipv6">
        
      </a>
    </div>
    <p>Each customer who onboards with Cloudflare Aegis receives two <a href="https://www.ripe.net/about-us/press-centre/understanding-ip-addressing/#:~:text=of%20IPv6%20addresses.-,IPv6%20Relative%20Network%20Sizes,-/128"><u>/64 prefixes</u></a> to be globally allocated and announced. That means, outside of Cloudflare’s China Network, every Cloudflare data center has hundreds or even thousands of addresses reserved for egressing your traffic directly to your origin. Without Aegis, any data center in Cloudflare’s Anycast network can serve as a point of egress – so we built Aegis with IPv6 to preserve that level of resiliency and performance. The sheer scale of IPv6, with its available address space, allows us to cushion Aegis’ capacity to a point far beyond any reasonable concern. Globally allocating and announcing your Aegis IPv6 addresses maintains all of Cloudflare’s functionality as a reverse proxy without inducing additional friction.</p>
    <div>
      <h3>Aegis with IPv4</h3>
      <a href="#aegis-with-ipv4">
        
      </a>
    </div>
    <p>Although using IPv6 with Aegis facilitates the best possible speed and resiliency for your traffic, we recognize the transition from IPv4 to IPv6 can be challenging for some customers. Moreover, some customers prefer Aegis IPv4 for granular control over their traffic’s physical egress locations. Still, IPv4 space is more limited and more expensive — while all Cloudflare Aegis customers simply receive two dedicated /64s for IPv6, enabling Aegis with IPv4 requires a touch more tailoring. When you onboard to Aegis, we work with you to determine the ideal number of IPv4 addresses for your Aegis configuration to maintain optimal performance and resiliency, while also ensuring cost efficiency. </p><p>Naturally, this introduces a bottleneck — whereas every Cloudflare data center can serve as a point of egress with Aegis IPv6, only a small fraction will have that capability with Aegis IPv4. We aim to mitigate this impact by careful provisioning of the IPv4 addresses. </p><p>Now that BYOIP for Aegis is supported, you can also onboard an entire IPv4 <a href="https://www.ripe.net/about-us/press-centre/understanding-ip-addressing/#:~:text=in%20that%20%E2%80%9Cblock%E2%80%9D.-,IPv4,-The%20size%20of"><u>/24</u></a> prefix or IPv6 /64 for Aegis, allowing for a cost-effective configuration with a much higher volume of capacity.</p><p>When we launched Aegis, each IP address was allocated to one data center, requiring at least two IPv4 addresses for appropriate resiliency. To reduce the number of IP addresses necessary in your layer 3 firewall allowlist, and to manage the cost to the customer of leasing IPs, we expanded our Aegis functionality so that one address can be announced from up to four data centers. To do this, we essentially slice the available IP port range into four subsets and provision each at a unique data center. </p><p>A quick refresher: when a request travels through Cloudflare, it first hits our network via an <i>ingress data center</i>. 
The ingress data center is generally near the eyeball (the end user) sending the request. Then, the request is routed following BGP – or <a href="https://developers.cloudflare.com/argo-smart-routing/"><u>Argo Smart Routing</u></a>, when enabled – to an <i>exit, or egress, data center</i>. The exit data center will generally fall in close geographic proximity to the request’s destination, which is the customer origin. This mitigates latency induced by the final hop from Cloudflare’s network to your origin.</p><p>With Aegis, the possible exit data centers are limited to the data centers in which an Aegis IP address has been allocated. For IPv6, this is a non-issue, since every data center outside our China Network is covered. With IPv4, however, the exit data centers are limited to a much smaller number (4 × the number of Aegis IPs). Aegis IP addresses are allocated, then, to data centers in close geographic proximity to your origin(s). This maximizes the likelihood that whichever data center would ordinarily have been selected as the egress data center is already announcing Aegis IP addresses. Theoretically, no extra hop is necessary from the optimal exit data center to an Aegis-enabled data center – they are one and the same. In practice, this cannot be guaranteed 100% of the time because optimal routes are ever-changing. We recommend IPv6 to ensure optimal performance because of this possibility of an extra hop with IPv4.</p><p>A brief comparison, to summarize:</p><table><tr><th><p>
</p></th><th><p><b>Aegis IPv4</b></p></th><th><p><b>Aegis IPv6</b></p></th></tr><tr><td><p>Physical points of egress</p></td><td><p>4 physical data center sites (1-2 cities near origin) per IP address</p></td><td><p>All 300+ Cloudflare <a href="https://www.cloudflare.com/network/"><u>locations</u></a> (excluding China network)</p></td></tr><tr><td><p>Capacity</p></td><td><p>One IPv4 address per 40,000 concurrent connections per origin</p></td><td><p>Two /64 prefixes for all Aegis customers (&gt;36 quintillion IP addresses)</p><p>~50,000x capacity of IPv4 config</p></td></tr><tr><td><p>Pricing model</p></td><td><p>Monthly fee based on IPv4 leases or BYOIP for Aegis prefix fees</p></td><td><p>Included with product purchase or BYOIP for Aegis prefix fees</p></td></tr></table><p>Now, with Aegis analytics coming soon, customers can monitor and manage their IP address usage by Cloudflare data centers in aggregate. Every Cloudflare data center will now run a service with the sole purpose of calculating and reporting Aegis usage for each origin IP:port at regular intervals. Written to an internal database, these reports will be aggregated and exposed to customers via Cloudflare’s <a href="https://developers.cloudflare.com/analytics/graphql-api/"><u>GraphQL Analytics API</u></a>. Several aggregation functions will be available, such as average usage over a period of time, or total summed usage.</p><p>This will allow customers to track their own IP address usage to further optimize the distribution of traffic and addresses across different points of presence for IPv4. Additionally, the improved observability will support customer-created notifications via RSS feeds such that you can design your own notification thresholds for port usage.</p>
    <div>
      <h3>How Aegis benefits from connection reuse &amp; coalescence</h3>
      <a href="#how-aegis-benefits-from-connection-reuse-coalescence">
        
      </a>
    </div>
    <p>As we mentioned earlier, requests egress from the source IP address to the destination IP address only when a connection has been established between the two. In early Internet protocols, requests and connections were 1:1. Now, once that connection is open, it can remain open and support hundreds or thousands of requests between that source and destination via <i>connection reuse</i> and <i>connection coalescing</i>. </p><p>Connection reuse, implemented by <a href="https://datatracker.ietf.org/doc/html/rfc2616"><u>HTTP/1.1</u></a>, allows for requests with the same source IP:port and destination IP:port to pass through the same connection to the origin. A “simple” website by modern standards can send hundreds of requests just to load initially; by streamlining these into a single origin connection, connection reuse reduces the latency incurred by constantly opening and closing new connections between two endpoints. Still, any request from a different domain would need to create a new, unique connection to communicate with the origin. </p><p>As of <a href="https://datatracker.ietf.org/doc/html/rfc7540"><u>HTTP/2</u></a>, connection coalescing can group requests from different domains into one connection if the requests have the same destination IP address and the server certificate is authoritative for both domains. Depending on the traffic patterns routing from the eyeball to an Aegis IP address, the volume of connection reuse &amp; coalescence can vary. One connection most likely facilitates the traffic of many requests, but each connection requires at least one request to open it in the first place. Therefore, the worst possible ratio between concurrent connections and concurrent requests is 1:1. </p><p>In practice, a 1:1 ratio between connections and requests almost <i>never</i> happens. 
Connection reuse and connection coalescence are very common but highly variable, due to sporadic traffic patterns. We size our Aegis IP address allocations accordingly, erring on the conservative side to minimize risk of capacity overload. With the proper number of dedicated egress IP addresses and optimal allocation to Cloudflare points of presence, we are able to lock down your origin with IPv4 addresses to block malicious layer 7 traffic and reduce overall load to your origin. </p><p>Connection reuse and coalescence pair well with Aegis to reduce load on the origin’s side as well. Because a connection can be reused when requests come from the same source IP:port and share a destination IP:port, routing traffic from a reduced number of source IP addresses (Aegis addresses, in this case) to your origin results in a smaller number of total connections. Not only does this improve security by limiting open connection access, but it also reduces latency, since fewer connections need to be opened. Maintaining fewer connections is also less resource intensive — more connections means more CPU and more memory handling the inbound requests. By reducing the number of connections to the origin through reuse and coalescence, HTTP/2 lowers the overall cost of operation by optimizing resource usage. </p>
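<p>As a toy model of the coalescing rule above, the Python sketch below checks whether two requests may share one HTTP/2 connection. The rule is simplified from RFC 7540, and the hostnames, IPs, and helper name are illustrative:</p>

```python
# Minimal sketch of the HTTP/2 connection-coalescing rule: requests for
# different hostnames may share one origin connection when both names
# resolve to the same destination IP and the server certificate is
# authoritative for both.

def can_coalesce(host_a: str, host_b: str,
                 resolve: dict[str, str],
                 cert_names: set[str]) -> bool:
    """True if requests to host_a and host_b may share one HTTP/2 connection."""
    ip_a, ip_b = resolve.get(host_a), resolve.get(host_b)
    same_destination = ip_a is not None and ip_a == ip_b
    cert_covers_both = host_a in cert_names and host_b in cert_names
    return same_destination and cert_covers_both

resolve = {"www.example.com": "203.0.113.10",
           "img.example.com": "203.0.113.10",
           "api.example.net": "203.0.113.99"}
cert_names = {"www.example.com", "img.example.com"}

# Same destination IP, certificate covers both names -> coalescable.
print(can_coalesce("www.example.com", "img.example.com", resolve, cert_names))  # True
# Different destination IP -> a separate connection is required.
print(can_coalesce("www.example.com", "api.example.net", resolve, cert_names))  # False
```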
    <div>
      <h3>Recap and recommendations</h3>
      <a href="#recap-and-recommendations">
        
      </a>
    </div>
    <p>Cloudflare Aegis locks down your origin by restricting access via your origin’s layer 3 firewall. By routing traffic from Cloudflare’s network to your origin through dedicated egress IP addresses, you can ensure that requests coming from Cloudflare are legitimate customer traffic. With a simple flip-switch configuration — allowlisting your Aegis IP addresses in your origin’s firewall — you can block excessive noise and bad actors from accessing your origin. So, to help you take full advantage of Aegis, let’s recap:</p><ul><li><p>Concurrent connections can be, at worst, a 1:1 ratio to concurrent requests.</p></li><li><p>Cloudflare bases our IP address usage recommendations on 40,000 concurrent connections to minimize risk of capacity overload.</p></li><li><p>Each Aegis IP address supports an estimated 40,000 concurrent connections per origin IP address.</p></li></ul><p>Additionally, we’re excited to now support:</p><ul><li><p><a href="https://developers.cloudflare.com/api/resources/zones/subresources/settings/methods/edit/"><u>Public Aegis API</u></a> </p></li><li><p><a href="https://developers.cloudflare.com/aegis/setup/"><u>BYOIP for Aegis</u></a> </p></li><li><p>Customer-facing Aegis observability (coming soon via gradual rollout)</p></li></ul><p>For customers leasing Cloudflare-owned Aegis IP addresses, the Aegis API will allow you to enable and disable Aegis on zones within your parent account (parent being the account which owns the IP lease). 
If you deploy your Aegis IP addresses across multiple accounts, you’ll still rely on Cloudflare’s account team to enable and disable Aegis on zones within those additional accounts.</p><p>For customers who leverage BYOIP for Aegis, the Aegis API will allow you to enable and disable Aegis on zones within your parent account <i>and</i> within any accounts to which you <a href="https://developers.cloudflare.com/byoip/concepts/prefix-delegations/#:~:text=BYOIP%20supports%20prefix%20delegations%2C%20which,service%20used%20with%20the%20prefix."><u>delegate prefix permissions</u></a>. We recommend BYOIP for Aegis for improved configurability and cost efficiency. </p><table><tr><th><p>
</p></th><th><p><b>BYOIP</b></p></th><th><p><b>Cloudflare-owned IPs</b></p></th></tr><tr><td><p>Enable Aegis on zones on parent account</p></td><td><p>✓</p></td><td><p>✓</p></td></tr><tr><td><p>Enable Aegis on zones beyond parent account</p></td><td><p>✓</p></td><td><p>✗</p></td></tr><tr><td><p>Disable Aegis on zones on parent account</p></td><td><p>✓</p></td><td><p>✓</p></td></tr><tr><td><p>Disable Aegis on zones beyond parent account</p></td><td><p>✓</p></td><td><p>✗</p></td></tr><tr><td><p>Access Aegis analytics via the API</p></td><td><p>✓</p></td><td><p>✓</p></td></tr></table><p>With the improved Aegis observability, all Aegis customers will be able to monitor their port usage by IP address, account, zone, and data centers in aggregate via the API. You will also be able to ingest these metrics to configure your own, customizable alerts based on certain port usage thresholds. Alongside the new configurability of Aegis, this visibility will better equip customers to manage their Aegis deployments themselves and alert <i>us</i> to any changes, rather than the other way around.</p><p>We also have a few adjacent recommendations to optimize your Aegis configuration. 
We generally encourage the following best practices for security hygiene for your origin and traffic as well.</p><ol><li><p><b>IPv6 Compatibility</b>: if your origin(s) support IPv6, you will experience even better resiliency, performance, and availability with your dedicated egress IP addresses at a lower overall cost.</p></li><li><p><a href="https://www.cloudflare.com/learning/performance/http2-vs-http1.1/"><b><u>HTTP/2</u></b></a><b> or </b><a href="https://www.cloudflare.com/learning/performance/what-is-http3/"><b><u>HTTP/3</u></b></a><b> adoption</b>: by supporting connection reuse and coalescence, you will reduce overall load to your origin and latency in the path of your request.</p></li><li><p><b>Multi-level origin protection</b>: while Aegis protects your origin at the network level, it pairs well with <a href="https://blog.cloudflare.com/access-aegis-cni/"><u>Access and CNI</u></a>, <a href="https://developers.cloudflare.com/ssl/origin-configuration/authenticated-origin-pull/"><u>Authenticated Origin Pulls</u></a>, and/or other Cloudflare products to holistically protect, verify, and facilitate your traffic from edge to origin.</p></li></ol><p>If you or your organization want to enhance security and lock down your origin with dedicated egress IP addresses, reach out to your account team to onboard today. </p> ]]></content:encoded>
            <category><![CDATA[Security Week]]></category>
            <category><![CDATA[Aegis]]></category>
            <category><![CDATA[Egress]]></category>
            <category><![CDATA[IPv4]]></category>
            <category><![CDATA[IPv6]]></category>
            <guid isPermaLink="false">LPhv5n2cp5pkZBwAC8hN0</guid>
            <dc:creator>Mia Malden</dc:creator>
            <dc:creator>Adrien Vasseur</dc:creator>
        </item>
        <item>
            <title><![CDATA[Cloudflare R2 and MosaicML enable training LLMs on any compute, anywhere in the world, with zero switching costs]]></title>
            <link>https://blog.cloudflare.com/cloudflare-r2-mosaicml-train-llms-anywhere-faster-cheaper/</link>
            <pubDate>Tue, 16 May 2023 13:00:54 GMT</pubDate>
            <description><![CDATA[ Together, Cloudflare and MosaicML give customers the freedom to train LLMs on any compute, anywhere in the world, with zero switching costs. That means faster, cheaper training runs, and no vendor lock in. ]]></description>
            <content:encoded><![CDATA[ <p></p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5BYHRApu97ZOlXvHf0cIMs/4f30b2009add920757a79e9f6c6ca1b5/111.png" />
            
            </figure><p>Building the large language models (LLMs) and diffusion models that power <a href="https://www.cloudflare.com/learning/ai/what-is-generative-ai/">generative AI</a> requires massive infrastructure. The most obvious component is compute – hundreds to thousands of GPUs – but an equally critical (and often overlooked) component is the <b>data storage infrastructure.</b> Training datasets can be terabytes to petabytes in size, and this data needs to be read in parallel by thousands of processes. In addition, model checkpoints need to be saved frequently throughout a training run, and for LLMs these checkpoints can each be hundreds of gigabytes!</p><p>To <a href="https://r2-calculator.cloudflare.com/">manage storage costs</a> and scalability, many machine learning teams have been moving to <a href="https://www.cloudflare.com/learning/cloud/what-is-object-storage/">object storage</a> to host their datasets and checkpoints. Unfortunately, most <a href="https://www.cloudflare.com/developer-platform/products/r2/">object store providers</a> use egress fees to “lock in” users to their platform. This makes it very difficult to leverage GPU capacity across multiple cloud providers, or take advantage of lower / dynamic pricing elsewhere, since the data and model checkpoints are too expensive to move. At a time when cloud GPUs are scarce, and new hardware options are entering the market, it’s more important than ever to stay flexible.</p><p>In addition to high egress fees, there is a technical barrier to object-store-centric machine learning training. Reading and writing data between object storage and compute clusters requires high throughput, efficient use of network bandwidth, determinism, and elasticity (the ability to train on different numbers of GPUs). Building training software to handle all of this correctly and reliably is hard!</p><p>Today, we’re excited to show how MosaicML’s tools and Cloudflare R2 can be used together to address these challenges. 
First, with MosaicML’s open source <a href="https://github.com/mosaicml/streaming">StreamingDataset</a> and <a href="https://github.com/mosaicml/composer">Composer</a> libraries, you can easily stream in training data and read/write model checkpoints back to R2. All you need is an Internet connection. Second, thanks to R2’s zero-egress pricing, you can start/stop/move/resize jobs in response to GPU availability and prices across compute providers, without paying any data transfer fees. The MosaicML training platform makes it dead simple to orchestrate such training jobs across multiple clouds.</p><p>Together, Cloudflare and MosaicML give you the freedom to train LLMs on <i>any</i> compute, <i>anywhere</i> in the world, with zero switching costs. That means faster, cheaper training runs, and no vendor lock in :)</p><blockquote><p><i>“With the MosaicML training platform, customers can efficiently use R2 as the durable storage backend for training LLMs on any compute provider with zero egress fees. AI companies are facing outrageous cloud costs, and they are on the hunt for the tools that can provide them with the speed and flexibility to train their best model at the best price.”</i> – <b>Naveen Rao, CEO and co-founder, MosaicML</b></p></blockquote>
    <div>
      <h3>Reading data from R2 using StreamingDataset</h3>
      <a href="#reading-data-from-r2-using-streamingdataset">
        
      </a>
    </div>
    <p>To read data from R2 efficiently and deterministically, you can use the MosaicML <a href="https://github.com/mosaicml/streaming">StreamingDataset</a> library. First, write your training data (images, text, video, anything!) into <code>.mds</code> shard files using the provided Python API:</p>
            <pre><code>import numpy as np
from PIL import Image
from streaming import MDSWriter

# Local or remote directory in which to store the compressed output files
data_dir = 'path-to-dataset'

# A dictionary mapping input fields to their data types
columns = {
    'image': 'jpeg',
    'class': 'int'
}

# Shard compression, if any
compression = 'zstd'

# Save the samples as shards using MDSWriter
with MDSWriter(out=data_dir, columns=columns, compression=compression) as out:
    for i in range(10000):
        sample = {
            'image': Image.fromarray(np.random.randint(0, 256, (32, 32, 3), np.uint8)),
            'class': np.random.randint(10),
        }
        out.write(sample)</code></pre>
            <p>After your dataset has been converted, you can upload it to R2. Below we demonstrate this with the <code>awscli</code> command line tool, but you can also use <code>wrangler</code> or any <a href="https://www.cloudflare.com/developer-platform/solutions/s3-compatible-object-storage/">S3-compatible tool</a> of your choice. StreamingDataset will also support direct cloud writing to R2 soon!</p>
            <pre><code>$ aws s3 cp --recursive path-to-dataset s3://my-bucket/folder --endpoint-url $S3_ENDPOINT_URL</code></pre>
            <p>Finally, you can read the data into any device that has read access to your R2 bucket. You can fetch individual samples, loop over the dataset, and feed it into a standard PyTorch dataloader.</p>
            <pre><code>from torch.utils.data import DataLoader
from streaming import StreamingDataset

# Make sure that R2 credentials and $S3_ENDPOINT_URL are set in your environment    
# e.g. export S3_ENDPOINT_URL="https://[uid].r2.cloudflarestorage.com"

# Remote path where full dataset is persistently stored
remote = 's3://my-bucket/folder'

# Local working dir where dataset is cached during operation
local = '/tmp/path-to-dataset'

# Create streaming dataset
dataset = StreamingDataset(local=local, remote=remote, shuffle=True)

# Let's see what is in sample #1337...
sample = dataset[1337]
img = sample['image']
cls = sample['class']

# Create PyTorch DataLoader
dataloader = DataLoader(dataset)</code></pre>
            <p>StreamingDataset comes out of the box with high performance, elastic determinism, fast resumption, and multi-worker support. It also uses smart shuffling and distribution to ensure that download bandwidth is minimized. Across a variety of workloads such as LLMs and diffusion models, we find that there is no impact on training throughput (no dataloader bottleneck) when training from object stores like R2. For more information, check out the StreamingDataset <a href="https://www.mosaicml.com/blog/mosaicml-streamingdataset">announcement blog</a>!</p>
    <div>
      <h3>Reading/writing model checkpoints to R2 using Composer</h3>
      <a href="#reading-writing-model-checkpoints-to-r2-using-composer">
        
      </a>
    </div>
    <p>Streaming data into your training loop solves half of the problem, but how do you load/save your model checkpoints? Luckily, if you use a training library like <a href="https://github.com/mosaicml/composer">Composer</a>, it’s as easy as pointing at an R2 path!</p>
            <pre><code>from composer import Trainer
...

# Make sure that R2 credentials and $S3_ENDPOINT_URL are set in your environment
# e.g. export S3_ENDPOINT_URL="https://[uid].r2.cloudflarestorage.com"

trainer = Trainer(
        run_name='mpt-7b',
        model=model,
        train_dataloader=train_loader,
        ...
        save_folder='s3://my-bucket/mpt-7b/checkpoints',
        save_interval='1000ba',
        # load_path='s3://my-bucket/mpt-7b-prev/checkpoints/ep0-ba100-rank0.pt',
    )</code></pre>
            <p>Composer uses asynchronous uploads to minimize wait time as checkpoints are being saved during training. It also works out of the box with multi-GPU and multi-node training, and <b>does not require a shared file system.</b> This means you can skip setting up an expensive EFS/NFS system for your compute cluster, saving thousands of dollars or more per month on public clouds. All you need is an Internet connection and appropriate credentials – your checkpoints arrive safely in your R2 bucket giving you scalable and secure storage for your private models.</p>
    <div>
      <h3>Using MosaicML and R2 to train anywhere efficiently</h3>
      <a href="#using-mosaicml-and-r2-to-train-anywhere-efficiently">
        
      </a>
    </div>
    <p>Using the above tools together with Cloudflare R2 enables users to run training workloads on any compute provider, with total freedom and zero switching costs.</p><p>As a demonstration, in the figure below we use the MosaicML training platform to launch an LLM training job starting on Oracle Cloud Infrastructure, with data streaming in and checkpoints uploaded back to R2. Partway through, we pause the job and seamlessly resume on a different set of GPUs on Amazon Web Services. Composer loads the model weights from the last saved checkpoint in R2, and the streaming dataloader instantly resumes to the correct batch. Training continues deterministically. Finally, we move again to Google Cloud to finish the run.</p><p>As we train our LLM across three cloud providers, the only costs we pay are for GPU compute and data storage. No egress fees or lock in thanks to Cloudflare R2!</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6d7rJLiQ1wI12thUIr792s/2eacae82cc201106faed1af1b079b4d2/image2-19.png" />
            
            </figure><p><i>Using the MosaicML training platform with Cloudflare R2 to run an LLM training job across three different cloud providers, with zero egress fees.</i></p>
            <pre><code>$ mcli get clusters
NAME            PROVIDER      GPU_TYPE   GPUS             INSTANCE                   NODES
mml-1            MosaicML   │  a100_80gb  8             │  mosaic.a100-80sxm.1        1    
                            │  none       0             │  cpu                        1    
gcp-1            GCP        │  t4         -             │  n1-standard-48-t4-4        -    
                            │  a100_40gb  -             │  a2-highgpu-8g              -    
                            │  none       0             │  cpu                        1    
aws-1            AWS        │  a100_40gb  ≤8,16,...,32  │  p4d.24xlarge               ≤4   
                            │  none       0             │  cpu                        1    
oci-1            OCI        │  a100_40gb  8,16,...,64   │  oci.bm.gpu.b4.8            ≤8  
                            │  none       0             │  cpu                        1    

$ mcli create secret s3 --name r2-creds --config-file path/to/config --credentials-file path/to/credentials
✔  Created s3 secret: r2-creds      

$ mcli create secret env S3_ENDPOINT_URL="https://[uid].r2.cloudflarestorage.com"
✔  Created environment secret: s3-endpoint-url      
               
$ mcli run -f mpt-125m-r2.yaml --follow
✔  Run mpt-125m-r2-X2k3Uq started                                                                                    
i  Following run logs. Press Ctrl+C to quit.                                                                            
                                                                                                                        
Cloning into 'llm-foundry'...</code></pre>
            <p><i>Using the MCLI command line tool to manage compute clusters, secrets, and submit runs.</i></p>
            <pre><code>### mpt-125m-r2.yaml ###
# set up secrets once with `mcli create secret ...`
# and they will be present in the environment in any subsequent run

integrations:
- integration_type: git_repo
  git_repo: mosaicml/llm-foundry
  git_branch: main
  pip_install: -e .[gpu]

image: mosaicml/pytorch:1.13.1_cu117-python3.10-ubuntu20.04

command: |
  cd llm-foundry/scripts
  composer train/train.py train/yamls/mpt/125m.yaml \
    data_remote=s3://bucket/path-to-data \
    max_duration=100ba \
    save_folder=s3://checkpoints/mpt-125m \
    save_interval=20ba

run_name: mpt-125m-r2

gpu_num: 8
gpu_type: a100_40gb
cluster: oci-1  # can be any compute cluster!</code></pre>
            <p><i>An MCLI job template. Specify a run name, a Docker image, a set of commands, and a compute cluster to run on.</i></p>
    <div>
      <h3>Get started today!</h3>
      <a href="#get-started-today">
        
      </a>
    </div>
    <p>The MosaicML platform is an invaluable tool to take your training to the next level, and in this post, we explored how Cloudflare R2 empowers you to train models on your own data, with any compute provider, or all of them. By eliminating egress fees, R2’s storage is an exceptionally cost-effective complement to MosaicML training, providing maximum autonomy and control. With this combination, you can switch between cloud service providers to fit your organization’s needs over time.</p><p>To learn more about using MosaicML to train custom state-of-the-art AI on your own data, visit <a href="https://www.mosaicml.com/">here</a> or <a href="https://docs.google.com/forms/d/e/1FAIpQLSepW7QB3Xkv6T7GJRwrR9DmGAEjm5G2lBxJC7PUe3JXcBZYbw/viewform">get in touch</a>.</p>
    <div>
      <h3>Watch on Cloudflare TV</h3>
      <a href="#watch-on-cloudflare-tv">
        
      </a>
    </div>
     ]]></content:encoded>
            <category><![CDATA[Developer Week]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Partners]]></category>
            <category><![CDATA[Egress]]></category>
            <category><![CDATA[Connectivity Cloud]]></category>
            <category><![CDATA[R2]]></category>
            <category><![CDATA[LLM]]></category>
            <guid isPermaLink="false">4ETryNsT8L8QFX8tPNzeye</guid>
            <dc:creator>Abhinav Venigalla (Guest Author)</dc:creator>
            <dc:creator>Phillip Jones</dc:creator>
            <dc:creator>Abhi Das</dc:creator>
        </item>
        <item>
            <title><![CDATA[Reduce origin load, save on cloud egress fees, and maximize cache hits with Cache Reserve]]></title>
            <link>https://blog.cloudflare.com/cache-reserve-open-beta/</link>
            <pubDate>Tue, 15 Nov 2022 14:00:00 GMT</pubDate>
            <description><![CDATA[ Today we’re extremely excited to announce that Cache Reserve is graduating to open beta – users will now be able to test it and integrate it into their content delivery strategy without any additional waiting ]]></description>
            <content:encoded><![CDATA[ 
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/ni8e1xP2TTTgsWfQun7u9/0c3e07191798bf7ac736d9b54350c2f0/Cache-Reserve-Open-Beta.png" />
            
            </figure><p>Earlier this year, we introduced Cache Reserve. Cache Reserve helps users serve content from Cloudflare’s cache for longer by using <a href="https://www.cloudflare.com/products/r2/">R2</a>’s persistent data storage. Serving content from Cloudflare’s cache benefits website operators by reducing their bills for <a href="https://www.cloudflare.com/learning/cloud/what-are-data-egress-fees/">egress fees</a> from origins, while also benefiting website visitors by having content load faster.</p><p>Cache Reserve has been in <a href="/introducing-cache-reserve/">closed beta</a> for a few months while we’ve collected feedback from our initial users and continued to develop the product. After several rounds of iterating on this feedback, today we’re extremely excited to announce that <b>Cache Reserve is graduating to open beta</b>: users can now test it and integrate it into their content delivery strategy without any additional waiting.</p><p>If you want to see the benefits of Cache Reserve for yourself and give us some feedback, you can go to the Cloudflare dashboard, navigate to the Caching section, and enable Cache Reserve by pushing <a href="https://dash.cloudflare.com/caching/cache-reserve">one button</a>.</p>
    <div>
      <h2>How does Cache Reserve fit into the larger picture?</h2>
      <a href="#how-does-cache-reserve-fit-into-the-larger-picture">
        
      </a>
    </div>
    <p>Content served from Cloudflare’s cache begins its journey at an origin server, where the content is hosted. When a request reaches the origin, the origin compiles the content needed for the response and sends it back to the visitor.</p><p>The distance between the visitor and the origin can affect the performance of the asset as it may travel a long distance for the response. This is also where the user is charged a fee to move the content from where it’s stored on the origin to the visitor requesting the content. These fees, known as “bandwidth” or “egress” fees, are familiar monthly line items on the invoices for users that host their content on cloud providers.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7p6Nnmjna0afwsk0flu98t/eba76d569819cc15ca1c803590caf7e3/Response-Flow.png" />
            
            </figure><p>Cloudflare’s CDN sits between the origin and visitor and evaluates the origin’s response to see if it can be cached. If it can be added to Cloudflare’s cache, then the next time a request comes in for that content, Cloudflare can respond with the cached asset, which means there's no need to send the request to the origin, reducing egress fees for our customers. We also cache content in data centers close to the visitor to improve the performance and cut down on the transit time for a response.</p><p>To help assets remain cached for longer, a few years ago we introduced <a href="/introducing-smarter-tiered-cache-topology-generation/">Tiered Cache</a>, which organizes all of our 250+ global data centers into a hierarchy of lower-tiers (generally closer to visitors) and upper-tiers (generally closer to origins). When a request for content cannot be served from a lower-tier’s cache, the upper-tier is checked before going to the origin for a fresh copy of the content. Organizing our data centers into tiers helps us cache content in the right places for longer by putting multiple caches between the visitor’s request and the origin.</p><p><b>Why do cache misses occur?</b> Misses occur when Cloudflare cannot serve the content from cache and must go back to the origin to retrieve a fresh copy. This can happen when a customer sets the <a href="https://developers.cloudflare.com/cache/about/cache-control/">cache-control</a> time to signify when the content is out of date (stale) and needs to be <a href="https://developers.cloudflare.com/cache/about/cache-control/#revalidation">revalidated</a>. The other element at play, how long the network wants content to remain cached, is a bit more complicated and can fluctuate depending on eviction criteria.</p><p><a href="https://www.cloudflare.com/learning/cdn/what-is-a-cdn/">CDNs</a> must consider whether they need to evict content early to optimize storage of other assets when cache space is full. 
At Cloudflare, we prioritize eviction based on how recently a piece of cached content was requested by using an algorithm called “least recently used” or <a href="https://en.wikipedia.org/wiki/Cache_replacement_policies#Least_recently_used_(LRU)">LRU</a>. This means that even if cache-control signifies that a piece of content should be cached for many days, we may still need to evict it earlier (if it is least-requested in that cache) to cache more popular content.</p><p>This works well for most customers and website visitors, but it is often a point of confusion for people wondering why content unexpectedly returns a miss. If eviction did not happen, content would need to be cached in data centers further away from the visitors requesting it, harming the performance of the asset and injecting inefficiencies into how Cloudflare’s network operates.</p><p>Some customers, however, have large libraries of content that may not be requested for long periods of time. Using the traditional cache, these assets would likely be evicted and, if requested again, served from the origin. Keeping assets in cache requires that they remain popular, which is difficult given that what’s popular on the Internet is constantly changing. Evicting content that becomes cold means additional origin egress for the customer if that content needs to be pulled repeatedly from the origin.</p>
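            <p>As a refresher, the LRU policy described above can be sketched in a few lines. This is an illustrative toy, not Cloudflare's implementation: whenever capacity is exceeded, the entry that was touched least recently is evicted, regardless of its remaining cache-control lifetime.</p>

```python
from collections import OrderedDict

class LRUCache:
    """Toy least-recently-used cache illustrating the eviction policy
    described above (not Cloudflare's actual implementation)."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.store: OrderedDict = OrderedDict()

    def get(self, key):
        if key not in self.store:
            return None  # miss: would fall through to an upper tier or origin
        self.store.move_to_end(key)  # mark as most recently used
        return self.store[key]

    def put(self, key, value):
        if key in self.store:
            self.store.move_to_end(key)
        self.store[key] = value
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)  # evict the least recently used
```

            <p>Note how a still-fresh object can be evicted simply because other content was requested more recently; this is exactly the behavior that surprises users and that Cache Reserve is designed to backstop.</p>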
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3iewRr0TSvgJQweqFGos7r/bc11bdf11f206f58767966cba190da20/Cache-Reserve-Response-Flow.png" />
            
            </figure><p><b>Enter Cache Reserve.</b> This is where Cache Reserve shines. Cache Reserve serves as the ultimate upper-tier data center for content that might otherwise be evicted from cache. Once <a href="https://developers.cloudflare.com/cache/about/cache-reserve/#cache-reserve-asset-eligibility">admitted</a> to Cache Reserve, content can be stored for a much longer period of time: 30 days by <a href="https://developers.cloudflare.com/cache/about/cache-reserve/">default</a>. If another request comes in during that period, it can be extended for another 30 days (and so on) or until cache-control signifies that we should no longer serve that content from cache. Cache Reserve serves as a safety net to backstop all cacheable content, so customers don't have to worry about unwanted cache eviction and origin egress fees.</p>
    <div>
      <h2>How does Cache Reserve save egress?</h2>
      <a href="#how-does-cache-reserve-save-egress">
        
      </a>
    </div>
    <p>The promise of Cache Reserve is that hit ratios will increase and egress fees from origins will decrease for long-tail content that is rarely requested and may be evicted from cache.</p><p>However, there are additional egress savings built into the product. For example, objects are written to Cache Reserve on misses. This means that when fetching the content from the origin on a cache miss, we both respond to the visitor’s request and write the asset to Cache Reserve, so customers won’t incur origin egress for that asset again for a long time.</p><p>Cache Reserve is designed to be used with Tiered Cache enabled for maximum origin shielding. When there is a cache miss in both the lower and upper tiers, Cache Reserve is checked, and if there is a hit, the response will be cached in both the lower and upper tier on its way back to the visitor without the origin needing to see the request or serve any additional data.</p><p>Cache Reserve accomplishes these origin egress savings for a low price, based on R2 costs. For more information on Cache Reserve prices and operations, please see the documentation <a href="https://developers.cloudflare.com/cache/about/cache-reserve/#pricing">here</a>.</p>
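    <p>The write-on-miss behavior can be sketched as a simple lookup chain. This is an illustrative model, not Cloudflare's code: the only path that touches the origin also populates Cache Reserve, so a later edge-cache eviction no longer costs origin egress.</p>

```python
def handle_request(key, cache, reserve, fetch_origin):
    """Illustrative model of the lookup chain described above:
    edge cache -> Cache Reserve -> origin, writing on the way back."""
    if key in cache:
        return cache[key], "cache HIT"
    if key in reserve:
        body = reserve[key]
        cache[key] = body      # refill the edge cache on the way back
        return body, "reserve HIT"
    body = fetch_origin(key)   # the only path that incurs origin egress
    cache[key] = body
    reserve[key] = body        # write-on-miss: later misses skip the origin
    return body, "MISS"
```

    <p>After the first miss, even clearing the edge cache entirely still resolves the next request from the reserve layer without a second origin fetch.</p>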
    <div>
      <h2>Scaling Cache Reserve on Cloudflare’s developer platform</h2>
      <a href="#scaling-cache-reserve-on-cloudflares-developer-platform">
        
      </a>
    </div>
    <p>When we first announced Cache Reserve, the response was overwhelming. Over 20,000 users wanted access to the beta, and we quickly made several interesting discoveries about how people wanted to use Cache Reserve.</p><p>The first big challenge we found was that users hated egress fees as much as we do and wanted to make sure that as much content as possible was in Cache Reserve. During the closed beta we saw usage above 8,000 PUT operations per second sustained, and objects served at a rate of over 3,000 GETs per second. We were also caching around 600 TB for some of our large customers. We knew that we wanted to open the product up to anyone that wanted to use it, and in order to scale to meet this demand, we needed to make several changes quickly. So we turned to Cloudflare’s developer platform.</p><p>Cache Reserve stores data on R2 using its <a href="https://developers.cloudflare.com/r2/data-access/s3-api/api/">S3-compatible API</a>. Under the hood, R2 handles all the complexity of an <a href="https://www.cloudflare.com/learning/cloud/what-is-object-storage/">object storage</a> system using our performant and scalable developer primitives: <a href="https://developers.cloudflare.com/workers/">Workers</a> and <a href="https://developers.cloudflare.com/workers/runtime-apis/durable-objects/">Durable Objects</a>. We decided to use developer platform tools because it would allow us to implement different scaling strategies quickly. The advantage of building on the Cloudflare developer platform is that the Cache Reserve team was easily able to experiment to see how we could best distribute the high load we were seeing, all while shielding the complexity of how Cache Reserve works from users.
</p><p>With the single press of a button, Cache Reserve performs these functions:</p><ul><li><p>On a cache miss, <a href="/how-we-built-pingora-the-proxy-that-connects-cloudflare-to-the-internet/">Pingora</a> (our new L7 proxy) reaches out to the origin for the content and writes the response to R2. This happens while the content continues its trip back to the visitor (thereby avoiding needless latency).</p></li><li><p>Inside R2, a Worker writes the content to R2’s persistent data storage while also keeping track of the important metadata that Pingora sends about the object (like origin headers, freshness values, and retention information) using Durable Objects storage.</p></li><li><p>When the content is next requested, Pingora looks up where the data is stored in R2 by computing the cache key. The cache key’s hash determines both the object name in R2 and which bucket it was written to, as each zone’s assets are sharded across multiple buckets to distribute load.</p></li><li><p>Once found, Pingora attaches the relevant metadata and sends the content from R2 to the nearest upper-tier to be cached, then to the lower-tier and finally back to the visitor.</p></li></ul>
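            <p>The lookup step above, where the cache key's hash determines both the object name and the bucket shard, can be sketched as follows. The hashing scheme and bucket names are illustrative assumptions, not Cloudflare's internals.</p>

```python
import hashlib

def locate_in_reserve(cache_key: str, zone_buckets: list):
    """Illustrative sketch of the lookup described above: the cache key's
    hash picks both the object name and which of the zone's sharded
    buckets holds it (scheme and names are assumptions, not internals)."""
    digest = hashlib.sha256(cache_key.encode()).hexdigest()
    bucket = zone_buckets[int(digest, 16) % len(zone_buckets)]
    return bucket, digest  # (bucket to query, object name within it)
```

            <p>Because the mapping is a pure function of the cache key, any data center can compute where an object lives without a central index, and load spreads evenly across a zone's buckets.</p>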
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7A0KCP536ALqWhmE5xLR6g/5a4cab50d1d9cdad46a83456d7b21824/Screen-Shot-2022-11-14-at-4.31.20-PM.png" />
            
            </figure><p>This is magic! None of the above needs to be managed by the user. By bringing together R2, Workers, Durable Objects, Pingora, and Tiered Cache, we were able to quickly build and iterate on Cache Reserve to scale as needed.</p>
    <div>
      <h2>What’s next for Cache Reserve</h2>
      <a href="#whats-next-for-cache-reserve">
        
      </a>
    </div>
    <p>In addition to the work we’ve done to scale Cache Reserve, opening the product up also opens the door to more features and integrations across Cloudflare. We plan on putting additional analytics and metrics in the hands of Cache Reserve users, so they know precisely what’s in Cache Reserve and how much egress it’s saving them. We also plan on building out more complex integrations with R2 so if customers want to begin managing their storage, they are able to easily make that transition. Finally, we’re going to be looking into providing more options for customers to control precisely what is eligible for Cache Reserve. These features represent just the beginning for how customers will control and customize their cache on Cloudflare.</p>
    <div>
      <h2>What’s some of the feedback been so far?</h2>
      <a href="#whats-some-of-the-feedback-been-so-far">
        
      </a>
    </div>
    <blockquote><p>As a long-time Cloudflare customer, we were eager to deploy Cache Reserve to provide cost savings and improved performance for our end users. Ensuring our application always performs optimally for our global partners and delivery riders is a primary focus of Delivery Hero. With Cache Reserve our cache hit ratio improved by 5%, enabling us to scale back our infrastructure and simplify what is needed to operate our global site and provide additional cost savings.</p><p><b>Wai Hang Tang</b>, Director of Engineering at <a href="https://www.deliveryhero.com/">Delivery Hero</a></p></blockquote><blockquote><p>Anthology uses Cloudflare's global cache to drastically improve the performance of content for our end users at schools and universities. By pushing a single button to enable Cache Reserve, we were able to provide a great experience for teachers and students and reduce two-thirds of our daily egress traffic.</p><p><b>Paul Pearcy</b>, Senior Staff Engineer at <a href="https://www.anthology.com/blackboard">Anthology</a></p></blockquote><blockquote><p>At Enjoei we’re always looking for ways to help make our end-user sites faster and more efficient. By using Cloudflare Cache Reserve, we were able to drastically improve our cache hit ratio by more than 10%, which reduced our origin egress costs. Cache Reserve also improved the performance for many of our merchants’ sites in South America, which improved their SEO and discoverability across the Internet (Google, Criteo, Facebook, TikTok), and it took no time to set it up.</p><p><b>Elomar Correia</b>, Head of DevOps SRE | Enterprise Solutions Architect at <a href="https://www.enjoei.com.br/">Enjoei</a></p></blockquote><blockquote><p>In the live events industry, the size and demand for our cacheable content can be extremely volatile, which causes unpredictable swings in our egress fees. Additionally, keeping data as close to our users as possible is critical for customer experience in the high-traffic, low-bandwidth scenarios our products are used in, such as conventions and music festivals. Cache Reserve helps us mitigate both of these problems with minimal impact on our engineering teams, giving us more predictable costs and lower latency than existing solutions.</p><p><b>Jarrett Hawrylak</b>, VP of Engineering | Enterprise Ticketing at <a href="https://www.patrontechnology.com/">Patron Technology</a></p></blockquote>
    <div>
      <h2>How can I use it today?</h2>
      <a href="#how-can-i-use-it-today">
        
      </a>
    </div>
    <p>As of today, Cache Reserve is in open beta, meaning that it’s available to anyone who wants to use it.</p><p>To use Cache Reserve:</p><ul><li><p>Simply go to the Caching tile in the dashboard.</p></li><li><p>Navigate to the <a href="https://dash.cloudflare.com/caching/cache-reserve">Cache Reserve page</a> and push the enable data sync button (or purchase button).</p></li></ul><p>Enterprise customers can work with their Cloudflare account team to access Cache Reserve.</p><p>Customers can ensure Cache Reserve is working by looking at the baseline metrics regarding how much data is cached and how many operations we’ve seen in the Cache Reserve section of the dashboard. Specific requests served by Cache Reserve are available by using <a href="https://developers.cloudflare.com/logs/reference/log-fields/zone/http_requests/">Logpush v2</a> and finding HTTP requests with the field “CacheReserveUsed.”</p><p>We will continue to quickly triage the feedback you give us and make improvements to ensure Cache Reserve is easy to use, massively beneficial, and your choice for reducing egress fees for cached content.</p>
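    <p>For example, the <code>CacheReserveUsed</code> field can be filtered out of a Logpush delivery with a few lines of Python. The record shape below (newline-delimited JSON with an illustrative <code>ClientRequestURI</code> field) is an assumption for the sketch; consult the Logpush field reference for your configured fields.</p>

```python
import json

def cache_reserve_requests(logpush_lines):
    """Filter newline-delimited JSON Logpush records down to HTTP
    requests served by Cache Reserve, via the CacheReserveUsed field.
    Record shape is illustrative; only CacheReserveUsed is assumed."""
    hits = []
    for line in logpush_lines:
        record = json.loads(line)
        if record.get("CacheReserveUsed"):
            hits.append(record)
    return hits
```
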
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6AtiPruR4tHDw87XV1NdfE/0ac9a73b571bc43b41cc56449fd5b6eb/Screen-Shot-2022-11-10-at-12.00.31-PM.png" />
            
            </figure>
    <div>
      <h2>Try it out</h2>
      <a href="#try-it-out">
        
      </a>
    </div>
    <p>We’ve been so excited to get Cache Reserve in more people’s hands. There will be more exciting developments to Cache Reserve as we continue to invest in giving you all the tools you need to build your perfect cache.</p><p>Try Cache Reserve today and <a href="https://discord.com/invite/aTsevRH3pG">let us know</a> what you think.</p> ]]></content:encoded>
            <category><![CDATA[Developer Week]]></category>
            <category><![CDATA[Cache Reserve]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[CDN]]></category>
            <category><![CDATA[Egress]]></category>
            <guid isPermaLink="false">6tIlsBJozEhRpkQAVk3Axo</guid>
            <dc:creator>Alex Krivit</dc:creator>
        </item>
        <item>
            <title><![CDATA[Cloudflare Gateway dedicated egress and egress policies]]></title>
            <link>https://blog.cloudflare.com/gateway-dedicated-egress-policies/</link>
            <pubDate>Thu, 23 Jun 2022 13:27:35 GMT</pubDate>
            <description><![CDATA[ Cloudflare Gateway customers can now utilize dedicated egress IPs and soon will be able to control how these IPs are applied via egress policies ]]></description>
            <content:encoded><![CDATA[ 
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2ieRNok92g8GFFvha3iB5a/98615d1ecc4215a46842d1c0142e2c67/image1-37.png" />
            
            </figure><p>Today, we are highlighting how Cloudflare enables administrators to create security policies while using dedicated source IPs. With on-premise appliances like legacy VPNs, firewalls, and <a href="https://www.cloudflare.com/learning/access-management/what-is-a-secure-web-gateway/">secure web gateways (SWGs)</a>, it has been convenient for organizations to rely on allowlist policies based on static source IPs. But these hardware appliances are hard to manage/scale, come with inherent vulnerabilities, and struggle to support globally distributed traffic from remote workers.</p><p>Throughout this week, we’ve <a href="https://www.cloudflare.com/cloudflare-one-week/">written</a> about how to transition away from these legacy tools towards Internet-native <a href="https://www.cloudflare.com/learning/security/glossary/what-is-zero-trust/">Zero Trust security</a> offered by services like Cloudflare Gateway, our <a href="https://www.cloudflare.com/products/zero-trust/gateway/">SWG</a>. As a critical service natively integrated with the rest of our broader Zero Trust platform, Cloudflare Gateway also enables traffic filtering and routing for recursive DNS, <a href="https://www.cloudflare.com/learning/access-management/what-is-ztna/">Zero Trust network access</a>, <a href="https://www.cloudflare.com/learning/access-management/what-is-browser-isolation/">remote browser isolation</a>, and inline <a href="https://www.cloudflare.com/learning/access-management/what-is-a-casb/">CASB</a>, among other functions.</p><p>Nevertheless, we recognize that administrators want to maintain the convenience of source IPs as organizations transition to cloud-based proxy services. In this blog, we describe our approach to offering dedicated IPs for egressing traffic and share some upcoming functionality to empower administrators with even greater control.</p>
    <div>
      <h3>Cloudflare’s dedicated egress IPs</h3>
      <a href="#cloudflares-dedicated-egress-ips">
        
      </a>
    </div>
    <p>Source IPs are still a popular method of verifying that traffic originates from a known organization/user when accessing applications and third party destinations on the Internet. When organizations use Cloudflare as a secure web gateway, user traffic is proxied through our global network, where we apply filtering and routing policies at the closest data center to the user. This is especially powerful for globally distributed workforces or roaming users. Administrators do not have to make updates to static IP lists as users travel, and no single location becomes a bottleneck for user traffic.</p><p>Today the source IP for proxied traffic is one of two options:</p><ul><li><p>Device client (WARP) Proxy IP – Cloudflare forward proxies traffic from the user using an IP from the default IP range shared across all Zero Trust accounts</p></li><li><p>Dedicated egress IP – Cloudflare provides customers with a dedicated IP (IPv4 and IPv6) or range of IPs geolocated to one or more Cloudflare network locations</p></li></ul><p>The WARP Proxy IP range is the default egress method for all Cloudflare Zero Trust customers. It is a great way to preserve the privacy of your organization as user traffic is sent to the nearest Cloudflare network location which ensures the most performant Internet experience. But setting source IP security policies based on this default IP range does not provide the granularity that admins often require to filter their user traffic.</p><p>Dedicated egress IPs are useful in situations where administrators want to allowlist traffic based on a persistent identifier. As their name suggests, these dedicated egress IPs are exclusively available to the assigned customer—and not used by any other customers routing traffic through Cloudflare’s network.</p><p>Additionally, leasing these dedicated egress IPs from Cloudflare helps avoid any privacy concerns which arise when carving them out from an organization’s own IP ranges. 
It also alleviates the need to protect any of the IP ranges assigned to your on-premises VPN appliance from DDoS attacks.</p><p>Dedicated egress IPs are available as an add-on for any Cloudflare Zero Trust enterprise-contracted customer. Contract customers can select the specific Cloudflare data centers used for their dedicated egress, and all subscribing customers receive at least two IPs to start, so user traffic is always routed to the closest dedicated egress data center for performance and resiliency. Finally, organizations can egress their traffic through Cloudflare’s dedicated IPs via their preferred on-ramps. These include Cloudflare’s device client (WARP), proxy endpoints, GRE and IPsec on-ramps, or any of our 1600+ peering network locations, including major ISPs, cloud providers, and enterprises.</p>
    <div>
      <h3>Customer use cases today</h3>
      <a href="#customer-use-cases-today">
        
      </a>
    </div>
    <p>Cloudflare customers around the world are taking advantage of Gateway dedicated egress IPs to streamline application access. Below are the three most common use cases we’ve seen deployed by customers of varying sizes and across industries:</p><ul><li><p><b>Allowlisting access to apps from third parties:</b> Users often need to access tools controlled by suppliers, partners, and other third-party organizations. Many of those external organizations still rely on source IP to authenticate traffic. Dedicated egress IPs make it easy for those third parties to fit within these existing constraints.</p></li><li><p><b>Allowlisting access to SaaS apps:</b> Source IPs are still commonly used as a defense-in-depth layer for how users access SaaS apps, alongside other more advanced measures like <a href="https://www.cloudflare.com/learning/access-management/what-is-multi-factor-authentication/">multi-factor authentication</a> and <a href="https://www.cloudflare.com/learning/access-management/what-is-an-identity-provider/">identity provider checks</a>.</p></li><li><p><b>Deprecating VPN usage:</b> Hosted VPNs are often allocated IPs within the customer’s advertised IP range. The security flaws, performance limitations, and administrative complexities of VPNs are well-documented in our <a href="/how-to-augment-or-replace-your-vpn">recent Cloudflare blog</a>. To ease migration, customers will often choose to maintain any IP allowlist processes in place today.</p></li></ul><p>Through this, administrators are able to maintain the convenience of building policies with fixed, known IPs, while accelerating performance for end users by routing through Cloudflare’s global network.</p>
    <div>
      <h3>Cloudflare Zero Trust egress policies</h3>
      <a href="#cloudflare-zero-trust-egress-policies">
        
      </a>
    </div>
    <p>Today, we are excited to announce an upcoming way to build more granular policies using Cloudflare’s dedicated egress IPs. With a forthcoming egress IP policy builder in the Cloudflare Zero Trust dashboard, administrators can specify which IP is used for egress traffic based on identity, application, network, and geolocation attributes.</p><p>Administrators often want to route only certain traffic through dedicated egress IPs: traffic for certain applications, certain Internet destinations, or certain user groups. Soon, administrators can set their preferred egress method based on a wide variety of selectors such as application, content category, domain, user group, destination IP, and more. This flexibility helps organizations take a layered approach to security, while also maintaining high performance (often via dedicated IPs) to the most critical destinations.</p><p>Furthermore, administrators will be able to use the egress IP policy builder to geolocate traffic to any country or region where Cloudflare has a presence. This geolocation capability is particularly useful for globally distributed teams that require geo-specific experiences.</p><p>For example, a large media conglomerate has marketing teams that verify the layouts of digital advertisements running across multiple regions. Prior to partnering with Cloudflare, these teams had clunky, manual processes to verify their ads were displaying as expected in local markets: either they had to ask colleagues in those local markets to check, or they had to spin up a VPN service to proxy traffic to the region. With an egress policy, these teams will simply be able to match a custom test domain for each region and egress using their dedicated IP deployed there.</p>
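    <p>Conceptually, such a policy evaluation might look like the sketch below: the first policy whose selectors all match decides the egress IP, falling back to the shared range otherwise. The field names, matching semantics, and IPs (documentation ranges) are assumptions for illustration, not the shipped product's behavior.</p>

```python
def pick_egress_ip(request, policies, default_ip):
    """Illustrative sketch of egress policy evaluation: first policy
    whose selectors all match wins. Field names and semantics are
    assumptions, not the forthcoming product's actual behavior."""
    for policy in policies:
        if all(request.get(field) == want
               for field, want in policy["match"].items()):
            return policy["egress_ip"]
    return default_ip  # fall back to the shared WARP proxy IP range

# Example: route the marketing team's ad-verification traffic for a
# hypothetical regional test domain through a dedicated IP deployed there.
policies = [
    {"match": {"domain": "ads-test.example.de", "user_group": "marketing"},
     "egress_ip": "203.0.113.10"},
]
```
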
    <div>
      <h3>What’s Next</h3>
      <a href="#whats-next">
        
      </a>
    </div>
    <p>You can take advantage of Cloudflare’s dedicated egress IPs by adding them onto a Cloudflare Zero Trust <a href="https://www.cloudflare.com/products/zero-trust/plans/enterprise/">Enterprise plan</a> or contacting your account team. If you would like to be contacted when we release the Gateway egress policy builder, <a href="http://www.cloudflare.com/zero-trust/lp/egress-policies-beta">join the waitlist here</a>.</p> ]]></content:encoded>
            <category><![CDATA[Cloudflare One Week]]></category>
            <category><![CDATA[Cloudflare One]]></category>
            <category><![CDATA[Zero Trust]]></category>
            <category><![CDATA[Cloudflare Gateway]]></category>
            <category><![CDATA[Egress]]></category>
            <guid isPermaLink="false">4eW5y859iFlWmPFC0ENX2b</guid>
            <dc:creator>Ankur Aggarwal</dc:creator>
            <dc:creator>James Chang</dc:creator>
        </item>
        <item>
            <title><![CDATA[Workers, Now Even More Unbound: 15 Minutes, 100 Scripts, and No Egress Fees]]></title>
            <link>https://blog.cloudflare.com/workers-now-even-more-unbound/</link>
            <pubDate>Thu, 18 Nov 2021 14:00:18 GMT</pubDate>
            <description><![CDATA[ Workers is now even more Unbound, with no egress, more execution time, and more scripts. ]]></description>
            <content:encoded><![CDATA[ <p></p><p>Our mission is to enable developers to build their applications, end to end, on our platform, and ruthlessly eliminate limitations that may get in the way. Today, we're excited to announce you can build large, data-intensive applications on our network, all without breaking the bank; starting today, we're dropping <a href="https://www.cloudflare.com/learning/cloud/what-are-data-egress-fees/">egress fees</a> to zero.</p>
    <div>
      <h3>More Affordable: No Egress Fees</h3>
      <a href="#more-affordable-no-egress-fees">
        
      </a>
    </div>
    <p>Building more on any platform historically comes with a caveat — high data transfer cost. These costs often come in the form of egress fees. Especially in the case of data-intensive workloads, egress data transfer costs can come at a high premium, depending on the provider.</p><p>What exactly are data egress fees? They are the costs of retrieving data from a cloud provider. Cloud infrastructure providers generally pay for bandwidth based on capacity, but often bill customers based on the amount of data transferred. Curious to learn more about what this means for end users? We recently wrote an analysis of <a href="/aws-egregious-egress/">AWS’ Egregious Egress</a> — a good read if you would like to learn more about the ‘Hotel California’ model AWS has spun up. Effectively, data egress fees lock you into their platform, making you choose your provider based not on which one has the best infrastructure for your use case, but on where your data already resides.</p><p>At Cloudflare, we’re working to flip the script for our customers. Our recently announced <a href="/introducing-r2-object-storage/">R2 Storage</a> waives the data egress fees other providers implement for similar products. Cloudflare is a founding member of the <a href="https://www.cloudflare.com/bandwidth-alliance/">Bandwidth Alliance</a>, aiming to help our mutual customers overcome these data transfer fees.</p><p>We’re keeping true to our mission and, effective immediately, dropping all Egress Data Transfer fees associated with <a href="https://developers.cloudflare.com/workers/platform/pricing#pricing-1">Workers Unbound</a> and <a href="https://developers.cloudflare.com/workers/learning/using-durable-objects">Durable Objects</a>. If you’re using Workers Unbound today, your next bill will no longer include Egress Data Transfer fees. If you’re not using Unbound yet, now is a great time to experiment. With Workers Unbound, you get access to longer CPU time limits and pay only for what you use, without worrying about data transfer costs. When paired with Bandwidth Alliance partners, this is a cost-effective solution for any data-intensive workload.</p>
    <div>
      <h3>More Unbound: 15 Minutes</h3>
      <a href="#more-unbound-15-minutes">
        
      </a>
    </div>
    <p>This week has been about defining what the future of computing is going to look like. Workers are great for your latency-sensitive workloads, with zero-millisecond cold start times, fast global deployment, and the power of Cloudflare’s network. But Workers are not limited to lightweight tasks — we want you to run your heavy workloads on our platform as well. That’s why we’re announcing you can now use up to 15 minutes of CPU time on your Workers! You can run your most compute-intensive tasks on Workers using Cron Triggers. To get started, head to the <b>Settings</b> tab in your Worker and select the ‘Unbound’ usage model.</p><div></div><p>Once you’ve confirmed your Usage Model is Unbound, switch to the <b>Triggers</b> tab and click <b>Add Cron Trigger</b>. You’ll see a ‘Maximum Duration’ listed, indicating whether your schedule is eligible for 15 Minute workloads.</p><div></div>
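<p>For those managing Workers from the command line rather than the dashboard, the same settings can be sketched in <code>wrangler.toml</code>. This is a minimal illustration: the Worker name and schedule are hypothetical, and field names such as <code>usage_model</code> may differ across Wrangler versions, so verify against the current Workers documentation.</p>

```toml
# Minimal sketch of a wrangler.toml for a scheduled Worker on the Unbound
# usage model. The name and cron schedule are hypothetical.
name = "heavy-batch-job"
main = "src/index.js"
usage_model = "unbound"   # opt into the Unbound usage model

[triggers]
crons = ["0 3 * * *"]     # invoke the scheduled handler daily at 03:00 UTC
```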
    <div>
      <h3>Wait, there’s more (literally!)</h3>
      <a href="#wait-theres-more-literally">
        
      </a>
    </div>
    <p>That’s not all. As a platform, it is validating to see that our customers want to grow even more with us, and we’ve been working to remove the limits that held them back. That’s why, starting today, all customers will be allowed to deploy up to 100 Worker scripts. With the introduction of <a href="/introducing-worker-services">Services</a>, that represents up to 100 environments per account. This higher limit will allow our customers to migrate more use cases to the Workers platform.</p><p>We’re also delighted to announce that, alongside this increase, the Workers platform plans to support larger scripts. This increase will allow developers to build Workers with more libraries, opening up new possibilities like running Golang compiled to WASM. Check out an <a href="https://esbuild.developers.workers.dev/">example of esbuild running on a Worker</a>, in a script that’s just over 2MB compressed. If you’re interested in larger script sizes, sign up <a href="https://www.cloudflare.com/larger-scripts-on-workers-early-access/">here</a>.</p><p>The future of cloud computing is here, and it’s on Cloudflare. Workers has always been the <a href="https://developers.cloudflare.com/workers/learning/security-model">secure</a>, <a href="/cloudflare-workers-the-fast-serverless-platform/">fast</a> serverless offering, and has recently been named a <a href="/forrester-wave-edge-development-2021/">leader</a> in the space. Now, it is even more affordable and flexible too.</p><p>We can’t wait to see what ambitious projects our customers build. Developers are now better positioned than ever to deploy large and complex applications on Cloudflare. Excited to build using Workers, or get engaged with the community? Join our <a href="https://discord.gg/rH4SsffFcc">Discord server</a> to keep up with the latest on Cloudflare Workers.</p>
            <category><![CDATA[Full Stack Week]]></category>
            <category><![CDATA[Egress]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Workers Unbound]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <guid isPermaLink="false">6FeUc0F2arVJbpfBky0Jeu</guid>
            <dc:creator>Kabir Sikand</dc:creator>
        </item>
        <item>
            <title><![CDATA[AWS’s Egregious Egress]]></title>
            <link>https://blog.cloudflare.com/aws-egregious-egress/</link>
            <pubDate>Fri, 23 Jul 2021 13:01:00 GMT</pubDate>
            <description><![CDATA[ Amazon says: “We strive to offer our customers the lowest possible prices, the best available selection, and the utmost convenience.” When it comes to egress, their prices are far from the lowest. ]]></description>
            <content:encoded><![CDATA[ <p></p><p>When web hosting services first emerged in the mid-1990s, you paid for everything on a separate meter: bandwidth, storage, CPU, and memory. Over time, customers grew to hate the nickel-and-dime nature of these fees. The market evolved to a fixed-fee model. Then came Amazon Web Services.</p><p>AWS was a huge step forward in terms of flexibility and scalability, but a massive step backward in terms of pricing. Nowhere is that more apparent than with their data transfer (bandwidth) pricing. If you look at the (ironically named) <a href="https://calculator.s3.amazonaws.com/index.html#key=files/calc-917de608a77af5087bde42a1caaa61ea1c0123e8&amp;v=ver20210710c7">AWS Simple Monthly Calculator</a> you can calculate the price they charge for bandwidth for their typical customer. The price varies by region, which shouldn't surprise you because the cost of transit is dramatically different in different parts of the world.</p>
    <div>
      <h3>Charging for Stocks, Paying for Flows</h3>
      <a href="#charging-for-stocks-paying-for-flows">
        
      </a>
    </div>
    <p>AWS charges customers based on the amount of data delivered — 1 terabyte (TB) per month, for example. To visualize that, imagine data is water. AWS fills a bucket full of water and then charges you based on how much water is in the bucket. This is known as charging based on “stocks.”</p><p>On the other hand, AWS pays for bandwidth based on the capacity of their network. The base unit of wholesale bandwidth is priced as one Megabit per second per month (1 Mbps). Typically, a provider like AWS will pay a monthly fee for bandwidth based on the number of Mbps that their network uses at its peak capacity. So, extending the analogy, AWS doesn't pay for the amount of water that ends up in their customers' buckets, but rather for capacity based on the diameter of the “hose” that is used to fill them. This is known as paying for “flows.”</p>
    <div>
      <h3>Translating Flows to Stocks</h3>
      <a href="#translating-flows-to-stocks">
        
      </a>
    </div>
    <p>You can translate between flow and stock pricing by knowing that a 1 Mbps connection (think of it as the "hose") can transfer 0.3285 TB (328GB) if utilized to its fullest capacity over the course of a month (think of it as running the "hose" at full capacity to fill the "bucket" for a month).<sup>1</sup> AWS obviously has more than 1 Mbps of capacity — they can certainly transfer more than 0.3285 TB per month — but you can use this as the base unit of their bandwidth costs, and compare it against what they charge a customer to deliver 1 Terabyte (1TB), in order to figure out the AWS bandwidth markup.</p><p>One more subtlety, to be as accurate as possible: wholesale bandwidth is also billed at the 95th percentile. That effectively cuts off the peak hour or so of use every day. That means a 1 Mbps connection running at 100% can likely transfer closer to 0.3458 TB (346GB) per month.</p><p>Two more factors are important: utilization and regional costs. AWS can't run all their connections at 100% utilization 24x7 for a month. Instead, they'll have some average utilization per transit connection in any month. It’s reasonable to estimate that they likely run at between 20% and 40% average utilization. That would be a typical average utilization range for the industry. The higher their utilization, the more efficient they are, the lower their costs, and the higher their effective customer markup will be.</p><p>To be conservative, we’ve assumed that AWS’s average utilization is the bottom of that range (20%), but you can <a href="https://docs.google.com/spreadsheets/d/11InPX3PDntyC1-F-tWeNwpoztYozf7JeC7OE6iZrc_0/edit?usp=sharing">download the raw data</a> and adjust the assumptions however you think makes sense.</p><p>We have a good sense of the wholesale prices of bandwidth in different regions around the world based on what Cloudflare sees in the market when we buy bandwidth ourselves. We’d imagine AWS gets at least as good pricing as we do. We’ve included a rough estimate of these prices in the calculation, rounding up on the wholesale price wherever there was a question (which makes AWS look better).</p>
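<p>The flow-to-stock conversion is simple enough to sanity-check in a few lines of code. Here is a minimal sketch, using the 730-hour average month from the footnote:</p>

```python
# Convert a wholesale bandwidth "flow" (Mbps, billed monthly) into the
# "stock" (TB/month) it can deliver, following the footnote's arithmetic.
HOURS_PER_MONTH = 730  # average hours in a month
BITS_PER_BYTE = 8

def tb_per_month(mbps: float, utilization: float = 1.0) -> float:
    """Terabytes deliverable by an `mbps` link at a given average utilization."""
    bits = mbps * 1e6 * 3600 * HOURS_PER_MONTH * utilization
    return bits / BITS_PER_BYTE / 1e12  # bits -> bytes -> terabytes

print(round(tb_per_month(1), 4))        # 0.3285 TB at 100% utilization
print(round(tb_per_month(1, 0.20), 4))  # 0.0657 TB at 20% average utilization
```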
    <div>
      <h3>Massive Markups</h3>
      <a href="#massive-markups">
        
      </a>
    </div>
    <p>Based on these assumptions, here's our best estimate of AWS’s effective markup for egress bandwidth on a per-region basis.</p>
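<p>To make the estimate concrete, here is the shape of the calculation in code. The retail and wholesale prices below are purely illustrative round numbers, not the per-region inputs from the spreadsheet linked above:</p>

```python
# Sketch of the effective-markup estimate: retail egress price per TB divided
# by an estimated wholesale delivery cost per TB. All inputs are illustrative.
TB_PER_MBPS_MONTH = 0.3285  # what 1 Mbps can deliver in an average month

def effective_markup(retail_per_gb: float, wholesale_per_mbps: float,
                     utilization: float = 0.20) -> float:
    """Ratio of the customer price per TB to the estimated delivery cost per TB."""
    cost_per_tb = wholesale_per_mbps / (TB_PER_MBPS_MONTH * utilization)
    retail_per_tb = retail_per_gb * 1000
    return retail_per_tb / cost_per_tb

# e.g. a $0.09/GB retail rate against an assumed $0.10/Mbps wholesale rate
print(f"~{effective_markup(0.09, 0.10):.0f}x")
```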
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1qM5lTL8RfJFVfqsECMu4V/07bc22f4964beed98ce88a8fcc55ac72/image4-12.png" />
            
            </figure>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4rt8aZ0kxhYsucX3CbVUQg/3885c745e5f4b57c6964b5ba77ef5e0e/image1-19.png" />
            
            </figure><p>Don’t rest easy, South Korea, with your <i>merely</i> 357% markup. The general rule of thumb appears to be that the older a market is, the more Amazon wrings from its customers in egregious <a href="https://www.cloudflare.com/learning/cloud/what-are-data-egress-fees/">egress markups</a> — and the Seoul availability zone is only a bit over four years old. Winter, unfortunately, inevitably seems to come to AWS customers.</p>
    <div>
      <h3>AWS Stands Alone In Not Passing On Savings to Customers</h3>
      <a href="#aws-stands-alone-in-not-passing-on-savings-to-customers">
        
      </a>
    </div>
    <p>Remember, this is for the transit bandwidth that AWS is paying for. For the bandwidth that they exchange with a network like Cloudflare, where they are directly connected (settlement-free peered) over a private network interface (PNI), there are no meaningful incremental costs and their effective margins are nearly infinite. Add in the effect of rebates Amazon collects from colocation providers who charge cross connect fees to customers, and the effective markup is likely even higher.</p><p>Some other cloud providers take into account that their costs are lower when passing over peering connections. Both Microsoft Azure and Google Cloud will substantially discount egress charges for their mutual Cloudflare customers. Members of the <a href="https://www.cloudflare.com/bandwidth-alliance/">Bandwidth Alliance</a> — Alibaba, Automattic, Backblaze, Cherry Servers, Dataspace, DNS Networks, DreamHost, HEFICED, Kingsoft Cloud, Liquid Web, Scaleway, Tencent, Vapor, Vultr, Wasabi, and Zenlayer — waive bandwidth charges for mutual Cloudflare customers.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/57Ans32VnbvStvYPES35y/9427ae5824ed7fd3de4dcf40056ca4b9/image6-9.png" />
            
            </figure><p>At this point, the majority of hosting providers in the industry either substantially discount or entirely waive <a href="https://www.cloudflare.com/the-net/cloud-egress-fees-challenge-future-ai/">egress fees</a> when sending traffic from their network to a peer like Cloudflare. AWS is the notable exception in the industry. It's worth noting that we invited AWS to be a part of the Bandwidth Alliance, and they politely declined.</p><p>It seems like a no-brainer that if we’re not paying for the bandwidth costs, and the hosting provider isn’t paying for the bandwidth costs, customers shouldn’t be charged for the bandwidth costs at the same rate as if the traffic was being sent over the public Internet. Unfortunately, Amazon’s supposed obsession over doing the right thing for customers doesn’t extend to egress charges.</p>
    <div>
      <h3>Artificially Held High</h3>
      <a href="#artificially-held-high">
        
      </a>
    </div>
    <p>Amazon’s mission statement is: “We strive to offer our customers the lowest possible prices, the best available selection, and the utmost convenience.” And yet, when it comes to egress, their prices are far from the lowest possible.</p><p>During the last ten years, industry wholesale transit prices have fallen an average of 23% annually. Compounded over that time, wholesale bandwidth is 93% less expensive than 10 years ago. However, AWS’s egress fees over that same period have fallen by only 25%.</p><p>And, since 2018, the egress fees AWS charges in North America and Europe have not dropped a penny even as wholesale prices in those markets over the same time period have fallen by more than half.</p>
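<p>The compounding arithmetic above can be verified in two lines — a 23% annual decline compounded over ten years leaves roughly 7% of the original price:</p>

```python
# Compound a steady 23% annual price decline over ten years.
annual_decline = 0.23
remaining = (1 - annual_decline) ** 10  # fraction of the original price left

print(f"~{1 - remaining:.0%} cheaper than ten years ago")  # ~93% cheaper
```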
    <div>
      <h3>AWS’s Hotel California Pricing</h3>
      <a href="#awss-hotel-california-pricing">
        
      </a>
    </div>
    <p>Another oddity of AWS’s pricing is that they charge for data transferred out of their network but not for data transferred into their network. If the only time you’ve paid for bandwidth is with your residential Internet connection, then this may make some sense. Because of some technical limitations of the cable network, download bandwidth is typically higher than upload bandwidth on cable modem connections. But that’s not how wholesale bandwidth is bought or sold.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6a6sRKh6RUWexw1melI1so/eca2ebb408ead8b465dc751d54d25727/image5-10.png" />
            
            </figure><p>Wholesale bandwidth isn’t like your home cable connection. Instead, it’s symmetrical. That means that if you purchase a 1 Mbps (1 Megabit per second) connection, then you have the capacity to send 1 Megabit out and receive another 1 Megabit in every second. If you receive 1 Mbps in <i>and simultaneously</i> 1 Mbps out, you pay the same price as if you receive 1 Mbps in and 0 Mbps out or 0 Mbps in and 1 Mbps out. In other words, ingress (data sent to AWS) doesn’t cost them any more or less than egress (data sent from AWS). And yet, they charge customers more to take data out than put it in. It’s a head scratcher.</p><p>We’ve tried to be charitable in trying to understand why AWS would charge this way. Disappointingly, there just doesn't seem to be an innocent explanation. As we dug in, even things like writes versus reads and the wear they put on storage media, as well as the challenges of capacity planning for storage capacity, suggest that AWS should charge less for egress than ingress.</p><p>But they don’t.</p><p>The only rationale we can reasonably come up with for AWS’s egress pricing: locking customers into their cloud, and making it prohibitively expensive to get customer data back out. So much for being customer-first.</p>
    <div>
      <h3>But… But… But…</h3>
      <a href="#but-but-but">
        
      </a>
    </div>
    <p>AWS may object that this doesn't take into account the cost of things like metro dark fiber between data centers, amortized optical and other networking equipment, and cross connects. In our experience, those costs amount to a rounding error of less than one cent per Mbps when operating at AWS-like scale. And these prices have been falling at a similar rate to the decline in the price of bandwidth over the past 10 years. Yet AWS’s egress prices have barely budged.</p><p>All the data above is derived from what’s published on AWS’s simple pricing calculator. There’s no doubt that some large customers are able to negotiate lower prices. But these are the prices charged to small businesses and startups by default. And, when we’ve reviewed pricing even with large AWS customers, the egress fees remain egregious.</p>
    <div>
      <h3>It’s Not Too Late!</h3>
      <a href="#its-not-too-late">
        
      </a>
    </div>
    <p>We have a lot of mutual customers who use Cloudflare and AWS. They’re a great service, and we want to support our mutual customers and provide services in a way that meets their needs and is always as secure, fast, reliable, and efficient as possible. We remain hopeful that AWS will do the right thing, lower their egress fees, join the Bandwidth Alliance — following the lead of the majority of the rest of the hosting industry — and pass along savings from peering with Cloudflare and other networks to all their customers.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1bTWj1lyCvJKiTwIQU7mqq/a9389d3240f8959ecc37002acf342259/image3.gif" />
            
            </figure><p>.......</p><p><sup>1</sup>Here’s the calculation to convert a 1 Mbps flow into TB stocks: 1 Mbps @ 100% for 1 month = (1 million bits per second) * (60 seconds / minute) * (60 minutes / hour) * (730 hours on average/month) divided by (eight bits / byte) divided by 10^12 (to convert bytes to Terabytes) = 0.3285 TB/month.</p> ]]></content:encoded>
            <category><![CDATA[Bandwidth Alliance]]></category>
            <category><![CDATA[Egress]]></category>
            <category><![CDATA[AWS]]></category>
            <guid isPermaLink="false">2eK8lAEdsHVrkxAqbqb9OH</guid>
            <dc:creator>Matthew Prince</dc:creator>
            <dc:creator>Nitin Rao</dc:creator>
        </item>
        <item>
            <title><![CDATA[Empowering customers with the Bandwidth Alliance]]></title>
            <link>https://blog.cloudflare.com/empowering-customers-with-the-bandwidth-alliance/</link>
            <pubDate>Fri, 23 Jul 2021 12:59:54 GMT</pubDate>
            <description><![CDATA[ High egress costs from a cloud provider limit customers’ choice of cloud services. Cloudflare’s Bandwidth Alliance with our cloud partners provides an alternative to this lock-in. Our customers have benefited from this both via discounted egress costs and choice of storage providers. ]]></description>
            <content:encoded><![CDATA[ 
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/34WQalyr8Qvxvt8c8XU8Ze/ca96bdabbb6d22434cc183c7cc247992/image1-16.png" />
            
            </figure>
    <div>
      <h3>High Egress Fees</h3>
      <a href="#high-egress-fees">
        
      </a>
    </div>
    <p>Debates over the benefits and drawbacks of walled gardens versus open ecosystems have carried on since the beginnings of the tech industry. As applied to the Internet, we don’t think there’s much to debate. There’s a reason why it’s easier today than ever before to start a company online: open standards. They’ve encouraged a flourishing of technical innovation, made the Internet faster and safer, and easier and less expensive for anyone to have an Internet presence.</p><p>Of course, not everyone likes competition. Breaking open standards — with proprietary ones — is a common way to stop competition. In the cloud industry, a more subtle way to gain power over customers and lock them in has emerged. Something that isn’t obvious at the start: high egress fees.</p><p>You probably won’t notice them when you embark on your cloud journey. And if you need to bring data into your environment, there’s no data charge. But say you want to get that data out? Or go multi-cloud, and work with another cloud provider who is best-in-class? That’s when the charges start rolling in.</p><p>To make matters worse, as the number and diversity of applications in your IT stack increases, the lock-in power of egress fees increases as well. As more data needs to travel between more applications across clouds and there is more data to move to a newer, better cloud service, the egress tax increases, further locking you in. You lose the ability to choose best-of-breed services or to negotiate prices with your provider.</p>
    <div>
      <h3>Why We Launched The Bandwidth Alliance</h3>
      <a href="#why-we-launched-the-bandwidth-alliance">
        
      </a>
    </div>
    <p>This is not a better Internet. So wherever we can, we're on the lookout for ways to prevent this from happening — in this case, with our Bandwidth Alliance partners. We launched the <a href="https://www.cloudflare.com/bandwidth-alliance/">Bandwidth Alliance</a> in late 2018 with over fifteen cloud providers who also believe in an open Internet where data can flow freely. In short, partners in the Bandwidth Alliance have agreed to reduce egress fees for data transfer — either in their entirety or at a steep discount.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/uERBb7sB9xhNDsScJ2iQE/a3bb75d6c5a437a09800f5e21f20f2b9/image2-18.png" />
            
            </figure>
    <div>
      <h3>How did we do this — the power of Cloudflare’s network</h3>
      <a href="#how-did-we-do-this-the-power-of-cloudflares-network">
        
      </a>
    </div>
    <p>Say you're hosted in a facility in Ashburn, Virginia and a user visits your service from Sydney, Australia. There is a cost to moving the data between the two places. In this example, a cloud provider would use their own global backbone to carry the traffic across the United States and then across the Pacific, eventually handing it off to the users’ ISP. Someone has to maintain the infrastructure that hauls that traffic more than 9,000 miles from Ashburn to Sydney.</p><p>Cloudflare has more than 206 data centers globally in almost every major city. Our network automatically receives traffic at whatever data center is nearest to the user and then carries it to the data center closest to where the origin is hosted.</p><p>As part of the Bandwidth Alliance, this traffic is then delivered to the partner data center via private network interconnects (PNI) or peered connections. These PNIs typically occur within the same facility through a fiber optic cable between routers, or via a dedicated connection between two facilities at a very low cost. Unlike when there's a transit provider in between, there's no middleman, so neither Cloudflare nor our partners bear incremental costs for transferring the data over this PNI.</p><p>Cloudflare is one of the most interconnected networks in the world, peering with over 9,500 networks globally, including major ISPs, cloud providers, and enterprises. Cloudflare is connected with partners in many global regions via Private Interconnections, Internet exchanges with private peering, and via public peering.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/mc0nndo9pITnFkDqX8lSS/f17034bab3d4da4232fe9d75f8c545e0/image3-10.png" />
            
            </figure>
    <div>
      <h3>Customer benefit</h3>
      <a href="#customer-benefit">
        
      </a>
    </div>
    <p>Since its inception, the Bandwidth Alliance program has provided many customers with significant benefits, both in egress cost savings and, more importantly, in choice across their compute, storage, and other service needs. Providing this choice and preventing vendor lock-in has allowed our customers to choose the right product for their use case while benefiting from significant savings.</p><p>We looked at a sample set of our customers benefiting from the Bandwidth Alliance and estimated their egress savings based on the amount of data (GB) flowing from the origin to us. We estimated the potential savings using the $0.08/GB retail price vs. the discounted $0.04/GB for large amounts of data transferred. Of course, customers could save more by using one of our partners with whom the cost is $0/GB. We compared the savings to the amount of money these customers spend on us. These savings are in the range of 7.5% to 27% — in other words, for every $1 spent on Cloudflare, customers are saving up to $0.27. That is a no-brainer offer to take advantage of.</p><p>The Bandwidth Alliance also offers customers the option to choose a cloud that meets their price and feature requirements. For a media delivery use case, choosing the best storage provider and Cloudflare has allowed one customer to save up to <a href="https://www.cloudflare.com/case-studies/nodecraft-bandwidth-alliance/">85% in storage costs</a>. Another customer who went with a best-of-breed solution across storage and the Cloudflare global network <a href="https://www.cloudflare.com/case-studies/pippa-partners-with-digitalocean-and-cloudflare/">reduced their overall cloud bill by 50%</a>. Customers appreciate these benefits of choice:</p><blockquote><p><i>“We were looking at moving our data from one storage provider to another, and it was a total no-brainer to use a service that was part of the Bandwidth Alliance. 
It really makes sense for anyone looking at any cloud service, especially one that’s pushing a lot of traffic.” — James Ross, Co-founder/CTO, Nodecraft</i></p></blockquote><p>Earlier this month we made it even easier for our joint customers with Microsoft Azure to realize the benefits of discounted egress to Cloudflare with <a href="/discounted-egress-for-cloudflare-customers-from-microsoft-azure-is-now-available/">Microsoft Azure’s Data Transfer Routing Preference</a>. With a few clicks on the Azure dashboard, Cloudflare customer egress bills will automatically be discounted. It’s been exciting to hear positive customer feedback:</p><blockquote><p><i>"Before taking advantage of the Routing Preference by Azure via Cloudflare, Egress fees were one of the key reasons that restricted us from having more multi-cloud solutions since it can be high and unpredictable at times as the traffic scales. Enabling Routing Preference on the Azure dashboard was quick and easy. It was a one-and-done effort, and we get discounted Egress rates on every Azure bill.”  — Darin MacRae, Chief Architect / Cloud Computing, MyRadar.com</i></p></blockquote><p>If you’re looking to find the right set of cloud storage, networking and security solutions to meet your needs, consider the Bandwidth Alliance as an alternative to being locked-in to a single platform. We hope it helps.</p> ]]></content:encoded>
            <category><![CDATA[Bandwidth Alliance]]></category>
            <category><![CDATA[Egress]]></category>
            <category><![CDATA[Connectivity Cloud]]></category>
            <guid isPermaLink="false">1fNgbSUwZ1TfXmaKOLxEin</guid>
            <dc:creator>Arjunan Rajeswaran</dc:creator>
        </item>
        <item>
            <title><![CDATA[Cloudflare customers can now use Microsoft Azure Data Transfer Routing Preference to enjoy lower data transfer costs]]></title>
            <link>https://blog.cloudflare.com/discounted-egress-for-cloudflare-customers-from-microsoft-azure-is-now-available/</link>
            <pubDate>Tue, 06 Jul 2021 12:59:03 GMT</pubDate>
            <description><![CDATA[ Cloudflare customers can now choose to reduce data transfer cost via the Microsoft Azure Routing Preference Program with just a few clicks. ]]></description>
            <content:encoded><![CDATA[ 
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1i3qj7pOnOdTPlAYE65tfG/96d66b7387b6f044f9615ecc117c6aa0/image5-1.png" />
            
            </figure><p>Today, we are excited to announce that Cloudflare customers can choose Microsoft Azure with a lower-cost data transfer solution via the Microsoft <a href="https://docs.microsoft.com/en-us/azure/virtual-network/routing-preference-overview"><b>Routing Preference</b></a> service. Mutual customers can benefit from lower cost and predictable performance across our interconnected networks. Microsoft Azure has developed a seamless process to allow customers to choose this cost-optimized routing solution. We have customers using this new integration today and are excited to make this generally available to all our customers and prospects.</p>
    <div>
      <h3>The power of interconnected networks</h3>
      <a href="#the-power-of-interconnected-networks">
        
      </a>
    </div>
    <p>So how are we able to enable this great solution for our customers? The answer lies in our globally interconnected network.</p><p>Cloudflare is one of the most interconnected networks in the world, peering with over 9,500 networks globally, including major ISPs, cloud providers, and enterprises. We currently interconnect with Azure through private or public peering across all major regions — including private interconnections at key locations (see below).</p><p>Private Network Interconnects typically occur within the same facility through a fiber optic cable between routers for the two networks; peered connections occur at Internet exchanges offering high performance and availability. We are actively working on expanding this interconnectivity between Azure and Cloudflare for our customers.</p><p>In addition to the private interconnections, we also have <b>five Internet exchanges with private peering, and over 108 public peering links with Azure</b>.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/Oy7Sz83gx7Yk2YPIEjPe7/207e74571721c887c171caa825db89bb/image2.png" />
            
            </figure><p>Wondering what this really means? Let’s look at an example. Say an Internet visitor in Sydney requests content from an origin hosted in an Azure location in Chicago. When the visitor makes a request, Cloudflare automatically carries it to the Cloudflare data center in Sydney. The traffic is then routed over Cloudflare’s network all the way to Chicago, where the origin is hosted on Azure. The request is then handed over to an Azure data center <b>over our private interconnections.</b></p><p>On the way back (the egress path), the response is handed over from the Azure network to Cloudflare at the origin in Chicago via our private interconnection (without involving any ISP). It is then carried entirely over the Cloudflare network to Sydney and back to the visitor.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3UAKwOR1Nf5Iql6fuxlIzy/fe1431e1482125175f334647f76820bd/image4.png" />
            
            </figure>
    <div>
      <h3>Why does the Internet need this?</h3>
      <a href="#why-does-the-internet-need-this">
        
      </a>
    </div>
    <p>Customer choice. That’s an important ingredient in helping build a better Internet for our customers: one free of vendor lock-in and built on open Internet standards. We’ve worked with the Azure team to enable this interconnectivity, giving customers the flexibility to choose multiple best-of-breed products without having to worry about high data transfer costs.</p><p>What is even more exciting is working with Microsoft, a company that shares our philosophy of promoting customer flexibility and helping customers resist vendor lock-in:</p><blockquote><p><i>“Microsoft Azure is committed to offering services that make it easy to use offerings from industry leaders like Cloudflare - enabling choice to address customer’s business need."</i>- <i>Jeff Cohen, Partner Group Program Manager for Azure Networking.</i></p></blockquote>
    <div>
      <h3>Easy for customers to get started</h3>
      <a href="#easy-for-customers-to-get-started">
        
      </a>
    </div>
    <p>Cloudflare customers now have the option to leverage Azure Routing Preference and, as a result, use both platforms for their respective features and services, getting the most secure and performant solution.</p><p>Most importantly, customers can take advantage of this <a href="https://azure.microsoft.com/en-us/pricing/details/bandwidth/#:~:text=Internet%20Egress%20(Routed%20via%20Routing%20preference%20transit%20ISP%20network)">lower-cost</a> solution with just three simple steps.</p><p>Step 1: Choose Internet routing on your Azure dashboard for an origin in Azure Storage:</p>
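<p>For origins hosted in an Azure Storage account, the same setting can also be applied from the Azure CLI instead of the portal. A minimal sketch, assuming an authenticated Azure CLI session; the resource group and storage account names are hypothetical:</p>

```shell
# Hypothetical resource names; requires an authenticated Azure CLI session.
# Sets the storage account's routing preference to Internet routing and
# publishes the internet routing endpoints it exposes.
az storage account update \
  --resource-group my-resource-group \
  --name mystorageaccount \
  --routing-choice InternetRouting \
  --publish-internet-endpoints true
```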
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4EGe7dubWEVv7qmoKvBfe0/be8c0748da992e088ec8f21c27c1c5a2/image3.png" />
            
            </figure><p>Step 2: Enable Internet routing on your Firewall and virtual network tab:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6GgMOZngzIyS7hd6wFgvhJ/e08e3a3bae361ceaedc0664316cc4ee6/image6-1.png" />
            
            </figure><p>Step 3: Enter your updated endpoint URLs from Azure into your Cloudflare dashboard:</p>
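<p>If the origin is referenced by a DNS record in Cloudflare, the updated endpoint can also be set through the Cloudflare API rather than the dashboard. A sketch, assuming an existing CNAME record; the zone ID, record ID, hostname, and storage account name are all hypothetical:</p>

```shell
# Hypothetical IDs and hostnames; the API token is read from the environment.
# Points the origin CNAME at the storage account's internet routing endpoint.
curl -X PATCH \
  "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/dns_records/$RECORD_ID" \
  -H "Authorization: Bearer $CLOUDFLARE_API_TOKEN" \
  -H "Content-Type: application/json" \
  --data '{"type":"CNAME","name":"origin.example.com","content":"mystorageaccount-internetrouting.blob.core.windows.net"}'
```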
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/16tz2GrZH3vqp0HpoqOcO3/1936323139e2de751a57a4e240e04e4d/image1-3.png" />
            
            </figure><p>Once enabled, the discount is applied automatically, starting with your next monthly bill. Further details on the discounted rates can be found in <a href="https://azure.microsoft.com/en-us/pricing/details/bandwidth/">Azure’s Bandwidth pricing</a>.</p><p>A number of customers are already enjoying these benefits:</p><blockquote><p><i>“Enabling cost-optimized egress by Cloudflare and Azure via Routing Preference from the Azure dashboard has been very smooth for us with minimal effort. Cloudflare was proactive in reaching out with its customer-centric approach."</i>- <i>Joakim Jamte, Engineering Manager, Bannerflow</i></p></blockquote><blockquote><p><i>"Before taking advantage of the Routing Preference by Azure via Cloudflare, Egress fees were one of the key reasons that restricted us from having more multi-cloud solutions since it can be high and unpredictable at times as the traffic scales. Enabling Routing Preference on the Azure dashboard was quick and easy. It was a one-and-done effort and we get discounted Egress rates on every Azure bill.”</i>- <i>Darin MacRae, Chief Architect / Cloud Computing, MyRadar.com</i></p></blockquote><blockquote><p><i>"Along with Cloudflare's excellent security features and high performing CDN, the data transfer rates from Azure's Routing Preference enabled by Cloudflare make the offer very compelling. Enabling and receiving the discount was very easy and helped us optimize our investment without any effort.”</i>- <i>Arthur Roodenburg, CIO, Act-3D B.V.</i></p></blockquote><p>We’re pleased today to offer this benefit to all Cloudflare customers. If you are interested in taking advantage of Routing Preference, please <a href="https://www.cloudflare.com/integrations/microsoft-azure/#cdn-interconnect-program">reach out</a>.</p> ]]></content:encoded>
            <category><![CDATA[Egress]]></category>
            <category><![CDATA[Connectivity Cloud]]></category>
            <guid isPermaLink="false">6zT2nh8KIA8v2QZXrvEati</guid>
            <dc:creator>Deeksha Lamba</dc:creator>
            <dc:creator>Abhi Das</dc:creator>
        </item>
    </channel>
</rss>