
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/">
    <channel>
        <title><![CDATA[ The Cloudflare Blog ]]></title>
        <description><![CDATA[ Get the latest news on how products at Cloudflare are built, technologies used, and join the teams helping to build a better Internet. ]]></description>
        <link>https://blog.cloudflare.com</link>
        <atom:link href="https://blog.cloudflare.com/" rel="self" type="application/rss+xml"/>
        <language>en-us</language>
        <image>
            <url>https://blog.cloudflare.com/favicon.png</url>
            <title>The Cloudflare Blog</title>
            <link>https://blog.cloudflare.com</link>
        </image>
        <lastBuildDate>Mon, 06 Apr 2026 03:01:16 GMT</lastBuildDate>
        <item>
            <title><![CDATA[Extending Private Network Load Balancing to Layer 4 with Spectrum]]></title>
            <link>https://blog.cloudflare.com/extending-local-traffic-management-load-balancing-to-layer-4-with-spectrum/</link>
            <pubDate>Fri, 31 May 2024 13:00:07 GMT</pubDate>
            <description><![CDATA[ Cloudflare is adding support for all TCP and UDP traffic to our Private Network Load Balancing solution, extending its benefits to more than just  ]]></description>
            <content:encoded><![CDATA[ <p>In 2023, Cloudflare <a href="https://blog.cloudflare.com/elevate-load-balancing-with-private-ips-and-cloudflare-tunnels-a-secure-path-to-efficient-traffic-distribution/"><u>introduced a new load balancing solution</u></a>, supporting Private Network Load Balancing. This gives organizations a way to balance HTTP(S) traffic between private or internal servers within a region-specific data center. Today, we are thrilled to extend those same capabilities to non-HTTP(S) traffic. This new feature is enabled by the integration of Cloudflare Spectrum, Cloudflare Tunnels, and Cloudflare load balancers and is available to enterprise customers. Our customers can now use Cloudflare load balancers for all TCP and UDP traffic destined for private IP addresses, eliminating the need for expensive on-premise load balancers.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6wjcoenAQ9NFW4PyZiqjCQ/9921257aea4486200be51f070c1cb090/image1-15.png" />
            
            </figure>
    <div>
      <h3>A quick primer</h3>
      <a href="#a-quick-primer">
        
      </a>
    </div>
    <p>In this blog post, we will be referring to <a href="https://www.cloudflare.com/learning/performance/what-is-load-balancing/">load balancers</a> at either layer 4 or layer 7. These are, of course, layers of the <a href="https://www.cloudflare.com/learning/ddos/glossary/open-systems-interconnection-model-osi/">OSI model</a>, but more specifically they describe the ingress path used to reach the load balancer. <a href="https://www.cloudflare.com/learning/ddos/what-is-layer-7/">Layer 7</a>, also known as the Application Layer, is where the HTTP(S) protocol exists. Cloudflare is well known for our layer 7 capabilities, which are built around speeding up and protecting websites that run over HTTP(S). When we refer to layer 7 load balancers, we are referring to HTTP(S)-based services. Our layer 7 stack allows Cloudflare to apply services like CDN, WAF, Bot Management, DDoS protection, and more to a customer's website or application to improve performance, availability, and security.</p><p>Layer 4 load balancers operate at a lower level of the OSI model, called the <a href="https://www.cloudflare.com/learning/ddos/glossary/open-systems-interconnection-model-osi/#:~:text=4.%20The%20transport%20layer">Transport Layer</a>, which means they can support a much broader set of services and protocols. At Cloudflare, our public layer 4 load balancers are enabled by a Cloudflare product called <a href="https://developers.cloudflare.com/spectrum/">Spectrum</a>. Spectrum works as a layer 4 reverse proxy. 
This places Cloudflare in front of any <a href="https://www.cloudflare.com/learning/ddos/what-is-a-ddos-attack/">DDoS attacks</a> that may be launched against Spectrum-proxied services, and by using Spectrum in front of your application, your private origin IP address is concealed, which also prevents bad actors from discovering and attacking your origin’s IP address directly.</p><p>Services that use TCP or UDP for transport can leverage Spectrum with a Cloudflare load balancer. Layer 4 load balancing allows us to support other application layer protocols such as SSH, FTP, NTP, and SMTP since they operate over TCP and UDP. Given the breadth of services and protocols this represents, the treatment provided is more generalized. Cloudflare Spectrum supports features such as TLS/SSL offloading, DDoS protection, <a href="https://www.cloudflare.com/application-services/products/argo-smart-routing/">Argo Smart Routing</a>, and session persistence with our layer 4 load balancers.</p>
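<p>To make the layer 4 distinction concrete, here is a minimal sketch of what a layer 4 reverse proxy does (an illustration only, not Spectrum's implementation): it accepts a TCP connection and relays raw bytes to an origin without ever parsing the application protocol, which is why SSH, SMTP, or any other TCP service works behind it unchanged.</p>

```python
import socket
import threading

def pipe(src: socket.socket, dst: socket.socket) -> None:
    """Copy raw bytes from src to dst until src closes its side."""
    try:
        while chunk := src.recv(4096):
            dst.sendall(chunk)
    except OSError:
        pass
    finally:
        try:
            dst.shutdown(socket.SHUT_WR)
        except OSError:
            pass

def serve_l4_proxy(listen_port: int, origin: tuple[str, int]) -> socket.socket:
    """Accept TCP connections and relay them byte-for-byte to the origin.

    The proxy never inspects the payload, so any TCP protocol works
    unchanged. Returns the listening socket (port 0 = pick a free port).
    """
    listener = socket.create_server(("127.0.0.1", listen_port))

    def accept_loop() -> None:
        while True:
            try:
                client, _ = listener.accept()
            except OSError:
                return  # listener was closed
            upstream = socket.create_connection(origin)
            # One thread per direction: client -> origin and origin -> client.
            threading.Thread(target=pipe, args=(client, upstream), daemon=True).start()
            threading.Thread(target=pipe, args=(upstream, client), daemon=True).start()

    threading.Thread(target=accept_loop, daemon=True).start()
    return listener
```

<p>Because the payload is opaque to the proxy, origin concealment falls out naturally: the client only ever sees the proxy's address.</p>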
    <div>
      <h3>Cloudflare’s current load balancing capabilities</h3>
      <a href="#cloudflares-current-load-balancing-capabilities">
        
      </a>
    </div>
    <p>Before we dig into the new features we are announcing, it's important to understand what Cloudflare load balancing supports today and the challenges our customers face with regard to their load balancing needs.</p><p>There are three main load balancing traffic flows that Cloudflare supports today:</p><ol><li><p>Internet-facing load balancers connecting to publicly accessible origins operating at layer 7, which supports HTTP(S)</p></li><li><p>Internet-facing load balancers connecting to publicly accessible origins operating at layer 4 (Spectrum), which supports all TCP-based and UDP-based services such as SSH, FTP, NTP, SMTP, etc.</p></li><li><p>Publicly accessible load balancers connecting to <b>private</b> origins operating at layer 7 HTTP(S) over Cloudflare Tunnels</p></li></ol><p>One of the biggest advantages Cloudflare’s load balancing solutions offer our customers is that there is no hardware to purchase or maintain. Hardware-based load balancers are expensive to purchase, license, operate, and upgrade. “Need more bandwidth? Just buy and install this additional module.” “Need more features? Just buy and install this new license.” “Oh, your hardware load balancer is End-of-Life? Just purchase an entire new kit which we will EOL in a few years!” The upgrade or refresh cycle on a fully integrated hardware load balancer setup can take years and, by the time you finish the planning, implementation, and cutover, it might actually be time to start planning the next refresh.</p><p>Cloudflare eliminates all these concerns and lets you focus on innovation and growth. Your load balancers exist in every Cloudflare data center across the globe, in <a href="https://www.cloudflare.com/network/">over 300 cities</a>, with virtually unlimited scale and capacity. You never need to worry about bandwidth constraints, deployment locations, extra hardware modules, downtime, upgrades, or maintenance windows ever again. 
With Cloudflare’s global Anycast network, every customer connects to a nearby Cloudflare data center and load balancer, where relevant policies, rules, and steering are applied.</p>
    <div>
      <h3>Load balancing more than websites with Cloudflare Spectrum</h3>
      <a href="#load-balancing-more-than-websites-with-cloudflare-spectrum">
        
      </a>
    </div>
    <p>Today, we are excited to announce that Cloudflare Spectrum can now support load balancing traffic to private networks. Private IP origin support for Cloudflare load balancers is very powerful, and we are now extending that support to Cloudflare <a href="https://developers.cloudflare.com/spectrum/">Spectrum</a> as well. This means that any set of private or internal applications that use TCP or UDP can now be locally load balanced via Cloudflare. These services will also benefit from Spectrum’s layer 3/4 DDoS protection and can leverage other features like session persistence without compromising security. So while the ingress to these load balancers is public, the origins to which they distribute traffic can all be private, inaccessible from the public Internet.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/63C3GATpDsujBJLBaboweL/b6a7adeda6c0b3800f45c3f7eb83bf6e/image3-7.png" />
            
            </figure><p>Ordinarily, load balancing to private networks would require expensive on-premise hardware or costly direct physical connections to cloud providers. But, by using Spectrum as the ingress path for TCP and UDP load balancing, customers can keep their origins completely protected and unreachable from the Internet and allow access exclusively through their Cloudflare load balancer – no expensive hardware required. Customers no longer need to manage complex ACLs or security settings to make sure only certain source IP addresses are connecting to the origins. These private origins can be hosted in private data centers, a public cloud, a private cloud, or on-premise.</p>
    <div>
      <h3>How we enabled Spectrum to support private networks</h3>
      <a href="#how-we-enabled-spectrum-to-support-private-networks">
        
      </a>
    </div>
    <p>All of our changes to create this feature center around integrations with Apollo, the unifying service created by the Cloudflare Zero Trust team. You can read their <a href="/from-ip-packets-to-http-the-many-faces-of-our-oxy-framework/">previous blog post on the Oxy framework</a> for more details on how Zero Trust handles and routes traffic. Apollo accepts incoming traffic from supported on-ramps, applies Zero Trust logic as configured by the customer, and then routes the traffic to egress via supported off-ramps. For example, Apollo enables clients connected securely using Cloudflare’s WARP client to communicate over Cloudflare Tunnels with private origins in a customer’s data center. Now, Apollo is being extended to do more.</p><p>When a user creates a load balanced Spectrum app, they choose a hostname and port, and select a Cloudflare load balancer as their origin. This allocates a hostname which will resolve to an IP address where Spectrum will listen for incoming traffic on the customer-configured port. Spectrum makes a call to Cloudflare's internal load balancing service, Director, which responds with the appropriate endpoint, to which Spectrum will proxy the connection. Previously, load balanced Spectrum apps only supported publicly addressable origins. Now, if the response from Director indicates that the traffic is destined for a private origin, Spectrum passes the private origin's IP address and <a href="https://developers.cloudflare.com/cloudflare-one/connections/connect-networks/private-net/tunnel-virtual-networks/">virtual network</a> ID to Apollo, which then proxies the traffic to the customer's private origin.</p><p>In short, new integrations between our Spectrum service and Apollo and between Apollo and Director have allowed us to expand our load balancing offerings not only to layer 4, but also enable us to leverage virtual networks to keep load balanced traffic private and off the public Internet. 
This also sets the stage for integrating load balancing with other traffic on-ramps and off-ramps, such as WARP, in the future. It also opens the door to a number of exciting possibilities like load balancing authenticated device traffic to private networks or even load balancing internal traffic that is never exposed to the public Internet.</p>
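<p>The hand-off described above can be sketched as a small routing decision. Every name below (SteeringAnswer, route_connection, the URI-style return values) is a hypothetical illustration, not Cloudflare's internal interface: if the steering answer carries a virtual network ID, the connection is off-ramped through the tunnel-aware proxy; otherwise it is proxied directly.</p>

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical shape of the answer an internal load balancing service
# might return; not Cloudflare's actual interface.
@dataclass
class SteeringAnswer:
    endpoint: str                              # selected origin IP address
    port: int
    virtual_network_id: Optional[str] = None   # set only for private origins

def route_connection(answer: SteeringAnswer) -> str:
    """Decide which egress path handles the proxied connection.

    Public origins are reachable directly. Private origins must be
    handed off together with the virtual network ID, since private
    IP space can overlap across a customer's networks.
    """
    if answer.virtual_network_id is not None:
        # Off-ramp over a Cloudflare Tunnel via the Zero Trust proxy.
        return f"apollo://{answer.virtual_network_id}/{answer.endpoint}:{answer.port}"
    # Publicly addressable origin: proxy straight to it.
    return f"direct://{answer.endpoint}:{answer.port}"
```

<p>The key design point is that the virtual network ID travels with the origin address, because a bare RFC 1918 address alone is ambiguous.</p>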
    <div>
      <h3>Looking to the future</h3>
      <a href="#looking-to-the-future">
        
      </a>
    </div>
    <p>We are excited to release this new load balancing feature, which enables Cloudflare Spectrum to reach private IP endpoints. Cloudflare load balancers now support steering any TCP- or UDP-based protocol over Cloudflare Tunnels to private IP endpoints that are otherwise not accessible via the public Internet. You can learn more about how to configure this feature on our <a href="https://developers.cloudflare.com/load-balancing/local-traffic-management/">load balancing documentation</a> pages.</p><p>We are just getting started with our Private Network Load Balancing support. There is so much more to come, including support for load balancing internal traffic, enhanced layer 4 session affinity, new steering methods, additional traffic ingress methods, and more!</p> ]]></content:encoded>
            <category><![CDATA[Spectrum]]></category>
            <category><![CDATA[Load Balancing]]></category>
            <category><![CDATA[Cloudflare Zero Trust]]></category>
            <category><![CDATA[Private Network]]></category>
            <category><![CDATA[Private IP]]></category>
            <guid isPermaLink="false">6xgIcezZBRXIokMo0e7gMH</guid>
            <dc:creator>Chris Ward</dc:creator>
            <dc:creator>Brian Batraski</dc:creator>
            <dc:creator>Mathew Jacob</dc:creator>
        </item>
        <item>
            <title><![CDATA[Elevate load balancing with Private IPs and Cloudflare Tunnels: a secure path to efficient traffic distribution]]></title>
            <link>https://blog.cloudflare.com/elevate-load-balancing-with-private-ips-and-cloudflare-tunnels-a-secure-path-to-efficient-traffic-distribution/</link>
            <pubDate>Fri, 08 Sep 2023 13:00:01 GMT</pubDate>
            <description><![CDATA[ We are extremely excited to announce a new addition to our Load Balancing solution, Private Network Load Balancing with deep integrations with Zero Trust!
 ]]></description>
            <content:encoded><![CDATA[ 
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6ILMvjlajN04XDxywfXL1a/fcaf05a762de5ac61baa3b001fd4edc7/image7-1.png" />
            
            </figure><p>In the dynamic world of modern applications, efficient load balancing plays a pivotal role in delivering exceptional user experiences. Customers commonly leverage load balancing to make the best use of their existing infrastructure resources. However, load balancing is not a one-size-fits-all, out-of-the-box solution. As you go deeper into the details of your traffic shaping requirements and your architecture becomes more complex, different flavors of load balancing are usually required to achieve these varying goals, such as steering between data centers for public traffic, creating high availability for critical internal services with private IPs, steering between servers in a single data center, and more. We are extremely excited to announce a new addition to our Load Balancing solution: Private Network Load Balancing, with deep integrations with Zero Trust!</p><p>A common problem businesses run into is that almost no single provider can satisfy all these requirements, resulting in a growing list of vendors to manage, disparate data sources that must be stitched together to get a clear view of your traffic pipeline, and investment in incredibly expensive hardware that is complicated to set up and maintain. Lacking a single source of truth to drive down time to resolution, and a single partner to work with when things are not operating on the ideal path, can be the difference between a proactive, healthy, growing business and one that is reactive and constantly putting out fires. The latter can result in slower development of features and services, reduced revenue, tarnished brand trust, lower adoption - the list goes on!</p><p>For eight years, we have provided top-tier global traffic management (GTM) load balancing capabilities to thousands of customers across the globe. 
But why should the steering intelligence, failover, and reliability we guarantee stop at the front door of the selected data center and only operate with public traffic? We came to the conclusion that we should go even further. Today is the start of a long series of new features that allow traffic steering, failover, session persistence, SSL/TLS offloading and much more to take place between servers after data center selection has occurred! Instead of relying <i>only</i> on the relative weight to determine which server traffic should be sent to, you can now bring the same intelligent steering policies, such as <a href="https://developers.cloudflare.com/load-balancing/understand-basics/traffic-steering/origin-level-steering/least-outstanding-requests-pools/"><u>least outstanding requests steering</u></a> or <a href="https://developers.cloudflare.com/load-balancing/understand-basics/traffic-steering/origin-level-steering/hash-origin-steering/"><u>hash steering</u></a>, within any of your data centers. This also means you have a single partner for <b>all</b> of your load balancing initiatives and a single pane of glass to inform business decisions! Cloudflare is thrilled to introduce the powerful combination of private IP support for Load Balancing with Cloudflare Tunnels and Private Network Load Balancing, offering customers a solution that blends unparalleled efficiency, security, flexibility, and privacy.</p>
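<p>The two steering policies linked above are straightforward to illustrate. As a rough sketch (not Cloudflare's implementation): least outstanding requests favors the origin with the fewest in-flight requests, while hash steering maps a stable key, such as a client IP, to the same origin every time.</p>

```python
import hashlib

def least_outstanding(origins: dict[str, int]) -> str:
    """Pick the origin with the fewest in-flight requests.

    `origins` maps origin name -> current outstanding request count.
    """
    return min(origins, key=origins.get)

def hash_steering(origins: list[str], key: str) -> str:
    """Deterministically map a key (e.g. a client IP) to an origin.

    The same key always selects the same origin, which yields a
    simple form of session affinity across requests.
    """
    digest = hashlib.sha256(key.encode()).digest()
    return origins[int.from_bytes(digest[:8], "big") % len(origins)]
```

<p>Least outstanding requests adapts to uneven origin speed, while hash steering trades that adaptivity for stickiness; which one fits depends on whether the backend keeps per-client state.</p>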
    <div>
      <h3>What is a load balancer?</h3>
      <a href="#what-is-a-load-balancer">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2batHEyi8vzCMEjgy2E2rD/601bc0a81e751b02dc332cc531623546/pasted-image-0.png" />
            
            </figure><p>A Cloudflare load balancer directs a request from a user to the appropriate origin pool within a data center</p><p>Load balancing is functionality that has been around for the last 30 years to help businesses leverage their existing infrastructure resources. <a href="https://www.cloudflare.com/learning/performance/what-is-load-balancing/">Load balancing</a> works by proactively steering traffic away from unhealthy origin servers and, for more advanced solutions, intelligently distributing traffic load based on different steering <a href="https://developers.cloudflare.com/load-balancing/understand-basics/traffic-steering/">algorithms</a>. This process ensures that errors aren’t served to end users and empowers businesses to tightly couple overall business objectives to their traffic behavior. Cloudflare Load Balancing makes it simple to securely and reliably manage your traffic across multiple data centers around the world. With Cloudflare Load Balancing, your traffic is directed reliably, regardless of scale or where it originates, with customizable steering, affinity, and failover. This has a clear advantage over a physical load balancer, since it can be configured easily and traffic doesn’t have to reach one of your data centers to be routed to another location, which would introduce a single point of failure and significant <a href="https://www.cloudflare.com/learning/performance/glossary/what-is-latency/">latency</a>. When compared with other global traffic management load balancers, Cloudflare’s Load Balancing offering is easier to set up, simpler to understand, and fully integrated with the Cloudflare platform as a single product for all load balancing needs.</p>
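<p>The health-based steering described here can be sketched in a few lines; the names and the simplified weighted choice are illustrative assumptions, not Cloudflare's implementation: health monitooring removes failed origins from rotation, and a fallback pool takes over only when the primary pool has no healthy origins left.</p>

```python
from dataclasses import dataclass

@dataclass
class Origin:
    name: str
    healthy: bool
    weight: float = 1.0

def eligible_origins(pool: list[Origin]) -> list[Origin]:
    """Health monitoring removes failed origins from rotation."""
    return [o for o in pool if o.healthy]

def pick_origin(pool: list[Origin], fallback: list[Origin]) -> Origin:
    """Serve from the primary pool; fail over if every origin is down."""
    candidates = eligible_origins(pool) or eligible_origins(fallback)
    if not candidates:
        raise RuntimeError("no healthy origins in primary or fallback pool")
    # Real weighted selection is probabilistic; collapsed here to
    # 'highest weight wins' for brevity.
    return max(candidates, key=lambda o: o.weight)
```

<p>The point of the sketch is the ordering: health filtering happens before any steering decision, so an unhealthy origin can never be selected no matter its weight.</p>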
    <div>
      <h3>What are Cloudflare Tunnels?</h3>
      <a href="#what-are-cloudflare-tunnels">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1Pn6iWjweXKPpg0Kl00s0W/6a0ef1886f7ce5ea114e010e9b5a32eb/Group-3345--1-.png" />
            
            </figure><p>Origins and servers of various types can be connected to Cloudflare using Cloudflare Tunnel. Users can also secure their traffic using WARP, allowing traffic to be secured and managed end to end through Cloudflare.</p><p>In 2018, Cloudflare introduced <a href="https://www.cloudflare.com/products/tunnel">Cloudflare Tunnels</a>, a private, secure connection between your data center and Cloudflare. Traditionally, from the moment an Internet property is deployed, developers spend an enormous amount of time and energy locking it down through access control lists, rotating IP addresses, or more complex solutions like <a href="https://www.cloudflare.com/learning/network-layer/what-is-gre-tunneling/">GRE tunnels</a>. We built Tunnel to help alleviate that burden. With Tunnel, you can create a private link from your origin server directly to Cloudflare without exposing your services to the public Internet or allowing incoming connections through your data center’s firewall. Instead, this private connection is established by running a lightweight daemon, <code>cloudflared</code>, in your data center, which creates a secure, outbound-only connection. This means that only traffic that you’ve configured to pass through Cloudflare can reach your private origin.</p>
    <div>
      <h3>Unleashing the potential of Cloudflare Load Balancing with Cloudflare Tunnels</h3>
      <a href="#unleashing-the-potential-of-cloudflare-load-balancing-with-cloudflare-tunnels">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4fxiGzB4slJt02ZgQppT1d/1eadd4e57c66458024c2247080c7a07d/After-Elevate-Load-Balancing-with-Private-IP-Support-and-Cloudflare-Tunnels.png" />
            
            </figure><p>Cloudflare Load Balancing can easily and securely direct a user’s request to a specific origin within your private data center or public cloud using Cloudflare Tunnels</p><p>Combining Cloudflare Tunnels with Cloudflare Load Balancing allows you to remove your physical load balancers from your data center and have your Cloudflare load balancer reach out to your servers directly via their private IP addresses with health checks, steering, and all other Load Balancing features currently available. Instead of configuring your on-premise load balancer to expose each service and then updating your Cloudflare load balancer, you can configure it all in one place. This means that from the end-user to the server handling the request, all your configuration can be done in a single place – the Cloudflare dashboard. On top of this, you can say goodbye to the multi-hundred-thousand-dollar price tag of hardware appliances, the enormous management overhead, and the sunk cost of investing in a solution whose delivered value has a built-in expiration date.</p><p>Load Balancing serves as the backbone for online services, ensuring seamless traffic distribution across servers or data centers. Traditional load balancing techniques often require exposing services on a data center’s public IP addresses, forcing organizations to create complex configurations vulnerable to security risks and potential data exposure. By harnessing the power of private IP support for Load Balancing in conjunction with Cloudflare Tunnels, Cloudflare is revolutionizing the way businesses <a href="https://www.cloudflare.com/application-services/solutions/">protect and optimize their applications</a>. 
With clear steps to <a href="https://developers.cloudflare.com/cloudflare-one/connections/connect-networks/install-and-setup/tunnel-guide/">install</a> the cloudflared agent to connect your private network to Cloudflare’s network via Cloudflare Tunnels, directly and securely routing traffic into your data centers becomes easier than ever before!</p>
    <div>
      <h3>Publicly exposing services in private data centers is complicated</h3>
      <a href="#publicly-exposing-services-in-private-data-centers-is-complicated">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/fGhHqn2IZj4gMUL24c7lb/16ec2f39fbb2fe37fd26eff7f6d2479f/Before-Elevate-Load-Balancing-with-Private-IP-Support-and-Cloudflare-Tunnels_-A-Secure-Path-to-Efficient-Traffic-Distributio.png" />
            
            </figure><p><sup>A visitor’s request hits a global traffic management (GTM) load balancer directing the request to a data center, then a firewall, then a local load balancer and then an origin</sup></p><p>Load balancing within a private data center can be expensive and difficult to manage. The idea of keeping security first while ensuring ease of use and flexibility for your internal workforce is a tricky balance to strike. It’s not only the ‘how’ of securely exposing internal services, but also how to best balance traffic between servers at a single location within your private network!</p><p>In a private data center, even a very simple website can be fairly complex in terms of networking and configuration. Let’s walk through a simple example of a customer device connecting to a website. A customer device performs a DNS lookup for the business’s website and receives an IP address corresponding to a customer data center. The device then makes an HTTPS request to that IP address, passing the original hostname via <a href="https://www.cloudflare.com/learning/ssl/what-is-sni/">Server Name Indication</a> (SNI). The load balancer at that address forwards the request to the corresponding origin server and returns the response to the customer device.</p><p>This example doesn’t have any advanced functionality and the stack is already difficult to configure:</p><ul><li><p>Expose the service or server on a private IP.</p></li><li><p>Configure your data center’s networking to expose the load balancer on a public IP or IP range.</p></li><li><p>Configure your load balancer to forward requests for that hostname and/or public IP to your server’s private IP.</p></li><li><p>Configure a DNS record for your domain to point to your load balancer’s public IP.</p></li></ul><p>In large enterprises, each of these configuration changes likely requires approval from several stakeholders and must be made through different repositories, websites, and/or private web interfaces. 
Load balancer and networking configurations are often maintained as complex configuration files for Terraform, Chef, Puppet, Ansible or a similar infrastructure-as-code service. These configuration files can be syntax-checked, but they are rarely tested thoroughly prior to deployment: each deployment environment is often unique enough that thorough testing is not feasible given the time and hardware it would require. This means that changes to these files can negatively affect other services within the data center. In addition, opening up an ingress to your data center <a href="https://www.cloudflare.com/learning/security/what-is-an-attack-surface/">widens the attack surface</a>, exposing you to security risks such as <a href="https://www.cloudflare.com/learning/ddos/what-is-a-ddos-attack/">DDoS attacks</a> or catastrophic data breaches. To make things worse, each vendor has a different interface or API for configuring their devices or services. For example, some <a href="https://www.cloudflare.com/learning/dns/glossary/what-is-a-domain-name-registrar/">registrars</a> only have XML APIs while others have JSON REST APIs. Each device configuration may have different Terraform providers or Ansible playbooks. This results in complex configurations accumulating over time that are difficult to consolidate or standardize, inevitably resulting in technical debt.</p><p>Now let’s add additional origins. For each additional origin for our service, we have to set up and expose that origin and configure the physical load balancer to use it. Now let’s add another data center, which requires yet another solution to distribute traffic across data centers. This results in one system for global traffic management and another, separate system for local traffic. These solutions have historically come from different vendors and must be configured in different ways, even though they serve the same purpose: load balancing. 
This makes managing your web traffic unnecessarily difficult. Why should you have to configure your origins in two different load balancers? Why can’t you manage all the traffic for all the origins for a service in the same place?</p>
    <div>
      <h3>Simpler and better: Load Balancing with Tunnels</h3>
      <a href="#simpler-and-better-load-balancing-with-tunnels">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5uBuu8as4xTsf4QLZtpjt7/b361efe686342832c1c1de254e22cccf/pasted-image-0--1-.png" />
            
            </figure><p>Cloudflare Load Balancing can manage traffic for all your offices, data centers, remote users, public clouds, private clouds and hybrid clouds in one place</p><p>With Cloudflare Load Balancing and Cloudflare Tunnel, you can manage all your public and private origins in one place: the Cloudflare dashboard. Cloudflare load balancers can be configured entirely through the dashboard UI or the Cloudflare API, with full parity between the two. There’s no need to <a href="https://www.cloudflare.com/learning/access-management/what-is-ssh/">SSH</a> or open a remote desktop to modify load balancer configurations for your public or private servers.</p><p>With Cloudflare Tunnel <a href="https://developers.cloudflare.com/cloudflare-one/connections/connect-networks/install-and-setup/tunnel-guide/">set up and running</a> in your data center, everything is ready to connect your origin server to the Cloudflare network and load balancers. You do not need to configure any ingress to your data center since Cloudflare Tunnel operates only over outbound connections and can securely reach out to privately addressed services inside your data center. To expose your service to Cloudflare, you just <a href="https://developers.cloudflare.com/cloudflare-one/connections/connect-networks/private-net/tunnel-virtual-networks/#route-ips-over-virtual-networks">set up your private IP range to be routed over that tunnel</a>. Then, you can create a Cloudflare load balancer and input the corresponding private IP address and <a href="https://developers.cloudflare.com/load-balancing/understand-basics/traffic-management/">virtual network ID into your origin pool</a>. After that, Cloudflare manages the DNS and load balancing across your private servers. 
Now your origin is receiving traffic exclusively via Cloudflare Tunnel and your physical load balancer is no longer needed!</p><p>This groundbreaking integration enables organizations to deploy load balancers while keeping their applications securely shielded from the public Internet. The customer’s traffic passes through Cloudflare’s data centers, allowing customers to continue to take full advantage of Cloudflare’s security and performance services. Also, by leveraging Cloudflare Tunnels, traffic between Cloudflare and customer origins remains isolated within trusted networks, bolstering privacy, security, and peace of mind.</p>
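<p>Conceptually, the configuration step above boils down to telling the load balancer which private address to reach and over which virtual network. Below is a hedged sketch of the origin pool body one might construct; the field names mirror Cloudflare's public Load Balancing API documentation, but treat this as an illustration and verify against the current API reference before relying on it.</p>

```python
import json

def private_origin_pool(name: str, private_ip: str, vnet_id: str) -> str:
    """Build a JSON body for an origin pool whose single origin is a
    private IP reached over a Cloudflare Tunnel virtual network.

    Sketch only: field names follow the public Load Balancing API
    docs but should be checked against the current API reference.
    """
    payload = {
        "name": name,
        "origins": [
            {
                "name": f"{name}-origin",
                "address": private_ip,          # RFC 1918 address, never public
                "enabled": True,
                "virtual_network_id": vnet_id,  # which tunnel-routed network owns this IP
            }
        ],
    }
    return json.dumps(payload)
```

<p>The virtual network ID matters because private address space can overlap across networks; pairing it with the address makes the origin unambiguous.</p>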
    <div>
      <h3>The advantages of Private IP support with Cloudflare Tunnels</h3>
      <a href="#the-advantages-of-private-ip-support-with-cloudflare-tunnels">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4zxhWEuT3ZCugzEXSDSHiL/ce99377d9cdbd29aebe95130ac8dd6d1/pasted-image-0--2-.png" />
            
            </figure><p><sup>Cloudflare Load Balancing works in conjunction with all the security and privacy products that Cloudflare has to offer including DDoS protection, Web Application Firewall and Bot Management</sup></p><p><b>Unified Traffic Management</b>: All the features and ease of use that were part of Cloudflare Load Balancing for Global Traffic Management are also available with Private Network Load Balancing. You can configure your public and private origins in one dashboard instead of across several services and vendors. Now, all your private origins can benefit from the features that Cloudflare Load Balancing is known for: instant failover, customizable steering between data centers, ease of use, custom rules and configuration updates in a matter of seconds. They will also benefit from our newer features including least connection steering, least outstanding request steering, and session affinity by header. This is just a small subset of the expansive feature set for Load Balancing. See our <a href="https://developers.cloudflare.com/load-balancing/"><u>dev docs</u></a> for more features and details on the offering.</p><p><b>Enhanced Security</b>: By combining private IP support with Cloudflare Tunnels, organizations can fortify their security posture and protect sensitive data. With private IP addresses and encrypted connections via Cloudflare Tunnel, the risk of unauthorized access and potential attacks is significantly reduced – traffic remains within trusted networks. You can also configure <a href="https://developers.cloudflare.com/cloudflare-one/policies/access/">Cloudflare Access</a> to add single sign-on support for your application and restrict your application to a subset of authorized users. 
In addition, you still benefit from Firewall rules, Rate Limiting rules, Bot Management, DDoS protection, and all the other Cloudflare products available today, allowing comprehensive security configurations.</p><p><b>Uncompromising Privacy</b>: As data privacy continues to take center stage, businesses must ensure the confidentiality of user information. Cloudflare's private IP support with Cloudflare Tunnels enables organizations to segregate applications and keep sensitive data within their private network boundaries. Custom rules also allow you to direct traffic for specific devices to specific data centers. For example, you can use custom rules to direct traffic from Eastern and Western Europe to your European data centers, so you can easily keep those users’ data within Europe. This minimizes the exposure of data to external entities, preserving user privacy and complying with strict privacy regulations across different geographies.</p><p><b>Flexibility &amp; Reliability</b>: Scale and adaptability are among the foundations of a well-operating business. <a href="https://www.cloudflare.com/learning/access-management/how-to-implement-zero-trust/">Implementing solutions</a> that fit your business’ needs today is not enough. Customers must find solutions that meet their needs for the next three or more years. The blend of Load Balancing with Cloudflare Tunnels within our <a href="https://www.cloudflare.com/zero-trust/solutions/">Zero Trust solution</a> is the very definition of flexibility and reliability. Changes to load balancer configurations propagate around the world in a matter of seconds, making load balancers an effective way to respond to incidents. Instant failover, health monitoring, and steering policies all help to maintain high availability for your applications, so you can deliver the reliability that your users expect.
This is all in addition to best-in-class <a href="https://www.cloudflare.com/learning/security/glossary/what-is-zero-trust/">Zero Trust</a> capabilities that are deeply integrated, such as Secure Web Gateway (SWG), remote browser isolation, network logs, and data loss prevention.</p><p><b>Streamlined Infrastructure</b>: Organizations can consolidate their network architecture and establish secure connections across distributed environments. This unification reduces complexity, lowers operational overhead, and facilitates efficient resource allocation. Whether you need to apply a global traffic manager to intelligently direct traffic between data centers within your private network, or steer between specific servers after data center selection has taken place, there is now a clear, single lens to manage your global and local traffic, regardless of whether the source or destination of the traffic is public or private. Complexity can be a large hurdle to achieving and maintaining fast, agile business units. Consolidating into a single provider, like Cloudflare, that provides security, reliability, and observability not only saves significant cost but also lets your teams move faster and focus on growing their business, enhancing critical services, and developing incredible features, rather than patching together infrastructure that may not work in a few years. Leave the heavy lifting to us, and let us empower you and your team to focus on creating amazing experiences for your employees and end-users.</p><p>The lack of agility, flexibility, and lean operations of hardware appliances for local traffic does not justify the hundreds of thousands of dollars spent on them, nor the huge overhead of managing CPU, memory, power, cooling, and more.
Instead, we want to help businesses move this logic to the cloud by abstracting away the needless overhead, letting teams focus on what they do best: building amazing experiences. Cloudflare, in turn, does what we do best: protecting, accelerating, and delivering heightened reliability. Stay tuned for more updates on Cloudflare's Private Network Load Balancing and how it can reduce architecture complexity while bringing more insight, security, and control to your teams. In the meantime, check out our new <a href="https://cf-assets.www.cloudflare.com/slt3lc6tev37/7siMQh0goJJnH4PYbAzOxC/f4a66ebdf20cca2ec85c2b9261fb8a38/Optimize-Web-Performance.pdf"><u>whitepaper</u></a>!</p>
    <div>
      <h3>Looking to the future</h3>
      <a href="#looking-to-the-future">
        
      </a>
    </div>
    <p>Cloudflare's impactful solution, private IP support for Load Balancing with Cloudflare Tunnels as part of the Zero Trust solution, reaffirms our commitment to providing cutting-edge tools that prioritize security, privacy, and performance. By leveraging private IP addresses and secure tunnels, Cloudflare empowers businesses to fortify their network infrastructure while ensuring compliance with regulatory requirements. With enhanced security, uncompromising privacy, and streamlined infrastructure, load balancing becomes a powerful driver of efficient and secure public or private services.</p><p>As a business grows and its systems scale up, it will need the features that Cloudflare Load Balancing is known for: health monitoring, steering, and failover. As availability requirements increase due to growing demands and standards from end-users, customers can add health checks, enabling automatic failover to healthy servers when a server begins to fail. When the business begins to receive more traffic from around the world, it can create new pools for different regions and use dynamic steering to reduce latency between the user and the server. For intensive or long-running requests, such as complex datastore queries, customers can benefit from leveraging <a href="https://developers.cloudflare.com/load-balancing/understand-basics/traffic-steering/origin-level-steering/least-outstanding-requests-pools/"><u>least outstanding requests steering</u></a> to reduce the number of concurrent requests per server. Before, this could all be done with publicly addressable IPs, but it is now available for pools with public IPs, private servers, or combinations of the two. Private Network Load Balancing is live and ready to use today!
Check out our <a href="https://developers.cloudflare.com/load-balancing/understand-basics/traffic-management/"><u>dev docs for instructions on how to get started</u></a>.</p><p>Stay tuned for our next addition: new Load Balancing onramp support for Spectrum and WARP with Cloudflare Tunnels and private IPs for your <a href="https://developers.cloudflare.com/fundamentals/get-started/concepts/network-layers/">Layer 4</a> traffic, allowing us to support TCP and UDP applications in your private data centers!</p> ]]></content:encoded>
            <category><![CDATA[Load Balancing]]></category>
            <category><![CDATA[Cloudflare Tunnel]]></category>
            <category><![CDATA[Traffic]]></category>
            <guid isPermaLink="false">6WBtRI0c6K4SqCsCAd3hgn</guid>
            <dc:creator>Brian Batraski</dc:creator>
            <dc:creator>Mathew Jacob</dc:creator>
        </item>
        <item>
            <title><![CDATA[Building Waiting Room on Workers and Durable Objects]]></title>
            <link>https://blog.cloudflare.com/building-waiting-room-on-workers-and-durable-objects/</link>
            <pubDate>Wed, 16 Jun 2021 13:43:09 GMT</pubDate>
            <description><![CDATA[ Our product is specifically built for customers who experience high volumes of traffic, so we needed to run code at the edge in a highly scalable manner. Cloudflare has a great culture of building upon its own products, so we naturally thought of Workers. ]]></description>
            <content:encoded><![CDATA[ <p></p><p>In January, we announced the Cloudflare <a href="/cloudflare-waiting-room/">Waiting Room</a>, which has been available to select customers through <a href="/project-fair-shot/">Project Fair Shot</a> to help COVID-19 vaccination web applications handle demand. Back then, we mentioned that our system was built on top of Cloudflare Workers and the then brand new <a href="/introducing-workers-durable-objects/">Durable Objects</a>. In the coming days, we are making Waiting Room available to customers on our Business and Enterprise plans. As we are expanding availability, we are taking this opportunity to share how we came up with this design.</p>
    <div>
      <h2>What does the Waiting Room do?</h2>
      <a href="#what-does-the-waiting-room-do">
        
      </a>
    </div>
    <p>You may have seen lines of people queueing in front of stores or other buildings during sales for a new sneaker or phone. That is because stores have restrictions on how many people can be inside at the same time. Every store has its own limit based on the size of the building and other factors. If more people want to get inside than the store can hold, they have to wait in line outside.</p><p>The same situation applies to web applications. When you build a web application, you have to budget for the infrastructure to run it. You make that decision according to how many users you think the site will have. But sometimes, the site can see surges of users above what was initially planned. This is where the Waiting Room can help: it stands between users and the web application and automatically creates an orderly queue during traffic spikes.</p><p>The main job of the Waiting Room is to protect a customer’s application while providing a good user experience. To do that, it must make sure that the number of users of the application around the world does not exceed limits set by the customer. Using this product should not degrade performance for end users, so it should not add significant latency and should admit them automatically. In short, this product has three main requirements: respect the customer’s limits for users on the web application, keep latency low, and provide a seamless end user experience.</p><p>When there are more users trying to access the web application than the limits the customer has configured, new users are given a cookie and greeted with a waiting room page. This page displays their estimated wait time and automatically refreshes until the user is automatically admitted to the web application.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/hNkOnNnIvELvBuHVYlTWz/8871260737d007f14de2d61e3a6f371b/image2-3.png" />
            
            </figure>
    <div>
      <h2>Configuring Waiting Rooms</h2>
      <a href="#configuring-waiting-rooms">
        
      </a>
    </div>
    <p>The important configurations that define how the waiting room operates are:</p><ol><li><p><i>Total Active Users</i> - the total number of active users that can be using the application at any given time</p></li><li><p><i>New Users Per Minute</i> - how many new users per minute are allowed into the application, and</p></li><li><p><i>Session Duration</i> - how long a user session lasts. Note: the session is renewed as long as the user is active. We terminate it after <i>Session Duration</i> minutes of inactivity.</p></li></ol>
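<p>Taken together, these settings can be modeled as a small configuration object. The sketch below is illustrative only; the type and field names are ours, not the product's API:</p>

```typescript
// Illustrative model of the three Waiting Room settings described above.
// The type and field names are ours, not the product's API.
interface WaitingRoomConfig {
  totalActiveUsers: number;       // max users on the application at once
  newUsersPerMinute: number;      // admission rate limit
  sessionDurationMinutes: number; // inactivity window before a session ends
}

const config: WaitingRoomConfig = {
  totalActiveUsers: 250,
  newUsersPerMinute: 200,
  sessionDurationMinutes: 5,
};
```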
    <div>
      <h2>How does the waiting room work?</h2>
      <a href="#how-does-the-waiting-room-work">
        
      </a>
    </div>
    <p>If a web application is behind Cloudflare, every request from an end user to the web application will go to a Cloudflare data center close to them. If the web application enables the waiting room, Cloudflare issues a ticket to this user in the form of an encrypted cookie.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2CXv57H2vr7pw2HBQtvvJN/ff8fb77e20f48dab2bccea0a0e947b42/image6-1.png" />
            
            </figure><p>Waiting Room Overview</p><p>At any given moment, every waiting room has a limit on the number of users that can go to the web application. This limit is based on the customer configuration and the number of users currently on the web application. We refer to the number of users that can go into the web application at any given time as the number of user slots. The total number of user slots is equal to the limit configured by the customer minus the total number of users that have been let through.</p><p>When a traffic surge happens on the web application, the number of user slots available keeps decreasing. Current user sessions need to end before new users go in. So user slots keep decreasing until there are no more slots. At this point, the waiting room starts queueing.</p>
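<p>The slot arithmetic described above can be sketched in a few lines (the function name is ours; the real implementation tracks these counts across data centers):</p>

```typescript
// User slots = customer-configured limit minus users already let through.
// Clamped at zero here for simplicity; when no slots remain, the waiting
// room starts queueing. (Function name is ours.)
function userSlotsAvailable(totalActiveUsersLimit: number, activeUsers: number): number {
  return Math.max(0, totalActiveUsersLimit - activeUsers);
}

userSlotsAvailable(250, 240); // 10 slots left
userSlotsAvailable(250, 250); // 0: the waiting room starts queueing
```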
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3YupAjYjEvXX15xjhGJQQH/af04e39e638aef93901e4b6fa4195600/image1-2.png" />
            
            </figure><p>The chart above is a customer's traffic to a web application between 09:40 and 11:30. The configuration for total active users is set to 250 users (yellow line). As time progresses there are more and more users on the application. The number of user slots available (orange line) in the application keeps decreasing as more users get into the application (green line). When there are more users on the application, the number of slots available decreases and eventually users start queueing (blue line). Queueing users ensures that the total number of active users stays around the configured limit.</p><p>To effectively calculate the user slots available, every service at the edge data centers should let its peers know how many users it lets through to the web application.</p><p>Coordination within a data center is faster and more reliable than coordination between many different data centers. So we decided to divide the user slots available on the web application to individual limits for each data center. The advantage of doing this is that only the data center limits will get exceeded if there is a delay in traffic information getting propagated. This ensures we don’t overshoot by much even if there is a delay in getting the latest information.</p><p>The next step was to figure out how to divide this information between data centers. For this we decided to use the historical traffic data on the web application. More specifically, we track how many different users tried to access the application across every data center in the preceding few minutes. The great thing about historical traffic data is that it's historical and cannot change anymore. So even with a delay in propagation, historical traffic data will be accurate even when the current traffic data is not.</p><p>Let's see an actual example: the current time is <code>Thu, 27 May 2021 16:33:20 GMT</code>. 
For the minute <code>Thu, 27 May 2021 16:31:00 GMT</code> there were <code>50</code> users in Nairobi and <code>50</code> in Dublin. For the minute <code>Thu, 27 May 2021 16:32:00 GMT</code> there were <code>45</code> users in Nairobi and <code>55</code> in Dublin. This was the only traffic on the application during that time.</p><p>Every data center looks at what the share of traffic to each data center was two minutes in the past. For <code>Thu, 27 May 2021 16:33:20 GMT</code> that value is <code>Thu, 27 May 2021 16:31:00 GMT</code>.</p>
            <pre><code>Thu, 27 May 2021 16:31:00 GMT: 
{
  Nairobi: 0.5, //50/100(total) users
  Dublin: 0.5,  //50/100(total) users
},
Thu, 27 May 2021 16:32:00 GMT: 
{
  Nairobi: 0.45, //45/100(total) users
  Dublin: 0.55,  //55/100(total) users
}</code></pre>
            <p>For the minute <code>Thu, 27 May 2021 16:33:00 GMT</code>, the number of user slots available will be divided equally between Nairobi and Dublin as the traffic ratio for <code>Thu, 27 May 2021 16:31:00 GMT</code> is <code>0.5</code> and <code>0.5</code>. So, if there are 1000 slots available, Nairobi will be able to send 500 and Dublin can send 500.</p><p>For the minute <code>Thu, 27 May 2021 16:34:00 GMT</code>, the number of user slots available will be divided using the ratio 0.45 (Nairobi) to 0.55 (Dublin). So if there are 1000 slots available, Nairobi will be able to send 450 and Dublin can send 550.</p>
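<p>The ratio-based division works out to a simple proportional split. A sketch, with names of our own choosing:</p>

```typescript
// Split the globally available user slots among data centers using each
// one's share of traffic from two minutes ago (names are ours).
function divideSlots(
  slotsAvailable: number,
  trafficShare: Record<string, number>, // e.g. { Nairobi: 0.45, Dublin: 0.55 }
): Record<string, number> {
  const perDataCenter: Record<string, number> = {};
  for (const [dc, share] of Object.entries(trafficShare)) {
    perDataCenter[dc] = Math.floor(slotsAvailable * share);
  }
  return perDataCenter;
}

divideSlots(1000, { Nairobi: 0.45, Dublin: 0.55 }); // { Nairobi: 450, Dublin: 550 }
```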
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2pT9KvjCcU8g9kTWHMU2Qt/ae0d2abbede2b05b269f751c607517e8/image8.png" />
            
            </figure><p>The service at the edge data centers counts the number of users it let into the web application. It will start queueing when the data center limit is approached. Per-data-center limits that adjust based on historical traffic give us a system that doesn’t need to communicate often between data centers.</p>
    <div>
      <h3>Clustering</h3>
      <a href="#clustering">
        
      </a>
    </div>
    <p>In order to let people access the application fairly, we need a way to keep track of their position in the queue. It is not practical to track every end user in the waiting room individually, so we cluster users who arrive around the same time into a single time bucket. Each bucket has an identifier (bucketId) calculated from the time the user first tried to visit the waiting room: all the users who visited the waiting room between 19:51:00 and 19:51:59 are assigned the bucketId 19:51:00.</p>
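<p>The clustering step amounts to truncating the first-visit time to the start of its minute. A minimal sketch (the function name is ours):</p>

```typescript
// Cluster users into one-minute buckets: truncate the first-visit time to
// the start of its minute and use the UTC string as the bucketId.
// (Function name is ours.)
function bucketIdFor(firstVisit: Date): string {
  const truncated = new Date(firstVisit);
  truncated.setUTCSeconds(0, 0); // zero out seconds and milliseconds
  return truncated.toUTCString();
}

bucketIdFor(new Date("2021-05-26T19:51:42Z")); // "Wed, 26 May 2021 19:51:00 GMT"
```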
    <div>
      <h3>What is in the cookie returned to an end user?</h3>
      <a href="#what-is-in-the-cookie-returned-to-an-end-user">
        
      </a>
    </div>
    <p>We mentioned an encrypted cookie that is assigned to the user when they first visit the waiting room. Every time the user comes back, they bring this cookie with them. The cookie is a ticket for the user to get into the web application. The content below is the typical information the cookie contains when visiting the web application. This user first visited around <code>Wed, 26 May 2021 19:51:00 GMT</code>, waited for around 10 minutes and got accepted on <code>Wed, 26 May 2021 20:01:13 GMT</code>.</p>
            <pre><code>{
  "bucketId": "Wed, 26 May 2021 19:51:00 GMT",
  "lastCheckInTime": "Wed, 26 May 2021 20:01:13 GMT",
  "acceptedAt": "Wed, 26 May 2021 20:01:13 GMT"
}</code></pre>
            <p>Here</p><p><i>bucketId</i> - the bucketId is the cluster the ticket is assigned to. This tracks the position in the queue.</p><p><i>acceptedAt</i> - the time when the user got accepted to the web application for the first time.</p><p><i>lastCheckInTime</i> - the time when the user was last seen in the waiting room or the web application.</p><p>Once a user has been let through to the web application, we have to check how long they are eligible to spend there. Our customers can customize how long a user spends on the web application using <i>Session Duration</i>. Whenever we see an accepted user we set the cookie to <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Set-Cookie">expire</a> <i>Session Duration</i> minutes from when we last saw them.</p>
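<p>The renewal rule can be sketched as: on every request from an accepted user, push the cookie's expiry out to <i>Session Duration</i> minutes after the last check-in. The helper below is illustrative, not the actual implementation:</p>

```typescript
// Renew an accepted user's session: the cookie expires Session Duration
// minutes after we last saw them (helper name is ours).
function cookieExpiry(lastCheckInTime: Date, sessionDurationMinutes: number): Date {
  return new Date(lastCheckInTime.getTime() + sessionDurationMinutes * 60_000);
}

cookieExpiry(new Date("2021-05-26T20:01:13Z"), 5).toUTCString();
// "Wed, 26 May 2021 20:06:13 GMT"
```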
    <div>
      <h3>Waiting Room State</h3>
      <a href="#waiting-room-state">
        
      </a>
    </div>
    <p>Previously we talked about the concept of user slots and how we can function even when there is a delay in communication between data centers. The waiting room state helps to accomplish this. It is formed by historical data of events happening in different data centers. So when a waiting room is first created, there is no waiting room state as there is no recorded traffic. The only information available is the customer’s configured limits. Based on that, we start letting users in. In the background, the service (introduced later in this post as the Data Center Durable Object) running in the data center periodically reports the tickets it has issued to a coordinating service and periodically gets a response back about things happening around the world.</p><p>As time progresses, more and more users with different bucketIds show up in different parts of the globe. Aggregating this information from the different data centers gives the waiting room state.</p><p>Let's look at an example: there are two data centers, one in Nairobi and the other in Dublin. When there are no user slots available for a data center, users start getting queued. Different users who were assigned different bucketIds get queued. The data center state from Dublin looks like this:</p>
            <pre><code>activeUsers: 50,
buckets: 
[  
  {
    key: "Thu, 27 May 2021 15:55:00 GMT",
    data: 
    {
      waiting: 20,
    }
  },
  {
    key: "Thu, 27 May 2021 15:56:00 GMT",
    data: 
    {
      waiting: 40,
    }
  }
]</code></pre>
            <p>The same thing is happening in Nairobi and the data from there looks like this:</p>
            <pre><code>activeUsers: 151,
buckets: 
[ 
  {
    key: "Thu, 27 May 2021 15:54:00 GMT",
    data: 
    {
      waiting: 2,
    },
  },
  {
    key: "Thu, 27 May 2021 15:55:00 GMT",
    data: 
    {
      waiting: 30,
    }
  },
  {
    key: "Thu, 27 May 2021 15:56:00 GMT",
    data: 
    {
      waiting: 20,
    }
  }
]</code></pre>
            <p>This information from data centers is reported in the background and aggregated to form a data structure similar to the one below:</p>
            <pre><code>activeUsers: 201, // 151(Nairobi) + 50(Dublin)
buckets: 
[  
  {
    key: "Thu, 27 May 2021 15:54:00 GMT",
    data: 
    {
      waiting: 2, // 2 users from (Nairobi)
    },
  },
  {
    key: "Thu, 27 May 2021 15:55:00 GMT", 
    data: 
    {
      waiting: 50, // 20 from Nairobi and 30 from Dublin
    }
  },
  {
    key: "Thu, 27 May 2021 15:56:00 GMT",
    data: 
    {
      waiting: 60, // 20 from Nairobi and 40 from Dublin
    }
  }
]</code></pre>
            <p>The data structure above is a sorted list of all the bucketIds in the waiting room. The <code>waiting</code> field has information about how many people are waiting with a particular bucketId. The <code>activeUsers</code> field has information about the number of users who are active on the web application.</p><p>Imagine the limits this customer has set in the dashboard are</p><p><i>Total Active Users</i> - 200, <i>New Users Per Minute</i> - 200</p><p>As per their configuration, only 200 users can be on the web application at any time. So the user slots available for the waiting room state above are 200 - 201 (activeUsers) = -1. No one can go in, and users get queued.</p><p>Now imagine that some users have finished their session and activeUsers is now 148.</p><p>Now userSlotsAvailable = 200 - 148 = 52 users. We should let the 52 users who have been waiting the longest into the application. We achieve this by giving the eligible slots to the oldest buckets in the queue. In the example below, 2 users are waiting from bucket <code>Thu, 27 May 2021 15:54:00 GMT</code> and 50 users are waiting from bucket <code>Thu, 27 May 2021 15:55:00 GMT</code>. These are the oldest buckets in the queue, which get the eligible slots.</p>
            <pre><code>activeUsers: 148,
buckets: 
[  
  {
    key: "Thu, 27 May 2021 15:54:00 GMT",
    data: 
    {
      waiting: 2,
      eligibleSlots: 2,
    },
  },
  {
    key: "Thu, 27 May 2021 15:55:00 GMT",
    data: 
    {
      waiting: 50,
      eligibleSlots: 50,
    }
  },
  {
    key: "Thu, 27 May 2021 15:56:00 GMT",
    data: 
    {
      waiting: 60,
      eligibleSlots: 0,
    }
  }
]</code></pre>
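<p>The allocation of eligible slots to the oldest buckets can be sketched as a single pass over the sorted bucket list (names are ours):</p>

```typescript
// Give available user slots to the oldest buckets first. The bucket list
// is already sorted oldest-first, as in the state above (names are ours).
interface QueueBucket {
  key: string;
  waiting: number;
  eligibleSlots: number;
}

function allocateEligibleSlots(buckets: QueueBucket[], slotsAvailable: number): void {
  let remaining = Math.max(0, slotsAvailable);
  for (const bucket of buckets) {
    bucket.eligibleSlots = Math.min(bucket.waiting, remaining);
    remaining -= bucket.eligibleSlots;
  }
}

const queue: QueueBucket[] = [
  { key: "Thu, 27 May 2021 15:54:00 GMT", waiting: 2, eligibleSlots: 0 },
  { key: "Thu, 27 May 2021 15:55:00 GMT", waiting: 50, eligibleSlots: 0 },
  { key: "Thu, 27 May 2021 15:56:00 GMT", waiting: 60, eligibleSlots: 0 },
];
allocateEligibleSlots(queue, 200 - 148); // 52 slots: buckets get 2, 50, 0
```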
            <p>If there are eligible slots available for all the users in their bucket, then they can be sent to the web application from any data center. This ensures the fairness of the waiting room.</p><p>There is another case that can happen where we do not have enough eligible slots for a whole bucket. When this happens things get a little more complicated as we cannot send everyone from that bucket to the web application. Instead, we allocate a share of eligible slots to each data center.</p>
            <pre><code>key: "Thu, 27 May 2021 15:56:00 GMT",
data: 
{
  waiting: 60,
  eligibleSlots: 20,
}</code></pre>
            <p>As we did before, we use the ratio of past traffic from each data center to decide how many users it can let through. So if the current time is <code>Thu, 27 May 2021 16:34:10 GMT</code> both data centers look at the traffic ratio in the past at <code>Thu, 27 May 2021 16:32:00 GMT</code> and send a subset of users from those data centers to the web application.</p>
            <pre><code>Thu, 27 May 2021 16:32:00 GMT: 
{
  Nairobi: 0.25, // 0.25 * 20 = 5 eligibleSlots
  Dublin: 0.75,  // 0.75 * 20 = 15 eligibleSlots
}</code></pre>
            
    <div>
      <h3>Estimated wait time</h3>
      <a href="#estimated-wait-time">
        
      </a>
    </div>
    <p>When a request comes from a user, we look at their bucketId. Based on the bucketId, it is possible to know how many people are in front of the user in the sorted list. Similar to how we track activeUsers, we also calculate the average number of users going to the web application per minute. Dividing the number of people in front of the user by the average number of users going to the web application per minute gives us the estimated wait time. This is what is shown to the user who visits the waiting room.</p>
            <pre><code>avgUsersToWebApplication:  30,
activeUsers: 148,
buckets: 
[  
  {
    key: "Thu, 27 May 2021 15:54:00 GMT",
    data: 
    {
      waiting: 2,
      eligibleSlots: 2,
    },
  },
  {
    key: "Thu, 27 May 2021 15:55:00 GMT",
    data: 
    {
      waiting: 50,
      eligibleSlots: 50,
    }
  },
  {
    key: "Thu, 27 May 2021 15:56:00 GMT",
    data: 
    {
      waiting: 60,
      eligibleSlots: 0,
    }
  }
]</code></pre>
            <p>In the case above, for a user with bucketId <code>Thu, 27 May 2021 15:56:00 GMT</code>, there are <code>60</code> users ahead of them. With <code>avgUsersToWebApplication</code> at <code>30</code> per minute, the estimated time to get into the web application is <code>60/30</code>, which is <code>2</code> minutes.</p>
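<p>Putting the estimate together: walk the sorted buckets up to the user's own, count everyone still waiting, and divide by the admission rate. In this sketch we subtract users who already hold eligible slots, which reproduces the numbers above; the actual implementation may differ in details, and all names are ours:</p>

```typescript
// Estimate the wait: users still queued in buckets up to and including the
// user's own (those holding eligible slots are already being admitted),
// divided by the average admission rate per minute.
function estimatedWaitMinutes(
  buckets: { key: string; waiting: number; eligibleSlots: number }[], // oldest first
  myBucketId: string,
  avgUsersToWebApplication: number, // admissions per minute
): number {
  let ahead = 0;
  for (const bucket of buckets) {
    ahead += bucket.waiting - bucket.eligibleSlots;
    if (bucket.key === myBucketId) break;
  }
  return Math.ceil(ahead / avgUsersToWebApplication);
}

const state = [
  { key: "Thu, 27 May 2021 15:54:00 GMT", waiting: 2, eligibleSlots: 2 },
  { key: "Thu, 27 May 2021 15:55:00 GMT", waiting: 50, eligibleSlots: 50 },
  { key: "Thu, 27 May 2021 15:56:00 GMT", waiting: 60, eligibleSlots: 0 },
];
estimatedWaitMinutes(state, "Thu, 27 May 2021 15:56:00 GMT", 30); // 2 minutes
```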
    <div>
      <h2>Implementation with Workers and Durable Objects</h2>
      <a href="#implementation-with-workers-and-durable-objects">
        
      </a>
    </div>
    <p>Now that we have talked about the user experience and the algorithm, let’s focus on the implementation. Our product is specifically built for customers who experience high volumes of traffic, so we needed to run code at the edge in a highly scalable manner. Cloudflare has a great culture of building upon its own products, so we naturally thought of <a href="/introducing-cloudflare-workers/">Workers</a>. The Workers platform uses <a href="/cloud-computing-without-containers/">Isolates</a> to scale up and can scale horizontally as there are more requests.</p><p>The Workers product has an ecosystem of tools like <a href="https://github.com/cloudflare/wrangler">wrangler</a> which help us to iterate and debug things quickly.</p><p>Workers also reduce long-term operational work.</p><p>For these reasons, the decision to build on Workers was easy. The more complex choice in our design was for the coordination. As we have discussed before, our workers need a way to share the waiting room state. We need every worker to be aware of changes in traffic patterns quickly in order to respond to sudden traffic spikes. We use the proportion of traffic from two minutes before to allocate user slots among data centers, so we need a solution to aggregate this data and make it globally available within this timeframe. Our design also relies on having fast coordination within a data center to react quickly to changes. We considered a few different solutions before settling on <a href="https://developers.cloudflare.com/workers/runtime-apis/cache">Cache</a> and <a href="/introducing-workers-durable-objects/">Durable Objects</a>.</p>
    <div>
      <h3>Idea #1: Workers KV</h3>
      <a href="#idea-1-workers-kv">
        
      </a>
    </div>
    <p>We started to work on the project around March 2020. At that point, Workers offered two options for storage: the <a href="https://developers.cloudflare.com/workers/runtime-apis/cache">Cache API</a> and <a href="https://www.cloudflare.com/products/workers-kv/">KV</a>. Cache is shared only at the data center level, so for global coordination we had to use KV. Each worker writes its own key to KV that describes the requests it received and how it processed them. Each key is set to expire after a few minutes if the worker stops writing. To create a workerState, the worker periodically does a list operation on the KV namespace to get the state around the world.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2fcnX6Cjf5L7E8lDjOBQGD/4398b5634d38d3d7b88985dc4c5fa8e4/image7.png" />
            
            </figure><p>Design using KV</p><p>This design has some flaws because KV <a href="https://developers.cloudflare.com/workers/learning/how-kv-works">wasn’t built for a use case like this</a>. The state of a waiting room changes all the time to match traffic patterns. Our use case is write intensive and KV is intended for read-intensive workflows. As a consequence, our proof of concept implementation turned out to be more expensive than expected. Moreover, KV is eventually consistent: it takes time for information written to KV to be available in all of our data centers. This is a problem for Waiting Room because we need fine-grained control to be able to react quickly to traffic spikes that may be happening simultaneously in several locations across the globe.</p>
    <div>
      <h3>Idea #2: Centralized Database</h3>
      <a href="#idea-2-centralized-database">
        
      </a>
    </div>
    <p>Another alternative was to run our own databases in our core data centers. The <a href="https://developers.cloudflare.com/workers/runtime-apis/cache">Cache API</a> in Workers lets us use the cache directly within a data center. If there is frequent communication with the core data centers to get the state of the world, the cached data in the data center should let us respond with minimal latency on the request hot path. There would be fine-grained control on when the data propagation happens and this time can be kept low.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4ySnARioXWtR5DkkjSyH9b/9e8856d4b0fcf7f629d8e5284ccc4b6c/image3-1.png" />
            
            </figure><p>Design using core data centers</p><p>As noted before, this application is very write-heavy and the data is rather short-lived. For these reasons, a standard relational database would not be a good fit. This meant we could not leverage the existing database clusters maintained by our in-house specialists. Rather, we would need to use an in-memory data store such as Redis, and we would have to set it up and maintain it ourselves. We would have to install a data store cluster in each of our core locations, fine-tune our configuration, and make sure data is replicated between them. We would also have to create a proxy service running in our core data centers to gate access to that database and validate data before writing to it.</p><p>We could likely have made it work, at the cost of substantial operational overhead. While that is not insurmountable, this design would introduce a strong dependency on the availability of core data centers. If there were issues in the core data centers, it would affect the product globally, whereas an edge-based solution would be more resilient. If an edge data center goes offline, Anycast takes care of routing the traffic to nearby data centers. This ensures the web application will not be affected.</p>
    <div>
      <h3>The Scalable Solution: Durable Objects</h3>
      <a href="#the-scalable-solution-durable-objects">
        
      </a>
    </div>
    <p>Around that time, we learned about <a href="/introducing-workers-durable-objects/">Durable Objects</a>. The product was in closed beta back then, but we decided to embrace Cloudflare’s thriving dogfooding culture and did not let that deter us. With Durable Objects, we could create one global Durable Object instance per waiting room instead of maintaining a single database. Each instance can live anywhere in the world, and the platform handles redundancy and availability for us, so Durable Objects give us sharding for free. They also gave us fine-grained control as well as better availability, as they run in our edge data centers. Additionally, each waiting room is isolated from the others: adverse events affecting one customer are less likely to spill over to other customers.</p><p><b>Implementation with Durable Objects</b></p><p>Based on these advantages, we decided to build our product on Durable Objects.</p><p>As mentioned above, we use a worker to decide whether to send users to the Waiting Room or the web application. That worker periodically sends a request to a Durable Object reporting how many users it sent to the Waiting Room and how many it sent to the web application. A Durable Object instance is created on the first request and remains active as long as it is receiving requests. The Durable Object aggregates the counters sent by every worker to produce a count of users sent to the Waiting Room and a count of users on the web application.</p>
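<p>The aggregation step the Durable Object performs can be modeled as a pure function. This is a simplified sketch of our own; the names and shapes here are illustrative, not the actual Waiting Room code, and the real logic runs inside a Durable Object's fetch handler.</p>

```typescript
// Simplified model of how a waiting room Durable Object aggregates the
// periodic counter reports it receives from workers.

interface WorkerUpdate {
  workerId: string;          // identifies the reporting worker
  sentToWaitingRoom: number; // users this worker queued
  sentToOrigin: number;      // users this worker let through
}

// Keep only the latest report per worker, then sum into waiting room totals.
function aggregate(updates: WorkerUpdate[]): { queued: number; active: number } {
  const latest = new Map<string, WorkerUpdate>();
  for (const u of updates) latest.set(u.workerId, u);
  let queued = 0;
  let active = 0;
  for (const u of latest.values()) {
    queued += u.sentToWaitingRoom;
    active += u.sentToOrigin;
  }
  return { queued, active };
}
```

<p>Keying by worker identity means a worker that reports twice simply replaces its earlier numbers instead of being double-counted.</p>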
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3uo0AeQpWFcgVEQJSyf4F2/adc1cca869b02b92b06e26a98cb281d0/image10.png" />
            
            </figure><p>A Durable Object instance is only active as long as it is receiving requests, and it can be restarted during maintenance. When a Durable Object instance is restarted, its in-memory state is cleared. To preserve the in-memory data across Durable Object restarts, we back up the data using the Cache API. This offers weaker guarantees than using the <a href="https://developers.cloudflare.com/workers/learning/using-durable-objects#accessing-persistent-storage-from-a-durable-object">Durable Object persistent storage</a>, as data may be evicted from cache, or the Durable Object may be moved to a different data center. If that happens, the Durable Object has to start without cached data. On the other hand, persistent storage at the edge still has limited capacity. Since we can rebuild state very quickly from worker updates, we decided that cache is enough for our use case.</p><p><b>Scaling up</b></p><p>When traffic spikes happen around the world, new workers are created. Every worker needs to communicate how many users have been queued and how many have been let through to the web application. However, while workers automatically scale horizontally when traffic increases, Durable Objects do not. By design, there is only one instance of any Durable Object. This instance runs on a single thread, so if it receives requests more quickly than it can respond, it can become overloaded. To avoid that, we cannot let every worker send its data directly to the same Durable Object. The way we achieve scalability is by sharding: we create per-data-center Durable Object instances that report up to one global instance.</p>
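<p>The restore-after-restart behavior can be sketched as follows. The snapshot shape and staleness window are assumptions for illustration; the real implementation stores its backup via the Workers Cache API rather than the plain object shown here.</p>

```typescript
// Illustrative sketch of rebuilding state after a Durable Object restart.

interface Snapshot {
  counts: Record<string, number>; // e.g. per-data-center user counts
  savedAt: number;                // epoch milliseconds when the backup was taken
}

// If the cached snapshot is missing or too old, start empty: worker updates
// rebuild the state quickly, as described above.
function restore(
  snapshot: Snapshot | null,
  now: number,
  maxAgeMs: number
): Record<string, number> {
  if (snapshot === null || now - snapshot.savedAt > maxAgeMs) return {};
  return { ...snapshot.counts };
}
```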
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/746CExfEYweWJtQf9fxk1z/81c666b4eca2f963375084a2fce26334/image5.png" />
            
            </figure><p>Durable Objects implementation</p><p>The aggregation is done in two stages: at the data-center level and at the global level.</p><p><b>Data Center Durable Object</b></p><p>When a request comes to a particular location, we can identify the corresponding data center by looking at the cf.colo field on the <a href="https://developers.cloudflare.com/workers/runtime-apis/request#incomingrequestcfproperties">request</a>. The Data Center Durable Object keeps track of the number of workers in the data center and aggregates the state from all those workers. It also responds to workers with important information within a data center, such as the number of users making requests to a waiting room or the number of workers. It frequently updates the Global Durable Object and receives information about the other data centers in the response.</p>
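<p>One way to realize this two-level sharding is to derive deterministic instance names from the waiting room and the request's data center. The naming scheme below is our own convention for illustration; inside a Worker, such a name would be turned into a stub with the real Durable Objects API (<code>idFromName</code>).</p>

```typescript
// Hypothetical naming scheme for the two aggregation stages.

// Each data center gets its own Durable Object instance for a waiting room.
function dataCenterInstanceName(waitingRoomId: string, colo: string): string {
  return `${waitingRoomId}:colo:${colo}`;
}

// One global instance per waiting room aggregates all data centers.
function globalInstanceName(waitingRoomId: string): string {
  return `${waitingRoomId}:global`;
}

// Inside a Worker this would look roughly like:
//   const name = dataCenterInstanceName(roomId, request.cf.colo);
//   const stub = env.DATA_CENTER_DO.get(env.DATA_CENTER_DO.idFromName(name));
```

<p>Because the name is deterministic, every worker in the same data center reaches the same instance without any extra coordination.</p>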
    <div>
      <h4><b>Worker User Slots</b></h4>
      <a href="#worker-user-slots">
        
      </a>
    </div>
    <p>Above we talked about how a data center gets user slots allocated to it based on past traffic patterns. If every worker in the data center talked to the Data Center Durable Object on every request, the Durable Object could get overwhelmed. Worker user slots help us overcome this problem.</p><p>Every worker keeps track of the number of users it has let through to the web application and the number of users that it has queued. The worker user slots are the number of users a worker can send to the web application at any point in time. This is calculated from the user slots available for the data center and the worker count in the data center: we divide the total number of user slots available for the data center by the number of workers to get the user slots available for each worker. If there are two workers and 10 users that can be sent to the web application from the data center, then we allocate five as the budget for each worker. This division is needed because every worker makes its own decision on whether to send a user to the web application or the waiting room without talking to anyone else.</p>
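<p>The per-worker budget described above can be sketched as a pair of pure functions. This is a simplification of our own: the real code also has to cope with remainders and with worker counts that are slightly stale.</p>

```typescript
// Divide the data center's user slots evenly among its workers.
// With 10 slots and 2 workers, each worker may admit 5 users.
function slotsPerWorker(dataCenterSlots: number, workerCount: number): number {
  if (workerCount <= 0) return 0;
  return Math.floor(dataCenterSlots / workerCount);
}

// Each worker then decides locally, without coordinating on the hot path.
function canAdmit(admittedByThisWorker: number, budget: number): boolean {
  return admittedByThisWorker < budget;
}
```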
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7GhRGXwUGAzkSXs5e2ytwF/48b8aff1f9057f84e4cd8731c38e531c/Waiting-room-diagram-4.png" />
            
            </figure><p>Waiting room inside a data center</p><p>As traffic changes, new workers spin up and old workers die, so the worker count in a data center is dynamic. Here we make a trade-off similar to the one for inter-data-center coordination: there is a risk of overshooting the limit if many more workers are created between calls to the Data Center Durable Object, but too many calls to the Data Center Durable Object would make it hard to scale. In this case, though, we can use <a href="https://developers.cloudflare.com/workers/runtime-apis/cache">Cache</a> for faster synchronization within the data center.</p>
    <div>
      <h4><b>Cache</b></h4>
      <a href="#cache">
        
      </a>
    </div>
    <p>On every interaction with the Data Center Durable Object, the worker saves a copy of the data it receives to the cache, and every worker frequently reads the cache to refresh its in-memory state. We also adaptively adjust the rate of writes from the workers to the Data Center Durable Object based on the number of workers in the data center. This helps ensure that we do not take down the Data Center Durable Object when traffic changes.</p>
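<p>One way to picture the adaptive write rate is as an update interval that stretches with the worker count, so the aggregate write rate to the Data Center Durable Object stays roughly bounded. The formula and constants below are purely illustrative, not the actual tuning used in production.</p>

```typescript
// Illustrative only: more workers means each one reports less often, keeping
// total write traffic to the Data Center Durable Object roughly constant.
function updateIntervalMs(
  workerCount: number,
  baseMs: number = 1000,
  maxMs: number = 10000
): number {
  return Math.min(maxMs, baseMs * Math.max(1, workerCount));
}
```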
    <div>
      <h3>Global Durable Object</h3>
      <a href="#global-durable-object">
        
      </a>
    </div>
    <p>The Global Durable Object is designed to be simple and stores the information it receives from any data center in memory. It responds with the information it has about all data centers. It periodically saves its in-memory state to cache using the Workers Cache API so that it can withstand restarts as mentioned above.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4IciXNUkxrxLFeZ6qo02UP/af58b5e686ed1dc2ef4af9c9ad89c94b/image11.png" />
            
            </figure><p>Components of waiting room</p>
    <div>
      <h2>Recap</h2>
      <a href="#recap">
        
      </a>
    </div>
    <p>This is how the waiting room works right now. Every request to a web application with a waiting room enabled goes to a worker at a Cloudflare edge data center. When this happens, the worker first looks for the state of the waiting room in the <a href="https://developers.cloudflare.com/workers/runtime-apis/cache">Cache</a>. We use cache here instead of the Data Center Durable Object so that we do not overwhelm the Durable Object instance when there is a spike in traffic; reading data from cache is also faster. The workers periodically make a request to the Data Center Durable Object to get the waiting room state, which they then write to the cache. The idea is that the cache should always have a recent copy of the waiting room state.</p><p>Workers can examine the <a href="https://developers.cloudflare.com/workers/runtime-apis/request#incomingrequestcfproperties">request</a> to know which data center they are in. Every worker periodically makes a request to the corresponding Data Center Durable Object. This interaction updates the worker state in the Data Center Durable Object, and in return the workers get the waiting room state from it. The Data Center Durable Object periodically sends its data center state to the Global Durable Object and receives all data center states globally in the response. It then calculates the waiting room state and returns that state to a worker in its response.</p><p>The advantage of this design is that it is possible to adjust the rate of writes from workers to the Data Center Durable Object, and from the Data Center Durable Object to the Global Durable Object, based on the traffic received in the waiting room. This helps us respond to requests during high traffic without overloading the individual Durable Object instances.</p>
    <div>
      <h2>Conclusion</h2>
      <a href="#conclusion">
        
      </a>
    </div>
    <p>By using Workers and Durable Objects, Waiting Room was able to scale up to keep web application servers online for many of our early customers during large spikes of traffic. It helped keep vaccination sign-ups online for companies and governments around the world for free through <a href="https://www.cloudflare.com/fair-shot/">Project Fair Shot</a>: <a href="https://www.cloudflare.com/case-studies/verto/">Verto Health</a> was able to serve over 4 million customers in Canada; <a href="https://www.cloudflare.com/case-studies/tickettailor/">Ticket Tailor</a> reduced their peak resource utilization from 70% down to 10%; the <a href="https://www.cloudflare.com/case-studies/county-of-san-luis-obispo/">County of San Luis Obispo</a> was able to stay online during traffic surges of up to 23,000 users; and the country of <a href="https://www.cloudflare.com/case-studies/latvia-ministry-of-health/">Latvia</a> was able to stay online during surges of thousands of requests per second. These are just a few of the customers we served and will continue to serve until Project Fair Shot ends.</p><p>In the coming days, we are rolling out the Waiting Room to customers on our business plan. <a href="https://www.cloudflare.com/plans/business/">Sign up</a> today to prevent spikes of traffic to your web application. If you are interested in access to Durable Objects, it’s currently available to try out in <a href="/durable-objects-open-beta/">Open Beta</a>.</p> ]]></content:encoded>
            <category><![CDATA[Waiting Room]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Durable Objects]]></category>
            <category><![CDATA[Product News]]></category>
            <guid isPermaLink="false">xO8XUNhlWLIGxJXKlV5Bs</guid>
            <dc:creator>Fabienne Semeria</dc:creator>
            <dc:creator>George Thomas</dc:creator>
            <dc:creator>Mathew Jacob</dc:creator>
        </item>
    </channel>
</rss>