
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/">
    <channel>
        <title><![CDATA[ The Cloudflare Blog ]]></title>
        <description><![CDATA[ Get the latest news on how products at Cloudflare are built, technologies used, and join the teams helping to build a better Internet. ]]></description>
        <link>https://blog.cloudflare.com</link>
        <atom:link href="https://blog.cloudflare.com/" rel="self" type="application/rss+xml"/>
        <language>en-us</language>
        <image>
            <url>https://blog.cloudflare.com/favicon.png</url>
            <title>The Cloudflare Blog</title>
            <link>https://blog.cloudflare.com</link>
        </image>
        <lastBuildDate>Fri, 03 Apr 2026 18:42:09 GMT</lastBuildDate>
        <item>
            <title><![CDATA[Introducing Observatory and Smart Shield — see how the world sees your website, and make it faster in one click]]></title>
            <link>https://blog.cloudflare.com/introducing-observatory-and-smart-shield/</link>
            <pubDate>Fri, 26 Sep 2025 14:00:00 GMT</pubDate>
            <description><![CDATA[ We're announcing two enhancements to our Application Performance suite that show you how the world sees your website and make it faster with one click, available today in the Cloudflare Dashboard! ]]></description>
            <content:encoded><![CDATA[ <p>Modern users expect instant, reliable web experiences. When your application is slow, they don’t just complain — they leave. Even delays as small as 100 ms have been <a href="https://wpostats.com/"><u>shown to have a measurable impact on revenue, conversions, bounce rate, engagement and more</u></a>. </p><p>If you’re responsible for delivering on these expectations to the users of your product, you know there are many monitoring tools that show you how visitors experience your website, and can let you know when things are slow or causing issues. This is essential, but we believe understanding the condition is only half the story. The real value comes from integrating monitoring and remedies in the same view, giving customers the ability to quickly identify and resolve issues.</p><p>That's why today, we're excited to launch the new and improved <b>Observatory</b>, now in open beta. This monitoring and <a href="https://www.cloudflare.com/learning/performance/what-is-observability/">observability</a> tool goes beyond charts and graphs, by also telling you exactly how to improve your application's performance and resilience, and immediately showing you the impact of those changes. And we’re releasing it to all subscription tiers (including Free!), available today. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/T6HhZL51aLEhzD3lQPxCq/f9b03f05cf4db0b2e61c8e861df4ecdf/1.png" />
          </figure><p>But wait, there’s more! To make your users’ experience even faster, we’re launching Smart Shield, available today for all subscription tiers. Using Observatory, you can pinpoint performance bottlenecks, and for many of the most common issues, you can now apply the fix in just a few clicks with our <b>Smart Shield</b> product. Double the fun!</p>
    <div>
      <h2>Our unique perspective: leveraging data from 20% of the web</h2>
      <a href="#our-unique-perspective-leveraging-data-from-20-of-the-web">
        
      </a>
    </div>
    <p>Every day, Cloudflare handles traffic for over 20% of the web, giving us a unique vantage point into what makes websites faster and more resilient. We built Observatory to take advantage of this position, uniting data that is normally scattered across different tools — including real-user data, synthetic testing, error rates, and backend telemetry — into a single platform. This gives you a complete, cohesive picture of your application's health end-to-end, in one spot, and enables you to easily identify and resolve performance issues.</p><p>For this launch, we're bringing together:</p><ul><li><p><b>Real-user data:</b> See how your application performs for real people, in the real world.</p></li><li><p><b>Back-end telemetry:</b> Break down the lifecycle of a request to pinpoint areas for improvement.</p></li><li><p><b>Error rates:</b> Understand the stability of your application at both the edge and origin.</p></li><li><p><b>Cache hit ratios:</b> Ensure you're maximizing the performance of your configuration.</p></li><li><p><b>Synthetic testing:</b> Proactively test and monitor key endpoints with powerful, accurate simulations.</p></li></ul><p>Let's take a quick look at each data set to see how we use them in Observatory.</p>
    <div>
      <h2>Real-user data</h2>
      <a href="#real-user-data">
        
      </a>
    </div>
    <p>There are two primary forms of data collection: real-user data and synthetic data. Real-user data are performance metrics collected from real traffic, from real visitors, to your application. It’s how users are <i>actually</i> seeing your application perform in the real world. It’s unpredictable, and covers every scenario.</p><p>Synthetic data is data collected using some sort of simulated test (loading a site in a headless browser, making network requests from a testing system to an endpoint, etc.). Tests are run under a predefined set of characteristics — location, network speed, etc. — to provide a consistent baseline.</p><p>Both forms of data have their uses, and companies with a strongly established culture of operational excellence tend to use both.</p><p>The first data you’ll see when you visit Observatory is real-user data collected with <a href="https://www.cloudflare.com/web-analytics/"><u>Real User Monitoring (RUM)</u></a>, with a particular focus on the <a href="https://www.cloudflare.com/learning/performance/what-are-core-web-vitals/"><u>Core Web Vital</u></a> metrics.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/400NHp7OBcSXNmLi5AxXb8/641f6436574e040bfbc14b56c7bfcd70/1.5.png" />
          </figure><p>This is very intentional.</p><p>Real-user data should be the source of truth when it comes to measuring the performance and resiliency of your application. Even the best synthetic data sources are always going to be an approximation. They cannot cover every possible scenario, and because they are run from a lab environment, they will not always reveal issues that are more sporadic and unpredictable.</p><p>Real-user metrics are also the best representation of what your users are experiencing when they access your site and, at the end of the day, that’s why we focus on improving performance, resiliency, and security for our users.</p><p>We believe so strongly in the importance of every company having access to accurate, detailed RUM data that we are providing it for free, to all accounts. In fact, we’re about to make our <a href="https://www.cloudflare.com/web-analytics/#:~:text=Privacy%20First"><u>privacy-first analytics</u></a> — which doesn’t track individual users — <a href="https://blog.cloudflare.com/the-rum-diaries-enabling-web-analytics-by-default/"><u>available by default for all free zones</u></a> (<b>excluding data from EU or UK visitors</b>), no setup necessary. We believe the right thing is arming everyone with detailed, actionable, real-user data, and we want to make it easy.</p>
    <div>
      <h2>Backend telemetry</h2>
      <a href="#backend-telemetry">
        
      </a>
    </div>
    <p>Front-end performance metrics are our best proxy for understanding the actual user experience of an application and, as a result, they work great as key performance indicators (KPIs).</p><p>But they’re not enough. Every primary metric should have some level of supporting diagnostic metrics that help us understand <i>why</i> our user metrics are performing like they are — so that we can quickly identify issues, bottlenecks, and areas of improvement.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2Un8yQdUf9DZw05gfS5WVs/187901b7e636cec35655ff954b1f38c4/2.png" />
          </figure><p>While the industry has largely, and rightfully, moved on from Time to First Byte (TTFB) as a primary metric of focus, it still has value as a diagnostic metric. In fact, we analyzed our RUM data and found a very strong connection between <a href="https://developers.cloudflare.com/speed/observatory/test-results/#synthetic-tests-and-real-user-monitoring-metrics"><u>Time to First Byte and Largest Contentful Paint</u></a>.</p><p>Google’s recommended thresholds for Time to First Byte are:</p><ul><li><p>Good: &lt;= 800ms</p></li><li><p>Needs Improvement: &gt; 800ms and &lt;= 1800ms</p></li><li><p>Poor: &gt; 1800ms</p></li></ul><p>Similarly, their official thresholds for Largest Contentful Paint are:</p><ul><li><p>Good: &lt;= 2500ms</p></li><li><p>Needs Improvement: &gt; 2500ms and &lt;= 4000ms</p></li><li><p>Poor: &gt; 4000ms</p></li></ul><p>Looking across over 9 billion events, we found that when compared to the average site, sites with a “poor” (&gt;1800ms) TTFB are:</p><ul><li><p>70.1 percentage points less likely to have a “good” LCP</p></li><li><p>21.9 percentage points more likely to have a “needs improvement” LCP</p></li><li><p>48.2 percentage points more likely to have a “poor” LCP</p></li></ul><p>TTFB is an ill-defined black box, so we’re making a point to break it down into its subparts so you can quickly pinpoint whether the issue is with the connection establishment, the server response time, the network itself, and more. We’ll be working to break this down even further in the coming months as we expose the complete lifecycle of a request so you’re able to pinpoint <i>exactly</i> where the bottlenecks lie.</p>
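<p>If you want to encode these cutoffs in your own tooling, the thresholds above reduce to a pair of trivial classifiers. A minimal sketch in Python (the function names are ours, not part of any Cloudflare API):</p>

```python
def rate_ttfb(ms: float) -> str:
    """Classify a Time to First Byte sample (milliseconds) per
    Google's published thresholds quoted above."""
    if ms <= 800:
        return "good"
    if ms <= 1800:
        return "needs improvement"
    return "poor"


def rate_lcp(ms: float) -> str:
    """Classify a Largest Contentful Paint sample (milliseconds)."""
    if ms <= 2500:
        return "good"
    if ms <= 4000:
        return "needs improvement"
    return "poor"
```

<p>A site with a 1200&nbsp;ms TTFB, for example, lands in the “needs improvement” band, which the correlation data above suggests is worth investigating before chasing LCP directly.</p>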
    <div>
      <h2>Errors &amp; cache ratios</h2>
      <a href="#errors-cache-ratios">
        
      </a>
    </div>
    <p>Degradation in stability and performance are frequently directly connected to configuration changes or an increase in errors. Clear visibility into these characteristics can often cut right to the heart of the issue at hand, as well as point to opportunities for improvement of the overall efficiency and effectiveness of your application.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6j89m6ONeXh9v6XL35YJjn/1d65ac83476971fc42fccc2980bc79ff/3.png" />
          </figure><p>Observatory prominently surfaces cache hit ratio and error rates for <i>both</i> the edge and origin. This complements the backend telemetry nicely, and helps further break down the backend metrics you are seeing to pinpoint areas of improvement.</p><p>Take cache hit ratio for example. Intuitively, we know that when content is served from cache on an edge server, it should be faster than when the request has to go all the way back to the origin server. Based on our data, again, that’s exactly what we see.</p><p>If we consider our Time To First Byte thresholds again (good is &lt;= 800ms; needs improvement is &gt; 800ms and &lt;= 1800ms; poor is anything over 1800ms), when looking across 9 billion data points as collected by our RUM solution, we see that a whopping <b>91.7% of all pages served from Cloudflare’s cache have a “good” TTFB compared to 79.7% when the request has to be served from the origin server</b>.</p><p>In other words, optimizing origin performance (more on that in a bit) and moving more content to the edge are sure-fire ways to give you a much stronger performance baseline.</p>
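<p>If you export your own response logs, a rough way to estimate this ratio yourself is to tally the <code>cf-cache-status</code> header Cloudflare attaches to proxied responses. A hedged sketch, assuming a plain list of header values (excluding <code>DYNAMIC</code> from the denominator is one reasonable convention, since those responses were never cache-eligible):</p>

```python
from collections import Counter


def cache_hit_ratio(statuses: list) -> float:
    """Estimate an edge cache hit ratio from cf-cache-status header
    values (HIT, MISS, EXPIRED, DYNAMIC, ...). Illustrative only;
    Observatory computes this for you."""
    counts = Counter(s.upper() for s in statuses)
    hits = counts["HIT"]
    # DYNAMIC responses were never eligible for cache, so leave them
    # out of the denominator.
    eligible = sum(v for k, v in counts.items() if k != "DYNAMIC")
    return hits / eligible if eligible else 0.0
```
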
    <div>
      <h2>Accurate and detailed synthetic testing</h2>
      <a href="#accurate-and-detailed-synthetic-testing">
        
      </a>
    </div>
    <p>While real-user data is our source of truth, synthetic testing and monitoring are important as well. Because tests are run in a more controlled environment (test from this location, at this time, with these criteria, etc.), the resulting data is far less noisy and variable. In addition, because there is no user involved and we don’t have to worry about any observer effect, synthetic tests are able to gather a lot more information about the request and page lifecycle.</p><p>As a result, synthetic data tends to work very well for arming engineers with debugging information, as well as providing a cleaner set of data for comparing and contrasting results across different platforms, releases, and other situations.</p><p>Observatory provides two different types of synthetic tests.</p><p>The first synthetic test is a browser test. A browser test will load the requested page in a headless browser, run <a href="https://developer.chrome.com/docs/lighthouse"><u>Google’s Lighthouse</u></a> on it to report on key performance metrics, and provide some light suggestions for improvement. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3cvDSWqtBTMibgYysDgEoI/43cd0c684d3705fe021f588674a91cf6/4.png" />
          </figure><p>The second type of synthetic test Observatory provides is a network test. This is a brand new test type in Cloudflare, and is focused on giving you a better breakdown of the network and back-end performance of an endpoint.</p><p>Each network test will hit the provided endpoint for the test and record the wait time, server response time, connect time, SSL negotiation time, and total load time for the endpoint response. Because these tests are much more targeted, a single test in itself is not as valuable and can be prone to variation. That variation isn’t necessarily a bad thing—in fact, variability in these results can actually give you a better understanding of the breadth of results when real users hit that same endpoint.</p><p>For that reason, network tests trigger a series of individual runs against the provided endpoint spread out over a short period of time. The data for each response is recorded, and then presented as a histogram on the test results page, letting you see not just a single datapoint, but the long and short-tail of each metric. This gives you a much more accurate representation of reality than what a single test run can provide.</p>
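<p>The mechanics of such a test are easy to picture: fire a series of timed probes and bucket the results. A minimal sketch (the probe callable here is a stand-in; a real network test issues timed HTTP requests and records each phase separately):</p>

```python
import time


def run_network_test(measure, runs: int = 20, bucket_ms: int = 50) -> dict:
    """Fire a series of timed probes and bucket the results into a
    histogram, mirroring how repeated runs expose the short and long
    tail of a metric rather than a single data point.

    `measure` is any callable returning one timing in milliseconds.
    """
    histogram = {}
    for _ in range(runs):
        ms = measure()
        bucket = int(ms // bucket_ms) * bucket_ms
        histogram[bucket] = histogram.get(bucket, 0) + 1
        time.sleep(0)  # a real test spreads runs out over time
    return dict(sorted(histogram.items()))
```

<p>Feeding, say, timings of 30, 60, 70, and 120&nbsp;ms through 50&nbsp;ms buckets yields three bars, with the 50–100&nbsp;ms bucket twice as tall as the others, exactly the kind of distribution the results page visualizes.</p>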
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3gCWSp0HCTd4iJ0rTKpEpk/a610e47596eedd6b8cedf73dfcde09ca/5.png" />
          </figure><p>You are also able to compare network tests in Observatory, by selecting two network tests that have been completed. Again, all the data points for each test will be provided in a histogram, where you can easily compare the results of the two.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6mG2bRanAGzltvkucJImue/11f56a4d3c3af4cd2a65dab834a0f0af/6.png" />
          </figure><p>We are working on improving both synthetic test types in Q4 2025, focusing on making them more powerful and diagnostic.</p><p>As we mentioned before, even at its best, synthetic data is an approximation of what is actually happening. Accuracy is critical. Inaccurate data can distract teams with variability and faulty measurements.</p><p>It’s important that these tools are as accurate and true to the real world as possible. It’s also important to us that we give back to the community, both because it’s the right thing to do, and because we believe the best way to have the highest level of confidence in the measurement tools and frameworks we’re using is the rigor and scrutiny that open-source provides.</p><p>For those reasons, we’ll be working on open-sourcing many of the testing agents we’re using to power Observatory. We’ll share more on that soon, as well as more details about how we’ve built each different testing tool, and why.</p>
    <div>
      <h2>Doing something about it: Smart Suggestions</h2>
      <a href="#doing-something-about-it-smart-suggestions">
        
      </a>
    </div>
    <p>People don’t measure for the sake of having data and pretty charts. They measure because they want to stay on top of the health of their application and find ways to improve it. Data is easy. Understanding what to do with the data you’re presented is both the hardest and most important part.</p><p>Monitoring without action is useless.</p><p>We’re building Observatory to have a <i>relentless</i> focus on actionability. Before any new metric is presented, we take some time to explore why that metric matters, when it’s something worth addressing, and what actions you should take if those metrics need improvement.</p><p>All of that leads us to our new Smart Suggestions. Wherever possible, we want to pair each metric with a set of opinionated, data-driven suggestions for how to make things better. We want to avoid vague, hand-wavy advice and instead be prescriptive, specific, and precise.</p><p>For example, let’s look at one particular recommendation we provide around improving Largest Contentful Paint.</p><p>Largest Contentful Paint is a core web vital metric that measures when the largest piece of content is displayed on the screen. That piece of content could be an image, a video, or text.</p><p>Much like TTFB, Largest Contentful Paint is a bit of a black box by itself. While it tells us how long it takes for that content to get on screen, there are a large number of potential bottlenecks that could be causing the delay. Perhaps the server response time was very slow. Or maybe there was something blocking the content from being displayed on the page. If the object was an image or video, perhaps the file size was large and the resulting download was slow. LCP by itself doesn’t give us that level of granularity, so it’s hard to give more than hand-wavy guidance on how to address it.</p><p>Thankfully, just like we can break TTFB into subparts, we can break LCP into its subparts as well. 
Specifically, we can look at:</p><ul><li><p>Time to First Byte: How quickly the server responds to the request for HTML</p></li><li><p>Resource Load Delay: How long it takes after TTFB for the browser to discover the LCP resource</p></li><li><p>Resource Load Duration: How long it takes for the browser to download the LCP resource</p></li><li><p>Render Delay: How long it takes the browser to render the content once it has the resource in hand</p></li></ul><p>Breaking it down into these subparts, we can be much more diagnostic about what to do.</p>
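<p>A useful property of this breakdown is that the four subparts sum to the LCP value itself. A small sketch (the names are ours) of splitting per-page-load timestamps into the subparts listed above:</p>

```python
def lcp_subparts(ttfb: float, resource_start: float,
                 resource_end: float, lcp: float) -> dict:
    """Split an LCP measurement into its four subparts. All inputs are
    timestamps in ms relative to navigation start. For text-based LCP
    there is no resource to load; pass resource_start = resource_end =
    ttfb so both resource subparts collapse to zero."""
    return {
        "ttfb": ttfb,
        "resource_load_delay": resource_start - ttfb,
        "resource_load_duration": resource_end - resource_start,
        "render_delay": lcp - resource_end,
    }
```

<p>For a page with a 200&nbsp;ms TTFB whose hero image is discovered at 500&nbsp;ms, finishes downloading at 1200&nbsp;ms, and paints at 1400&nbsp;ms, the dominant subpart is the 700&nbsp;ms download, pointing at file size rather than server response time.</p>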
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7qfKPLaTGTjJjhawTVoWAi/10ce739e376cabd7c468adfa280246dd/7.png" />
          </figure><p>In the example above, our recommendation engine analyzes the site's real-user data and notices that Resource Load Delay accounts for over 10% of total LCP time. As a result, there’s a high likelihood that the resource triggering LCP is large and could potentially be compressed to reduce file size. So we make a recommendation to enable compression using <a href="https://developers.cloudflare.com/images/polish/"><u>Polish</u></a>.</p><p>We’re very excited about the impact these suggestions will have on helping everyone quickly zero in on meaningful solutions for improving performance and resiliency, without having to wade through mountains of data to get there. As we analyze data, we’ll find more and more patterns of problems and the solutions they can map to. Expanding on our Smart Suggestions will be a constant and ongoing focus as we move forward, and we are working on adding much more content about those patterns and what we find in Q4.</p>
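<p>To make the shape of such a rule concrete, here is a toy version of it. The 10% cutoff and the Polish recommendation mirror the example above, but this is purely illustrative; the real recommendation engine weighs far more signals:</p>

```python
def suggest(subparts: dict) -> list:
    """Toy threshold rule modeled on the example above: if Resource
    Load Delay accounts for more than 10% of total LCP time, recommend
    enabling compression with Polish. Illustrative only."""
    suggestions = []
    lcp = sum(subparts.values())  # the subparts sum to total LCP
    if lcp and subparts.get("resource_load_delay", 0) / lcp > 0.10:
        suggestions.append("Enable image compression with Polish")
    return suggestions
```
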
    <div>
      <h2>Fixing the biggest pain point: Smart Shield</h2>
      <a href="#fixing-the-biggest-pain-point-smart-shield">
        
      </a>
    </div>
    <p>Observatory gives you unprecedented insight into your application's health, but insights are only half the battle. The next challenge is acting on them, which brings us to another layer of complexity: protecting your origin. For many of our customers, proper management of origin routes and connections is one of the largest drivers of aggregate overall performance. As we mentioned before, we see a clear negative impact on user-facing performance metrics when we have to go back to the origin, and we want to make it as easy as possible for our customers to improve those experiences. Achieving this requires protecting against unnecessary load while ensuring only trusted traffic reaches your servers.</p><p>Today's customers have powerful tools to protect their origins, but achieving basic use cases remains frustratingly complex:</p><ul><li><p>Making applications faster</p></li><li><p>Reducing origin load</p></li><li><p>Understanding origin health issues</p></li><li><p>Restricting IP address access to origin servers</p></li></ul><p>These fundamental needs currently require navigating multiple APIs and dashboard settings. You shouldn't need to become an expert in each feature — we should analyze your traffic patterns and provide clear, actionable solutions.</p>
    <div>
      <h2>Smart Shield: the future of origin shielding</h2>
      <a href="#smart-shield-the-future-of-origin-shielding">
        
      </a>
    </div>
    <p>Smart Shield transforms origin protection from a complex, multi-tool challenge into a streamlined, intelligent solution that works on your behalf. Our unified API and UI combines all origin protection essentials — dynamic traffic acceleration, intelligent caching, health monitoring, and dedicated egress IPs — into one place that enables single-click configuration.</p><p>But we didn't stop at simplification. Smart Shield integrates with <b>Observatory</b> to provide both the <b>“what” </b>— identifying performance bottlenecks and health issues — and the <b>“how” </b>— delivering capabilities that increase performance, availability, and security.</p><p>This creates a continuous feedback loop: Observatory identifies problems, Smart Shield provides solutions, and real-time analytics verify the impact. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2OI8AZzHo5kW4mesYsqM7Z/e08a5961deda6246a8d4fb906f2f5483/8.png" />
          </figure>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6blpvetS2fS0CNAvu1lnp2/c16e1a330c2c260df4920f85b1650917/9.png" />
          </figure><p>But what does this mean for you? </p><ul><li><p>Reduce total cost of ownership (TCO)</p></li><li><p>Reduce the time-to-value (TTV) for performance, availability, and security issues pertaining to customer origins</p></li><li><p>Enable new features without guesswork and validate effectiveness in the data</p></li></ul><p>Your time stays focused on building incredible user experiences, not becoming a configuration expert. We are excited to give you back time for your customers and your engineers, while paving the way for how you make sure your origin infrastructure is easily optimized to delight your customers. </p>
    <div>
      <h2>Protecting and accelerating origins with smart Connection Reuse</h2>
      <a href="#protecting-and-accelerating-origins-with-smart-connection-reuse">
        
      </a>
    </div>
    <p>Keeping your origins fast and stable is a big part of what we do at Cloudflare. When you experience a traffic surge, the last thing you want is for a flood of <a href="https://www.cloudflare.com/learning/ssl/what-happens-in-a-tls-handshake/"><u>TLS handshakes</u></a> to knock your origin down, or for those new connections to stall your requests, leaving your users to wait for slow pages to load.</p><p>This is why we’ve made significant changes to how Cloudflare’s network talks to your origins, dramatically improving the performance of our origin connections. </p><p>When Cloudflare makes requests to your origins, we make them from a subset of the available machines in every Cloudflare data center so that we can improve your connection reuse. Until now, this pool would be sized the same by default for every application within a data center, and changes to the sizing of the pool for a particular customer would need to be made manually. This often led to suboptimal connection reuse for our customers, as we might be making requests from far more machines than were actually needed, resulting in fewer warm connections than we otherwise could have had. This also caused issues at our data centers from time to time, as larger applications might have more traffic than the default pool size was capable of serving, resulting in production incidents where engineers were paged and had to manually increase the fanout factor for specific customers.</p><p>Now, these pool sizes are determined automatically and dynamically. By tracking domain-level traffic volume within a data center, we can automatically scale up and scale down the number of machines that serve traffic destined for any particular customer’s origin servers, improving both the performance of customer websites and the reliability of our network. 
A massive, high-volume website with a considerable amount of API traffic will no longer be processed by the same number of machines as a smaller, more typical website. Our systems can respond to changes in customer traffic patterns within seconds, allowing us to quickly ramp up and respond to surges in origin traffic.</p><p>Thanks to these improvements, Cloudflare now uses over 30% fewer connections across the board to talk to origins. To put this into perspective, this translates to saving approximately 402 years of handshake time every day across our global traffic, or 12,060 years of handshake time saved per month! This means that just by proxying your traffic through Cloudflare, you’ll see an average 30% reduction in the number of connections to your origin, keeping it more available while serving the same traffic volume and, in turn, lowering your egress fees. But, in many cases, the results observed can be far greater than 30%. For example, in one data center which is particularly heavy in API traffic, we saw a reduction in origin connections of ~60%! </p><p>Many don’t realize that making more connections to an origin requires more compute and time for systems to complete TCP and SSL handshakes. This takes time away from serving content requested by your end users and acts as a hidden tax on the performance of your application.<b> We are proud to reduce the Internet's hidden tax </b>by finding intelligent, innovative ways to reduce the number of connections needed while supporting the same traffic volume.</p><p>Watch out for more updates to Smart Shield at the start of 2026 — we’re working on adding self-serve support for dedicated CDN egress IP addresses, along with significant performance, reliability, and resilience improvements!</p>
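<p>The core idea of traffic-driven pool sizing can be sketched in a few lines. The capacity figure, floor, and ceiling below are made-up numbers for illustration, not Cloudflare's actual values; the point is that the pool tracks observed load, so connections stay concentrated and warm instead of being spread across more machines than the traffic needs:</p>

```python
def pool_size(requests_per_second: int, per_machine_capacity: int = 500,
              min_size: int = 2, max_size: int = 64) -> int:
    """Size a per-domain origin connection pool from observed traffic:
    just enough machines to carry the load (ceiling division), clamped
    between a redundancy floor and a fanout ceiling. Parameters are
    hypothetical, for illustration only."""
    needed = -(-requests_per_second // per_machine_capacity)  # ceil div
    return max(min_size, min(max_size, needed))
```

<p>A low-traffic zone stays at the small floor (maximizing connection reuse), while a surge to a million requests per second is clamped at the ceiling rather than paged to an engineer.</p>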
    <div>
      <h2>Charting the course: next steps for Observatory &amp; Smart Shield</h2>
      <a href="#charting-the-course-next-steps-for-observatory-smart-shield">
        
      </a>
    </div>
    <p>We’re really excited to share these two products with everyone today. Smart Shield and Observatory combine to provide a powerful one-two punch of insight and easy remediation.</p><p>As we navigate the beta launch of Observatory, we know this is just the start.</p><p>Our vision for Observatory is to be the single source of truth for your application’s health. We know that making the right decisions requires robust, accurate data, and we want to arm our customers with the most comprehensive picture available.</p><p>In the coming months, we plan to continue driving forward with our goal of providing comprehensive data, backed by a clear path to action.</p><ul><li><p><b>Deeper, more diagnostic data. </b>We’ll continue to break down data silos, bringing in more metrics to make sure you have a truly comprehensive view of your application’s health. We’ll be focused on going deeper and being more diagnostic, breaking down every aspect of both the request and page lifecycle to give you more granular data.</p></li><li><p><b>More paths to solutions. </b>People don’t measure for the sake of looking at data, they measure to solve problems. We’re going to continue to expand our suggestions, arming you with more precise, data-driven solutions to a wider range of issues, letting you fix problems with a single click through Smart Shield and bringing a tighter feedback loop to validate the impact of your configuration updates.</p></li><li><p><b>Benchmarking against other products.</b> Some of our customers split traffic between different CDNs due to regulatory or compliance requirements. Naturally, this brings up a whole series of questions about comparing the performance of the split traffic. 
In Observatory, you can compare these today, but we have a lot of things planned to make this even easier.</p></li></ul><p>Try out <a href="https://dash.cloudflare.com/?to=/:account/:zone/speed/overview"><u>Observatory</u></a> and <a href="https://www.cloudflare.com/application-services/products/smart-shield/"><u>Smart Shield</u></a> yourself today. And if you have ideas or suggestions for making Observatory and Smart Shield better, <a href="https://docs.google.com/forms/d/e/1FAIpQLScRMJVR7SmkiloMjPciaTdLzvHzKE9v3L0c418l02a1sMRj_g/viewform?usp=sharing&amp;ouid=115763007691250405767"><u>we’re all ears and would love to talk</u></a>!</p><div>
  
</div><p></p> ]]></content:encoded>
            <category><![CDATA[Speed]]></category>
            <category><![CDATA[Performance]]></category>
            <category><![CDATA[Birthday Week]]></category>
            <category><![CDATA[Aegis]]></category>
            <guid isPermaLink="false">tfg3NnmVPl0IoCJgQYuao</guid>
            <dc:creator>Tim Kadlec</dc:creator>
            <dc:creator>Brian Batraski</dc:creator>
            <dc:creator>Noah Maxwell Kennedy</dc:creator>
        </item>
        <item>
            <title><![CDATA[Extending Private Network Load Balancing to Layer 4 with Spectrum]]></title>
            <link>https://blog.cloudflare.com/extending-local-traffic-management-load-balancing-to-layer-4-with-spectrum/</link>
            <pubDate>Fri, 31 May 2024 13:00:07 GMT</pubDate>
            <description><![CDATA[ Cloudflare is adding support for all TCP and UDP traffic to our Private Network Load Balancing solution, extending its benefits to more than just HTTP(S) traffic. ]]></description>
            <content:encoded><![CDATA[ <p>In 2023, Cloudflare <a href="https://blog.cloudflare.com/elevate-load-balancing-with-private-ips-and-cloudflare-tunnels-a-secure-path-to-efficient-traffic-distribution/"><u>introduced a new load balancing solution</u></a>, supporting Private Network Load Balancing. This gives organizations a way to balance HTTP(S) traffic between private or internal servers within a region-specific data center. Today, we are thrilled to extend those same capabilities to non-HTTP(S) traffic. This new feature is enabled by the integration of Cloudflare Spectrum, Cloudflare Tunnels, and Cloudflare load balancers, and is available to enterprise customers. Our customers can now use Cloudflare load balancers for all TCP and UDP traffic destined for private IP addresses, eliminating the need for expensive on-premise load balancers.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6wjcoenAQ9NFW4PyZiqjCQ/9921257aea4486200be51f070c1cb090/image1-15.png" />
            
            </figure>
    <div>
      <h3>A quick primer</h3>
      <a href="#a-quick-primer">
        
      </a>
    </div>
    <p>In this blog post, we will be referring to <a href="https://www.cloudflare.com/learning/performance/what-is-load-balancing/">load balancers</a> at either layer 4 or layer 7. This is, of course, referring to layers of the <a href="https://www.cloudflare.com/learning/ddos/glossary/open-systems-interconnection-model-osi/">OSI model</a> but more specifically, the ingress path that is being used to reach the load balancer. <a href="https://www.cloudflare.com/learning/ddos/what-is-layer-7/">Layer 7</a>, also known as the Application Layer, is where the HTTP(S) protocol exists. Cloudflare is well known for our layer 7 capabilities, which are built around speeding up and protecting websites which run over HTTP(S). When we refer to layer 7 load balancers, we are referring to HTTP(S)-based services. Our layer 7 stack allows Cloudflare to apply services like CDN, WAF, Bot Management, DDoS protection, and more to a customer's website or application to improve performance, availability, and security.</p><p>Layer 4 load balancers operate at a lower level of the OSI model, called the <a href="https://www.cloudflare.com/learning/ddos/glossary/open-systems-interconnection-model-osi/#:~:text=4.%20The%20transport%20layer">Transport Layer</a>, which means they can be used to support a much broader set of services and protocols. At Cloudflare, our public layer 4 load balancers are enabled by a Cloudflare product called <a href="https://developers.cloudflare.com/spectrum/">Spectrum</a>. Spectrum works as a layer 4 reverse proxy. 
This places Cloudflare in front of any <a href="https://www.cloudflare.com/learning/ddos/what-is-a-ddos-attack/">DDoS attacks</a> that may be launched against Spectrum-proxied services, and by using Spectrum in front of your application, your private origin IP address is concealed, which also prevents bad actors from discovering and attacking your origin’s IP address directly.</p><p>Services that use TCP or UDP for transport can leverage Spectrum with a Cloudflare load balancer. Layer 4 load balancing allows us to support other application layer protocols such as SSH, FTP, NTP, and SMTP since they operate over TCP and UDP. Given the breadth of services and protocols this represents, the treatment provided is more generalized. Cloudflare Spectrum supports features such as TLS/SSL offloading, DDoS protection, <a href="https://www.cloudflare.com/application-services/products/argo-smart-routing/">Argo Smart Routing</a>, and session persistence with our layer 4 load balancers.</p>
    <div>
      <h3>Cloudflare’s current load balancing capabilities</h3>
      <a href="#cloudflares-current-load-balancing-capabilities">
        
      </a>
    </div>
    <p>Before we dig into the new features we are announcing, it's important to understand what Cloudflare load balancing supports today and the challenges our customers face with regard to their load balancing needs.</p><p>There are three main load balancing traffic flows that Cloudflare supports today:</p><ol><li><p>Internet-facing load balancers connecting to publicly accessible origins operating at layer 7, which supports HTTP(S)</p></li><li><p>Internet-facing load balancers connecting to publicly accessible origins operating at layer 4 (Spectrum), which supports all TCP-based and UDP-based services such as SSH, FTP, NTP, SMTP, etc.</p></li><li><p>Publicly accessible load balancers connecting to <b>private</b> origins operating at layer 7 HTTP(S) over Cloudflare Tunnels</p></li></ol><p>One of the biggest advantages Cloudflare’s load balancing solutions offer our customers is that there is no hardware to purchase or maintain. Hardware-based load balancers are expensive to purchase, license, operate, and upgrade. “Need more bandwidth? Just buy and install this additional module.” “Need more features? Just buy and install this new license.” “Oh, your hardware load balancer is End-of-Life? Just purchase an entire new kit which we will EOL in a few years!” The upgrade or refresh cycle on a fully integrated hardware load balancer setup can take years and, by the time you finish the planning, implementation, and cutover, it might actually be time to start planning the next refresh.</p><p>Cloudflare eliminates all these concerns and lets you focus on innovation and growth. Your load balancers exist in every Cloudflare data center across the globe, in <a href="https://www.cloudflare.com/network/">over 300 cities</a>, with virtually unlimited scale and capacity. You never need to worry about bandwidth constraints, deployment locations, extra hardware modules, downtime, upgrades, or maintenance windows ever again. 
With Cloudflare’s global Anycast network, every customer connects to a nearby Cloudflare data center and load balancer, where relevant policies, rules, and steering are applied.</p>
    <div>
      <h3>Load balancing more than websites with Cloudflare Spectrum</h3>
      <a href="#load-balancing-more-than-websites-with-cloudflare-spectrum">
        
      </a>
    </div>
    <p>Today, we are excited to announce that Cloudflare Spectrum can now support load balancing traffic to private networks. The addition of private IP origin support for Cloudflare load balancers is very powerful and that's why we are extending that support to load balancing with Cloudflare <a href="https://developers.cloudflare.com/spectrum/">Spectrum</a> as well. This means that any set of private or internal applications that use TCP or UDP can now be locally load balanced via Cloudflare. These services will also benefit from Spectrum’s layer 3/4 DDoS protection and can leverage other features like session persistence without compromising security. So while the ingress to these load balancers is public, the origins to which they distribute traffic can all be private, inaccessible from the public Internet.</p>
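<p>For illustration, here is what such a configuration might look like as an API payload. This is a hedged sketch only: the field names (<code>protocol</code>, <code>dns</code>, <code>origin_dns</code>, <code>origin_port</code>) reflect our reading of the Spectrum app API, and the hostnames are placeholders; consult the Spectrum developer documentation for the authoritative schema.</p>

```python
# Hedged sketch of a load-balanced Spectrum app definition. Field names are
# our reading of the Spectrum API; the hostnames below are placeholders.
import json

spectrum_app = {
    "protocol": "tcp/22",                      # a layer 4 service: SSH over TCP
    "dns": {"type": "CNAME", "name": "ssh.example.com"},  # public ingress hostname
    "origin_dns": {"name": "lb.example.com"},  # a Cloudflare load balancer as origin
    "origin_port": 22,
    "ip_firewall": True,                       # keep Spectrum's protections enabled
}
print(json.dumps(spectrum_app, indent=2))
```

The origin here is a Cloudflare load balancer rather than a server address, which is what lets Director steer each connection, including to private origins.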
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/63C3GATpDsujBJLBaboweL/b6a7adeda6c0b3800f45c3f7eb83bf6e/image3-7.png" />
            
            </figure><p>Ordinarily, load balancing to private networks would require expensive on-premise hardware or costly direct physical connections to cloud providers. But, by using Spectrum as the ingress path for TCP and UDP load balancing, customers can keep their origins completely protected and unreachable from the Internet and allow access exclusively through their Cloudflare load balancer – no expensive hardware required. Customers no longer need to manage complex ACLs or security settings to make sure only certain source IP addresses are connecting to the origins. These private origins can be hosted in private data centers, a public cloud, a private cloud, or on-premise.</p>
    <div>
      <h3>How we enabled Spectrum to support private networks</h3>
      <a href="#how-we-enabled-spectrum-to-support-private-networks">
        
      </a>
    </div>
    <p>All of our changes to create this feature center around integrations with Apollo, the unifying service created by the Cloudflare Zero Trust team. You can read their <a href="/from-ip-packets-to-http-the-many-faces-of-our-oxy-framework/">previous blog post on the Oxy framework</a> for more details on how Zero Trust handles and routes traffic. Apollo accepts incoming traffic from supported on-ramps, applies Zero Trust logic as configured by the customer, and then routes the traffic to egress via supported off-ramps. For example, Apollo enables clients connected securely using Cloudflare’s WARP client to communicate over Cloudflare Tunnels with private origins in a customer’s data center. Now, Apollo is being extended to do more.</p><p>When a user creates a load balanced Spectrum app, they choose a hostname and port, and select a Cloudflare load balancer as their origin. This allocates a hostname which will resolve to an IP address where Spectrum will listen for incoming traffic on the customer-configured port. Spectrum makes a call to Cloudflare's internal load balancing service, Director, which responds with the appropriate endpoint, to which Spectrum will proxy the connection. Previously, load balanced Spectrum apps only supported publicly addressable origins. Now, if the response from Director indicates that the traffic is destined for a private origin, Spectrum passes the private origin's IP address and <a href="https://developers.cloudflare.com/cloudflare-one/connections/connect-networks/private-net/tunnel-virtual-networks/">virtual network</a> ID to Apollo, which then proxies the traffic to the customer's private origin.</p><p>In short, new integrations between our Spectrum service and Apollo and between Apollo and Director have allowed us to expand our load balancing offerings not only to layer 4, but also enable us to leverage virtual networks to keep load balanced traffic private and off the public Internet. 
This also sets the stage for integrating load balancing with other traffic on-ramps and off-ramps, such as WARP, in the future. It also opens the door to a number of exciting possibilities like load balancing authenticated device traffic to private networks or even load balancing internal traffic that is never exposed to the public Internet.</p>
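<p>The hand-off described above can be sketched in a few lines. This is an illustration only: the names Director and Apollo come from this post, but the logic below is our simplification, not Cloudflare's code.</p>

```python
# Illustrative sketch of the routing decision: if the endpoint returned by
# Director is private, Spectrum hands the connection to Apollo together with
# the virtual network ID; public endpoints are proxied directly, as before.
import ipaddress
from dataclasses import dataclass
from typing import Optional

@dataclass
class DirectorResponse:
    address: str                               # endpoint chosen by the load balancer
    virtual_network_id: Optional[str] = None   # set only for private origins

def route(endpoint: DirectorResponse) -> str:
    if ipaddress.ip_address(endpoint.address).is_private:
        # Apollo proxies to the private origin over the customer's Tunnel.
        return f"apollo:{endpoint.address}@vnet={endpoint.virtual_network_id}"
    return f"direct:{endpoint.address}"

print(route(DirectorResponse("93.184.216.34")))         # public origin, proxied directly
print(route(DirectorResponse("10.0.0.5", "vnet-abc")))  # private origin, via Apollo
```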
    <div>
      <h3>Looking to the future</h3>
      <a href="#looking-to-the-future">
        
      </a>
    </div>
    <p>We are excited to release this new load balancing feature, which enables Cloudflare Spectrum to reach private IP endpoints. Cloudflare load balancers now support steering any TCP- or UDP-based protocol over Cloudflare Tunnels to private IP endpoints, which are otherwise not accessible via the public Internet. You can learn more about how to configure this feature on our <a href="https://developers.cloudflare.com/load-balancing/local-traffic-management/">load balancing documentation</a> pages.</p><p>We are just getting started with our private network load balancing support. There is so much more to come, including support for load balancing internal traffic, enhanced layer 4 session affinity, new steering methods, additional traffic ingress methods, and more!</p><p>
</p> ]]></content:encoded>
            <category><![CDATA[Spectrum]]></category>
            <category><![CDATA[Load Balancing]]></category>
            <category><![CDATA[Cloudflare Zero Trust]]></category>
            <category><![CDATA[Private Network]]></category>
            <category><![CDATA[Private IP]]></category>
            <guid isPermaLink="false">6xgIcezZBRXIokMo0e7gMH</guid>
            <dc:creator>Chris Ward</dc:creator>
            <dc:creator>Brian Batraski</dc:creator>
            <dc:creator>Mathew Jacob</dc:creator>
        </item>
        <item>
            <title><![CDATA[Cloudflare’s public IPFS gateways and supporting Interplanetary Shipyard]]></title>
            <link>https://blog.cloudflare.com/cloudflares-public-ipfs-gateways-and-supporting-interplanetary-shipyard/</link>
            <pubDate>Tue, 14 May 2024 13:00:36 GMT</pubDate>
            <description><![CDATA[ Cloudflare is transitioning traffic that comes to our public IPFS gateway to Interplanetary Shipyard’s IPFS gateway. The transition is expected to be complete by August 14th, 2024 ]]></description>
            <content:encoded><![CDATA[ <p></p><p><a href="https://ipfs.tech/">IPFS</a>, the distributed file system and content addressing protocol, has been around since 2015, and Cloudflare has been a user and operator since 2018, when we began <a href="/distributed-web-gateway">operating a public IPFS gateway</a>. Today, we are announcing our plan to transition this gateway traffic to the IPFS Foundation’s gateway, maintained by the <a href="https://ipshipyard.com/">Interplanetary Shipyard</a> (“Shipyard”) team, and discussing what it means for users and the future of IPFS gateways.</p><p><a href="https://blog.ipfs.tech/shipyard-hello-world/">As announced in April 2024</a>, many of the IPFS core developers and maintainers now work within a newly created, independent entity called Interplanetary Shipyard after transitioning from <a href="https://protocol.ai/">Protocol Labs</a>, where IPFS was invented and incubated. By operating as a collective, ongoing maintenance and support of important protocols like IPFS are now even more community-owned and managed. We fully support this “exit to community” and are excited to support Shipyard as they build more great infrastructure for the open web.</p><p>On May 14th, 2024, we will begin to transition traffic that comes to Cloudflare’s <a href="https://docs.ipfs.tech/concepts/public-utilities/#public-ipfs-utilities">public IPFS gateway</a> to the IPFS Foundation’s <a href="https://docs.ipfs.tech/concepts/public-utilities/#public-ipfs-gateways">gateway at ipfs.io or dweb.link</a>. Cloudflare’s public IPFS gateway is just one of many – part of a distributed ecosystem that also includes Pinata, eth.limo, and many more. 
Visit the <a href="https://ipfs.github.io/public-gateway-checker/">IPFS Public Gateway Checker</a> to see the other publicly available IPFS gateways.</p><p>Cloudflare believes in helping build a better Internet for all and an accessible and private Internet, principles that Protocol Labs, IPFS, and Shipyard all share. We believe the IPFS gateway transition will boost ecosystem collaboration, increase protocol resiliency, and ensure healthy stewardship and governance. Cloudflare is proud to be a partner of the IPFS Project and Shipyard in this transition and will continue to help sponsor their work as gateway stewards.</p>
    <div>
      <h3>What happens next</h3>
      <a href="#what-happens-next">
        
      </a>
    </div>
    <p>All traffic using the <b>cloudflare-ipfs.com</b> or <b>cf-ipfs.com</b> hostnames will continue to work without interruption, redirected to ipfs.io or dweb.link, until August 14th, 2024. After that date, the Cloudflare hostnames will no longer connect to IPFS, so if you are using either of them, please switch to <b>ipfs.io</b> or <b>dweb.link</b> as soon as possible ahead of the transition date to avoid any service interruption!</p><p>It is important to Cloudflare, IPFS, and Shipyard that this transition is completed seamlessly and with as little impact on users as possible. With that in mind, there is no change to the amount or type of end user information that is visible to either Cloudflare, the IPFS Foundation, or Shipyard before or after the completion of this transition.</p><p>We’re excited to see further development and projects from the IPFS community and play our part in helping those applications be secure, performant, and reliable!</p><hr />
    <div>
      <h3>About Shipyard</h3>
      <a href="#about-shipyard">
        
      </a>
    </div>
    <p><a href="https://ipshipyard.com/">Interplanetary Shipyard</a> is an engineering collective that delivers user agency through technical advancements in <a href="https://ipfs.tech/">IPFS</a> and <a href="https://libp2p.io">libp2p</a>. As the core maintainers of open source projects in the Interplanetary Stack (including IPFS and libp2p implementations such as <a href="https://github.com/ipfs/kubo">Kubo</a>, <a href="https://github.com/ipfs/rainbow/">Rainbow</a>, <a href="https://github.com/ipfs/boxo">Boxo</a>, <a href="https://github.com/ipfs/helia">Helia</a>, and <a href="https://github.com/libp2p/go-libp2p">go-libp2p</a>/<a href="https://github.com/libp2p/js-libp2p">js-libp2p</a>), and supporting performance measurement tooling (<a href="https://probelab.io/">Probelab</a>), they are committed to open source innovation and building bridges between traditional web frameworks and the decentralized ecosystem. To achieve this, their engineers work alongside technical teams in web2 and web3 to promote adoption and drive practical applications.</p> ]]></content:encoded>
            <category><![CDATA[Web3]]></category>
            <category><![CDATA[Distributed Web]]></category>
            <category><![CDATA[IPFS]]></category>
            <category><![CDATA[Cloudflare Gateway]]></category>
            <guid isPermaLink="false">2301leOruEAwLBe7M7S5hk</guid>
            <dc:creator>Brian Batraski</dc:creator>
            <dc:creator>Wesley Evans</dc:creator>
            <dc:creator>Cameron Wood (Guest Author)</dc:creator>
            <dc:creator>Bethany Crystal (Guest Author)</dc:creator>
        </item>
        <item>
            <title><![CDATA[Elevate load balancing with Private IPs and Cloudflare Tunnels: a secure path to efficient traffic distribution]]></title>
            <link>https://blog.cloudflare.com/elevate-load-balancing-with-private-ips-and-cloudflare-tunnels-a-secure-path-to-efficient-traffic-distribution/</link>
            <pubDate>Fri, 08 Sep 2023 13:00:01 GMT</pubDate>
            <description><![CDATA[ We are extremely excited to announce a new addition to our Load Balancing solution, Private Network Load Balancing with deep integrations with Zero Trust!
 ]]></description>
            <content:encoded><![CDATA[ <p></p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6ILMvjlajN04XDxywfXL1a/fcaf05a762de5ac61baa3b001fd4edc7/image7-1.png" />
            
            </figure><p>In the dynamic world of modern applications, efficient load balancing plays a pivotal role in delivering exceptional user experiences. Customers commonly leverage load balancing to make the most efficient use of their existing infrastructure resources. However, load balancing is not a ‘one-size-fits-all, out of the box’ solution. As you go deeper into the details of your traffic shaping requirements and as your architecture becomes more complex, different flavors of load balancing are usually required to achieve these varying goals, such as steering between data centers for public traffic, creating high availability for critical internal services with private IPs, applying steering between servers in a single data center, and more. We are extremely excited to announce a new addition to our Load Balancing solution, Private Network Load Balancing with deep integrations with Zero Trust!</p><p>A common problem businesses run into is that almost no providers can satisfy all these requirements, resulting in a growing list of vendors and disparate data sources to manage before you can get a clear view of your traffic pipeline, and investment in incredibly expensive hardware that is complicated to set up and maintain. Not having a single source of truth to shorten ‘time to resolution’, and a single partner to work with when things are not operating as expected, can be the difference between a proactive, healthy, growing business and one that is reactive and constantly putting out fires. The latter can result in an extreme slowdown in developing amazing features/services, reduced revenue, tarnished brand trust, decreased adoption - the list goes on!</p><p>For eight years, we have provided top-tier global traffic load balancing (GTM) capabilities to thousands of customers across the globe. 
But why should the steering intelligence, failover, and reliability we guarantee stop at the front door of the selected data center and only operate on public traffic? We came to the conclusion that we should go even further. Today is the start of a long series of new features that allow traffic steering, failover, session persistence, SSL/TLS offloading and much more to take place between servers after data center selection has occurred! Instead of relying <i>only</i> on relative weights to determine which server should receive traffic, you can now bring the same intelligent steering policies, such as <a href="https://developers.cloudflare.com/load-balancing/understand-basics/traffic-steering/origin-level-steering/least-outstanding-requests-pools/"><u>least outstanding requests steering</u></a> or <a href="https://developers.cloudflare.com/load-balancing/understand-basics/traffic-steering/origin-level-steering/hash-origin-steering/"><u>hash steering</u></a>, to any of your many data centers. This also means you have a single partner for <b>all</b> of your load balancing initiatives and a single pane of glass to inform business decisions! Cloudflare is thrilled to introduce the powerful combination of private IP support for Load Balancing with Cloudflare Tunnels and Private Network Load Balancing, offering customers a solution that blends unparalleled efficiency, security, flexibility, and privacy.</p>
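<p>To make the idea of hash steering concrete, here is a minimal sketch of how hashing a stable client key deterministically pins that client to one origin within a data center. This is our illustration, not Cloudflare's actual hashing scheme.</p>

```python
# Minimal sketch of hash origin steering: hashing a stable client key always
# yields the same origin, keeping that client pinned to one server.
# Illustrative only; not Cloudflare's actual hashing scheme.
import hashlib

def hash_steer(client_key: str, origins: list) -> str:
    digest = hashlib.sha256(client_key.encode()).digest()
    return origins[int.from_bytes(digest[:8], "big") % len(origins)]

origins = ["10.0.0.11", "10.0.0.12", "10.0.0.13"]
# The same client maps to the same origin on every request:
assert hash_steer("198.51.100.7", origins) == hash_steer("198.51.100.7", origins)
```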
    <div>
      <h3>What is a load balancer?</h3>
      <a href="#what-is-a-load-balancer">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2batHEyi8vzCMEjgy2E2rD/601bc0a81e751b02dc332cc531623546/pasted-image-0.png" />
            
            </figure><p>A Cloudflare load balancer directs a request from a user to the appropriate origin pool within a data center</p><p>Load balancing is functionality that has been around for the last 30 years to help businesses leverage their existing infrastructure resources. <a href="https://www.cloudflare.com/learning/performance/what-is-load-balancing/">Load balancing</a> works by proactively steering traffic away from unhealthy origin servers and, for more advanced solutions, intelligently distributing traffic load based on different steering <a href="https://developers.cloudflare.com/load-balancing/understand-basics/traffic-steering/">algorithms</a>. This process ensures that errors aren’t served to end users and empowers businesses to tightly couple overall business objectives to their traffic behavior. Cloudflare Load Balancing has made it simpler and easier to securely and reliably manage your traffic across multiple data centers around the world. With Cloudflare Load Balancing, your traffic will be directed reliably, with customizable steering, affinity, and failover, regardless of the scale of traffic or where it originates. This is a clear advantage over a physical load balancer, since a Cloudflare load balancer can be configured easily and traffic doesn’t have to reach one of your data centers before being routed to another location, which would introduce a single point of failure and significant latency. When compared with other global traffic management load balancers, Cloudflare’s Load Balancing offering is easier to set up, simpler to understand, and is fully integrated with the Cloudflare platform as one single product for all load balancing needs.</p>
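<p>The core behavior described here, steering traffic away from unhealthy origins, can be sketched in a few lines of Python. This is a toy model for illustration, not Cloudflare's implementation:</p>

```python
# Toy model of health-aware load balancing: round-robin across origins,
# skipping any origin that a health check has marked unhealthy.
import itertools

class Pool:
    def __init__(self, origins):
        self.health = {o: True for o in origins}
        self._rr = itertools.cycle(origins)

    def mark(self, origin, healthy):
        self.health[origin] = healthy   # result of an active health check

    def pick(self):
        for _ in range(len(self.health)):
            origin = next(self._rr)
            if self.health[origin]:
                return origin           # failover: unhealthy origins are skipped
        raise RuntimeError("no healthy origins in pool")

pool = Pool(["origin-a", "origin-b"])
pool.mark("origin-a", False)            # health check failed for origin-a
print(pool.pick())                      # origin-b
```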
    <div>
      <h3>What are Cloudflare Tunnels?</h3>
      <a href="#what-are-cloudflare-tunnels">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1Pn6iWjweXKPpg0Kl00s0W/6a0ef1886f7ce5ea114e010e9b5a32eb/Group-3345--1-.png" />
            
            </figure><p>Origins and servers of various types can be connected to Cloudflare using Cloudflare Tunnel. Users can also secure their traffic using WARP, allowing traffic to be secured and managed end to end through Cloudflare.</p><p>In 2018, Cloudflare introduced <a href="https://www.cloudflare.com/products/tunnel">Cloudflare Tunnels</a>, a private, secure connection between your data center and Cloudflare. Traditionally, from the moment an Internet property is deployed, developers spend an exhaustive amount of time and energy locking it down through access control lists, rotating IP addresses, or more complex solutions like <a href="https://www.cloudflare.com/learning/network-layer/what-is-gre-tunneling/">GRE tunnels</a>. We built Tunnel to help alleviate that burden. With Tunnel, you can create a private link from your origin server directly to Cloudflare without exposing your services to the public Internet or allowing incoming connections through your data center’s firewall. Instead, this private connection is established by running a lightweight daemon, <code>cloudflared</code>, in your data center, which creates a secure, outbound-only connection. This means that only traffic that you’ve configured to pass through Cloudflare can reach your private origin.</p>
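<p>The outbound-only pattern is worth seeing in miniature. The toy below (illustrative only, not how <code>cloudflared</code> is actually implemented) shows an "origin" that only ever dials out, yet still answers a request forwarded back over that same connection, which is why no inbound firewall rule is needed:</p>

```python
# Toy model of an outbound-only tunnel: the private origin dials OUT to a
# relay, then serves a request that arrives back over that same connection.
# Illustrative only; cloudflared's real protocol is far more involved.
import socket
import threading

def origin(relay_addr):
    # The origin makes a single outbound connection; nothing ever dials in.
    s = socket.create_connection(relay_addr)
    request = s.recv(1024)              # the request arrives over the outbound link
    if request:
        s.sendall(b"200 OK from private origin")
    s.close()

# The "relay" (standing in for Cloudflare's edge) waits for the origin's call.
relay = socket.socket()
relay.bind(("127.0.0.1", 0))
relay.listen(1)

t = threading.Thread(target=origin, args=(relay.getsockname(),))
t.start()

conn, _ = relay.accept()                # the origin dialed out to us
conn.sendall(b"GET /health")            # forward a visitor request over the tunnel
reply = conn.recv(1024)
conn.close(); t.join(); relay.close()
print(reply)  # b'200 OK from private origin'
```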
    <div>
      <h3>Unleashing the potential of Cloudflare Load Balancing with Cloudflare Tunnels</h3>
      <a href="#unleashing-the-potential-of-cloudflare-load-balancing-with-cloudflare-tunnels">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4fxiGzB4slJt02ZgQppT1d/1eadd4e57c66458024c2247080c7a07d/After-Elevate-Load-Balancing-with-Private-IP-Support-and-Cloudflare-Tunnels.png" />
            
            </figure><p>Cloudflare Load Balancing can easily and securely direct a user’s request to a specific origin within your private data center or public cloud using Cloudflare Tunnels</p><p>Combining Cloudflare Tunnels with Cloudflare Load Balancing allows you to remove your physical load balancers from your data center and have your Cloudflare load balancer reach out to your servers directly via their private IP addresses with health checks, steering, and all other Load Balancing features currently available. Instead of configuring your on-premise load balancer to expose each service and then updating your Cloudflare load balancer, you can configure it all in one place. This means that from the end-user to the server handling the request, all your configuration can be done in a single place – the Cloudflare dashboard. On top of this, you can say goodbye to the multi-hundred-thousand-dollar price tag of hardware appliances, the heavy management overhead, and the sunk cost of investing in a solution whose delivered value comes with an expiration date.</p><p>Load Balancing serves as the backbone for online services, ensuring seamless traffic distribution across servers or data centers. Traditional load balancing techniques often require exposing services on a data center’s public IP addresses, forcing organizations to create complex configurations vulnerable to security risks and potential data exposure. By harnessing the power of private IP support for Load Balancing in conjunction with Cloudflare Tunnels, Cloudflare is revolutionizing the way businesses <a href="https://www.cloudflare.com/application-services/solutions/">protect and optimize their applications</a>. 
With clear steps to <a href="https://developers.cloudflare.com/cloudflare-one/connections/connect-networks/install-and-setup/tunnel-guide/">install</a> the cloudflared agent to connect your private network to Cloudflare’s network via Cloudflare Tunnels, directly and securely routing traffic into your data centers becomes easier than ever before!</p>
    <div>
      <h3>Publicly exposing services in private data centers is complicated</h3>
      <a href="#publicly-exposing-services-in-private-data-centers-is-complicated">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/fGhHqn2IZj4gMUL24c7lb/16ec2f39fbb2fe37fd26eff7f6d2479f/Before-Elevate-Load-Balancing-with-Private-IP-Support-and-Cloudflare-Tunnels_-A-Secure-Path-to-Efficient-Traffic-Distributio.png" />
            
            </figure><p><sup>A visitor’s request hits a global traffic management (GTM) load balancer directing the request to a data center, then a firewall, then a local load balancer and then an origin</sup></p><p>Load balancing within a private data center can be expensive and difficult to manage. The idea of keeping security first while ensuring ease of use and flexibility for your internal workforce is a tricky balance to strike. It’s not only the ‘how’ of securely exposing internal services, but how to best balance traffic between servers at a single location within your private network!</p><p>In a private data center, even a very simple website can be fairly complex in terms of networking and configuration. Let’s walk through a simple example of a customer device connecting to a website. A customer device performs a DNS lookup for the business’s website and receives an IP address corresponding to a customer data center. The customer then makes an HTTPS request to that IP address, passing the original hostname via <a href="https://www.cloudflare.com/learning/ssl/what-is-sni/">Server Name Indication</a> (SNI). The data center’s load balancer then forwards the request to the corresponding origin server and returns the response to the customer device.</p><p>This example doesn’t have any advanced functionality and the stack is already difficult to configure:</p><ul><li><p>Expose the service or server on a private IP.</p></li><li><p>Configure your data center’s networking to expose the LB on a public IP or IP range.</p></li><li><p>Configure your load balancer to forward requests for that hostname and/or public IP to your server’s private IP.</p></li><li><p>Configure a DNS record for your domain to point to your load balancer’s public IP.</p></li></ul><p>In large enterprises, each of these configuration changes likely requires approval from several stakeholders and must be made through different repositories, websites, and/or private web interfaces. 
Load balancer and networking configurations are often maintained as complex configuration files for Terraform, Chef, Puppet, Ansible or a similar infrastructure-as-code service. These configuration files can be syntax checked or tested but are rarely tested thoroughly prior to deployment. Each deployment environment is often unique enough that thorough testing is not feasible given the time and hardware requirements needed to do so. This means that changes to these files can negatively affect other services within the data center. In addition, opening up an ingress to your data center <a href="https://www.cloudflare.com/learning/security/what-is-an-attack-surface/">widens the attack surface</a> for varying security risks such as <a href="https://www.cloudflare.com/learning/ddos/what-is-a-ddos-attack/">DDoS attacks</a> or catastrophic data breaches. To make things worse, each vendor has a different interface or API for configuring their devices or services. For example, some <a href="https://www.cloudflare.com/learning/dns/glossary/what-is-a-domain-name-registrar/">registrars</a> only have XML APIs while others have JSON REST APIs. Each device configuration may have different Terraform providers or Ansible playbooks. This results in complex configurations accumulating over time that are difficult to consolidate or standardize, inevitably resulting in technical debt.</p><p>Now let’s add additional origins. For each additional origin for our service, we’ll have to go set up and expose that origin and configure the physical load balancer to use our new origin. Now let’s add another data center. Now we need another solution to distribute across our data centers. This results in one system for global traffic management and another, separate system for local traffic. These solutions have historically come from different vendors and must be configured in different ways, even though they serve the same purpose: load balancing. 
This makes managing your web traffic unnecessarily difficult. Why should you have to configure your origins in two different load balancers? Why can’t you manage all the traffic for all the origins for a service in the same place?</p>
    <div>
      <h3>Simpler and better: Load Balancing with Tunnels</h3>
      <a href="#simpler-and-better-load-balancing-with-tunnels">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5uBuu8as4xTsf4QLZtpjt7/b361efe686342832c1c1de254e22cccf/pasted-image-0--1-.png" />
            
            </figure><p>Cloudflare Load Balancing can manage traffic for all your offices, data centers, remote users, public clouds, private clouds and hybrid clouds in one place</p><p>With Cloudflare Load Balancing and Cloudflare Tunnel, you can manage all your public and private origins in one place: the Cloudflare dashboard. Cloudflare load balancers can be easily configured using the Cloudflare dashboard or the Cloudflare API. There’s no need to <a href="https://www.cloudflare.com/learning/access-management/what-is-ssh/">SSH</a> or open a remote desktop to modify load balancer configurations for your public or private servers. All configurations can be done through the dashboard UI or Cloudflare API, with full parity between the two.</p><p>With Cloudflare Tunnel <a href="https://developers.cloudflare.com/cloudflare-one/connections/connect-networks/install-and-setup/tunnel-guide/">set up and running</a> in your data center, everything is ready to connect your origin server to Cloudflare’s network and load balancers. You do not need to configure any ingress to your data center since Cloudflare Tunnel operates only over outbound connections and can securely reach out to privately addressed services inside your data center. To expose your service to Cloudflare, you just <a href="https://developers.cloudflare.com/cloudflare-one/connections/connect-networks/private-net/tunnel-virtual-networks/#route-ips-over-virtual-networks">set up your private IP range to be routed over that tunnel</a>. Then, you can create a Cloudflare load balancer and input the corresponding private IP address and <a href="https://developers.cloudflare.com/load-balancing/understand-basics/traffic-management/">virtual network ID into your origin pool</a>. After that, Cloudflare manages the DNS and load balancing across your private servers. 
Now your origin is receiving traffic exclusively via Cloudflare Tunnel and your physical load balancer is no longer needed!</p><p>This groundbreaking integration enables organizations to deploy load balancers while keeping their applications securely shielded from the public Internet. The customer’s traffic passes through Cloudflare’s data centers, allowing customers to continue to take full advantage of Cloudflare’s security and performance services. Also, by leveraging Cloudflare Tunnels, traffic between Cloudflare and customer origins remains isolated within trusted networks, bolstering privacy, security, and peace of mind.</p>
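As a rough sketch of that last step, the origin pool might reference a private IP tied to the tunnel's virtual network. The account ID, virtual network ID, and addresses below are hypothetical placeholders, and the exact field names should be checked against the Load Balancing API documentation:

```python
import json

# Hypothetical placeholder values -- substitute your own.
ACCOUNT_ID = "0123456789abcdef"
VNET_ID = "699d98642c564d2e855e9661899b7252"  # virtual network routed over the tunnel

# Origin pool referencing a private IP inside the data center; the
# virtual network ID ties the address to the tunnel's private routes.
pool = {
    "name": "private-dc-pool",
    "origins": [
        {
            "name": "app-server-1",
            "address": "10.0.0.10",  # private origin IP, not publicly reachable
            "enabled": True,
            "virtual_network_id": VNET_ID,
        }
    ],
}

# The pool would then be created with a POST to:
#   https://api.cloudflare.com/client/v4/accounts/{ACCOUNT_ID}/load_balancers/pools
print(json.dumps(pool, indent=2))
```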
    <div>
      <h3>The advantages of Private IP support with Cloudflare Tunnels</h3>
      <a href="#the-advantages-of-private-ip-support-with-cloudflare-tunnels">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4zxhWEuT3ZCugzEXSDSHiL/ce99377d9cdbd29aebe95130ac8dd6d1/pasted-image-0--2-.png" />
            
            </figure><p><sup>Cloudflare Load Balancing works in conjunction with all the security and privacy products that Cloudflare has to offer, including DDoS protection, Web Application Firewall and Bot Management</sup></p><p><b>Unified Traffic Management</b>: All the features and ease of use that were part of Cloudflare Load Balancing for Global Traffic Management are also available with Private Network Load Balancing. You can configure your public and private origins in one dashboard instead of across several services and vendors. Now, all your private origins can benefit from the features that Cloudflare Load Balancing is known for: instant failover, customizable steering between data centers, ease of use, custom rules and configuration updates in a matter of seconds. They will also benefit from our newer features including least connection steering, least outstanding request steering, and session affinity by header. This is just a small subset of the expansive feature set for Load Balancing. See our <a href="https://developers.cloudflare.com/load-balancing/"><u>dev docs</u></a> for more features and details on the offering.</p><p><b>Enhanced Security</b>: By combining private IP support with Cloudflare Tunnels, organizations can fortify their security posture and protect sensitive data. With private IP addresses and encrypted connections via Cloudflare Tunnel, the risk of unauthorized access and potential attacks is significantly reduced – traffic remains within trusted networks. You can also configure <a href="https://developers.cloudflare.com/cloudflare-one/policies/access/">Cloudflare Access</a> to add single sign-on support for your application and restrict your application to a subset of authorized users.
In addition, you still benefit from Firewall rules, Rate Limiting rules, Bot Management, DDoS protection and all the other Cloudflare products available today, allowing comprehensive security configurations.</p><p><b>Uncompromising Privacy</b>: As data privacy continues to take center stage, businesses must ensure the confidentiality of user information. Cloudflare's private IP support with Cloudflare Tunnels enables organizations to segregate applications and keep sensitive data within their private network boundaries. Custom rules also allow you to direct traffic for specific devices to specific data centers. For example, you can use custom rules to direct traffic from Eastern and Western Europe to your European data centers, so you can easily keep those users’ data within Europe. This minimizes the exposure of data to external entities, preserving user privacy and complying with strict privacy regulations across different geographies.</p><p><b>Flexibility &amp; Reliability</b>: Scale and adaptability are some of the major foundations of a well-operating business. <a href="https://www.cloudflare.com/learning/access-management/how-to-implement-zero-trust/">Implementing solutions</a> that fit your business’ needs today is not enough. Customers must find solutions that meet their needs for the next three or more years. The blend of Load Balancing with Cloudflare Tunnels within our <a href="https://www.cloudflare.com/zero-trust/solutions/">Zero Trust solution</a> is the very definition of flexibility and reliability! Changes to load balancer configurations propagate around the world in a matter of seconds, making load balancers an effective way to respond to incidents. Also, instant failover, health monitoring, and steering policies all help to maintain high availability for your applications, so you can deliver the reliability that your users expect.
This is all in addition to deeply integrated best-in-class <a href="https://www.cloudflare.com/learning/security/glossary/what-is-zero-trust/">Zero Trust</a> capabilities such as, but not limited to, Secure Web Gateway (SWG), remote browser isolation, network logs and data loss prevention.</p><p><b>Streamlined Infrastructure</b>: Organizations can consolidate their network architecture and establish secure connections across distributed environments. This unification reduces complexity, lowers operational overhead, and facilitates efficient resource allocation. Whether you need to apply a global traffic manager to intelligently direct traffic between data centers within your private network, or steer between specific servers after data center selection has taken place, there is now a clear, single lens to manage your global and local traffic, regardless of whether the source or destination of the traffic is public or private. Complexity can be a large hurdle in achieving and maintaining fast, agile business units. Consolidating into a single provider, like Cloudflare, that provides security, reliability, and observability will not only save significant cost but also allow your teams to move faster and focus on growing their business, enhancing critical services, and developing incredible features, rather than taping together infrastructure that may not work in a few years. Leave the heavy lifting to us, and let us empower you and your team to focus on creating amazing experiences for your employees and end-users.</p><p>Hardware appliances for local traffic lack agility, flexibility, and lean operations, and do not justify the hundreds of thousands of dollars spent on them, nor the huge overhead of managing CPU, memory, power, cooling, and more.
Instead, we want to help businesses move this logic to the cloud by abstracting away the needless overhead and bringing more focus back to teams to do what they do best, building amazing experiences, and allowing Cloudflare to do what we do best, protecting, accelerating, and building heightened reliability. Stay tuned for more updates on Cloudflare's Private Network Load Balancing and how it can reduce architecture complexity while bringing more insight, security, and control to your teams. In the meantime, check out our new <a href="https://cf-assets.www.cloudflare.com/slt3lc6tev37/7siMQh0goJJnH4PYbAzOxC/f4a66ebdf20cca2ec85c2b9261fb8a38/Optimize-Web-Performance.pdf"><u>whitepaper</u></a>!</p>
    <div>
      <h3>Looking to the future</h3>
      <a href="#looking-to-the-future">
        
      </a>
    </div>
    <p>Cloudflare's impactful solution, private IP support for Load Balancing with Cloudflare Tunnels as part of the Zero Trust solution, reaffirms our commitment to providing cutting-edge tools that prioritize security, privacy, and performance. By leveraging private IP addresses and secure tunnels, Cloudflare empowers businesses to fortify their network infrastructure while ensuring compliance with regulatory requirements. With enhanced security, uncompromising privacy, and streamlined infrastructure, load balancing becomes a powerful driver of efficient and secure public or private services.</p><p>As a business grows and its systems scale up, they'll need the features that Cloudflare Load Balancing is known for: health monitoring, steering, and failover. As availability requirements increase due to growing demands and standards from end-users, customers can add health checks, enabling automatic failover to healthy servers when an unhealthy server begins to fail. When the business begins to receive more traffic from around the world, they can create new pools for different regions and use dynamic steering to reduce latency between the user and the server. For intensive or long-running requests, such as complex datastore queries, customers can benefit from leveraging <a href="https://developers.cloudflare.com/load-balancing/understand-basics/traffic-steering/origin-level-steering/least-outstanding-requests-pools/"><u>least outstanding requests steering</u></a> to reduce the number of concurrent requests per server. Before, this could all be done with publicly addressable IPs, but it is now available for pools with public IPs, private servers, or combinations of the two. Private Network Load Balancing  is live and ready to use today! 
Check out our <a href="https://developers.cloudflare.com/load-balancing/understand-basics/traffic-management/"><u>dev docs for instructions on how to get started</u></a>.</p><p>Stay tuned for our next addition: new Load Balancing onramp support for Spectrum and WARP via Cloudflare Tunnel with private IPs for your <a href="https://developers.cloudflare.com/fundamentals/get-started/concepts/network-layers/">Layer 4</a> traffic, allowing us to support TCP and UDP applications in your private data centers!</p> ]]></content:encoded>
            <category><![CDATA[Load Balancing]]></category>
            <category><![CDATA[Cloudflare Tunnel]]></category>
            <category><![CDATA[Traffic]]></category>
            <guid isPermaLink="false">6WBtRI0c6K4SqCsCAd3hgn</guid>
            <dc:creator>Brian Batraski</dc:creator>
            <dc:creator>Mathew Jacob</dc:creator>
        </item>
        <item>
            <title><![CDATA[Load Balancing with Weighted Pools]]></title>
            <link>https://blog.cloudflare.com/load-balancing-with-weighted-pools/</link>
            <pubDate>Tue, 02 Aug 2022 13:00:00 GMT</pubDate>
            <description><![CDATA[ Today, we’re excited to announce a frequently requested feature for our Load Balancer – Weighted Pools ]]></description>
            <content:encoded><![CDATA[ <p></p><p>Anyone can take advantage of Cloudflare’s far-reaching network to protect and accelerate their online presence. Our vast <a href="https://www.cloudflare.com/network/">number of data centers</a>, and their proximity to Internet users around the world, enables us to secure and accelerate our customers’ Internet applications, APIs and websites. Even a simple service with a <a href="https://www.cloudflare.com/learning/cdn/glossary/origin-server/">single origin server</a> can leverage the massive scale of the Cloudflare network in 270+ cities. Using the Cloudflare cache, you can support more requests and users without purchasing new servers.</p><p>Whether it is to guarantee high availability through redundancy, or to support more dynamic content, an increasing number of services require multiple origin servers. The Cloudflare Load Balancer keeps our customer’s services highly available and makes it simple to spread out requests across multiple origin servers. Today, we’re excited to announce a frequently requested feature for our Load Balancer – Weighted Pools!</p>
    <div>
      <h2>What’s a Weighted Pool?</h2>
      <a href="#whats-a-weighted-pool">
        
      </a>
    </div>
    <p>Before we can answer that, let’s take a quick look at how our load balancer works and define a few terms:</p><p><b>Origin Servers</b> - Servers which sit behind Cloudflare and are often located in a customer-owned datacenter or at a public cloud provider.</p><p><b>Origin Pool</b> - A logical collection of origin servers. Most pools are named to represent data centers, or cloud providers like “us-east,” “las-vegas-bldg1,” or “phoenix-bldg2”. It is recommended to use pools to represent a collection of servers in the same physical location.</p><p><b>Traffic Steering Policy</b> - A policy specifies how a load balancer should steer requests across origin pools. Depending on the steering policy, requests may be sent to the nearest pool as defined by latitude and longitude, the origin pool with the lowest latency, or based upon the location of the Cloudflare data center.</p><p><b>Pool Weight</b> - A numerical value to describe what percentage of requests should be sent to a pool, relative to other pools.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6rTxNJcZUpuAep6AGouFyW/51f258c84377e710316181a3e0ce7991/image6.png" />
            
            </figure><p>When a request from a visitor arrives at the Cloudflare network for a hostname with a load balancer attached to it, the load balancer must decide where the request should be forwarded. Customers can configure this behavior with traffic steering policies.</p><p>The Cloudflare Load Balancer already supports <a href="https://developers.cloudflare.com/load-balancing/understand-basics/traffic-steering/steering-policies/">Standard Steering, Geo Steering, Dynamic Steering, and Proximity Steering</a>. Each of these traffic steering policies controls how requests are distributed across origin pools. Weighted Pools are an extension of our standard, random steering policy which lets you specify what relative percentage of requests should be sent to each pool.</p><p>In the example above, our load balancer has two origin pools, “las-vegas-bldg1” (a customer-operated data center), and “us-east-cloud” (a public cloud provider with multiple virtual servers). Each pool has a weight of 0.5, so 50% of requests should be sent to each respective pool.</p>
    <div>
      <h2>Why would someone assign weights to origin pools?</h2>
      <a href="#why-would-someone-assign-weights-to-origin-pools">
        
      </a>
    </div>
    <p>Weighted Pools has long been one of our customers’ most frequently requested features. Part of the reason we’re so excited about this feature is that it can be used to solve many different kinds of problems.</p>
    <div>
      <h3>Unequally Sized Origin Pools</h3>
      <a href="#unequally-sized-origin-pools">
        
      </a>
    </div>
    <p>In the example below, the amount of dynamic and uncacheable traffic has significantly increased due to a large sales promotion. Administrators notice that the load on their Las Vegas data center is too high, so they elect to dynamically increase the number of origins within their public cloud provider. Our two pools, “las-vegas-bldg1” and “us-east-cloud” are no longer equally sized. Our pool representing the public cloud provider is now much larger, so administrators change the pool weights so that the cloud pool receives 0.8 (80%) of the traffic, relative to the 0.2 (20%) of the traffic which the Las Vegas pool receives. The administrators were able to use pool weights to very quickly fine-tune the distribution of requests across unequally sized pools.</p>
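To see how such weights play out, here is a small simulation of weighted random steering. This is our own illustration of the concept, not Cloudflare's actual selection code:

```python
import random

random.seed(42)  # deterministic run for illustration

pools = ["us-east-cloud", "las-vegas-bldg1"]
weights = [0.8, 0.2]  # relative share of requests per pool

# Steer 10,000 simulated requests using weighted random choice.
counts = {pool: 0 for pool in pools}
for _ in range(10_000):
    chosen = random.choices(pools, weights=weights)[0]
    counts[chosen] += 1

print(counts)  # roughly 8,000 vs. 2,000 requests
```

Individual request counts fluctuate around the expected split, but over many requests the distribution converges on the configured 80/20 ratio.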
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5V6rFceRK8So5AK0S7FzV9/4441a732a2581a6fe42938710e624742/image3.png" />
            
            </figure>
    <div>
      <h3>Data center kill switch</h3>
      <a href="#data-center-kill-switch">
        
      </a>
    </div>
    <p>In addition to balancing out unequally sized pools, Weighted Pools may also be used to take a data center (an origin pool) completely out of rotation by setting the pool’s weight to 0. This can be particularly useful if a data center needs to be quickly removed during troubleshooting, or during proactive maintenance where power may be unavailable. Even if a pool is disabled with a weight of 0, Cloudflare will still monitor the pool for health so that administrators can assess when it is safe to return traffic.</p>
    <div>
      <h3>Network A/B testing</h3>
      <a href="#network-a-b-testing">
        
      </a>
    </div>
    <p>One final use case we’re excited about is the ability to use weights to direct a very small share of requests to a pool. Did the team just stand up a brand-new data center, or perhaps upgrade all the servers to a new software version? Using weighted pools, administrators can effectively A/B test their network: send only 0.05 (5%) of requests to a new pool to verify the origins are functioning properly before gradually increasing the load.</p>
    <div>
      <h2>How do I get started?</h2>
      <a href="#how-do-i-get-started">
        
      </a>
    </div>
    <p>When setting up a load balancer, you need to configure one or more origin pools, and then place origins into your respective pools. Once you have more than one pool, the relative weights of the respective pools will be used to distribute requests.</p><p>To set up a weighted pool using the Dashboard, create a load balancer in the <b>Traffic &gt; Load Balancing</b> area.</p><p>Once you have set up the load balancer, you’ll be taken to the <b>Origin Pools</b> setup page. Under the Traffic Steering Policy, select <b>Random</b>, and then assign relative weights to every pool.</p><p>If your weights do not add up to 1.00 (100%), that’s fine! We do the math behind the scenes to determine how much traffic each pool should receive relative to the others.</p>
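That behind-the-scenes math is simply weight normalization, which can be sketched as follows (illustrative values and our own sketch, not Cloudflare's actual code):

```python
def normalize(weights: dict[str, float]) -> dict[str, float]:
    """Scale pool weights so they sum to 1.0, preserving their ratios."""
    total = sum(weights.values())
    return {pool: weight / total for pool, weight in weights.items()}

# Weights of 3 and 1 behave exactly like 0.75 and 0.25.
print(normalize({"las-vegas-bldg1": 3, "us-east-cloud": 1}))
# → {'las-vegas-bldg1': 0.75, 'us-east-cloud': 0.25}
```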
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1tgWXK6Slx7C3KZdZd5SrO/de00d5b4889346a67ba6fc7b34b0e05d/image4.png" />
            
            </figure>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7MoDiAL340j2GfFJ3KihWr/157eac63640bbd6cd11c75d8b610458b/image2.png" />
            
            </figure>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4FjRnImOHxs5NHVZqRVG7w/ee4e4f38303dc06ee0ec1494fa65faab/image5-1.png" />
            
            </figure><p>Weighted Pools may also be configured via the API. We’ve edited an example illustrating the relevant parts of the REST API.</p><ul><li><p>The load balancer should employ a “steering_policy” of random.</p></li><li><p>Each pool has a UUID, which can then be assigned a “pool_weight.”</p></li></ul>
            <pre><code> {
    "description": "Load Balancer for www.example.com",
    "name": "www.example.com",
    "enabled": true,
    "proxied": true,
    "fallback_pool": "9290f38c5d07c2e2f4df57b1f61d4196",
    "default_pools": [
        "9290f38c5d07c2e2f4df57b1f61d4196",
        "17b5962d775c646f3f9725cbc7a53df4"
    ],
    "steering_policy": "random",
    "random_steering": {
        "pool_weights": {
            "9290f38c5d07c2e2f4df57b1f61d4196": 0.8
        },
        "default_weight": 0.2
    }
}</code></pre>
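Reading the example above: the first pool gets an explicit weight of 0.8, while any pool missing from "pool_weights" (here, the second one) falls back to "default_weight". A small sketch of that resolution logic (our illustration, not Cloudflare's implementation):

```python
# The relevant fragment of the load balancer configuration above.
config = {
    "default_pools": [
        "9290f38c5d07c2e2f4df57b1f61d4196",
        "17b5962d775c646f3f9725cbc7a53df4",
    ],
    "random_steering": {
        "pool_weights": {"9290f38c5d07c2e2f4df57b1f61d4196": 0.8},
        "default_weight": 0.2,
    },
}

def effective_weights(config: dict) -> dict[str, float]:
    """Resolve each pool's weight: use its explicit entry in
    pool_weights if present, otherwise fall back to default_weight."""
    steering = config["random_steering"]
    return {
        pool: steering["pool_weights"].get(pool, steering["default_weight"])
        for pool in config["default_pools"]
    }

print(effective_weights(config))
```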
            <p>We’re excited to launch this simple yet powerful feature. Weighted pools can be used in many creative ways to solve load balancing challenges. It’s available for all customers with load balancers today!</p><p>Developer Docs: <a href="https://developers.cloudflare.com/load-balancing/how-to/create-load-balancer/#create-a-load-balancer">https://developers.cloudflare.com/load-balancing/how-to/create-load-balancer/#create-a-load-balancer</a></p><p>API Docs: <a href="https://api.cloudflare.com/#load-balancers-create-load-balancer">https://api.cloudflare.com/#load-balancers-create-load-balancer</a></p> ]]></content:encoded>
            <category><![CDATA[Load Balancing]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[Speed & Reliability]]></category>
            <category><![CDATA[Network Services]]></category>
            <category><![CDATA[Performance]]></category>
            <guid isPermaLink="false">12ylWmYVhyw4uFIG3NBqN2</guid>
            <dc:creator>Brian Batraski</dc:creator>
            <dc:creator>Ben Ritter</dc:creator>
        </item>
        <item>
            <title><![CDATA[Public access for our Ethereum and IPFS gateways now available]]></title>
            <link>https://blog.cloudflare.com/ea-web3-gateways/</link>
            <pubDate>Mon, 16 May 2022 12:57:48 GMT</pubDate>
            <description><![CDATA[ Today we are excited to announce that our Ethereum and IPFS gateways are publicly available to all Cloudflare customers for the first time ]]></description>
            <content:encoded><![CDATA[ <p>Today we are excited to announce that our Ethereum and IPFS gateways are publicly available to all Cloudflare customers for the first time. Since the announcement of our private beta last September, interest in our Ethereum and IPFS gateways has been overwhelming. We are humbled by the demand for these tools, and we are excited to get them into as many developers' hands as possible. Starting today, any Cloudflare customer can log into the dashboard and configure a zone for Ethereum, IPFS, or both!</p><p>Over the last eight months of the private beta, we’ve been busy working to fully operationalize the gateways to ensure they meet the needs of our customers!</p><p>First, we have created a new <a href="https://api.cloudflare.com/#web3-hostname-properties">API</a> with end-to-end managed hostname deployment. This ensures that creating and managing gateways remains extremely quick and easy as you continue to scale! It gives time and focus back to developers, so you can concentrate on your core product and services and leave the infrastructure to us!</p><p>Second, we’ve added a <a href="http://dash.cloudflare.com/?to=/:account/:zone/web3">brand new UI</a> bringing web3 to Cloudflare's zone-level dashboard. Now, regardless of the workflow you are used to, we have parity between our UI and API to ensure we fit into your existing processes, and no time is wasted figuring out how to integrate: just a quick setup, and you can start serving content or connecting your services!</p><p>Third, we are pleased to say that you will soon have testnet support, so your new development can be easily tested, hardened, and deployed to your mainnet without additional risk to your brand trust or product availability, or concern that something may fail silently and begin a cascade of problems throughout your network.
We want anyone leveraging our web3 gateways to have confidence and trust that the changes they push forward do not impact end user experience. At the end of the day, the Internet is for end users, and their experience and perception must always be kept within our purview.</p><p>Lastly, Cloudflare loves to build on top of Cloudflare. This helps us stay resilient and also shows our commitment and belief in all the products we create! We have always used our SSL for SaaS and Workers products in the background. Building on our own services gives our customers the ability to define and control their own HTTP features on top of traffic destined for web3 gateways, including: rate limits, WAF rules, custom security filters, serving video, customer-defined Workers logic, custom redirects and more!</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/30FMdovSIlUpWq4n63c0KR/c623107c8c1991700c55cefdbd592378/image1-42.png" />
            
            </figure><p>Today thousands of different individuals, companies, and DAO's are building new products leveraging our web3 gateways -- the most reliable web3 infrastructure with the largest network</p><p>Here are just a few snippets of how people are already using our web3 Gateways, and we can’t wait to see what you build on them:</p><ul><li><p>DeFi DAO’s use the Cloudflare IPFS gateway to serve their front end web applications globally without latency or cache penalties.</p></li><li><p>NFT designers use the Ethereum Gateway to effortlessly drop new offerings, and the IPFS gateway to store them in a fully decentralized system.</p></li><li><p>Large Dapp developers trust us to handle huge traffic spikes quickly and efficiently, without rate limits or overage caps. They combine all our offerings into a single pane of glass so that they don’t have to juggle multiple systems.</p></li></ul><p>As part of this announcement, we will begin migrating our existing users away from the legacy gateway endpoints and onto our new API, which is easier, highly managed, and more robust. To ensure a smooth transition, you will first need to make sure you have signed up for a <a href="https://developers.cloudflare.com/fundamentals/get-started/setup/account-setup/">Cloudflare account</a> if you did not already have one. On top of that, we have made sure to keep our free users in mind and thus our free users will continue to use the gateways at no cost with our free tier option! This includes no cap in the amount of traffic that can be pushed through our gateways along with offering the most transparent and forecastable pricing models in the market today. We are very excited about the future and look forward to sharing the next iterations of web3 at Cloudflare!</p><p>Also, if you can’t wait to start building on our gateways, check out our <a href="https://developers.cloudflare.com/web3/">product documentation</a> for more guidance.</p> ]]></content:encoded>
            <category><![CDATA[Platform Week]]></category>
            <category><![CDATA[Web3]]></category>
            <category><![CDATA[Ethereum]]></category>
            <category><![CDATA[IPFS]]></category>
            <guid isPermaLink="false">21fRaPsfontOlRPBBt88E5</guid>
            <dc:creator>Wesley Evans</dc:creator>
            <dc:creator>Brian Batraski</dc:creator>
        </item>
        <item>
            <title><![CDATA[Announcing The Cloudflare Distributed Web Gateways Private Beta: Unlocking the Web3 Metaverse and Decentralized Finance for Everyone]]></title>
            <link>https://blog.cloudflare.com/announcing-web3-gateways/</link>
            <pubDate>Fri, 01 Oct 2021 12:59:48 GMT</pubDate>
            <description><![CDATA[ Cloudflare announces the Private Beta of their Web3 gateways for Ethereum and IPFS. Unlocking the Metaverse, Web3, and Decentralized Finance for every developer. ]]></description>
            <content:encoded><![CDATA[ 
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4fgIKsu1B2OUYfIvoufy4J/2c3e73dd9e7c7082aabaf224daf3c13a/image8-2.png" />
            
            </figure><p>It’s cliché to say that the Internet has undergone massive changes in the last five years. New technologies like distributed ledgers, NFTs, and cross-platform metaverses have become all the rage. Unless you happen to hang out with the Web3 community in Hong Kong, San Francisco, and London, these technologies have a high barrier to entry for the average developer. You have to understand how to run distributed nodes, set up esoteric developer environments, and keep up with the latest chains just to get your app to run. That stops today. Today you can <a href="https://docs.google.com/forms/d/11_oXpvGGVtP0DJenWBzLfxE4cyCjHHbqrbIibLAz2wQ/edit">sign up for the private beta</a> of our Web3 product suite starting with our Ethereum and IPFS gateway.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6CPHhwfETpZPYPZ7YMspBP/ccc8a837b76989d9cdf9c3b31fb1628c/image9.png" />
            
            </figure><p>Before we go any further, a brief introduction to blockchain (<a href="https://ethereum.org/en/what-is-ethereum/">Ethereum</a> in our example) and the <a href="https://ipfs.io/#how">InterPlanetary FileSystem</a> (IPFS). In a Web3 setting, you can think of Ethereum as the compute layer, and IPFS as the storage layer. By leveraging decentralised ledger technology, Ethereum provides verifiable decentralised computation. Publicly available binaries, called "smart contracts", can be instantiated by users to perform operations on an immutable set of records. This set of records is the state of the blockchain. It has to be maintained by every node on the network, so they can verify, and participate in the computation. Performing operations on a lot of data is therefore expensive. A common pattern is to use IPFS as an external storage solution. IPFS is a peer-to-peer network for storing content on a distributed file system. Content is identified by its hash, making it inexpensive to reference from a blockchain context.</p><p>If you want an even deeper understanding of how Web3 works check out our other blog posts on <a href="/what-is-web3/">what is Web3</a> and <a href="/get-started-web3/">creating Web3 Dapps with Cloudflare Workers</a>.</p>
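The content-addressing idea can be sketched with a plain hash. Note this is a simplification: IPFS actually uses multihash-encoded CIDs rather than raw SHA-256 hex digests.

```python
import hashlib

def content_id(data: bytes) -> str:
    """Toy content identifier: the SHA-256 digest of the bytes.
    Identical content always yields the same identifier, so a
    blockchain record can reference arbitrarily large data by this
    short, fixed-size value instead of storing the data on-chain."""
    return hashlib.sha256(data).hexdigest()

artwork = b"...bytes of an NFT image..."
cid = content_id(artwork)
print(cid)                          # 64 hex characters, regardless of size
assert content_id(artwork) == cid   # same content, same address
```

Because the identifier is derived from the content itself, anyone retrieving the data from any peer can verify it by re-hashing, with no trusted server required.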
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2Or8TSruyUwyvrcwMsEoNp/bb4cde50e8ad68f9cb48f390a455d76c/image1-4.png" />
            
            </figure>
    <div>
      <h3>Web3 and the Metaverse</h3>
      <a href="#web3-and-the-metaverse">
        
      </a>
    </div>
    <p>Over the last four years, while we have been working to mature the technology required to provide access to Web3 services at a global scale, the idea of the Metaverse has come back into vogue. Popularized by novels like “Snow Crash” and “Ready Player One,” the idea is a simple one. Imagine an Internet where you can hop into an app and have access to all of your favorite digital goods, regardless of where you purchased them. You could sell your work on social media without granting them a worldwide license, and the buyer could use it on their online game. The Metaverse is a place where copyright and ownership can be managed through NFTs (<a href="/get-started-web3/">Non-Fungible Tokens</a>) stored on IPFS, and accessed trustlessly through Ethereum. It is a place where everyday creators can easily monetize their content, and have it be used by everyone, regardless of platform, since content is not being stored in walled gardens but in decentralised ecosystems with open standards.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2ZeZo9C6EniEeJ89QF4DjE/e4b8513f15f77389c63e5f8f2937931f/image3.png" />
            
            </figure>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7iwQLeHJRogEM9WvWyu3An/8a40b9d4763bfd5c320fc7a748d7d540/image6.png" />
            
            </figure><p>This shifts the way users and content creators think about the Internet. Questions like: “Do you actually need a Model View Controller system with a server to build an application?” “What is the best way to provide consistent naming of web resources across platforms?” “Do we actually need to keep our data locked behind another company's systems or can the end-user own their data?”. This builds different trust assumptions. Instead of trusting a single company because they are the only one to have your users' data, trust is being built leveraging a source verifiable by all participants. This can be people you physically interact with for <a href="https://support.signal.org/hc/en-us/articles/360007060632-What-is-a-safety-number-and-why-do-I-see-that-it-changed-#safety_number_view">messaging applications</a>, X.509 certificates logged in a <a href="https://certificate.transparency.dev/">public Certificate Transparency</a> Log for websites, or public keys that interact with blockchains for distributed applications.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6r8YeF8xw3ABv5x3bFzlbE/9363c73ddd6882d1866a47d889023af8/image10-1.png" />
            
            </figure><p>It’s an exciting time. Unlike the emergence of the Internet however, there are large established companies that want to control the shape and direction of Web3 and this Metaverse. We believe in a future of a <a href="/what-is-web3/">decentralised and private web</a>. An open, standards-based web independent of any one company or centralizing force. We believe that we can be one of the many technical platforms that supports Web3 and the growing Metaverse ecosystem. It’s why we are so excited to be announcing the private beta of our Ethereum and IPFS gateways. Technologies that are at the forefront of Web3 and its emerging Metaverse.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7hWZ5XkA9Y9v3ZxT7YXbRw/8839a7c61076625531ef2c8c48bad198/image4-1.png" />
            
            </figure><p>Time and time again over the last year, our customers have asked us to support their exploration of Web3, and oftentimes their core product offering. At Cloudflare, we are committed to helping build a better Internet for everyone, regardless of their preferred tech stack. We want to be the pickaxes and shovels for everyone. We believe that Web3 and the Metaverse are not just an experiment, but an entirely new networking paradigm in which many of the next multi-billion dollar businesses are going to be built. We believe that the first complete metaverse could be built entirely on Cloudflare today using systems like Ethereum, IPFS, RTC, <a href="https://www.cloudflare.com/developer-platform/r2/">R2 storage</a>, and Workers. Maybe you will be the one to build it...</p><p>We are excited to be on this journey with our Web3 community members, and can’t wait to show you what else we have been working on.</p>
    <div>
      <h3>Introducing the Cloudflare Web3 Gateways!</h3>
      <a href="#introducing-the-cloudflare-web3-gateways">
        
      </a>
    </div>
    <p>A gateway is a computer that sits between clients (such as your browser or mobile device) and a number of other systems, and helps translate traffic from one protocol to another so that the systems required to handle a request can do so properly. But there are different types of gateways in use today.</p><p>You have probably heard of an <a href="https://www.cloudflare.com/learning/security/api/what-is-an-api-gateway/">API gateway</a>, which is responsible for accepting inbound API calls to an application, aggregating the appropriate services to fulfill those requests, and returning a proper response to the end user. You use gateways every time you watch Netflix! Netflix leverages an API gateway to ensure the hundreds of different devices that access its streaming service receive a successful, proper response, allowing end users to watch their shows. Gateways are a critical component of how Web3 is being enabled for every end user on the planet.</p><p>Remember that Web3, or the distributed web, is a set of technologies that enables hosting of content and web applications in a serverless manner by leveraging purely distributed systems and consensus protocols. Gateways let you use these applications in your browser without having to install plugins or run separate pieces of software called nodes. The distributed web community runs into the same problem: it needs a stable, reliable, and resilient method to translate HTTP requests into the correct Web3 functions or protocols.</p><p>Today, we are introducing the Cloudflare Ethereum and IPFS Gateways to help Web3 developers do what they do best, develop applications, without having to worry about also running the infrastructure required to support Ethereum (Eth) or IPFS nodes.</p>
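As a concrete sketch of the translation an IPFS gateway performs, the snippet below builds the path-style HTTPS URL a browser would fetch for a given content identifier (CID). The CID is a made-up placeholder, and the gateway hostname is the one linked later in this post; this is an illustration, not product documentation.

```python
# Sketch of the HTTP-to-IPFS translation a gateway exposes: content
# addressed by a CID becomes an ordinary HTTPS URL a browser can fetch.
# "QmExampleCid123" is a hypothetical placeholder, not a real CID.

def ipfs_gateway_url(cid: str, path: str = "",
                     gateway: str = "https://cloudflare-ipfs.com") -> str:
    """Build the path-style gateway URL for an IPFS content identifier."""
    url = f"{gateway}/ipfs/{cid}"
    if path:
        url += "/" + path.lstrip("/")
    return url

url = ipfs_gateway_url("QmExampleCid123", "index.html")
```

A request to that URL lets the gateway resolve the CID over IPFS, cache the result near the user, and serve it back over plain HTTPS.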
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4jEKbTRVOn95CzcJoLEE5E/f8f4c167512b17069711ce74e0bedded/image5-1.png" />
            
            </figure>
    <div>
      <h3>What’s the problem with existing Eth or IPFS Web Gateways?</h3>
      <a href="#whats-the-problem-with-existing-eth-or-ipfs-web-gateways">
        
      </a>
    </div>
    <p>Traditional web technologies such as HTTP have had decades to develop standards and best practices that make sites fast, secure, and available. These haven’t been developed on the distributed web side of the Internet, which focuses more on redundancy. We identified an opportunity to bring the optimizations and infrastructure of the web to the distributed web by building a gateway — a service that translates HTTP API calls to IPFS or Ethereum functions, while adding Cloudflare value-added services on the HTTP side. The ability for a customer to operate their entire network control layer through a single pane of glass using Cloudflare is huge. You can manage the DNS, Firewall, Load Balancing, Rate Limiting, Tunnels, and more for your marketing site, your distributed application (Dapp), and corporate security, all from one location.</p><p>For many of our customers, the existing Web3 gateway solutions do not have a large enough network to handle the growing number of requests within the Ethereum and IPFS networks, and, more importantly, do not have the degree of resilience and redundancy that businesses expect and require when operating at scale. The idea of the distributed web is to do just that… stay distributed, so no single actor can control the overall market. Speed, security, and reliability are at the heart of what we do. We are excited to be part of the growing Web3 infrastructure community so that we can help Dapp developers have more choice, scalability, and reliability from their infrastructure providers.</p><p>A clear example of this is when existing gateways have an outage. With too few gateways to handle the traffic, unprocessed transactions fall behind the blockchain they are accessing, increasing transaction latency and potentially causing transactions to fail. Worse, when Dapp developers use IPFS to power their front end, an outage can bring their entire application down. Overall, this leads to massive frustration for businesses and end users alike — revenue cannot be collected for products or services, putting a portion of the business on hold and breaking trust with end users who depend on the reliability of these services to manage their Web3 assets.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1JgXilj9lnei2QHAcsRFtx/21d90693861fa2b02a7e1bad7e86e5db/image7.png" />
            
            </figure>
    <div>
      <h3>How is Cloudflare solving this problem?</h3>
      <a href="#how-is-cloudflare-solving-this-problem">
        
      </a>
    </div>
    <p>We found a unique opportunity in a segment of the Web3 community that closely mirrors Cloudflare’s traditional customer base: the distributed web. This segment has major usability issues that Cloudflare can help solve around reliability, performance, and caching. Cloudflare has an advantage that no other company in this space — and very few in the industry — has: a global network. For instance, content fetched through our <a href="https://cloudflare-ipfs.com/">IPFS Gateway</a> can be cached near users, bringing download latency down to milliseconds. Compare this with up to seconds per asset using native IPFS. This speed enables services based on IPFS to go hybrid: content can be served over the source decentralised protocols while browsers and tools mature to access them, and served to regular web users through a gateway like Cloudflare. We provide a convenient, fast, and secure option to browse this distributed content.</p><p>On Ethereum, users fall into two categories: application developers that operate smart contracts, and users that want to interact with those contracts. While smart contracts operate autonomously based on their code, users have to fetch data and send transactions. As part of the chain, smart contracts do not have to worry about the network or a user interface being online. This is why decentralised exchanges have been able to operate continuously across multiple interfaces without disruption. Users, on the other hand, do need to know the state of the chain and be able to interact with it. Application developers can therefore either require users to run an Ethereum node, or point them to remote nodes through a <a href="https://ethereum.org/en/developers/docs/apis/json-rpc/">standardised JSON RPC API</a>. This is where Cloudflare comes in. The Cloudflare Ethereum Gateway relies on Ethereum nodes and provides a secure and fast interface to the Ethereum network. It allows application developers to leverage Ethereum in front-facing applications. The gateway can interact with any content that is part of the Ethereum chain, including NFT contracts, DeFi exchanges, and name services like ENS.</p>
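To make the JSON RPC interaction concrete, here is a minimal sketch of the request body an application would POST to an Ethereum gateway to read the latest block number. The endpoint constant is illustrative; `eth_blockNumber` is one of the standardised JSON-RPC methods linked above.

```python
import json

# Illustrative gateway endpoint; any node or gateway speaking the
# standard Ethereum JSON-RPC API accepts the same request body.
GATEWAY_URL = "https://cloudflare-eth.com"

def rpc_request(method: str, params=None, request_id: int = 1) -> str:
    """Serialize a JSON-RPC 2.0 call suitable for POSTing to the gateway."""
    return json.dumps({
        "jsonrpc": "2.0",
        "method": method,
        "params": params or [],
        "id": request_id,
    })

# Ask for the latest block number; the gateway forwards the call to an
# Ethereum node and returns the node's answer as JSON.
body = rpc_request("eth_blockNumber")
```

The same helper works for any read call (balances, contract state, ENS lookups) by swapping the method name and parameters.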
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4VtJXJP7vwn9gAM0e25eod/ee26e0bc56ff0d7b98113557245ebf16/image2.png" />
            
            </figure>
    <div>
      <h3>How are the gateways doing so far?</h3>
      <a href="#how-are-the-gateways-doing-so-far">
        
      </a>
    </div>
    <p>Since our alpha release to very early customers as a research experiment, we’ve seen a staggering number of customers wanting to leverage the new gateway technology and benefit from the availability, resiliency, and caching benefits of Cloudflare’s network.</p><p>Our current alpha includes companies that have raised billions of dollars in venture capital, companies that power the decentralised finance ecosystem on Ethereum, and emerging metaverses that make use of NFT technology.</p><p>In fact, over 2,000 customers leverage our IPFS Gateway, accounting for over 275 TB of traffic per month. For Ethereum, over 200 customers transfer more than 13 TB of data across 1.6 billion requests per month. We’ve seen extremely stable results from these customers and fully expect these metrics to continue to ramp up as we onboard more customers to this new product.</p><p>We are now very happy to announce the opening of our private beta for both the Ethereum and IPFS gateways. <a href="https://docs.google.com/forms/d/11_oXpvGGVtP0DJenWBzLfxE4cyCjHHbqrbIibLAz2wQ/edit">Sign up to participate in the private beta</a> and our team will reach out shortly to ensure you are set up!</p><p>P.S. We are hiring for Web3! If you want to come work on it with us, check out our <a href="https://boards.greenhouse.io/cloudflare/jobs/3352190?gh_jid=3352190">careers page</a>.</p> ]]></content:encoded>
            <category><![CDATA[Birthday Week]]></category>
            <category><![CDATA[Web3]]></category>
            <category><![CDATA[Distributed Web]]></category>
            <category><![CDATA[IPFS]]></category>
            <category><![CDATA[Ethereum]]></category>
            <category><![CDATA[Research]]></category>
            <guid isPermaLink="false">3JkUkPfA7HavDc4YUSBMaw</guid>
            <dc:creator>Wesley Evans</dc:creator>
            <dc:creator>Brian Batraski</dc:creator>
        </item>
        <item>
            <title><![CDATA[Cloudflare and COVID-19: Project Fair Shot Update]]></title>
            <link>https://blog.cloudflare.com/cloudflare-and-covid-19-project-fair-shot-update/</link>
            <pubDate>Thu, 29 Jul 2021 13:00:43 GMT</pubDate>
            <description><![CDATA[ Cloudflare Waiting Room is helping organizations around the world fight COVID-19 and enable fast, equitable vaccinations. ]]></description>
            <content:encoded><![CDATA[ 
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/oMc5oJGNvoM3zY9UMJf7X/7ff04e8e12c8d19b33a7cd67d12b459b/image1-40.png" />
            
            </figure><p>In February 2021, Cloudflare launched <a href="/project-fair-shot/">Project Fair Shot</a> — a program that made our Waiting Room product available free of charge to any government, municipality, public or private business, or anyone else responsible for the scheduling and/or dissemination of the COVID-19 vaccine.</p><p>Putting our <a href="/cloudflare-waiting-room/">Waiting Room</a> technology in front of a vaccine scheduling application ensured that:</p><ul><li><p>Applications would remain available, reliable, and resilient against massive spikes of traffic from users attempting to schedule their vaccine appointments.</p></li><li><p>Visitors could wait for their long-awaited vaccine with confidence, arriving at a branded queuing page that provided accurate, estimated wait times.</p></li><li><p>Vaccines would get distributed equitably, and not just to folks with faster reflexes or Internet connections.</p></li></ul><p>Since February, we’ve seen a good number of participants in Project Fair Shot. To date, we have helped more than 100 customers across more than 10 countries schedule approximately 100 million vaccinations. Even better, these vaccinations went smoothly, with customers like the County of San Luis Obispo regularly handling more than 20,000 appointments in a day. “The bottom line is Cloudflare saved lives today. Our County will forever be grateful for your participation in getting the vaccine to those that need it most in an elegant, efficient and ethical manner” — Web Services Administrator for the <a href="https://www.cloudflare.com/case-studies/county-of-san-luis-obispo/">County of San Luis Obispo</a>.</p><p>We are happy to have helped not just in the US, but worldwide as well. In Canada, we partnered with a number of organizations and the Canadian government to increase access to the vaccine. One partner stated: “Our relationship with Cloudflare went from ‘Let's try Waiting Room’ to ‘Unless you have this, we're not going live with that public-facing site.'” — CEO of <a href="https://www.cloudflare.com/case-studies/verto/">Verto Health</a>. In another country in Europe, we saw over three million people go through the Waiting Room in less than 24 hours, leading to a significantly smoother and less stressful experience. Cities in Japan — working closely with our partner, <a href="https://classmethod.jp/news/202106-cloudflare-en/">Classmethod</a> — have been able to vaccinate over 40 million people and are on track to complete their vaccination process across 317 cities. If you want more stories from Project Fair Shot, check out <a href="https://www.cloudflare.com/case-studies/?product=Waiting+Room">our case studies</a>.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3JW3zznYeLCU0J8N1MDYcd/327e638ae0b710f938a93d8cd2207643/image2-28.png" />
            
            </figure><p>A European customer seeing very high amounts of traffic during a vaccination event</p><p>We are continuing to add more customers to Project Fair Shot every day to ensure we are doing all that we can to help distribute more vaccines. With the emergence of the Delta variant and others, vaccine distribution (and soon, booster shots) remains a very real challenge in keeping everyone healthy and resilient. Because of these new developments, Cloudflare will be extending Project Fair Shot until at least July 1, 2022. Though we are not excited to see the pandemic continue, we are humbled to be able to provide our services and be a critical part in helping us all collectively move towards a better tomorrow.</p> ]]></content:encoded>
            <category><![CDATA[Impact Week]]></category>
            <category><![CDATA[Waiting Room]]></category>
            <category><![CDATA[COVID-19]]></category>
            <category><![CDATA[Project Fair Shot]]></category>
            <category><![CDATA[Reliability]]></category>
            <guid isPermaLink="false">5HBc7sJzo5x35fCSj9DBji</guid>
            <dc:creator>Brian Batraski</dc:creator>
        </item>
        <item>
            <title><![CDATA[Rich, complex rules for advanced load balancing]]></title>
            <link>https://blog.cloudflare.com/rich-complex-rules-for-advanced-load-balancing/</link>
            <pubDate>Fri, 16 Jul 2021 13:00:22 GMT</pubDate>
            <description><![CDATA[ Take control of your traffic by adding custom logic to your origin selection and traffic steering decisions! ]]></description>
            <content:encoded><![CDATA[ 
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4kRo39XsUp3WJjldKkZhhZ/fb1970ec6417d47dbed3affadfc1c9b2/Cloudflarerd-updates-2.png" />
            
            </figure><p>Load balancing is functionality that’s been around for the last 30 years, helping businesses leverage their existing infrastructure resources. It works by proactively steering traffic away from unhealthy origin servers and — for more advanced solutions — intelligently distributing traffic load based on different steering <a href="https://www.cloudflare.com/learning/performance/types-of-load-balancing-algorithms/">algorithms</a>. This process ensures that errors aren’t served to end users and empowers businesses to tightly couple overall business objectives to their traffic behavior.</p>
    <div>
      <h2>What’s important for load balancing today?</h2>
      <a href="#whats-important-for-load-balancing-today">
        
      </a>
    </div>
    <p>We are no longer in the age where setting up a fixed number of servers in a data center is enough to meet the massive growth of users browsing the Internet. We are well past the time when a one-size-fits-all solution could meet the needs of different businesses. Today, customers look for load balancers that are easy to use, propagate changes quickly, and — especially now — provide the most feature flexibility. Feature flexibility has become so important because different businesses have different paths to success and, consequently, different challenges! Let’s go through a few common use cases:</p><ul><li><p>You might have an application split into microservices, where specific origins support segments of your application. You need to route your traffic based on specific paths to ensure no single origin can be overwhelmed and users get sent to the correct server to answer the originating request.</p></li><li><p>You may want to route traffic based on a specific value within a request header, such as “PS5”, and send those requests to the data center matching that header.</p></li><li><p>If you heavily prioritize security and privacy, you may adopt a split-horizon DNS setup within your network architecture. You might choose this architecture to separate internal network requests from requests arriving from the rest of the public Internet. Then, you could route each type of request to pools specifically suited to handle the amount and type of traffic.</p></li></ul><p>As we continue to build new features and products, we also wanted a framework that would let us add new capabilities to our Load Balancing solution faster, while still taking the time to create first-class features. The result was the creation of our custom rule builder!</p><p>Now you can build complex, custom rules to direct traffic using Cloudflare Load Balancing, empowering customers to create their own custom logic around their traffic steering and origin selection decisions. As we mentioned, there is no one-size-fits-all solution in today's world. We provide the tools to easily and quickly create rules that meet the exact requirements of any customer's unique situation and architecture. On top of that, we also support ‘and’ and ‘or’ statements within a rule, allowing very powerful and complex rules to be created for any situation!</p><p>Load balancing by path becomes easy, requiring just a few minutes to enter the paths and some boolean statements to create complex rules. Steering by a specific header, query string, or cookie is no longer a pain point. Leverage a split-horizon DNS design by creating a rule that looks at the IP source address and then routes to the appropriate pool based on the value. This is just a small subset of the very robust capabilities that load balancing custom rules make available to our users, and this is just the start! Not only do we have a large amount of functionality right out of the box, but we’re also providing a consistent, intuitive experience by building on our Firewall Rules Engine.</p><p>Let’s go through some use cases to explore how custom rules can open new possibilities by giving you more granular control of your traffic.</p>
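As a sketch of what such conditions can look like, here are two illustrative expressions in the Wireshark-like syntax of our rules language (the field names follow the Firewall Rules expression language; the specific values are examples, not product documentation):

```txt
# Steer POST requests for the API segment of an application
http.request.uri.path contains "/api/" and http.request.method eq "POST"

# Match requests whose query string mentions a specific product
http.request.uri.query contains "PS5"
```

Each expression forms the condition of one rule; the rule's overrides then pick the pool or steering policy to apply when the condition matches.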
    <div>
      <h2>High-volume transactions for ecommerce</h2>
      <a href="#high-volume-transactions-for-ecommerce">
        
      </a>
    </div>
    <p>For any high-volume transaction business such as an ecommerce or retail store, ensuring that transactions go through as fast and reliably as possible is a table-stakes requirement. As transaction volume increases, no single origin can handle the incoming traffic, and it doesn’t always make sense for it to do so. Why have a transaction request travel around the world to a specifically nominated origin for payment processing? That setup would only add latency, leading to degraded performance, increased errors, and a poor customer experience. But what if you could create custom logic to segment transactions to different origin servers based on a specific value in a query string, such as ‘PS5’ (associated with Sony’s popular PlayStation 5)? What if you could then couple that value with dynamic latency steering to ensure your load balancer always chooses the most performant path to the origin? This would be game-changing, not only ensuring that these table-stakes transactions are reliable and fast, but also drastically improving the customer experience. You could do this in minutes with load balancing custom rules:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1svxayy53RBp3u5XDKRh0b/3608c2ec914e40e1ab1cb6a9f42ab641/image5-5.png" />
            
            </figure><p>For any requests where the query string shows ‘PS5’, then route based on which pool is the most performant.</p>
    <div>
      <h2>Load balance across multiple DNS vendors to support privacy and security</h2>
      <a href="#load-balance-across-multiple-dns-vendors-to-support-privacy-and-security">
        
      </a>
    </div>
    <p>Some customers may want to use multiple DNS providers to bolster their resiliency along with the security and privacy of the different types of traffic going through their network. By utilizing two DNS providers, customers can not only be sure that they remain highly available in times of outages, but also direct different types of traffic appropriately, whether internal network traffic between offices or unknown traffic from the public Internet.</p><p>Without flexibility, however, it can be difficult to easily and intelligently route traffic to the proper data centers while maintaining that security and privacy posture. Not anymore! With load balancing custom rules, supporting a split-horizon DNS architecture takes as little as five minutes: set up a rule based on the IP source condition, then override which pools or data centers that traffic should route to.</p>
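A split-horizon condition of the kind described above can be sketched as follows (an illustrative fragment, not product documentation; the set contains the standard RFC 1918 private ranges):

```txt
# Internal traffic: route RFC 1918 source addresses to an internal pool
ip.src in {10.0.0.0/8 172.16.0.0/12 192.168.0.0/16}

# Everything else falls through to the default public pools
```

A rule with this condition would override the pool selection for internal clients, while requests from the public Internet continue to use the load balancer's default configuration.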
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2ElmM0Q5zkKM0KjXWqLr4K/704a2a3569bb7690c39a997ab60873dc/image2-4.png" />
            
            </figure><p>This can also be extremely helpful if your data centers are spread across multiple areas of the globe that don’t align with the 13 current regions within Cloudflare. By segmenting where traffic goes based on the IP source address, you can create a type of geo-steering setup that is also finely tuned to the requirements of the business!</p>
    <div>
      <h2>How did we build it?</h2>
      <a href="#how-did-we-build-it">
        
      </a>
    </div>
    <p>We built Load Balancing rules on top of our <a href="https://github.com/cloudflare/wirefilter">open-source wirefilter execution engine</a>. People familiar with Firewall Rules and other products will notice similar syntax, since both products are built on top of this execution engine.</p><p>By reusing the same underlying engine, we can take advantage of a battle-tested production library used by other products with performance and stability requirements of their own. For those experienced with our rule-based products, the shared syntax for defining conditional statements means you can reuse your knowledge. For new users, the Wireshark-like syntax is often familiar and relatively simple.</p>
    <div>
      <h2>DNS vs Proxied?</h2>
      <a href="#dns-vs-proxied">
        
      </a>
    </div>
    <p>Our Load Balancer supports both DNS and Proxied load balancing. These two protocols operate very differently and as such are handled differently.</p><p>For <a href="https://www.cloudflare.com/learning/performance/what-is-dns-load-balancing/">DNS-based load balancing</a>, our load balancer responds to DNS queries sent from recursive resolvers. These resolvers are normally not the end user directly requesting the traffic, nor is there a 1-to-1 ratio between DNS queries and end-user requests. DNS makes extensive use of caching at all levels, so the result of each query could potentially be used by thousands of users. Combined, this greatly limits the possible feature set for DNS: since we don’t see the end user directly, nor know whether a response will be used by one user or many, responses can only be customized to a limited degree.</p><p>Our Proxied load balancing, on the other hand, processes rules logic for every request going through the system. Since we act as a proxy for all these requests, we can invoke this logic for every request and access user-specific data.</p><p>These different modes mean the fields available to each end up being quite different. The DNS load balancer gets access to DNS-specific fields such as “dns.qry.name” (the query name), while our Proxied load balancer has access to “http.request.method” (the HTTP method used to access the proxied resource). Some more general fields — like the name of the load balancer being used — are available in both modes.</p>
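Side by side, the two modes can be sketched with the fields quoted above (illustrative values):

```txt
# DNS-mode condition: match on the query name
dns.qry.name eq "internal.example.com"

# Proxied-mode condition: match on the HTTP method
http.request.method eq "POST"
```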
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/NUhRkCfvb73WY8MTkGvfi/ff01e7523568b2d62acc15f916e45b82/Screenshot-2021-07-14-at-12.20.53.png" />
            
            </figure>
    <div>
      <h2>How does it work under the hood?</h2>
      <a href="#how-does-it-work-under-the-hood">
        
      </a>
    </div>
    <p>When a load balancer rule is configured, the API call validates the rule's conditions and actions. It makes sure the condition only references known fields, isn’t excessively long, and is syntactically valid. The overrides are processed and applied to the load balancer's configuration to make sure they won’t cause an invalid configuration. After validation, the new rule is saved to our database.</p><p>With the new rule saved, we take the load balancer’s data and all rules used by it and package that data together into one configuration to be shipped out to our edge. This process happens very quickly, so any changes are visible to you in just a few seconds.</p><p>While DNS and proxied load balancers have access to different fields and the protocols themselves are quite different, the two code paths overlap quite a bit. When either request type makes it to our load balancer, we first load the load balancer's configuration data from our edge datastore. This object contains all the “static” data for a load balancer, such as rules, origins, pools, steering policy, and so forth. We load dynamic data, such as origin health and RTT data, when evaluating each pool.</p><p>At the start of the load balancer processing, we run our rules. This ends up looking very much like a loop where we check each condition and — if the condition is true — apply the effects specified by the rule. After each condition is processed and the effects are applied, we then run our normal load balancing logic as if you had configured the load balancer with the overridden settings. Applying each override in turn allows more than one rule to change a given setting during execution. This lets users avoid extremely long and specific conditionals, and instead use shorter conditionals and rule ordering to override specific settings, creating a more modular ruleset.</p>
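The loop described above can be sketched as follows. This is a minimal illustration of the override-in-turn behavior, not Cloudflare's implementation; the setting and field names are hypothetical.

```python
# Sketch: rules run in order, and each matching rule's overrides are
# applied on top of the load balancer's static settings, so later rules
# can override values set by earlier ones.

def apply_rules(settings: dict, rules: list, request: dict) -> dict:
    effective = dict(settings)  # start from the static configuration
    for rule in rules:
        if rule["condition"](request):           # evaluate the conditional
            effective.update(rule["overrides"])  # apply its overrides
    return effective

rules = [
    {"condition": lambda r: r["path"].startswith("/api/"),
     "overrides": {"default_pool": "api-pool"}},
    {"condition": lambda r: r["method"] == "POST",
     "overrides": {"steering_policy": "latency"}},
]

result = apply_rules(
    {"default_pool": "main-pool", "steering_policy": "random"},
    rules,
    {"path": "/api/checkout", "method": "POST"},
)
# Both rules matched, so both the pool and the steering policy were overridden.
```

Because the effective settings accumulate across rules, short, composable rules plus careful ordering replace one giant conditional.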
    <div>
      <h2>What’s coming next?</h2>
      <a href="#whats-coming-next">
        
      </a>
    </div>
    <p>For you, the next steps are simple. Start building custom load balancing rules! For more guidance, check out our <a href="https://developers.cloudflare.com/load-balancing/understand-basics/load-balancing-rules">developer documentation</a>.</p><p>For us, we’re looking to expand this functionality. As this new feature develops, we are going to be identifying new fields for conditionals and new options for overrides to allow more specific behavior. As an example, we’ve been looking into exposing a means to creating more time-based conditionals, so users can create rules that only apply during certain times of the day or month. Stay tuned to the blog for more!</p> ]]></content:encoded>
            <category><![CDATA[Load Balancing]]></category>
            <category><![CDATA[Speed & Reliability]]></category>
            <guid isPermaLink="false">5Huo0GwA3T7Fm1oAfILLqs</guid>
            <dc:creator>Brian Batraski</dc:creator>
            <dc:creator>Tim Polich</dc:creator>
        </item>
        <item>
            <title><![CDATA[Per Origin Host Header Override]]></title>
            <link>https://blog.cloudflare.com/per-origin-host-header-override/</link>
            <pubDate>Fri, 09 Apr 2021 13:00:00 GMT</pubDate>
            <description><![CDATA[ Load Balancing as a concept is pretty straightforward. Take an existing infrastructure and route requests to the available origin servers so no single server is overwhelmed. Add in some health monitoring to ensure each server has a heartbeat/pulse so proactive decisions can be made.  ]]></description>
            <content:encoded><![CDATA[ <p>Load Balancing as a concept is pretty straightforward. Take an existing infrastructure and route requests to the available origin servers so no single server is overwhelmed. Add in some health monitoring to ensure each server has a heartbeat/pulse so proactive decisions can be made. With two steps, you get more effective utilization of your existing resources… simple enough!</p><p>As your application grows, however, load balancing becomes more complicated. An example of this — and the subject of this blog post — is how load balancing interacts with the Host header in an HTTP request.</p>
    <div>
      <h3>Host headers and load balancing</h3>
      <a href="#host-headers-and-load-balancing">
        
      </a>
    </div>
    <p>Every request to a website contains a unique piece of identifying information called the Host header. The Host header helps route each request to the correct origin server so the end user is sent the information they requested from the start.</p><p>For example, say that you enter <code>example.com</code> into the URL bar of your browser. You are asking <code>example.com</code> to send you back the homepage located <i>within</i> that application. To make sure you actually get resources from <code>example.com</code>, your browser includes a Host header of <code>example.com</code>. When that request reaches the back-end infrastructure, it finds the origin server that also has the host <code>example.com</code> and then sends back the required information.</p><p>Host headers are not only important for locating resources, but also for security. Imagine how jarring it would be if someone routed your request to <code>scary-example.com</code> and it returned malicious resources instead! If the Host header in the request does not match the Host header at the destination, then the request will fail.</p><p>This behavior can cause issues when you start using third-party platforms like Google Compute Cloud, Google App Engine, Heroku, and Amazon S3. These platforms are great for deploying and scaling applications. However, they have an important effect on the requirements for a Host header. Your server is no longer located at <code>example.com</code>. Instead, it’s probably at something like <code>example.herokuapp.com</code>. If you can’t change the Host header sent to your origin, users might be sending their requests to the wrong place, leading to error messages and a poor user experience.</p><p>The combination of Host headers and third-party platforms caused issues with our Load Balancing solution. The Host header of the request would not match the updated Host header for the new application now added into the request pipeline, meaning the request would fail. Larger customers were forced to choose between using third-party platforms and load balancing… not a decision we wanted to force our customers to make!</p><p>We saw this as a big problem and wanted to help our customers leverage different solutions in the market to ensure they are successful and can align their business objectives with their infrastructure.</p>
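<p>To make the routing step concrete, here is a minimal sketch of Host-header-based origin lookup. The table and IP addresses are hypothetical; real servers and proxies perform this lookup inside their HTTP stack.</p>

```python
# Hypothetical mapping of Host header values to origin addresses.
ORIGINS = {
    "example.com": "203.0.113.10",
    "example.herokuapp.com": "203.0.113.20",
}

def route_request(host_header):
    """Return the origin for a request, failing if the Host is unknown."""
    origin = ORIGINS.get(host_header.lower())
    if origin is None:
        # Mirrors the behavior described above: an unmatched Host fails.
        raise LookupError("no origin configured for Host: " + host_header)
    return origin
```

<p>A request carrying <code>Host: example.com</code> is routed to its origin, while a request for an unknown host fails rather than being served the wrong site.</p>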
    <div>
      <h3>Introducing Origin Server Host Header Overrides</h3>
      <a href="#introducing-origin-server-host-header-overrides">
        
      </a>
    </div>
    <p>To address this problem, we’ve added support to override the Host header on a per-origin basis within our Load Balancing solution. This means that you don’t have to worry about sending errors back to your end-users and can seamlessly integrate apps — such as Heroku — into your reliability solutions.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7lHzhndIL0cghD1eIW5bp0/341af6fc0de25da3311ca71b99c7c806/image3-3.png" />
            
            </figure><p>We wanted to create a scalable solution, one that allowed you to perform these overrides in an easy and intuitive way. As we reviewed the problem, we saw that there was no one-size-fits-all solution. Different customers are going to have their apps architected differently and have different goals based on their business, industry, geography, etc. Thus, it was important to provide flexibility to override the Host header on a per-origin basis, since different origins can support different segments of an application or entirely different applications! With a simple click, users can now update the Host header on their origins. When a request hits the Load Balancer, it uses the overridden value for the origin host instead of the default origin Host header value, and requests are routed properly without errors.</p><p>“<i>But what about my health monitors!?</i>” you may be thinking. When you add a Host header override on your origin, we will automatically apply it to the origin’s health monitor. No extra configuration is required. On top of that, we felt it was important to provide as much meaningful information as possible so you could make informed decisions around any configuration updates. When editing a health monitor, you will see a new table if any origins have a Host header override.</p>
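<p>If you manage your Load Balancer through the API rather than the dashboard, the override lives on each pool origin. The sketch below builds such an origin entry in Python; the field names follow the general shape of the Load Balancer pool API at the time of writing, but treat them as illustrative and consult the current API documentation.</p>

```python
def origin_with_host_override(name, address, host):
    """Build a pool origin entry with a per-origin Host header override.

    Field names are illustrative of the Load Balancer pool API shape,
    not a guaranteed schema.
    """
    return {
        "name": name,
        "address": address,
        "enabled": True,
        # The override: requests to this origin carry this Host header.
        "header": {"Host": [host]},
    }
```

<p>For a Heroku-hosted origin, you would pass <code>example.herokuapp.com</code> as both the address and the Host override.</p>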
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7xmbMjXqdFhPrxqTNQXnsJ/7c8231a41bdc43895aec5fbed1aa34b9/image2-2.png" />
            
            </figure>
    <div>
      <h3>How else can overriding the Host header help me?</h3>
      <a href="#how-else-can-overriding-the-host-header-help-me">
        
      </a>
    </div>
    <p>You might find the Host header override useful if you have a shared web server. In these situations, IP addresses are not uniquely assigned to each server, meaning you might need a Host header override to direct your request to the right domain. For example, you might have <code>example-grocery.com</code>, <code>example-furniture.com</code>, and <code>example-perfume.com</code> all on a shared web server hosted on the same IP address. When a request intended for <code>example-furniture.com</code> is forwarded to the server, the Host header override — <code>Host: example-furniture.com</code> — specifies which host to connect to.</p><p>Setting a Host header also means you enforce a strict HTTPS/TLS connection to reach the origin. We validate the Server Name Indication (SNI) to verify that the request will be forwarded to the appropriate website.</p>
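<p>The SNI check is conceptually simple: the name presented during the TLS handshake should agree with the HTTP Host header. A toy comparison (ignoring wildcard certificates and other real-world subtleties) might look like:</p>

```python
def sni_matches_host(sni, host):
    """Toy check that the TLS SNI names the same site as the Host header.

    Ignores wildcard matching and other real-world subtleties.
    """
    def normalize(name):
        # DNS names are case-insensitive; a trailing dot is equivalent.
        return name.rstrip(".").lower()
    return normalize(sni) == normalize(host)
```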
    <div>
      <h3>How does it work under the hood?</h3>
      <a href="#how-does-it-work-under-the-hood">
        
      </a>
    </div>
    <p>To understand how it works, first let’s see what solutions we have currently. To configure a Load Balancer with a third-party cloud platform that requires a Host header, you would need to set a Host header override with Page Rules (Enterprise customers only). This solution works great if your back-end origins — say, origin1 and origin2 — expect the same Host header. But when the origins expect different Host headers, it wasn’t possible to set one per origin. There was a need for more granular flexibility, hence the per-origin Host header override.</p><p>Now, when you navigate to the Traffic tab and set up a Load Balancer, you can add a Host header override on your back-end origin. When you add the Host header, we do safety checks — which we’ll get to in a bit — and if it passes all the checks then your configuration gets saved. If you set a Host header override on an origin, it will take precedence over a Host header override set on the monitor.</p>
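<p>Those safety checks boil down to a handful of rules on the override value itself (the full list appears later in this post). A rough, illustrative sketch in Python, not Cloudflare's actual validation code, might look like:</p>

```python
import ipaddress
import re

# Loose FQDN pattern for illustration only.
FQDN = re.compile(r"^([a-z0-9-]+\.)+[a-z]{2,}$", re.IGNORECASE)

def validate_host_override(value, account_zones):
    """Illustrative checks: no whitespace, no port, FQDN or IP only,
    and the FQDN must fall under a zone on the account."""
    if any(ch.isspace() for ch in value):
        raise ValueError("wrapped or indented Host values are not allowed")
    try:
        ipaddress.ip_address(value)  # a bare IP address is acceptable
        return
    except ValueError:
        pass
    if ":" in value:
        raise ValueError("ports cannot be added to the Host header")
    if not FQDN.match(value):
        raise ValueError("value must be an FQDN or an IP address")
    if not any(value == z or value.endswith("." + z) for z in account_zones):
        raise ValueError("FQDN must belong to a zone on the account")
```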
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/pGJS2FVLjiyF6otz5yHiB/7b23bcaa6e45a1f68674dbe65d7f8bd8/image1-4.png" />
            
            </figure><p>Cloudflare has over 200 edge locations around the world, making it one of the largest Anycast networks. We make sure that your Load Balancer config information is available at all our Cloudflare edge locations. For example, you may have your website hosted on a third-party cloud provider like Heroku. You would set up a Load Balancer with pools and associated origin <code>example.com</code>. To reach <code>example.com</code> hosted on Heroku, you would set the Heroku URL <code>example.herokuapp.com</code> as the origin Host header: “Host: example.herokuapp.com”. When a request hits a Cloudflare edge, we would first check the load balancing config and the health monitor for the origin’s health status. Then, we would set the Host header override and Server Name Indication (SNI). For more about SNI, visit our <a href="https://www.cloudflare.com/learning/ssl/what-is-sni/">learning center</a>.</p><p>There are some restrictions that limit the domain set as the Host header override on an origin:</p><ol><li><p>The only HTTP header that can be overridden is “Host” — no other headers are allowed.</p></li><li><p>You can specify only one “Host” header per origin; duplicates and line-wrapped or indented values are not allowed.</p></li><li><p>No ports can be added to the Host header.</p></li><li><p>We allow fully qualified domain names (FQDNs) and IP addresses that can be resolved by public DNS.</p></li><li><p>The FQDN in the Host header must be a subdomain of a zone associated with the account; this also applies to partial zones and secondary zones.</p></li></ol><p>These requirements make sure you are directing requests to domains that belong to you. With future development, we may relax some restrictions.</p><p>If you want to understand more about Load Balancing on Cloudflare, visit our <a href="https://www.cloudflare.com/load-balancing/">product</a> page or look at our help articles, such as <a href="https://support.cloudflare.com/hc/en-us/articles/205893698-Configure-Cloudflare-and-Heroku-over-HTTPS">Configure Cloudflare &amp; Heroku</a>.</p> ]]></content:encoded>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[Load Balancing]]></category>
            <guid isPermaLink="false">5PrsuOAafzdbuI12gTvuSH</guid>
            <dc:creator>Brian Batraski</dc:creator>
            <dc:creator>Roopa Chandrashekar</dc:creator>
        </item>
        <item>
            <title><![CDATA[Cloudflare Waiting Room]]></title>
            <link>https://blog.cloudflare.com/cloudflare-waiting-room/</link>
            <pubDate>Fri, 22 Jan 2021 14:00:00 GMT</pubDate>
            <description><![CDATA[ Today, we are excited to announce Cloudflare Waiting Room! It will be first available to select customers through a new program called Project Fair Shot, with general availability in our Business and Enterprise plans in the near future.  ]]></description>
            <content:encoded><![CDATA[ 
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2LySWqjpMJS9ugWRRi5gPv/e2cd9d09980b05f5585ee65574e88b7f/image3-14.png" />
            
            </figure><p>Today, we are excited to announce Cloudflare Waiting Room! It will first be available to select customers through a new program called <a href="/project-fair-shot/">Project Fair Shot</a> which aims to help with the problem of overwhelming demand for COVID-19 vaccinations causing appointment registration websites to fail. General availability in our Business and Enterprise plans will be added in the near future.</p>
    <div>
      <h3>Wait, you’re excited about a… Waiting Room?</h3>
      <a href="#wait-youre-excited-about-a-waiting-room">
        
      </a>
    </div>
    <p>Most of us are familiar with the concept of a waiting room, and rarely are we excited about the idea of being in one. Usually our first experience of one is at a doctor’s office — yes, you have an appointment, but sometimes the doctor is running late (or one of the patients was). Given the doctor can only see one person at a time… the waiting room was born, as a mechanism to queue up patients.</p><p>While servers can handle more concurrent requests than a doctor can, they too can be overwhelmed. If, in a pre-COVID world, you’ve ever tried buying tickets to a popular concert or event, you’ve probably encountered a waiting room online. It limits requests inbound to an application, and places these requests into a virtual queue. Once the number of users in the application has decreased, new users are let in within the defined thresholds the application can handle. This protects the origin servers supporting the application from being inundated with too many requests, while also ensuring equity from a user perspective — users who try to access a resource when the system is overloaded are not unfairly dropped and forced to reconnect in hopes of getting a spot in the queue.</p>
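<p>The queueing mechanics described above (a capacity-limited application plus a first-in, first-out queue) can be modeled in a few lines. This is a toy, single-process sketch for intuition, not how Cloudflare's distributed implementation works:</p>

```python
class WaitingRoom:
    """Toy model: admit users up to a capacity, queue the rest."""

    def __init__(self, max_active):
        self.max_active = max_active
        self.active = 0
        self.queue = []  # FIFO of waiting users

    def arrive(self, user):
        if self.active < self.max_active:
            self.active += 1
            return "admitted"
        self.queue.append(user)
        return "queued"

    def leave(self):
        # A user finishes; admit the next waiting user, if any.
        self.active -= 1
        if self.queue:
            self.queue.pop(0)
            self.active += 1
```

<p>A user arriving while the application is at capacity is queued rather than dropped, and is admitted as soon as an active user leaves.</p>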
    <div>
      <h3>Why Now?</h3>
      <a href="#why-now">
        
      </a>
    </div>
    <p>Given not many of us are going to live concerts any time soon, why is Cloudflare doing this now?</p><p>Well, perhaps we aren’t going to concerts, but the second-order effects of COVID-19 have created a huge need for waiting rooms. First of all, given social distancing and the closing of many places of business and government, customers and citizens have shifted to online channels, putting substantially more strain on business and government infrastructure.</p><p>Second, the pandemic and the flow-on consequences of it have meant many folks around the world have come to rely on resources that they didn’t need twelve months earlier. To be specific, these are often health or government-related resources — for example, unemployment insurance websites. The online infrastructure was set up to handle a peak load that didn’t foresee the impact of COVID-19. We’re seeing a similar pattern emerge with websites that are related to vaccines.</p><p>Historically, the number of organizations that needed waiting rooms was quite small. The nature of most businesses online usually involves a more consistent user load, rather than huge crushes of people all at once. Those organizations were able to build custom waiting rooms that were integrated deeply into their applications (for example, buying tickets). With Cloudflare’s Waiting Room, no changes to the application are necessary, and a Waiting Room can be set up for any website in a matter of minutes without writing a single line of code.</p><p>Whether you are an engineering architect or a business operations analyst, setting up a Waiting Room is simple. We make it quick and easy to ensure your applications are reliable and protected from unexpected spikes in traffic. Other features we felt were important are automatic enablement and dynamic outflow. In other words, a waiting room should turn on automatically when thresholds are exceeded and, as users finish their tasks in the application, admit appropriately sized batches of users already waiting in the queue. It should just work. Lastly, we’ve seen the major impact COVID-19 has made on users and businesses alike, especially, but not limited to, the <a href="/project-fair-shot">health and government sectors</a>. We wanted to provide another way to ensure these applications remain available and functional so all users can receive the care that they need rather than errors in their browser.</p>
    <div>
      <h3>How does Cloudflare’s Waiting Room work?</h3>
      <a href="#how-does-cloudflares-waiting-room-work">
        
      </a>
    </div>
    <div></div><p>We built Waiting Room on top of our edge network and our Workers product. By leveraging Workers and our new <a href="/introducing-workers-durable-objects/">Durable Objects</a> offerings, we were able to remove the need for any customer coding and provide a seamless, out-of-the-box product that will ‘just work’. On top of this, we get the benefits of the scale and performance of our Workers product to ensure we maintain extremely low latency overhead, keep the estimated wait times presented to end users as accurate as can be, and avoid keeping any user in the queue longer than needed. But building a centralized system in a decentralized network is no easy task. When requests come into an application from around the world, we need to be able to get a broad, accurate view of what that load looks like inbound and outbound to a given application.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/58SzyjWqUnxbAHlfw2NQlN/8b297e6b82cb4b5a373fa6ef502766af/image7-4.png" />
            
            </figure><p>Request going through Cloudflare without a Waiting Room</p><p>These requests, as fast as they are, still take time to travel across the planet. And so, a unique edge case was presented. What if a website is getting reasonable traffic from North America and Europe, but then a sudden major spike of traffic takes place from South America — how do we know when to keep letting users into the application and when to kick in the Waiting Room to protect the origin servers from being overloaded?</p><p>Thanks to some clever engineering and our Workers product, we were able to create a system that almost immediately keeps itself synced with global demand to an application, giving us the necessary insight into when we should and should not be queueing users into the Waiting Room. By leveraging our global Anycast network and more than 200 data centers, we remove any single point of failure to protect our customers' infrastructure, yet also provide a great experience to end users who have to wait a small amount of time to enter the application under high load.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5A4dbEqx7QljSUT9sCrHDw/0927fd2fecf6d10cd9daf1615d2288f0/image.png" />
            
            </figure><p>Request going through Cloudflare with a Waiting Room</p>
    <div>
      <h3>How to setup a Waiting Room</h3>
      <a href="#how-to-setup-a-waiting-room">
        
      </a>
    </div>
    <p>Setting up a Waiting Room is incredibly easy and very fast! At the simplest end of the scale, a user needs to fill out only five fields: 1) the name of the Waiting Room, 2) a hostname (which will already be pre-populated with the zone it’s being configured on), 3) the total active users that can be in the application at any given time, 4) the new users per minute allowed into the application, and 5) the session duration for any given user. No coding or any application changes are necessary.</p>
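<p>Expressed as configuration, those five fields are all a basic setup needs. The values and field names below are illustrative, not the exact dashboard or API schema:</p>

```python
# The five fields from the walkthrough, as an illustrative config sketch.
waiting_room = {
    "name": "vaccine-registration",          # 1) name of the Waiting Room
    "hostname": "appointments.example.com",  # 2) hostname on the zone
    "total_active_users": 5000,              # 3) concurrent users allowed
    "new_users_per_minute": 1000,            # 4) admission rate
    "session_duration_minutes": 10,          # 5) per-user session length
}
```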
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7jvd256hBoik5Q1vOrHaY7/eb3d44f17ed464c10a4a87a7ff596468/image2-10.png" />
            
            </figure><p>We provide the option of using our default Waiting Room template for customers who don’t want to add additional branding. This simplifies the process of getting a Waiting Room up and running.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/47mGOhBbYguIY0mR2PNqA6/f6dbb1eb9e291c4dab9f3c7674f74598/image4-13.png" />
            
            </figure><p>That’s it! Press save and the Waiting Room is ready to go!</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3SaV701UPb0yPHAsZveNvh/a4502cdcd5a1d5795245cfd37a148968/image1-13.png" />
            
            </figure><p>For customers with more time and technical ability, the same process is followed, except we give full customization capabilities to our users so they can brand the Waiting Room, ensuring it matches the look and feel of their overall product.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/57kJYj33S42WCATXuhDWPm/cbdd07a98ea75dcb7c56b7643a03f971/image8-6.png" />
            
            </figure><p>Lastly, managing different Waiting Rooms is incredibly easy. With our Manage Waiting Room table, at a glance you are able to get a full snapshot of which rooms are actively queueing, not queueing, and/or disabled.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/59ZVBRBMdPw3S1ZNxCy8N/b60b0daa36e701ea50a2f3246e00270b/image5-6.png" />
            
            </figure><p>We are very excited to put the power of our <a href="https://www.cloudflare.com/waiting-room/">Waiting Room</a> into the hands of our customers to ensure they continue to focus on their businesses and customers. Keep an eye out for another blog post coming soon with major updates to our Waiting Room product for Enterprise!</p> ]]></content:encoded>
            <category><![CDATA[Project Fair Shot]]></category>
            <category><![CDATA[Load Balancing]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[Better Internet]]></category>
            <category><![CDATA[COVID-19]]></category>
            <category><![CDATA[Waiting Room]]></category>
            <guid isPermaLink="false">12yEIFZBDJjbYa9zqvEhbl</guid>
            <dc:creator>Brian Batraski</dc:creator>
        </item>
        <item>
            <title><![CDATA[Health Check Analytics and how you can use it]]></title>
            <link>https://blog.cloudflare.com/health-check-analytics-and-how-you-can-use-it/</link>
            <pubDate>Fri, 12 Jun 2020 11:00:00 GMT</pubDate>
            <description><![CDATA[ Health Check Analytics is now live and available to all Pro, Business and Enterprise customers!  We are very excited to help decrease your time to resolution and ensure your application reliability is maximised. ]]></description>
            <content:encoded><![CDATA[ <p>At the end of last year, we introduced <a href="/new-tools-to-monitor-your-server-and-avoid-downtime/">Standalone Health Checks</a> - a service that lets you monitor the health of your origin servers and avoid the need to purchase additional third-party services. The more that can be controlled from Cloudflare, the lower your maintenance costs, vendor management overhead, and infrastructure complexity. This is important as it ensures you are able to scale your infrastructure seamlessly as your company grows. Today, we are introducing Standalone Health Check Analytics to help decrease your time to resolution for any potential issues. You can find Health Check Analytics in the sub-menu under the Traffic tab in your Cloudflare Dashboard.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7f25fu7kADPkUPmpNrPqbV/efa9b95ac2310e7abd23ea9ee79be535/image6-2.png" />
            
            </figure><p>As a refresher, Standalone Health Checks is a service that monitors an IP address or hostname for your origin servers or application and notifies you in near real-time if there happens to be a problem. These Health Checks support fine-tuned configurations based on expected codes, interval, protocols, timeout and <a href="/new-tools-to-monitor-your-server-and-avoid-downtime/">more</a>. These configurations enable you to properly target your checks based on the unique setup of your infrastructure. An example Health Check can be seen below, monitoring an origin server in a staging environment with an email notification configured.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1rPMDAiXMEg7DUPWHSI33a/74089596eb49bfde3619b08fa3b72941/image7.png" />
            
            </figure><p>Once you set up a notification, you will be alerted when there is a change in the health of your origin server. In the example above, if your staging environment starts responding with anything other than a 200 OK response code, we will send you an email within seconds so you can take the necessary action before customers are impacted.</p>
    <div>
      <h3>Introducing Standalone Health Check Analytics</h3>
      <a href="#introducing-standalone-health-check-analytics">
        
      </a>
    </div>
    <p>Once you get the notification email, we provide tools that help to quickly debug the possible cause of the issue, with detailed logs as well as data visualizations enabling you to better understand the context around the issue. Let’s walk through a real-world scenario and see how Health Check Analytics helps decrease our time to resolution.</p><p>A notification email has been sent to you letting you know that Staging is unhealthy. You log into your dashboard and go into Health Check Analytics for this particular Health Check. In the screenshot below, you can see that Staging is up 76% of the time vs 100% of the time for Production. Now that the graph validates the email notification that there is indeed a problem, we need to dig in further. Below the graph you can see a breakdown of the types of errors that have taken place in both the Staging and Production addresses over the specified time period. We see there is only one error taking place, but very frequently, in the staging environment - a TCP Connection Failed error, leading to the lower availability.</p><p>This starts to narrow the funnel for what the issue could be. We know that there is something wrong with either the Staging server's ability to receive connections, maybe an issue during the TCP handshake, or possibly an issue with the router in the path — meaning the problem isn’t with the origin server at all, but a downstream consequence. With this information, you can quickly make the necessary checks to validate your hypothesis and minimize your time to resolution. Instead of having to dig through endless logs, or try to make educated guesses at where the issue could stem from, Health Check Analytics allows you to quickly home in on detailed areas that could be the root cause. This in turn maximizes your application reliability and, more importantly, keeps trust and brand expectation with your customers.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5u9fZspxov0t9Zv6MpDz07/f0e81032ac1c5cc5c7dbba46cb104195/image1-5.png" />
            
            </figure><p>Being able to quickly understand an overview of your infrastructure is important, but sometimes being able to dig deeper into each health check can be more valuable to understand what is happening at a granular level. In addition to general information like address, response code, <a href="https://www.cloudflare.com/learning/cdn/glossary/round-trip-time-rtt/">round trip time (RTT)</a> and failure reason, we are adding more features to help you understand the Health Check result(s). We have also added extra information into the event table so you can quickly understand a given problem. In the case of a Response Code Mismatch Error, we now provide the expected response code for a given Health Check along with the received code. This removes the need to go back and check a configuration that may have been set up long ago, and keeps the focus on the problem at hand.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1nItEajnHqqIPpyOC2rxBk/3485f38a0c9b7ac77fa827ea4a8fef49/image4.png" />
            
            </figure><p>The availability of different portions of your infrastructure is very important, but this does not provide the complete view. Performance is continuing to skyrocket in importance and value to customers. If an application is not performant, they will quickly go to a competitor without a second thought. Sometimes RTT is not enough to understand why requests have higher latency and where the root of the issue may reside. To better understand where time is spent for a given request, we are introducing the waterfall view of a request within the Event Log. With this view, you can understand the time taken for the TCP connection, the time taken for the TLS handshake, and the time to first byte (TTFB) for the request. The waterfall will give you a chronological idea about time spent in different stages of the request.</p><ol><li><p>Time taken for establishing the initial TCP connection. (in dark blue, approx 41ms)</p></li><li><p>Once the TCP connection is established, time is spent doing the TLS handshake. This is another component that takes up time for HTTPS websites. (in light blue, approx 80ms)</p></li><li><p>Once the handshake and connection are complete, the time taken for the first byte to be received is also exposed. (in dark orange, approx 222ms)</p></li><li><p>The total round trip time (RTT) is the time taken to load the complete page once the TLS handshake is complete. The difference between the RTT and the TTFB gives you the time spent downloading content from a page. If your page has a large amount of content, the difference between TTFB and RTT will be high. (in light orange, approx 302ms). The page load time is approximately 80 ms for the address.</p></li></ol><p>The information above suggests a number of steps the website owner can take. The delay in the initial TCP connection time could be decreased by making the website available in different geo locations around the world. This could also reduce the time for the TLS handshake, as each round trip will be faster. Another thing that is visible is the page load time of 80ms. This might be because of the contents of the page; perhaps compression can be applied on the server side to improve the load time, or unnecessary content can be removed. The information in the waterfall view can also tell you if an additional external library increases the time to load the website after a release.</p><p>Cloudflare has over 200 edge locations around the world, making it one of the largest Anycast networks on the planet. When a health check is configured, it can be run across the different regions of the Cloudflare infrastructure, enabling you to see the variation in latency around the world for specific Health Checks.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2mvXsdyNuJ4mV3QPhcbjag/a0043711628a194fddb8ce60c6aab2a2/image2-4.png" />
            
            </figure><p>Waterfall from India</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/ehZAiYItuf69fyMo7Gful/d723821ec1d4f7f2ac662a94e7d29965/image5-2.png" />
            
            </figure><p>Waterfall from Western North America‌‌</p><p>Based on the new information provided from Health Check Analytics, users can definitively validate that the address performs better from Western North America compared to India due to the page load time and overall RTT.</p>
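<p>The arithmetic behind these waterfalls is straightforward: each stage is a slice of the total, and the content download time is the difference between the RTT and the TTFB. A small helper (hypothetical, using the example timings from the list above):</p>

```python
def waterfall(tcp_ms, tls_ms, ttfb_ms, rtt_ms):
    """Break a request into the waterfall stages described above."""
    return {
        "tcp_connect": tcp_ms,
        "tls_handshake": tls_ms,
        "time_to_first_byte": ttfb_ms,
        # Time spent downloading content: total RTT minus TTFB.
        "content_download": rtt_ms - ttfb_ms,
    }
```

<p>With the example timings (41 ms TCP, 80 ms TLS, 222 ms TTFB, 302 ms RTT), the content download works out to roughly 80 ms, matching the page load time quoted earlier.</p>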
    <div>
      <h3>How do health checks run?</h3>
      <a href="#how-do-health-checks-run">
        
      </a>
    </div>
    <p>To understand and decipher the logs that are found in the analytics dashboard, it is important to understand how Cloudflare runs the Health Checks. Cloudflare has data centers in more than 200 cities across 90+ countries throughout the world [<a href="/cloudflare-expanded-to-200-cities-in-2019/">more</a>]. We don’t run health checks from every single one of these data centers (that would be a lot of requests to your servers!). Instead, we let you pick between one and thirteen regions from which to run health checks [<a href="https://api.cloudflare.com/#health-checks-create-health-check">Regions</a>].</p><p>The Internet is not the same everywhere around the world, so your users may not have the same experience on your application depending on where they are. Running Health Checks from different regions lets you know the health of your application from the point of view of the Cloudflare network in each of these regions.</p><p>Imagine you configure a Health Check from two regions, Western North America and South East Asia, at an interval of 10 seconds. You may have been expecting to get two requests to your origin server every 10 seconds, but if you look at your server’s logs you will see that you are actually getting six. That is because we send Health Checks not just from one location in each region but three.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6thlddjekU628u1WI3wPrY/204fd64bba493ca2d3b3969ca9a8ddff/image3-1.png" />
            
            </figure><p>For a health check configured from All Regions (thirteen regions) there will be 39 requests to your server per configured interval.</p><p>You may wonder: ‘Why do you probe from multiple locations within a region?’ We do this to make sure the health we report represents the overall performance of your service as seen from that region. Before we report a change, we check that at least two locations agree on the status. We added a third one to make sure that the system keeps running even if there is an issue at one of our locations.</p>
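<p>The probing math and the quorum rule above can be sketched as follows; the helper names are ours, not Cloudflare's:</p>

```python
LOCATIONS_PER_REGION = 3  # each configured region probes from three locations

def requests_per_interval(regions):
    """Health check requests your origin sees per configured interval."""
    return regions * LOCATIONS_PER_REGION

def region_status(location_results):
    """A region reports healthy only when at least two of its three
    locations agree (the quorum described above)."""
    healthy = sum(1 for ok in location_results if ok)
    return "healthy" if healthy >= 2 else "unhealthy"
```

<p>Two regions yield six requests per interval; all thirteen regions yield 39, as described above.</p>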
    <div>
      <h3>Conclusion</h3>
      <a href="#conclusion">
        
      </a>
    </div>
    <p>Health Check Analytics is now live and available to all Pro, Business and Enterprise customers!  We are very excited to help decrease your time to resolution and ensure your application reliability is maximised.</p> ]]></content:encoded>
            <category><![CDATA[Insights]]></category>
            <category><![CDATA[Analytics]]></category>
            <category><![CDATA[Product News]]></category>
            <guid isPermaLink="false">6xUAxLdz1IsT1SOEn5ErYy</guid>
            <dc:creator>Fabienne Semeria</dc:creator>
            <dc:creator>Brian Batraski</dc:creator>
            <dc:creator>George Thomas</dc:creator>
        </item>
        <item>
            <title><![CDATA[Adding the Fallback Pool to the Load Balancing UI and other significant UI enhancements]]></title>
            <link>https://blog.cloudflare.com/adding-the-fallback-pool-to-the-load-balancing-ui/</link>
            <pubDate>Sat, 21 Mar 2020 12:00:00 GMT</pubDate>
            <description><![CDATA[ The Cloudflare Load Balancer was introduced over three years ago to provide our customers with a powerful, easy to use tool to intelligently route traffic to their origins across the world. ]]></description>
            <content:encoded><![CDATA[ <p>The Cloudflare Load Balancer was <a href="/cloudflare-traffic/">introduced</a> over three years ago to provide our customers with a powerful, easy-to-use tool to intelligently route traffic to their origins across the world. During the initial design process, one of the questions we had to answer was ‘where do we send traffic if all pools are down?’ We did not think it made sense just to drop the traffic, so we used the concept of a ‘fallback pool’ to send traffic to a ‘pool of last resort’ in the case that no pools were detected as available. While this may still result in an error, it gave an eyeball request a chance at being served successfully in case the pool was still up.</p><p>As a brief reminder, a <a href="https://www.cloudflare.com/learning/performance/what-is-load-balancing/">load balancer</a> helps route traffic across your origin servers to ensure your overall infrastructure stays healthy and available. Load Balancers are made up of pools, which can be thought of as collections of servers in a particular location.</p><p>Over the past three years, we’ve made many updates to the dashboard, and the new designs now bring the fallback pool into the dashboard UI. A fallback pool is incredibly helpful in a tight spot, but not being able to view it in the dashboard led to confusion: which pool was set as the fallback? Was a fallback pool set at all? We want to be sure you have the tools to support your day-to-day work, while also ensuring our dashboard is usable and intuitive.</p><p>You can now see which pool is set as the fallback in any given Load Balancer, and easily designate any pool in the Load Balancer as the fallback. If no fallback pool is set, then the last pool in the list will automatically be chosen. We chose to auto-set a pool to be sure that customers are always covered if the worst-case scenario happens. You can access the fallback pool within the Traffic app of the Cloudflare dashboard when creating or editing a Load Balancer.</p>
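<p>The selection order described above can be sketched in a few lines. This is an illustrative model, not Cloudflare's implementation, and the pool names are made up:</p>

```python
# Illustrative sketch of the pool-selection order described above:
# prefer the first available pool, then the configured fallback pool,
# and, if no fallback is set, the last pool in the list.
def select_pool(pools, available, fallback=None):
    """pools: ordered list of pool names; available: set of pools detected as up."""
    for pool in pools:
        if pool in available:
            return pool
    # No pool is available: send traffic to the 'pool of last resort'.
    return fallback if fallback is not None else pools[-1]
```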
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4ggltAcFa0BLRxsGYz12Lz/fb485229a99a7ee22949b9506be9c90b/image3-2.png" />
            
            </figure>
    <div>
      <h2>Load Balancing UI Improvements</h2>
      <a href="#load-balancing-ui-improvements">
        
      </a>
    </div>
    <p>Not only did we add the fallback pool to the UI, but we saw this as an opportunity to update other areas of the Load Balancing app that have caused some confusion in the past.</p>
    <div>
      <h3>Facelift and De-modaling</h3>
      <a href="#facelift-and-de-modaling">
        
      </a>
    </div>
    <p>To start, we gave the main Load Balancing page a facelift and de-modaled (moved content out of a smaller modal screen into a larger area) the majority of the Load Balancing UI. Moving this content out of a small web element lets users more easily understand the page, and lets us make better use of the available space rather than being limited to the confines of a modal. This change applies when you create or edit a Load Balancer and when you manage monitors and/or pools.</p>
    <div>
      <h4>Before:</h4>
      <a href="#before">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6ODp92PgLiDpHkvYbouxHz/c1dfa0f0ddf83384599ccff09246f0b6/image5-4.png" />
            
            </figure>
    <div>
      <h4>After:</h4>
      <a href="#after">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/sfJyMiONZBd9isiHDqhYy/05d65590ce4c7e0104c840c37be205fd/image6-2.png" />
            
            </figure><p>The updated UI combines the health status and icon to declutter the available space and make the status of a particular Load Balancer or Pool clear at a glance. We also switched to a smaller toggle button across the Load Balancing UI, using the reclaimed margin space to improve the action buttons. Now that we are using the page surface area more efficiently, we added more information to our tables so users are more aware of the shared aspects of their Load Balancers.</p>
    <div>
      <h3>Shared Objects and Editing</h3>
      <a href="#shared-objects-and-editing">
        
      </a>
    </div>
    <p>Shared objects have caused some concern for companies with teams across the world, all leveraging the Cloudflare dashboard.</p><p>The shared objects, Monitors and Pools, now have a new column outlining which Pools or Load Balancers currently use a particular Monitor or Pool. This brings more clarity around what will be affected by any change made by someone in your organization, and helps users be more autonomous and confident when making updates in the dashboard. If someone from team A wants to update a monitor for a production server, they can do so without worrying about breaking monitoring for another pool or having to speak to team B first. The time saved and the freedom to make updates as your business changes are incredibly valuable: they support the velocity you want to achieve while maintaining a safe environment to operate in. The days of worrying about unforeseen consequences cropping up later down the road are swiftly coming to a close.</p><p>This helps teams understand the impact of a given change and what else would be affected. But we did not feel this was enough; we want everyone to be confident in the changes they are making. On top of the additional columns, we added a number of confirmation modals, each listing the other Load Balancers or Pools that would be impacted by the change. To really drive the message home about which objects are shared, we made a final change: monitors can now be edited only within the Manage Monitors page. Having users navigate to the manage page itself reinforces that these items are shared. Allowing edits to a Monitor in the same view as editing a Load Balancer, for example, can make it seem like those changes apply only to that Load Balancer, which is not always the case.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7cPA9DDTZ7EmDkZCtr9PCf/3c334437776e2462badfbfd86b8fcae7/image9-3.png" />
            
            </figure>
    <div>
      <h4>Manage Monitors before:</h4>
      <a href="#manage-monitors-before">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4awzMfOIMP6lshqnLF409V/92ca4869adedddfb4da832e199991089/image8-1.png" />
            
            </figure>
    <div>
      <h4>Manage monitors after:</h4>
      <a href="#manage-monitors-after">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3xLO3oVzvRaRdpShzPtrY8/d8ca418d62bd4ed213fbb180aeac0f76/image2-4.png" />
            
            </figure>
    <div>
      <h3>Updated CTAs/Buttons</h3>
      <a href="#updated-ctas-buttons">
        
      </a>
    </div>
    <p>Lastly, when users expanded the Manage Load Balancer table to view more details about the Pools or Origins within a specific Load Balancer, they would click the large X icon in the top right of the expanded card to close it, which seems reasonable in that context.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3gTj2r2LBTViiOpVxQptqw/4d569e7beb2b7a280317f4858ad91599/image7-2.png" />
            
            </figure><p>However, the X icon did not close the expanded card: it deleted the Load Balancer altogether. This is dangerous, and we want to prevent users from making such mistakes. With the space gained from de-modaling large areas of the UI, we have replaced these icon buttons with clickable text buttons that read ‘Edit’ or ‘Delete’. The difference is clearly defined text describing the action that will take place, rather than leaving it up to a user’s interpretation of what the icon means and what action it would result in. We felt this was much clearer and ensures users aren’t met with unwanted changes.</p><p>We are very excited about the updates to the Load Balancing dashboard and look forward to improving day in and day out.</p>
    <div>
      <h4>After:</h4>
      <a href="#after">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3ZblI7RVkKGLKcgJwvX9J0/f1928fd1df0fdc36d0a1575362ed3155/image1-6.png" />
            
            </figure> ]]></content:encoded>
            <category><![CDATA[Load Balancing]]></category>
            <category><![CDATA[Speed & Reliability]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[Dashboard]]></category>
            <guid isPermaLink="false">6LhSBUmHtAtDR5zLfiumFX</guid>
            <dc:creator>Brian Batraski</dc:creator>
        </item>
        <item>
            <title><![CDATA[New tools to monitor your server and avoid downtime]]></title>
            <link>https://blog.cloudflare.com/new-tools-to-monitor-your-server-and-avoid-downtime/</link>
            <pubDate>Wed, 11 Dec 2019 10:13:00 GMT</pubDate>
            <description><![CDATA[ When your server goes down, it’s a big problem. Today, Cloudflare is introducing two new tools to help you understand and respond faster to origin downtime — plus, a new service to automatically avoid downtime. ]]></description>
            <content:encoded><![CDATA[ <p></p><p>When your server goes down, it’s a big problem. Today, Cloudflare is introducing two new tools to help you understand and respond faster to origin downtime — plus, a new service to automatically <i>avoid</i> downtime.</p><p>The new features are:</p><ul><li><p><b>Standalone Health Checks</b>, which notify you as soon as we detect problems at your origin server, without needing a Cloudflare Load Balancer.</p></li><li><p><b>Passive Origin Monitoring</b>, which lets you know when your origin cannot be reached, with no configuration required.</p></li><li><p><b>Zero-Downtime Failover</b>, which can automatically avert failures by retrying requests to origin.</p></li></ul>
    <div>
      <h3>Standalone Health Checks</h3>
      <a href="#standalone-health-checks">
        
      </a>
    </div>
    <p>Our first new tool is Standalone Health Checks, which will notify you as soon as we detect problems at your origin server -- without needing a Cloudflare Load Balancer.</p><p>A <i>Health Check</i> is a service that runs on our edge network to monitor whether your origin server is online. Health Checks are a key part of our load balancing service because they allow us to quickly and actively route traffic to origin servers that are live and ready to serve requests. Standalone Health Checks allow you to monitor the health of your origin even if you only have one origin or do not yet need to balance traffic across your infrastructure.</p><p>We’ve provided many dimensions for you to home in on exactly what you’d like to check, including response code, protocol type, and interval. You can specify a particular path if your origin serves multiple applications, or you can check a larger subset of response codes for your staging environment. All of these options allow you to properly target your Health Check, giving you a precise picture of what is wrong with your origin.</p>
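<p>Conceptually, a Health Check boils down to probing a path and comparing the result against what you expect. Here is a toy model; the function names and defaults are ours, purely for illustration:</p>

```python
# Toy model of a health check: the probe passes when the origin's
# status code is in the expected set. `fetch` stands in for an HTTP
# probe of the origin and simply returns a status code.
def check_health(fetch, path="/health", expected_codes=(200,)):
    status = fetch(path)
    return status in expected_codes

# A staging environment might accept a larger subset of response codes:
def staging_check(fetch):
    return check_health(fetch, path="/status",
                        expected_codes=range(200, 400))
```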
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7vtTuksJ9p1aCImlW8G92Q/77f4673597912f4e067c567c9caa7414/image4-3.png" />
            
            </figure><p>If one of your origin servers becomes unavailable, you will receive a notification letting you know of the health change, along with detailed information about the failure so you can take action to restore your origin’s health.</p><p>Lastly, once you’ve set up your Health Checks across the different origin servers, you may want to see trends or the top unhealthy origins. With Health Check Analytics, you’ll be able to view all the change events for a given health check, isolate origins that may be top offenders or not performing up to par, and move forward with a fix. On top of this, in the near future, we are working to provide you with access to all Health Check raw events, giving you the detail you need to compare Cloudflare Health Check Event logs against internal server logs.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1VYa5CvnnQFYZPtYy6HJ0D/c0b8f509411e19726998b1d6fd3f2b49/image5-3.png" />
            
            </figure><p><b>Users on the Pro, Business, or Enterprise plan will have access to Standalone Health Checks and Health Check Analytics</b> to promote top-tier application reliability and help maximize brand trust with their customers. You can access Standalone Health Checks and Health Check Analytics through the Traffic app in the dashboard.</p>
    <div>
      <h3>Passive Origin Monitoring</h3>
      <a href="#passive-origin-monitoring">
        
      </a>
    </div>
    <p>Standalone Health Checks are a super flexible way to understand what’s happening at your origin server. However, they require some forethought to configure before an outage happens. That’s why we’re excited to introduce <i>Passive</i> Origin Monitoring, which will automatically notify you when a problem occurs -- no configuration required.</p><p>Cloudflare knows when your origin is down, because we’re the ones trying to reach it to serve traffic! When we detect downtime lasting longer than a few minutes, we’ll send you an email.</p><p>Starting today, you can configure origin monitoring alerts to go to multiple email addresses. Origin Monitoring alerts are available in the new Notification Center (more on that below!) in the Cloudflare dashboard:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/KZoJ9kNRg0X8i3LJkBtZN/fd6b4d2b57d0781d495b17effd855495/image1-6.png" />
            
            </figure><p><b>Passive Origin Monitoring is available to customers on </b><a href="https://www.cloudflare.com/plans/"><b>all Cloudflare plans</b></a><b>.</b></p>
    <div>
      <h3>Zero-Downtime Failover</h3>
      <a href="#zero-downtime-failover">
        
      </a>
    </div>
    <p>What’s better than getting notified about downtime? Never having downtime in the first place! With Zero-Downtime Failover, we can automatically retry requests to origin, even before Load Balancing kicks in.</p><p>How does it work? If a request to your origin fails, and Cloudflare has another record for your origin server, we’ll just try another origin <i>within the same HTTP request</i>. The alternate record could be either an A/AAAA record configured via Cloudflare DNS, or another origin server in the same Load Balancing pool.</p><p>Consider a website, example.com, that has web servers at two different IP addresses: <code>203.0.113.1</code> and <code>203.0.113.2</code>. Before Zero-Downtime Failover, if <code>203.0.113.1</code> becomes unavailable, Cloudflare would attempt to connect, fail, and ultimately serve an error page to the user. With Zero-Downtime Failover, if <code>203.0.113.1</code> cannot be reached, then Cloudflare’s proxy will seamlessly attempt to connect to <code>203.0.113.2</code>. If the second server can respond, then Cloudflare can avert serving an error to example.com’s user.</p><p>Since we rolled out Zero-Downtime Failover a few weeks ago, we’ve prevented <b>tens of millions of requests per day</b> from failing!</p><p>Zero-Downtime <a href="https://www.cloudflare.com/learning/performance/what-is-load-balancing/">Failover</a> works in conjunction with <a href="https://www.cloudflare.com/learning/performance/what-is-load-balancing/">Load Balancing</a>, Standalone Health Checks, and Passive Origin Monitoring to keep your website running without a hitch. Health Checks and Load Balancing can avert failure, but take time to kick in. Zero-Downtime Failover works instantly, but adds latency on each connection attempt. In practice, Zero-Downtime Failover is helpful at the <i>start</i> of an event, when it can instantly recover from errors; once a Health Check has detected a problem, a Load Balancer can then kick in and properly re-route traffic. And if no origin is available, we’ll send an alert via Passive Origin Monitoring.</p><p>To see an example of this in practice, consider an incident from a recent customer. They saw a spike in errors at their origin that would ordinarily cause availability to plummet (red line), but thanks to Zero-Downtime Failover, their actual availability stayed flat (blue line).</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2cvaHca9BEnJbZXotcJ20N/e2e930d0986728978a569df15e719f51/zdf-availability.png" />
            
            </figure><p>During a 30 minute time period, Zero-Downtime Failover improved overall availability from 99.53% to 99.98%, and prevented 140,000 HTTP requests from resulting in an error.</p><p>It’s important to note that we only attempt to retry requests that have failed during the TCP or TLS connection phase, which ensures that HTTP headers and payload have not been transmitted yet. Thanks to this safety mechanism, <b>we're able to make Zero-Downtime Failover Cloudflare's default behavior for Pro, Business, and Enterprise plans</b>. In other words, Zero-Downtime Failover makes connections to your origins more reliable with no configuration or action required.</p>
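<p>The retry behavior described above can be modeled simply: attempt each known address for the origin in turn, and move on only when the failure happened during the connection phase, before any request bytes were sent. A hedged sketch, not Cloudflare's implementation; the names and error type are ours:</p>

```python
# Sketch of connection-phase failover: a ConnectError (standing in for
# a TCP/TLS failure, before any of the request has been transmitted)
# is the only condition that triggers a retry against the next address.
class ConnectError(Exception):
    pass

def fetch_with_failover(origins, connect):
    """origins: ordered addresses; connect: returns a response or raises ConnectError."""
    last_error = None
    for address in origins:
        try:
            return connect(address)      # first reachable origin wins
        except ConnectError as error:
            last_error = error           # safe to retry: nothing was sent yet
    raise last_error                     # no origin was reachable
```

<p>With the example.com addresses above, a <code>ConnectError</code> from <code>203.0.113.1</code> means the request is transparently retried against <code>203.0.113.2</code> within the same HTTP request.</p>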
    <div>
      <h3>Coming soon: more notifications, more flexibility</h3>
      <a href="#coming-soon-more-notifications-more-flexibility">
        
      </a>
    </div>
    <p>Our customers are always asking us for more insights into the health of their critical edge infrastructure. Health Checks and Passive Origin Monitoring are a significant step towards Cloudflare taking a <b>proactive</b> instead of reactive approach to insights.</p><p>To support this work, today we’re announcing the <b>Notification Center</b> as the central place to manage notifications. This is available in the dashboard today, accessible from your Account Home.</p><p>From here, you can create new notifications, as well as view any existing notifications you’ve already set up. Today’s release allows you to configure Passive Origin Monitoring notifications, and set multiple email recipients.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3ZDuV1fp4eY29qoySlyBr2/a67bfa65849c3194b1ab990b889f66c2/image2-7.png" />
            
            </figure><p>We’re excited for today’s launches to help our customers avoid downtime. Based on your feedback, we have lots of improvements planned that can help you get the timely insights you need:</p><ul><li><p>New notification delivery mechanisms</p></li><li><p>More events that can trigger notifications</p></li><li><p>Advanced configuration options for Health Checks, including added protocols, threshold-based notifications, and threshold-based status changes</p></li><li><p>More ways to configure Passive Health Checks, like the ability to add thresholds and filter to specific status codes</p></li></ul> ]]></content:encoded>
            <category><![CDATA[Insights]]></category>
            <category><![CDATA[Analytics]]></category>
            <category><![CDATA[Product News]]></category>
            <guid isPermaLink="false">Uj0yC4ktYS40SSrcbwbbH</guid>
            <dc:creator>Brian Batraski</dc:creator>
            <dc:creator>Jon Levine</dc:creator>
            <dc:creator>Steven Pack</dc:creator>
        </item>
        <item>
            <title><![CDATA[Introducing Load Balancing Analytics]]></title>
            <link>https://blog.cloudflare.com/introducing-load-balancing-analytics/</link>
            <pubDate>Tue, 10 Dec 2019 14:00:00 GMT</pubDate>
            <description><![CDATA[ Cloudflare aspires to make Internet properties everywhere faster, more secure, and more reliable. Load Balancing helps with speed and reliability and has been evolving over the past three years. ]]></description>
            <content:encoded><![CDATA[ <p>Cloudflare aspires to make Internet properties everywhere faster, more secure, and more reliable. <a href="https://www.cloudflare.com/load-balancing/">Load Balancing</a> helps with speed and reliability and has been evolving over the past <a href="/cloudflare-traffic/">three years</a>.</p><p>Let’s go through a scenario that highlights a bit more of <a href="https://www.cloudflare.com/learning/performance/what-is-load-balancing/">what a Load Balancer is</a> and the value it can provide. A standard load balancer comprises a set of pools, each of which has origin servers identified by hostnames and/or IP addresses. A routing policy is assigned to each load balancer, which determines the origin pool selection process.</p><p>Let’s say you build an API that uses cloud provider ACME Web Services. Unfortunately, ACME had a rough week, and their service had a regional outage in their Eastern US region. Consequently, your API was unable to serve traffic during this period, which resulted in reduced brand trust from users and missed revenue. To prevent this from happening again, you decide to take two steps: use a secondary cloud provider (in order to avoid having ACME as a single point of failure) and use Cloudflare’s Load Balancing to take advantage of the multi-cloud architecture. Cloudflare’s Load Balancing can help you maximize your API’s availability for your new architecture. For example, you can assign health checks to each of your origin pools. These health checks can monitor your origin servers’ health by checking HTTP status codes, response bodies, and more. If an origin pool’s response doesn’t match what is expected, then traffic will stop being steered there. This will reduce downtime for your API when ACME has a regional outage because traffic in that region will seamlessly be rerouted to your fallback origin pool(s). In this scenario, you can set the fallback pool to be origin servers in your secondary cloud provider. In addition to health checks, you can use the ‘random’ routing policy in order to distribute your customers’ API requests evenly across your backend. If you want to optimize your response time instead, you can use ‘dynamic steering’, which will send traffic to the origin determined to be closest to your customer.</p><p>Our customers love Cloudflare Load Balancing, and we’re always looking to improve and make our customers’ lives easier. Since Cloudflare’s Load Balancing was first released, the most popular customer request was for an analytics service that would provide insights on traffic steering decisions.</p><p>Today, we are rolling out <a href="https://dash.cloudflare.com/traffic/load-balancing-analytics">Load Balancing Analytics</a> in the Traffic tab of the Cloudflare dashboard. The three major components in the analytics service are:</p><ul><li><p>An overview of traffic flow that can be filtered by load balancer, pool, origin, and region.</p></li><li><p>A latency map that indicates origin health status and latency metrics from <a href="https://www.cloudflare.com/network/">Cloudflare’s global network spanning 194 cities</a> and growing!</p></li><li><p>Event logs denoting changes in origin health. This feature was released in 2018 and tracks pool and origin transitions between healthy and unhealthy states. We’ve moved these logs under the new Load Balancing Analytics subtab. See the <a href="https://support.cloudflare.com/hc/en-us/articles/360000062871-Load-Balancing-Event-Logs">documentation</a> to learn more.</p></li></ul><p>In this blog post, we’ll discuss the traffic flow distribution and the latency map.</p>
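<p>To make the two routing policies concrete, here is a toy router; the pool names and round-trip times are invented for the example:</p>

```python
import random

# 'random' steering spreads requests evenly across pools, while
# 'dynamic' steering picks the pool with the lowest measured
# round-trip time (a simplification of weighted-RTT dynamic steering).
def route(pools, policy, rtt_ms=None, rng=random):
    if policy == "random":
        return rng.choice(list(pools))
    if policy == "dynamic":
        return min(pools, key=lambda pool: rtt_ms[pool])
    raise ValueError("unknown policy: " + policy)
```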
    <div>
      <h2>Traffic Flow Overview</h2>
      <a href="#traffic-flow-overview">
        
      </a>
    </div>
    <p>Our users want a detailed view into where their traffic is going, why it is going there, and insights into what changes may optimize their infrastructure. With Load Balancing Analytics, users can graphically view traffic demands on load balancers, pools, and origins over variable time ranges.</p><p>Understanding how traffic flow is distributed informs the process of creating new origin pools, adapting to peak traffic demands, and observing failover response during origin pool failures.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6TWw8kMjxWkVZ2m8MeYz8D/863596a6142772d6391a1e4dd3b1f3f4/image5-1.png" />
            
            </figure><p>Figure 1</p><p>In Figure 1, we can see an overview of traffic for a given domain. On Tuesday, the 24th, the red pool was created and added to the load balancer. In the following 36 hours, as the red pool handled more traffic, the blue and green pools both saw a reduced workload. In this scenario, the traffic distribution graph provided the customer with new insights. First, it demonstrated that traffic was being steered to the new red pool. It also allowed the customer to understand the new level of traffic distribution across their network. Finally, it allowed the customer to confirm whether traffic decreased in the expected pools. Over time, these graphs can be used to better manage capacity and plan for upcoming infrastructure needs.</p>
    <div>
      <h2>Latency Map</h2>
      <a href="#latency-map">
        
      </a>
    </div>
    <p>The traffic distribution overview is only one part of the puzzle. Another essential component is understanding request performance around the world. This is useful because customers can ensure user requests are handled as fast as possible, regardless of where in the world the request originates.</p><p>The standard Load Balancing configuration contains monitors that probe the health of customer origins. These monitors can be configured to run from a particular region(s) or, for Enterprise customers, from <a href="https://www.cloudflare.com/network/">all Cloudflare locations</a>. They collect useful information, such as round-trip time, that can be aggregated to create the latency map.</p><p>The map provides a summary of how responsive origins are from around the world, so customers can see regions where requests are underperforming and may need further investigation. A common metric used to identify performance is request latency. We found that the p90 latency for all Load Balancing origins being monitored is 300 milliseconds, which means that 90% of all monitors’ health checks had a round trip time faster than 300 milliseconds. We used this value to identify locations where latency was slower than the p90 latency seen by other Load Balancing customers.</p>
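<p>A p90 figure like the 300 millisecond threshold above is straightforward to compute from raw round-trip-time samples. An illustrative computation using the nearest-rank method (the sample values are made up):</p>

```python
import math

# Nearest-rank percentile: the smallest sample such that pct percent
# of all samples fall at or below it.
def percentile(samples, pct):
    ordered = sorted(samples)
    rank = math.ceil(pct / 100 * len(ordered))
    return ordered[rank - 1]

# 90% of these health-check RTTs (in ms) fall at or below the result.
p90 = percentile([120, 95, 300, 80, 210, 150, 100, 90, 250, 400], 90)  # 300
```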
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/h0Yh8XJ5eicGBGDwqzKgO/5261df5479aaccfe537ebb6cab47a61f/image6-2.png" />
            
            </figure><p>Figure 2</p><p>In Figure 2, we can see the responsiveness of the Northeast Asia pool. The Northeast Asia pool is slow specifically for monitors in South America, the Middle East, and Southern Africa, but fast for monitors that are probing closer to the origin pool. Unfortunately, this means users of this pool in countries like Paraguay are seeing high request latency. High page load times have many unfortunate consequences: higher visitor bounce rate, decreased visitor satisfaction rate, and a lower search engine ranking. In order to avoid these repercussions, a site administrator could consider adding a new origin pool in a region closer to the underserved users. In Figure 3, we can see the result of adding a new origin pool in Eastern North America. The number of locations where the domain was found to be unhealthy drops to zero, and the number of slow locations is cut by more than 50%.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3ZgBHTryIIddChrdpq3nPl/5ae7f131f2894e2caac5d28800681a48/image3-2.png" />
            
            </figure><p>Figure 3</p><p>Tied with the traffic flow metrics from the Overview page, the latency map arms users with insights to optimize their internal systems, reduce their costs, and increase their <a href="https://www.cloudflare.com/learning/performance/glossary/application-availability/">application availability</a>.</p>
    <div>
      <h2>GraphQL Analytics API</h2>
      <a href="#graphql-analytics-api">
        
      </a>
    </div>
    <p>Behind the scenes, Load Balancing Analytics is powered by the GraphQL Analytics API. As you’ll learn later this week, GraphQL provides many benefits to us at Cloudflare. Customers now only need to learn a single API format that will allow them to extract only the data they require. For internal development, GraphQL eliminates the need for customized analytics APIs for each service, reduces query cost by increasing cache hits, and reduces developer fatigue by using a straightforward query language with standardized input and output formats. Very soon, all Load Balancing customers on paid plans will be given the opportunity to extract insights from the GraphQL API.  Let’s walk through some examples of how you can utilize the GraphQL API to understand your Load Balancing logs.</p><p>Suppose you want to understand the number of requests the pools for a load balancer are seeing from the different locations in Cloudflare’s global network. The query in Figure 4 counts the number of unique (location, pool ID) combinations every fifteen minutes over the course of a week.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/iG1rrtCkk9R95r1l8b2Yf/2b4b4fc042c9141e93bc627d29ee0182/image7.png" />
            
            </figure><p>Figure 4</p><p>For context, our example load balancer, lb.example.com, utilizes dynamic steering. <a href="/i-wanna-go-fast-load-balancing-dynamic-steering/">Dynamic steering</a> directs requests to the most responsive available origin pool, which is often the closest. It does so using a weighted round-trip time measurement. Let’s try to understand why all traffic from Singapore (SIN) is being steered to our pool in Northeast Asia (asia-ne). We can run the query in Figure 5. This query shows us that the asia-ne pool has an avgRttMs value of 67ms, whereas the other two pools have avgRttMs values that exceed 150ms. The lower avgRttMs value explains why traffic in Singapore is being routed to the asia-ne pool.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5y7YRu5ZthmJV6JMZ6Sxlv/8199396319354447783e07a62cf685ad/image4-1.png" />
            
            </figure><p>Figure 5</p><p>Notice how the query in Figure 4 uses the loadBalancingRequestsGroups schema, whereas the query in Figure 5 uses the loadBalancingRequests schema. loadBalancingRequestsGroups queries aggregate data over the requested query interval, whereas loadBalancingRequests provides granular information on individual requests. For those ready to get started, Cloudflare has written a helpful <a href="https://developers.cloudflare.com/analytics/graphql-api/getting-started/">guide</a>. The <a href="https://graphql.org/learn/">GraphQL website</a> is also a great resource. We recommend you use an IDE like <a href="https://electronjs.org/apps/graphiql">GraphiQL</a> to make your queries. GraphiQL embeds the schema documentation into the IDE, autocompletes, saves your queries, and manages your custom headers, all of which help make the developer experience smoother.</p>
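<p>For a flavor of what such a query can look like, here is a hedged sketch against the loadBalancingRequestsGroups schema. The field and filter names below are illustrative assumptions, not the exact schema; consult the getting-started guide linked above for the real shape:</p>

```python
# Assumed, illustrative GraphQL query shape (not the exact schema):
# count (location, pool) combinations in fifteen-minute buckets.
QUERY = """
query ($zoneTag: String!, $start: Time!) {
  viewer {
    zones(filter: { zoneTag: $zoneTag }) {
      loadBalancingRequestsGroups(
        limit: 1000
        filter: { datetime_gt: $start }
      ) {
        count
        dimensions {
          coloCode            # Cloudflare location handling the request
          selectedPoolId      # pool the request was steered to
          datetimeFifteenMinutes
        }
      }
    }
  }
}
"""
```

<p>You would POST this query, along with the zoneTag and start-time variables, to the GraphQL Analytics API endpoint described in the guide.</p>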
    <div>
      <h2>Conclusion</h2>
      <a href="#conclusion">
        
      </a>
    </div>
    <p>Now that the Load Balancing Analytics solution is live and available to all Pro, Business, and Enterprise customers, we’re excited for you to start using it! We’ve attached a survey to the Traffic overview page, and we’d love to hear your feedback.</p> ]]></content:encoded>
            <category><![CDATA[Insights]]></category>
            <category><![CDATA[Analytics]]></category>
            <category><![CDATA[Load Balancing]]></category>
            <category><![CDATA[Product News]]></category>
            <guid isPermaLink="false">50oghLDuz1nzE8oznPQP3Y</guid>
            <dc:creator>Brian Batraski</dc:creator>
            <dc:creator>Rohin Lohe</dc:creator>
        </item>
    </channel>
</rss>