
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/">
    <channel>
        <title><![CDATA[ The Cloudflare Blog ]]></title>
        <description><![CDATA[ Get the latest news on how products at Cloudflare are built, technologies used, and join the teams helping to build a better Internet. ]]></description>
        <link>https://blog.cloudflare.com</link>
        <atom:link href="https://blog.cloudflare.com/" rel="self" type="application/rss+xml"/>
        <language>en-us</language>
        <image>
            <url>https://blog.cloudflare.com/favicon.png</url>
            <title>The Cloudflare Blog</title>
            <link>https://blog.cloudflare.com</link>
        </image>
        <lastBuildDate>Thu, 09 Apr 2026 22:47:48 GMT</lastBuildDate>
        <item>
            <title><![CDATA[DIY BYOIP: a new way to Bring Your Own IP prefixes to Cloudflare]]></title>
            <link>https://blog.cloudflare.com/diy-byoip/</link>
            <pubDate>Fri, 07 Nov 2025 14:00:00 GMT</pubDate>
            <description><![CDATA[ Announcing a new self-serve API for Bring Your Own IP (BYOIP), giving customers unprecedented control and flexibility to onboard, manage, and use their own IP prefixes with Cloudflare's services. ]]></description>
            <content:encoded><![CDATA[ <p>When a customer wants to <a href="https://blog.cloudflare.com/bringing-your-own-ips-to-cloudflare-byoip/"><u>bring IP address space to</u></a> Cloudflare, they’ve always had to reach out to their account team to put in a request. This request would then be sent to various Cloudflare engineering teams, such as addressing and network engineering, and then to the team responsible for the particular service they wanted to use the prefix with (e.g., CDN, Magic Transit, Spectrum, Egress). In addition, they had to work with their own legal teams (and potentially another organization, if they did not have primary ownership of an IP prefix) to get a <a href="https://developers.cloudflare.com/byoip/concepts/loa/"><u>Letter of Agency (LOA)</u></a> issued, often through multiple rounds of approvals. This process is complex, manual, and time-consuming for all parties involved, sometimes taking 4–6 weeks depending on various approvals.</p><p>Well, no longer! Today, we are pleased to announce the launch of our self-serve BYOIP API, which enables our customers to onboard and set up their BYOIP prefixes themselves.</p><p>With self-serve, we handle the bureaucracy for you. We have automated this process using the gold standard for routing security: the Resource Public Key Infrastructure (RPKI). All the while, we continue to ensure the best quality of service by generating LOAs on our customers’ behalf, based on the security guarantees of our new ownership validation process. This ensures that customer routes continue to be accepted in every corner of the Internet.</p><p>Cloudflare takes the security and stability of the whole Internet very seriously. RPKI is a cryptographically strong authorization mechanism and is, we believe, substantially more reliable than the common practice of relying on human review of scanned documents. 
However, deployment and availability of some RPKI-signed artifacts, like the AS Path Authorization (ASPA) object, remain limited, and for that reason we are limiting the initial scope of self-serve onboarding to BYOIP prefixes originated from Cloudflare's autonomous system number (ASN), AS13335. By doing this, we only need to rely on the publication of Route Origin Authorization (ROA) objects, which are widely available. This approach has the advantage of being safe for the Internet while also meeting the needs of most of our BYOIP customers.</p><p>Today, we take a major step forward in offering customers a more comprehensive IP address management (IPAM) platform. With the recent update to <a href="https://blog.cloudflare.com/your-ips-your-rules-enabling-more-efficient-address-space-usage/"><u>enable multiple services on a single BYOIP prefix</u></a> and this latest advancement to enable self-serve onboarding via our API, we hope customers feel empowered to take control of their IPs on our network.</p>
    <div>
      <h2>An evolution of Cloudflare BYOIP</h2>
      <a href="#an-evolution-of-cloudflare-byoip">
        
      </a>
    </div>
    <p>We want Cloudflare to feel like an extension of your infrastructure, which is why we <a href="https://blog.cloudflare.com/bringing-your-own-ips-to-cloudflare-byoip/"><u>originally launched Bring-Your-Own-IP (BYOIP) back in 2020</u></a>.</p><p>A quick refresher: Bring-Your-Own-IP allows customers to bring their own IP space to Cloudflare, exactly as the name suggests. Customers choose BYOIP for a number of reasons, chief among them control and configurability. An IP prefix is a range or block of IP addresses. Routers create a table of reachable prefixes, known as a routing table, to ensure that packets are delivered correctly across the Internet. When a customer's Cloudflare services are configured to use the customer's own addresses, onboarded to Cloudflare as BYOIP, a packet with a corresponding destination address will be routed across the Internet to Cloudflare's global edge network, where it will be received and processed. BYOIP can be used with our Layer 7 services, Spectrum, or Magic Transit.</p>
    <div>
      <h2>A look under the hood: How it works</h2>
      <a href="#a-look-under-the-hood-how-it-works">
        
      </a>
    </div>
    
    <div>
      <h3>Today’s world of prefix validation</h3>
      <a href="#todays-world-of-prefix-validation">
        
      </a>
    </div>
    <p>Let’s take a step back and look at the state of the BYOIP world right now. Say a customer has authority over a range of IP addresses, and they’d like to bring them to Cloudflare. We require customers to provide us with a Letter of Agency (LOA) and to have an Internet Routing Registry (IRR) record matching their prefix and ASN. Once we have this, we require manual review by a Cloudflare engineer. There are a few issues with this process:</p><ul><li><p>Insecure: The LOA is just a document, a piece of paper. The security of this method rests entirely on the diligence of the engineer reviewing the document. If the review fails to detect that a document is fraudulent or inaccurate, it is possible for a prefix or ASN to be hijacked.</p></li><li><p>Time-consuming: Generating a single LOA is not always sufficient. If you are leasing IP space, we will ask you to provide documentation confirming that relationship as well, so that we can see a clear chain of authorization from the original assignment or allocation of addresses to you. Gathering all the paper documents to verify this chain of ownership, combined with waiting for manual review, can result in weeks of delay before a prefix is deployed!</p></li></ul>
    <div>
      <h3>Automating trust: How Cloudflare verifies your BYOIP prefix ownership in minutes</h3>
      <a href="#automating-trust-how-cloudflare-verifies-your-byoip-prefix-ownership-in-minutes">
        
      </a>
    </div>
    <p>Moving to a self-serve model allowed us to rethink how we conduct prefix ownership checks. We asked ourselves: How can we quickly, securely, and automatically prove you are authorized to use your IP prefix and intend to route it through Cloudflare?</p><p>We ended up killing two birds with one stone, thanks to a two-step process involving the creation of an RPKI ROA (verification of intent) and the modification of IRR or rDNS records (verification of ownership). Self-serve not only lets prefixes be onboarded more quickly and without human intervention, but also applies more rigorous ownership checks than a simple scanned document ever could. While not 100% foolproof, it is a significant improvement in the way we verify ownership.</p>
    <div>
      <h3>Tapping into the authorities</h3>
      <a href="#tapping-into-the-authorities">
        
      </a>
    </div>
    <p>Regional Internet Registries (RIRs) are the organizations responsible for distributing and managing Internet number resources like IP addresses. There are five <a href="https://developers.cloudflare.com/byoip/get-started/#:~:text=Your%20prefix%20must%20be%20registered%20under%20one%20of%20the%20Regional%20Internet%20Registries%20(RIRs)%3A"><u>RIRs</u></a>, each operating in a different region of the world. Originally allocated address space from the Internet Assigned Numbers Authority (IANA), they in turn assign and allocate that IP space to Local Internet Registries (LIRs) like ISPs.</p><p>This process is based on RIR policies, which generally look at things like legal documentation, existing database/registry records, technical contacts, and BGP information. End users can obtain addresses from an LIR, or in some cases through an RIR directly. As IPv4 addresses have become more scarce, brokerage services have been launched to allow addresses to be leased for fixed periods from their original assignees.</p><p>The Internet Routing Registry (IRR) is a separate system that focuses on routing rather than address assignment. Many organizations operate IRR instances and allow routing information to be published, including all five RIRs. While most IRR instances impose few barriers to the publication of routing data, those operated by RIRs are capable of linking the ability to publish routing information to the organizations to which the corresponding addresses have been assigned. We believe that being able to modify an IRR record protected in this way provides a good signal that a user has the rights to use a prefix.</p><p>Example of a route object containing a validation token (using the documentation-only prefix 192.0.2.0/24):</p>
            <pre><code>% whois -h rr.arin.net 192.0.2.0/24

route:          192.0.2.0/24
origin:         AS13335
descr:          Example Company, Inc.
                cf-validation: 9477b6c3-4344-4ceb-85c4-6463e7d2453f
admin-c:        ADMIN2521-ARIN
tech-c:         ADMIN2521-ARIN
tech-c:         CLOUD146-ARIN
mnt-by:         MNT-CLOUD14
created:        2025-07-29T10:52:27Z
last-modified:  2025-07-29T10:52:27Z
source:         ARIN</code></pre>
            <p>For those who don’t want to go through the process of IRR-based validation, reverse DNS (rDNS) is provided as another secure method of verification. To manage rDNS for a prefix — whether it's creating a PTR record or a security TXT record — you must be granted permission by the entity that allocated the IP block in the first place (usually your ISP or the RIR).</p><p>This permission is demonstrated in one of two ways:</p><ul><li><p>Directly through the IP owner’s authenticated customer portal (ISP/RIR).</p></li><li><p>By the IP owner delegating authority to your third-party DNS provider via an NS record for your reverse zone.</p></li></ul><p>Example of a reverse domain lookup using the dig command (using the documentation-only prefix 192.0.2.0/24):</p>
            <pre><code>% dig cf-validation.2.0.192.in-addr.arpa TXT

; &lt;&lt;&gt;&gt; DiG 9.10.6 &lt;&lt;&gt;&gt; cf-validation.2.0.192.in-addr.arpa TXT
;; global options: +cmd
;; Got answer:
;; -&gt;&gt;HEADER&lt;&lt;- opcode: QUERY, status: NOERROR, id: 16686
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;cf-validation.2.0.192.in-addr.arpa. IN TXT

;; ANSWER SECTION:
cf-validation.2.0.192.in-addr.arpa. 300 IN TXT "b2f8af96-d32d-4c46-a886-f97d925d7977"

;; Query time: 35 msec
;; SERVER: 127.0.2.2#53(127.0.2.2)
;; WHEN: Fri Oct 24 10:43:52 EDT 2025
;; MSG SIZE  rcvd: 150</code></pre>
            <p>So how exactly is one supposed to modify these records? That’s where the validation token comes into play. Once you choose either the IRR or Reverse DNS method, we provide a unique, single-use validation token. You must add this token to the content of the relevant record, either in the IRR or in the DNS. Our system then looks for the presence of the token as evidence that the request is being made by someone with authorization to make the requested modification. If the token is found, verification is complete and your ownership is confirmed!</p>
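<p>Before requesting verification, you can confirm the token is publicly visible yourself, using the same tools shown above. The IRR instance, reverse zone, and token value in this sketch are illustrative:</p>

```shell
# IRR method: confirm the token appears in your route object
# (substitute the IRR instance that holds your record)
whois -h rr.arin.net 192.0.2.0/24 | grep cf-validation

# rDNS method: confirm the TXT record resolves
dig +short cf-validation.2.0.192.in-addr.arpa TXT
```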
    <div>
      <h3>The digital passport 🛂</h3>
      <a href="#the-digital-passport">
        
      </a>
    </div>
    <p>Ownership is only half the battle; we also need to confirm that you authorize Cloudflare to advertise your prefix. For this, we rely on the gold standard for routing security: the Resource Public Key Infrastructure (RPKI), and in particular Route Origin Authorization (ROA) objects.</p><p>A ROA is a cryptographically signed document that specifies which Autonomous System Number (ASN) is authorized to originate your IP prefix. You can think of a ROA as the digital equivalent of a certified, signed, and notarized contract from the owner of the prefix.</p><p>Relying parties can validate the signatures in a ROA using the RPKI. You simply create a ROA that specifies Cloudflare's ASN (AS13335) as an authorized originator and arrange for it to be signed. Many of our customers use hosted RPKI systems available through RIR portals for this. When our systems detect this signed authorization, your routing intention is instantly confirmed.</p><p>Many other companies that support BYOIP require a complex workflow involving creating self-signed certificates and manually modifying RDAP (Registration Data Access Protocol) records, a heavy administrative lift. By offering a choice of IRR object modification or reverse DNS TXT records, combined with RPKI, we provide a verification process that is much more familiar and straightforward for existing network operators.</p>
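<p>The outcome of ROA-based origin validation can be sketched in a few lines. This is a simplified illustration of the validation states defined in RFC 6811 (valid, invalid, not found), not Cloudflare’s implementation; the ROA data below is hypothetical and uses documentation-only prefixes:</p>

```python
import ipaddress

def validate_origin(roas, prefix, origin_asn):
    """Simplified route origin validation (in the spirit of RFC 6811).

    roas: list of (roa_prefix, max_length, asn) tuples.
    Returns "valid", "invalid", or "not-found".
    """
    announced = ipaddress.ip_network(prefix)
    covered = False
    for roa_prefix, max_length, asn in roas:
        roa_net = ipaddress.ip_network(roa_prefix)
        if roa_net.version == announced.version and announced.subnet_of(roa_net):
            covered = True
            # The announcement must come from the authorized ASN and must not
            # be more specific than the ROA's maxLength allows
            if asn == origin_asn and max_length >= announced.prefixlen:
                return "valid"
    return "invalid" if covered else "not-found"

# Hypothetical ROA authorizing AS13335 to originate 192.0.2.0/24
roas = [("192.0.2.0/24", 24, 13335)]
print(validate_origin(roas, "192.0.2.0/24", 13335))     # valid
print(validate_origin(roas, "192.0.2.0/24", 64496))     # invalid: wrong origin ASN
print(validate_origin(roas, "198.51.100.0/24", 13335))  # not-found: no covering ROA
```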
    <div>
      <h3>The global reach guarantee</h3>
      <a href="#the-global-reach-guarantee">
        
      </a>
    </div>
    <p>While the new self-serve flow ditches the need for the "dinosaur relic" that is the LOA, many network operators around the world still rely on it as part of the process of accepting prefixes from other networks.</p><p>To help ensure your prefix is accepted by adjacent networks globally, Cloudflare automatically generates a document on your behalf to be distributed in place of an LOA. This document describes the checks we have carried out to confirm that we are authorized to originate the customer prefix, and confirms the presence of valid ROAs authorizing our origination of it. In this way we are able to support the workflows of network operators we connect to who rely upon LOAs, without our customers bearing the burden of generating them.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1GimIe80gJn5PrRUGkEMpF/130d2590e45088d58ac62ab2240f4d5c/image1.png" />
          </figure>
    <div>
      <h2>Staying away from black holes</h2>
      <a href="#staying-away-from-black-holes">
        
      </a>
    </div>
    <p>One concern in designing the self-serve API was the trade-off between giving customers flexibility and implementing the necessary safeguards so that an IP prefix is never advertised without a matching service binding. If that were to happen, Cloudflare would be advertising a prefix with no idea what to do with the traffic when we receive it! We call this “blackholing” traffic. To handle this, we introduced the requirement of a default service binding — i.e., a service binding that spans the entire range of the onboarded IP prefix.</p><p>A customer can later layer different service bindings on top of their default service binding via <a href="https://blog.cloudflare.com/your-ips-your-rules-enabling-more-efficient-address-space-usage/"><u>multiple service bindings</u></a>, like putting CDN on top of a default Spectrum service binding. This way, a prefix can never be advertised without a service binding, and customers’ traffic is never blackholed.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/20QAM5GITJ5m5kYkNlh701/82812d202ffa7b9a4e46838aa6c04937/image2.png" />
          </figure>
    <div>
      <h2>Getting started</h2>
      <a href="#getting-started">
        
      </a>
    </div>
    <p>Check out our <a href="https://developers.cloudflare.com/byoip/get-started/"><u>developer docs</u></a> for the most up-to-date documentation on how to onboard, advertise, and add services to your IP prefixes via our API. Remember that onboardings can be complex, so don’t hesitate to ask questions or reach out to our <a href="https://www.cloudflare.com/professional-services/"><u>professional services</u></a> team if you’d like us to do it for you.</p>
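<p>As a rough illustration, onboarding starts with a single API call against your account. The exact endpoint and request body may differ from this sketch; the developer docs linked above are authoritative, and the account ID, API token, and prefix here are placeholders:</p>

```shell
# Sketch: onboard a BYOIP prefix via the API (placeholder IDs and token)
curl "https://api.cloudflare.com/client/v4/accounts/$ACCOUNT_ID/addressing/prefixes" \
  --header "Authorization: Bearer $API_TOKEN" \
  --header "Content-Type: application/json" \
  --data '{"cidr": "192.0.2.0/24", "asn": 13335}'
```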
    <div>
      <h2>The future of network control</h2>
      <a href="#the-future-of-network-control">
        
      </a>
    </div>
    <p>The ability to script and integrate BYOIP management into existing workflows is a game-changer for modern network operations, and we’re only just getting started. In the months ahead, look for self-serve BYOIP in the dashboard, as well as self-serve BYOIP offboarding, to give customers even more control.</p><p>Cloudflare's self-serve BYOIP onboarding API gives customers unprecedented control and flexibility over their IP assets. Automating onboarding also strengthens security posture, moving away from manually reviewed PDFs and driving <a href="https://rpki.cloudflare.com/"><u>RPKI adoption</u></a>. By using these API calls, organizations can automate complex network tasks, streamline migrations, and build more resilient and agile network infrastructure.</p> ]]></content:encoded>
            <category><![CDATA[API]]></category>
            <category><![CDATA[Addressing]]></category>
            <category><![CDATA[BYOIP]]></category>
            <category><![CDATA[IPv4]]></category>
            <category><![CDATA[IPv6]]></category>
            <category><![CDATA[Spectrum]]></category>
            <category><![CDATA[CDN]]></category>
            <category><![CDATA[Magic Transit]]></category>
            <category><![CDATA[Egress]]></category>
            <category><![CDATA[Cloudflare Gateway]]></category>
            <category><![CDATA[RPKI]]></category>
            <category><![CDATA[Aegis]]></category>
            <category><![CDATA[Smart Shield]]></category>
            <guid isPermaLink="false">4usaEaUwShJ04VKzlMV0V9</guid>
            <dc:creator>Ash Pallarito</dc:creator>
            <dc:creator>Lynsey Haynes</dc:creator>
            <dc:creator>Gokul Unnikrishnan</dc:creator>
        </item>
        <item>
            <title><![CDATA[Your IPs, your rules: enabling more efficient address space usage]]></title>
            <link>https://blog.cloudflare.com/your-ips-your-rules-enabling-more-efficient-address-space-usage/</link>
            <pubDate>Mon, 19 May 2025 13:00:00 GMT</pubDate>
            <description><![CDATA[ IPv4 is expensive, and moving network resources around is hard. Previously, when customers wanted to use multiple Cloudflare services, they had to bring a new address range. ]]></description>
            <content:encoded><![CDATA[ <p>IPv4 addresses have become a costly commodity, driven by their growing scarcity. With the original pool of 4.3 billion addresses long exhausted, organizations must now rely on the secondary market to acquire them. Over the years, prices have surged, often exceeding $30–$50 USD per address, with <a href="https://auctions.ipv4.global/?cf_history_state=%7B%22guid%22%3A%22C255D9FF78CD46CDA4F76812EA68C350%22%2C%22historyId%22%3A6%2C%22targetId%22%3A%22B695D806845101070936062659E97ADD%22%7D"><u>costs</u></a> varying based on block size and demand. Given the scarcity, these prices are only going to rise, particularly for businesses that haven’t transitioned to <a href="https://blog.cloudflare.com/amazon-2bn-ipv4-tax-how-avoid-paying/"><u>IPv6</u></a>. This rising cost and limited availability have made efficient IP address management more critical than ever. In response, we’ve evolved how we handle BYOIP (<a href="https://blog.cloudflare.com/bringing-your-own-ips-to-cloudflare-byoip/"><u>Bring Your Own IP</u></a>) prefixes to give customers greater flexibility.</p><p>Historically, when customers onboarded a BYOIP prefix, they were required to assign it to a single service, binding all IP addresses within that prefix to one service before it was advertised. Once set, the prefix's destination was fixed: traffic was directed exclusively to that service. If a customer wanted to use a different service, they had to onboard a new prefix or go through the cumbersome process of offboarding and re-onboarding the existing one.</p><p>As a step towards addressing this limitation, we’ve introduced a new level of flexibility: customers can now use parts of any prefix — whether it’s bound to Cloudflare CDN, Spectrum, or Magic Transit — for additional use with CDN or Spectrum. This enhancement provides much-needed flexibility, enabling businesses to optimize their IP address usage while keeping costs under control.</p>
    <div>
      <h2>The challenges of moving onboarded BYOIP prefixes between services</h2>
      <a href="#the-challenges-of-moving-onboarded-byoip-prefixes-between-services">
        
      </a>
    </div>
    <p>Migrating BYOIP prefixes dynamically between Cloudflare services is no trivial task, especially with thousands of servers capable of accepting and processing connections. The problem required overcoming several technical challenges related to IP address management, kernel-level bindings, and orchestration. </p>
    <div>
      <h3>Dynamic reallocation of prefixes across services</h3>
      <a href="#dynamic-reallocation-of-prefixes-across-services">
        
      </a>
    </div>
    <p>When configuring an IP prefix for a service, we need to update IP address lists and firewall rules on each of our servers to allow only the traffic we expect for that service, such as opening ports 80 and 443 to allow HTTP and HTTPS traffic for the Cloudflare CDN. We use Linux <a href="https://en.wikipedia.org/wiki/Iptables#:~:text=iptables%20is%20a%20user%2Dspace,to%20treat%20network%20traffic%20packets."><u>iptables</u></a> and <a href="https://en.wikipedia.org/wiki/Iptables"><u>IP sets</u></a> for this.</p><p>Migrating IP prefixes to a different service involves dynamically reassigning them to different IP sets and iptables rules. This requires automated updates across a large-scale distributed environment.</p><p>As prefixes shift between services, it is critical that servers update their IP sets and iptables rules dynamically to ensure traffic is correctly routed. Failure to do so could lead to routing loops or dropped connections.</p>
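<p>The pattern looks roughly like the following. The set names and rules here are illustrative, not Cloudflare’s actual configuration:</p>

```shell
# Create sets holding prefixes bound to each service
ipset create cdn_prefixes hash:net
ipset create spectrum_prefixes hash:net
ipset add cdn_prefixes 192.0.2.0/24

# Accept only HTTP/HTTPS traffic destined for addresses in the CDN set
iptables -A INPUT -p tcp -m set --match-set cdn_prefixes dst \
  -m multiport --dports 80,443 -j ACCEPT

# Migrating the prefix to another service then means moving it between
# sets, rather than rewriting the rules themselves
ipset del cdn_prefixes 192.0.2.0/24
ipset add spectrum_prefixes 192.0.2.0/24
```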
    <div>
      <h3>Updating Tubular – an eBPF-based IP and port binding service</h3>
      <a href="#updating-tubular-an-ebpf-based-ip-and-port-binding-service">
        
      </a>
    </div>
    <p>Most web applications bind to a list of IP addresses at startup, and listen on only those IPs until shutdown. To allow customers to change the IPs bound to each service dynamically, we needed a way to add and remove IPs from a running service without restarting it. <a href="https://blog.cloudflare.com/tubular-fixing-the-socket-api-with-ebpf/"><u>Tubular</u></a> is a <a href="https://blog.cloudflare.com/cloudflare-architecture-and-how-bpf-eats-the-world/"><u>BPF</u></a> program we wrote, running on Cloudflare servers, that allows services to listen on a single socket while dynamically updating the list of addresses routed to that socket over the lifetime of the service, without requiring a restart when those addresses change.</p><p>A significant engineering challenge was extending Tubular to support traffic destined for Cloudflare’s CDN. Without this enhancement, customers would be unable to leverage dynamic reassignment to bind prefixes onboarded through Spectrum to the Cloudflare CDN, limiting flexibility across services.</p><p>Cloudflare’s CDN depends on each server running an NGINX ingress proxy to terminate incoming connections. Due to the <a href="https://blog.cloudflare.com/how-we-built-pingora-the-proxy-that-connects-cloudflare-to-the-internet/"><u>scale and performance limitations of NGINX</u></a>, we are actively working to replace it by 2026. In the interim, however, we still depend on the current ingress proxy to reliably handle incoming connections.</p><p>One limitation is that this ingress proxy does not support <a href="https://systemd.io/"><u>systemd</u></a> socket activation, a mechanism Tubular relies on to integrate with other Cloudflare services on each server. For services that do support systemd socket activation, systemd independently starts the sockets for the owning service and passes them to Tubular, allowing Tubular to easily detect and route traffic to the correct terminating service.</p><p>Since this integration model is not feasible for the ingress proxy, an alternative solution was required. This was addressed by introducing a shared Unix domain socket between Tubular and the ingress proxy service on each server. Through this channel, the ingress proxy service explicitly transmits socket information to Tubular, enabling it to correctly register the sockets in its datapath.</p><p>The final challenge was deploying the Tubular-ingress proxy integration across the fleet of servers without disrupting active connections. As of April 2025, Cloudflare handles an average of 71 million HTTP requests per second, peaking at 100 million. To safely deploy at this scale, the necessary Tubular and ingress proxy configuration changes were staged across all Cloudflare servers without disrupting existing connections. The final step involved adding bindings — IP addresses and ports corresponding to Cloudflare CDN prefixes — to the Tubular configuration. These bindings direct connections through Tubular via the Unix sockets registered during the previous integration step. To minimize risk, bindings were gradually enabled in a controlled rollout across the global fleet.</p>
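<p>The core primitive here, handing an open socket from one process to another over a Unix domain socket, can be demonstrated in a few lines. This is an illustration of the underlying mechanism (SCM_RIGHTS ancillary data), not Tubular’s actual protocol:</p>

```python
import array
import socket

def send_fd(chan, fd):
    """Send a file descriptor over a connected Unix domain socket."""
    fds = array.array("i", [fd])
    chan.sendmsg([b"x"], [(socket.SOL_SOCKET, socket.SCM_RIGHTS, fds)])

def recv_fd(chan):
    """Receive a single file descriptor from the channel."""
    fds = array.array("i")
    msg, ancdata, flags, addr = chan.recvmsg(16, socket.CMSG_LEN(fds.itemsize))
    for level, ctype, data in ancdata:
        if level == socket.SOL_SOCKET and ctype == socket.SCM_RIGHTS:
            fds.frombytes(data[:fds.itemsize])
            return fds[0]
    raise RuntimeError("no file descriptor received")

# A socketpair stands in for the shared Unix socket between the ingress
# proxy (sender) and Tubular (receiver)
proxy_side, tubular_side = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)

# The "proxy" owns a listening socket and hands its descriptor over
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))
listener.listen()
send_fd(proxy_side, listener.fileno())

# The "receiver" ends up with a descriptor for the very same socket
received = socket.socket(fileno=recv_fd(tubular_side))
print(received.getsockname() == listener.getsockname())  # True
```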
    <div>
      <h4>Tubular data plane in action</h4>
      <a href="#tubular-data-plane-in-action">
        
      </a>
    </div>
    <p>This high-level representation of the Tubular data plane binds together the Layer 4 protocol (TCP), a prefix (192.0.2.0/24, i.e., 254 usable IP addresses), and port number 0 (meaning any port). When incoming packets match this combination, they are directed to the correct socket for the service — in this case, Spectrum.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5yQpYeTxPM7B8DZwLsQATs/3f488c5b37ef2358eacf779a42ac59d5/image4.png" />
          </figure><p>In the following example, TCP 192.0.2.200/32 port 443 has been upgraded to the Cloudflare CDN via the edge <a href="https://developers.cloudflare.com/api/resources/addressing/subresources/prefixes/subresources/service_bindings/"><u>Service Bindings API</u></a>. Tubular dynamically consumes this information, adding a new entry to its data-plane bindings and socket table. Using longest prefix match, all packets within the 192.0.2.0/24 range on any port will be routed to Spectrum, except those for 192.0.2.200/32 port 443, which will be directed to the Cloudflare CDN.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6wWlR9gWb6JEoyZm4iOpgQ/4a59bcab4a6731a53ea235500596c7f5/image1.png" />
          </figure>
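<p>The lookup behavior in the two diagrams above can be sketched as a longest-prefix-match over a small bindings table. This Python sketch illustrates the matching rules described; it is not Tubular’s actual BPF data structures:</p>

```python
import ipaddress

# Bindings table mirroring the example: (protocol, prefix, port) -> service.
# Port 0 means "any port".
bindings = {
    ("tcp", ipaddress.ip_network("192.0.2.0/24"), 0): "spectrum",
    ("tcp", ipaddress.ip_network("192.0.2.200/32"), 443): "cdn",
}

def lookup(proto, dst_ip, dst_port):
    """Return the service for a packet, preferring the most specific match."""
    addr = ipaddress.ip_address(dst_ip)
    best_key, best_service = None, None
    for (p, net, port), service in bindings.items():
        if p == proto and addr in net and port in (0, dst_port):
            # Rank by prefix length first, then prefer an exact port match
            key = (net.prefixlen, port != 0)
            if best_key is None or key > best_key:
                best_key, best_service = key, service
    return best_service

print(lookup("tcp", "192.0.2.10", 443))   # spectrum (only the /24 matches)
print(lookup("tcp", "192.0.2.200", 443))  # cdn (the /32 wins on prefix length)
print(lookup("tcp", "192.0.2.200", 22))   # spectrum (no port-443 binding applies)
```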
    <div>
      <h4>Coordination and orchestration at scale </h4>
      <a href="#coordination-and-orchestration-at-scale">
        
      </a>
    </div>
    <p>Our goal is to achieve a quick transition of IP address prefixes between services when initiated by customers, which requires a high level of coordination. We need to ensure that changes propagate correctly across all servers to maintain stability. Currently, when a customer migrates a prefix between services, there is a 4-6 hour window of uncertainty where incoming packets may be dropped due to a lack of guaranteed routing. To address this, we are actively implementing systems that will reduce this transition time from hours to just a matter of minutes, significantly improving reliability and minimizing disruptions.</p>
    <div>
      <h2>Smarter IP address management</h2>
      <a href="#smarter-ip-address-management">
        
      </a>
    </div>
    <p>Service Bindings are mappings that control whether traffic destined for a given IP address is routed to Magic Transit, the CDN pipeline, or the Spectrum pipeline.</p><p>Consider the example in the diagram below. One of our customers, a global finance infrastructure platform, is using BYOIP and has a /24 range bound to <a href="https://developers.cloudflare.com/spectrum/"><u>Spectrum</u></a> for DDoS protection of their TCP and UDP traffic. However, they are only using a few addresses in that range for their Spectrum applications, while the rest go unused. In addition, the customer is using Cloudflare’s CDN for their Layer 7 traffic and wants to set up <a href="https://developers.cloudflare.com/byoip/concepts/static-ips/"><u>Static IPs</u></a>, so that their customers can allowlist a consistent set of IP addresses owned and controlled by their own network infrastructure team. Instead of using up another block of address space, they asked us whether they could carve out those unused sub-ranges of the /24 prefix.</p><p>From there, we set out to determine how to selectively map sub-ranges of the onboarded prefix to different services using service bindings:</p><ul><li><p>192.0.2.0/24 is already bound to <b>Spectrum</b></p><ul><li><p>192.0.2.0/25 is updated and bound to <b>CDN</b></p></li><li><p>192.0.2.200/32 is also updated and bound to <b>CDN</b></p></li></ul></li></ul><p>Both the /25 and /32 are sub-ranges within the /24 prefix and will receive traffic directed to the CDN. All remaining IP addresses within the /24 prefix, unless explicitly bound, will continue to use the default Spectrum service binding.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/uwhMHBEuI1NHfp9qD9IFM/d2dcea59a8d9f962f03389831fd73851/image3.png" />
          </figure><p>As you can see in this example, this approach provides customers with greater control and agility over how their IP address space is allocated. Instead of rigidly assigning an entire prefix to a single service, users can now tailor their IP address usage to match specific workloads or deployment needs. Setting this up is straightforward — all it takes is a few HTTP requests to the <a href="https://developers.cloudflare.com/api/resources/addressing/subresources/prefixes/subresources/service_bindings/"><u>Cloudflare API</u></a>. You can define service bindings by specifying which IP addresses or subnets should be routed to CDN, Spectrum, or Magic Transit. This allows you to tailor traffic routing to match your architecture without needing to restructure your entire IP address allocation. The process remains consistent whether you're configuring a single IP address or splitting up larger subnets, making it easy to apply across different parts of your network. The foundational technical work addressing the underlying architectural challenges outlined above made it possible to streamline what could have been a complex setup into a straightforward series of API interactions.</p>
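<p>For example, binding the /25 sub-range to the CDN is a single request to the Service Bindings API. The endpoint shape follows the API reference linked above, but the account ID, prefix ID, service ID, and token in this sketch are placeholders:</p>

```shell
# Sketch: bind a sub-range of an onboarded prefix to the CDN
curl "https://api.cloudflare.com/client/v4/accounts/$ACCOUNT_ID/addressing/prefixes/$PREFIX_ID/bindings" \
  --header "Authorization: Bearer $API_TOKEN" \
  --header "Content-Type: application/json" \
  --data '{"cidr": "192.0.2.0/25", "service_id": "'"$CDN_SERVICE_ID"'"}'
```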
    <div>
      <h2>Conclusion</h2>
      <a href="#conclusion">
        
      </a>
    </div>
    <p>We envision a future where customers have granular control over how their traffic moves through Cloudflare’s global network, not just by service, but down to the port level. A single prefix could simultaneously power web applications on CDN, protect infrastructure through Magic Transit, and much more. This isn't just flexible routing, but programmable traffic orchestration across different services. What was once rigid and static becomes dynamic and fully programmable to meet each customer’s unique needs. </p><p>If you are an existing BYOIP customer using Magic Transit, CDN, or Spectrum, check out our <a href="https://developers.cloudflare.com/byoip/service-bindings/magic-transit-with-cdn/"><u>configuration guide here</u></a>. If you are interested in bringing your own IP address space and using multiple Cloudflare services on it, please reach out to your account team to enable setting up this configuration via <a href="https://developers.cloudflare.com/api/resources/addressing/subresources/prefixes/subresources/service_bindings/"><u>API</u></a> or reach out to sales@cloudflare.com if you’re new to Cloudflare.</p> ]]></content:encoded>
            <category><![CDATA[API]]></category>
            <category><![CDATA[Addressing]]></category>
            <category><![CDATA[IPv4]]></category>
            <category><![CDATA[BYOIP]]></category>
            <category><![CDATA[Spectrum]]></category>
            <category><![CDATA[CDN]]></category>
            <category><![CDATA[Magic Transit]]></category>
            <guid isPermaLink="false">7FAYMppkyZG4CEGdLEcLlR</guid>
            <dc:creator>Mark Rodgers</dc:creator>
            <dc:creator>Sphoorti Metri</dc:creator>
            <dc:creator>Ash Pallarito</dc:creator>
        </item>
        <item>
            <title><![CDATA[Extending Private Network Load Balancing to Layer 4 with Spectrum]]></title>
            <link>https://blog.cloudflare.com/extending-local-traffic-management-load-balancing-to-layer-4-with-spectrum/</link>
            <pubDate>Fri, 31 May 2024 13:00:07 GMT</pubDate>
            <description><![CDATA[ Cloudflare is adding support for all TCP and UDP traffic to our Private Network Load Balancing solution, extending the benefits of Private Network Load Balancing to more than just HTTP(S) traffic. ]]></description>
            <content:encoded><![CDATA[ <p>In 2023, Cloudflare <a href="https://blog.cloudflare.com/elevate-load-balancing-with-private-ips-and-cloudflare-tunnels-a-secure-path-to-efficient-traffic-distribution/"><u>introduced a new load balancing solution</u></a>, supporting Private Network Load Balancing. This gives organizations a way to balance HTTP(S) traffic between private or internal servers within a region-specific data center. Today, we are thrilled to extend those same capabilities to non-HTTP(S) traffic. This new feature is enabled by the integration of Cloudflare Spectrum, Cloudflare Tunnels, and Cloudflare load balancers, and is available to enterprise customers. Our customers can now use Cloudflare load balancers for all TCP and UDP traffic destined for private IP addresses, eliminating the need for expensive on-premise load balancers.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6wjcoenAQ9NFW4PyZiqjCQ/9921257aea4486200be51f070c1cb090/image1-15.png" />
            
            </figure>
    <div>
      <h3>A quick primer</h3>
      <a href="#a-quick-primer">
        
      </a>
    </div>
    <p>In this blog post, we will be referring to <a href="https://www.cloudflare.com/learning/performance/what-is-load-balancing/">load balancers</a> at either layer 4 or layer 7. This refers, of course, to layers of the <a href="https://www.cloudflare.com/learning/ddos/glossary/open-systems-interconnection-model-osi/">OSI model</a>, and more specifically to the ingress path used to reach the load balancer. <a href="https://www.cloudflare.com/learning/ddos/what-is-layer-7/">Layer 7</a>, also known as the Application Layer, is where the HTTP(S) protocol exists. Cloudflare is well known for our layer 7 capabilities, which are built around speeding up and protecting websites that run over HTTP(S). When we refer to layer 7 load balancers, we are referring to HTTP(S)-based services. Our layer 7 stack allows Cloudflare to apply services like CDN, WAF, Bot Management, DDoS protection, and more to a customer's website or application to improve performance, availability, and security.</p><p>Layer 4 load balancers operate at a lower level of the OSI model, called the <a href="https://www.cloudflare.com/learning/ddos/glossary/open-systems-interconnection-model-osi/#:~:text=4.%20The%20transport%20layer">Transport Layer</a>, which means they can be used to support a much broader set of services and protocols. At Cloudflare, our public layer 4 load balancers are enabled by a Cloudflare product called <a href="https://developers.cloudflare.com/spectrum/">Spectrum</a>. Spectrum works as a layer 4 reverse proxy. 
This places Cloudflare in front of any <a href="https://www.cloudflare.com/learning/ddos/what-is-a-ddos-attack/">DDoS attacks</a> that may be launched against Spectrum-proxied services, and by using Spectrum in front of your application, your private origin IP address is concealed, preventing bad actors from discovering and attacking your origin directly.</p><p>Services that use TCP or UDP for transport can leverage Spectrum with a Cloudflare load balancer. Layer 4 load balancing allows us to support other application layer protocols such as SSH, FTP, NTP, and SMTP since they operate over TCP and UDP. Given the breadth of services and protocols this represents, the treatment provided is more generalized. Cloudflare Spectrum supports features such as TLS/SSL offloading, DDoS protection, <a href="https://www.cloudflare.com/application-services/products/argo-smart-routing/">Argo Smart Routing</a>, and session persistence with our layer 4 load balancers.</p>
    <div>
      <h3>Cloudflare’s current load balancing capabilities</h3>
      <a href="#cloudflares-current-load-balancing-capabilities">
        
      </a>
    </div>
    <p>Before we dig into the new features we are announcing, it's important to understand what Cloudflare load balancing supports today and the challenges our customers face with regard to their load balancing needs.</p><p>There are three main load balancing traffic flows that Cloudflare supports today:</p><ol><li><p>Internet-facing load balancers connecting to publicly accessible origins operating at layer 7, which supports HTTP(S)</p></li><li><p>Internet-facing load balancers connecting to publicly accessible origins operating at layer 4 (Spectrum), which supports all TCP-based and UDP-based services such as SSH, FTP, NTP, SMTP, etc.</p></li><li><p>Publicly accessible load balancers connecting to <b>private</b> origins operating at layer 7 HTTP(S) over Cloudflare Tunnels</p></li></ol><p>One of the biggest advantages Cloudflare’s load balancing solutions offer our customers is that there is no hardware to purchase or maintain. Hardware-based load balancers are expensive to purchase, license, operate, and upgrade. “Need more bandwidth? Just buy and install this additional module.” “Need more features? Just buy and install this new license.” “Oh, your hardware load balancer is End-of-Life? Just purchase an entire new kit which we will EOL in a few years!” The upgrade or refresh cycle on a fully integrated hardware load balancer setup can take years and, by the time you finish the planning, implementation, and cutover, it might actually be time to start planning the next refresh.</p><p>Cloudflare eliminates all these concerns and lets you focus on innovation and growth. Your load balancers exist in every Cloudflare data center across the globe, in <a href="https://www.cloudflare.com/network/">over 300 cities</a>, with virtually unlimited scale and capacity. You never need to worry about bandwidth constraints, deployment locations, extra hardware modules, downtime, upgrades, or maintenance windows ever again. 
With Cloudflare’s global Anycast network, every customer connects to a nearby Cloudflare data center and load balancer, where relevant policies, rules, and steering are applied.</p>
    <div>
      <h3>Load balancing more than websites with Cloudflare Spectrum</h3>
      <a href="#load-balancing-more-than-websites-with-cloudflare-spectrum">
        
      </a>
    </div>
    <p>Today, we are excited to announce that Cloudflare Spectrum can now support load balancing traffic to private networks. The addition of private IP origin support for Cloudflare load balancers is very powerful and that's why we are extending that support to load balancing with Cloudflare <a href="https://developers.cloudflare.com/spectrum/">Spectrum</a> as well. This means that any set of private or internal applications that use TCP or UDP can now be locally load balanced via Cloudflare. These services will also benefit from Spectrum’s layer 3/4 DDoS protection and can leverage other features like session persistence without compromising security. So while the ingress to these load balancers is public, the origins to which they distribute traffic can all be private, inaccessible from the public Internet.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/63C3GATpDsujBJLBaboweL/b6a7adeda6c0b3800f45c3f7eb83bf6e/image3-7.png" />
            
            </figure><p>Ordinarily, load balancing to private networks would require expensive on-premise hardware or costly direct physical connections to cloud providers. But, by using Spectrum as the ingress path for TCP and UDP load balancing, customers can keep their origins completely protected and unreachable from the Internet and allow access exclusively through their Cloudflare load balancer – no expensive hardware required. Customers no longer need to manage complex ACLs or security settings to make sure only certain source IP addresses are connecting to the origins. These private origins can be hosted in private data centers, a public cloud, a private cloud, or on-premise.</p>
    <div>
      <h3>How we enabled Spectrum to support private networks</h3>
      <a href="#how-we-enabled-spectrum-to-support-private-networks">
        
      </a>
    </div>
    <p>All of our changes to create this feature center around integrations with Apollo, the unifying service created by the Cloudflare Zero Trust team. You can read their <a href="/from-ip-packets-to-http-the-many-faces-of-our-oxy-framework/">previous blog post on the Oxy framework</a> for more details on how Zero Trust handles and routes traffic. Apollo accepts incoming traffic from supported on-ramps, applies Zero Trust logic as configured by the customer, and then routes the traffic to egress via supported off-ramps. For example, Apollo enables clients connected securely using Cloudflare’s WARP client to communicate over Cloudflare Tunnels with private origins in a customer’s data center. Now, Apollo is being extended to do more.</p><p>When a user creates a load balanced Spectrum app, they choose a hostname and port, and select a Cloudflare load balancer as their origin. This allocates a hostname which will resolve to an IP address where Spectrum will listen for incoming traffic on the customer-configured port. Spectrum makes a call to Cloudflare's internal load balancing service, Director, which responds with the appropriate endpoint, to which Spectrum will proxy the connection. Previously, load balanced Spectrum apps only supported publicly addressable origins. Now, if the response from Director indicates that the traffic is destined for a private origin, Spectrum passes the private origin's IP address and <a href="https://developers.cloudflare.com/cloudflare-one/connections/connect-networks/private-net/tunnel-virtual-networks/">virtual network</a> ID to Apollo, which then proxies the traffic to the customer's private origin.</p><p>In short, new integrations between our Spectrum service and Apollo and between Apollo and Director have allowed us to expand our load balancing offerings not only to layer 4, but also enable us to leverage virtual networks to keep load balanced traffic private and off the public Internet. 
This also sets the stage for integrating load balancing with other traffic on-ramps and off-ramps, such as WARP, in the future. It also opens the door to a number of exciting possibilities like load balancing authenticated device traffic to private networks or even load balancing internal traffic that is never exposed to the public Internet.</p>
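As a purely illustrative model of the hand-off just described, the sketch below mimics the decision Spectrum makes after Director returns an endpoint: private origins are passed to Apollo along with their virtual network ID, while public origins are proxied directly. Apollo, Director, and virtual networks are the internal services and concepts named in this post; the Python types and return strings are invented for the example.

```python
from dataclasses import dataclass
from ipaddress import ip_address
from typing import Optional

@dataclass
class Endpoint:
    ip: str
    virtual_network_id: Optional[str] = None  # set only for private origins

def route_connection(endpoint: Endpoint) -> str:
    """Toy model of Spectrum's choice after the Director lookup:
    private origins are handed to Apollo (reached over Cloudflare Tunnel),
    public origins are proxied directly."""
    if ip_address(endpoint.ip).is_private:
        assert endpoint.virtual_network_id, "private origin needs a virtual network"
        return f"apollo:{endpoint.virtual_network_id}:{endpoint.ip}"
    return f"direct:{endpoint.ip}"

print(route_connection(Endpoint("1.1.1.1")))             # public origin
print(route_connection(Endpoint("10.0.0.5", "vnet-1")))  # private origin via Apollo
```

The virtual network ID matters because overlapping private ranges (every customer has a 10.0.0.0/8) are only unambiguous within a given virtual network.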
    <div>
      <h3>Looking to the future</h3>
      <a href="#looking-to-the-future">
        
      </a>
    </div>
    <p>We are excited to be releasing this new load balancing feature, which enables Cloudflare Spectrum to reach private IP endpoints. Cloudflare load balancers now support steering any TCP or UDP-based protocol over Cloudflare Tunnels to private IP endpoints, which are otherwise not accessible via the public Internet. You can learn more about how to configure this feature on our <a href="https://developers.cloudflare.com/load-balancing/local-traffic-management/">load balancing documentation</a> pages.</p><p>We are just getting started with our private network load balancing support. There is so much more to come, including support for load balancing internal traffic, enhanced layer 4 session affinity, new steering methods, additional traffic ingress methods, and more!</p> ]]></content:encoded>
            <category><![CDATA[Spectrum]]></category>
            <category><![CDATA[Load Balancing]]></category>
            <category><![CDATA[Cloudflare Zero Trust]]></category>
            <category><![CDATA[Private Network]]></category>
            <category><![CDATA[Private IP]]></category>
            <guid isPermaLink="false">6xgIcezZBRXIokMo0e7gMH</guid>
            <dc:creator>Chris Ward</dc:creator>
            <dc:creator>Brian Batraski</dc:creator>
            <dc:creator>Mathew Jacob</dc:creator>
        </item>
        <item>
            <title><![CDATA[Argo Smart Routing for UDP: speeding up gaming, real-time communications and more]]></title>
            <link>https://blog.cloudflare.com/turbo-charge-gaming-and-streaming-with-argo-for-udp/</link>
            <pubDate>Tue, 20 Jun 2023 13:00:40 GMT</pubDate>
            <description><![CDATA[ Today, Cloudflare is super excited to announce that we’re bringing traffic acceleration to customers’ UDP traffic. Now, you can improve the latency of UDP-based applications like video games, voice calls, and video meetings by up to 17%. ]]></description>
            <content:encoded><![CDATA[ 
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/64tixskqgONiSTACdvMbMX/3502932df801cd9691f432892495f379/image1-14.png" />
            
            </figure><p>Today, Cloudflare is super excited to announce that we’re bringing traffic acceleration to customers’ UDP traffic. Now, you can improve the latency of UDP-based applications like video games, voice calls, and video meetings by up to 17%. Combining the power of Argo Smart Routing (our traffic acceleration product) with UDP gives you the ability to supercharge your UDP-based traffic.</p>
    <div>
      <h3>When applications use TCP vs. UDP</h3>
      <a href="#when-applications-use-tcp-vs-udp">
        
      </a>
    </div>
    <p>Typically when people talk about the Internet, they think of websites they visit in their browsers, or apps that allow them to order food. This type of traffic is sent across the Internet via <a href="https://www.cloudflare.com/learning/ddos/glossary/hypertext-transfer-protocol-http/">HTTP</a> which is built on top of the <a href="https://www.cloudflare.com/learning/ddos/glossary/tcp-ip/">Transmission Control Protocol</a> (TCP). However, there’s a lot more to the Internet than just browsing websites and using apps. Gaming, <a href="https://www.cloudflare.com/developer-platform/solutions/live-streaming/">live video</a>, or tunneling traffic to different networks via a VPN are all common applications that don’t use HTTP or TCP. These popular applications leverage the <a href="https://www.cloudflare.com/learning/ddos/glossary/user-datagram-protocol-udp/">User Datagram Protocol</a> (or UDP for short). To understand why these applications use UDP instead of TCP, we’ll need to dig into how these different applications work.</p><p>When you load a web page, you generally want to see the <i>entire</i> web page; the website would be confusing if parts of it are missing. For this reason, HTTP uses TCP as a method of transferring website data. TCP ensures that if a packet ever gets lost as it crosses the Internet, that packet will be resent. Having a reliable protocol like TCP is generally a good idea when 100% of the information sent needs to be loaded. It’s worth noting that later HTTP versions like <a href="https://www.cloudflare.com/learning/performance/what-is-http3/">HTTP/3</a> actually deviated from TCP as a transmission protocol, but they still ensure packet delivery by handling packet retransmission using the <a href="/the-road-to-quic/">QUIC protocol</a>.</p><p>There are other applications that prioritize quickly sending real time data and are less concerned about perfectly delivering 100% of the data. 
Let’s explore Real-Time Communications (RTC) like video meetings as an example. If two people are streaming video live, all they care about is what is happening <i>now</i>. If a few packets are lost during the initial transmission, retransmission is usually too slow to render the lost packet data in the current video frame. TCP doesn’t really make sense in this scenario.</p><p>Instead, RTC protocols are built on top of UDP. TCP is like a formal back and forth conversation where every sentence matters. UDP is more like listening to your friend's stream of consciousness: you don’t care about every single bit as long as you get the gist of it. UDP transfers packet data with speed and efficiency without guaranteeing the delivery of those packets. This is perfect for applications like RTC where reducing latency is more important than occasionally losing a packet here or there. The same applies to gaming traffic; you generally want the most up-to-date information, and you don’t really care about retransmitting lost packets.</p><p>Gaming and RTC applications <i>really</i> care about <a href="https://www.cloudflare.com/learning/performance/glossary/what-is-latency/">latency</a>. Latency is the length of time it takes a packet to be sent to a server plus the length of time to receive a response from the server (called <a href="https://www.cloudflare.com/learning/cdn/glossary/round-trip-time-rtt/">round-trip time or RTT</a>). In the case of video games, the higher the latency, the longer it will take for you to see other players move and the less time you’ll have to react to the game. With enough latency, games become unplayable: if the players on your screen are constantly blipping around it’s near impossible to interact with them. In RTC applications like video meetings, you’ll experience a delay between yourself and your counterpart. 
You may find yourselves accidentally talking over each other which isn’t a great experience.</p><p>Companies that host gaming or RTC infrastructure often try to reduce latency by spinning up servers that are geographically closer to their users. However, it’s common to have two users that are trying to have a video call between distant locations like Amsterdam and Los Angeles. No matter where you install your servers, that's still a long distance for that traffic to travel. The longer the path, the higher the chances are that you're going to run into congestion along the way. Congestion is just like a traffic jam on a highway, but for networks. Sometimes certain paths get overloaded with traffic. This causes delays and packets to get dropped. This is where Argo Smart Routing comes in.</p>
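The contrast is easy to see at the socket level. Below is a minimal sketch of UDP's connectionless, fire-and-forget delivery: no handshake, no ordering, no retransmission. The datagram arrives reliably here only because it never leaves loopback; on the real Internet it could simply be dropped.

```python
import socket

def udp_roundtrip(payload: bytes) -> bytes:
    """Send one datagram over loopback and return what arrives.
    sendto() just fires the packet -- there is no connection setup,
    and UDP itself will never retransmit a lost datagram."""
    recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    recv.bind(("127.0.0.1", 0))        # let the OS pick a free port
    send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        send.sendto(payload, recv.getsockname())
        data, _ = recv.recvfrom(2048)  # loopback, so this won't be dropped
        return data
    finally:
        send.close()
        recv.close()

print(udp_roundtrip(b"video-frame-1"))
```

A TCP version of the same exchange would first need `listen()`, `connect()`, and `accept()` before a single byte of application data moves, which is exactly the overhead latency-sensitive applications avoid.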
    <div>
      <h3>Argo Smart Routing</h3>
      <a href="#argo-smart-routing">
        
      </a>
    </div>
    <p>Cloudflare customers that want the best cross-Internet application performance rely on Argo Smart Routing’s traffic acceleration to reduce latency. Argo Smart Routing is like the GPS of the Internet. It uses real-time global network performance measurements to accelerate traffic, actively route around Internet congestion, and increase your traffic’s stability by reducing packet loss and jitter.</p><p>Argo Smart Routing was launched in <a href="/argo/">May 2017</a>, and its first iteration focused on reducing website traffic latency. Since then, we’ve <a href="/argo-v2/">improved Argo Smart Routing</a> and also <a href="/argo-spectrum/">launched Argo Smart Routing for Spectrum TCP traffic</a>, which reduces latency in any TCP-based protocols. Today, we’re excited to bring the same Argo Smart Routing technology to customers’ UDP traffic, which will reduce latency, packet loss, and jitter in gaming and live audio/video applications.</p><p>Argo Smart Routing accelerates Internet traffic by sending millions of synthetic probes from every Cloudflare data center to the origin of every Cloudflare customer. These probes measure the latency of all possible routes between Cloudflare’s data centers and a customer’s origin. We then combine that with probes running between Cloudflare’s data centers to calculate possible routes. When an Internet user makes a request to an origin, Cloudflare consults the results of our real-time global latency measurements, examines Internet congestion data, and calculates the optimal route for customers’ traffic. To enable Argo Smart Routing for UDP traffic, Cloudflare extended the route computations typically used for HTTP and TCP traffic and applied them to UDP traffic.</p><p>We knew that Argo Smart Routing offered impressive benefits for HTTP traffic, reducing time to first byte by up to 30% on average for customers. 
But UDP can be treated differently by networks, so we were curious whether we would see a similar reduction in round-trip time for UDP. To validate, we ran a set of tests. We set up an origin in Iowa, USA and had a client connect to it from Tokyo, Japan. Compared to a regular Spectrum setup, we saw a decrease in round-trip time of up to 17.3% on average. For the standard setup, Spectrum was able to proxy packets to Iowa in 173.3 milliseconds on average. Comparatively, turning on Argo Smart Routing reduced the average round-trip time down to 143.3 milliseconds. The distance between those two cities is 6,074 miles (9,776 kilometers), meaning we've effectively moved the two closer to each other by over a thousand miles (or 1,609 km) just by turning on this feature.</p><p>We're incredibly excited about Argo Smart Routing for UDP and what our customers will use it for. If you're in gaming or real-time communications, or even have a different use case that you think would benefit from speeding up UDP traffic, please contact your account team today. We are currently in closed beta but are excited about accepting applications.</p> ]]></content:encoded>
            <category><![CDATA[Speed Week]]></category>
            <category><![CDATA[Argo Smart Routing]]></category>
            <category><![CDATA[Spectrum]]></category>
            <category><![CDATA[UDP]]></category>
            <category><![CDATA[Performance]]></category>
            <guid isPermaLink="false">5qKIhJCi7nIZIQudfOBtgh</guid>
            <dc:creator>Achiel van der Mandele</dc:creator>
            <dc:creator>Chris Draper</dc:creator>
        </item>
        <item>
            <title><![CDATA[Introducing Cloudflare’s new Network Analytics dashboard]]></title>
            <link>https://blog.cloudflare.com/network-analytics-v2-announcement/</link>
            <pubDate>Wed, 12 Apr 2023 13:49:14 GMT</pubDate>
            <description><![CDATA[ Learn how the new and improved Network Analytics dashboard provides security professionals insights into their DDoS attack and traffic landscape ]]></description>
            <content:encoded><![CDATA[ <p></p><p>We’re pleased to introduce Cloudflare’s new and improved Network Analytics dashboard. It’s now available to Magic Transit and Spectrum customers on the <a href="https://www.cloudflare.com/plans/enterprise/">Enterprise plan</a>.</p><p>The dashboard provides network operators better visibility into traffic behavior, firewall events, and <a href="https://www.cloudflare.com/learning/ddos/what-is-a-ddos-attack/">DDoS attacks</a> as observed across Cloudflare’s global network. Some of the dashboard’s data points include:</p><ol><li><p>Top traffic and attack attributes</p></li><li><p>Visibility into DDoS mitigations and Magic Firewall events</p></li><li><p>Detailed packet samples including full packets headers and metadata</p></li></ol>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/PVxW9aKtDRd0oHEc0vjRy/978384942aa6359443956f1a5e457db7/pasted-image-0-2.png" />
            
            </figure><p>Network Analytics - Drill down by various dimensions</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6qfVj235R9ZjySJXsMy5gz/fbf32c704b34a8f0e8751b9d11b8536a/pasted-image-0--1--2.png" />
            
            </figure><p>Network Analytics - View traffic by mitigation system</p><p>This dashboard was the outcome of a<a href="https://www.cloudflare.com/learning/cloud/how-to-refactor-applications/"> full refactoring</a> of our network-layer data logging pipeline. The new data pipeline is decentralized and much more flexible than the previous one — making it more resilient, performant, and scalable for when we add new mitigation systems, introduce new sampling points, and roll out new <a href="https://www.cloudflare.com/network-security/">services</a>. A technical deep-dive blog is coming soon, so stay tuned.</p><p>In this blog post, we will demonstrate how the dashboard helps network operators:</p><ol><li><p>Understand their network better</p></li><li><p>Respond to DDoS attacks faster</p></li><li><p>Easily generate security reports for peers and managers</p></li></ol>
    <div>
      <h2>Understand your network better</h2>
      <a href="#understand-your-network-better">
        
      </a>
    </div>
    <p>One of the main responsibilities network operators bear is ensuring the operational stability and reliability of their network. Cloudflare’s Network Analytics dashboard shows network operators where their traffic is coming from, where it’s heading, and what type of traffic is being delivered or mitigated. These insights, along with user-friendly drill-down capabilities, help network operators identify changes in traffic, surface abnormal behavior, and alert on critical events that require their attention.</p><p>Starting at the top, the Network Analytics dashboard shows network operators their traffic rates over time along with the total throughput. The entire dashboard is filterable: you can drill down using select-to-zoom, change the time range, and toggle between a packet or bit/byte view. This can help gain a quick understanding of traffic behavior and identify sudden dips or surges in traffic.</p><p>Cloudflare customers advertising their own IP prefixes from the Cloudflare network can also see annotations for BGP advertisement and withdrawal events. This provides additional context atop the traffic rates and behavior.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2kly1EwMZg2c1XP0zyC8uW/9ac35d6723360209eceb8a111353687e/pasted-image-0--2--2.png" />
            
            </figure><p>The Network Analytics dashboard time series and annotations</p>
    <div>
      <h3>Geographical accuracy</h3>
      <a href="#geographical-accuracy">
        
      </a>
    </div>
    <p>One of the many benefits of Cloudflare’s Network Analytics dashboard is its geographical accuracy. Identification of the traffic source usually involves correlating the source IP addresses to a city and country. However, network-layer traffic is subject to <a href="https://www.cloudflare.com/learning/ddos/glossary/ip-spoofing/">IP spoofing</a>. Malicious actors can spoof (alter) their source IP address to obfuscate their origin (or their botnet’s nodes) while attacking your network. Correlating the location (e.g., the source country) based on spoofed IPs would therefore result in <i>spoofed countries</i>. Using <i>spoofed countries</i> would skew the global picture network operators rely on.</p><p>To overcome this challenge and provide our users with accurate geographical information, we rely on the location of the Cloudflare data center where the traffic was ingested. We’re able to achieve geographical accuracy with high granularity because we operate data centers in over 285 locations around the world. We use BGP Anycast, which ensures traffic is routed to the nearest data center within BGP catchment.</p>
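As a rough illustration of that approach, the sketch below attributes traffic volume to the country of the ingesting data center rather than to the (spoofable) source IP. The sample records and field names are invented for the example; they are not the actual analytics schema.

```python
from collections import Counter

# Hypothetical packet samples annotated with the Cloudflare data center
# ("colo") that ingested them. The source IP may be spoofed by an
# attacker; the ingest location cannot be.
samples = [
    {"src_ip": "198.51.100.7", "colo_country": "DE", "bytes": 1200},
    {"src_ip": "203.0.113.99", "colo_country": "DE", "bytes": 900},
    {"src_ip": "192.0.2.44",   "colo_country": "US", "bytes": 1500},
]

def bytes_by_ingest_country(samples):
    """Aggregate traffic by where it entered the network, not by
    source-IP geolocation, so spoofed packets can't skew the map."""
    totals = Counter()
    for s in samples:
        totals[s["colo_country"]] += s["bytes"]
    return dict(totals)

print(bytes_by_ingest_country(samples))
```

Because Anycast routes each packet to a nearby data center, the ingest country is a good proxy for where the traffic actually originated, even when source addresses lie.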
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1clB9VJv46hKMH2wSR8VRz/e64683dccd75628fb506718f5c993aa8/pasted-image-0--3--2.png" />
            
            </figure><p>Traffic by Cloudflare data center country from the Network Analytics dashboard</p>
    <div>
      <h3>Detailed mitigation analytics</h3>
      <a href="#detailed-mitigation-analytics">
        
      </a>
    </div>
    <p>The dashboard lets network operators understand exactly what is happening to their traffic while it’s traversing the Cloudflare network. The <b>All traffic</b> tab provides a summary of attack traffic that was dropped by the three mitigation systems, and the clean traffic that was passed to the origin.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3IoMV0su6xFSMpFtSVIYhV/c306e94675592a7d8efd323ff4a0e274/pasted-image-0--4--2.png" />
            
            </figure><p>The All traffic tab in Network Analytics</p><p>Each additional tab focuses on one mitigation system, showing traffic dropped by the corresponding mitigation system and traffic that was passed through it. This provides network operators almost the same level of visibility as our internal support teams have. It allows them to understand exactly what Cloudflare systems are doing to their traffic and where in the Cloudflare stack an action is being taken.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6AFox2e1zeWhPhuyieGxxX/628c385e4258df31ba3465fafe3defb5/pasted-image-0--5--2.png" />
            
            </figure>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3Wv2Wh7dLTItJE0HJ5qwhX/ec0543b3ad661bd4f136de375f12de45/pasted-image-0--6--2.png" />
            
            </figure><p>Data path for Magic Transit customers</p><p>Using the detailed tabs, users can better understand the systems’ decisions and which rules are being applied to mitigate attacks. For example, in the <b>Advanced TCP Protection</b> tab, you can view how the system is classifying TCP connection states. In the screenshot below, you can see the distribution of packets according to connection state. For example, a sudden spike in <i>Out of sequence</i> packets may result in the system dropping them.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7npFkvFLyWz2jJBaWvZARC/5bc1aff730056de4a84f43f69bbb290d/pasted-image-0--7--2.png" />
            
            </figure><p>The Advanced TCP Protection tab in Network Analytics</p><p>Note that the available tabs differ slightly for Spectrum customers, who do not have access to the <b>Advanced TCP Protection</b> and <b>Magic Firewall</b> tabs and only see the first two tabs.</p>
    <div>
      <h2>Respond to DDoS attacks faster</h2>
      <a href="#respond-to-ddos-attacks-faster">
        
      </a>
    </div>
    <p>Cloudflare detects and <a href="https://www.cloudflare.com/learning/ddos/ddos-mitigation/">mitigates</a> the majority of DDoS attacks automatically. However, when a network operator responds to a sudden increase in traffic or a CPU spike in their data centers, they need to understand the nature of the traffic. Is this a legitimate surge due to a new game release for example, or an unmitigated DDoS attack? In either case, they need to act quickly to ensure there are no disruptions to critical services.</p><p>The Network Analytics dashboard can help network operators quickly pattern traffic by switching the time-series’ grouping dimensions. They can then use that pattern to drop packets using the Magic Firewall. The default dimension is the <i>outcome</i> indicating whether traffic was <i>dropped</i> or <i>passed</i>. But by changing the time series dimension to another field such as the <i>TCP flag</i>, <i>Packet size</i>, or <i>Destination port</i> a pattern can emerge.</p><p>In the example below, we have zoomed in on a surge of traffic. By setting the <i>Protocol</i> field as the grouping dimension, we can see that there is a 5 Gbps surge of UDP packets (totalling at 840 GB throughput out of 991 GB in this time period). This is clearly not the traffic we want, so we can hover and click the UDP indicator to filter by it.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2LxunY9V7W71IsqmKjgCnp/b1153917c8b4a98e9e1cbea56ac21a81/pasted-image-0--8--2.png" />
            
            </figure><p>Distribution of a DDoS attack by IP protocols</p><p>We can then continue to pattern the traffic, and so we set the <i>Source port</i> to be the grouping dimension. We can immediately see that, in this case, the majority of traffic (838 GB) is coming from source port 123. That’s no bueno, so let’s filter by that too.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/64JDpgWXxeL3LkWCI3tgC4/f445dadfe59295be155490730b4321b7/pasted-image-0--9--3.png" />
            
            </figure><p>The UDP flood grouped by source port</p><p>We can continue iterating to identify the main pattern of the surge. An example of a field that is not necessarily helpful in this case is the <i>Destination port</i>. The time series only shows us the top five ports, but we can already see that the traffic is quite distributed across ports.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5Nhvhx8T1CgxVRsc2HfNZc/11efa5267579e721bd1b774bbe7d3f43/pasted-image-0--10--1.png" />
            
            </figure><p>The attack targets multiple destination ports</p><p>We move on to see what other fields can contribute to our investigation. Using the <i>Packet size</i> dimension yields good results. Over 771 GB of the traffic is delivered in 286-byte packets.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3VlHeil6yl6dAJOIHYez5B/bac75bc13c10553e7cb38b8b4d5ab20c/pasted-image-0--11--1.png" />
            
            </figure><p>Zooming in on a UDP flood originating from source port 123</p><p>Assuming that our attack is now sufficiently patterned, we can create a Magic Firewall rule that blocks it by combining those fields. You can add further fields to ensure you do not impact legitimate traffic. For example, if the attack is only targeting a single prefix (e.g., 192.0.2.0/24), you can limit the scope of the rule to that prefix.</p>
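<p>For illustration, combining the fields identified above could yield a rule expression along the lines of the one below. This is only a sketch; verify the exact field names and syntax against the Magic Firewall rules language reference before deploying it:</p>

```
(ip.proto eq "udp" and udp.srcport eq 123 and ip.len eq 286 and ip.dst in {192.0.2.0/24})
```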
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6QiFHwR98R2E7d7MytTcKd/d3584d9327e98458705222b4b8b76aa3/pasted-image-0--12--1.png" />
            
            </figure><p>Creating a Magic Firewall rule directly from within the analytics dashboard</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5q6dCn1VJGFhhSrJGGDKPg/62ede0a1b6cee6bd3aefa75ea5bfec48/pasted-image-0--13--1.png" />
            
            </figure><p>Creating a Magic Firewall rule to block a UDP flood</p><p>If needed for attack mitigation or network troubleshooting, you can also view and export packet samples along with the packet headers. This can help you identify the pattern and sources of the traffic.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1SwIswIVMdSCcO7Rvz0Kbz/19d4622d9c54485fcd34955cef870fcc/pasted-image-0--14--2.png" />
            
            </figure><p>Example of packet samples with one sample expanded</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5K3EgUT2eVMpVb1r891m4U/5a19ea1007dc9aa4851a005f5133dbfd/pasted-image-0--15--1.png" />
            
            </figure><p>Example of a packet sample with the header sections expanded</p>
    <div>
      <h2>Generate reports</h2>
      <a href="#generate-reports">
        
      </a>
    </div>
    <p>Another important role of the network security team is to provide decision makers with an accurate view of their threat landscape and network security posture. Understanding both enables teams and decision makers to prepare, and to ensure their organization is protected and its critical services remain available and performant. This is where, again, the Network Analytics dashboard comes in. Network operators can use the dashboard to understand their threat landscape: which endpoints are being targeted, by which types of attacks, where the attacks are coming from, and how all of this compares to the previous period.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/IlZua2klazwcmqtUAlTWv/64364eabea6e2d6b20be6f48348a373d/pasted-image-0--16--1.png" />
            
            </figure><p>Dynamic, adaptive executive summary</p><p>Using the Network Analytics dashboard, users can create a custom report — filtered and tuned to provide their decision makers a clear view of the attack landscape that’s relevant to them.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5MhuL9MGK0CvRQTE4be1xP/246d6e6334970598d62f4a7a09c2c127/pasted-image-0--17-.png" />
            
            </figure><p>In addition, Magic Transit and Spectrum users also receive an automated weekly Network DDoS Report which includes key insights and trends.</p>
    <div>
      <h2>Extending visibility from Cloudflare’s vantage point</h2>
      <a href="#extending-visibility-from-cloudflares-vantage-point">
        
      </a>
    </div>
    <p>As we’ve seen in many cases, being unprepared can cost organizations substantial revenue, damage their reputation, and reduce users’ trust, as well as <i>burn out</i> teams that need to constantly <i>put out fires</i> reactively. Furthermore, impact to organizations that operate critical infrastructure, such as healthcare, water, and electricity, can cause very serious real-world problems, e.g., hospitals not being able to provide care for patients.</p><p>The Network Analytics dashboard aims to reduce the effort and time it takes network teams to investigate and resolve issues, as well as to simplify and automate security reporting. The data is also available via the GraphQL API and Logpush, allowing teams to integrate it into their internal systems and cross-reference it with additional data points.</p><p>To learn more about the Network Analytics dashboard, refer to the <a href="https://developers.cloudflare.com/analytics/network-analytics/">developer documentation</a>.</p> ]]></content:encoded>
            <category><![CDATA[DDoS]]></category>
            <category><![CDATA[Magic Transit]]></category>
            <category><![CDATA[Spectrum]]></category>
            <category><![CDATA[Product News]]></category>
            <guid isPermaLink="false">75Ag5427StzOYYMleu9WEo</guid>
            <dc:creator>Omer Yoachimik</dc:creator>
        </item>
        <item>
            <title><![CDATA[Introducing Advanced DDoS Alerts]]></title>
            <link>https://blog.cloudflare.com/advanced-ddos-alerts/</link>
            <pubDate>Mon, 19 Sep 2022 14:00:00 GMT</pubDate>
            <description><![CDATA[ Cloudflare’s Advanced DDoS Alerts provide tailored and actionable notifications in real-time ]]></description>
            <content:encoded><![CDATA[ 
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5OhrpFLnW366qu5hsOXYKA/adc3a7bb75c54d7a0d0911e0194a97a4/image9-2.png" />
            
            </figure><p>We’re pleased to introduce Advanced DDoS Alerts. Advanced DDoS Alerts are customizable and provide users the flexibility they need when managing many Internet properties. Users can easily define which alerts they want to receive — for which DDoS attack sizes, protocols and for which Internet properties.</p><p>This release includes two types of Advanced DDoS Alerts:</p><ol><li><p><b>Advanced HTTP DDoS Attack Alerts</b> - Available to WAF/CDN customers on the <a href="https://www.cloudflare.com/plans/enterprise/">Enterprise plan</a>, who have also subscribed to the Advanced DDoS Protection service.</p></li><li><p><b>Advanced L3/4 DDoS Attack Alerts</b> - Available to Magic Transit and Spectrum BYOIP customers on the Enterprise plan.</p></li></ol><p>Standard DDoS Alerts are available to customers on all plans, including the <a href="https://www.cloudflare.com/plans/free/">Free plan</a>. Advanced DDoS Alerts are part of Cloudflare’s Advanced DDoS service.</p>
    <div>
      <h3>Why alerts?</h3>
      <a href="#why-alerts">
        
      </a>
    </div>
    <p><a href="https://www.cloudflare.com/learning/ddos/what-is-a-ddos-attack/">Distributed Denial of Service attacks</a> are cyber attacks that aim to take down your Internet properties and make them unavailable for your users. As early as 2017, Cloudflare pioneered the <a href="/unmetered-mitigation/">Unmetered DDoS Protection</a> to provide all customers with DDoS protection, without limits, to ensure that their Internet properties remain available. We’re able to provide this level of commitment to our customers thanks to our <a href="/deep-dive-cloudflare-autonomous-edge-ddos-protection/">automated DDoS protection systems</a>. But if the systems operate automatically, why even be alerted?</p><p>Well, to put it plainly, when our DDoS <a href="https://www.cloudflare.com/ddos/">protection systems</a> kick in, they insert ephemeral rules inline to mitigate the attack. Many of our customers operate business critical applications and services. When our systems make a decision to insert a rule, customers might want to be able to verify that all the malicious traffic is mitigated, and that legitimate user traffic is not. Our DDoS alerts begin firing as soon as our systems make a mitigation decision. Therefore, by informing our customers about a decision to insert a rule in real time, they can observe and verify that their Internet properties are both protected and available.</p>
    <div>
      <h3>Managing many Internet properties</h3>
      <a href="#managing-many-internet-properties">
        
      </a>
    </div>
    <p>The <i>standard</i> DDoS Alerts alert you on DDoS attacks that target any and all of your Cloudflare-protected Internet properties. However, some of our customers may manage large numbers of Internet properties ranging from hundreds to hundreds of thousands. The <i>standard</i> DDoS Alerts would notify users every time one of those properties would come <a href="https://www.cloudflare.com/ddos/under-attack/">under attack</a> — which could become very noisy.</p><p>The Advanced DDoS Alerts address this concern by allowing users to select the specific Internet properties that they want to be notified about; zones and hostnames for WAF/CDN customers, and IP prefixes for Magic Transit and Spectrum BYOIP customers.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2Cjr6FmfEQkyF3MvWlHOJj/72bcc9434cab8ca418e99a90edf038eb/image5-3.png" />
            
            </figure><p>Creating an Advanced HTTP DDoS Attack Alert: selecting zones and hostnames</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1UYZi1j82vvbpmwEe8zRhm/30b5f12e2d79bec798daa8bc80be86a6/image8-1.png" />
            
            </figure><p>Creating an Advanced L3/4 DDoS Attack Alert: selecting prefixes</p>
    <div>
      <h3>One (attack) size doesn’t fit all</h3>
      <a href="#one-attack-size-doesnt-fit-all">
        
      </a>
    </div>
    <p>The <i>standard</i> DDoS Alerts alert you on DDoS attacks of any size. Well, almost any size. We implemented minimal alert thresholds to avoid spamming our customers’ email inboxes. Those thresholds are very low and not customer-configurable. As we’ve seen in the recent <a href="/ddos-attack-trends-for-2022-q2/">DDoS trends report</a>, most attacks are very small, which is another reason why the <i>standard</i> DDoS Alerts could become noisy for customers that only care about very large attacks. On the opposite end of the spectrum, disabling alerts altogether would be too quiet for customers that do want to be notified about smaller attacks.</p><p>The Advanced DDoS Alerts let customers choose their own alert threshold. WAF/CDN customers can define the minimum request-per-second rate of an HTTP DDoS attack alert. Magic Transit and Spectrum BYOIP customers can define the packet-per-second and Megabit-per-second rates of an L3/4 DDoS attack alert.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1nKOY9GmZnN4Wz77ryXsrU/49d88d5345b37974dc3c1a414cd8f11a/image1-13.png" />
            
            </figure><p>Creating an Advanced HTTP DDoS Attack Alert: defining request rate</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6Yb88105uQg4rCgeQOUZbx/0bd4d6daeb4e1b9b62d0a915589ba7f0/image4-4.png" />
            
            </figure><p>Creating an Advanced L3/4 DDoS Attack Alert: defining packet/bit rate</p>
    <div>
      <h3>Not all protocols are created equal</h3>
      <a href="#not-all-protocols-are-created-equal">
        
      </a>
    </div>
    <p>As part of the Advanced L3/4 DDoS Alerts, we also let our users define the protocols to be alerted on. If a Magic Transit customer manages mostly UDP applications, they may not care about TCP-based DDoS attacks targeting them. Similarly, if a Spectrum BYOIP customer only cares about HTTP/TCP traffic, attacks over other protocols may be of no concern to them.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/vmCFMGgWywHa9JsYR8l9x/4d59b6626850fb65eb8ff5d3ea17ebea/image2-12.png" />
            
            </figure>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2A7pIeRAwd8Y79oYlMUB7B/70b2298c0d345d4ec97dc82f5c2405c7/image6-1.png" />
            
            </figure><p>Creating an Advanced L3/4 DDoS Attack Alert: selecting the protocols</p>
    <div>
      <h3>Creating an Advanced DDoS Alert</h3>
      <a href="#creating-an-advanced-ddos-alert">
        
      </a>
    </div>
    <p>We’ll show here how to create an Advanced <i>HTTP</i> DDoS Alert, but the process to create a L3/4 alert is similar. You can view a more detailed guide on our <a href="https://developers.cloudflare.com/ddos-protection/reference/alerts/">developers website</a>.</p><p>First, click <a href="https://dash.cloudflare.com/?to=/:account/notifications/create">here</a> or log in to your Cloudflare account, navigate to <b>Notifications</b> and click <b>Add.</b> Then select the <b>Advanced HTTP DDoS Attack Alert</b> or <b>Advanced L3/4 DDoS Attack Alert</b> (based on your eligibility). Give your alert a name, an optional description, add your preferred delivery method (e.g., Webhook) and click <b>Next</b>.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7GTgWL3bPJMRqMManrLTGK/7a26d2d542af56cc23a221f28beae4fb/image7-1.png" />
            
            </figure><p>Step 1: Creating an Advanced HTTP DDoS Attack Alert</p><p>Second, select the domains you’d like to be alerted on. You can also narrow it down to specific hostnames. Define the minimum request-per-second rate to be alerted on, click <b>Save,</b> and voilà.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5NS3PfATPZb8FZtu5t9Vtr/88799eb6174ef73d2e0ae2045a68766f/image3-8.png" />
            
            </figure><p>Step 2: Defining the Advanced HTTP DDoS Attack Alert conditions</p>
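<p>The same alert can also be created programmatically by POSTing a policy to the Notifications API endpoint <code>/accounts/{account_id}/alerting/v3/policies</code>. The sketch below is illustrative only: the <code>alert_type</code> value and the filter field names are assumptions that should be verified against the Notifications API documentation before use.</p>

```python
import json
import urllib.request


def build_policy(name, webhook_id, zone_ids, min_rps):
    # NOTE: "advanced_ddos_attack_l7_alert" and the filter keys below are
    # illustrative assumptions; check the Notifications API docs for the
    # exact schema and supported alert types.
    return {
        "name": name,
        "alert_type": "advanced_ddos_attack_l7_alert",
        "enabled": True,
        "mechanisms": {"webhooks": [{"id": webhook_id}]},
        "filters": {"zones": zone_ids, "requests_per_second": [str(min_rps)]},
    }


def create_policy(api_token, account_id, policy):
    # POST the policy to the account-level alerting endpoint.
    req = urllib.request.Request(
        f"https://api.cloudflare.com/client/v4/accounts/{account_id}/alerting/v3/policies",
        data=json.dumps(policy).encode(),
        headers={
            "Authorization": f"Bearer {api_token}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```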
    <div>
      <h3>Actionable alerts for making better decisions</h3>
      <a href="#actionable-alerts-for-making-better-decisions">
        
      </a>
    </div>
    <p>Cloudflare Advanced DDoS Alerts aim to provide our customers with configurable controls to make better decisions for their own environments. Customers can now be alerted on attacks based on which domain/prefix is being attacked, the size of the attack, and the protocol of the attack. We recognize that the power to configure and control DDoS attack alerts should ultimately be left up to our customers, and we are excited to announce the availability of this functionality.</p><p>Want to learn more about Advanced DDoS Alerts? Visit our <a href="https://developers.cloudflare.com/ddos-protection/reference/alerts/">developer site</a>.</p><p>Interested in upgrading to get Advanced DDoS Alerts? Contact your account team.</p><p>New to Cloudflare? <a href="https://www.cloudflare.com/plans/enterprise/discover/contact/">Speak to a Cloudflare expert</a>.</p>
    <div>
      <h3>Watch on Cloudflare TV</h3>
      <a href="#watch-on-cloudflare-tv">
        
      </a>
    </div>
    <div></div> ]]></content:encoded>
            <category><![CDATA[GA Week]]></category>
            <category><![CDATA[General Availability]]></category>
            <category><![CDATA[Advanced DDoS]]></category>
            <category><![CDATA[DDoS]]></category>
            <category><![CDATA[Magic Transit]]></category>
            <category><![CDATA[Spectrum]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[BYOIP]]></category>
            <guid isPermaLink="false">4xaJFRz4JI0tzYVZSB09B9</guid>
            <dc:creator>Omer Yoachimik</dc:creator>
        </item>
        <item>
            <title><![CDATA[Introducing Cloudflare Adaptive DDoS Protection - our new traffic profiling system for mitigating DDoS attacks]]></title>
            <link>https://blog.cloudflare.com/adaptive-ddos-protection/</link>
            <pubDate>Mon, 19 Sep 2022 13:45:00 GMT</pubDate>
            <description><![CDATA[ Cloudflare’s new Adaptive DDoS Protection system learns your unique traffic patterns and constantly adapts to protect you against sophisticated DDoS attacks ]]></description>
            <content:encoded><![CDATA[ <p></p><p>Every Internet property is unique, with its own traffic behaviors and patterns. For example, a website may only expect user traffic from certain geographies, and a network might only expect to see a limited set of protocols.</p><p>Understanding that the traffic patterns of each Internet property are unique is what led us to develop the Adaptive DDoS Protection system. Adaptive DDoS Protection joins our existing suite of <a href="/deep-dive-cloudflare-autonomous-edge-ddos-protection/">automated DDoS defenses</a> and takes it to the next level. The new system learns your unique traffic patterns and adapts to <a href="https://www.cloudflare.com/learning/ddos/how-to-prevent-ddos-attacks/">protect against sophisticated DDoS attacks</a>.</p><p>Adaptive DDoS Protection is now generally available to Enterprise customers:</p><ul><li><p><b>HTTP Adaptive DDoS Protection</b> - available to WAF/CDN customers on the <a href="https://www.cloudflare.com/plans/enterprise/">Enterprise plan</a>, who have also subscribed to the Advanced DDoS Protection service.</p></li><li><p><b>L3/4 Adaptive DDoS Protection</b> - available to Magic Transit and Spectrum customers on an Enterprise plan.</p></li></ul>
    <div>
      <h3>Adaptive DDoS Protection learns your traffic patterns</h3>
      <a href="#adaptive-ddos-protection-learns-your-traffic-patterns">
        
      </a>
    </div>
    <p>The Adaptive DDoS Protection system creates a traffic profile by looking at a customer’s maximal traffic rates every day, over the past seven days. The profiles are recalculated daily using the past seven-day history, and we store the maximal traffic rates seen for every predefined dimension value. Every profile uses one dimension; these dimensions include the source country of the request, the country where the Cloudflare data center that received the IP packet is located, user agent, IP protocol, destination ports, and more.</p><p>For example, for the <a href="/location-aware-ddos-protection/">profile that uses the source country as a dimension</a>, the system will log the maximal traffic rates seen per country, e.g., 2,000 requests per second (rps) for Germany, 3,000 rps for France, 10,000 rps for Brazil, and so on. This example is for HTTP traffic, but Adaptive DDoS Protection also profiles L3/4 traffic for our Magic Transit and Spectrum Enterprise customers.</p><p>One more note on the maximal rates: we actually use the 95th percentile rates. That is, we look at the maximal rates and discard the top 5% of the highest rates, which eliminates outliers from the calculations.</p><p>Calculating traffic profiles is done asynchronously, meaning that it does not add any latency to our customers’ traffic. The system then distributes a compact profile representation across our network, where it can be consumed by our <a href="https://www.cloudflare.com/ddos/">DDoS protection systems</a> to detect and mitigate DDoS attacks in a much more cost-efficient manner.</p><p>In addition to the traffic profiles, Adaptive DDoS Protection also leverages Cloudflare’s <a href="https://developers.cloudflare.com/bots/concepts/bot-score/#machine-learning">Machine Learning</a>-generated <a href="https://developers.cloudflare.com/bots/concepts/bot-score/">Bot Scores</a> as an additional signal to differentiate between legitimate spikes in user traffic that deviate from the traffic profile and spikes of automated, potentially malicious traffic.</p>
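<p>Conceptually, the per-dimension profiling described above boils down to keeping a 95th-percentile rate per dimension value. The following is a simplified sketch for illustration only, not Cloudflare’s actual implementation:</p>

```python
def p95(rates):
    """95th-percentile rate: sort the samples and discard the top 5% as outliers."""
    ordered = sorted(rates)
    idx = max(0, int(len(ordered) * 0.95) - 1)
    return ordered[idx]


def build_profile(rates_by_value):
    """Map each dimension value (e.g. a source country) to its profiled rate.

    rates_by_value: dimension value -> maximal rates observed over the past
    seven days. In the real system this is recalculated every day.
    """
    return {value: p95(samples) for value, samples in rates_by_value.items()}
```

For the source-country dimension, incoming per-country rates would then be compared against the profiled values to flag deviations.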
    <div>
      <h3>Out of the box and easy to use</h3>
      <a href="#out-of-the-box-and-easy-to-use">
        
      </a>
    </div>
    <p>Adaptive DDoS Protection just works out of the box. It automatically creates the profiles, and then customers can tweak and tune the settings as they need via <a href="https://developers.cloudflare.com/ddos-protection/managed-rulesets/">DDoS Managed Rules</a>. Customers can change the sensitivity level, leverage expression fields to create overrides (e.g. exclude <i>this</i> type of traffic), and change the mitigation action to tailor the behavior of the system to their specific needs and traffic patterns.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6avwDSeZVfreb140FKSB5e/f59e79bcdcb9e644d87fec94fcdc7d72/image2-11.png" />
            
            </figure><p>Adaptive DDoS Protection complements the existing DDoS protection systems, which leverage dynamic fingerprinting to detect and mitigate DDoS attacks. The two work in tandem to protect our customers from DDoS attacks. When Cloudflare customers onboard a new Internet property to Cloudflare, the dynamic fingerprinting protects them automatically and out of the box, without requiring any user action. Once Adaptive DDoS Protection learns their legitimate traffic patterns and creates a profile, users can turn it on to provide an extra layer of protection.</p>
    <div>
      <h3>Rules included as part of the Adaptive DDoS Protection</h3>
      <a href="#rules-included-as-part-of-the-adaptive-ddos-protection">
        
      </a>
    </div>
    <p>As part of this release, we’re pleased to announce the following capabilities as part of Cloudflare’s Adaptive DDoS Protection:</p>
<table>
<thead>
  <tr>
    <th rowspan="2"><span>Profiling Dimension</span></th>
    <th colspan="2"><span>Availability</span></th>
  </tr>
  <tr>
    <th><span>WAF/CDN customers on the Enterprise plan with Advanced DDoS</span></th>
    <th><span>Magic Transit &amp; Spectrum Enterprise customers</span></th>
  </tr>
</thead>
<tbody>
  <tr>
    <td><span>Origin errors</span></td>
    <td><span>✅</span></td>
    <td><span>❌</span></td>
  </tr>
  <tr>
    <td><span>Client IP Country &amp; region</span></td>
    <td><span>✅</span></td>
    <td><span>Coming soon</span></td>
  </tr>
  <tr>
    <td><span>User Agent (globally, not per customer*)</span></td>
    <td><span>✅</span></td>
    <td><span>❌</span></td>
  </tr>
  <tr>
    <td><span>IP Protocol</span></td>
    <td><span>❌</span></td>
    <td><span>✅</span></td>
  </tr>
  <tr>
    <td><span>Combination of IP Protocol and Destination Port</span></td>
    <td><span>❌</span></td>
    <td><span>Coming soon</span></td>
  </tr>
</tbody>
</table><p>*The User-Agent-aware feature analyzes, learns, and profiles the top user agents that we see across the Cloudflare network. This helps us identify DDoS attacks that leverage legacy or misconfigured user agents.</p><p>Excluding UA-aware DDoS Protection, Adaptive DDoS Protection rules are deployed in Log mode. Customers can observe the traffic that’s flagged, tweak the sensitivity if needed, and then deploy the rules in mitigation mode. You can follow the steps outlined in <a href="https://developers.cloudflare.com/ddos-protection/managed-rulesets/adjust-rules/false-positive/">this guide</a> to do so.</p>
    <div>
      <h3>Making the impact of DDoS attacks a thing of the past</h3>
      <a href="#making-the-impact-of-ddos-attacks-a-thing-of-the-past">
        
      </a>
    </div>
    <p>Our mission at Cloudflare is to help build a better Internet. The DDoS Protection team’s vision is derived from this mission: our goal is to make the impact of DDoS attacks a thing of the past. Cloudflare’s Adaptive DDoS Protection takes us one step closer to achieving that vision: making Cloudflare’s DDoS protection even more intelligent, sophisticated, and tailored to our customers’ unique traffic patterns and individual needs.</p><p>Want to learn more about Cloudflare’s Adaptive DDoS Protection? Visit our <a href="https://developers.cloudflare.com/ddos-protection/managed-rulesets/adaptive-protection/">developer site</a>.</p><p>Interested in upgrading to get access to Adaptive DDoS Protection? Contact your account team.</p><p>New to Cloudflare? <a href="https://www.cloudflare.com/plans/enterprise/discover/contact/">Speak to a Cloudflare expert</a>.</p>
    <div>
      <h3>Watch on Cloudflare TV</h3>
      <a href="#watch-on-cloudflare-tv">
        
      </a>
    </div>
    <div></div><p></p> ]]></content:encoded>
            <category><![CDATA[GA Week]]></category>
            <category><![CDATA[General Availability]]></category>
            <category><![CDATA[DDoS]]></category>
            <category><![CDATA[DDoS Alerts]]></category>
            <category><![CDATA[Advanced DDoS]]></category>
            <category><![CDATA[Magic Transit]]></category>
            <category><![CDATA[Spectrum]]></category>
            <category><![CDATA[Product News]]></category>
            <guid isPermaLink="false">7oc5ew54cAi5VUpN6q9ZtS</guid>
            <dc:creator>Omer Yoachimik</dc:creator>
        </item>
        <item>
            <title><![CDATA[Integrating Network Analytics Logs with your SIEM dashboard]]></title>
            <link>https://blog.cloudflare.com/network-analytics-logs/</link>
            <pubDate>Tue, 17 May 2022 15:46:30 GMT</pubDate>
            <description><![CDATA[ We’re excited to announce the availability of Network Analytics Logs for maximum visibility into L3/4 traffic and DDoS attacks ]]></description>
            <content:encoded><![CDATA[ <p></p><p>We’re excited to announce the availability of Network Analytics Logs. <a href="https://www.cloudflare.com/magic-transit/">Magic Transit</a>, <a href="https://www.cloudflare.com/magic-firewall/">Magic Firewall</a>, <a href="https://www.cloudflare.com/magic-wan/">Magic WAN</a>, and <a href="https://www.cloudflare.com/products/cloudflare-spectrum/">Spectrum</a> customers on the Enterprise plan can feed packet samples directly into storage services, <a href="https://www.cloudflare.com/network-services/solutions/network-monitoring-tools/">network monitoring tools</a> such as Kentik, or their <a href="https://www.cloudflare.com/learning/security/what-is-siem/">Security Information Event Management (SIEM)</a> systems such as Splunk to gain near real-time visibility into network traffic and <a href="https://www.cloudflare.com/learning/ddos/what-is-a-ddos-attack/">DDoS attacks</a>.</p>
    <div>
      <h2>What’s included in the logs</h2>
      <a href="#whats-included-in-the-logs">
        
      </a>
    </div>
    <p>By creating a Network Analytics Logs job, Cloudflare will continuously push logs of packet samples directly to the HTTP endpoint of your choice, including Websockets. The logs arrive in JSON format which makes them easy to parse, transform, and aggregate. The logs include packet samples of traffic dropped and passed by the following systems:</p><ol><li><p>Network-layer DDoS Protection Ruleset</p></li><li><p>Advanced TCP Protection</p></li><li><p>Magic Firewall</p></li></ol><p>Note that not all mitigation systems are applicable to all Cloudflare services. Below is a table describing which mitigation service is applicable to which Cloudflare service:</p>
<table>
<thead>
  <tr>
    <th rowspan="2"><span>Mitigation System</span></th>
    <th colspan="3"><span>Cloudflare Service</span></th>
  </tr>
  <tr>
    <th><span>Magic Transit</span></th>
    <th><span>Magic WAN</span></th>
    <th><span>Spectrum</span></th>
  </tr>
</thead>
<tbody>
  <tr>
    <td><span>Network-layer DDoS Protection Ruleset</span></td>
    <td><span>✅</span></td>
    <td><span>❌</span></td>
    <td><span>✅</span></td>
  </tr>
  <tr>
    <td><span>Advanced TCP Protection</span></td>
    <td><span>✅</span></td>
    <td><span>❌</span></td>
    <td><span>❌</span></td>
  </tr>
  <tr>
    <td><span>Magic Firewall </span></td>
    <td><span>✅</span></td>
    <td><span>✅</span></td>
    <td><span>❌</span></td>
  </tr>
</tbody>
</table><p>Packets are processed by the mitigation systems in the order outlined above. Therefore, a packet that passed all three systems may produce three packet samples, one from each system. This can be very insightful when troubleshooting, for example to understand where in the stack a packet was dropped. To avoid overcounting the total passed traffic, Magic Transit users should only count the passed packets from the last mitigation system, Magic Firewall.</p><p>An example of a packet sample log:</p>
            <pre><code>{"AttackCampaignID":"","AttackID":"","ColoName":"bkk06","Datetime":1652295571783000000,"DestinationASN":13335,"Direction":"ingress","IPDestinationAddress":"(redacted)","IPDestinationSubnet":"/24","IPProtocol":17,"IPSourceAddress":"(redacted)","IPSourceSubnet":"/24","MitigationReason":"","MitigationScope":"","MitigationSystem":"magic-firewall","Outcome":"pass","ProtocolState":"","RuleID":"(redacted)","RulesetID":"(redacted)","RulesetOverrideID":"","SampleInterval":100,"SourceASN":38794,"Verdict":"drop"}</code></pre>
            <p>All the available log fields are documented here: <a href="https://developers.cloudflare.com/logs/reference/log-fields/account/network_analytics_logs/">https://developers.cloudflare.com/logs/reference/log-fields/account/network_analytics_logs/</a></p>
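<p>As an example of consuming these logs, a rough estimate of the total passed packet count can be obtained by counting only Magic Firewall “pass” samples and scaling each one by its <code>SampleInterval</code> (each sample stands in for roughly that many real packets). A minimal sketch:</p>

```python
import json


def estimate_passed_packets(log_lines):
    """Estimate total passed ingress packets from Network Analytics Logs.

    A packet that traverses all three mitigation systems can produce up to
    three samples, so to avoid overcounting we count only samples passed by
    the last system in the chain, Magic Firewall, and scale each sample by
    its SampleInterval.
    """
    total = 0
    for line in log_lines:
        sample = json.loads(line)
        if (sample["MitigationSystem"] == "magic-firewall"
                and sample["Outcome"] == "pass"):
            total += sample["SampleInterval"]
    return total
```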
    <div>
      <h2>Setting up the logs</h2>
      <a href="#setting-up-the-logs">
        
      </a>
    </div>
    <p>In this walkthrough, we will demonstrate how to feed the Network Analytics Logs into Splunk via <a href="https://www.postman.com/">Postman</a>. At this time, it is only possible to set up Network Analytics Logs via API. Setting up the logs requires three main steps:</p><ol><li><p>Create a Cloudflare API token.</p></li><li><p>Create a Splunk Cloud HTTP Event Collector (HEC) token.</p></li><li><p>Create and enable a Cloudflare Logpush job.</p></li></ol><p>Let’s get started!</p>
    <div>
      <h3>1) Create a Cloudflare API token</h3>
      <a href="#1-create-a-cloudflare-api-token">
        
      </a>
    </div>
    <ol><li><p>Log in to your Cloudflare account and navigate to <b>My Profile.</b></p></li><li><p>On the left-hand side, in the collapsing navigation menu, click <b>API Tokens.</b></p></li><li><p>Click <b>Create Token</b> and then, under <b>Custom token</b>, click <b>Get started.</b></p></li><li><p>Give your custom token a name, and select an account-scoped permission to edit Logs. You can scope the token to all of your accounts or to a specific subset.</p></li><li><p>At the bottom, click <b>Continue to summary</b>, and then <b>Create Token</b>.</p></li><li><p><b>Copy</b> and save your token. You can also test your token with the provided snippet in Terminal.</p></li></ol><p>When you're using an API token, you don't need to provide your email address as part of the API credentials.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4HD1OtGie9s6CbcVPA7KUx/2f103d64615a7d44e9d3adebc1e12a5b/image5-17.png" />
            
            </figure><p>Read more about creating an API token on the Cloudflare Developers website: <a href="https://developers.cloudflare.com/api/tokens/create/">https://developers.cloudflare.com/api/tokens/create/</a></p>
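<p>If you'd rather script the token check than paste the snippet into Terminal, the verification call can be built with Python's standard library. This is a hedged sketch, not official tooling; it targets Cloudflare's documented <code>/user/tokens/verify</code> endpoint, and the token value is a placeholder.</p>

```python
import json
import urllib.request

VERIFY_URL = "https://api.cloudflare.com/client/v4/user/tokens/verify"

def build_token_verify_request(api_token: str) -> urllib.request.Request:
    """Build a GET request that asks Cloudflare whether the token is active."""
    return urllib.request.Request(
        VERIFY_URL,
        headers={"Authorization": f"Bearer {api_token}"},
    )

# To actually verify (network access and a real token required):
#   with urllib.request.urlopen(build_token_verify_request("YOUR_TOKEN")) as resp:
#       data = json.load(resp)
# A valid token returns "success": true and a result "status" of "active".
```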
    <div>
      <h3>2) Create a Splunk token for an HTTP Event Collector</h3>
      <a href="#2-create-a-splunk-token-for-an-http-event-collector">
        
      </a>
    </div>
    <p>In this walkthrough, we’re using a Splunk Cloud free trial, but <a href="https://developers.cloudflare.com/logs/get-started/enable-destinations/">you can use almost any service that can accept logs over HTTPS</a>. In some cases, if you’re using an on-premise SIEM solution, you may need to allowlist <a href="https://www.cloudflare.com/ips/">Cloudflare IP addresses</a> in your firewall to be able to receive the logs.</p><ol><li><p>Create a Splunk Cloud account. I created a trial account for the purpose of this blog.</p></li><li><p>In the Splunk Cloud dashboard, go to <b>Settings</b> &gt; <b>Data Inputs.</b></p></li><li><p>Next to <b>HTTP Event Collector</b>, click <b>Add new.</b></p></li><li><p>Follow the steps to create a token.</p></li><li><p>Copy your token and your allocated Splunk hostname, and save both for later.</p></li></ol>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3n4C580G6FxELEaOqPO7Wz/58c6f8fa136f71413b008ace676c3ca9/image2-41.png" />
            
            </figure><p>Read more about using Splunk with Cloudflare Logpush on the Cloudflare Developers website: <a href="https://developers.cloudflare.com/logs/get-started/enable-destinations/splunk/">https://developers.cloudflare.com/logs/get-started/enable-destinations/splunk/</a></p><p>Read more about creating an HTTP Event Collector token on Splunk’s website: <a href="https://docs.splunk.com/Documentation/Splunk/8.2.6/Data/UsetheHTTPEventCollector">https://docs.splunk.com/Documentation/Splunk/8.2.6/Data/UsetheHTTPEventCollector</a></p>
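<p>Before wiring up Logpush, you can confirm the collector accepts events by sending it a hand-made test event. A sketch, assuming Splunk's standard <code>/services/collector/event</code> endpoint on port 8088; the hostname and token are placeholders:</p>

```python
import json
import urllib.request

def build_hec_test_event(splunk_host: str, hec_token: str) -> urllib.request.Request:
    """Build a POST that sends one test event to a Splunk HTTP Event Collector.

    The /services/collector/event endpoint and the "Splunk <token>" auth scheme
    are standard HEC behavior; the hostname and token here are placeholders.
    """
    body = json.dumps({"event": "cloudflare logpush connectivity test"}).encode()
    return urllib.request.Request(
        f"https://{splunk_host}:8088/services/collector/event",
        data=body,
        headers={
            "Authorization": f"Splunk {hec_token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Sending it (requires a reachable HEC):
#   urllib.request.urlopen(build_hec_test_event("<your-host>.splunkcloud.com", "<hec-token>"))
# A healthy collector answers with {"text": "Success", "code": 0}.
```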
    <div>
      <h3>3) Create a Cloudflare Logpush job</h3>
      <a href="#3-create-a-cloudflare-logpush-job">
        
      </a>
    </div>
    <p>Creating and enabling a job is very straightforward. It requires only one API call to Cloudflare to create and enable a job.</p><p>To send the API calls I used <a href="https://www.postman.com/">Postman</a>, which is a user-friendly API client that was recommended to me by a colleague. It allows you to save and customize API calls. You can also use Terminal/CMD or any other API client/script of your choice.</p><p>One thing to note is that Network Analytics Logs are <b>account</b>-scoped. The API endpoint is therefore a tad different from what you would normally use for zone-scoped datasets such as HTTP request logs and DNS logs.</p><p>This is the endpoint for creating an account-scoped Logpush job:</p><p><code>https://api.cloudflare.com/client/v4/accounts/{account-id}/logpush/jobs</code></p><p>Your account identifier is a unique, 32-character string of numbers and letters. If you’re not sure what your account identifier is, log in to Cloudflare, select the appropriate account, and copy the string at the end of the URL.</p><p><code>https://dash.cloudflare.com/{account-id}</code></p><p>Then, set up a new request in Postman (or any other API client/CLI tool).</p><p>To successfully create a Logpush job, you’ll need the HTTP method, URL, Authorization token, and request body (data). The request body must include a destination configuration (<code>destination_conf</code>), the specified dataset (<code>network_analytics_logs</code>, in our case), and the token (your Splunk token).</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1SlAhohD7nH7Ge6ODVPiWT/132cb3aa127ba2cc4596b2fbc2f2c6e8/image1-48.png" />
            
            </figure><p><b>Method</b>:</p><p><code>POST</code></p><p><b>URL</b>:</p><p><code>https://api.cloudflare.com/client/v4/accounts/{account-id}/logpush/jobs</code></p><p><b>Authorization</b>: Define a Bearer authorization in the <b>Authorization</b> tab, or add it to the header, and add your Cloudflare API token.</p><p><b>Body</b>: Select <b>Raw</b> &gt; <b>JSON</b></p>
            <pre><code>{
    "destination_conf": "{your-unique-splunk-configuration}",
    "dataset": "network_analytics_logs",
    "token": "{your-splunk-hec-token}",
    "enabled": true
}</code></pre>
            <p>If you’re using Splunk Cloud, then your unique configuration has the following format:</p><p><code>{your-unique-splunk-configuration} = splunk://{your-splunk-hostname}.splunkcloud.com:8088/services/collector/raw?channel={channel-id}&amp;header_Authorization=Splunk%20{your-splunk-hec-token}&amp;insecure-skip-verify=false</code></p><p>Definition of the variables:</p><p><code><b>{your-splunk-hostname}</b></code> = Your allocated Splunk Cloud hostname.</p><p><code><b>{channel-id}</b></code> = A unique channel ID of your choosing.</p><p><code><b>{your-splunk-hec-token}</b></code> = The token that you generated for your Splunk HEC.</p><p>An important note is that customers should have a valid <a href="https://www.cloudflare.com/application-services/products/ssl/">SSL/TLS certificate</a> on their Splunk instance to support an encrypted connection.</p><p>After you’ve done that, you can create a GET request to the same URL (no request body needed) to verify that the job was created and is enabled.</p><p>The response should be similar to the following:</p>
            <pre><code>{
    "errors": [],
    "messages": [],
    "result": {
        "id": {job-id},
        "dataset": "network_analytics_logs",
        "frequency": "high",
        "kind": "",
        "enabled": true,
        "name": null,
        "logpull_options": null,
        "destination_conf": "{your-unique-splunk-configuration}",
        "last_complete": null,
        "last_error": null,
        "error_message": null
    },
    "success": true
}</code></pre>
            <p>Shortly after, you should start receiving logs to your Splunk HEC.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/45iJNwBrNbf0ptNsRdc4N/7a2464708c75292b4f704a89a20220f2/image4-27.png" />
            
            </figure><p>Read more about enabling Logpush on the Cloudflare Developers website: <a href="https://developers.cloudflare.com/logs/reference/logpush-api-configuration/examples/example-logpush-curl/">https://developers.cloudflare.com/logs/reference/logpush-api-configuration/examples/example-logpush-curl/</a></p>
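<p>The two calls above (the POST that creates the job and the GET that verifies it) can also be scripted. A minimal sketch using Python's standard library; the account ID, tokens, and destination configuration are placeholders to be filled in from the earlier steps:</p>

```python
import json
import urllib.request

API_BASE = "https://api.cloudflare.com/client/v4/accounts"

def build_create_job_request(account_id, api_token, destination_conf, hec_token):
    """Build the POST that creates and enables a Network Analytics Logpush job."""
    body = json.dumps({
        "destination_conf": destination_conf,
        "dataset": "network_analytics_logs",
        "token": hec_token,
        "enabled": True,
    }).encode()
    return urllib.request.Request(
        f"{API_BASE}/{account_id}/logpush/jobs",
        data=body,
        headers={
            "Authorization": f"Bearer {api_token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

def build_list_jobs_request(account_id, api_token):
    """Build the GET that lists jobs, used to verify the new job is enabled."""
    return urllib.request.Request(
        f"{API_BASE}/{account_id}/logpush/jobs",
        headers={"Authorization": f"Bearer {api_token}"},
    )

# Send either request with urllib.request.urlopen(...); both return the JSON
# envelope shown above (errors, messages, result, success).
```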
    <div>
      <h2>Reduce costs with R2 storage</h2>
      <a href="#reduce-costs-with-r2-storage">
        
      </a>
    </div>
    <p>Depending on the amount of logs that you read and write, the cost of third party cloud storage can skyrocket — forcing you to decide between managing a tight budget and being able to properly investigate networking and security issues. However, we believe that you shouldn’t have to make those trade-offs. With <a href="/logs-r2/">R2’s low costs</a>, we’re making this decision easier for our customers. Instead of feeding logs to a third party, you can reap the cost benefits of <a href="/logs-r2/">storing them in R2</a>.</p><p>To learn more about the <a href="https://www.cloudflare.com/developer-platform/r2/">R2 features and pricing</a>, check out the <a href="/r2-open-beta/">full blog post</a>. To enable R2, contact your account team.</p>
    <div>
      <h2>Cloudflare logs for maximum visibility</h2>
      <a href="#cloudflare-logs-for-maximum-visibility">
        
      </a>
    </div>
    <p><a href="https://www.cloudflare.com/plans/enterprise/">Cloudflare Enterprise</a> customers have access to detailed logs of the metadata generated by our products. These logs are helpful for troubleshooting, identifying network and configuration adjustments, and generating reports, especially when combined with logs from other sources, such as your servers, firewalls, routers, and other appliances.</p><p>Network Analytics Logs joins Cloudflare’s family of products on Logpush: <a href="https://developers.cloudflare.com/logs/reference/log-fields/zone/dns_logs/">DNS logs</a>, <a href="https://developers.cloudflare.com/logs/reference/log-fields/zone/firewall_events/">Firewall events</a>, <a href="https://developers.cloudflare.com/logs/reference/log-fields/zone/http_requests/">HTTP requests</a>, <a href="https://developers.cloudflare.com/logs/reference/log-fields/zone/nel_reports/">NEL reports</a>, <a href="https://developers.cloudflare.com/logs/reference/log-fields/zone/spectrum_events/">Spectrum events</a>, <a href="https://developers.cloudflare.com/logs/reference/log-fields/account/audit_logs/">Audit logs</a>, <a href="https://developers.cloudflare.com/logs/reference/log-fields/account/gateway_dns/">Gateway DNS</a>, <a href="https://developers.cloudflare.com/logs/reference/log-fields/account/gateway_http/">Gateway HTTP</a>, and <a href="https://developers.cloudflare.com/logs/reference/log-fields/account/gateway_network/">Gateway Network</a>.</p><p>Not using Cloudflare yet? <a href="https://dash.cloudflare.com/sign-up">Start now</a> with our Free and <a href="https://www.cloudflare.com/plans/pro/">Pro plans</a> to protect your websites against DDoS attacks, or <a href="https://www.cloudflare.com/magic-transit/">contact us</a> for comprehensive <a href="https://www.cloudflare.com/ddos/">DDoS protection</a> and <a href="https://www.cloudflare.com/learning/cloud/what-is-a-cloud-firewall/">firewall-as-a-service</a> for your entire network.</p> ]]></content:encoded>
            <category><![CDATA[DDoS]]></category>
            <category><![CDATA[Data]]></category>
            <category><![CDATA[Magic Transit]]></category>
            <category><![CDATA[Spectrum]]></category>
            <category><![CDATA[Logs]]></category>
            <category><![CDATA[Network]]></category>
            <category><![CDATA[SIEM]]></category>
            <guid isPermaLink="false">7J0cgdiD9dX3Xb9q1OaN3f</guid>
            <dc:creator>Omer Yoachimik</dc:creator>
            <dc:creator>Kyle Bowman</dc:creator>
        </item>
        <item>
            <title><![CDATA[How to customize your layer 3/4 DDoS protection settings]]></title>
            <link>https://blog.cloudflare.com/l34-ddos-managed-rules/</link>
            <pubDate>Thu, 09 Dec 2021 13:59:16 GMT</pubDate>
            <description><![CDATA[ Cloudflare Enterprise customers using the Magic Transit and Spectrum services can now tune and tweak their L3/4 DDoS protection settings directly from the Cloudflare dashboard or via the Cloudflare API. ]]></description>
            <content:encoded><![CDATA[ 
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4XAmgCsMHnrtq7ExJ8f80q/b6e9f311a614523ffbf7a91a47d96199/image2-28.png" />
            
            </figure><p>After initially providing our customers <a href="/http-ddos-managed-rules/">control over the HTTP-layer DDoS protection settings earlier this year</a>, we’re now excited to extend the control our customers have to the packet layer. Using these new controls, Cloudflare Enterprise customers using the <a href="https://www.cloudflare.com/magic-transit/">Magic Transit</a> and <a href="https://www.cloudflare.com/products/cloudflare-spectrum/">Spectrum</a> services can now tune and tweak their L3/4 DDoS protection settings directly from the Cloudflare dashboard or via the Cloudflare API.</p><p>The new functionality provides customers control over two main DDoS rulesets:</p><ol><li><p><b>Network-layer DDoS Protection</b> <b>ruleset</b> — This ruleset includes rules to detect and mitigate DDoS attacks on layer 3/4 of the <a href="https://www.cloudflare.com/learning/ddos/glossary/open-systems-interconnection-model-osi/">OSI model</a> such as UDP floods, SYN-ACK reflection attacks, SYN Floods, and DNS floods. This ruleset is available for Spectrum and Magic Transit customers on the Enterprise plan.</p></li><li><p><b>Advanced TCP Protection</b> <b>ruleset</b> — This ruleset includes rules to detect and mitigate sophisticated out-of-state TCP attacks such as spoofed ACK Floods, Randomized SYN Floods, and distributed SYN-ACK Reflection attacks. This ruleset is available for Magic Transit customers only.</p></li></ol><p>To learn more, review our <a href="https://developers.cloudflare.com/ddos-protection/managed-rulesets">DDoS Managed Ruleset developer documentation</a>. 
We’ve put together a few guides that we hope will be helpful for you:</p><ol><li><p><a href="https://developers.cloudflare.com/ddos-protection/get-started">Onboarding &amp; getting started with Cloudflare DDoS protection</a></p></li><li><p><a href="https://developers.cloudflare.com/ddos-protection/managed-rulesets/adjust-rules/false-negative">Handling false negatives</a></p></li><li><p><a href="https://developers.cloudflare.com/ddos-protection/managed-rulesets/adjust-rules/false-positive">Handling false positives</a></p></li><li><p><a href="https://developers.cloudflare.com/ddos-protection/best-practices/third-party">Best practices when using VPNs, VoIP, and other third-party services</a></p></li><li><p><a href="https://developers.cloudflare.com/ddos-protection/reference/simulate-ddos-attack">How to simulate a DDoS attack</a></p></li></ol>
    <div>
      <h2>Cloudflare’s DDoS Protection</h2>
      <a href="#cloudflares-ddos-protection">
        
      </a>
    </div>
    <p>A <a href="https://www.cloudflare.com/learning/ddos/what-is-a-ddos-attack/">Distributed Denial of Service (DDoS) attack</a> is a type of cyberattack that aims to disrupt the victim’s Internet services. There are many types of DDoS attacks, and they can be generated by attackers at different layers of the Internet. One example is the <a href="https://www.cloudflare.com/learning/ddos/http-flood-ddos-attack/">HTTP flood</a>. It aims to disrupt HTTP application servers such as those that power mobile apps and websites. Another example is the <a href="https://www.cloudflare.com/learning/ddos/udp-flood-ddos-attack/">UDP flood</a>. While this type of attack can be used to disrupt HTTP servers, it can also be used in an attempt to disrupt non-HTTP applications. These include TCP-based and UDP-based applications, networking services such as <a href="/update-on-voip-attacks/">VoIP services</a>, gaming servers, cryptocurrency, and more.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2C1TrYyVltpMud4OWi3UgR/693f024b661f0a5231e2a0579360468a/image5-12.png" />
            
            </figure><p>To defend organizations against DDoS attacks, we built and operate software-defined systems that run autonomously. They automatically detect and mitigate DDoS attacks across our entire network. You can read more about our autonomous <a href="https://www.cloudflare.com/ddos/">DDoS protection systems</a> and how they work in our <a href="/deep-dive-cloudflare-autonomous-edge-ddos-protection/">deep-dive technical blog post</a>.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/zVNe9hiMYUaf1DcZzxP2t/281a3ae7d02c862e830cef6e480c7bca/unnamed-33.png" />
            
            </figure>
    <div>
      <h2>Unmetered and unlimited DDoS Protection</h2>
      <a href="#unmetered-and-unlimited-ddos-protection">
        
      </a>
    </div>
    <p>The level of protection that we offer is <a href="/unmetered-mitigation/">unmetered and unlimited</a> — it is not bounded by the size, number, or duration of the attacks. This is especially important these days because, as we’ve recently seen, attacks are getting larger and more frequent. Case in point: in Q3, network-layer attacks increased by 44% compared to the previous quarter. Furthermore, just recently, our systems automatically detected and mitigated a <a href="/cloudflare-blocks-an-almost-2-tbps-multi-vector-ddos-attack/">DDoS attack that peaked just below 2 Tbps</a> — the largest we’ve seen to date.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/FDNWzHw3jIcywi8qMcKlq/60fd4b3b9bf13f144c9e72ab8a9c1ba9/image4.jpg" />
            
            </figure><p>Mirai botnet launched an almost 2 Tbps DDoS attack</p><p>Read more about <a href="https://radar.cloudflare.com/notebooks/ddos-2021-q3">recent DDoS trends</a>.</p>
    <div>
      <h2>Managed Rulesets</h2>
      <a href="#managed-rulesets">
        
      </a>
    </div>
    <p>You can think of our autonomous DDoS protection systems as groups (rulesets) of intelligent rules. There are rulesets of HTTP DDoS Protection rules, Network-layer DDoS Protection rules and Advanced TCP Protection rules. In this blog post, we will cover the latter two rulesets. We’ve already covered the former in the blog post <a href="/http-ddos-managed-rules/">How to customize your HTTP DDoS protection settings</a>.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/38zn08I5UIGe7IvmFb5aIT/6dac7a5d5ac1ab875410d6e77d765a14/image7-6.png" />
            
            </figure><p>Cloudflare L3/4 DDoS Managed Rules</p><p>In the <b>Network-layer DDoS Protection rulesets</b>, each rule has a unique set of conditional fingerprints, dynamic field masking, activation thresholds, and mitigation actions. These rules are managed (by Cloudflare), meaning that the specifics of each rule are curated in-house by our DDoS experts. Before deploying a new rule, it is first rigorously tested and optimized for mitigation accuracy and efficiency across our entire global network.</p><p>In the <b>Advanced TCP Protection ruleset</b>, we use a novel TCP state classification engine to identify the state of TCP flows. The engine powering this ruleset is <i>flowtrackd</i> — you can read more about it in our <a href="/announcing-flowtrackd/">announcement blog post</a>. One of the unique features of this system is that it is able to operate using only the ingress (inbound) packet flows. The system sees only the ingress traffic and is able to drop, challenge, or allow packets based on their legitimacy. For example, a flood of ACK packets that don’t correspond to open TCP connections will be dropped.</p>
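<p>Conceptually, the out-of-state filtering that <i>flowtrackd</i> performs can be sketched in a few lines. This is an illustrative toy, not the actual implementation: remember which flows began with a SYN, and drop bare ACKs that match no known flow.</p>

```python
SYN, ACK = "SYN", "ACK"

class FlowTracker:
    """Toy out-of-state TCP filter: allow packets only for flows we saw open."""

    def __init__(self):
        self.open_flows = set()  # (src, dst) pairs that started a handshake

    def handle(self, src, dst, flags):
        flow = (src, dst)
        if flags == SYN:          # new connection attempt: remember the flow
            self.open_flows.add(flow)
            return "allow"
        if flags == ACK and flow not in self.open_flows:
            return "drop"         # ACK with no corresponding handshake
        return "allow"

tracker = FlowTracker()
tracker.handle("198.51.100.7", "203.0.113.1", SYN)                # handshake start
verdict_ok = tracker.handle("198.51.100.7", "203.0.113.1", ACK)   # in-state ACK
verdict_flood = tracker.handle("192.0.2.99", "203.0.113.1", ACK)  # spoofed ACK flood
```

Here the in-state ACK is allowed, while the spoofed ACK, arriving for a flow that never completed a handshake, is dropped.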
    <div>
      <h2>How attacks are detected and mitigated</h2>
      <a href="#how-attacks-are-detected-and-mitigated">
        
      </a>
    </div>
    
    <div>
      <h3>Sampling</h3>
      <a href="#sampling">
        
      </a>
    </div>
    <p>Initially, traffic is routed through the Internet via <a href="https://www.cloudflare.com/learning/cdn/glossary/anycast-network/">BGP Anycast</a> to the nearest Cloudflare edge data center. Once the traffic reaches our data center, our DDoS systems sample it asynchronously, allowing for out-of-path analysis of traffic without introducing latency penalties. The Advanced TCP Protection ruleset needs to view the entire packet flow, and so it sits inline for Magic Transit customers only. It, too, does not introduce any latency penalties.</p>
    <div>
      <h3>Analysis &amp; mitigation</h3>
      <a href="#analysis-mitigation">
        
      </a>
    </div>
    <p>The analysis for the <b>Advanced TCP Protection ruleset</b> is straightforward and efficient. The system qualifies TCP flows and tracks their state. In this way, packets that don’t correspond to a legitimate connection and its state are dropped or challenged. The mitigation is activated only above certain thresholds that customers can define.</p><p>The analysis for the <b>Network-layer DDoS Protection ruleset</b> is done using data streaming algorithms. Packet samples are compared to the conditional fingerprints and multiple real-time signatures are created based on the dynamic masking. Each time another packet matches one of the signatures, a counter is increased. When the activation threshold is reached for a given signature, a mitigation rule is compiled and pushed inline. The mitigation rule includes the real-time signature and the mitigation action, e.g., drop.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2fZ5yjyahUgG57t6OeKuNW/3bff52421a5f96a64c41a0573bf7fcba/image9-1.png" />
            
            </figure>
    <div>
      <h3>Example</h3>
      <a href="#example">
        
      </a>
    </div>
    <p>As a simple example, one fingerprint could include the following fields: source IP, source port, destination IP, and the TCP sequence number. A packet flood attack with a fixed sequence number would match the fingerprint and the counter would increase for every packet match until the activation threshold is exceeded. Then a mitigation action would be applied.</p><p>However, in the case of a <a href="https://www.cloudflare.com/learning/ddos/glossary/ip-spoofing/">spoofed</a> attack where the source IP addresses and ports are randomized, we would end up with multiple signatures for each combination of source IP and port. Assuming a sufficiently randomized/distributed attack, the activation thresholds would not be met and mitigation would not occur. For this reason, we use dynamic masking, i.e. ignoring fields that may not be a strong indicator of the signature. By masking (ignoring) the source IP and port, we would be able to match all the attack packets based on the unique TCP sequence number regardless of how randomized/distributed the attack is.</p>
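<p>The signature counting and dynamic masking described above can be illustrated with a toy model (the real rules, fields, and thresholds are curated by Cloudflare and are not public): count packets per signature, where a signature is the fingerprint projected onto its unmasked fields.</p>

```python
from collections import Counter

ACTIVATION_THRESHOLD = 3  # toy value; real thresholds are tuned per rule

def signature(packet, fields, masked=()):
    """Project a packet onto the fingerprint fields, ignoring masked ones."""
    return tuple(packet[f] for f in fields if f not in masked)

def detect(packets, fields, masked=()):
    """Return signatures whose packet count reached the activation threshold."""
    counts = Counter(signature(p, fields, masked) for p in packets)
    return {sig for sig, n in counts.items() if n >= ACTIVATION_THRESHOLD}

FIELDS = ("src_ip", "src_port", "dst_ip", "tcp_seq")

# A spoofed flood: randomized source IP and port, fixed TCP sequence number.
flood = [{"src_ip": f"192.0.2.{i}", "src_port": 1000 + i,
          "dst_ip": "203.0.113.1", "tcp_seq": 1337} for i in range(10)]

hits_unmasked = detect(flood, FIELDS)  # every signature unique: nothing fires
hits_masked = detect(flood, FIELDS, masked=("src_ip", "src_port"))
```

Without masking, each spoofed packet produces its own signature and no counter reaches the threshold; with the source fields masked, all ten packets collapse onto one signature keyed by the fixed sequence number, and detection fires.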
    <div>
      <h3>Configuring the DDoS Protection Settings</h3>
      <a href="#configuring-the-ddos-protection-settings">
        
      </a>
    </div>
    <p>For now, we’ve only exposed a handful of the Network-layer DDoS protection rules that we’ve identified as the ones most prone to customizations. We will be exposing more and more rules on a regular basis. This shouldn’t affect any of your traffic.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5WfQ9IUiHT6kl2R6i2fsu4/d422a01101609b28f3f61c29bae31ebc/image8-4.png" />
            
            </figure><p>Overriding the sensitivity level and mitigation action</p><p>For the <b>Network-layer DDoS Protection ruleset</b>, for each of the available rules, you can override the <a href="https://developers.cloudflare.com/ddos-protection/managed-rulesets/network/override-parameters#sensitivity-level">sensitivity level</a> (activation threshold), customize the <a href="https://developers.cloudflare.com/ddos-protection/managed-rulesets/network/override-parameters#action">mitigation action</a>, and apply <a href="https://developers.cloudflare.com/ddos-protection/managed-rulesets/network/fields">expression filters</a> to exclude/include traffic from the DDoS protection system based on various packet fields. You can create multiple overrides to customize the protection for your network and your various applications.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4S7gQPtUwI4ksZFqre6wvy/844f25450258c041d5693ac993f660cb/image3-22.png" />
            
            </figure><p>Configuring expression fields for the DDoS Managed Rules to match on</p><p>In the past, you’d have to go through our support channels to customize the rules. In some cases, this may have taken longer to resolve than desired. With today’s announcement, you can tailor and fine-tune the settings of our autonomous edge system by yourself to quickly improve the accuracy of the protection for your specific network needs.</p><p>For the <b>Advanced TCP Protection ruleset</b>, for now, we’ve only exposed the ability to enable or disable it as a whole in the dashboard. To enable or disable the ruleset per IP prefix, you must use <a href="https://developers.cloudflare.com/ddos-protection/managed-rulesets/network/configure-api">the API</a>. At this time, when initially onboarding to Cloudflare, the Cloudflare team must first create a policy for you. After onboarding, if you need to change the sensitivity thresholds, use Monitor mode, or add filter expressions, you must contact Cloudflare Support. In upcoming releases, this too will be available via the dashboard and API without requiring help from our Support team.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1UG7jeJmxbiKYel0U9qRol/dcf3d05fc6a7a69f975ffe70d4cde261/image1-45.png" />
            
            </figure>
    <div>
      <h2>Pre-existing customizations</h2>
      <a href="#pre-existing-customizations">
        
      </a>
    </div>
    <p>If you previously contacted Cloudflare Support to apply customizations, your customizations have been preserved, and you can visit the dashboard to view the settings of the Network-layer DDoS Protection ruleset and change them if needed. If you require any changes to your Advanced TCP Protection customizations, please reach out to Cloudflare Support.</p><p>If you haven’t needed to customize this protection so far, there is no action required on your end. However, if you would like to view and customize your DDoS protection settings, follow <a href="https://developers.cloudflare.com/ddos-protection/managed-rulesets/network/configure-dashboard">this dashboard guide</a> or review the <a href="https://developers.cloudflare.com/ddos-protection/managed-rulesets/network/configure-api">API documentation</a> to programmatically configure the DDoS protection settings.</p>
    <div>
      <h2>Helping Build a Better Internet</h2>
      <a href="#helping-build-a-better-internet">
        
      </a>
    </div>
    <p>At Cloudflare, everything we do is guided by our mission to help build a better Internet. The DDoS team’s vision is derived from this mission: our goal is to make the impact of DDoS attacks a thing of the past. Our first step was to build the autonomous systems that detect and mitigate attacks independently. Done. The second step was to expose the control plane over these systems to our customers (announced today). Done. The next step will be to fully automate the configuration with an auto-pilot feature — training the systems to learn your specific traffic patterns to automatically optimize your DDoS protection settings. You can expect many more improvements, automations, and new capabilities to keep your Internet properties safe, available, and performant.</p><p>Not using Cloudflare yet? <a href="https://dash.cloudflare.com/sign-up">Start now</a>.</p> ]]></content:encoded>
            <category><![CDATA[CIO Week]]></category>
            <category><![CDATA[DDoS]]></category>
            <category><![CDATA[Managed Rules]]></category>
            <category><![CDATA[dosd]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[Magic Transit]]></category>
            <category><![CDATA[Spectrum]]></category>
            <guid isPermaLink="false">31YKEgNs7eGl1f4G22B5o2</guid>
            <dc:creator>Omer Yoachimik</dc:creator>
        </item>
        <item>
            <title><![CDATA[Announcing Argo for Spectrum]]></title>
            <link>https://blog.cloudflare.com/argo-spectrum/</link>
            <pubDate>Tue, 23 Nov 2021 13:58:39 GMT</pubDate>
            <description><![CDATA[ Announcing general availability of Argo for Spectrum, a way to turbo-charge any TCP based application. ]]></description>
            <content:encoded><![CDATA[ <p>Today we're excited to announce the general availability of Argo for Spectrum, a way to turbo-charge any TCP-based application. With Argo for Spectrum, you can reduce latency and packet loss, and improve connectivity for any TCP application, including common protocols like Minecraft, Remote Desktop Protocol, and SFTP.</p>
    <div>
      <h3>The Internet — more than just a browser</h3>
      <a href="#the-internet-more-than-just-a-browser">
        
      </a>
    </div>
    <p>When people think of the Internet, many of us think about using a browser to view websites. Of course, it’s so much more! We often use other ways to connect to each other and to the resources we need for work. For example, you may interact with servers for work using SSH File Transfer Protocol (SFTP), git, or Remote Desktop software. At home, you might play a video game on the Internet with friends.</p><p>To help protect these services against DDoS attacks, Spectrum launched in 2018, extending Cloudflare’s <a href="https://www.cloudflare.com/ddos/">DDoS protection</a> to any TCP- or UDP-based protocol. Customers use it for a wide variety of use cases, including to protect video streaming (RTMP), gaming, and internal IT systems. Spectrum also supports common VoIP protocols such as SIP and RTP, which have recently seen an <a href="/attacks-on-voip-providers/">increase in DDoS ransomware attacks</a>. A lot of these applications are also highly sensitive to performance issues. No one likes waiting for a file to upload or dealing with a lagging video game.</p><p>Latency and throughput are the two metrics people generally discuss when talking about network performance. Latency refers to the amount of time a piece of data (a packet) takes to traverse between two systems. Throughput refers to the number of bits you can actually send per second. This blog will discuss how the two interact, and how we improve both with Argo for Spectrum.</p>
    <div>
      <h3>Argo to the rescue</h3>
      <a href="#argo-to-the-rescue">
        
      </a>
    </div>
    <p>There are a number of factors that cause poor performance between two points on the Internet, including network congestion, the distance between the two points, and packet loss. This is a problem many of our customers have, even on web applications. To help, we launched <a href="/argo/">Argo Smart Routing</a> in 2017, a way to reduce latency (or <i>time to first byte</i>, to be precise) for any HTTP request that goes to an origin.</p><p>That’s great for folks who run websites, but what if you’re working on an application that doesn’t speak HTTP? Up until now, people had limited options for improving performance for these applications. That changes today with the general availability of Argo for Spectrum. Argo for Spectrum offers the same benefits as Argo Smart Routing for any TCP-based protocol.</p><p>Argo for Spectrum takes the same smarts from our network traffic and applies them to Spectrum. At the time of writing, Cloudflare sits in front of approximately 20% of the Alexa top 10 million websites. That means that we see, in near real-time, which networks are congested, which are slow, and which are dropping packets. We use that data and take action by provisioning faster routes, which send packets through the Internet faster than normal routing. Argo for Spectrum works the exact same way, using the same intelligence and routing plane but extending it to any TCP-based application.</p>
    <div>
      <h3>Performance</h3>
      <a href="#performance">
        
      </a>
    </div>
    <p>But what does this mean for real application performance? To find out, we ran a set of benchmarks on Catchpoint. Catchpoint is a service that allows you to set up <a href="https://www.cloudflare.com/application-services/solutions/app-performance-monitoring/">performance monitoring</a> from all over the world. Tests are repeated at intervals and aggregate results are reported. We wanted to use a third party such as Catchpoint to get objective results (as opposed to running the tests ourselves).</p><p>For our test case, we used a file server in the Netherlands as our origin. We provisioned various tests on Catchpoint to measure file transfer performance from various places in the world: Rabat, Tokyo, Los Angeles, and Lima.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1Dmiv8f30ef7K9FQ6O1Nyi/131f81007fa1c71ecebb4237f1ad759e/image2-28.png" />
            
            </figure><p>Throughput of a 10MB file. Higher is better.</p><p>Depending on location, transfers saw increases of up to 108% (for locations such as Tokyo) and <b>85% on average</b>. Why is it <b>so</b> much faster? The answer is the <a href="https://en.wikipedia.org/wiki/Bandwidth-delay_product"><i>bandwidth-delay product</i></a>. In layman's terms, it means that for a given amount of data in flight, the higher the latency, the lower the throughput. This is because with transmission protocols such as TCP, we need to wait for the other party to acknowledge that they received data before we can send more.</p><p>As an analogy, let’s assume we’re operating a water cleaning facility. We send unprocessed water through a pipe to a cleaning facility, but we’re not sure how much capacity the facility has! To test, we send an amount of water through the pipe. Once the water has arrived, the facility will call us up and say, “we can easily handle this amount of water at a time, please send more.” If the pipe is short, the feedback loop is quick: the water will arrive, and we’ll immediately be able to send more without having to wait. If we have a very, very long pipe, we have to stop sending water for a while before we get confirmation that the water has arrived and there’s enough capacity.</p><p>The same happens with TCP: we send an amount of data onto the wire and wait for confirmation that it arrived. If the <i>latency</i> is high, throughput drops because we spend most of our time waiting for confirmations. If latency is low, we can sustain a high rate. With Spectrum and Argo, we help in two ways: first, Spectrum terminates the TCP connection close to the user, so latency on that link is low. Second, Argo reduces the latency between our edge and the origin. In concert, they create a chain of low-latency connections, resulting in low overall round-trip time between users and origin. The result is much higher throughput than you would otherwise get.</p><p>Argo for Spectrum supports any TCP-based protocol. This includes commonly used protocols like SFTP, git (over SSH), RDP and SMTP, but also media streaming and gaming protocols such as RTMP and Minecraft. Setting up Argo for Spectrum is easy: when creating a Spectrum application, just hit the “Argo Smart Routing” toggle. Any traffic will automatically be smart routed.</p>
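<p>The throughput ceiling implied by the bandwidth-delay product can be sketched numerically. The window size and round-trip times below are illustrative assumptions, not measurements from the benchmark:</p>

```python
# When a TCP connection is window-limited, throughput is capped at
# window_size / round_trip_time: we can't send more until data is acknowledged.
def max_throughput_mbps(window_bytes: float, rtt_ms: float) -> float:
    """Window-limited TCP throughput in megabits per second."""
    return (window_bytes * 8) / (rtt_ms / 1000) / 1e6

# A 64 KiB window over a 200 ms path (say, Tokyo to the Netherlands):
slow = max_throughput_mbps(64 * 1024, 200)   # ~2.6 Mbps
# The same window when TCP is terminated 20 ms from the user:
fast = max_throughput_mbps(64 * 1024, 20)    # ~26.2 Mbps
print(f"{slow:.1f} Mbps vs {fast:.1f} Mbps")
```

Cutting the round-trip time by 10x raises the ceiling by 10x with the same window, which is why terminating TCP close to the user and smart-routing the middle mile compound so well.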
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3m5hR3BEdy6PqTp7jo7XyT/1ce3ff692d52b0fa677e27c79311dcf1/image3-35.png" />
            
            </figure><p>Argo for Spectrum covers much more than just these applications: we support any TCP-based protocol. If you're interested, reach out to your account team today to see what we can do for you.</p> ]]></content:encoded>
            <category><![CDATA[Argo Smart Routing]]></category>
            <category><![CDATA[Spectrum]]></category>
            <category><![CDATA[TCP]]></category>
            <category><![CDATA[Performance]]></category>
            <guid isPermaLink="false">7YylXseoJGsIrnn3GLNzq</guid>
            <dc:creator>Achiel van der Mandele</dc:creator>
        </item>
        <item>
            <title><![CDATA[Announcing Spectrum DDoS Analytics and DDoS Insights & Trends]]></title>
            <link>https://blog.cloudflare.com/announcing-spectrum-ddos-analytics-and-ddos-insights-trends/</link>
            <pubDate>Sat, 07 Nov 2020 12:00:00 GMT</pubDate>
            <description><![CDATA[ We’re excited to announce the expansion of the Network Analytics dashboard to Spectrum customers on the Enterprise plan. Additionally, this announcement introduces two major dashboard improvements for easier reporting and investigation. ]]></description>
            <content:encoded><![CDATA[ 
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1GeCROyyq2FzNq9rYeWMl9/84267607178e5059ecec7cb16c9b4be0/image4-4.png" />
            
            </figure><p>We’re excited to announce the expansion of the <a href="https://support.cloudflare.com/hc/en-us/articles/360038696631-Understanding-Cloudflare-Network-Analytics">Network Analytics</a> dashboard to <a href="https://www.cloudflare.com/products/cloudflare-spectrum/">Spectrum</a> customers on the Enterprise plan. Additionally, this announcement introduces two major dashboard improvements for easier reporting and investigation.</p>
    <div>
      <h3>Network Analytics</h3>
      <a href="#network-analytics">
        
      </a>
    </div>
    <p>Cloudflare's packet- and bit-oriented dashboard, Network Analytics, provides visibility into Internet traffic patterns and DDoS attacks at Layers 3 and 4 of the <a href="https://www.cloudflare.com/learning/ddos/glossary/open-systems-interconnection-model-osi/">OSI model</a>. This allows our users to better understand traffic patterns and DDoS attacks as observed at the Cloudflare edge.</p><p>When the dashboard was first released in <a href="/announcing-network-analytics/">January</a>, these capabilities were only available to <a href="/bringing-your-own-ips-to-cloudflare-byoip/">Bring Your Own IP</a> customers on the Spectrum and <a href="https://www.cloudflare.com/magic-transit/">Magic Transit</a> services, but now Spectrum customers using Cloudflare’s Anycast IPs are also supported.</p>
    <div>
      <h3>Protecting L4 applications</h3>
      <a href="#protecting-l4-applications">
        
      </a>
    </div>
    <p>Spectrum is Cloudflare’s L4 reverse-proxy service that offers <a href="/unmetered-mitigation/">unmetered DDoS protection</a> and traffic acceleration for TCP and UDP applications. It provides enhanced traffic performance through faster TLS, optimized network routing, and high-speed interconnection. It also provides encryption for legacy protocols and applications that don’t come with embedded encryption. Customers who typically use Spectrum operate services in which network performance and resilience to DDoS attacks are of utmost importance to their business, such as email, remote access, and gaming.</p><p>Spectrum customers can now view detailed traffic reports on DDoS attacks on their configured TCP/UDP applications, including attack sizes, attack vectors, attack source locations, and permitted traffic. What’s more, users can also configure and receive <a href="/announcing-ddos-alerts/">real-time alerts</a> when their services are attacked.</p>
    <div>
      <h3>Network Analytics: Rebooted</h3>
      <a href="#network-analytics-rebooted">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5uMzb8hEYmCBNXfERE3CRT/17814d9a3460344b545e7c4ffd628121/image3.gif" />
            
            </figure><p>Since releasing the Network Analytics dashboard in January, we have been constantly improving its capabilities. Today, we’re announcing two major improvements that will make both reporting and investigation easier for our customers: <b>DDoS Insights &amp; Trends</b> and <b>Group-by Filtering</b> for grouping-based traffic analysis.</p>
    <div>
      <h3>DDoS Trends Insights</h3>
      <a href="#ddos-trends-insights">
        
      </a>
    </div>
    <p>First and foremost, we are adding a new DDoS Insights &amp; Trends card, which provides dynamic insights into your attack trends over time. This feature provides a real-time view of the number of attacks, the percentage of attack traffic, the maximum attack rates, the total mitigated bytes, the main attack origin country, and the total duration of attacks, which can indicate the potential downtime that was prevented. Our customers surfaced these data points as the most crucial ones in feedback sessions. Together with the period-over-period percentage change, our customers can easily understand how their security landscape is evolving.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/22UFqdLkUvXW6XugGIibBi/16e36532d9c508439234d9322d42b514/image1-3.png" />
            
            </figure><p>Trends Insights</p>
    <div>
      <h3>Troubleshooting made easy</h3>
      <a href="#troubleshooting-made-easy">
        
      </a>
    </div>
    <p>In the main time series chart seen in the dashboard, we added the ability to change the <i>Group-by</i> field, which customizes the Y axis. This way, a user can quickly identify traffic anomalies and sudden changes in traffic based on criteria such as IP protocol, TCP flags, or source country, and take action if needed with <a href="https://www.cloudflare.com/magic-transit/">Magic Firewall</a>, <a href="https://www.cloudflare.com/products/cloudflare-spectrum/">Spectrum</a> or <a href="/bringing-your-own-ips-to-cloudflare-byoip/">BYOIP</a>.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5eRWBY2I3j4Gi9efbkk2QO/144b7054eb054da7d6ad62f5c76519ea/image2.gif" />
            
            </figure><p>Time Series Group-By Filtering</p>
    <div>
      <h3>Harnessing Cloudflare’s edge to empower our users</h3>
      <a href="#harnessing-cloudflares-edge-to-empower-our-users">
        
      </a>
    </div>
    <p>DDoS Insights &amp; Trends, the new investigation tools, and the additional user interface enhancements can help your organization better understand its security landscape and take more meaningful actions as needed. We have shipped more updates to the Network Analytics dashboard that are beyond the scope of this post, including:</p><ul><li><p>Export logs as a CSV</p></li><li><p>Zoom-in feature in the time series chart</p></li><li><p>Drop-down view option for average rate and total volume</p></li><li><p>Increased Top N views for source and destination values</p></li><li><p>Addition of country and data center for source values</p></li><li><p>New visualisation of the TCP flag distribution</p></li></ul><p>Details on these updates can be found in our <a href="https://support.cloudflare.com/hc/en-us/articles/360038696631-Understanding-Cloudflare-Network-Analytics">Help Center</a>, which you can now access via the dashboard as well.</p><p>In the near future, we will also expand Network Analytics to Spectrum customers on the Business plan, and <a href="https://www.cloudflare.com/waf/">WAF</a> customers on the Enterprise and Business plans. Stay tuned!</p><p>If you are a Magic Transit, Spectrum or BYOIP customer, try out the <a href="https://dash.cloudflare.com/">Network Analytics dashboard</a> yourself today.</p><p>If you operate your own network, try Cloudflare Magic Transit for free with a limited time offer: <a href="https://www.cloudflare.com/lp/better/">https://www.cloudflare.com/lp/better/</a>.</p>
            <category><![CDATA[DDoS]]></category>
            <category><![CDATA[Internship Experience]]></category>
            <category><![CDATA[Spectrum]]></category>
            <guid isPermaLink="false">4UQ5Lg51Xbadw9hvgqe8nP</guid>
            <dc:creator>Selina Cho</dc:creator>
        </item>
        <item>
            <title><![CDATA[How Cloudflare uses Cloudflare Spectrum: A look into an intern’s project at Cloudflare]]></title>
            <link>https://blog.cloudflare.com/how-cloudflare-uses-cloudflare-spectrum-a-look-into-an-interns-project-at-cloudflare/</link>
            <pubDate>Fri, 21 Aug 2020 11:00:00 GMT</pubDate>
            <description><![CDATA[ As part of my onboarding as an intern on the Spectrum (a layer 4 reverse proxy) team, I learned that many internal services dogfood Spectrum, as they are exposed to the Internet and benefit from layer 4 DDoS protection. ]]></description>
            <content:encoded><![CDATA[ 
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3Z5QW1BQ7OCth8QdTIQddz/6db79a8f7df3cad36ac62c12ca2b65ba/Dogfooding-Spectrum_2x-1.png" />
            
            </figure><p>Cloudflare extensively uses its own products internally in a process known as <a href="https://en.wikipedia.org/wiki/Eating_your_own_dog_food">dogfooding</a>. As part of my onboarding as an intern on the Spectrum (a layer 4 reverse proxy) team, I learned that many internal services dogfood Spectrum, as they are exposed to the Internet and benefit from layer 4 DDoS protection. One of my first tasks was to update the configuration for an internal service that was using Spectrum. The configuration was managed in Salt (<a href="/manage-cloudflare-records-with-salt/">used for configuration management at Cloudflare</a>), which was not particularly user-friendly, and required an engineer on the Spectrum team to handle updating it manually.</p><p>This process took about a week. That should instantly raise some questions, as a typical Spectrum customer can create a new Spectrum app in under a minute through Cloudflare Dashboard. So why couldn’t I?</p><p>This question formed the basis of my intern project for the summer.</p>
    <div>
      <h3>The Process</h3>
      <a href="#the-process">
        
      </a>
    </div>
    <p>Cloudflare uses various IP ranges for its products. Some customers also authorize Cloudflare to announce their IP prefixes on their behalf (this is known as <a href="https://developers.cloudflare.com/spectrum/getting-started/byoip/">BYOIP</a>). Collectively, we can refer to these IPs as <i>managed</i> addresses. To prevent Bad Stuff (defined later) from happening, we prohibit managed addresses from being used as Spectrum origins. To accomplish this, <a href="https://www.cloudflare.com/products/cloudflare-spectrum/">Spectrum</a> had its own table of denied networks that included the managed addresses. For the average customer, this approach works great – they have no legitimate reason to use a managed address as an origin.</p><p>Unfortunately, the services dogfooding Spectrum all use Cloudflare IPs, preventing those teams with a legitimate use-case from creating a Spectrum app through the configuration service (i.e. the Cloudflare Dashboard). To bypass this check, these <i>internal customers</i> needed to define a custom Spectrum configuration, which had to be manually deployed to the edge via a pull request to our Salt repo, resulting in a time-consuming process.</p><p>If an internal customer wanted to change their configuration, the same time-consuming process had to be used. While this allowed internal customers to use Spectrum, it was tedious and error-prone.</p>
    <div>
      <h3>Bad Stuff</h3>
      <a href="#bad-stuff">
        
      </a>
    </div>
    <p><i>Bad Stuff</i> is quite vague and deserves a better definition. It may seem arbitrary that we deny Cloudflare managed addresses. To motivate this, consider two Spectrum apps, A and B, where the origin of app A is the Cloudflare edge IP of app B, and the origin of app B is the edge IP of app A. Essentially, app A will proxy incoming connections to app B, and app B will proxy incoming connections to app A, creating a cycle.</p><p>This could potentially crash the daemon or degrade performance. In practice, this configuration is useless, since the proxied connection never reaches an origin; it would only be created by a malicious user, so it is never allowed.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1RAT655ZFI4uFjH14ymleJ/93d5e29de5b132b635bbea52560b006d/Spectrum-loop_2x.png" />
            
            </figure><p>In fact, the more general case of setting another Spectrum app as an origin (even when the configuration does not result in a cycle) is (almost<sup><a href="#footnote">1</a></sup>) never needed, so it also needs to be avoided.</p><p>As well, since we are providing a reverse proxy to customer origins, we do not need to allow connections to IP ranges that cannot be used on the public Internet, as specified in <a href="https://tools.ietf.org/html/rfc6890">RFC 6890</a>.</p>
    <div>
      <h3>The Problem</h3>
      <a href="#the-problem">
        
      </a>
    </div>
    <p>To improve usability and allow internal Spectrum customers to create apps using the Dashboard instead of the static configuration workflow, we needed a way to give particular customers permission to use Cloudflare managed addresses in their Spectrum configuration. Solving this problem was my main project for the internship.</p><p>A good starting point ended up being the Addressing API. The Addressing API is Cloudflare’s solution to IP management: an internal database and suite of tools to keep track of IP prefixes, with the goal of providing a unified source of truth for how IP addresses are being used across the organization. This makes it possible to provide a cross-product platform for products and features such as <a href="https://developers.cloudflare.com/spectrum/getting-started/byoip/">BYOIP</a>, <a href="/tag/bgp/">BGP On Demand</a>, and <a href="https://www.cloudflare.com/magic-transit/">Magic Transit</a>.</p><p>The Addressing API keeps track of all Cloudflare managed IP prefixes, along with who owns each prefix. As well, the owner of a prefix can give permission for someone else to use it. We call this a <i>delegation</i>.</p><p>A user’s permission to use an IP address managed by the Addressing API is determined as follows:</p><ol><li><p>Is the user the owner of the prefix containing the IP address?</p><ol type="a"><li><p>Yes: the user has permission to use the IP.</p></li><li><p>No: go to step 2.</p></li></ol></li><li><p>Has the user been delegated a prefix containing the IP address?</p><ol type="a"><li><p>Yes: the user has permission to use the IP.</p></li><li><p>No: the user does not have permission to use the IP.</p></li></ol></li></ol>
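<p>A minimal sketch of that two-step lookup (the data structures here are hypothetical stand-ins for the Addressing API's internal database, purely for illustration):</p>

```python
import ipaddress

def has_permission(user, ip, owners, delegations):
    """Two-step check: prefix ownership first, then delegations.

    owners: dict mapping CIDR string -> owning user.
    delegations: dict mapping CIDR string -> set of delegated users.
    (Both are hypothetical stand-ins for Addressing API state.)
    """
    addr = ipaddress.ip_address(ip)
    for cidr, owner in owners.items():
        if addr in ipaddress.ip_network(cidr):
            if owner == user:                      # step 1: ownership
                return True
            if user in delegations.get(cidr, ()):  # step 2: delegation
                return True
    return False

owners = {"104.16.0.0/12": "addressing-team"}
delegations = {"104.16.0.0/12": {"spectrum-team"}}
print(has_permission("spectrum-team", "104.16.8.54", owners, delegations))  # True
print(has_permission("random-user", "104.16.8.54", owners, delegations))    # False
```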
    <div>
      <h3>The Solution</h3>
      <a href="#the-solution">
        
      </a>
    </div>
    <p>With the information present in the Addressing API, the solution starts to become clear. For a given customer and IP, we use the following algorithm:</p><ol><li><p>Is the IP managed by Cloudflare (or contained in <a href="https://tools.ietf.org/html/rfc6890">RFC 6890</a>)?</p><ol type="a"><li><p>Yes: go to step 2.</p></li><li><p>No: allow as origin.</p></li></ol></li><li><p>Does the customer have permission to use the IP address?</p><ol type="a"><li><p>Yes: allow as origin.</p></li><li><p>No: deny as origin.</p></li></ol></li></ol><p>As long as the internal customer has been given permission to use the Cloudflare IP (through a delegation in the Addressing API), this approach allows them to use it as an origin.</p><p>However, we run into a corner case here: since BYOIP customers also have permission to use their own ranges, they would be able to set their own IP as an origin, potentially causing a cycle. To mitigate this, we need to check if the IP is a Spectrum edge IP. Fortunately, the Addressing API also contains this information, so all we have to do is check if the given origin IP is already in use as a Spectrum edge IP, and if so, deny it. Since all of the denied-network checks now occur in the Addressing API, we were able to remove Spectrum's own deny network database, eliminating the engineering work of maintaining it along the way.</p><p>Let's go through a concrete example. Consider an internal customer who wants to use 104.16.8.54/32 as an origin for their Spectrum app. This address is managed by Cloudflare; suppose the customer has permission to use it and the address is not already in use as an edge IP. This means the customer is able to specify this IP as an origin, since it meets all of our criteria.</p><p>For example, a request to the Addressing API could look like this:</p>
            <pre><code>curl --silent 'https://addr-api.internal/edge_services/spectrum/validate_origin_ip_acl?cidr=104.16.8.54/32' -H "Authorization: Bearer $JWT" | jq .
{
  "success": true,
  "errors": [],
  "result": {
    "allowed_origins": {
      "104.16.8.54/32": {
        "allowed": true,
        "is_managed": true,
        "is_delegated": true,
        "is_reserved": false,
        "has_binding": false
      }
    }
  },
  "messages": []
}</code></pre>
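<p>Putting the pieces together, the whole origin check can be sketched as follows. This is an illustrative model only; the real logic lives inside the Addressing API, and the managed ranges, permissions, and edge IPs here are made up:</p>

```python
import ipaddress

# A representative subset of the RFC 6890 special-purpose ranges.
SPECIAL_PURPOSE = [ipaddress.ip_network(n) for n in
                   ("10.0.0.0/8", "127.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]

def validate_origin(ip, managed_cidrs, permitted_ips, edge_ips):
    """Decide whether `ip` may be used as a Spectrum origin.

    managed_cidrs, permitted_ips, and edge_ips are hypothetical
    stand-ins for Addressing API state.
    """
    addr = ipaddress.ip_address(ip)
    # Ranges unusable on the public Internet are never valid origins.
    if any(addr in net for net in SPECIAL_PURPOSE):
        return False
    # An existing Spectrum edge IP as origin could create a proxy cycle.
    if ip in edge_ips:
        return False
    # Unmanaged public addresses are always allowed (step 1b).
    if not any(addr in ipaddress.ip_network(c) for c in managed_cidrs):
        return True
    # Managed addresses require permission via ownership or delegation (step 2).
    return ip in permitted_ips

# The concrete example from the post: a delegated, non-edge Cloudflare IP.
print(validate_origin("104.16.8.54", ["104.16.0.0/12"], {"104.16.8.54"}, set()))  # True
```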
            <p>Now we have completely moved the responsibility of validating the use of origin IP addresses from Spectrum’s configuration service to the Addressing API.</p>
    <div>
      <h3>Performance</h3>
      <a href="#performance">
        
      </a>
    </div>
    <p>This approach required making another HTTP request on the critical path of every create-app request in the Spectrum configuration service. Some basic performance testing showed, as expected, increased response times for the API call (about 100 ms). This led to discussion among the Spectrum team about the performance impact of the different HTTP requests throughout the critical path. To investigate, we decided to use OpenTracing.</p><p><a href="https://opentracing.io/">OpenTracing</a> is a standard for distributed tracing of microservices. When an HTTP request is received, special headers are added to it that allow it to be traced across the different services. Within a given trace, we can see how long a SQL query took, the time a function took to complete, the amount of time a request spent at a given service, and more.</p><p>We have been deploying a tracing system across our services to provide more visibility into this complex system.</p><p>After instrumenting the Spectrum config service with OpenTracing, we were able to determine that the Addressing API accounted for a very small amount of time in the overall request, and we were able to identify potentially problematic request times to other services.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4hdvsPKhjS41OxLNPDltxj/27ccf978efbe73a81d47960aa6dcf981/image1-14.png" />
            
            </figure>
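<p>The header propagation at the heart of this can be sketched with the standard library alone. This is a simplified model; real OpenTracing tracers use their own header formats, and the <code>x-trace-id</code>/<code>x-parent-span-id</code> names here are made up:</p>

```python
import uuid

def start_trace():
    """Create a root trace context when a request first enters the system."""
    return {"trace-id": uuid.uuid4().hex, "span-id": uuid.uuid4().hex[:16]}

def inject(ctx, headers):
    """Copy the trace context into outbound HTTP headers so the next
    service can attach its own spans to the same trace."""
    headers = dict(headers)
    headers["x-trace-id"] = ctx["trace-id"]
    headers["x-parent-span-id"] = ctx["span-id"]
    return headers

# Every downstream call carries the same trace-id, letting the tracing
# backend stitch spans from different services into one timeline.
ctx = start_trace()
outbound = inject(ctx, {"Authorization": "Bearer ..."})
```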
    <div>
      <h3>Lessons Learned</h3>
      <a href="#lessons-learned">
        
      </a>
    </div>
    <p>Reading documentation is important! Having a good understanding of how the Addressing API and the config service worked allowed me to create and integrate an endpoint that made sense for my use-case.</p><p>Writing documentation is just as important. For the final part of my project, I had to onboard <a href="/project-crossbow-lessons-from-refactoring-a-large-scale-internal-tool/">Crossbow</a> – an internal Cloudflare tool used for diagnostics – to Spectrum, using the new features I had implemented. I had written an onboarding guide, but some steps were unclear during the onboarding process, so I made sure to gather feedback from the Crossbow team to improve the guide.</p><p>Finally, I learned not to underestimate the amount of complexity required to implement relatively simple validation logic. In fact, the implementation required understanding the entire system: how multiple microservices work together to validate the configuration, and how the data is moved from the Core to the Edge and then processed there. I found increasing my understanding of this system to be just as important and rewarding as completing the project.</p><p id="footnote"><sup>1</sup> <a href="/introducing-regional-services/">Regional Services</a> actually makes use of proxying a Spectrum connection to another colocation, and then proxying to the origin, but the configuration plane is not involved in this setup.</p>
            <category><![CDATA[Spectrum]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[Internship Experience]]></category>
            <category><![CDATA[Salt]]></category>
            <guid isPermaLink="false">2I8392q8zDjUlkCyIJ2vAk</guid>
            <dc:creator>Ryan Jacobs</dc:creator>
        </item>
        <item>
            <title><![CDATA[Bringing Your Own IPs to Cloudflare (BYOIP)]]></title>
            <link>https://blog.cloudflare.com/bringing-your-own-ips-to-cloudflare-byoip/</link>
            <pubDate>Thu, 30 Jul 2020 15:00:00 GMT</pubDate>
            <description><![CDATA[ Today we’re thrilled to announce general availability of Bring Your Own IP (BYOIP) across our Layer 7 products as well as Spectrum and Magic Transit services.  ]]></description>
            <content:encoded><![CDATA[ <p>Today we’re thrilled to announce general availability of Bring Your Own IP (BYOIP) across our Layer 7 products as well as <a href="https://www.cloudflare.com/products/cloudflare-spectrum/">Spectrum</a> and <a href="https://www.cloudflare.com/magic-transit/">Magic Transit</a> services. When BYOIP is configured, the Cloudflare edge will announce a customer’s own IP prefixes, and the prefixes can be used with our Layer 7 services, Spectrum, or Magic Transit. If you’re not familiar with the term, an IP prefix is a range of IP addresses. Routers create a table of reachable prefixes, known as a routing table, to ensure that packets are delivered correctly across the Internet.</p><p>As part of this announcement, we are listing BYOIP on the relevant product <a href="https://www.cloudflare.com/cdn/">pages</a> and in the <a href="https://developers.cloudflare.com/byoip/">developer documentation</a>, and adding UI support for controlling your prefixes. Previous support was API only.</p><p>Customers choose BYOIP with Cloudflare for a number of reasons. It may be the case that your IP prefix is already allow-listed in many important places, and updating firewall rules to also allow Cloudflare address space may represent a large administrative hurdle. Additionally, you may have hundreds of thousands, or even millions, of end users pointed directly to your IPs via DNS, and it would be hugely time consuming to get them all to update their records to point to Cloudflare IPs.</p><p>Over the last several quarters we have been building tooling and processes to support customers bringing their own IPs at scale. At the time of writing, we’ve successfully onboarded hundreds of customer IP prefixes. Of these, 84% have been for Magic Transit deployments, 14% for Layer 7 deployments, and 2% for Spectrum deployments.</p><p>When you BYOIP with Cloudflare, we announce your IP space in over 200 cities around the world and tie your IP prefix to the service (or services!) of your choosing. Your IP space will be protected and accelerated as if it were Cloudflare’s own. We can also support regional deployments for BYOIP prefixes if you have technical and/or legal requirements limiting where your prefixes can be announced, such as <a href="https://www.cloudflare.com/learning/privacy/what-is-data-sovereignty/">data sovereignty</a>.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4gcn5knjumg5Lmd32FpGFj/3eae9cf0c1335bf60324dc2d88e86b8b/IP-at-the-edge_2x.png" />
            
            </figure><p>You can turn on advertisement of your IPs from the Cloudflare edge with a click of a button and be live across the world in a matter of minutes.</p><p>All BYOIP customers receive <a href="/announcing-network-analytics/">network analytics</a> on their prefixes. Additionally all IPs in BYOIP prefixes can be considered static IPs. There are also benefits specific to the service you use with your IP prefix on Cloudflare.</p>
    <div>
      <h4>Layer 7 + BYOIP:</h4>
      <a href="#layer-7-byoip">
        
      </a>
    </div>
    <p>Cloudflare has a robust Layer 7 product portfolio, including products like Bot Management, Rate Limiting, Web Application Firewall, and Content Delivery, to name just a few. You can choose to BYOIP with our Layer 7 products and receive all of their benefits on your IP addresses.</p><p>For Layer 7 services, we can support a variety of IP-to-domain mapping requests, including sharing IPs between domains or putting domains on dedicated IPs, which can help meet requirements such as non-SNI support.</p><p>If you are also an SSL for SaaS customer, BYOIP gives you increased flexibility to change the IP address responses for <a href="https://developers.cloudflare.com/ssl/ssl-for-saas/status-codes/custom-hostnames/"><code>custom_hostnames</code></a> in the event an IP is unserviceable for some reason.</p>
    <div>
      <h4>Spectrum + BYOIP:</h4>
      <a href="#spectrum-byoip">
        
      </a>
    </div>
    <p>Spectrum is Cloudflare’s solution to protect and accelerate applications that run any UDP or TCP protocol. The Spectrum <a href="https://developers.cloudflare.com/spectrum/getting-started/byoip/">API</a> supports BYOIP today. Spectrum customers who use BYOIP can specify, through Spectrum’s API, which IPs they would like associated with a Spectrum application.</p>
    <div>
      <h4>Magic Transit + BYOIP:</h4>
      <a href="#magic-transit-byoip">
        
      </a>
    </div>
    <p>Magic Transit is a Layer 3 security service that protects your network by announcing your IP addresses and attracting your network traffic to the Cloudflare edge for processing. Magic Transit supports sophisticated packet filtering and firewall configurations. BYOIP is a requirement for using Magic Transit: as it is an IP-level service, Cloudflare must be able to announce your IPs in order to provide it.</p>
    <div>
      <h3>Bringing Your IPs to Cloudflare: What is Required?</h3>
      <a href="#bringing-your-ips-to-cloudflare-what-is-required">
        
      </a>
    </div>
    <p>Before Cloudflare can announce your prefix we require some documentation to get started. The first is something called a ‘Letter of Authorization’ (LOA), which details information about your prefix and how you want Cloudflare to announce it. We then share this document with our Tier 1 transit providers in advance of provisioning your prefix. This step is done to ensure that Tier 1s are aware we have authorization to announce your prefixes.</p><p>Secondly, we require that your Internet Routing Registry (IRR) records are up to date and reflect the data in the LOA. This typically means ensuring the entry in your regional registry is updated (i.e. ARIN, RIPE, APNIC).</p><p>Once the administrivia is out of the way, work with your account team to learn when your prefixes will be ready to announce.</p><p>We also encourage customers to use <a href="/tag/rpki/">RPKI</a> and can support this for customer prefixes. We have blogged and built extensive tooling to make adoption of this protocol easier. If you’re interested in BYOIP with RPKI support just let your account team know!</p>
    <div>
      <h3>Configuration</h3>
      <a href="#configuration">
        
      </a>
    </div>
    <p>Each customer prefix can be announced via the ‘dynamic advertisement’ toggle in either the UI or <a href="https://api.cloudflare.com/#ip-address-management-dynamic-advertisement-properties">API</a>, which will cause the Cloudflare edge to either announce or withdraw a prefix on your behalf. This can be done as soon as your account team lets you know your prefixes are ready to go.</p><p>Once the IPs are ready to be announced, you may want to set up ‘delegations’ for your prefixes. Delegations manage how the prefix can be used across multiple Cloudflare accounts and have slightly different implications depending on which service your prefix is bound to. A prefix is owned by a single account, but a delegation can extend some of the prefix functionality to other accounts. This is also captured on our developer docs. Today, delegations can affect Layer 7 and Spectrum BYOIP prefixes.</p>
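    <p>If you drive the toggle from a script rather than the UI, the call can be sketched like this. The <code>bgp/status</code> path and <code>advertised</code> field follow the linked API documentation, but treat this as a sketch and verify the exact request shape against those docs:</p>

```python
import json
import urllib.request

API = "https://api.cloudflare.com/client/v4"

def advertisement_request(account_id, prefix_id, token, advertised):
    """Build the PATCH that toggles dynamic advertisement for a BYOIP prefix.

    Pass the result to urllib.request.urlopen() to actually send it.
    """
    url = f"{API}/accounts/{account_id}/addressing/prefixes/{prefix_id}/bgp/status"
    body = json.dumps({"advertised": advertised}).encode()
    return urllib.request.Request(
        url, data=body, method="PATCH",
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"})

# Placeholder identifiers; substitute your own account, prefix, and token.
req = advertisement_request("ACCOUNT_ID", "PREFIX_ID", "API_TOKEN", True)
```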
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5gqLS8zRdrCwoB8nbDw2r0/a3292109c4ab4345964eb6775908ca59/delegation-BYOIP_2x.png" />
            
            </figure><p>Layer 7: If you use BYOIP + Layer 7 and also use the <a href="https://developers.cloudflare.com/ssl/ssl-for-saas">SSL for SaaS</a> service, a delegation to another account will allow that account to also use that prefix to validate custom hostnames in addition to the original account which owns the prefix. This means that multiple accounts can use the same IP prefix to serve up custom hostname traffic. Additionally, all of your IPs can serve traffic for custom hostnames, which means you can easily change IP addresses for these hostnames if an IP is blocked for any reason.</p><p>Spectrum: If you used BYOIP + Spectrum, via the <a href="https://developers.cloudflare.com/spectrum/getting-started/byoip/">Spectrum API</a>, you can specify which IP in your prefix you want to create a Spectrum app with. If you create a delegation for prefix to another account, that second account will also be able to specify an IP from that prefix to create an app.</p><p>If you are interested in learning more about BYOIP across either Magic Transit, CDN, or Spectrum, please reach out to your account team if you’re an existing customer or contact <a>sales@cloudflare.com</a> if you’re a new prospect.</p> ]]></content:encoded>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[DNS]]></category>
            <category><![CDATA[Magic Transit]]></category>
            <category><![CDATA[Spectrum]]></category>
            <guid isPermaLink="false">6t1GxKr7I9LZIpahhFVJPQ</guid>
            <dc:creator>Tom Brightbill</dc:creator>
        </item>
        <item>
            <title><![CDATA[Cloudflare for SSH, RDP and Minecraft]]></title>
            <link>https://blog.cloudflare.com/cloudflare-for-ssh-rdp-and-minecraft/</link>
            <pubDate>Mon, 13 Apr 2020 19:06:38 GMT</pubDate>
            <description><![CDATA[ Cloudflare now covers SSH, RDP and Minecraft, offering DDoS protection and increased network performance. Spectrum pay-as-you-go now available on all paid plans. ]]></description>
            <content:encoded><![CDATA[ <p></p><p>Almost exactly two years ago, we <a href="/spectrum/">launched Cloudflare Spectrum</a> for our Enterprise customers. Today, we’re thrilled to extend DDoS protection and traffic acceleration with Spectrum for <a href="https://www.cloudflare.com/products/cloudflare-spectrum/ssh">SSH</a>, <a href="https://www.cloudflare.com/products/cloudflare-spectrum/rdp/">RDP</a>, and <a href="https://www.cloudflare.com/products/cloudflare-spectrum/minecraft">Minecraft</a> to our Pro and Business plan customers.</p><p>When we think of Cloudflare, a lot of the time we think about protecting and improving the performance of websites. But the Internet is so much more, ranging from gaming, to managing servers, to cryptocurrencies. How do we make sure these applications are secure and performant?</p><p>With Spectrum, you can put Cloudflare in front of your SSH, RDP and Minecraft services, protecting them from DDoS attacks and improving network performance. This allows you to protect the management of your servers, not just your website. Better yet, by leveraging the Cloudflare network you also get increased reliability and increased performance: lower latency!</p>
    <div>
      <h3>Remote access to servers</h3>
      <a href="#remote-access-to-servers">
        
      </a>
    </div>
    <p>While access to websites from home is incredibly important, being able to remotely manage your servers can be equally critical. Losing access to your infrastructure can be disastrous: people need to know their infrastructure is safe and connectivity is good and performant. Usually, server management is done through SSH (Linux or Unix based servers) and RDP (Windows based servers). With these protocols, performance and reliability are <i>key</i>: you need to know you can always reliably manage your servers and that the bad people are kept out. What's more, low latency is really important. Every time you type a key in an SSH terminal or click a button in a remote desktop session, that key press or button click has to traverse the Internet to your origin before the server can process the input and send feedback. While increasing bandwidth can help, lowering latency can help even more in getting your sessions to feel like you're working on a local machine and not one half-way across the globe.</p>
    <div>
      <h3>All work and no play makes <s>Jack</s> Steve a dull boy</h3>
      <a href="#all-work-and-no-play-makes-jack-steve-a-dull-boy">
        
      </a>
    </div>
    <p>While we stay at home, many of us are also looking to play and not only work. Video games in particular have seen a huge <a href="https://en.wikipedia.org/wiki/Impact_of_the_2019%E2%80%9320_coronavirus_pandemic_on_the_video_game_industry">increase in popularity</a>. As personal interaction becomes more difficult to come by, Minecraft has become a popular social outlet. Many of us at Cloudflare are using it to stay in touch and have fun with friends and family in the current age of quarantine. And it’s not just employees at Cloudflare that feel this way; we’ve seen a big increase in Minecraft traffic flowing through our network. Traffic per week had remained steady for a while but has more than tripled since many countries have put their citizens in lockdown:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1AXaLJtErEqZ3dQb1wJguw/3453333c635763235c74f16688887d12/image3-10.png" />
            
            </figure><p>Minecraft is a particularly popular target for DDoS attacks: it's not uncommon for people to develop feuds whilst playing the game. When they do, some of the more tech-savvy players of this game opt to take matters into their own hands and launch a (D)DoS attack, rendering it unusable for the duration of the attacks. Our friends at Hypixel and Nodecraft have known this for many years, which is why they’ve chosen to protect their servers using Spectrum.</p><p>While we love recommending their services, we realize some of you prefer to run your own Minecraft server on a VPS (virtual private server like a DigitalOcean droplet) that you maintain. To help you protect your Minecraft server, we're providing Spectrum for Minecraft as well, available on Pro and Business plans. You'll be able to use the entire Cloudflare network to protect your server and increase network performance.</p>
    <div>
      <h3>How does it work?</h3>
      <a href="#how-does-it-work">
        
      </a>
    </div>
    <p>Configuring Spectrum is easy, just log into your dashboard and head on over to the Spectrum tab. From there you can choose a protocol and configure the IP of your server:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2AVtgjvLEEdw2qPCABTjyg/a8ff0da389d7d9cc6defa51e899f98c4/image2-10.png" />
            
            </figure><p>After that all you have to do is use the subdomain you configured to connect instead of your IP. Traffic will be proxied using Spectrum on the Cloudflare network, keeping the bad people out and your services safe.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4tBzB1f6aFcCSNo9awsyXQ/6cb1e87075fef67a752f99248f8745ad/image1-13.png" />
            
            </figure><p>So how much does this cost? We're happy to announce that <a href="https://www.cloudflare.com/plans/">all paid plans</a> will get access to Spectrum for free, with a generous free data allowance. Pro plans will be able to use SSH and Minecraft, up to 5 gigabytes for free each month. Biz plans can go up to 10 gigabytes for free and also get access to RDP. After the free cap you will be billed on a per <a href="https://support.cloudflare.com/hc/en-us/articles/360041721872">gigabyte basis</a>.</p><p>Spectrum is complementary to Access: it offers DDoS protection and improved network performance as a 'drop-in' product, no configuration necessary on your origins. If you want more control over who has access to which services, we highly recommend taking a look at <a href="https://teams.cloudflare.com/access/">Cloudflare for Teams</a>.</p><p>We're very excited to extend Cloudflare's services to not just HTTP traffic, allowing you to protect your core management services and Minecraft gaming servers. In the future, we'll add support for more protocols. If you have a suggestion, let us know! In the meantime, if you have a Pro or Business account, head on over to the dashboard and enable Spectrum today!</p> ]]></content:encoded>
            <category><![CDATA[Spectrum]]></category>
            <category><![CDATA[SSH]]></category>
            <category><![CDATA[DDoS]]></category>
            <category><![CDATA[Speed & Reliability]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[Security]]></category>
            <guid isPermaLink="false">5BPg140vG7il7lQnjTUQr7</guid>
            <dc:creator>Achiel van der Mandele</dc:creator>
        </item>
        <item>
            <title><![CDATA[Project Crossbow: Lessons from Refactoring a Large-Scale Internal Tool]]></title>
            <link>https://blog.cloudflare.com/project-crossbow-lessons-from-refactoring-a-large-scale-internal-tool/</link>
            <pubDate>Tue, 07 Apr 2020 07:00:00 GMT</pubDate>
            <description><![CDATA[ Crossbow is a tool that now allows Cloudflare’s Technical Support Engineers to perform diagnostic activities, from running commands (like traceroutes, cURL requests and DNS queries) to debugging product features and performance using bespoke tools. ]]></description>
            <content:encoded><![CDATA[ 
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4DD8CfPAPxEZXvXNS1u6cW/46988ed6b45099cd2827d63a190ee721/Crossbow-tool_2x-1.png" />
          </figure><p>Cloudflare’s <a href="https://www.cloudflare.com/network/">global network</a> currently spans 200 cities in more than 90 countries. Engineers working in product, technical support and operations often need to be able to debug network issues from particular locations or individual servers.</p><p>Crossbow is the internal tool for doing just this, allowing Cloudflare’s Technical Support Engineers to perform diagnostic activities, from running commands (like traceroutes, cURL requests and DNS queries) to debugging product features and performance using bespoke tools.</p><p>In September last year, an Engineering Manager at Cloudflare asked to transition Crossbow from a Product Engineering team to the Support Operations team. The tool had been a secondary focus and had been passed through multiple engineering teams, none of which had developed subject-matter knowledge of it.</p><p>The Support Operations team at Cloudflare is closely aligned with Cloudflare’s Technical Support Engineers, developing diagnostic tooling and Natural Language Processing technology to drive efficiency. Based on this alignment, it was decided that Support Operations was the best team to own this tool.</p>
    <div>
      <h3>Learning from Sisyphus</h3>
      <a href="#learning-from-sisyphus">
        
      </a>
    </div>
    <p>Whilst seeking advice on the transition process, an SRE Engineering Manager in Cloudflare suggested reading: “<a href="https://landing.google.com/sre/resources/practicesandprocesses/case-study-community-driven-software-adoption/">A Case Study in Community-Driven Software Adoption</a>”. This book proved a truly invaluable read for anyone thinking of doing internal tool development or contributing to such tooling. The book describes why multiple tools are often created for the same purpose by different autonomous teams and how this issue can be overcome. The book also describes challenges and approaches to gaining adoption of tooling, especially where this requires some behaviour change for engineers who use such tools.</p><p>That said, there are some things we learnt along the way of taking over Crossbow and performing a refactor and revamp of a large-scale internal tool. This blog post seeks to be an addendum to such guidance and provide some further practical advice.</p><p>In this blog post we won’t dwell too much on the work of the Cloudflare Support Operations team, but this can be found in the SRECon talk: “<a href="https://www.usenix.org/conference/srecon19emea/presentation/ali">Support Operations Engineering: Scaling Developer Products to the Millions</a>”. The software development methodology used in Cloudflare’s Support Operations Group closely resembles <a href="http://www.extremeprogramming.org/">Extreme Programming</a>.</p>
    <div>
      <h3>Cutting The Fat</h3>
      <a href="#cutting-the-fat">
        
      </a>
    </div>
    <p>There were two ways of using Crossbow: a CLI (command line interface) and a UI embedded in Cloudflare’s internal tool for Technical Support Engineers. Maintaining both interfaces clearly had significant overhead for improvement efforts, and we took the decision to deprecate one of them. This allowed us to focus our efforts on one platform to achieve large-scale improvements across technology, usability and functionality.</p><p>We set up a poll to allow engineering, operations, solutions engineering and technical support teams to provide their feedback on how they used the tooling. Polling was not only critical for gaining vital information on how different teams used the tool, but also ensured that, prior to deprecation, people knew their views were taken on board. We polled not only on the option people preferred, but also on which options they felt were necessary to them and why.</p><p>We found that the reasons for favouring the web UI primarily revolved around the absence of documentation and training. By contrast, we discovered those who used the CLI found it far more critical for their workflow. Product Engineering teams do not routinely have access to the support UI, but some found it necessary to use Crossbow for their jobs, and users wanted to be able to automate commands with shell scripts.</p><p>Technically, the UI was written in JavaScript with an <a href="https://www.cloudflare.com/learning/security/api/what-is-an-api-gateway/">API Gateway</a> service that converted HTTP requests to gRPC, alongside some configuration to allow it to work in the support UI. The CLI directly interfaced with the gRPC API, so it was a simpler system. Given the Cloudflare Support Operations team primarily works on Systems Engineering projects and had limited UI resources, the decision to deprecate the UI was also in our own interest.</p><p>We rolled out a new internal Crossbow user group, trained up teams, created new documentation, provided advance notification of deprecation, and retired the source code of these services. We also dramatically improved the CLI user experience through simple improvements to the help information and simpler command usage.</p>
    <div>
      <h3>Rearchitecting Pub/Sub with Cloudflare Access</h3>
      <a href="#rearchitecting-pub-sub-with-cloudflare-access">
        
      </a>
    </div>
    <p>One of the primary challenges we encountered was that the system architecture for Crossbow had been designed many years ago: a gRPC API ran commands at Cloudflare’s edge network using a configuration management tool which the SRE team had expressed a desire to deprecate (with Crossbow being its last user).</p><p>During a visit to the Singapore office, the local Edge SRE Engineering Manager wanted his team to understand Crossbow and how to contribute to it. During this meeting, we provided an overview of the current architecture, and the team there were forthcoming in providing potential refactoring ideas to handle global network stability and move away from the old pipeline. This provided invaluable insight into the common issues experienced with previous technical approaches and the instances where the tool would fail, requiring Technical Support Engineers to consult the SRE team.</p><p>We decided to adopt a simpler pub/sub pipeline: instead, the edge network would expose a gRPC daemon that would listen for new jobs, execute them, and then make a callback to the API service with the results (which would be relayed on to the client).</p><p>For authentication between the API service and the client, or the API service and the network edge, we implemented a <a href="https://developers.cloudflare.com/access/setting-up-access/json-web-token/">JWT authentication</a> scheme. For a CLI user, authentication was done by querying an HTTP endpoint behind Cloudflare Access <a href="https://developers.cloudflare.com/access/cli/connecting-from-cli/">using cloudflared</a>, which provided a JWT the client could use for <a href="https://grpc.io/docs/guides/auth/">authentication with gRPC</a>. 
In practice, this looks something like this:</p><ol><li><p>CLI makes request to authentication server using cloudflared</p></li><li><p>Authentication server responds with signed JWT token</p></li><li><p>CLI makes gRPC request with JWT authentication token to API service</p></li><li><p>API service validates token using a public key</p></li></ol><p>The gRPC API endpoint was placed on <a href="https://www.cloudflare.com/products/cloudflare-spectrum/">Cloudflare Spectrum</a>; as users were authenticated using Cloudflare Access, we could remove the requirement for users to be on the company VPN to use the tool. The new authentication pipeline, combined with a single user interface, also allowed us to improve the collection of metrics and usage logs of the tool.</p>
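<p>The four steps above can be sketched with a minimal sign/verify pair. This is an illustrative, stdlib-only sketch that uses an HMAC (HS256-style) signature for brevity; the real pipeline validates Access-issued tokens with a public key, and the claim names here are hypothetical:</p>

```python
import base64, hashlib, hmac, json, time

def b64url(data: bytes) -> str:
    # JWTs use unpadded URL-safe base64 for all three segments.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_jwt(claims: dict, key: bytes) -> str:
    # Steps 1-2: the authentication server issues a signed token.
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url(json.dumps(claims).encode())
    sig = b64url(hmac.new(key, f"{header}.{payload}".encode(), hashlib.sha256).digest())
    return f"{header}.{payload}.{sig}"

def verify_jwt(token: str, key: bytes) -> dict:
    # Steps 3-4: the API service checks the signature and expiry
    # before serving the gRPC request.
    header, payload, sig = token.split(".")
    expected = b64url(hmac.new(key, f"{header}.{payload}".encode(), hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):
        raise ValueError("bad signature")
    claims = json.loads(base64.urlsafe_b64decode(payload + "=" * (-len(payload) % 4)))
    if claims.get("exp", 0) < time.time():
        raise ValueError("token expired")
    return claims

key = b"demo-secret"
token = sign_jwt({"sub": "cli-user", "exp": time.time() + 300}, key)
claims = verify_jwt(token, key)
```

Swapping the HMAC for an RSA or ECDSA signature is what lets the API service verify with only a public key, so it never needs to hold the signing secret.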
    <div>
      <h3>Risk Management</h3>
      <a href="#risk-management">
        
      </a>
    </div>
    <blockquote><p>Risk is inherent in the activities undertaken by engineering professionals, meaning that members of the profession have a significant role to play in managing and limiting it.
- <a href="https://www.engc.org.uk/standards-guidance/guidance/guidance-on-risk/">Guidance on Risk</a>, Engineering Council</p></blockquote><p>As with all engineering projects, it was critical to manage risk. However, the risk to manage is different for different engineering projects. Availability wasn’t the largest factor, given that Technical Support Engineers could escalate issues to the SRE team if the tool wasn’t available. The main risk was security of the Cloudflare network and ensuring Crossbow did not affect the availability of any other services. To this end we took methodical steps to improve isolation and engaged the InfoSec team early to assist with specification and code reviews of the new pipeline. Where a risk to availability existed, we ensured this was properly communicated to the support team and the internal Crossbow user group to communicate the risk/reward that existed.</p>
    <div>
      <h3>Feedback, Build, Refactor, Measure</h3>
      <a href="#feedback-build-refactor-measure">
        
      </a>
    </div>
    <p>The Support Operations team at Cloudflare works using a methodology based on Extreme Programming. A key tenet of Extreme Programming is Test Driven Development, often described as a “red-green-green” pattern or “<a href="https://www.codecademy.com/articles/tdd-red-green-refactor">red-green-refactor</a>”. First the engineer enshrines the requirements in tests, then they make those tests pass, and then they refactor to improve code quality before pushing the software.</p><p>As we took on this project, the Cloudflare Support and SRE teams were working on Project Baton - an effort to allow Technical Support Engineers to handle more customer escalations without handover to the SRE teams.</p><p>As part of this effort, they had already created an invaluable resource in the form of a feature wish list for Crossbow. We associated JIRAs with all these items and prioritised this work to deliver such feature requests using a Test Driven Development workflow and the introduction of Continuous Integration. Critically, we measured such improvements once deployed. Adding simple functionality like support for MTR (a Linux network diagnostic tool) and exposing support for different cURL flags provided improvements in usage.</p><p>We were also able to embed Crossbow support for other tools available at the network edge created by other teams, allowing them to maintain such tools and expose features to Crossbow users. Through the creation of an improved development environment and documentation, we were able to drive Product Engineering teams to contribute functionality that was in the mutual interest of them and the customer support team.</p><p>Finally, we owned a number of tools which were used by Technical Support Engineers to discover what Cloudflare configuration was applied to a given URL and to perform distributed performance testing; we deprecated these tools and rolled them into Crossbow. Another tool, Edge Worker Debug, owned by the <a href="https://workers.cloudflare.com/">Cloudflare Workers</a> team, was also rolled into Crossbow, and that team deprecated their standalone tool.</p>
    <div>
      <h3>Results</h3>
      <a href="#results">
        
      </a>
    </div>
    <p>From the implementation of user analytics on the tool on 16 December 2019 to the week ending 22 January 2020, we found a usage increase of 4.5x. This growth primarily happened within a 4-week period; by adding the most wanted functionality, we were able to achieve a critical saturation of usage amongst Technical Support Engineers.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/48JaaCMLrwL9EWw9WtDNP5/4958cf9e7a94f0759983f475ca168d4b/image1.png" />
          </figure><p>Beyond this point, it became critical to use the number of checks being run as a metric to evaluate how useful the tool was. For example, the week starting January 27 saw no meaningful increase in unique users (a 14% usage increase over the previous week - within the normal fluctuation of stable usage). However, over the same timeframe, we saw a 2.6x increase in the number of tests being run - coinciding with the introduction of a number of new high-usage functionalities.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1ndjgHwGG3rVpuwOczuc6L/8158b2a71dbf87aa7dfeaf4650118ed8/pasted-image-0--6-.png" />
          </figure>
    <div>
      <h3>Conclusion</h3>
      <a href="#conclusion">
        
      </a>
    </div>
    <p>Through removing low-value/high-maintenance functionality and merciless refactoring, we were able to dramatically improve the quality of Crossbow and therefore the velocity of delivery. We were able to drive up usage by measuring how the tool was used, gathering feature requests in feedback loops with users, and practising test-driven development. Consolidation of tooling reduced the overhead of developing support tooling across the business, providing a common framework for developing and exposing functionality for Technical Support Engineers.</p><p>There are two key counterintuitive learnings from this project. The first is that cutting functionality can drive usage, provided this is done intelligently. In our case, the web UI contained no additional functionality that wasn’t in the CLI, yet caused substantial engineering overhead for maintenance. By deprecating this functionality, we were able to reduce technical debt and thereby improve the velocity of delivering more important functionality. This effort requires effective communication of the decision-making process and involvement from those who are impacted by such a decision.</p><p>Secondly, tool development efforts are often guided by user feedback but lack a means of objectively measuring such improvements. When logging is added, it is often done purely for security and audit purposes. Whilst feedback loops with users are invaluable, it is critical to have an objective measure of how successful a feature is and how it is used. 
Effective measurement drives the decision making process of future tooling and therefore, in the long run, the usage data can be more important than the original feature itself.</p><p>If you're interested in debugging interesting technical problems on a network with these tools, we're hiring for <a href="https://www.cloudflare.com/careers/jobs/?department=Customer+Support">Support Engineers</a> (including Security Operations, Technical Support and Support Operations Engineering) in San Francisco, Austin, Champaign, London, Lisbon, Munich and Singapore.</p> ]]></content:encoded>
            <category><![CDATA[Tools]]></category>
            <category><![CDATA[Cloudflare Access]]></category>
            <category><![CDATA[Support]]></category>
            <category><![CDATA[Spectrum]]></category>
            <guid isPermaLink="false">17EMKPLIbfOeVwXlT4cDK8</guid>
            <dc:creator>Junade Ali</dc:creator>
            <dc:creator>Peter Weaver</dc:creator>
        </item>
        <item>
            <title><![CDATA[When TCP sockets refuse to die]]></title>
            <link>https://blog.cloudflare.com/when-tcp-sockets-refuse-to-die/</link>
            <pubDate>Fri, 20 Sep 2019 15:53:33 GMT</pubDate>
            <description><![CDATA[ We noticed something weird - the TCP sockets which we thought should have been closed - were lingering around. We realized we don't really understand when TCP sockets are supposed to time out!

We naively thought enabling TCP keepalives would be enough... but it isn't! ]]></description>
            <content:encoded><![CDATA[ <p>While working on our <a href="https://www.cloudflare.com/products/cloudflare-spectrum/">Spectrum server</a>, we noticed something weird: the TCP sockets which we thought should have been closed were lingering around. We realized we don't really understand when TCP sockets are supposed to time out!</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4J7NyByY5rLMwGCjildxuX/a80e344de39529860fe89230fff4259c/Tcp_state_diagram_fixed_new.svga.png" />
            
            </figure><p><a href="https://commons.wikimedia.org/wiki/File:Tcp_state_diagram_fixed_new.svg">Image</a> by Sergiodc2 CC BY SA 3.0</p><p>In our code, we wanted to make sure we don't hold connections to dead hosts. In our early code we naively thought enabling TCP keepalives would be enough... but it isn't. It turns out a fairly modern <a href="https://tools.ietf.org/html/rfc5482">TCP_USER_TIMEOUT</a> socket option is equally important. Furthermore, it interacts with TCP keepalives in subtle ways. <a href="http://codearcana.com/posts/2015/08/28/tcp-keepalive-is-a-lie.html">Many people</a> are confused by this.</p><p>In this blog post, we'll try to show how these options work. We'll show how a TCP socket can time out during various stages of its lifetime, and how TCP keepalives and user timeout influence that. To better illustrate the internals of TCP connections, we'll mix the outputs of the <code>tcpdump</code> and the <code>ss -o</code> commands. This nicely shows the transmitted packets and the changing parameters of the TCP connections.</p>
    <div>
      <h2>SYN-SENT</h2>
      <a href="#syn-sent">
        
      </a>
    </div>
    <p>Let's start from the simplest case - what happens when one attempts to establish a connection to a server which discards inbound SYN packets?</p><p>The scripts used here <a href="https://github.com/cloudflare/cloudflare-blog/tree/master/2019-09-tcp-keepalives">are available on our GitHub</a>.</p><p><code>$ sudo ./test-syn-sent.py
# all packets dropped
00:00.000 IP host.2 &gt; host.1: Flags [S] # initial SYN

State    Recv-Q Send-Q Local:Port Peer:Port
SYN-SENT 0      1      host:2     host:1    timer:(on,940ms,0)

00:01.028 IP host.2 &gt; host.1: Flags [S] # first retry
00:03.044 IP host.2 &gt; host.1: Flags [S] # second retry
00:07.236 IP host.2 &gt; host.1: Flags [S] # third retry
00:15.427 IP host.2 &gt; host.1: Flags [S] # fourth retry
00:31.560 IP host.2 &gt; host.1: Flags [S] # fifth retry
01:04.324 IP host.2 &gt; host.1: Flags [S] # sixth retry
02:10.000 connect ETIMEDOUT</code></p><p>Ok, this was easy. After the <code>connect()</code> syscall, the operating system sends a SYN packet. Since it didn't get any response the OS will by default retry sending it 6 times. This can be tweaked by the sysctl:</p><p><code>$ sysctl net.ipv4.tcp_syn_retries
net.ipv4.tcp_syn_retries = 6</code></p><p>It's possible to overwrite this setting per-socket with the TCP_SYNCNT setsockopt:</p><p><code>int syncnt = 6;
setsockopt(sd, IPPROTO_TCP, TCP_SYNCNT, &amp;syncnt, sizeof(syncnt));</code></p><p>The retries are staggered at 1s, 3s, 7s, 15s, 31s, 63s marks (the inter-retry time starts at 2s and then doubles each time). By default, the whole process takes 130 seconds, until the kernel gives up with the ETIMEDOUT errno. At this moment in the lifetime of a connection, SO_KEEPALIVE settings are ignored, but TCP_USER_TIMEOUT is not. For example, setting it to 5000ms will cause the following interaction:</p><p><code>$ sudo ./test-syn-sent.py 5000
# all packets dropped
00:00.000 IP host.2 &gt; host.1: Flags [S] # initial SYN

State    Recv-Q Send-Q Local:Port Peer:Port
SYN-SENT 0      1      host:2     host:1    timer:(on,996ms,0)

00:01.016 IP host.2 &gt; host.1: Flags [S] # first retry
00:03.032 IP host.2 &gt; host.1: Flags [S] # second retry
00:05.016 IP host.2 &gt; host.1: Flags [S] # what is this?
00:05.024 IP host.2 &gt; host.1: Flags [S] # what is this?
00:05.036 IP host.2 &gt; host.1: Flags [S] # what is this?
00:05.044 IP host.2 &gt; host.1: Flags [S] # what is this?
00:05.050 connect ETIMEDOUT</code></p><p>Even though we set the user timeout to 5s, we still saw six SYN retries on the wire. This behaviour is probably a bug (as tested on a 5.2 kernel): we would expect only two retries to be sent - at the 1s and 3s marks - and the socket to expire at the 5s mark. Instead, we saw the two expected retries, but also a further four retransmitted SYN packets aligned to the 5s mark - which makes no sense. Anyhow, we learned a thing - TCP_USER_TIMEOUT does affect the behaviour of <code>connect()</code>.</p>
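<p>For completeness, here is how TCP_USER_TIMEOUT can be set from Python on Linux. The constant is exposed by the socket module on recent Pythons; falling back to the raw option number 18 for older builds is our assumption:</p>

```python
import socket

# TCP_USER_TIMEOUT is Linux-specific; 18 is its option number on Linux.
TCP_USER_TIMEOUT = getattr(socket, "TCP_USER_TIMEOUT", 18)

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# Abort the connection if transmitted data stays unacknowledged
# (or, as shown above, a connect() stays unanswered) beyond 5000 ms.
s.setsockopt(socket.IPPROTO_TCP, TCP_USER_TIMEOUT, 5000)
```

The option must be set before <code>connect()</code> for it to influence the handshake.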
    <div>
      <h2>SYN-RECV</h2>
      <a href="#syn-recv">
        
      </a>
    </div>
    <p>SYN-RECV sockets are usually hidden from the application. They live as mini-sockets on the SYN queue. We wrote about <a href="/syn-packet-handling-in-the-wild/">the SYN and Accept queues in the past</a>. Sometimes, when SYN cookies are enabled, the sockets may skip the SYN-RECV state altogether.</p><p>In SYN-RECV state, the socket will retry sending SYN+ACK 5 times as controlled by:</p><p><code>$ sysctl net.ipv4.tcp_synack_retries
net.ipv4.tcp_synack_retries = 5</code></p><p>Here is how it looks on the wire:</p><p><code>$ sudo ./test-syn-recv.py
00:00.000 IP host.2 &gt; host.1: Flags [S]
# all subsequent packets dropped
00:00.000 IP host.1 &gt; host.2: Flags [S.] # initial SYN+ACK

State    Recv-Q Send-Q Local:Port Peer:Port
SYN-RECV 0      0      host:1     host:2    timer:(on,996ms,0)

00:01.033 IP host.1 &gt; host.2: Flags [S.] # first retry
00:03.045 IP host.1 &gt; host.2: Flags [S.] # second retry
00:07.301 IP host.1 &gt; host.2: Flags [S.] # third retry
00:15.493 IP host.1 &gt; host.2: Flags [S.] # fourth retry
00:31.621 IP host.1 &gt; host.2: Flags [S.] # fifth retry
01:04.610 SYN-RECV disappears</code></p><p>With default settings, the SYN+ACK is re-transmitted at 1s, 3s, 7s, 15s, 31s marks, and the SYN-RECV socket disappears at the 64s mark.</p><p>Neither SO_KEEPALIVE nor TCP_USER_TIMEOUT affects the lifetime of SYN-RECV sockets.</p>
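<p>Those marks follow the usual exponential backoff - the retransmission timeout starts at 1s and doubles after every attempt - which a few lines of Python can reproduce:</p>

```python
def retrans_marks(retries: int, initial_rto: float = 1.0) -> list:
    """Seconds after the initial packet at which each retransmission
    fires, assuming the RTO starts at `initial_rto` and doubles."""
    marks, elapsed, rto = [], 0.0, initial_rto
    for _ in range(retries):
        elapsed += rto       # wait one RTO, then retransmit
        marks.append(elapsed)
        rto *= 2             # classic exponential backoff
    return marks

# tcp_synack_retries = 5 reproduces the 1/3/7/15/31 s trace above;
# after one final (doubled) wait the kernel drops the SYN-RECV
# socket around the 64 s mark.
print(retrans_marks(5))  # [1.0, 3.0, 7.0, 15.0, 31.0]
```

The same schedule, run one step further, gives the 63s mark seen for SYN retries in the previous section.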
    <div>
      <h2>Final handshake ACK</h2>
      <a href="#final-handshake-ack">
        
      </a>
    </div>
    <p>After receiving the second packet in the TCP handshake - the SYN+ACK - the client socket moves to an ESTABLISHED state. The server socket remains in SYN-RECV until it receives the final ACK packet.</p><p>Losing this ACK doesn't change anything - the server socket will just take a bit longer to move from SYN-RECV to ESTAB. Here is how it looks:</p><p><code>00:00.000 IP host.2 &gt; host.1: Flags [S]
00:00.000 IP host.1 &gt; host.2: Flags [S.]
00:00.000 IP host.2 &gt; host.1: Flags [.] # initial ACK, dropped

State    Recv-Q Send-Q Local:Port  Peer:Port
SYN-RECV 0      0      host:1      host:2 timer:(on,1sec,0)
ESTAB    0      0      host:2      host:1

00:01.014 IP host.1 &gt; host.2: Flags [S.]
00:01.014 IP host.2 &gt; host.1: Flags [.]  # retried ACK, dropped

State    Recv-Q Send-Q Local:Port Peer:Port
SYN-RECV 0      0      host:1     host:2    timer:(on,1.012ms,1)
ESTAB    0      0      host:2     host:1</code></p><p>As you can see, SYN-RECV has the "on" timer, the same as in the example before. We might argue this final ACK doesn't really carry much weight. This thinking led to the development of the TCP_DEFER_ACCEPT feature - it basically causes the third ACK to be silently dropped. With this flag set, the socket remains in the SYN-RECV state until it receives the first packet with actual data:</p><p><code>$ sudo ./test-syn-ack.py
00:00.000 IP host.2 &gt; host.1: Flags [S]
00:00.000 IP host.1 &gt; host.2: Flags [S.]
00:00.000 IP host.2 &gt; host.1: Flags [.] # delivered, but the socket stays as SYN-RECV

State    Recv-Q Send-Q Local:Port Peer:Port
SYN-RECV 0      0      host:1     host:2    timer:(on,7.192ms,0)
ESTAB    0      0      host:2     host:1

00:08.020 IP host.2 &gt; host.1: Flags [P.], length 11  # payload moves the socket to ESTAB

State Recv-Q Send-Q Local:Port Peer:Port
ESTAB 11     0      host:1     host:2
ESTAB 0      0      host:2     host:1</code></p><p>The server socket remained in the SYN-RECV state even after receiving the final TCP-handshake ACK. It has a funny "on" timer, with the counter stuck at 0 retries. It is converted to ESTAB - and moved from the SYN to the accept queue - after the client sends a data packet or after the TCP_DEFER_ACCEPT timer expires. Basically, with DEFER ACCEPT the SYN-RECV mini-socket <a href="https://marc.info/?l=linux-netdev&amp;m=118793048828251&amp;w=2">discards the data-less inbound ACK</a>.</p>
    <div>
      <h2>Idle ESTAB is forever</h2>
      <a href="#idle-estab-is-forever">
        
      </a>
    </div>
    <p>Let's move on and discuss a fully-established socket connected to an unhealthy (dead) peer. After completion of the handshake, the sockets on both sides move to the ESTABLISHED state, like:</p><p><code>State Recv-Q Send-Q Local:Port Peer:Port
ESTAB 0      0      host:2     host:1
ESTAB 0      0      host:1     host:2</code></p><p>These sockets have no running timer by default - they will remain in that state forever, even if the communication is broken. The TCP stack will notice problems only when one side attempts to send something. This raises a question - what to do if you don't plan on sending any data over a connection? How do you make sure an idle connection is healthy, without sending any data over it?</p><p>This is where TCP keepalives come in. Let's see it in action - in this example we used the following toggles:</p><ul><li><p>SO_KEEPALIVE = 1 - Let's enable keepalives.</p></li><li><p>TCP_KEEPIDLE = 5 - Send first keepalive probe after 5 seconds of idleness.</p></li><li><p>TCP_KEEPINTVL = 3 - Send subsequent keepalive probes after 3 seconds.</p></li><li><p>TCP_KEEPCNT = 3 - Time out after three failed probes.</p></li></ul><p><code>$ sudo ./test-idle.py
00:00.000 IP host.2 &gt; host.1: Flags [S]
00:00.000 IP host.1 &gt; host.2: Flags [S.]
00:00.000 IP host.2 &gt; host.1: Flags [.]

State Recv-Q Send-Q Local:Port Peer:Port
ESTAB 0      0      host:1     host:2
ESTAB 0      0      host:2     host:1  timer:(keepalive,2.992ms,0)

# all subsequent packets dropped
00:05.083 IP host.2 &gt; host.1: Flags [.], ack 1 # first keepalive probe
00:08.155 IP host.2 &gt; host.1: Flags [.], ack 1 # second keepalive probe
00:11.231 IP host.2 &gt; host.1: Flags [.], ack 1 # third keepalive probe
00:14.299 IP host.2 &gt; host.1: Flags [R.], seq 1, ack 1</code></p><p>Indeed! We can clearly see the first probe sent at the 5s mark and the two remaining probes 3s apart - exactly as we specified. After a total of three sent probes, and a further three seconds of delay, the connection dies with ETIMEDOUT and a final RST is transmitted.</p><p>For keepalives to work, the send buffer must be empty. You can see the keepalive timer active in the "timer:(keepalive)" line.</p>
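In code, the four toggles above map directly onto setsockopt calls. A minimal sketch of what the test script sets on the socket, using the standard Linux option names:

```python
import socket

# Sketch: the keepalive settings used in the experiment above,
# applied to a fresh (not yet connected) TCP socket.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)    # enable keepalives
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 5)   # first probe after 5s idle
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 3)  # then probe every 3s
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 3)    # give up after 3 lost probes
```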
    <div>
      <h2>Keepalives with TCP_USER_TIMEOUT are confusing</h2>
      <a href="#keepalives-with-tcp_user_timeout-are-confusing">
        
      </a>
    </div>
    <p>We mentioned the TCP_USER_TIMEOUT option before. It sets the maximum amount of time that transmitted data may remain unacknowledged before the kernel forcefully closes the connection. On its own, it doesn't do much in the case of idle connections. The sockets will remain ESTABLISHED even if the connectivity is dropped. However, this socket option does change the semantics of TCP keepalives. <a href="https://linux.die.net/man/7/tcp">The tcp(7) manpage</a> is somewhat confusing:</p><p><i>Moreover, when used with the TCP keepalive (SO_KEEPALIVE) option, TCP_USER_TIMEOUT will override keepalive to determine when to close a connection due to keepalive failure.</i></p><p>The original commit message has slightly more detail:</p><ul><li><p><a href="https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=dca43c75e7e545694a9dd6288553f55c53e2a3a3">tcp: Add TCP_USER_TIMEOUT socket option</a></p></li></ul><p>To understand the semantics, we need to look at the <a href="https://github.com/torvalds/linux/blob/b41dae061bbd722b9d7fa828f35d22035b218e18/net/ipv4/tcp_timer.c#L693-L697">kernel code in linux/net/ipv4/tcp_timer.c:693</a>:</p><p><code>if ((icsk-&gt;icsk_user_timeout != 0 &amp;&amp;
elapsed &gt;= msecs_to_jiffies(icsk-&gt;icsk_user_timeout) &amp;&amp;
icsk-&gt;icsk_probes_out &gt; 0) ||</code></p><p>For the user timeout to have any effect, the <code>icsk_probes_out</code> must not be zero. The check for user timeout is done only <i>after</i> the first probe went out. Let's check it out. Our connection settings:</p><ul><li><p>TCP_USER_TIMEOUT = 5*1000 - 5 seconds</p></li><li><p>SO_KEEPALIVE = 1 - enable keepalives</p></li><li><p>TCP_KEEPIDLE = 1 - send first probe quickly - 1 second idle</p></li><li><p>TCP_KEEPINTVL = 11 - subsequent probes every 11 seconds</p></li><li><p>TCP_KEEPCNT = 3 - send three probes before timing out</p></li></ul><p><code>00:00.000 IP host.2 &gt; host.1: Flags [S]
00:00.000 IP host.1 &gt; host.2: Flags [S.]
00:00.000 IP host.2 &gt; host.1: Flags [.]

# all subsequent packets dropped
00:01.001 IP host.2 &gt; host.1: Flags [.], ack 1 # first probe
00:12.233 IP host.2 &gt; host.1: Flags [R.] # timer for second probe fired, socket aborted due to TCP_USER_TIMEOUT</code></p><p>So what happened? The connection sent the first keepalive probe at the 1s mark. Seeing no response, the TCP stack woke up 11 seconds later to send a second probe. This time, though, it executed the USER_TIMEOUT code path, which decided to terminate the connection immediately.</p><p>What if we bump TCP_USER_TIMEOUT to a larger value, say one between the second and third probe? Then, the connection will be closed on the third probe timer. With TCP_USER_TIMEOUT set to 12.5s:</p><p><code>00:01.022 IP host.2 &gt; host.1: Flags [.] # first probe
00:12.094 IP host.2 &gt; host.1: Flags [.] # second probe
00:23.102 IP host.2 &gt; host.1: Flags [R.] # timer for third probe fired, socket aborted due to TCP_USER_TIMEOUT</code></p><p>We’ve shown how TCP_USER_TIMEOUT interacts with keepalives for small and medium values. The last case is when TCP_USER_TIMEOUT is extraordinarily large. Say we set it to 30s:</p><p><code>00:01.027 IP host.2 &gt; host.1: Flags [.], ack 1 # first probe
00:12.195 IP host.2 &gt; host.1: Flags [.], ack 1 # second probe
00:23.207 IP host.2 &gt; host.1: Flags [.], ack 1 # third probe
00:34.211 IP host.2 &gt; host.1: Flags [.], ack 1 # fourth probe! But TCP_KEEPCNT was only 3!
00:45.219 IP host.2 &gt; host.1: Flags [.], ack 1 # fifth probe!
00:56.227 IP host.2 &gt; host.1: Flags [.], ack 1 # sixth probe!
01:07.235 IP host.2 &gt; host.1: Flags [R.], seq 1 # TCP_USER_TIMEOUT aborts conn on 7th probe timer</code></p><p>We saw six keepalive probes on the wire! With TCP_USER_TIMEOUT set, the TCP_KEEPCNT is totally ignored. If you want TCP_KEEPCNT to make sense, the only sensible USER_TIMEOUT value is slightly smaller than:</p>
            <pre><code>TCP_KEEPIDLE + TCP_KEEPINTVL * TCP_KEEPCNT</code></pre>
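Sticking to that rule in code, with the keepalive values from the earlier experiment, looks like this. This is a sketch of the recommendation, not a drop-in: the 500 ms of subtracted slack is our arbitrary choice, and note that TCP_USER_TIMEOUT is expressed in milliseconds:

```python
import socket

# Sketch: derive TCP_USER_TIMEOUT from the keepalive parameters so
# that TCP_KEEPCNT keeps its meaning. The 500ms slack is arbitrary.
KEEPIDLE, KEEPINTVL, KEEPCNT = 5, 3, 3  # seconds / count

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, KEEPIDLE)
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, KEEPINTVL)
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, KEEPCNT)

# slightly smaller than KEEPIDLE + KEEPINTVL * KEEPCNT, in milliseconds
user_timeout_ms = (KEEPIDLE + KEEPINTVL * KEEPCNT) * 1000 - 500
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_USER_TIMEOUT, user_timeout_ms)
```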
            
    <div>
      <h2>Busy ESTAB socket is not forever</h2>
      <a href="#busy-estab-socket-is-not-forever">
        
      </a>
    </div>
    <p>Thus far we have discussed the case where the connection is idle. Different rules apply when the connection has unacknowledged data in a send buffer.</p><p>Let's prepare another experiment - after the three-way handshake, let's set up a firewall to drop all packets. Then, let's do a <code>send</code> on one end to have some dropped packets in-flight. An experiment shows the sending socket dies after ~16 minutes:</p><p><code>00:00.000 IP host.2 &gt; host.1: Flags [S]
00:00.000 IP host.1 &gt; host.2: Flags [S.]
00:00.000 IP host.2 &gt; host.1: Flags [.]

# All subsequent packets dropped
00:00.206 IP host.2 &gt; host.1: Flags [P.], length 11 # first data packet
00:00.412 IP host.2 &gt; host.1: Flags [P.], length 11 # early retransmit, doesn't count
00:00.620 IP host.2 &gt; host.1: Flags [P.], length 11 # 1st retry
00:01.048 IP host.2 &gt; host.1: Flags [P.], length 11 # 2nd retry
00:01.880 IP host.2 &gt; host.1: Flags [P.], length 11 # 3rd retry

State Recv-Q Send-Q Local:Port Peer:Port
ESTAB 0      0      host:1     host:2
ESTAB 0      11     host:2     host:1    timer:(on,1.304ms,3)

00:03.543 IP host.2 &gt; host.1: Flags [P.], length 11 # 4th
00:07.000 IP host.2 &gt; host.1: Flags [P.], length 11 # 5th
00:13.656 IP host.2 &gt; host.1: Flags [P.], length 11 # 6th
00:26.968 IP host.2 &gt; host.1: Flags [P.], length 11 # 7th
00:54.616 IP host.2 &gt; host.1: Flags [P.], length 11 # 8th
01:47.868 IP host.2 &gt; host.1: Flags [P.], length 11 # 9th
03:34.360 IP host.2 &gt; host.1: Flags [P.], length 11 # 10th
05:35.192 IP host.2 &gt; host.1: Flags [P.], length 11 # 11th
07:36.024 IP host.2 &gt; host.1: Flags [P.], length 11 # 12th
09:36.855 IP host.2 &gt; host.1: Flags [P.], length 11 # 13th
11:37.692 IP host.2 &gt; host.1: Flags [P.], length 11 # 14th
13:38.524 IP host.2 &gt; host.1: Flags [P.], length 11 # 15th
15:39.500 connection ETIMEDOUT</code></p><p>The data packet is retransmitted 15 times, as controlled by:</p><p><code>$ sysctl net.ipv4.tcp_retries2
net.ipv4.tcp_retries2 = 15</code></p><p>From the <a href="https://www.kernel.org/doc/Documentation/networking/ip-sysctl.txt"><code>ip-sysctl.txt</code></a> documentation:</p><p><i>The default value of 15 yields a hypothetical timeout of 924.6 seconds and is a lower bound for the effective timeout. TCP will effectively time out at the first RTO which exceeds the hypothetical timeout.</i></p><p>The connection indeed died at ~940 seconds. Notice the socket has the "on" timer running. It doesn't matter at all if we set SO_KEEPALIVE - when the "on" timer is running, keepalives are not engaged.</p><p>TCP_USER_TIMEOUT keeps on working though. The connection will be aborted <i>exactly</i> when the user-timeout amount of time has passed since the last received packet. With the user timeout set, the <code>tcp_retries2</code> value is ignored.</p>
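The quoted 924.6 seconds is not magic. The retransmission timeout (RTO) starts at TCP_RTO_MIN (200 ms in mainline kernels), doubles on every retry, and is capped at TCP_RTO_MAX (120 s). Summing the fifteen retransmission timeouts plus the final wait reproduces the figure exactly:

```python
# Sketch: reproduce the "hypothetical timeout" of 924.6s documented
# for tcp_retries2 = 15, assuming the RTO starts at TCP_RTO_MIN
# (200ms), doubles on every retry and is capped at TCP_RTO_MAX (120s).
RTO_MIN, RTO_MAX = 0.2, 120.0

def hypothetical_timeout(retries=15):
    total, rto = 0.0, RTO_MIN
    # one timeout before each retransmission, plus the final wait
    # before the connection is aborted with ETIMEDOUT
    for _ in range(retries + 1):
        total += min(rto, RTO_MAX)
        rto *= 2
    return round(total, 1)

print(hypothetical_timeout())  # 924.6
```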
    <div>
      <h2>Zero window ESTAB is... forever?</h2>
      <a href="#zero-window-estab-is-forever">
        
      </a>
    </div>
    <p>There is one final case worth mentioning. If the sender has plenty of data, and the receiver is slow, then TCP flow control kicks in. At some point the receiver will ask the sender to stop transmitting new data. This is a slightly different condition than the one described above.</p><p>In this case, with flow control engaged, there is no in-flight or unacknowledged data. Instead the receiver throttles the sender with a "zero window" notification. Then the sender periodically checks if the condition is still valid with "window probes". In this experiment we reduced the receive buffer size for simplicity. Here's how it looks on the wire:</p><p><code>00:00.000 IP host.2 &gt; host.1: Flags [S]
00:00.000 IP host.1 &gt; host.2: Flags [S.], win 1152
00:00.000 IP host.2 &gt; host.1: Flags [.]</code></p><p><code>00:00.202 IP host.2 &gt; host.1: Flags [.], length 576 # first data packet
00:00.202 IP host.1 &gt; host.2: Flags [.], ack 577, win 576
00:00.202 IP host.2 &gt; host.1: Flags [P.], length 576 # second data packet
00:00.244 IP host.1 &gt; host.2: Flags [.], ack 1153, win 0 # throttle it! zero-window</code></p><p><code>00:00.456 IP host.2 &gt; host.1: Flags [.], ack 1 # zero-window probe
00:00.456 IP host.1 &gt; host.2: Flags [.], ack 1153, win 0 # nope, still zero-window</code></p><p><code>State Recv-Q Send-Q Local:Port Peer:Port
ESTAB 1152   0      host:1     host:2
ESTAB 0      129920 host:2     host:1  timer:(persist,048ms,0)</code></p><p>The packet capture shows a couple of things. First, we can see two packets with data, each 576 bytes long. Both were immediately acknowledged. The second ACK carried a "win 0" notification: the sender was told to stop sending data.</p><p>But the sender is eager to send more! The last two packets show the first "window probe": the sender will periodically send payload-less "ack" packets to check if the window size has changed. As long as the receiver keeps on answering, the sender will keep on sending such probes forever.</p><p>The socket information shows three important things:</p><ul><li><p>The read buffer of the receiver is filled - thus the "zero window" throttling is expected.</p></li><li><p>The write buffer of the sender is filled - we have more data to send.</p></li><li><p>The sender has a "persist" timer running, counting the time until the next "window probe".</p></li></ul><p>In this blog post we are interested in timeouts - what will happen if the window probes are lost? Will the sender notice?</p><p>By default, the window probe is retried 15 times - adhering to the usual <code>tcp_retries2</code> setting.</p><p>The TCP timer is in the <code>persist</code> state, so TCP keepalives will <i>not</i> be running. The SO_KEEPALIVE settings don't make any difference when window probing is engaged.</p><p>As expected, the TCP_USER_TIMEOUT toggle keeps on working. A slight difference is that, similarly to user-timeout with keepalives, it's checked only when the retransmission timer fires. During such an event, if more than user-timeout seconds have passed since the last good packet, the connection will be aborted.</p>
    <div>
      <h2>Note about using application timeouts</h2>
      <a href="#note-about-using-application-timeouts">
        
      </a>
    </div>
<p>In the past we have shared an interesting war story:</p><ul><li><p><a href="/the-curious-case-of-slow-downloads/">The curious case of slow downloads</a></p></li></ul><p>Our HTTP server gave up on the connection after an application-managed timeout fired. This was a bug - a slow connection might have been draining the send buffer correctly, just slowly, and the application server didn't notice.</p><p>We abruptly dropped slow downloads, even though this wasn't our intention. We just wanted to make sure the client connection was still healthy. It would be better to use TCP_USER_TIMEOUT than to rely on application-managed timeouts.</p><p>But this is not sufficient. We also wanted to guard against a situation where a client stream is valid, but is stuck and doesn't drain the connection. The only way to achieve this is to periodically check the amount of unsent data in the send buffer, and see if it shrinks at the desired pace.</p><p>For typical applications sending data to the Internet, I would recommend:</p><ol><li><p>Enable TCP keepalives. This is needed to keep some data flowing in the idle-connection case.</p></li><li><p>Set TCP_USER_TIMEOUT to <code>TCP_KEEPIDLE + TCP_KEEPINTVL * TCP_KEEPCNT</code>.</p></li><li><p>Be careful when using application-managed timeouts. To detect TCP failures, use TCP keepalives and user-timeout. If you want to spare resources and make sure sockets don't stay alive for too long, consider periodically checking if the socket is draining at the desired pace. You can use <code>ioctl(TIOCOUTQ)</code> for that, but it counts both data buffered (notsent) on the socket and in-flight (unacknowledged) bytes. A better way is to use the TCP_INFO <code>tcpi_notsent_bytes</code> field, which reports only the former counter.</p></li></ol><p>An example of checking the draining pace:</p><p><code>while True:
    notsent1 = get_tcp_info(c).tcpi_notsent_bytes
    notsent1_ts = time.time()
    ...
    poll.poll(POLL_PERIOD)
    ...
    notsent2 = get_tcp_info(c).tcpi_notsent_bytes
    notsent2_ts = time.time()
    pace_in_bytes_per_second = (notsent1 - notsent2) / (notsent2_ts - notsent1_ts)
    if pace_in_bytes_per_second &gt; 12000:
        pass   # pace is above the effective rate of 96Kbps, ok!
    else:
        break  # socket is too slow...</code></p><p>There are ways to further improve this logic. We could use <a href="https://lwn.net/Articles/560082/"><code>TCP_NOTSENT_LOWAT</code></a>, although it's generally only useful for situations where the send buffer is relatively empty. Then we could use the <a href="https://www.kernel.org/doc/Documentation/networking/timestamping.txt"><code>SO_TIMESTAMPING</code></a> interface for notifications about when data gets delivered. Finally, if we are done sending the data to the socket, it's possible to just call <code>close()</code> and defer handling of the socket to the operating system. Such a socket will be stuck in FIN-WAIT-1 or LAST-ACK state until it correctly drains.</p>
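The <code>get_tcp_info()</code> helper in the snippet above is pseudocode. One way to implement the part we need is to pull <code>struct tcp_info</code> out with getsockopt and unpack a single field. The 144-byte offset of <code>tcpi_notsent_bytes</code> is an assumption based on the struct layout in <code>linux/tcp.h</code> on 64-bit Linux - double-check it against your kernel headers:

```python
import socket
import struct

# Assumption: tcpi_notsent_bytes sits at byte offset 144 of
# struct tcp_info on 64-bit Linux. Verify against linux/tcp.h.
NOTSENT_OFFSET = 144

def parse_notsent_bytes(info):
    # Old kernels return a shorter struct without this field.
    if len(info) < NOTSENT_OFFSET + 4:
        return 0
    return struct.unpack_from("I", info, NOTSENT_OFFSET)[0]

def tcpi_notsent_bytes(sock):
    # Ask for a generous buffer; the kernel truncates to its size.
    info = sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_INFO, 192)
    return parse_notsent_bytes(info)
```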
    <div>
      <h2>Summary</h2>
      <a href="#summary">
        
      </a>
    </div>
<p>In this post we discussed five cases where the TCP connection may notice the other party going away:</p><ul><li><p>SYN-SENT: The duration of this state can be controlled by <code>TCP_SYNCNT</code> or <code>tcp_syn_retries</code>.</p></li><li><p>SYN-RECV: It's usually hidden from the application. It is tuned by <code>tcp_synack_retries</code>.</p></li><li><p>An idle ESTABLISHED connection will never notice any issues. A solution is to use TCP keepalives.</p></li><li><p>A busy ESTABLISHED connection adheres to the <code>tcp_retries2</code> setting, and ignores TCP keepalives.</p></li><li><p>A zero-window ESTABLISHED connection adheres to the <code>tcp_retries2</code> setting, and ignores TCP keepalives.</p></li></ul><p>The last two ESTABLISHED cases in particular can be customized with TCP_USER_TIMEOUT, but this setting also affects other situations. Generally speaking, it can be thought of as a hint to the kernel to abort the connection after a given number of seconds since the last good packet. This is a dangerous setting though, and if used in conjunction with TCP keepalives it should be set to a value slightly lower than <code>TCP_KEEPIDLE + TCP_KEEPINTVL * TCP_KEEPCNT</code>. Otherwise it will affect, and potentially cancel out, the TCP_KEEPCNT value.</p><p>In this post we presented scripts showing the effects of timeout-related socket options under various network conditions. Interleaving the <code>tcpdump</code> packet capture with the output of <code>ss -o</code> is a great way of understanding the networking stack. We were able to create reproducible test cases showing the "on", "keepalive" and "persist" timers in action. This is a very useful framework for further experimentation.</p><p>Finally, it's surprisingly hard to tune a TCP connection to be confident that the remote host is actually up. During our debugging we found that looking at the send buffer size and the currently active TCP timer can be very helpful in understanding whether the socket is actually healthy. 
The bug in our Spectrum application turned out to be a wrong TCP_USER_TIMEOUT setting - without it sockets with large send buffers were lingering around for way longer than we intended.</p><p>The scripts used in this article <a href="https://github.com/cloudflare/cloudflare-blog/tree/master/2019-09-tcp-keepalives">can be found on our GitHub</a>.</p><p>Figuring this out has been a collaboration across three Cloudflare offices. Thanks to <a href="https://twitter.com/Hirenpanchasara">Hiren Panchasara</a> from San Jose, <a href="https://twitter.com/warrncn">Warren Nelson</a> from Austin and <a href="https://twitter.com/jkbs0">Jakub Sitnicki</a> from Warsaw. Fancy joining the team? <a href="https://www.cloudflare.com/careers/departments/?utm_referrer=blog">Apply here!</a></p> ]]></content:encoded>
            <category><![CDATA[SYN]]></category>
            <category><![CDATA[TCP]]></category>
            <category><![CDATA[Spectrum]]></category>
            <category><![CDATA[Tech Talks]]></category>
            <guid isPermaLink="false">PTYUwpDIf4wDZ50CejAvL</guid>
            <dc:creator>Marek Majkowski</dc:creator>
        </item>
        <item>
            <title><![CDATA[Spectrum for UDP: DDoS protection and firewalling for unreliable protocols]]></title>
            <link>https://blog.cloudflare.com/spectrum-for-udp-ddos-protection-and-firewalling-for-unreliable-protocols/</link>
            <pubDate>Wed, 20 Mar 2019 15:01:00 GMT</pubDate>
            <description><![CDATA[ Today, we're announcing Spectrum for UDP. Spectrum for UDP works the same as Spectrum for TCP: Spectrum sits between your clients and your origin. Incoming connections are proxied through, whilst applying our DDoS protection and IP Firewall rules.  ]]></description>
            <content:encoded><![CDATA[ <p>Today, we're announcing Spectrum for UDP. Spectrum for UDP works the same as Spectrum for TCP: Spectrum sits between your clients and your origin. Incoming connections are <i>proxied</i> through, whilst applying our DDoS protection and IP Firewall rules. This allows you to protect your services from all sorts of nasty attacks and completely hides your origin behind Cloudflare.</p><p>Last year, we launched <a href="/spectrum/">Spectrum</a>. Spectrum brought the power of our DDoS and firewall features to all TCP ports and services. Spectrum for TCP allows you to protect your SSH services, gaming protocols, and as of last month, even <a href="https://developers.cloudflare.com/spectrum/getting-started/ftp/">FTP servers</a>. We’ve seen customers running all sorts of applications behind Spectrum, such as <a href="https://www.bitfly.at">Bitfly</a>, <a href="https://nicehash.com">Nicehash</a>, and <a href="https://hypixel.net">Hypixel</a>.</p><p>This is great if you're running TCP services, but plenty of our customers also have workloads running over UDP. As an example, many multiplayer games prefer the low cost and lighter weight of UDP and don't care about whether packets arrive or not.</p><p>UDP applications have historically been hard to protect and secure, which is why we built Spectrum for UDP. Spectrum for UDP allows you to protect standard UDP services (such as RDP over UDP), but can also protect any custom protocol you come up with! The only requirement is that it uses UDP as an underlying protocol.</p>
    <div>
      <h3>Configuring a UDP application on Spectrum</h3>
      <a href="#configuring-a-udp-application-on-spectrum">
        
      </a>
    </div>
    <p>To configure on the dashboard, simply switch the application type from TCP to UDP:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6mq0LQqGOJk7r1M8rwwC8P/fb5cab669371da0048342fc1c00a018e/image1.png" />
            
            </figure>
    <div>
      <h3>Retrieving client information</h3>
      <a href="#retrieving-client-information">
        
      </a>
    </div>
<p>With Spectrum, we terminate the connection and open a new one to your origin. But what if you still want to see who's actually connecting to you? For TCP, there's <a href="https://www.haproxy.org/download/1.8/doc/proxy-protocol.txt">Proxy Protocol</a>. Whilst initially introduced by HAProxy, it has since been adopted by more parties, such as <a href="https://docs.nginx.com/nginx/admin-guide/load-balancer/using-proxy-protocol/">nginx</a>. We added <a href="https://developers.cloudflare.com/spectrum/getting-started/proxy-protocol/">support</a> in late 2018, allowing you to easily read the client's IP and port from a header that precedes each data stream.</p><p>Unfortunately, there is no equivalent for UDP, so we're rolling our own. Because UDP is connectionless, we can't use TCP's Proxy Protocol approach of prepending the entire stream with a single header. Instead, we are forced to prepend each packet with a small header that specifies:</p><ul><li><p>the original client IP</p></li><li><p>the Spectrum IP</p></li><li><p>the original client port</p></li><li><p>the Spectrum port</p></li></ul><p>Schema representing a UDP packet prefaced with our <i>Simple Proxy Protocol</i> header:</p>
            <pre><code>0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|          Magic Number         |                               |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+                               +
|                                                               |
+                                                               +
|                                                               |
+                         Client Address                        +
|                                                               |
+                               +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                               |                               |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+                               +
|                                                               |
+                                                               +
|                                                               |
+                         Proxy Address                         +
|                                                               |
+                               +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                               |         Client Port           |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|           Proxy Port          |          Payload...           |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+</code></pre>
            <p>Simple Proxy Protocol is turned off by default, which means UDP packets will arrive at your origin as if they were sent from Spectrum. To turn it on, simply enable it on your Spectrum app.</p>
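To give a feel for consuming the header at the origin, here is a sketch of parsing it in Python. It follows the 38-byte layout in the diagram above; the function and field names are ours, and the magic number used in the example below is a placeholder - check the Spectrum documentation for the authoritative value. IPv4 addresses travel as IPv4-mapped IPv6:

```python
import ipaddress
import struct

# Sketch: parse the 38-byte Simple Proxy Protocol header from the
# diagram above. All fields are big-endian; addresses are 128 bits,
# with IPv4 carried as IPv4-mapped IPv6 (::ffff:a.b.c.d).
HEADER = struct.Struct("!H16s16sHH")  # magic, client addr, proxy addr, ports

def _addr(raw):
    a = ipaddress.IPv6Address(raw)
    return str(a.ipv4_mapped or a)  # render mapped IPv4 as dotted quad

def parse_spp(datagram):
    magic, caddr, paddr, cport, pport = HEADER.unpack_from(datagram)
    return {
        "magic": magic,
        "client": (_addr(caddr), cport),
        "proxy": (_addr(paddr), pport),
        "payload": datagram[HEADER.size:],
    }
```

A datagram built with the same layout round-trips through the parser, which also makes the function easy to test against synthetic packets.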
    <div>
      <h3>Getting access to Spectrum for UDP</h3>
      <a href="#getting-access-to-spectrum-for-udp">
        
      </a>
    </div>
    <p>We're excited about launching this and even more excited to see what you'll build and protect with it. In fact, what if you could build serverless services on Spectrum, without actually having an origin running? Stay tuned for some cool announcements in the near future.</p><p>Spectrum for UDP is currently an Enterprise-only feature. To get UDP enabled for your account, please reach out to your account team and we’ll get you set up.</p><p>One more thing... if you’re at <a href="https://gdconf.com">GDC</a> this year, say hello at booth <a href="https://www.expocad.com/host/fx/ubm/gdc19/exfx.html#floorplan">P1639</a>! We’d love to talk more and learn about what you’d like to do with Spectrum.</p> ]]></content:encoded>
            <category><![CDATA[DDoS]]></category>
            <category><![CDATA[Spectrum]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[TCP]]></category>
            <category><![CDATA[Speed & Reliability]]></category>
            <category><![CDATA[Firewall]]></category>
            <guid isPermaLink="false">5sYOiRAlrMZkxNKFcfX66T</guid>
            <dc:creator>Achiel van der Mandele</dc:creator>
        </item>
        <item>
            <title><![CDATA[Introducing Spectrum with Load Balancing]]></title>
            <link>https://blog.cloudflare.com/introducing-spectrum-with-load-balancing/</link>
            <pubDate>Thu, 25 Oct 2018 13:00:00 GMT</pubDate>
            <description><![CDATA[ We’re excited to announce the full integration of Cloudflare Spectrum with Load Balancing. Combining Spectrum with Load Balancing enables traffic management of TCP connections utilising the same battle tested Load Balancer our customers already use for billions of HTTP requests every day. ]]></description>
            <content:encoded><![CDATA[ <p></p><p>We’re excited to announce the full integration of Cloudflare Spectrum with Load Balancing. Combining Spectrum with Load Balancing enables traffic management of TCP connections utilising the same battle tested Load Balancer our customers already use for billions of HTTP requests every day.</p><p>Customers can configure load balancers with TCP health checks, failover, and steering policies to dictate where traffic should flow. This is live in the Cloudflare dashboard and API — give it a shot!</p>
    <div>
      <h3>TCP Health Checks</h3>
      <a href="#tcp-health-checks">
        
      </a>
    </div>
    <p>You can now configure <a href="https://www.cloudflare.com/load-balancing/">Cloudflare’s Load Balancer</a> health checks to probe any TCP port for an accepted connection. This is in addition to the existing HTTP and HTTPS options.</p><p>Health checks are an optional feature within Cloudflare’s Load Balancing product. Without health checks, the Cloudflare Load Balancer will distribute traffic to all origins in the first pool. While this is in itself useful, adding a health check to a Load Balancer provides additional functionality.</p><p>With a health check configured for a pool in a Load Balancer, Cloudflare will automatically distribute traffic within a pool to any origins that are marked up by the health check. Unhealthy origins will be dropped automatically. This allows for intelligent failover both within a pool and amongst pools. Health checks can be configured from multiple regions (and even all of Cloudflare’s PoPs as an Enterprise customer) to detect local and global connectivity issues from your origins.</p><p>In this example, we will configure a TCP health check for an application running on port 2408 with a refresh rate of every 30 seconds via either the dashboard or our API.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/wwdhg0k3y6QUdo9oSRic1/23425df3294af54b150d95f5ec05e1fc/Load-Balancing-Manage-Monitors.png" />
            
            </figure><p>Configuring a TCP health check</p>
            <pre><code># POST accounts/:account_identifier/load_balancers/monitors

{
  "description": "Spectrum Health Check",
  "type": "tcp",
  "port": 2408,
  "interval": 30,
  "retries": 2,
  "timeout": 5,
  "method": "connection_established"
}</code></pre>
            
    <div>
      <h3>Weights</h3>
      <a href="#weights">
        
      </a>
    </div>
    <p>Origin weights are beneficial should you have origins that are not of equal capacity or if you want to unequally split traffic for any other reason.</p><p>Weights configured within a load balancer pool will be honored with transport load balancing through Spectrum. If configured, Cloudflare will distribute traffic amongst the available origins within a pool according to the relative weights assigned to each origin.</p><p>For further information on weighted steering, see the <a href="https://support.cloudflare.com/hc/en-us/articles/360001372131-Load-Balancing-Configurable-Origin-Weights">knowledge base article</a>.</p>
    <div>
      <h3>Steering Modes</h3>
      <a href="#steering-modes">
        
      </a>
    </div>
    <p>All steering modes are available for transport load balancing through Spectrum: you can choose standard failover, dynamic steering, or geo steering:</p><ul><li><p><b>Failover</b>: In this mode, the Cloudflare Load Balancer will <a href="https://www.cloudflare.com/learning/performance/what-is-server-failover/">fail over</a> amongst pools listed in a given load balancer configuration as they are marked down by health checks. If all pools are marked down, Cloudflare will send traffic to the fallback pool. The fallback pool is the last pool in the list in the dashboard or specifically nominated via a parameter in the API. If no health checks are configured, Cloudflare will send to the primary pool exclusively.</p></li><li><p><b>Dynamic Steering</b>: <a href="/i-wanna-go-fast-load-balancing-dynamic-steering/">Dynamic steering</a> was recently introduced by Cloudflare as a way of directing traffic to the fastest pool for a given user. In this mode, the Cloudflare load balancer will select the fastest pool for the given Cloudflare Region or PoP (ENT only) through health check data. If there is no health check data for a given colo or region, the load balancer will select a pool in failover order. It is important to note that with TCP health checks, the calculated latency may not be representative of true latency to origin if you are terminating TCP at a cloud provider edge location.</p></li><li><p><b>Geo Steering</b>: <a href="https://support.cloudflare.com/hc/en-us/articles/115000540888-Load-Balancing-Geographic-Regions">Geo Steering</a> allows you to specify pools for a given Region or PoP (ENT only). In this configuration, Cloudflare will direct traffic from specified Cloudflare locations to configured pools. You may configure multiple pools, and the load balancer will use them in failover order. If this steering mode is selected and there is no configuration for a region or pool, the load balancer will use the default failover order.</p></li></ul>
    <div>
      <h3>Build Scalable TCP Applications</h3>
      <a href="#build-scalable-tcp-applications">
        
      </a>
    </div>
    <p>Once your load balancer is configured, it’s available for use as an origin with your Spectrum application:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7dMaJShOEXdDpK5pu4sbLm/7fe4496c05d50895bd3c48e4896e26c4/Load-balancing-Edit-Application.png" />
            
            </figure><p>Configuring a Spectrum application with Load Balancing</p><p>Combining Spectrum’s ability to proxy TCP applications, our Load Balancer’s full feature set, and Cloudflare’s global network allows our customers to build performant, reliable, and secure network applications with minimal effort.</p><p>We’ve seen customers combine Spectrum and Load Balancing to build scalable gaming platforms, make their live streaming infrastructure more robust, push the envelope with interesting cryptocurrency use cases, and lots more. What will you build?</p><p>Spectrum with Load Balancing is available to all current Spectrum and Load Balancing users. Want access to Spectrum? <a href="https://cloudflare.com/products/cloudflare-spectrum/">Get in touch with our team</a>. Spectrum is available for applications on the Enterprise plan.</p> ]]></content:encoded>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[Load Balancing]]></category>
            <category><![CDATA[Spectrum]]></category>
            <category><![CDATA[Speed & Reliability]]></category>
            <guid isPermaLink="false">4tdZv1JvUSLlr67ji4kXC1</guid>
            <dc:creator>Rustam Lalkaka</dc:creator>
            <dc:creator>Sergi Isasi</dc:creator>
        </item>
        <item>
            <title><![CDATA[Q2 FY 18 Product Releases, for a better Internet “end-to-end”]]></title>
            <link>https://blog.cloudflare.com/better-internet-end-to-end/</link>
            <pubDate>Thu, 26 Jul 2018 18:35:23 GMT</pubDate>
            <description><![CDATA[ In Q2, Cloudflare released several products which enable a better Internet “end-to-end” — from the mobile client to host infrastructure. Now, anyone from an individual developer to large companies and governments, can control, secure, and accelerate their applications from “perimeter” to “host.” ]]></description>
            <content:encoded><![CDATA[ <p></p><p>Photo by <a href="https://unsplash.com/@allenliu?utm_source=ghost&amp;utm_medium=referral&amp;utm_campaign=api-credit">Liu Zai Hou</a> / <a href="https://unsplash.com/?utm_source=ghost&amp;utm_medium=referral&amp;utm_campaign=api-credit">Unsplash</a></p><p>In Q2, Cloudflare released several products which enable a better Internet “end-to-end” — from the mobile client to host infrastructure. Now, anyone from an individual developer to large companies and governments, can control, secure, and accelerate their applications from the “<a href="https://www.cloudflare.com/learning/access-management/what-is-the-network-perimeter/">perimeter</a>” back to the “host.”</p><p>On the client side, <a href="https://www.cloudflare.com/products/mobile-sdk/">Cloudflare’s Mobile SDK</a> extends control directly into your mobile apps, providing visibility into application performance and load times across any global carrier network.</p><p>On the host side, <a href="https://www.cloudflare.com/products/cloudflare-workers/">Cloudflare Workers</a> lets companies move workloads from their host to the Cloudflare Network, reducing infrastructure costs and speeding up the user experience. <a href="https://www.cloudflare.com/products/argo-tunnel/">Argo Tunnel</a> lets you securely connect your host directly to a Cloudflare data center. If your host infrastructure is running other TCP services besides HTTP(S), you can now protect it with Cloudflare’s DDoS protection using <a href="https://www.cloudflare.com/products/cloudflare-spectrum/">Spectrum</a>.</p><p>So for end-to-end control that is easy and fast to deploy, these recent products are all incredible “workers” across the “spectrum” of your needs.</p>
    <div>
      <h3>But there’s more to the story</h3>
      <a href="#but-theres-more-to-the-story">
        
      </a>
    </div>
    <p>End users want richer experiences, such as more video, interactivity, and images. Meeting those needs can incur real costs in bandwidth, hardware, and time. Cloudflare addresses these with three products that improve video delivery, reduce paint times, and shrink the round-trip times.</p><p>Cloudflare now simplifies and reduces delivery cost of video with <a href="https://www.cloudflare.com/products/stream-delivery/">Stream Delivery</a>. Pages using plenty of Javascript now have faster paint times and wider mobile-device support with <a href="/we-have-lift-off-rocket-loader-ga-is-mobile/">Rocket Loader</a>. If you’re managing multiple origins and want to ensure fastest delivery based on the shortest round-trip time, Cloudflare Load Balancer now supports <a href="/i-wanna-go-fast-load-balancing-dynamic-steering/">Dynamic Steering</a>.</p><p>Attackers are shifting their focus to the application layer. Some security features, like CAPTCHA and Javascript Challenge, give you more control and reduce false-positives when blocking rate-based threats at the edge, such as layer 7 DDoS or brute-force attacks.</p><p>Finally, Cloudflare extended privacy to consumers through the launch of our DNS resolver <a href="https://1.1.1.1">1.1.1.1</a> on 4/1/2018! Now users who set their DNS resolvers to 1.1.1.1 can browse faster while protecting browser data with Cloudflare’s privacy-first consumer DNS service.</p>
    <div>
      <h3>Here is a recap from April to June of the features we released in Q2</h3>
      <a href="#here-is-a-recap-from-april-to-june-of-the-features-we-released-in-q2">
        
      </a>
    </div>
    
    <div>
      <h4>Dynamic Steering</h4>
      <a href="#dynamic-steering">
        
      </a>
    </div>
    <p><i>Tue, July 10, 2018</i> Dynamic steering is a load balancing feature that automates traffic steering across origins in multiple geographic regions. <a href="https://www.cloudflare.com/learning/cdn/glossary/round-trip-time-rtt/">Round-trip time (RTT)</a> for health checks is calculated across multiple pools of load-balanced servers and origins. This RTT data enables the load balancers to identify the fastest pools and direct user requests to the most responsive origins.</p><ul><li><p><a href="/i-wanna-go-fast-load-balancing-dynamic-steering/">Blog Post</a></p></li><li><p><a href="https://www.cloudflare.com/load-balancing/">Product Page</a></p></li><li><p><a href="https://support.cloudflare.com/hc/en-us/articles/360006900952">Support Article</a></p></li></ul>
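    <p>As a toy model of that selection (hypothetical pool names and RTT figures, not Cloudflare's implementation): the pool with the lowest measured health-check RTT wins.</p>

```python
def fastest_pool(rtt_ms):
    # Dynamic steering, toy version: pick the pool with the lowest
    # measured health-check round-trip time.
    return min(rtt_ms, key=rtt_ms.get)

# Hypothetical RTT measurements, in milliseconds.
print(fastest_pool({"us-east": 42.0, "eu-west": 18.5, "apac": 95.2}))
```
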
    <div>
      <h4>Support for New DNS Record Types</h4>
      <a href="#support-for-new-dns-record-types">
        
      </a>
    </div>
    <p><i>Thu, July 5, 2018</i> Cloudflare's Authoritative DNS now supports the following record types: CERT, DNSKEY, DS, NAPTR, SMIMEA, SSHFP, TLSA, and URI via the web and API.</p>
    <div>
      <h4>Developer Portal Q2 Update</h4>
      <a href="#developer-portal-q2-update">
        
      </a>
    </div>
    <p><i>Mon, June 11, 2018</i> The Developer Portal has been updated in Q2 to include improved search, documentation for new products, and listings of upcoming Cloudflare community events.</p><ul><li><p><a href="https://developers.cloudflare.com/">Developer Portal</a></p></li></ul>
    <div>
      <h4>Rocket Loader Upgrade</h4>
      <a href="#rocket-loader-upgrade">
        
      </a>
    </div>
    <p><i>Fri, June 1, 2018</i> Rocket Loader has been updated to deliver faster website paint and load times by prioritizing website content over JavaScript. The majority of mobile devices are now supported, and compliance with strict content security policies has increased.</p><ul><li><p><a href="https://support.cloudflare.com/hc/en-us/articles/200168056-What-does-Rocket-Loader-do-">Support</a></p></li><li><p><a href="/we-have-lift-off-rocket-loader-ga-is-mobile/">Blog</a></p></li></ul>
    <div>
      <h4>Stream Delivery</h4>
      <a href="#stream-delivery">
        
      </a>
    </div>
    <p><i>Thu, May 31, 2018</i> Cloudflare’s Stream Delivery solution offers fast caching and delivery of video content across our network of 150+ global data centers.</p><ul><li><p><a href="https://www.cloudflare.com/products/stream-delivery/">Product Page</a></p></li></ul>
    <div>
      <h4>Deprecating TLS 1.0 and 1.1 on api.cloudflare.com</h4>
      <a href="#deprecating-tls-1-0-and-1-1-on-api-cloudflare-com">
        
      </a>
    </div>
    <p><i>Tue, May 29, 2018</i> On June 4, Cloudflare will drop support for TLS 1.0 and 1.1 on <a href="http://api.cloudflare.com/">api.cloudflare.com</a>. Additionally, the dashboard will move from <a href="http://www.cloudflare.com/a">www.cloudflare.com/a</a> to <a href="http://dash.cloudflare.com/">dash.cloudflare.com</a> and will require a browser that supports TLS 1.2 or higher.</p><ul><li><p><a href="https://www.cloudflare.com/ssl/">Product Page</a></p></li><li><p><a href="/deprecating-old-tls-versions-on-cloudflare-dashboard-and-api/">Blog Post</a></p></li></ul>
    <div>
      <h4>Rate Limiting has new Actions and Triggers</h4>
      <a href="#rate-limiting-has-new-actions-and-triggers">
        
      </a>
    </div>
    <p><i>Mon, May 21, 2018</i> Rate Limiting has two new features: challenges (CAPTCHA and JS Challenge) as an Action, and matching Header attributes in the response (from either the origin or the cache) as the Trigger. These features give more control over how Cloudflare Rate Limiting responds to threshold violations, giving customers granularity over which types of requests to "count", to fit their different applications. To learn more, read the blog post.</p><ul><li><p><a href="https://www.cloudflare.com/rate-limiting/">Product Page</a></p></li><li><p><a href="/rate-limiting-delivering-more-rules-and-greater-control/">Blog Post</a></p></li></ul>
    <div>
      <h4>Support purge-by-tag for large tag sizes</h4>
      <a href="#support-purge-by-tag-for-large-tag-sizes">
        
      </a>
    </div>
    <p><i>Thu, May 10, 2018</i> The Cache-Tag header now supports up to 1000 tags and a total header length of 16 KB. This update simplifies file purges for customers who deploy websites with Drupal.</p><ul><li><p><a href="https://support.cloudflare.com/hc/en-us/articles/206596608-How-to-Purge-Cache-Using-Cache-Tags-Enterprise-only-">Support article</a></p></li></ul>
    <div>
      <h4>Multi-User Access on dash.cloudflare.com</h4>
      <a href="#multi-user-access-on-dash-cloudflare-com">
        
      </a>
    </div>
    <p><i>Wed, May 2, 2018</i> Starting May 2, 2018, users can go to the new home of Cloudflare’s Dashboard at <a href="http://dash.cloudflare.com/">dash.cloudflare.com</a> and share account access. This has been supported at our Enterprise level of service, but is now being extended to all customers.</p><ul><li><p><a href="/expanding-multi-user-access/">Blog Post</a></p></li></ul>
    <div>
      <h4>Support full SSL (Strict) mode validation for CNAME domains</h4>
      <a href="#support-full-ssl-strict-mode-validation-for-cname-domains">
        
      </a>
    </div>
    <p><i>Thu, April 12, 2018</i> Cloudflare is now able to validate origin certificates that use a hostname's CNAME target in Full SSL (Strict) mode. Previously, Cloudflare would not validate any certificate without a direct match between the HTTP hostname and the certificate's Common Name or SAN. This update allows SSL for SaaS customers to more easily enable end-to-end security.</p><ul><li><p><a href="https://support.cloudflare.com/hc/en-us/articles/200170416-What-do-the-SSL-options-mean-">Support</a></p></li><li><p><a href="https://www.cloudflare.com/ssl-for-saas-providers">SSL for SaaS</a></p></li></ul>
    <div>
      <h4>Cloudflare Spectrum</h4>
      <a href="#cloudflare-spectrum">
        
      </a>
    </div>
    <p><i>Thu, April 12, 2018</i> Spectrum protects TCP applications and ports from volumetric DDoS attacks and data theft by proxying non-web traffic through Cloudflare’s Anycast network.</p><ul><li><p><a href="https://www.cloudflare.com/products/cloudflare-spectrum/">Product Page</a></p></li><li><p><a href="/spectrum/">Blog Post</a></p></li></ul>
    <div>
      <h4>Workers Can Control Cache TTL by Response Code</h4>
      <a href="#workers-can-control-cache-ttl-by-response-code">
        
      </a>
    </div>
    <p><i>Wed, April 11, 2018</i> Cloudflare Workers can now control cache TTL by response code, providing greater control over cached assets with Cloudflare Workers.</p><ul><li><p><a href="https://developers.cloudflare.com/workers/reference/cloudflare-features/">Documentation</a></p></li></ul>
    <div>
      <h4>Argo Tunnel</h4>
      <a href="#argo-tunnel">
        
      </a>
    </div>
    <p><i>Thu, April 5, 2018</i> Argo Tunnel ensures that no visitor or attacker can reach your web server unless they first pass through Cloudflare. Using a lightweight agent installed on your origin, Cloudflare creates an encrypted tunnel between your host infrastructure and our nearest data centers without opening a public inbound port. It’s more secure, more performant, and easier to manage than exposing your services publicly.</p><ul><li><p><a href="https://developers.cloudflare.com/argo-tunnel/quickstart/">Developer Doc</a></p></li><li><p><a href="https://www.cloudflare.com/products/argo-tunnel/">Read More</a></p></li></ul>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[Mobile SDK]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Load Balancing]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Spectrum]]></category>
            <category><![CDATA[Rocket Loader]]></category>
            <category><![CDATA[Speed & Reliability]]></category>
            <category><![CDATA[Cloudflare Tunnel]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <guid isPermaLink="false">30mcTficRd4F27mbRddmAS</guid>
            <dc:creator>Timothy Fong</dc:creator>
        </item>
        <item>
            <title><![CDATA[mmproxy - Creative Linux routing to preserve client IP addresses in L7 proxies]]></title>
            <link>https://blog.cloudflare.com/mmproxy-creative-way-of-preserving-client-ips-in-spectrum/</link>
            <pubDate>Tue, 17 Apr 2018 22:11:00 GMT</pubDate>
            <description><![CDATA[ In a previous blog post, we discussed how we use the TPROXY iptables module to power Cloudflare Spectrum. With TPROXY we solved a major technical issue on the server side, and we thought we might find another use for it on the client side of our product. ]]></description>
            <content:encoded><![CDATA[ <p>In a previous blog post, we discussed <a href="/how-we-built-spectrum/">how we use the <code>TPROXY</code> iptables module</a> to power <a href="https://www.cloudflare.com/products/cloudflare-spectrum/">Cloudflare Spectrum</a>. With <code>TPROXY</code> we solved a major technical issue on the server side, and we thought we might find another use for it on the client side of our product.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/bo1gYWkQihp0vs8Nk10Xr/5fc224388fa52c30a2f25e982178b5d3/Address-machine-1_-ru-tech-enc-.png" />
            
            </figure><p>This is an <a href="https://en.wikipedia.org/wiki/Addressograph">Addressograph</a>. Source: <a href="https://upload.wikimedia.org/wikipedia/commons/b/b0/Address-machine-1_%28ru-tech-enc%29.png">Wikipedia</a></p><p>When building an application-level proxy, the first consideration is always how to retain the real client source IP addresses. Some protocols make it easy, e.g. HTTP has a defined <code>X-Forwarded-For</code> header<a href="#fn1">[1]</a>, but there isn't a similar mechanism for generic TCP tunnels.</p><p>Others have faced this problem before us, and have devised three general solutions:</p>
    <div>
      <h4>(1) Ignore the client IP</h4>
      <a href="#1-ignore-the-client-ip">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6zazUmVWQqBmngOT2c0nro/bcb758fa95b1439b28c41ee2257b18e8/Screen-Shot-2018-04-15-at-12.26.16-PM.png" />
            
            </figure><p>For certain applications it may be okay to ignore the real client IP address. For example, sometimes the client needs to identify itself with a username and password anyway, so the source IP doesn't really matter. In general, it's not a good practice because...</p>
    <div>
      <h4>(2) Nonstandard TCP header</h4>
      <a href="#2-nonstandard-tcp-header">
        
      </a>
    </div>
    <p>A second method was developed by Akamai: the client IP is saved inside a custom option in the TCP header of the SYN packet. Early implementations of this method didn't conform to any standard, e.g. using <a href="https://support.radware.com/app/answers/answer_view/a_id/16143/~/client-ip-visibility-from-akamai-servers-appshape%2B%2B-script-sample">option field 28</a>, but recently <a href="https://tools.ietf.org/html/rfc7974">RFC7974</a> was ratified for this option. We don't support this method for a number of reasons:</p><ul><li><p>The space in TCP headers is very limited. It's insufficient to store the full 128 bits of client IPv6 addresses, especially with 15%+ of Cloudflare’s traffic being IPv6.</p></li><li><p>No software or hardware supports RFC7974 yet.</p></li><li><p>It's surprisingly hard to add support for RFC7974 in real-world applications. One option is to patch the operating system and overwrite the <code>getpeername(2)</code> and <code>accept4(2)</code> syscalls; another is to use <code>getsockopt(TCP_SAVED_SYN)</code> to extract the client IP from the SYN packet in the userspace application. Neither technique is simple.</p></li></ul>
    <div>
      <h4>(3) Use the PROXY protocol</h4>
      <a href="#3-use-the-proxy-protocol">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1o8aOHx1OoHCBYvExEoE7S/e81802cb18ce686ad50071d3cc4a1de0/Screen-Shot-2018-04-15-at-12.26.04-PM.png" />
            
            </figure><p>Finally, there is the last method. HAProxy developers, faced with this problem, developed <a href="http://www.haproxy.org/download/1.8/doc/proxy-protocol.txt">the "PROXY protocol"</a>. The premise of this protocol is to prepend client metadata in front of the original data stream. For example, this string could be sent to the origin server in front of the proxied data:</p>
            <pre><code>PROXY TCP4 192.0.2.123 104.16.112.25 19235 80\r\n</code></pre>
            <p>As you can see, the PROXY protocol is rather trivial to implement, and is generally sufficient for most use cases. However, it requires application support. The PROXY protocol (v1) is supported by Cloudflare Spectrum, and we highly encourage using it over other methods of preserving client source IP addresses.</p>
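    <p>To see just how trivial: here is a minimal sketch of a v1 header parser in Python (it handles only the TCP4/TCP6 forms shown above; a complete parser would also accept the UNKNOWN variant and enforce the spec's maximum header length):</p>

```python
def parse_proxy_v1(header: bytes):
    # Minimal PROXY protocol v1 parser: TCP4/TCP6 forms only, e.g.
    #   b"PROXY TCP4 192.0.2.123 104.16.112.25 19235 80\r\n"
    if not header.endswith(b"\r\n"):
        raise ValueError("v1 header must end with CRLF")
    fields = header[:-2].decode("ascii").split(" ")
    if len(fields) != 6 or fields[0] != "PROXY" or fields[1] not in ("TCP4", "TCP6"):
        raise ValueError("unsupported PROXY header")
    _, proto, src_ip, dst_ip, src_port, dst_port = fields
    return proto, src_ip, dst_ip, int(src_port), int(dst_port)

print(parse_proxy_v1(b"PROXY TCP4 192.0.2.123 104.16.112.25 19235 80\r\n"))
# -> ('TCP4', '192.0.2.123', '104.16.112.25', 19235, 80)
```
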
    <div>
      <h3>Mmproxy to the rescue</h3>
      <a href="#mmproxy-to-the-rescue">
        
      </a>
    </div>
    <p>But sometimes adding PROXY protocol support to the application isn't an option. This can be the case when the application isn’t open source, or when it's hard to edit. A good example is "sshd" - it doesn't support PROXY protocol and adding the support would be far from trivial. For such applications it may just be impossible to use any application level load balancer whatsoever. This is very unfortunate.</p><p>Fortunately we think we found a workaround.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/24geDq3Y5M6aq37IdIW5oA/aa1c73aeaf35267720d2b395763730c8/Screen-Shot-2018-04-15-at-12.26.28-PM-1.png" />
            
            </figure><p>Allow me to present <code>mmproxy</code>, a PROXY protocol gateway. <code>mmproxy</code> listens for remote connections coming from an application level load balancer, like Spectrum. It then reads a PROXY protocol header, opens a localhost connection to the target application, and duly proxies data in and out.</p><p>Such a proxy wouldn't be too useful if not for one feature—the localhost connection from <code>mmproxy</code> to the target application is sent with a real client source IP.</p><p>That's right, <code>mmproxy</code> spoofs the client IP address. From the application’s point of view, this spoofed connection, coming through Spectrum and <code>mmproxy</code>, is indistinguishable from a real one, connecting directly to the application.</p><p>This technique requires some Linux routing trickery. The <code>mmproxy</code> daemon will walk you through the necessary details, but here are the important bits:</p><ul><li><p><code>mmproxy</code> works only on Linux.</p></li><li><p>Since it forwards traffic over the loopback interface, it must be run on the same machine as the target application.</p></li><li><p>It requires kernel 2.6.28 or newer.</p></li><li><p>It guides the user to add four <code>iptables</code> firewall rules, and four <code>iproute2</code> routing rules, covering both IPv4 and IPv6.</p></li><li><p>For IPv4, <code>mmproxy</code> requires the <code>route_localnet</code> sysctl to be set.</p></li><li><p>For IPv6, it needs a working IPv6 configuration. A working <code>ping6 cloudflare.com</code> is a prerequisite.</p></li><li><p><code>mmproxy</code> needs root or <code>CAP_NET_RAW</code> permissions to set the <code>IP_TRANSPARENT</code> socket option. Once started, it jails itself with <code>seccomp-bpf</code> for a bit of added security.</p></li></ul>
    <div>
      <h3>How to run mmproxy</h3>
      <a href="#how-to-run-mmproxy">
        
      </a>
    </div>
    <p>To run <code>mmproxy</code>, first download the <a href="https://github.com/cloudflare/mmproxy">source</a> and compile it:</p>
            <pre><code>git clone https://github.com/cloudflare/mmproxy.git --recursive
cd mmproxy
make</code></pre>
            <p><a href="https://github.com/cloudflare/mmproxy/issues">Please report any issues on GitHub</a>.</p><p>Then set up the needed configuration:</p>
            <pre><code>sudo iptables -t mangle -I PREROUTING -m mark --mark 123 -j CONNMARK --save-mark
sudo iptables -t mangle -I OUTPUT -m connmark --mark 123 -j CONNMARK --restore-mark
sudo ip rule add fwmark 123 lookup 100
sudo ip route add local 0.0.0.0/0 dev lo table 100
sudo ip6tables -t mangle -I PREROUTING -m mark --mark 123 -j CONNMARK --save-mark
sudo ip6tables -t mangle -I OUTPUT -m connmark --mark 123 -j CONNMARK --restore-mark
sudo ip -6 rule add fwmark 123 lookup 100
sudo ip -6 route add local ::/0 dev lo table 100</code></pre>
            <p>You will also need <code>route_localnet</code> to be set on your default outbound interface, for example for <code>eth0</code>:</p>
            <pre><code>echo 1 | sudo tee /proc/sys/net/ipv4/conf/eth0/route_localnet</code></pre>
            <p>Finally, verify your IPv6 connectivity:</p>
            <pre><code>$ ping6 cloudflare.com
PING cloudflare.com(2400:cb00:2048:1::c629:d6a2) 56 data bytes
64 bytes from 2400:cb00:2048:1::c629:d6a2: icmp_seq=1 ttl=61 time=0.650 ms</code></pre>
            <p>Now, you are ready to run <code>mmproxy</code>. For example, forwarding localhost SSH would look like this:</p>
            <pre><code>$ sudo ./mmproxy --allowed-subnets ./cloudflare-ip-ranges.txt \
      -l 0.0.0.0:2222 \
      -4 127.0.0.1:22 -6 '[::1]:22'
[ ] Remember to set the reverse routing rules correctly:
iptables -t mangle -I PREROUTING -m mark --mark 123 -m comment --comment mmproxy -j CONNMARK --save-mark        # [+] VERIFIED
iptables -t mangle -I OUTPUT -m connmark --mark 123 -m comment --comment mmproxy -j CONNMARK --restore-mark     # [+] VERIFIED
ip6tables -t mangle -I PREROUTING -m mark --mark 123 -m comment --comment mmproxy -j CONNMARK --save-mark       # [+] VERIFIED
ip6tables -t mangle -I OUTPUT -m connmark --mark 123 -m comment --comment mmproxy -j CONNMARK --restore-mark    # [+] VERIFIED
ip rule add fwmark 123 lookup 100               # [+] VERIFIED
ip route add local 0.0.0.0/0 dev lo table 100   # [+] VERIFIED
ip -6 rule add fwmark 123 lookup 100            # [+] VERIFIED
ip -6 route add local ::/0 dev lo table 100     # [+] VERIFIED
[+] OK. Routing to 127.0.0.1 points to a local machine.
[+] OK. Target server 127.0.0.1:22 is up and reachable using conventional connection.
[+] OK. Target server 127.0.0.1:22 is up and reachable using spoofed connection.
[+] OK. Routing to ::1 points to a local machine.
[+] OK. Target server [::1]:22 is up and reachable using conventional connection.
[+] OK. Target server [::1]:22 is up and reachable using spoofed connection.
[+] Listening on 0.0.0.0:2222</code></pre>
            <p>On startup, <code>mmproxy</code> performs a number of self-checks. Since we prepared the necessary routing and firewall rules, these checks pass with a "VERIFIED" mark. It's important to confirm they all pass.</p><p>We're almost ready to go! The last step is to create a Spectrum application that sends PROXY protocol traffic to <code>mmproxy</code> on port 2222. Here is an example configuration<a href="#fn2">[2]</a>:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6dCc5n5QPZDEHlGMDd5jL2/a2c48e81f2519be09ce8a8ad379b23f9/Screen-Shot-2018-04-15-at-4.06.17-PM.png" />
            
            </figure><p>With Spectrum we are forwarding TCP/22 on domain "ssh.example.org", to our origin at 192.0.2.1, port 2222. We’ve enabled the PROXY protocol toggle.</p>
    <div>
      <h3>mmproxy in action</h3>
      <a href="#mmproxy-in-action">
        
      </a>
    </div>
    <p>Now we can see if it works. My testing VPS has IP address 79.1.2.3. Let's see if the whole setup behaves:</p>
            <pre><code>vps$ nc ssh.example.org 22
SSH-2.0-OpenSSH_7.2p2 Ubuntu-4ubuntu2.1</code></pre>
            <p>Hurray, this worked! "ssh.example.org" on port 22 is indeed tunneled over Spectrum. Let's look at the <code>mmproxy</code> logs:</p>
            <pre><code>[+] 172.68.136.1:32654 connected, proxy protocol source 79.1.2.3:0,
        local destination 127.0.0.1:22</code></pre>
            <p>The log confirms what happened: the Cloudflare IP 172.68.136.1 connected, advertised the client IP 79.1.2.3 over the PROXY protocol, and established a spoofed connection to 127.0.0.1:22. The ssh daemon logs show:</p>
            <pre><code>$ tail /var/log/auth.log
Apr 15 14:39:09 ubuntu sshd[7703]: Did not receive identification
        string from 79.1.2.3</code></pre>
            <p>Hurray! It all works! sshd recorded the real client IP address, and with <code>mmproxy</code>’s help it never saw that the traffic was actually flowing through Cloudflare Spectrum.</p>
    <div>
      <h3>Under the hood</h3>
      <a href="#under-the-hood">
        
      </a>
    </div>
    <p>Under the hood <code>mmproxy</code> relies on two hacks.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/55C3wnXIZ6x95nvIFJZZcu/c7bfeb61122486e53231927cfa464e44/Screen-Shot-2018-04-15-at-12.26.44-PM-1.png" />
            
            </figure><p>The first hack is about setting the source IP on outgoing connections. We are using the well-known <a href="https://idea.popcount.org/2014-04-03-bind-before-connect/">bind-before-connect</a> technique to do this.</p><p>Normally, it's only possible to set a valid source IP that is actually handled by the local machine. We can override this by using the <code>IP_TRANSPARENT</code> socket option. With it set, we can select arbitrary source IP addresses before establishing a legitimate connection handled by the kernel. For example, we can have a localhost socket between, say, 8.8.8.8 and 127.0.0.1, even though 8.8.8.8 may not be explicitly assigned to our server.</p><p>It's worth saying that <code>IP_TRANSPARENT</code> was not created for this use case. This socket option was specifically added as support <a href="/how-we-built-spectrum/">for the TPROXY module</a>.</p><p>The second hack is about routing. Normally, response packets coming from the application are routed to the Internet - via a default gateway. We must prevent that from happening, and instead direct these packets towards the loopback interface. To achieve this, we rely on <code>CONNMARK</code> and an additional routing table selected by <code>fwmark</code>. <code>mmproxy</code> sets a MARK value of 123 (by default) on packets it sends, which is preserved at the <code>CONNMARK</code> layer and restored for the return packets. We then route packets with MARK == 123 to a dedicated routing table (number 100 by default), which force-routes everything back to the loopback interface. We do this by totally <a href="/how-we-built-spectrum/">abusing the AnyIP trick</a> and assigning 0.0.0.0/0 to "local" - meaning that the entire Internet is treated as belonging to our machine.</p>
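    <p>The bind-before-connect half can be demonstrated without privileges, because on Linux the whole of 127.0.0.0/8 is treated as local. This Python sketch (an illustration, not <code>mmproxy</code>'s actual C code) binds the client socket to 127.0.0.2 before connecting, and the server duly reports 127.0.0.2 as the peer; <code>mmproxy</code> performs the same dance, with <code>IP_TRANSPARENT</code> additionally allowing a source address that isn't local at all:</p>

```python
import socket
import threading

# A throwaway server that records the peer address it observes.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(1)
port = srv.getsockname()[1]
seen = []

def accept_once():
    conn, peer = srv.accept()
    seen.append(peer[0])        # source IP as seen by the server
    conn.close()

t = threading.Thread(target=accept_once)
t.start()

# Bind-before-connect: choose the source address before connecting.
# 127.0.0.2 works unprivileged on Linux; a non-local address would
# additionally need the IP_TRANSPARENT socket option and root/CAP_NET_RAW.
cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.bind(("127.0.0.2", 0))
cli.connect(("127.0.0.1", port))
t.join()
cli.close()
srv.close()
print(seen[0])                  # the server saw 127.0.0.2, not 127.0.0.1
```
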
    <div>
      <h3>Summary</h3>
      <a href="#summary">
        
      </a>
    </div>
    <p><code>mmproxy</code> is not the only tool that uses this IP spoofing technique to preserve real client IP addresses. One example is <a href="https://man.openbsd.org/relayd.conf.5">OpenBSD's <code>relayd</code></a> "transparent" mode. Another is the <a href="https://github.com/UlricE/pen/wiki/Transparent-Reverse-Proxy"><code>pen</code> load balancer</a>. Compared to <code>mmproxy</code>, these tools look heavyweight and require more complex routing.</p><p><code>mmproxy</code> is the first daemon to do just one thing: unwrap the PROXY protocol and spoof the client IP address on locally running connections going to the application process. While it requires some firewall and routing setup, it's small enough to make an <code>mmproxy</code> deployment acceptable in many situations.</p><p>We hope that <code>mmproxy</code>, while a gigantic hack, could help some of our customers with onboarding onto Cloudflare Spectrum.</p><p>However, frankly speaking - we don't know. <code><i>mmproxy</i></code><i> should be treated as a great experiment</i>. If you find it useful, let us know! If you find a problem, <a href="https://github.com/cloudflare/mmproxy/issues">please report it</a>! We are looking for feedback. If our users find the <code>mmproxy</code> approach useful, we will repackage and release it as an easier-to-use tool.</p><hr /><p><i>Does doing low-level socket work sound interesting? Join our </i><a href="https://boards.greenhouse.io/cloudflare/jobs/589572"><i>world-famous team</i></a><i> in London, Austin, San Francisco, Champaign, and our elite office in Warsaw, Poland</i>.</p><hr /><ol><li><p>In addition to supporting the standard <code>X-Forwarded-For</code> HTTP header, Cloudflare supports a custom <code>CF-Connecting-IP</code> header. <a href="#fnref1">↩︎</a></p></li><li><p>Spectrum is available for Enterprise plan domains and can be enabled by your account manager. <a href="#fnref2">↩︎</a></p></li></ol> ]]></content:encoded>
            <category><![CDATA[Linux]]></category>
            <category><![CDATA[Tech Talks]]></category>
            <category><![CDATA[Reliability]]></category>
            <category><![CDATA[Spectrum]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[Deep Dive]]></category>
            <guid isPermaLink="false">2t7J0btuLV7WMxngCOuKEP</guid>
            <dc:creator>Marek Majkowski</dc:creator>
        </item>
    </channel>
</rss>