
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/">
    <channel>
        <title><![CDATA[ The Cloudflare Blog ]]></title>
        <description><![CDATA[ Get the latest news on how products at Cloudflare are built, technologies used, and join the teams helping to build a better Internet. ]]></description>
        <link>https://blog.cloudflare.com</link>
        <atom:link href="https://blog.cloudflare.com/" rel="self" type="application/rss+xml"/>
        <language>en-us</language>
        <image>
            <url>https://blog.cloudflare.com/favicon.png</url>
            <title>The Cloudflare Blog</title>
            <link>https://blog.cloudflare.com</link>
        </image>
        <lastBuildDate>Fri, 03 Apr 2026 21:53:06 GMT</lastBuildDate>
        <item>
            <title><![CDATA[Encrypted Client Hello - the last puzzle piece to privacy]]></title>
            <link>https://blog.cloudflare.com/announcing-encrypted-client-hello/</link>
            <pubDate>Fri, 29 Sep 2023 13:00:52 GMT</pubDate>
            <description><![CDATA[ We're excited to announce a contribution to improving privacy for everyone on the Internet. Encrypted Client Hello, a new standard that prevents networks from snooping on which websites a user is visiting, is now available on all Cloudflare plans.  ]]></description>
            <content:encoded><![CDATA[ <p></p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4tab7Qbtn2MGjXuTbcAsux/c40ac1cfda644402d4022573129a588b/image2-29.png" />
            
            </figure><p>Today we are excited to announce a contribution to improving privacy for everyone on the Internet. Encrypted Client Hello, a <a href="https://datatracker.ietf.org/doc/draft-ietf-tls-esni/">new proposed standard</a> that prevents networks from snooping on which websites a user is visiting, is now available on all Cloudflare plans.  </p><p>Encrypted Client Hello (ECH) is a successor to <a href="https://www.cloudflare.com/learning/ssl/what-is-encrypted-sni/">ESNI</a> and masks the Server Name Indication (SNI) that is used to negotiate a TLS handshake. This means that whenever a user visits a website on Cloudflare that has ECH enabled, no one except for the user, Cloudflare, and the website owner will be able to determine which website was visited. Cloudflare is a big proponent of privacy for everyone and is excited about the prospects of bringing this technology to life.</p>
    <div>
      <h3>Browsing the Internet and your privacy</h3>
      <a href="#browsing-the-internet-and-your-privacy">
        
      </a>
    </div>
    <p>Whenever you visit a website, your browser sends a request to a web server. The web server responds with content and the website starts loading in your browser. Way back in the early days of the Internet this happened in "plain text", meaning that your browser would just send bits across the network that everyone could read: the corporate network you may be browsing from, the Internet Service Provider that offers you Internet connectivity, and any network that the request traverses before it reaches the web server that hosts the website. Privacy advocates have long been concerned about how much information could be seen in "plain text": if any network between you and the web server can see your traffic, that means they can also see exactly what you are doing. If you are initiating a bank transfer, any intermediary can see the destination and the amount of the transfer.</p><p>So how do we start making this data more private? To prevent eavesdropping, encryption was introduced in the form of <a href="https://www.cloudflare.com/learning/ssl/what-is-ssl/">SSL</a> and later <a href="https://www.cloudflare.com/learning/ssl/transport-layer-security-tls/">TLS</a>. These are amazing protocols that safeguard not only your privacy but also ensure that no intermediary can tamper with any of the content you view or upload. But encryption only goes so far.</p><p>While the actual content (which particular page on a website you're visiting and any information you upload) is encrypted and shielded from intermediaries, there are still ways to determine what a user is doing. For example, the DNS request to determine the IP address of the website you're visiting and the <a href="https://www.cloudflare.com/learning/ssl/what-is-sni/">SNI</a> are both common ways for intermediaries to track usage.</p><p>Let's start with <a href="https://www.cloudflare.com/learning/dns/what-is-dns/">DNS</a>. 
Whenever you visit a website, your operating system needs to know which IP address to connect to. This is done through a DNS request. DNS is unencrypted by default, meaning anyone can see which website you're asking about. To help users shield these requests from intermediaries, Cloudflare introduced <a href="/dns-encryption-explained/">DNS over HTTPS</a> (DoH) in 2019. In 2020, we went one step further and introduced <a href="/oblivious-dns/">Oblivious DNS over HTTPS</a>, which prevents even Cloudflare from seeing which websites a user is asking about.</p><p>That leaves SNI as the last unencrypted bit that intermediaries can use to determine which website you're visiting. After performing a DNS query, one of the first things a browser will do is perform a <a href="https://www.cloudflare.com/learning/ssl/what-happens-in-a-tls-handshake/">TLS handshake</a>. The handshake consists of several steps, including agreeing on which cipher to use, which TLS version, and which certificate will be used to verify the web server's identity. As part of this handshake, the browser will indicate the name of the server (website) that it intends to visit: the Server Name Indication.</p><p>Because the session is not yet encrypted and the server doesn't know which certificate to use, the browser must transmit this information in plain text. Sending the SNI in plaintext means that any intermediary can view which website you’re visiting simply by checking the first packet of a connection:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/74acW9qWQyuJFltzJagc2S/5693f381e386d44283121cdd9dd65106/pasted-image-0--8--2.png" />
            
            </figure><p>This means that despite the amazing efforts of TLS and DoH, which website you’re visiting on the Internet still isn't truly private. Today, we are adding the final missing piece of the puzzle with ECH. With ECH, the browser performs a TLS handshake with Cloudflare, but not with a customer-specific hostname. This means that although intermediaries will be able to see that you are visiting <i>a</i> website on Cloudflare, they will never be able to determine which one.</p>
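<p>Before diving into how ECH fixes this, it is worth seeing just how exposed the pre-ECH SNI is. Below is a minimal sketch of the server_name extension (RFC 6066) that a browser sends in its ClientHello; it illustrates the wire format only, and is not a full TLS implementation:</p>

```python
# Build just the SNI (server_name) extension of a TLS ClientHello, per
# RFC 6066, to show that the hostname travels as readable plain text.
import struct

def sni_extension(hostname: str) -> bytes:
    name = hostname.encode("ascii")
    # ServerName entry: name_type 0 (host_name) + 2-byte length + name
    entry = b"\x00" + struct.pack(">H", len(name)) + name
    # server_name_list: 2-byte total length + entries
    server_name_list = struct.pack(">H", len(entry)) + entry
    # extension: type 0x0000 (server_name) + 2-byte length + payload
    return b"\x00\x00" + struct.pack(">H", len(server_name_list)) + server_name_list

ext = sni_extension("example.com")
# Anyone on the network path can recover the hostname with a simple
# substring search on the first packet, no decryption required:
assert b"example.com" in ext
```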
    <div>
      <h3>How does ECH work?</h3>
      <a href="#how-does-ech-work">
        
      </a>
    </div>
    <p>In order to explain how ECH works, it helps to first understand how TLS handshakes are performed. A TLS handshake starts with a <a href="https://www.cloudflare.com/learning/ssl/what-happens-in-a-tls-handshake/">ClientHello</a> message, which allows a client to say which ciphers it supports, which TLS version it wants to use and, most importantly, which server it's trying to visit (the SNI).</p><p>With ECH, the ClientHello message is split into two separate parts: an inner part and an outer part. The outer part contains the non-sensitive information such as which ciphers to use and the TLS version. It also includes an "outer SNI". The inner part is encrypted and contains an "inner SNI".</p><p>The outer SNI is a common name that, in our case, indicates that a user is trying to visit an encrypted website on Cloudflare. We chose cloudflare-ech.com as the SNI that all websites on Cloudflare will share. Because Cloudflare controls that domain, we have the appropriate certificates to be able to negotiate a TLS handshake for that server name.</p><p>The inner SNI contains the actual server name that the user is trying to visit. This is encrypted using a public key and can only be read by Cloudflare. Once the handshake completes, the web page loads as normal, just like any other website loaded over TLS.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/MOw4dKZFQbisxTiD3PIrD/6b173801a10ece203e82d8bf3ed28c0b/pasted-image-0-8.png" />
            
            </figure><p>In practice, this means that any intermediary trying to establish which website you’re visiting will simply see normal TLS handshakes with one caveat: any time you visit an ECH-enabled website on Cloudflare, the server name will look the same. Every TLS handshake will appear identical in that it looks like it's trying to load a website for cloudflare-ech.com, as opposed to the actual website. We've put the last puzzle piece in place for preserving the privacy of users who don't want intermediaries seeing which websites they are visiting.</p><p>Visit our introductory blog for full details on the nitty-gritty of <a href="/encrypted-client-hello/">ECH technology</a>.</p>
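<p>A conceptual sketch of the inner/outer split may help. The names below are illustrative only; the real encoding is defined in the ECH draft, and the inner ClientHello is sealed with HPKE:</p>

```python
# Teaching model of an ECH ClientHello: the outer part is readable by
# anyone on the path, while the inner part (with the real SNI) is
# encrypted to the server's ECH public key. Not the actual wire format.
from dataclasses import dataclass

@dataclass
class EchClientHello:
    outer_sni: str          # shared name, e.g. "cloudflare-ech.com"
    encrypted_inner: bytes  # sealed inner ClientHello, real SNI inside

def observer_view(hello: EchClientHello) -> str:
    # An on-path observer learns only the outer SNI.
    return hello.outer_sni

hello = EchClientHello(outer_sni="cloudflare-ech.com",
                       encrypted_inner=b"<opaque ciphertext>")
assert observer_view(hello) == "cloudflare-ech.com"
```

<p>Whichever of the many ECH-enabled websites the user actually visits, the observable part of the handshake is the same.</p>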
    <div>
      <h3>The future of privacy</h3>
      <a href="#the-future-of-privacy">
        
      </a>
    </div>
    <p>We're excited about what this means for privacy on the Internet. Browsers like <a href="https://chromestatus.com/feature/6196703843581952">Google Chrome</a> and <a href="https://groups.google.com/a/mozilla.org/g/dev-platform/c/uv7PNrHUagA/m/BNA4G8fOAAAJ">Firefox</a> are starting to ramp up support for ECH already. If you run a website and you care about users visiting it in a fashion that doesn't allow any intermediary to see what they are doing, enable ECH today on Cloudflare. We've enabled ECH for all free zones already. If you're an existing paying customer, just head on over to the Cloudflare dashboard and <a href="https://dash.cloudflare.com/?to=/:account/:zone/ssl-tls/edge-certificates">apply for the feature</a>. We’ll be enabling this for everyone who signs up over the coming few weeks.</p><p>Over time, we hope others will follow in our footsteps, leading to a more private Internet for everyone. The more providers that offer ECH, the harder it becomes for anyone to listen in on what users are doing on the Internet. Heck, we might even solve privacy for good.</p><p>If you're looking for more information on ECH, how it works and how to enable it, head on over to our <a href="https://developers.cloudflare.com/ssl/edge-certificates/ech/">developer documentation on ECH</a>.</p>
            <category><![CDATA[Birthday Week]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[Encrypted SNI]]></category>
            <category><![CDATA[Research]]></category>
            <guid isPermaLink="false">3qwZPNqFZ0Cj1noi8cSLG9</guid>
            <dc:creator>Achiel van der Mandele</dc:creator>
            <dc:creator>Alessandro Ghedini</dc:creator>
            <dc:creator>Christopher Wood</dc:creator>
            <dc:creator>Rushil Mehra</dc:creator>
        </item>
        <item>
            <title><![CDATA[Are you measuring what matters? A fresh look at Time To First Byte]]></title>
            <link>https://blog.cloudflare.com/ttfb-is-not-what-it-used-to-be/</link>
            <pubDate>Tue, 20 Jun 2023 13:00:59 GMT</pubDate>
            <description><![CDATA[ Time To First Byte (TTFB) is not a good way to measure your website's performance. In this blog we’ll cover what TTFB is a good indicator of, what it's not great for, and what you should be using instead. ]]></description>
            <content:encoded><![CDATA[ 
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3DDRz6sPcW8kWs2Xw8iDv4/f99afbb10dad72d9d1f28855a71edb49/image1-18.png" />
            
            </figure><p>Today, we’re making the case for why Time To First Byte (TTFB) is not a good metric for evaluating how fast web pages load. There are better metrics out there that give a more accurate representation of how well a server or <a href="https://www.cloudflare.com/learning/cdn/what-is-a-cdn/">content delivery network</a> performs for end users. In this blog, we’ll go over the ambiguity of measuring TTFB, touch on more meaningful metrics such as <a href="https://www.cloudflare.com/learning/performance/what-are-core-web-vitals/">Core Web Vitals</a> that should be used instead, and finish on scenarios where TTFB still makes sense to measure.</p><p>Many of our customers ask what the best way would be to evaluate how well a network like ours works. This is a good question! Measuring performance is difficult. It’s easy to simplify the question to “How close is Cloudflare to end users?” The predominant metric that’s been used to measure that is <a href="https://www.cloudflare.com/learning/cdn/glossary/round-trip-time-rtt/">round trip time (RTT)</a>. This is the time it takes for one network packet to travel from an end user to Cloudflare and back. We measure this metric and mention it from time to time: Cloudflare has an average RTT of 50 milliseconds for 95% of the Internet-connected population.</p><p>Whilst RTT is a relatively good indicator of the quality of a network, it doesn’t necessarily tell you that much about how good it is at actually delivering websites to end users. For instance, what if the web server is really slow? A user might be very close to the data center that serves the traffic, but if it takes a long time to actually grab the asset from disk and serve it, the result will still be a poor experience.</p><p>This is where TTFB comes in. It measures the time from when a request is sent by an end user until the very first byte of the response is received. This sounds great on paper! 
However, it doesn’t capture how a webpage or web application loads, and what happens <i>after</i> the first byte is received.</p><p>In this blog we’ll cover what TTFB is a good indicator of, what it's not great for, and what you should be using instead.</p>
    <div>
      <h3>What is TTFB?</h3>
      <a href="#what-is-ttfb">
        
      </a>
    </div>
    <p>TTFB is a metric which reports the duration between sending the request from the client to a server for a given file, and the receipt of the first byte of said file. For example, if you were to download the Cloudflare logo from our website, the TTFB would be how long it took to receive the first byte of that image. Similarly, if you were to measure the TTFB of a request to cloudflare.com, the metric would report how long it took from the request to receiving the first byte of the first HTTP response. Not how long it took for the image to be fully visible or for the web page to be loaded in a state that allowed a user to begin using it.</p><p>The simplest answer therefore is to look at the diametrically opposite measurement, Time to Last Byte (TTLB). TTLB, as you’d expect, measures how long it takes until the last byte of data is received from the server. For the Cloudflare logo file this would make sense, as until the image is fully downloaded it's not exactly useful. But what about for a webpage? Do you really need to wait until every single file is fully downloaded, even those images at the bottom of the page you can't immediately see? TTLB is fine for measuring how long it took to download a single file from a CDN / server. However, for multi-faceted traffic, like web pages, it is too conservative, as it doesn’t tell you how long it took for the web page to be <i>usable.</i></p><p>As an analogy we can look at measuring how long it takes to process an incoming airplane full of passengers. What's important is to understand how long it takes for those passengers to disembark, pass through passport control, collect their baggage and leave the terminal, assuming no onward journeys. TTFB would measure success as how long it took to get the first passenger off the airplane. TTLB would measure how long it took the last passenger to leave the terminal, even if this passenger remained in the terminal for hours afterwards due to passport issues or getting lost. 
Neither is a good measure of success for the airline.</p>
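<p>The two clocks are easy to see in code. This is a minimal sketch using only Python's standard library (plain HTTP for simplicity; the host, port and path are whatever you point it at), stopping the TTFB clock at the first body byte and the TTLB clock once the body has been drained:</p>

```python
# Measure TTFB and TTLB for a single resource: TTFB ends at the first
# byte of the response body, TTLB when the last byte has arrived.
import http.client
import time

def ttfb_ttlb(host: str, path: str = "/", port: int = 80) -> tuple[float, float]:
    conn = http.client.HTTPConnection(host, port, timeout=10)
    start = time.monotonic()
    conn.request("GET", path)
    resp = conn.getresponse()
    resp.read(1)                 # first body byte: stop the TTFB clock
    ttfb = time.monotonic() - start
    resp.read()                  # drain the remainder of the body
    ttlb = time.monotonic() - start
    conn.close()
    return ttfb, ttlb
```

<p>For a single file, the gap between the two numbers is the body transfer time; for a full web page, neither number tells you when the page became usable.</p>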
    <div>
      <h3>Why TTFB doesn't make sense</h3>
      <a href="#why-ttfb-doesnt-make-sense">
        
      </a>
    </div>
    <p>TTFB is a widely used metric because it is easy to understand and is a great signal for connection setup time, server time and network latency. It can help website owners identify when performance issues originate from their server. But is TTFB a good signal for how real users experience the loading speed of a web page in a browser?</p><p>When a web page loads in a browser, the user’s perception of speed isn’t related to the moment the browser first receives bytes of data. It is related to when the user starts to see the page rendering on the screen.</p><p>The loading of a web page in a browser is a very complex process. Almost all of this process happens after TTFB is reported. After the first byte has been received, the browser still has to load the main HTML file. It also has to load fonts, stylesheets, JavaScript, images and other resources. Often these resources link to other resources that also must be downloaded. Often these resources entirely block the rendering of the page. Alongside all these downloads, the browser is also parsing the HTML, CSS and JavaScript. It is building data structures that represent the content of the web page as well as how it is styled. All of this is in preparation to start rendering the final page onto the screen for the user.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5CEjRuDYe3eMAChrWxwmoC/29a7e28b2c9b961cb2bf0297edacffca/image2-12.png" />
            
            </figure><p>When the user starts seeing the web page actually rendered on the screen, TTFB has become a distant memory for the browser. For a metric that signals the loading speed as perceived by the user, TTFB falls dramatically short.</p><p>Receiving the first byte isn't sufficient to determine a good end user experience as most pages have additional render blocking resources that get loaded after the initial document request. Render-blocking resources are scripts, stylesheets, and HTML imports that prevent a web page from loading quickly. From a TTFB perspective it means the client could stop the ‘TTFB clock’ on receipt of the first byte of one of these files, but the web browser is blocked from showing anything to the user until the remaining critical assets are downloaded.</p><p>This is because browsers need instructions for what to render and what resources need to be fetched to complete “painting” a given web page. These instructions come from a server response. But the servers sending these responses often need time to compile these resources — this is known as “server think time.” While the servers are busy during this time… browsers sit idle and wait. And the TTFB counter goes up.</p><p>There have been a number of attempts over the years to benefit from this “think time”. First came Server Push, which was superseded last year by <b>Early Hints</b>. Early Hints take advantage of “server think time” to asynchronously send instructions to the browser to begin loading resources while the origin server is compiling the full response. By sending these hints to a browser before the full response is prepared, the browser can figure out what it needs to do to load the webpage faster for the end user. It also stops the TTFB clock, meaning a lower TTFB. 
This helps ensure the browser gets the critical files sooner to begin loading the webpage, and it also means the first byte is delivered sooner as there is no waiting on the server for the whole dataset to be prepared and ready to send. Even with Early Hints, though, TTFB doesn’t accurately define how long it took the web page to be in a usable state.</p>
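<p>For reference, an Early Hints exchange is simply an interim 103 response carrying Link headers that arrives ahead of the final response; the resource paths below are made up for illustration:</p>

```python
# What an origin's Early Hints (RFC 8297) interim response looks like on
# the wire, sent while the server is still "thinking" about the full reply.
early_hints = (
    "HTTP/1.1 103 Early Hints\r\n"
    "Link: </style.css>; rel=preload; as=style\r\n"
    "Link: </app.js>; rel=preload; as=script\r\n"
    "\r\n"
)
# The browser can start fetching /style.css and /app.js immediately,
# before the final "HTTP/1.1 200 OK" response and its body arrive.
assert early_hints.startswith("HTTP/1.1 103")
```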
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5HWEDVlv82i1e7QA4M9MHO/f7847d5bc8b344c6f2131ed9451ece9c/image3-10.png" />
            
            </figure><p>TTFB also does not take into account multiplexing benefits of <a href="https://www.cloudflare.com/learning/performance/http2-vs-http1.1/">HTTP/2</a> and <a href="https://www.cloudflare.com/learning/performance/what-is-http3/">HTTP/3</a>, which allow browsers to load files in parallel. It also doesn't take into account compression on the origin, which would result in a higher TTFB but a quicker page load overall due to the time the server took to compress the assets and send them in a small format over the network.</p><p>Cloudflare offers many features that can improve the loading speed of a website, but don’t necessarily impact the TTFB score. These features include Zaraz, Rocket Loader, HTTP/2 and HTTP/3 Prioritization, Mirage, Polish, Image Resizing, Auto Minify and Cache. These features <a href="https://www.cloudflare.com/learning/performance/speed-up-a-website/">improve the loading time of a webpage</a>, ensuring pages load optimally through a series of enhancements from <a href="https://www.cloudflare.com/developer-platform/cloudflare-images/">image optimization and compression</a> to render blocking elimination by optimizing the sending of assets from the server to the browser in the best possible order.</p><p>More comprehensive metrics are required to illustrate the full loading process of a web page, and the benefit provided by these features. This is where <b>Real User Monitoring</b> helps. At Cloudflare we are all-in on Real User Monitoring (RUM) as the future of <a href="https://www.cloudflare.com/learning/performance/why-site-speed-matters/">website performance</a>. We’re investing heavily in it, both from an observation standpoint and from an optimization one.</p><p>For those unfamiliar with RUM, we typically optimize websites for three main metrics - known as the “Core Web Vitals”. 
This is a set of key metrics which are believed to be the best and most accurate representation of a poorly performing website vs a well performing one. These key metrics are Largest Contentful Paint, First Input Delay and Cumulative Layout Shift.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5sWYD1VyB6OkS1avUesrta/d6ddf78c645c6175987cbbaf1df9a029/image4-10.png" />
            
            </figure><p>Source: <a href="https://addyosmani.com/blog/web-vitals-extension/">https://addyosmani.com/blog/web-vitals-extension/</a> </p><p>LCP measures loading performance: typically how long it takes to load the largest image or text block visible in the browser. FID measures interactivity. For example, the time from when a user clicks or taps on a button to when the browser responds and starts doing something. Finally, CLS measures visual stability. A good example of poor CLS is when you go to a website on your mobile phone, tap on a link, and the page shifts at the last second, meaning you tap something you didn't want to. That would result in a poor CLS score, as it's a bad user experience.</p><p>Looking at these metrics gives us a good idea of how the end user is truly experiencing your website (RUM) vs. how quickly the first byte of the file was retrieved from the nearest Cloudflare data center (TTFB).</p>
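<p>The published Core Web Vitals guidance draws the "good" lines at roughly 2.5 s for LCP, 100 ms for FID and 0.1 for CLS, with a "needs improvement" band above each before "poor". A small illustrative helper (not a Cloudflare API) applying those thresholds:</p>

```python
# Classify Core Web Vitals with the commonly published thresholds.
# Lower is better for all three metrics.
def rate_vitals(lcp_s: float, fid_ms: float, cls: float) -> dict[str, str]:
    def bucket(value: float, good: float, ok: float) -> str:
        if value <= good:
            return "good"
        if value <= ok:
            return "needs improvement"
        return "poor"
    return {
        "LCP": bucket(lcp_s, 2.5, 4.0),       # seconds
        "FID": bucket(fid_ms, 100.0, 300.0),  # milliseconds
        "CLS": bucket(cls, 0.1, 0.25),        # unitless shift score
    }

assert rate_vitals(1.8, 40, 0.02) == {"LCP": "good", "FID": "good", "CLS": "good"}
```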
    <div>
      <h3>Good TTFB, bad user experience</h3>
      <a href="#good-ttfb-bad-user-experience">
        
      </a>
    </div>
    <p>One of the “sub parts” that comprise LCP is TTFB. That means a poor TTFB is very likely to result in a poor LCP. If it takes you 20 seconds to retrieve the first byte of the first image, your user isn't going to have a good experience - regardless of your outlook on TTFB vs RUM.</p><p>Conversely, we found that a <a href="https://web.dev/ttfb/#what-is-a-good-ttfb-score">good TTFB</a> does not always mean a good LCP score, or FID or CLS. We ran a query to collect RUM metrics of web pages we served which had a good TTFB score. Good is defined as a TTFB of less than 800 ms. This allowed us to ask the question: TTFB says these websites are good. Does the RUM data support that?</p><p>We took four distinct samples from our RUM data in June. Each sample had a different date-range and sample-rate combination. In each sample we queried for 200,000 page views. From these 200,000 page views we filtered for only the page views that reported a 'Good' TTFB. Across the samples, of all page views that have a good TTFB, about 21% of them did not have a <a href="https://web.dev/lcp/#what-is-a-good-lcp-score">“good” LCP score</a>. 46% of them did not have a <a href="https://web.dev/fid/#what-is-a-good-fid-score">“good” FID score</a>. And 57% of them did not have a good <a href="https://web.dev/cls/#what-is-a-good-cls-score">CLS</a> score.</p><p>This clearly shows the disparity between measuring the time it takes to receive the first byte of traffic, vs the time it takes for a webpage to become stable and interactive. In summary, LCP includes TTFB but also includes other parts of the loading experience. LCP is a more comprehensive, user-centric metric.</p>
    <div>
      <h3>TTFB is not all bad</h3>
      <a href="#ttfb-is-not-all-bad">
        
      </a>
    </div>
    <p>Reading this post and others from Speed Week 2023 you may conclude we really don't like TTFB and you should stop using it. That isn't the case.</p><p>There are a few situations where TTFB does matter. For starters, there are many applications that aren’t websites. File servers, APIs and all sorts of streaming protocols don’t have the same semantics as web pages and the best way to objectively measure performance is to in fact look at exactly when the first byte is returned from a server.</p><p>To help optimize TTFB for these scenarios we are announcing <a href="/introducing-timing-insights">Timing Insights</a>, a new analytics tool to help you understand what is contributing to "Time to First Byte" (TTFB) of Cloudflare and your origin. Timing Insights breaks down TTFB from the perspective of our servers to help you understand what is slow, so that you can begin addressing it.</p>
    <div>
      <h3>Get started with RUM today</h3>
      <a href="#get-started-with-rum-today">
        
      </a>
    </div>
    <p>To help you understand the real user experience of your website, we have today launched <a href="/cloudflare-observatory-generally-available"><b>Cloudflare Observatory</b></a> - the new home of performance at Cloudflare.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1UbKEuzHKLSwcjmGfQIrfW/bcbbdb8272fbbbdf17a7f5238fad0812/image5-2-1.png" />
            
            </figure><p>Cloudflare users can now easily <a href="https://www.cloudflare.com/application-services/solutions/app-performance-monitoring/">monitor website performance</a> using Real User Monitoring (RUM) data along with scheduled synthetic tests from different regions in a single dashboard. This will identify any performance issues your website may have. The best bit? Once we’ve identified any issues, Observatory will highlight customized recommendations to resolve these issues, all with a single click.</p><p>Start making your website faster today with <a href="https://dash.cloudflare.com/?to=/:account/:zone/speed/test">Observatory</a>.</p> ]]></content:encoded>
            <category><![CDATA[Speed Week]]></category>
            <category><![CDATA[TTFB]]></category>
            <category><![CDATA[Performance]]></category>
            <guid isPermaLink="false">1ckVp4U6xrlEipotlKstbo</guid>
            <dc:creator>Sam Marsh</dc:creator>
            <dc:creator>Achiel van der Mandele</dc:creator>
            <dc:creator>Shih-Chiang Chien</dc:creator>
        </item>
        <item>
            <title><![CDATA[Introducing HTTP/3 Prioritization]]></title>
            <link>https://blog.cloudflare.com/better-http-3-prioritization-for-a-faster-web/</link>
            <pubDate>Tue, 20 Jun 2023 13:00:46 GMT</pubDate>
            <description><![CDATA[ Today, Cloudflare is very excited to announce full support for HTTP/3 Extensible Priorities, a new standard that speeds the loading of webpages by up to 37% ]]></description>
            <content:encoded><![CDATA[ 
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5WMquEFDro1TjSvsKvdb2X/1ebf5bacb443b2c7c611b2600cfe1352/image4-9.png" />
            
            </figure><p>Today, Cloudflare is very excited to announce full support for HTTP/3 Extensible Priorities, a new standard that speeds the loading of webpages by up to 37%. Cloudflare worked closely with standards builders to help form the specification for HTTP/3 priorities and is excited to help push the web forward. HTTP/3 Extensible Priorities is available on all plans on Cloudflare. For paid users, there is an enhanced version available that improves performance even more.</p><p>Web pages are made up of many objects that must be downloaded before they can be processed and presented to the user. Not all objects have equal importance for web performance. The role of HTTP prioritization is to load the right bytes at the most opportune time, to achieve the best results. Prioritization is most important when there are multiple objects all competing for the same constrained resource. In <a href="https://www.cloudflare.com/learning/performance/what-is-http3/">HTTP/3</a>, this resource is the QUIC connection. In most cases, bandwidth is the bottleneck from server to client. Picking what objects to dedicate bandwidth to, or share bandwidth amongst, is a critical foundation to web performance. When it goes askew, the other optimizations we build on top can suffer.</p><p>Today, we're announcing support for prioritization in HTTP/3, using the full capabilities of the HTTP Extensible Priorities (<a href="https://www.rfc-editor.org/rfc/rfc9218.html">RFC 9218)</a> standard, augmented with Cloudflare's knowledge and experience of enhanced HTTP/2 prioritization. This change is compatible with all mainstream web browsers and can improve key metrics such as <a href="https://web.dev/lcp/">Largest Contentful Paint</a> (LCP) by up to 37% in our test. Furthermore, site owners can apply server-side overrides, using Cloudflare Workers or directly from an origin, to customize behavior for their specific needs.</p>
    <div>
      <h3>Looking at a real example</h3>
      <a href="#looking-at-a-real-example">
        
      </a>
    </div>
    <p>The ultimate question when it comes to features like HTTP/3 Priorities is: how well does this work and should I turn it on? The details are interesting and we'll explain all of them shortly, but first let's see some demonstrations.</p><p>In order to evaluate prioritization for HTTP/3, we have been running many simulations and tests. Each web page is unique. Loading a web page can require many TCP or QUIC connections, each of them idiosyncratic. These all affect how prioritization works and how effective it is.</p><p>To evaluate the effectiveness of priorities, we ran a set of tests measuring Largest Contentful Paint (LCP). As an example, we benchmarked blog.cloudflare.com to see how much we could improve performance:</p>
<p>As a film strip, this is what it looks like:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1nUzro5TRNUdT66D48SAX9/eea3706754ab1adcdcf2de1520b4e8b2/unnamed.png" />
            
            </figure><p>In terms of actual numbers, we see Largest Contentful Paint drop from 2.06 seconds down to 1.29 seconds. Let’s look at why that is. To analyze exactly what’s going on, we have to look at a waterfall diagram of how this web page is loading. A waterfall diagram is a way of visualizing how assets are loading. Some may be loaded in parallel whilst others are loaded sequentially. Without smart prioritization, the waterfall for loading assets for this web page looks as follows:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7GEe1xRnxvauMVfOKy5KH2/ab43c1561d3d512589a6de0b063419a1/BLOG-1879-waterfall-analysis-2.png" />
            
            </figure><p>There are several interesting things going on here so let's break it down. The LCP image at request 21 is for 1937-1.png, weighing 30.4 KB. Although it is the LCP image, the browser requests it as priority u=3,i, which informs the server to put it in the same round-robin bandwidth-sharing bucket as all of the other images. Ahead of the LCP image is index.js, a JavaScript file that is loaded with a "defer" attribute. This JavaScript is non-blocking and shouldn't affect key aspects of page layout.</p><p>What appears to be happening is that the browser gives index.js the priority u=3,i=?0, which places it ahead of the images group on the server side. Therefore, the 217 KB of index.js is sent in preference to the LCP image. Far from ideal. Not only that, once the script is delivered, it needs to be processed and executed. This saturates the CPU and prevents the LCP image from being painted for about 300 milliseconds, even though it had already been delivered.</p><p>The waterfall with prioritization looks much better:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3Y2mU3vMrf4DVgI6Cq7CSv/980f93870fee27984d40dd310ee38c8d/BLOG-1879-waterfall-analysis-1.png" />
            
            </figure><p>We used a server-side override to promote the priority of the LCP image 1937-1.png from u=3,i to u=2,i. This has the effect of making it leapfrog the "defer" JavaScript. We can see that at around 1.2 seconds, transmission of index.js is halted while the image is delivered in full. And because it takes another couple of hundred milliseconds to receive the remaining JavaScript, there is no CPU competition for the LCP image paint. These factors combine to drastically improve LCP times.</p>
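<p>As a quick sanity check of those numbers, here's a back-of-the-envelope calculation (illustrative only, not part of the benchmark itself):</p>

```python
# Sanity-checking the LCP improvement quoted above.
lcp_before = 2.06  # seconds, without the server-side override
lcp_after = 1.29   # seconds, with the override promoting the LCP image

reduction = (lcp_before - lcp_after) / lcp_before
print(f"LCP reduced by {reduction:.1%}")  # roughly the 37% quoted above
```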
    <div>
      <h3>How Extensible Priorities actually works</h3>
      <a href="#how-extensible-priorities-actually-works">
        
      </a>
    </div>
    <p>First of all, you don't need to do anything yourself to make it work. Out of the box, browsers will send Extensible Priorities signals alongside HTTP/3 requests, which we'll feed into our priority scheduling decision-making algorithms. We'll then decide the best way to send HTTP/3 response data to ensure speedy page loads.</p><p>Extensible Priorities has a similar interaction model to HTTP/2 priorities: clients send priorities and servers act on them to schedule response data. We'll explain exactly how that works in a bit.</p><p>HTTP/2 priorities used a dependency tree model. While this was very powerful, it turned out to be hard to implement and use. When the IETF came to try and port it to HTTP/3 during the standardization process, we hit major issues. If you are interested in all that background, go and read my blog post describing why we adopted a <a href="/adopting-a-new-approach-to-http-prioritization/">new approach to HTTP/3 prioritization</a>.</p><p>Extensible Priorities is a far simpler scheme. HTTP/2's dependency tree, with 255 weights and dependencies (which can be exclusive), is complex, hard for web developers to use, and could not work for HTTP/3. Extensible Priorities has just two parameters: urgency and incremental, and these are capable of achieving exactly the same web performance goals.</p><p>Urgency is an integer value in the range 0-7. It indicates the importance of the requested object, with 0 being most important and 7 being the least. The default is 3. Urgency is comparable to HTTP/2 weights. However, it's simpler to reason about 8 possible urgencies rather than 255 weights. This makes developers' lives easier when trying to pick a value and predicting how it will work in practice.</p><p>Incremental is a boolean value. The default is false. A true value indicates the requested object can be processed as parts of it are received and read, commonly referred to as streaming processing. 
A false value indicates the object must be received in whole before it can be processed.</p><p>Let's consider some example web objects to put these parameters into perspective:</p><ul><li><p>An HTML document is the most important piece of a webpage. It can be processed as parts of it arrive. Therefore, urgency=0 and incremental=true is a good choice.</p></li><li><p>A CSS stylesheet is important for page rendering and could block visual completeness. It needs to be processed in whole. Therefore, urgency=1 and incremental=false is suitable; this means it doesn't interfere with the HTML.</p></li><li><p>An image file that is outside the browser viewport is not very important and it can be processed and painted as parts arrive. Therefore, urgency=3 and incremental=true is appropriate to keep it from interfering with the sending of other objects.</p></li><li><p>An image file that is the "hero image" of the page is likely the Largest Contentful Paint element. An urgency of 1 or 2 will help it avoid being mixed in with other images. The choice of incremental value is a little subjective and either might be appropriate.</p></li></ul><p>When making an HTTP request, clients decide the Extensible Priority value composed of the urgency and incremental parameters. These are sent either as an HTTP header field in the request (meaning inside the HTTP/3 HEADERS frame on a request stream), or separately in an HTTP/3 PRIORITY_UPDATE frame on the control stream. HTTP headers are sent once at the start of a request; a client might change its mind, so the PRIORITY_UPDATE frame allows it to reprioritize at any point in time.</p><p>For both the header field and PRIORITY_UPDATE, the parameters are exchanged using the Structured Fields Dictionary format (<a href="https://www.rfc-editor.org/info/rfc8941">RFC 8941</a>) and serialization rules. 
In order to save bytes on the wire, the parameters are shortened – urgency to 'u', and incremental to 'i'.</p><p>Here's how the HTTP header looks alongside a GET request for important HTML, using HTTP/3 style notation:</p>
            <pre><code>HEADERS:
    :method = GET
    :scheme = https
    :authority = example.com
    :path = /index.html
    priority = u=0,i</code></pre>
            <p>The PRIORITY_UPDATE frame only carries the serialized Extensible Priority value:</p>
            <pre><code>PRIORITY_UPDATE:
    u=0,i</code></pre>
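<p>To make the serialization rules concrete, here is a minimal Python sketch of reading a priority value. This is purely illustrative: it handles the simple dictionary shapes used in this post ("u=0,i", "u=1", "i=?1", a bare "i"), not the full RFC 8941 grammar:</p>

```python
# Minimal reader for Extensible Priorities values (RFC 9218).
# Not a full RFC 8941 Structured Fields parser - it only covers the
# simple shapes shown in this post.

def parse_priority(value: str) -> tuple[int, bool]:
    """Return (urgency, incremental), applying the RFC 9218 defaults."""
    urgency, incremental = 3, False  # defaults used when a parameter is omitted
    for member in filter(None, (m.strip() for m in value.split(","))):
        key, _, raw = member.partition("=")
        if key == "u":
            urgency = int(raw)
        elif key == "i":
            # A bare "i" means boolean true; "?1"/"?0" are explicit booleans.
            incremental = raw in ("", "?1")
    return urgency, incremental

print(parse_priority("u=0,i"))   # (0, True)  - streamable HTML
print(parse_priority("u=1"))     # (1, False) - render-blocking CSS
print(parse_priority("i=?1"))    # (3, True)  - default-urgency image
```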
            <p>Structured Fields has some other neat tricks. If you want to indicate the use of a default value, then that can be done via omission. Recall that the urgency default is 3, and the incremental default is false. A client could send "u=1" alongside our important CSS request (urgency=1, incremental=false). For our lower priority image it could send just "i=?1" (urgency=3, incremental=true). There's even another trick, where boolean true dictionary parameters are sent as just "i". You should expect all of these formats to be used in practice, so it pays to be mindful about their meaning.</p><p>Extensible Priority servers need to decide how best to use the available connection bandwidth to schedule the response data bytes. When servers receive client priority signals, they get one form of input into a decision-making process. RFC 9218 provides a set of <a href="https://www.rfc-editor.org/rfc/rfc9218.html#name-server-scheduling">scheduling recommendations</a> that are pretty good at meeting a broad set of needs. These can be distilled down to some golden rules.</p><p>For starters, the order of requests is crucial. Clients are very careful about asking for things at the moment they want them. Serving things in request order is good. In HTTP/3, because there is no strict ordering of stream arrival, servers can use stream IDs to determine this. Assuming the order of the requests is correct, the next most important thing is urgency ordering. Serving according to urgency values is good.</p><p>Be wary of non-incremental requests, as they mean the client needs the object in full before it can be used at all. An incremental request means the client can process things as and when they arrive.</p><p>With these rules in mind, the scheduling then becomes broadly: for each urgency level, serve non-incremental requests in whole serially, then serve incremental requests in round-robin fashion in parallel. 
What this achieves is dedicated bandwidth for very important things, and shared bandwidth for less important things that can be processed or rendered progressively.</p><p>Let's look at some examples to visualize the different ways the scheduler can work. These are generated by using <a href="https://github.com/cloudflare/quiche">quiche's</a> <a href="https://datatracker.ietf.org/doc/draft-ietf-quic-qlog-main-schema/">qlog</a> support and running it via the <a href="https://qvis.quictools.info/">qvis</a> analysis tool. These diagrams are similar to a waterfall chart; the y-dimension represents stream IDs (0 at the top, increasing as we move down) and the x-dimension shows reception of stream data.</p><p>Example 1: all streams have the same urgency and are non-incremental so get served in serial order of stream ID.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2mzTrM4iI9h7uEJXa2TOuT/40d1ba7c1d13949107d68a2e1fb5398f/u-same.png" />
            
            </figure><p>Example 2: the streams have the same urgency and are incremental so get served in round-robin fashion.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1V44aLuoHvqxR4gpETKJ2x/fdb8ddb148353333b4aaceff11858ff6/u-same-i.png" />
            
            </figure><p>Example 3: the streams have all different urgency, with later streams being more important than earlier streams. The data is received serially but in a reverse order compared to example 1.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2koT4ij8bzGMc06OVNIFGZ/26572e5bffa63b588860050e005fe64e/u-reversed.png" />
            
            </figure><p>Beyond the Extensible Priority signals, a server might consider other things when scheduling, such as file size, content encoding, or how the application and content origins are configured. This was true for HTTP/2 priorities, but Extensible Priorities introduces a neat new trick: a priority signal can also be sent as a response header to override the client signal.</p><p>This works especially well in a proxying scenario where your HTTP/3-terminating proxy sits in front of some backend such as Workers. The proxy can pass the request headers through to the backend, which can inspect them and, if it wants something different, return response headers to the proxy. This allows powerful tuning possibilities, and because we operate on a semantic per-request basis (rather than HTTP/2 priorities' dependency basis) we avoid the complications and dangers of that model. Proxying isn't the only use case. Often, one form of "API" to your local server is setting response headers, e.g. via configuration. Leveraging that approach means we don't have to invent new APIs.</p><p>Let's consider an example where server overrides are useful. Imagine we have a webpage with multiple images that are referenced via &lt;img&gt; tags near the top of the HTML. The browser will process these quite early in the page load and want to issue requests. At this point, <b>it might not know enough</b> about the page structure to determine if an image is in the viewport or outside the viewport. It can guess, but that might turn out to be wrong if the page is laid out a certain way. Guessing wrong means that something is misprioritized and might be taking bandwidth away from something that is more important. 
While it is possible to reprioritize things mid-flight using the PRIORITY_UPDATE frame, this action is "laggy" and by the time the server realizes things, it might be too late to make much difference.</p><p>Fear not, the web developer who built the page knows exactly how it is supposed to be laid out and rendered. They can overcome client uncertainty by overriding the Extensible Priority when they serve the response. For instance, if a client guesses wrong and requests the LCP image at a low priority in a shared bandwidth bucket, the image will load slower and web performance metrics will be adversely affected. Here's how it might look and how we can fix it:</p>
            <pre><code>Request HEADERS:
    :method = GET
    :scheme = https
    :authority = example.com
    :path = /lcp-image.jpg
    priority = u=3,i</code></pre>
            
            <pre><code>Response HEADERS:
:status = 200
content-length: 10000
content-type: image/jpeg
priority = u=2</code></pre>
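<p>To see how the override combines with the client's signal, here is an illustrative Python sketch (not Cloudflare's implementation): parameters present in the response header replace the client's, while omitted parameters keep the client's value, which is how the "u=2" response above still leaves the image incremental:</p>

```python
# Sketch of combining a client priority signal with a server override.
# Parameters in the response header win; omitted ones keep the client value.

DEFAULTS = {"u": 3, "i": False}

def to_params(value: str) -> dict:
    """Parse the simple dictionary shapes used in this post."""
    params = {}
    for member in filter(None, (m.strip() for m in value.split(","))):
        key, _, raw = member.partition("=")
        if key == "u":
            params["u"] = int(raw)
        elif key == "i":
            params["i"] = raw in ("", "?1")  # bare "i" means true
    return params

def effective_priority(request_value: str, response_value: str = "") -> dict:
    merged = dict(DEFAULTS)
    merged.update(to_params(request_value))   # client signal
    merged.update(to_params(response_value))  # server override wins
    return merged

# Request "u=3,i" overridden by response "u=2": urgency changes, incremental stays.
print(effective_priority("u=3,i", "u=2"))  # {'u': 2, 'i': True}
```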
            <p>Priority response headers are one tool to tweak client behavior and they are complementary to other web performance techniques. Methods like efficiently ordering elements in HTML, using attributes like "async" or "defer", augmenting HTML links with Link headers, or using more descriptive link relationships like “<a href="https://html.spec.whatwg.org/multipage/links.html#link-type-preload">preload</a>” all help to improve a browser's understanding of the resources comprising a page. A website that optimizes these things gives the browser a better chance of making the best choices when prioritizing requests.</p><p>More recently, a new attribute called “<a href="https://web.dev/fetch-priority/">fetchpriority</a>” has emerged that allows developers to tune some of the browser behavior, by boosting or dropping the priority of an element relative to other elements of the same type. The attribute can help the browser do two important things for Extensible Priorities: first, the browser might send the request earlier or later, helping to satisfy our first golden rule, ordering. Second, the browser might pick a different urgency value, helping to satisfy the second rule. However, "fetchpriority" is a nudge mechanism and it doesn't allow for directly setting a desired priority value. The nudge can be a bit opaque. Sometimes the circumstances benefit greatly from knowing plainly what the values are and what the server will do, and that's where the response header can help.</p>
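<p>Putting the golden rules together, the recommended scheduling can be modeled in a few lines. This is a toy sketch (not our production scheduler): serve lower urgency values first; within an urgency level, send non-incremental streams whole in stream-ID order, then round-robin the incremental ones:</p>

```python
# Toy model of the RFC 9218 scheduling recommendation described earlier.
# Stream sizes are measured in arbitrary "chunks".
from collections import deque

def schedule(streams):
    """streams: list of (stream_id, urgency, incremental, chunks).
    Returns the order in which chunks are sent, as a list of stream ids."""
    sent = []
    for urgency in sorted({s[1] for s in streams}):
        level = sorted(s for s in streams if s[1] == urgency)  # stream-ID order
        # Non-incremental: dedicated bandwidth, each served in full.
        for sid, _, inc, chunks in level:
            if not inc:
                sent.extend([sid] * chunks)
        # Incremental: share bandwidth round-robin, one chunk at a time.
        rr = deque((sid, chunks) for sid, _, inc, chunks in level if inc)
        while rr:
            sid, left = rr.popleft()
            sent.append(sid)
            if left > 1:
                rr.append((sid, left - 1))
    return sent

# Two u=1 non-incremental streams, then two u=3 incremental images:
print(schedule([(0, 1, False, 2), (4, 1, False, 2),
                (8, 3, True, 2), (12, 3, True, 2)]))
# [0, 0, 4, 4, 8, 12, 8, 12]
```

Note how the two non-incremental streams each get the link to themselves until complete, while the two incremental image streams interleave.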
    <div>
      <h3>Conclusions</h3>
      <a href="#conclusions">
        
      </a>
    </div>
    <p>We’re excited about bringing this new standard into the world. Working with standards bodies has always been an amazing partnership and we’re very pleased with the results. We’ve seen great results with HTTP/3 priorities, reducing Largest Contentful Paint by up to 37% in our tests. We’ll be rolling this feature out over the next few weeks, alongside the enhanced HTTP/2 prioritization that’s already available today.</p> ]]></content:encoded>
            <category><![CDATA[Speed Week]]></category>
            <category><![CDATA[HTTP3]]></category>
            <category><![CDATA[QUIC]]></category>
            <guid isPermaLink="false">3sxwiYeGEwXXvE9ltToeUB</guid>
            <dc:creator>Lucas Pardue</dc:creator>
            <dc:creator>Achiel van der Mandele</dc:creator>
        </item>
        <item>
            <title><![CDATA[Argo Smart Routing for UDP: speeding up gaming, real-time communications and more]]></title>
            <link>https://blog.cloudflare.com/turbo-charge-gaming-and-streaming-with-argo-for-udp/</link>
            <pubDate>Tue, 20 Jun 2023 13:00:40 GMT</pubDate>
            <description><![CDATA[ Today, Cloudflare is super excited to announce that we’re bringing traffic acceleration to customers’ UDP traffic. Now, you can improve the latency of UDP-based applications like video games, voice calls, and video meetings by up to 17%. ]]></description>
            <content:encoded><![CDATA[ 
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/64tixskqgONiSTACdvMbMX/3502932df801cd9691f432892495f379/image1-14.png" />
            
            </figure><p>Today, Cloudflare is super excited to announce that we’re bringing traffic acceleration to customers’ UDP traffic. Now, you can improve the latency of UDP-based applications like video games, voice calls, and video meetings by up to 17%. Combining the power of Argo Smart Routing (our traffic acceleration product) with UDP gives you the ability to supercharge your UDP-based traffic.</p>
    <div>
      <h3>When applications use TCP vs. UDP</h3>
      <a href="#when-applications-use-tcp-vs-udp">
        
      </a>
    </div>
    <p>Typically when people talk about the Internet, they think of websites they visit in their browsers, or apps that allow them to order food. This type of traffic is sent across the Internet via <a href="https://www.cloudflare.com/learning/ddos/glossary/hypertext-transfer-protocol-http/">HTTP</a> which is built on top of the <a href="https://www.cloudflare.com/learning/ddos/glossary/tcp-ip/">Transmission Control Protocol</a> (TCP). However, there’s a lot more to the Internet than just browsing websites and using apps. Gaming, <a href="https://www.cloudflare.com/developer-platform/solutions/live-streaming/">live video</a>, or tunneling traffic to different networks via a VPN are all common applications that don’t use HTTP or TCP. These popular applications leverage the <a href="https://www.cloudflare.com/learning/ddos/glossary/user-datagram-protocol-udp/">User Datagram Protocol</a> (or UDP for short). To understand why these applications use UDP instead of TCP, we’ll need to dig into how these different applications work.</p><p>When you load a web page, you generally want to see the <i>entire</i> web page; the website would be confusing if parts of it are missing. For this reason, HTTP uses TCP as a method of transferring website data. TCP ensures that if a packet ever gets lost as it crosses the Internet, that packet will be resent. Having a reliable protocol like TCP is generally a good idea when 100% of the information sent needs to be loaded. It’s worth noting that later HTTP versions like <a href="https://www.cloudflare.com/learning/performance/what-is-http3/">HTTP/3</a> actually deviated from TCP as a transmission protocol, but they still ensure packet delivery by handling packet retransmission using the <a href="/the-road-to-quic/">QUIC protocol</a>.</p><p>There are other applications that prioritize quickly sending real time data and are less concerned about perfectly delivering 100% of the data. 
Let’s explore Real-Time Communications (RTC) like video meetings as an example. If two people are streaming video live, all they care about is what is happening <i>now</i>. If a few packets are lost during the initial transmission, retransmission is usually too slow to render the lost packet data in the current video frame. TCP doesn’t really make sense in this scenario.</p><p>Instead, RTC protocols are built on top of UDP. TCP is like a formal back and forth conversation where every sentence matters. UDP is more like listening to your friend's stream of consciousness: you don’t care about every single bit as long as you get the gist of it. UDP transfers packet data with speed and efficiency without guaranteeing the delivery of those packets. This is perfect for applications like RTC where reducing latency is more important than occasionally losing a packet here or there. The same applies to gaming traffic; you generally want the most up-to-date information, and you don’t really care about retransmitting lost packets.</p><p>Gaming and RTC applications <i>really</i> care about <a href="https://www.cloudflare.com/learning/performance/glossary/what-is-latency/">latency</a>. Latency is the length of time it takes a packet to be sent to a server plus the length of time to receive a response from the server (called <a href="https://www.cloudflare.com/learning/cdn/glossary/round-trip-time-rtt/">round-trip time or RTT</a>). In the case of video games, the higher the latency, the longer it will take for you to see other players move and the less time you’ll have to react to the game. With enough latency, games become unplayable: if the players on your screen are constantly blipping around it’s near impossible to interact with them. In RTC applications like video meetings, you’ll experience a delay between yourself and your counterpart. 
You may find yourselves accidentally talking over each other, which isn’t a great experience.</p><p>Companies that host gaming or RTC infrastructure often try to reduce latency by spinning up servers that are geographically closer to their users. However, it’s common to have two users trying to have a video call between distant locations like Amsterdam and Los Angeles. No matter where you install your servers, that’s still a long distance for that traffic to travel. The longer the path, the higher the chances are that you’re going to run into congestion along the way. Congestion is just like a traffic jam on a highway, but for networks. Sometimes certain paths get overloaded with traffic. This causes delays and packets to get dropped. This is where Argo Smart Routing comes in.</p>
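<p>UDP's fire-and-forget behavior is easy to see with a plain datagram socket. Here's a minimal Python sketch (loopback only; the payload name is made up): the sender performs no handshake and gets no acknowledgment, which is exactly why it's fast and why delivery isn't guaranteed:</p>

```python
# Minimal illustration of UDP's connectionless, best-effort model.
# Over loopback the datagram will arrive; across the real Internet it might not.
import socket

receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))      # let the OS pick a free port
addr = receiver.getsockname()

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"frame-001", addr)    # no connection setup, no retransmission

data, _ = receiver.recvfrom(2048)
print(data)                          # b'frame-001'
sender.close()
receiver.close()
```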
    <div>
      <h3>Argo Smart Routing</h3>
      <a href="#argo-smart-routing">
        
      </a>
    </div>
    <p>Cloudflare customers that want the best cross-Internet application performance rely on Argo Smart Routing’s traffic acceleration to reduce latency. Argo Smart Routing is like the GPS of the Internet. It uses real-time global network performance measurements to accelerate traffic, actively route around Internet congestion, and increase your traffic’s stability by reducing packet loss and jitter.</p><p>Argo Smart Routing was launched in <a href="/argo/">May 2017</a>, and its first iteration focused on reducing website traffic latency. Since then, we’ve <a href="/argo-v2/">improved Argo Smart Routing</a> and also <a href="/argo-spectrum/">launched Argo Smart Routing for Spectrum TCP traffic</a>, which reduces latency for any TCP-based protocol. Today, we’re excited to bring the same Argo Smart Routing technology to customers’ UDP traffic, which will reduce latency, packet loss, and jitter in gaming and live audio/video applications.</p><p>Argo Smart Routing accelerates Internet traffic by sending millions of synthetic probes from every Cloudflare data center to the origin of every Cloudflare customer. These probes measure the latency of all possible routes between Cloudflare’s data centers and a customer’s origin. We then combine that with probes running between Cloudflare’s data centers to calculate possible routes. When an Internet user makes a request to an origin, Cloudflare calculates the results of our real-time global latency measurements, examines Internet congestion data, and calculates the optimal route for customers’ traffic. To enable Argo Smart Routing for UDP traffic, Cloudflare extended the route computations typically used for HTTP and TCP traffic and applied them to UDP traffic.</p><p>We knew that Argo Smart Routing offered impressive benefits for HTTP traffic, reducing time to first byte by up to 30% on average for customers. 
But UDP can be treated differently by networks, so we were curious to see if we would see a similar reduction in round-trip time for UDP. To validate, we ran a set of tests. We set up an origin in Iowa, USA and had a client connect to it from Tokyo, Japan. Compared to a regular Spectrum setup, we saw a decrease in round-trip time of 17.3% on average. For the standard setup, Spectrum was able to proxy packets to Iowa in 173.3 milliseconds on average. Comparatively, turning on Argo Smart Routing reduced the average round-trip time down to 143.3 milliseconds. The distance between those two cities is 6,074 miles (9,776 kilometers), meaning we've effectively moved the two closer to each other by over a thousand miles (or 1,609 km) just by turning on this feature.</p><p>We're incredibly excited about Argo Smart Routing for UDP and what our customers will use it for. If you're in gaming or real-time communications, or even have a different use case that you think would benefit from speeding up UDP traffic, please contact your account team today. We are currently in closed beta but are excited about accepting applications.</p> ]]></content:encoded>
            <category><![CDATA[Speed Week]]></category>
            <category><![CDATA[Argo Smart Routing]]></category>
            <category><![CDATA[Spectrum]]></category>
            <category><![CDATA[UDP]]></category>
            <category><![CDATA[Performance]]></category>
            <guid isPermaLink="false">5qKIhJCi7nIZIQudfOBtgh</guid>
            <dc:creator>Achiel van der Mandele</dc:creator>
            <dc:creator>Chris Draper</dc:creator>
        </item>
        <item>
            <title><![CDATA[One-click ISO 27001 certified deployment of Regional Services in the EU]]></title>
            <link>https://blog.cloudflare.com/one-click-iso-27001-deployment/</link>
            <pubDate>Sat, 18 Mar 2023 15:00:00 GMT</pubDate>
            <description><![CDATA[ Cloudflare announces one-click ISO certified region, a super easy way for customers to limit where traffic is serviced to ISO 27001 certified data centers inside the European Union ]]></description>
            <content:encoded><![CDATA[ <p></p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6aVTJdGy7JkjPxS0Z827zC/93d84cd6fc8321a8ecdb60b48f476041/Regional-Services-one-click-limit-traffic-to-ISO-27001-certified-colos-only.png" />
            
            </figure><p>Today, we’re very happy to announce the general availability of a new region for Regional Services that allows you to limit your traffic to only <a href="https://www.iso.org/isoiec-27001-information-security.html">ISO 27001</a> certified data centers inside the EU. This helps customers that have very strict requirements surrounding which data centers are allowed to decrypt and service traffic. Enabling this feature is a one-click operation right on the Cloudflare dashboard.</p>
    <div>
      <h3>Regional Services - a recap</h3>
      <a href="#regional-services-a-recap">
        
      </a>
    </div>
    <p>In 2020, we saw an increase in prospects asking about data localization. Specifically, increased regulatory pressure limited them from using vendors that operate at global scale. We launched <a href="/introducing-regional-services/">Regional Services</a>, a new way for customers to use the Cloudflare network. With Regional Services, we put customers back in control over which data centers are used to service traffic. Regional Services operates by limiting exactly which data centers are used to decrypt and service HTTPS traffic. For example, a customer may want to use only data centers inside the European Union to service traffic. In that case, we leverage our global network for DDoS protection but only decrypt traffic and apply Layer 7 products inside data centers located inside the European Union.</p><p>We later followed up with the <a href="https://www.cloudflare.com/data-localization/">Data Localization Suite</a> and additional regions: <a href="/regional-services-comes-to-apac/">India, Japan, and Australia</a>.</p><p>With Regional Services, customers get the best of both worlds: we empower them to use our global network for volumetric DDoS protection whilst limiting where traffic is serviced. We do that by accepting the raw TCP connection at the closest data center but forwarding it on to a data center in-region for decryption. That means that only machines of the customer’s choosing actually see the raw HTTP request, which could contain sensitive data such as a customer’s bank account or medical information.</p>
    <div>
      <h3>A new region and a new UI</h3>
      <a href="#a-new-region-and-a-new-ui">
        
      </a>
    </div>
    <p>Traditionally, we’ve seen requests for data localization largely center around countries or geographic areas. Many types of regulations require companies to make promises about working only with vendors that are capable of restricting where their traffic is serviced geographically. Organizations can have many reasons for being limited in their choices, but they generally fall into two buckets: compliance and contractual commitments.</p><p>More recently, we are seeing more and more companies asking about security requirements. An often-asked question about security in IT is: how do you ensure that something is safe? For instance, for a data center you might be wondering how physical access is managed, or how often security policies are reviewed and updated. This is where certifications come in. A common certification in IT is the <a href="https://en.wikipedia.org/wiki/ISO/IEC_27001">ISO 27001 certification</a>.</p><p>Per <a href="https://www.iso.org/isoiec-27001-information-security.html">ISO.org</a>:</p><blockquote><p><i>“ISO/IEC 27001 is the world’s best-known standard for information security management systems (ISMS) and their requirements. Additional best practice in data protection and cyber resilience are covered by more than a dozen standards in the ISO/IEC 27000 family. Together, they enable organizations of all sectors and sizes to manage the security of assets such as financial information, intellectual property, employee data and information entrusted by third parties.”</i></p></blockquote><p>In short, ISO 27001 is a certification that a data center can achieve to demonstrate that it maintains a set of security standards to keep the data center secure. With the new Regional Services region, HTTPS traffic will only be decrypted in data centers that hold the ISO 27001 certification. 
Products such as WAF, Bot Management and Workers will only be applied in those data centers.</p><p>The other update we’re excited to announce is a brand-new user interface for configuring the Data Localization Suite. The previous UI was limited in that customers had to preconfigure a region for an entire zone: you couldn’t mix and match regions. The new UI allows you to do just that: each individual hostname can be configured for a different region, directly on the DNS tab:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/60Ech3V5DIBzcXCKC79TU3/2e16686487cbbad51c77a3f896d9be87/pasted-image-0--5--3.png" />
            
            </figure><p>Configuring a region for a particular hostname is now just a single click away. Changes take effect within seconds, making this the easiest way to configure data localization yet. For customers using the Metadata Boundary, we’ve also launched a self-serve UI that allows you to configure where logs flow:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/62faVgbaj8GXkZtHrCX5xR/717d4b892a5f1f78c4b8c503a549c65c/image-13.png" />
            
            </figure><p>We’re excited about these new updates that give customers more flexibility in choosing which of Cloudflare’s data centers to use as well as making it easier than ever to configure them. The new region and existing regions are now a one-click configuration option right from the dashboard. As always, we love getting feedback, especially on what new regions you’d like to see us add in the future. In the meantime, if you’re interested in using the Data Localization Suite, please reach out to your account team.</p> ]]></content:encoded>
            <category><![CDATA[Security Week]]></category>
            <category><![CDATA[Data Localization]]></category>
            <category><![CDATA[Compliance]]></category>
            <category><![CDATA[Certification]]></category>
            <category><![CDATA[Regional Services]]></category>
            <category><![CDATA[Product News]]></category>
            <guid isPermaLink="false">4eu3YHNrghYyABVfdr9okM</guid>
            <dc:creator>Achiel van der Mandele</dc:creator>
        </item>
        <item>
            <title><![CDATA[Regional Services comes to India, Japan and Australia]]></title>
            <link>https://blog.cloudflare.com/regional-services-comes-to-apac/</link>
            <pubDate>Thu, 22 Sep 2022 01:00:00 GMT</pubDate>
            <description><![CDATA[ With Regional Services, we are thrilled to expand our coverage to these countries in Asia Pacific, allowing more customers to use Cloudflare by giving them precise control over which parts of the Cloudflare network are able to perform advanced functions ]]></description>
            <content:encoded><![CDATA[ <p></p><p>We <a href="/introducing-the-cloudflare-data-localization-suite/">announced</a> the Data Localization Suite in 2020, when requirements for data localization were already important in the European Union. Since then, we’ve witnessed a growing trend toward localization globally. We are thrilled to expand our coverage to India, Japan and Australia, allowing more customers to use Cloudflare by giving them precise control over which parts of the Cloudflare network are able to perform advanced functions like <a href="https://www.cloudflare.com/learning/ddos/glossary/web-application-firewall-waf/">WAF</a> or Bot Management that require inspecting traffic.</p>
    <div>
      <h3>Regional Services, a recap</h3>
      <a href="#regional-services-a-recap">
        
      </a>
    </div>
    <p>In 2020, we introduced <a href="/introducing-regional-services/">Regional Services</a>, a new way for customers to use Cloudflare. With Regional Services, customers can limit which data centers actually decrypt and inspect traffic. This helps because certain customers are affected by regulations on where they are allowed to service traffic. Others have agreements with <i>their</i> customers as part of contracts specifying exactly where traffic is allowed to be decrypted and inspected.</p><p>As one German bank told us: "We can look at the rules and regulations and debate them all we want. As long as you promise me that no machine outside the European Union will see a decrypted bank account number belonging to one of my customers, we're happy to use Cloudflare in any capacity".</p><p>Under normal operation, Cloudflare uses its entire network to perform all functions. This is what most customers want: leverage all of Cloudflare’s data centers so that you always service traffic to eyeballs as quickly as possible. Increasingly, we are seeing customers that wish to strictly limit which data centers service their traffic. With <a href="/introducing-regional-services/">Regional Services</a>, customers can use Cloudflare's network but limit which data centers perform the actual decryption. Products that require decryption, such as WAF, Bot Management and Workers, will only be applied within those data centers.</p>
    <div>
      <h3>How does Regional Services work?</h3>
      <a href="#how-does-regional-services-work">
        
      </a>
    </div>
    <p>You might be asking yourself: how does that even work? Doesn't Cloudflare operate an anycast network? Cloudflare was built from the bottom up to leverage anycast. All of Cloudflare's data centers advertise the same IP addresses through the Border Gateway Protocol. Whichever data center is closest to you from a network point of view is the one that you'll hit.</p><p>This is great for two reasons. The first is that the closer the data center is to you, the faster the reply. The second great benefit is that this comes in very handy when dealing with large DDoS attacks. Volumetric DDoS attacks throw a lot of bogus traffic at you, which overwhelms network capacity. Cloudflare's anycast network is great at taking on these attacks because they get distributed across the entire network.</p><p>Anycast doesn't respect regional borders; it doesn't even know about them. That is why, out of the box, Cloudflare can't guarantee that traffic inside a country will also be serviced there. Although typically you’ll hit a data center inside your country, it’s very possible that your Internet Service Provider will send traffic to a network that might route it to a different country.</p><p>Regional Services solves that: when turned on, each data center becomes aware of which region it is operating in. If a user from a country hits a data center that doesn't match the region that the customer has selected, we simply forward the raw TCP stream in encrypted form. Once it reaches a data center inside the right region, we decrypt and apply all Layer 7 products. This covers products such as CDN, WAF, Bot Management and Workers.</p><p>Let's take an example. A user is in Kerala, India, and their Internet Service Provider has determined that the fastest path to one of our data centers is to Colombo, Sri Lanka. In this example, the customer may have selected India as the sole region within which traffic should be serviced. The Colombo data center sees that this traffic is meant for the India region. It does not decrypt, but instead forwards it to the closest data center inside India. There, we decrypt, and products such as WAF and Workers are applied as if the traffic had hit the data center directly.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7gRJahpGc8QYhz5fZlXsIl/5d5adca2fba56b006c98459cc304e7b9/image2-27.png" />
            
            </figure>
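<p>The region-aware decision described above can be sketched in a few lines. This is a hedged illustration, not Cloudflare's implementation; the region codes and action names are hypothetical.</p>

```python
# Hypothetical sketch of the Regional Services forwarding decision.
# Region codes and action names are made up for illustration.

def handle_connection(datacenter_region: str, customer_region: str) -> str:
    """Decide what a data center may do with an incoming TLS connection."""
    if datacenter_region == customer_region:
        # In-region: terminate TLS and apply Layer 7 products
        # (CDN, WAF, Bot Management, Workers).
        return "decrypt-and-inspect"
    # Out-of-region (e.g. Colombo for an India-only customer): pass the
    # still-encrypted TCP stream through to an in-region data center.
    return "forward-encrypted"
```

<p>In the Kerala example, the Colombo data center (outside the customer's chosen India region) would take the "forward-encrypted" path, while the data center inside India it forwards to takes the "decrypt-and-inspect" path.</p>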
    <div>
      <h3>Bringing Regional Services to Asia</h3>
      <a href="#bringing-regional-services-to-asia">
        
      </a>
    </div>
    <p>Historically, we’ve seen most interest in Regional Services in geographic regions such as the European Union and the Americas. Over the past few years, however, we have seen a lot of interest from Asia Pacific. Based on customer feedback and analysis of regulations, we quickly concluded there were three key regions we needed to support: India, Japan and Australia. We’re proud to say that all three are now generally available for use today.</p><p>But we’re not done yet! We realize there are many more customers that require localization to their particular region. We’re looking to add more regions in the near future and are working hard to make it easier to support them. If you have a region in mind, we’d love to hear it!</p><p>India, Japan and Australia are all live today! If you’re interested in using the <a href="https://www.cloudflare.com/data-localization/">Data Localization Suite</a>, contact your account team!</p> ]]></content:encoded>
            <category><![CDATA[GA Week]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[General Availability]]></category>
            <category><![CDATA[Regional Services]]></category>
            <category><![CDATA[APJC]]></category>
            <guid isPermaLink="false">3jabawaAHr0fzOv3vBeQHx</guid>
            <dc:creator>Achiel van der Mandele</dc:creator>
        </item>
        <item>
            <title><![CDATA[Magic Firewall gets Smarter]]></title>
            <link>https://blog.cloudflare.com/magic-firewall-gets-smarter/</link>
            <pubDate>Thu, 09 Dec 2021 13:59:09 GMT</pubDate>
            <description><![CDATA[ To improve security, we’re adding threat intel integration and geo-blocking. For visibility, we’re adding packet captures at the edge, a way to see packets arrive at the edge in near real-time. ]]></description>
            <content:encoded><![CDATA[ 
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/691K7OvLHJqGuGYH8hy5i4/e56bd77eb57ede323a1f728d217e04b9/pasted-image-0.png" />
            
            </figure><p>Today, we're very excited to announce a set of updates to Magic Firewall, adding security and visibility features that are key in modern <a href="https://www.cloudflare.com/learning/cloud/what-is-a-cloud-firewall/">cloud firewalls</a>. To improve security, we’re adding threat intel integration and geo-blocking. For visibility, we’re adding packet captures at the edge, a way to see packets arrive at the edge in near real-time.</p><p>Magic Firewall is our network-level firewall which is delivered through Cloudflare to secure your enterprise. Magic Firewall covers your remote users, branch offices, data centers and cloud infrastructure. Best of all, it’s deeply integrated with Cloudflare, giving you a one-stop overview of everything that’s happening on your network.</p>
    <div>
      <h2>A brief history of firewalls</h2>
      <a href="#a-brief-history-of-firewalls">
        
      </a>
    </div>
    <p>We talked a lot about firewalls on <a href="/welcome-to-cio-week/">Monday</a>, including how our firewall-as-a-service solution is very different from traditional firewalls and helps security teams that want sophisticated inspections at the <i>Application Layer.</i> When we talk about the Application Layer, we’re referring to OSI Layer 7. This means we’re applying security features using semantics of the protocol. The most common example is HTTP, the protocol you’re using to visit this website. We have Gateway and our WAF to protect inbound and outbound HTTP requests, but what about Layer 3 and Layer 4 capabilities? Layer 3 and 4 refer to the <i>packet</i> and <i>connection</i> levels. These security features aren’t applied to HTTP requests, but instead to IP packets and (for example) TCP connections. A lot of folks in the CIO organization want to add extra layers of security and visibility without resorting to decryption at Layer 7. We’re excited to talk to you about two sets of new features that will make your lives easier: geo-blocking and threat intel integration to improve security posture, and packet captures to get you better visibility.</p>
    <div>
      <h2>Threat Intel and IP Lists</h2>
      <a href="#threat-intel-and-ip-lists">
        
      </a>
    </div>
    <p>Magic Firewall is great if you know exactly what you want to allow and block. You can put in rules that match exactly on IP source and destination, as well as bitslicing to verify the contents of various packets. However, there are many situations in which you don’t exactly know who the bad and good actors are: is this IP address that’s trying to access my network a perfectly fine consumer, or is it part of a botnet that’s trying to attack my network?</p><p>The same goes the other way: whenever someone inside your network is trying to create a connection to the Internet, how do you know whether it’s an obscure blog or a malware website? Clearly, you don’t want to play whack-a-mole and try to keep track of every malicious actor on the Internet by yourself. For most security teams, it’s nothing more than a waste of time! You’d much rather rely on a company that makes it their business to focus on this.</p><p>Today, we're announcing Magic Firewall support for our in-house Threat Intelligence feed. Cloudflare sees approximately 28 million HTTP requests each second and blocks 76 billion cyber threats each day. With almost 20% of the top 10 million Alexa websites on Cloudflare, we see a lot of novel threats pop up every day. We use that data to detect malicious actors on the Internet and turn it into a list of known malicious IPs. And we don’t stop there: we also integrate with a number of third party vendors to augment our coverage.</p><p>To match on any of the threat intel lists, just set up a rule in the UI as normal:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6MWFm9iSVnZM3MWzPaHiln/f51028381a360e570c096e7f4535345d/pasted-image-0-1.png" />
            
            </figure><p>Threat intel feed categories include Malware, Anonymizer and Botnet Command-and-Control centers. The Malware and Botnet lists cover Internet properties that distribute malware and known command-and-control centers. The Anonymizer list contains known forward proxies that allow attackers to hide their IP addresses.</p><p>In addition to the managed lists, you also have the flexibility of creating your own lists, either to add your own known set of malicious IPs or to make management of your known good network endpoints easier. As an example, you may want to create a list of all your own servers. That way, you can easily block traffic to and from them in any rule, without having to replicate the list each time.</p><p>Another particularly gnarly problem that many of our customers deal with is geo restrictions. Many are restricted in where they are allowed to (or want to) accept traffic from. The challenge here is that nothing about an IP address tells you anything about its geolocation. Even worse, IP addresses regularly change hands, moving from one country to another.</p><p>As of today, you can easily block or allow traffic to any country, without the management hassle that comes with maintaining lists yourself. Country lists are kept up to date entirely by Cloudflare; all you need to do is set up a rule matching on the country and we’ll take care of the rest.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3oAiULyz8nJlqTbBBlElWT/10efc6fdca51d18a495a80aeef75149f/pasted-image-0-2.png" />
            
            </figure>
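<p>Conceptually, matching a packet against a managed or custom IP list is a set-membership check over CIDR ranges. Here is a hedged sketch using Python's standard ipaddress module; the list names and addresses below are made up for illustration (the real managed feeds are maintained on Cloudflare's side, not in your configuration).</p>

```python
import ipaddress

# Hypothetical stand-ins for managed threat intel feeds and a custom list.
# These use documentation-only address ranges.
IP_LISTS = {
    "malware": [ipaddress.ip_network("203.0.113.0/24")],
    "anonymizer": [ipaddress.ip_network("198.51.100.7/32")],
}

def matches_list(ip: str, list_name: str) -> bool:
    """Return True if the IP falls inside any CIDR range of the named list."""
    addr = ipaddress.ip_address(ip)
    return any(addr in network for network in IP_LISTS[list_name])
```

<p>A rule blocking anonymizers would then amount to: if the packet's source IP matches the "anonymizer" list, drop the packet.</p>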
    <div>
      <h2>Packet captures at the edge</h2>
      <a href="#packet-captures-at-the-edge">
        
      </a>
    </div>
    <p>Finally, we’re releasing a very powerful feature: packet captures at the edge. A packet capture is a <a href="https://tools.ietf.org/id/draft-gharris-opsawg-pcap-00.html">pcap file</a> that contains all packets that were seen by a particular network box (usually a firewall or router) during a specific time frame. Packet captures are useful if you want to debug your network: why can’t my users connect to a particular website? Or you may want to get better visibility into a DDoS attack, so you can put up better firewall rules.</p><p>Traditionally, you’d log into your router or firewall and start up something like <a href="https://www.tcpdump.org/">tcpdump</a>. You’d set up a filter to only match on certain packets (packet capture files can quickly get very big) and grab the file. But what happens if you want coverage across your entire network: on-premises, offices and all your cloud environments? You’ll likely have different vendors for each of those locations and have to figure out how to get packet captures from all of them. Even worse, some of them might not even support grabbing packet captures.</p><p>With Magic Firewall, grabbing packet captures across your entire network becomes simple: because you run a single network-firewall-as-a-service, you can grab packets across your entire network in one go. This gets you instant visibility into exactly where a particular IP is interacting with your network, regardless of physical or virtual location. You have the option of grabbing all network traffic (warning, it might be a lot!) or setting a filter to only grab a subset. Filters follow the same Wireshark syntax that Magic Firewall rules use:</p><p><code>(ip.src in $cf.anonymizer)</code></p><p>We think these are great additions to Magic Firewall, giving you powerful primitives to police traffic and tooling to gain visibility into what’s actually going on in your network. Threat Intel, geo-blocking and IP lists are all available today; reach out to your account team to have them activated. Packet captures will enter early access later in December. Similarly, if you’re interested, please reach out to your account team!</p> ]]></content:encoded>
            <category><![CDATA[CIO Week]]></category>
            <category><![CDATA[Magic Firewall]]></category>
            <category><![CDATA[Magic Transit]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[Threat Intelligence]]></category>
            <guid isPermaLink="false">6VvTRDXKH7JlO4rEMDvKfy</guid>
            <dc:creator>Achiel van der Mandele</dc:creator>
        </item>
        <item>
            <title><![CDATA[Announcing Argo for Spectrum]]></title>
            <link>https://blog.cloudflare.com/argo-spectrum/</link>
            <pubDate>Tue, 23 Nov 2021 13:58:39 GMT</pubDate>
            <description><![CDATA[ Announcing general availability of Argo for Spectrum, a way to turbo-charge any TCP-based application. ]]></description>
            <content:encoded><![CDATA[ <p></p><p>Today we're excited to announce the general availability of Argo for Spectrum, a way to turbo-charge any TCP-based application. With Argo for Spectrum, you can reduce latency and packet loss, and improve connectivity for any TCP application, including common protocols like Minecraft, Remote Desktop Protocol and SFTP.</p>
    <div>
      <h3>The Internet — more than just a browser</h3>
      <a href="#the-internet-more-than-just-a-browser">
        
      </a>
    </div>
    <p>When people think of the Internet, many of us think about using a browser to view websites. Of course, it’s so much more! We often use other ways to connect to each other and to the resources we need for work. For example, you may interact with servers for work using SSH File Transfer Protocol (SFTP), git or Remote Desktop software. At home, you might play a video game on the Internet with friends.</p><p>To help protect these services against DDoS attacks, we launched Spectrum in 2018, extending Cloudflare’s <a href="https://www.cloudflare.com/ddos/">DDoS protection</a> to any TCP- or UDP-based protocol. Customers use it for a wide variety of use cases, including protecting video streaming (RTMP), gaming and internal IT systems. Spectrum also supports common VoIP protocols such as SIP and RTP, which have recently seen an <a href="/attacks-on-voip-providers/">increase in DDoS ransomware attacks</a>. A lot of these applications are also highly sensitive to performance issues. No one likes waiting for a file to upload or dealing with a lagging video game.</p><p>Latency and throughput are the two metrics people generally discuss when talking about network performance. Latency refers to the amount of time a piece of data (a packet) takes to travel between two systems. Throughput refers to the number of bits you can actually send per second. This blog will discuss how the two interact and how we improve them with Argo for Spectrum.</p>
    <div>
      <h3>Argo to the rescue</h3>
      <a href="#argo-to-the-rescue">
        
      </a>
    </div>
    <p>There are a number of factors that cause poor performance between two points on the Internet, including network congestion, the distance between the two points, and packet loss. This is a problem many of our customers have, even on web applications. To help, we launched <a href="/argo/">Argo Smart Routing</a> in 2017, a way to reduce latency (or <i>time to first byte</i>, to be precise) for any HTTP request that goes to an origin.</p><p>That’s great for folks who run websites, but what if you’re working on an application that doesn’t speak HTTP? Up until now, people had limited options for improving performance for these applications. That changes today with the general availability of Argo for Spectrum. Argo for Spectrum offers the same benefits as Argo Smart Routing for any TCP-based protocol.</p><p>Argo for Spectrum takes the same smarts from our network traffic and applies them to Spectrum. At the time of writing, Cloudflare sits in front of approximately 20% of the Alexa top 10 million websites. That means that we see, in near real-time, which networks are congested, which are slow and which are dropping packets. We use that data and take action by provisioning faster routes, which send packets through the Internet faster than normal routing. Argo for Spectrum works the exact same way, using the same intelligence and routing plane but extending it to any TCP-based application.</p>
    <div>
      <h3>Performance</h3>
      <a href="#performance">
        
      </a>
    </div>
    <p>But what does this mean for real application performance? To find out, we ran a set of benchmarks on Catchpoint. Catchpoint is a service that allows you to set up <a href="https://www.cloudflare.com/application-services/solutions/app-performance-monitoring/">performance monitoring</a> from all over the world. Tests are repeated at intervals and aggregate results are reported. We wanted to use a third party such as Catchpoint to get objective results (as opposed to running the tests ourselves).</p><p>For our test case, we used a file server in the Netherlands as our origin. We provisioned various tests on Catchpoint to measure file transfer performance from various places in the world: Rabat, Tokyo, Los Angeles and Lima.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1Dmiv8f30ef7K9FQ6O1Nyi/131f81007fa1c71ecebb4237f1ad759e/image2-28.png" />
            
            </figure><p>Throughput of a 10MB file. Higher is better.</p><p>Depending on location, transfers saw increases of up to 108% (for locations such as Tokyo) and <b>85% on average</b>. Why is it <b>so</b> much faster? The answer is <a href="https://en.wikipedia.org/wiki/Bandwidth-delay_product"><i>bandwidth delay product</i></a>. In layman's terms, bandwidth delay product means that the higher the latency, the lower the throughput. This is because with transmission protocols such as TCP, we need to wait for the other party to acknowledge that they received data before we can send more.</p><p>As an analogy, let’s assume we’re operating a water cleaning facility. We send unprocessed water through a pipe to the facility, but we’re not sure how much capacity the facility has! To test, we send an amount of water through the pipe. Once the water has arrived, the facility will call us up and say, “we can easily handle this amount of water at a time, please send more.” If the pipe is short, the feedback loop is quick: the water will arrive, and we’ll immediately be able to send more without having to wait. If we have a very, very long pipe, we have to stop sending water for a while before we get confirmation that the water has arrived and there’s enough capacity.</p><p>The same happens with TCP: we send an amount of data onto the wire and wait for confirmation that it arrived. If the <i>latency</i> is high, throughput suffers because we’re constantly waiting for confirmation. If latency is low, we can sustain a high throughput. With Spectrum and Argo, we help in two ways: the first is that Spectrum terminates the TCP connection close to the user, meaning that latency for that link is low. The second is that Argo reduces the latency between our edge and the origin. In concert, they create a set of low-latency connections, resulting in a low overall bandwidth delay product between users and origin. The result is a much higher throughput than you would otherwise get.</p><p>Argo for Spectrum supports any TCP-based protocol. This includes commonly used protocols like SFTP, git (over SSH), RDP and SMTP, but also media streaming and gaming protocols such as RTMP and Minecraft. Setting up Argo for Spectrum is easy. When creating a Spectrum application, just hit the “Argo Smart Routing” toggle. Any traffic will automatically be smart-routed.</p>
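<p>The bandwidth delay product effect is easy to put numbers on: sustained TCP throughput is roughly capped at window size divided by round-trip time. A back-of-the-envelope sketch (the window size and latencies below are illustrative assumptions, not measured values):</p>

```python
# Rough TCP throughput ceiling: window_bytes / round_trip_time.
# The 64 KiB window and the RTT values are illustrative assumptions.

def max_throughput_mbps(window_bytes: int, rtt_ms: float) -> float:
    """Approximate TCP throughput cap in megabits per second."""
    return (window_bytes * 8) / (rtt_ms / 1000) / 1e6

# One long 250 ms user-to-origin connection vs. terminating close to the
# user so each leg sees only 25 ms, with a 64 KiB window in both cases:
long_haul = max_throughput_mbps(64 * 1024, 250)  # roughly 2 Mbps
per_hop = max_throughput_mbps(64 * 1024, 25)     # roughly 21 Mbps
```

<p>Splitting one high-latency connection into two low-latency ones raises the throughput ceiling on each leg, which is the effect Spectrum and Argo combine to produce.</p>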
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3m5hR3BEdy6PqTp7jo7XyT/1ce3ff692d52b0fa677e27c79311dcf1/image3-35.png" />
            
            </figure><p>Argo for Spectrum covers much more than just these applications: we support any TCP-based protocol. If you're interested, reach out to your account team today to see what we can do for you.</p> ]]></content:encoded>
            <category><![CDATA[Argo Smart Routing]]></category>
            <category><![CDATA[Spectrum]]></category>
            <category><![CDATA[TCP]]></category>
            <category><![CDATA[Performance]]></category>
            <guid isPermaLink="false">7YylXseoJGsIrnn3GLNzq</guid>
            <dc:creator>Achiel van der Mandele</dc:creator>
        </item>
        <item>
            <title><![CDATA[Real-Time Communications at Scale]]></title>
            <link>https://blog.cloudflare.com/announcing-our-real-time-communications-platform/</link>
            <pubDate>Thu, 30 Sep 2021 12:59:36 GMT</pubDate>
            <description><![CDATA[ We’re making it easier to build and scale real-time communications applications around open technologies, starting with WebRTC Components. ]]></description>
            <content:encoded><![CDATA[ <p></p><p>For every successful technology, there is a moment where its time comes. Something happens, usually external, to catalyze it — shifting it from being a good idea with promise, to a reality that we can’t imagine living without. Perhaps the best recent example was what happened to the cloud as a result of the introduction of the iPhone in 2007. Smartphones created a huge addressable market for small developers; and even big developers found their customer base could explode in a way that they couldn’t handle without access to public cloud infrastructure. Both wanted to be able to focus on building amazing applications, without having to worry about what lay underneath.</p><p>Last year, during the outbreak of COVID-19, a similar moment happened to real time communication. Being able to communicate is the lifeblood of any organization. Before 2020, much of it happened in meeting rooms in offices all around the world. But in March last year — that changed dramatically. Those meeting rooms suddenly were emptied. Fast-forward 18 months, and that massive shift in how we work has persisted.</p><p>While, undoubtedly, many organizations would not have been able to get by without the likes of Slack, Zoom and Teams as real time collaboration tools, we think today’s iteration of communication tools is just the tip of the iceberg. Looking around, it’s hard to escape the feeling there is going to be an explosion in innovation that is about to take place to enable organizations to communicate in a remote, or at least hybrid, world.</p><p>With this in mind, today we’re excited to be introducing Cloudflare’s Real Time Communications platform. This is a new suite of products designed to help you build the next generation of real-time, interactive applications. 
Whether it’s one-to-one video calling, group audio or video-conferencing, the demand for real-time communications only continues to grow.</p><p>Running a reliable and scalable real-time communications platform requires building out a large-scale network. You need to <a href="/250-cities-is-just-the-start/">get your network edge within milliseconds of your users</a> in multiple geographies to make sure everyone can always connect with low latency, low packet loss and low jitter. A <a href="/cloudflare-backbone-internet-fast-lane/">backbone to route around</a> Internet traffic jams. <a href="/designing-edge-servers-with-arm-cpus/">Infrastructure that can efficiently scale</a> to serve thousands of participants at once. And then you need to deploy media servers, write business logic, manage multiple client platforms, and keep it all running smoothly. We think we can help with this.</p><p>Launching today, you will be able to leverage Cloudflare’s global edge network to improve connectivity for any existing WebRTC-based video and audio application, with what we’re calling “WebRTC Components”. This includes scaling to (tens of) thousands of participants, leveraging our <a href="/cloudflare-thwarts-17-2m-rps-ddos-attack-the-largest-ever-reported/">DDoS mitigation</a> to protect your services from attacks, and enforcing <a href="https://developers.cloudflare.com/spectrum/reference/configuration-options#ip-access-rules">IP and ASN-based access policies</a> in just a few clicks.</p>
    <div>
      <h3>How Real Time is “Real Time”?</h3>
      <a href="#how-real-time-is-real-time">
        
      </a>
    </div>
    <p>Real-time typically refers to communication that happens in under 500ms: that is, as fast as packets can traverse the fibre optic networks that connect the world together. In 2021, most real-time audio and video applications use <a href="https://webrtcforthecurious.com/docs/01-what-why-and-how/">WebRTC</a>, a set of open standards and browser APIs that define how to connect, secure, and transfer both media and data over UDP. It was designed to bring better, more flexible bi-directional communication when compared to the primary browser-based communication protocol we rely on today, HTTP. And because WebRTC is supported in the browser, it means that users don’t need custom clients, nor do developers need to build them: all they need is a browser.</p><p>Importantly, we’ve seen the need for reliable, real-time communication across time-zones and geographies increase dramatically, as organizations change the way they work (<a href="/the-future-of-work-at-cloudflare/">yes, including us</a>).</p><p>So where is real-time important in practice?</p><ul><li><p>One-to-one calls (think FaceTime). We’re used to almost instantaneous communication over traditional telephone lines, and there’s no reason for us to head backwards.</p></li><li><p>Group calling and conferencing (Zoom or Google Meet), where even just a few seconds of delay results in everyone talking over each other.</p></li><li><p>Social video, gaming and sports. 
You don’t want to be 10 seconds behind the action or miss that key moment in a game because the stream dropped a few frames or decided to buffer.</p></li><li><p>Interactive applications: 3D modeling in the browser, Augmented Reality on your phone, and even game streaming all need to happen in real time.</p></li></ul><p>We believe that we’ve only collectively scratched the surface when it comes to real-time applications — and part of that is because scaling real-time applications to even thousands of users requires new infrastructure paradigms and demands more from the network than traditional HTTP-based communication.</p>
    <div>
      <h3>Enter: WebRTC Components</h3>
      <a href="#enter-webrtc-components">
        
      </a>
    </div>
    <p>Today, we’re launching the closed beta of <i>WebRTC Components</i>, allowing teams running centralized <a href="https://www.cloudflare.com/learning/video/turn-server/">WebRTC TURN servers</a> to offload that infrastructure to Cloudflare’s distributed, global network and improve reliability, scale to more users, and spend less time managing infrastructure.</p><p><a href="https://webrtcforthecurious.com/docs/03-connecting/#turn">TURN</a>, or Traversal Using Relays Around NAT (Network Address Translation), was designed to navigate the practical shortcomings of WebRTC’s peer-to-peer origins. WebRTC was (and is!) a peer-to-peer technology, but in practice, establishing reliable peer-to-peer connections remains hard due to Carrier-Grade NAT, corporate NATs and firewalls. Further, each peer is limited by its own network connectivity — in a traditional <a href="https://webrtcforthecurious.com/docs/08-applied-webrtc/#full-mesh">peer-to-peer mesh</a>, participants can quickly find their network connections saturated because they have to receive data from every other peer. In a mixed environment with different devices (mobile, desktops), networks (high-latency 3G through to fast fiber), scaling to more than a handful of peers becomes extremely challenging.</p>
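<p>The full-mesh scaling problem can be made concrete with a quick count. This is a simplification (real calls also negotiate per-track bitrates and quality), but it shows why mesh calls fall over as participants are added:</p>

```python
# Stream counts in a full WebRTC mesh (simplified model).

def mesh_streams_total(participants: int) -> int:
    """Total media streams in flight: one per ordered pair of peers."""
    return participants * (participants - 1)

def uplinks_per_peer(participants: int) -> int:
    """Each peer must upload its media once per other participant."""
    return participants - 1
```

<p>Four peers already mean 12 streams in flight; at 20 peers it is 380, with each participant uploading its own video 19 times, which is why relaying media through well-connected infrastructure matters.</p>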
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/XY5oJWURkYZEvmSeSXGax/8dc00dc851aaa722ed75b8e53df26d87/Before.png" />
            
            </figure><p>Running a TURN service at the edge instead of on your own infrastructure gets you a better connection. Cloudflare operates an anycast network spanning <a href="/250-cities-is-just-the-start/">250+ cities</a>, meaning we’re very close to wherever your users are. When users connect to Cloudflare’s TURN service, they get a really good connection to the Cloudflare network. Once traffic is on our network, we leverage it and our <a href="/250-cities-is-just-the-start/">private backbone</a> to get you superior connectivity, all the way back to the other user on the call.</p><p>But even better: stop worrying about scale. WebRTC infrastructure is notoriously difficult to scale: you need to make sure you have the right capacity in the right location. Cloudflare’s TURN service scales automatically, and if you want more endpoints, they’re just an API call away.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5Qt7hVKP49sXw4ceYyA2Q2/73c56a80d17827050b8f90a37a7382ee/unnamed--1--1.png" />
            
            </figure><p>Of course, WebRTC Components is built on the Cloudflare network, benefiting from the DDoS protection that its 100 Tbps network offers. From now on, deploying scalable, secure, production-grade WebRTC relays globally is only a couple of API calls away.</p>
    <div>
      <h3>A Developer First Real-Time Platform</h3>
      <a href="#a-developer-first-real-time-platform">
        
      </a>
    </div>
    <p>But, as we like to say at Cloudflare: we’re just getting started. Managed, scalable TURN infrastructure is a critical building block for real-time one-to-one and small group calling services, especially for teams who have been managing their own infrastructure, but things rapidly become more complex when you start adding more participants.</p><p>Whether that’s managing the quality of the streams (“tracks”, in WebRTC parlance) each client is sending and receiving to keep call quality up, permissions systems to determine who can speak or broadcast in large-scale events, and/or building signalling infrastructure with support for chat and interactivity on top of the media experience, one thing is clear: there’s a lot to bite off.</p><p>With that in mind, here’s a sneak peek at where we’re headed:</p><ul><li><p>Developer-first APIs that abstract the need to manage and configure low-level infrastructure, authentication, authorization and participant permissions. Think in terms of your participants, rooms and channels, without having to learn the intricacies of ICE, peer connections and media tracks.</p></li><li><p>Integration with <a href="https://www.cloudflare.com/teams/access/">Cloudflare for Teams</a> to support organizational access policies: great for when your company town hall meetings are now conducted remotely.</p></li><li><p>Making it easy to connect any input and output source, including broadcasting to traditional HTTP streaming clients and recording for on-demand playback with <a href="/stream-live/">Stream Live</a>, and ingesting from RTMP sources with <a href="/restream-with-stream-connect/">Stream Connect</a>, or future protocols such as <a href="https://datatracker.ietf.org/doc/html/draft-murillo-whip-02">WHIP</a>.</p></li><li><p>Embedded serverless capabilities via <a href="https://workers.cloudflare.com/">Cloudflare Workers</a>, from triggering Workers on participant events (e.g. 
join, leave) through to building stateful chat and collaboration tools with <a href="/introducing-workers-durable-objects/">Durable Objects</a> and WebSockets.</p></li></ul><p>… and this is just the beginning.</p><p>We’re also looking for ambitious engineers who want to play a role in building our RTC platform. If you’re an engineer interested in building the next generation of real-time, interactive applications, <a href="https://boards.greenhouse.io/cloudflare/jobs/3523616?gh_jid=3523616&amp;gh_src=9b769b781us">join</a> <a href="https://boards.greenhouse.io/cloudflare/jobs/3523626?gh_jid=3523626&amp;gh_src=4bdb03661us">us</a>!</p><p>If you’re interested in working with us to help connect more of the world together, and are struggling with scaling your existing 1-to-1 real-time video &amp; audio platform beyond a few hundred or thousand concurrent users, <a href="https://docs.google.com/forms/d/e/1FAIpQLSeGvMJPTmsdWXq1rSCGHzszce5RdM5iYHxsQQfPk8Kt5rkaKQ/viewform?usp=sf_link">sign up for the closed beta</a> of WebRTC Components. We’re especially interested in partnering with teams at the beginning of their real-time journeys and who are keen to iterate closely with us.</p> ]]></content:encoded>
            <category><![CDATA[Birthday Week]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[Video]]></category>
            <category><![CDATA[Cloudflare Stream]]></category>
            <category><![CDATA[WebRTC]]></category>
            <guid isPermaLink="false">29oyPijBN1jb64XSQsGHLy</guid>
            <dc:creator>Matt Silverlock</dc:creator>
            <dc:creator>Achiel van der Mandele</dc:creator>
            <dc:creator>James Allworth</dc:creator>
        </item>
        <item>
            <title><![CDATA[Magic WAN & Magic Firewall: secure network connectivity as a service]]></title>
            <link>https://blog.cloudflare.com/magic-wan-firewall/</link>
            <pubDate>Mon, 22 Mar 2021 13:00:00 GMT</pubDate>
            <description><![CDATA[ Magic WAN provides secure, performant connectivity and routing for your entire corporate network, reducing cost and operational complexity. ]]></description>
            <content:encoded><![CDATA[ <p></p><p>Back in October 2020, we introduced <a href="/cloudflare-one/">Cloudflare One</a>, our vision for the future of <a href="https://www.cloudflare.com/learning/network-layer/network-security/">corporate networking and security</a>. Since then, we’ve been laser-focused on delivering more pieces of this platform, and today we’re excited to announce two of its most foundational aspects: <a href="http://www.cloudflare.com/magic-wan">Magic WAN</a> and Magic Firewall. Magic WAN provides secure, performant connectivity and routing for your entire corporate network, reducing cost and operational complexity. Magic Firewall integrates smoothly with Magic WAN, enabling you to enforce network firewall policies at the edge, across traffic from any entity within your network.</p>
    <div>
      <h3>Traditional network architecture doesn’t solve today’s problems</h3>
      <a href="#traditional-network-architecture-doesnt-solve-todays-problems">
        
      </a>
    </div>
    <p>Enterprise networks have historically adopted one of a few models, which were designed to enable <a href="https://www.cloudflare.com/learning/security/what-is-information-security/">secure information flow</a> between offices and data centers, with access to the Internet locked down and managed at <a href="https://www.cloudflare.com/learning/access-management/what-is-the-network-perimeter/">office perimeters</a>. As applications moved to the cloud and employees moved out of offices, these designs stopped working, and band-aid solutions like VPN boxes don’t solve the core problems with <a href="https://www.cloudflare.com/learning/network-layer/enterprise-networking/">enterprise network architecture</a>.</p><p>On the connectivity side, full mesh <a href="https://www.cloudflare.com/learning/network-layer/what-is-mpls/">MPLS (multiprotocol label switching) networks</a> are expensive and time consuming to deploy, challenging to maintain, exponentially hard to scale, and often have major gaps in visibility. Other architectures require backhauling traffic through central locations before sending it back to the source, which introduces unacceptable latency penalties and necessitates purchasing costly hub hardware for maximum capacity rather than actual utilization. And most customers we’ve talked to are struggling with the worst of both worlds: a combination of impossible-to-manage architectures stitched together over years or decades. Security architects also struggle with these models - they have to juggle a stack of security hardware boxes from different vendors and trade off cost, performance, and security as their network grows.</p>
    <div>
      <h3>Move your network perimeter to the edge and secure it as a service</h3>
      <a href="#move-your-network-perimeter-to-the-edge-and-secure-it-as-a-service">
        
      </a>
    </div>
    <p>With Magic WAN, you can securely connect any traffic source - data centers, offices, devices, cloud properties - to Cloudflare’s network and configure routing policies to get the bits where they need to go, all within one SaaS solution. Magic WAN supports a variety of on-ramps including <a href="https://www.cloudflare.com/learning/network-layer/what-is-gre-tunneling/">Anycast GRE tunnels</a>, <a href="https://www.cloudflare.com/network-interconnect/">Cloudflare Network Interconnect</a>, <a href="https://www.cloudflare.com/products/argo-tunnel/">Argo Tunnel</a>, <a href="/warp-for-desktop">WARP</a>, and a variety of <a href="/network-onramp-partnerships">Network On-ramp Partners</a>. No more MPLS expense or lead times, no more performance penalties from traffic trombones, and no more nightmare of managing a tangle of legacy solutions: instead, use Cloudflare’s global Anycast network as an extension of yours, and get better performance and visibility built-in.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2ld2gNBj2OfGU41qVEztvX/6bf3efaad900a825c9096e550eac8592/image4-21.png" />
            
            </figure><p>Once your traffic is connected to Cloudflare, how do you control which entities within your network are allowed to interact with each other, or the Internet? This is where Magic Firewall comes in. Magic Firewall allows you to centrally manage policy across your entire network, all at the edge as a service. Magic Firewall gives you fine-grained control over which data is allowed in and out of your network, or inside your network. Even better, you get visibility into exactly how traffic is flowing through your network from a single dashboard.</p><p>Magic WAN provides the foundation for the broad suite of functions included in Cloudflare One, which were all built in software from the ground up to scale and integrate smoothly. Magic Firewall is available for Magic WAN out of the box, and customers can easily activate additional <a href="https://www.cloudflare.com/learning/security/glossary/what-is-zero-trust/">Zero Trust security</a> and performance features such as our <a href="https://www.cloudflare.com/teams/">Secure Web Gateway</a> with <a href="https://www.cloudflare.com/learning/access-management/what-is-browser-isolation/">remote browser isolation</a>, <a href="/one-more-zero-trust-thing-cloudflare-intrusion-detection/">Intrusion Detection System</a>, Smart Routing, and more.</p><blockquote><p><i>“Our network team is excited by Magic WAN. Cloudflare has built a global network-as-a-service platform that will help network teams manage complex edge and multi-cloud environments much more efficiently. Operating a single global WAN with built-in security and fast routing functionality — regardless of the HQ, data center, branch office, or end user location — is a game-changer in WAN technology.”— Sander Petersson, Head of Infrastructure, FlightRadar24</i></p></blockquote><p>What does this look like concretely? 
Let's explore some ways you can use Magic WAN and Magic Firewall to <a href="https://www.cloudflare.com/network-services/solutions/enterprise-network-security/">solve problems in enterprise networking and security</a> with an example customer, Acme Corp.</p>
    <div>
      <h3>Replace MPLS between branch offices &amp; data centers</h3>
      <a href="#replace-mpls-between-branch-offices-data-centers">
        
      </a>
    </div>
    <p>Today, Acme Corp has offices around the world that are each connected to regional data centers and to each other with MPLS. Each data center, which hosts corporate applications and a stack of hardware boxes to keep them secure, also has leased line connectivity to at least one other data center. Acme is also migrating some applications to the cloud, and they’re planning to establish direct connections from their data centers to cloud providers to boost security and reliability.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2YQH7Wz81UKrcKrHcpGzAd/a1cabec7843221b8150ab0d85e96a51e/image6-8.png" />
            
            </figure><p>As Acme has grown, one of their network team's most consistent pain points is managing and maintaining their MPLS connectivity. It's expensive and deployment requires long lead times, limiting Acme’s speed of expanding into new locations (especially internationally) or supporting offices added through acquisitions. Employees in Acme’s offices connecting to cloud providers and SaaS apps experience frustrating latency, since traffic hairpins through an Acme data center for security policies enforced through a stack of hardware boxes before being sent to its destination. <a href="https://www.cloudflare.com/learning/network-layer/what-is-branch-networking/">Traffic between offices</a>, such as IP telephony and video conferencing, doesn’t have any security policies applied to it, presenting a gap in Acme’s security posture.</p><p>Just as they're working on transitioning to the cloud for compute and storage, Acme wants to migrate away from these private links and instead leverage the Internet, securely, for connectivity. They considered establishing fully meshed site-to-site IPSec VPN tunnels over the Internet, but the complexity strained their networking team as well as their heterogeneous router deployments. Magic WAN is ready to meet them where they are today, simplifying network management and delivering immediate performance benefits, as well as enabling Acme's gradual transition away from MPLS.</p><p>In this example deployment with Magic WAN, Acme connects each office and VPC to Cloudflare with Anycast GRE tunnels. With this architecture, Acme only needs to set up a single tunnel for each site/Internet connection in order to automatically receive connectivity to Cloudflare's entire global network (200+ cities in 100+ countries around the world) - like a hub and spoke architecture, except the hub is everywhere. 
Acme also chooses to establish dedicated private connectivity from their data centers with <a href="https://www.cloudflare.com/network-interconnect/">Cloudflare Network Interconnect</a>, enabling even more secure and reliable traffic delivery and great connectivity to other networks/cloud providers through Cloudflare’s <a href="https://www.peeringdb.com/net/4224">highly interconnected</a> network.</p><p>Once these tunnels are established, Acme can configure allowed routes for traffic within their private network (RFC 1918 space), and Cloudflare gets the traffic where it needs to go, providing resiliency and traffic optimization. With an easy setup process that takes only a few hours, Acme Corp can start their migration away from MPLS. And as new Cloudflare One capabilities like QoS and Argo for Networks are introduced, Acme’s network performance and reliability will continue to improve.</p>
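<p>To make the on-ramp concrete, here is a rough sketch of what a single site’s GRE tunnel could look like on a Linux router using iproute2. All addresses are documentation-range placeholders, not real Cloudflare anycast endpoints, and a production deployment would follow Cloudflare’s own onboarding instructions rather than this sketch.</p>

```
# Hypothetical GRE tunnel from a site router to an anycast endpoint.
# 203.0.113.10 = the site's public IP; 192.0.2.1 = a placeholder
# anycast tunnel endpoint; 10.99.0.0/31 = the tunnel's interior pair.
ip tunnel add cf-gre0 mode gre local 203.0.113.10 remote 192.0.2.1 ttl 255
ip addr add 10.99.0.1/31 dev cf-gre0
ip link set cf-gre0 up

# Send traffic destined for other sites' RFC 1918 space over the tunnel;
# the routing policies configured with Cloudflare decide where it lands.
ip route add 10.0.0.0/8 dev cf-gre0
```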
    <div>
      <h3>Secure remote employee access to private networks</h3>
      <a href="#secure-remote-employee-access-to-private-networks">
        
      </a>
    </div>
    <p>When Acme Corp's employees abruptly transitioned to working remotely last year due to the COVID-19 pandemic, Acme's IT organization scrambled to find short term fixes to maintain employee access to internal resources. Their legacy VPNs didn't hold up - Acme employees working from home struggled with connectivity, reliability, and performance issues as appliances were pushed beyond the limits they were designed for.</p><p>Thankfully, there's a better solution! Acme Corp can use Cloudflare for Teams and Magic WAN to provide a secure way for employees to access resources behind private networks from their devices, wherever they're working. Acme employees install the WARP client on their devices to send traffic to Cloudflare's network, where it can be authenticated and routed to private resources in data centers or VPCs that are connected to Cloudflare via GRE tunnel (shown in the previous example), Argo Tunnel, Cloudflare Network Interconnect, or IPSec (coming soon). This architecture solves the performance and capacity issues Acme experienced with their legacy VPNs - rather than sending all traffic through single choke point devices, it’s routed to the closest Cloudflare location where policy is applied at the edge before being sent along an optimized path to its destination.</p><p>Traffic from Acme Corp's employee devices, data centers and offices will also be able to be policed by Magic Firewall for powerful, granular policy control that's enforced across all "on-ramps." Whether employees are connecting from their phones and laptops or working from Acme offices, the same policies can be applied in the same place. This simplifies configuration and improves visibility for Acme's IT and security teams, who will be able to log into the Cloudflare dashboard to see and control policies in one place - a game changer compared to managing employee access across different VPNs, firewalls, and cloud services.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2bKIu94AdaXlZTDHf9gJWE/40358ef29ee33fc3875213ff613556e4/image2-22.png" />
            
            </figure><p>This solution allows Acme to transition away from their VPN, firewall, and <a href="https://www.cloudflare.com/learning/access-management/what-is-a-secure-web-gateway/">secure web gateway appliances</a>, improving performance and enabling easy policy management across traffic from wherever their employees are working.</p>
    <div>
      <h3>Migrate network and security functions to the edge</h3>
      <a href="#migrate-network-and-security-functions-to-the-edge">
        
      </a>
    </div>
    <p>Historically, Acme relied on stacks of hardware appliances in physical offices and data centers to enforce <a href="https://www.cloudflare.com/network-security/">network security</a> and get visibility into what’s happening on the network: specialized firewalls to inspect inbound or outbound traffic, intrusion detection systems and <a href="https://www.cloudflare.com/learning/security/what-is-siem/">SIEMs</a>. As their organization is moving to the cloud and rethinking the future of remote work, the Acme security team is looking for sustainable solutions that improve security, even beyond what used to be possible in their traditional castle-and-moat architecture.</p><p>Previously, we walked through how Acme could configure Magic WAN to send traffic from offices, data centers, cloud properties, and devices to Cloudflare’s network. Once this traffic is flowing through Cloudflare, it’s easy to add access controls and filtering functions to augment or replace on-prem security hardware, all delivered and administered through a single pane of glass.</p><p>With Magic Firewall, customers get a single <a href="https://www.cloudflare.com/learning/cloud/what-is-a-cloud-firewall/">firewall-as-a-service</a> that runs at the edge, replacing clunky boxes they have installed at branch offices or data centers. Magic Firewall allows them to easily manage configuration, but also simplifies compliance auditing.</p><p>To <a href="https://www.cloudflare.com/learning/access-management/what-is-access-control/">control access</a>, customers can put policies in place that determine exactly which traffic is allowed to go where. For example, Acme wants traffic to flow from the Internet to their web servers inside data centers on port 80 and 443, but wants to lock down <a href="https://www.cloudflare.com/learning/access-management/what-is-ssh/">SSH access</a> to only certain private networks inside branch offices.</p>
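<p>Expressed as firewall rules, that policy might look something like the sketch below. The field names echo the Wireshark-style filter expressions used across Cloudflare’s firewall products, but the exact syntax and every address here are illustrative placeholders rather than Magic Firewall’s documented configuration.</p>

```
# Allow inbound web traffic to the data-center web servers (action: allow)
ip.dst in {198.51.100.0/24} and tcp.dstport in {80 443}

# Permit SSH to those servers only from branch-office private ranges (action: allow)
ip.dst in {198.51.100.0/24} and ip.src in {10.20.0.0/16} and tcp.dstport eq 22

# Drop any other SSH attempts (action: block)
ip.dst in {198.51.100.0/24} and tcp.dstport eq 22
```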
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4GEELEJG2qGSpnjGhgk4Mw/d774989e74069792cf14ce52c01a943e/image1-29.png" />
            
            </figure><p>If Acme wants to go further in locking down their network, they can adopt a Zero Trust access model with <a href="https://www.cloudflare.com/teams/access/">Access</a> and <a href="https://www.cloudflare.com/teams/gateway/">Gateway</a> to control who can reach what, and how, across all their traffic flowing through the Cloudflare network. As Cloudflare releases new filtering and control functions, like our upcoming IDS/IPS and DLP solutions, Acme can enable them to further increase security with only a few clicks.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3aWN98JS4FlEiH4enUx9QP/58caec3717efc9b5851dd7e98cdc3a83/image5-22.png" />
            
            </figure><p>Acme’s long-term goal is to transition all security and performance functions to the cloud, consumed as a service. Magic WAN enables a smooth path for this migration, allowing them to gradually deepen their security posture as they retire legacy hardware.</p><blockquote><p><i>Increased cloud adoption along with the recent pivot to remote workers has increased the volume of Internet, SaaS, and IaaS traffic straining traditional network architectures such as MPLS. WAN architectures that offer a global scale, integrated enterprise network security functions, and direct, secure connectivity to remote users are key to organizations looking to increase their operational agility and lower total costs of ownership.— Ghassan Abdo, IDC Research VP, WW Telecom, Virtualization &amp; CDN</i></p></blockquote>
    <div>
      <h3>Cloudflare’s network as an extension of yours</h3>
      <a href="#cloudflares-network-as-an-extension-of-yours">
        
      </a>
    </div>
    <p>Like many of our products, Cloudflare One started as a collection of solutions to problems that we experienced in growing and securing our own network. Magic WAN and Magic Firewall allow us to extend the benefits of our careful architecture decisions over the past ten years to customers:</p><ul><li><p><b>Global scale &amp; close to eyeballs:</b> wherever your offices, data centers, and users are, we’re close by. We’ve worked hard to establish great connectivity to eyeball networks because of our CDN business, which pays dividends for remote workers who need great connectivity from their homes back to your network. It also means we can stop threats at the edge, close to their source, rather than running the risk of malicious traffic overwhelming capacity-limited on prem appliances.</p></li><li><p><b>Hardware and carrier-agnostic:</b> use whatever hardware you have today to connect to us, and get the benefits of resiliency afforded by our diverse carrier connectivity.</p></li><li><p><b>Built from scratch to work together:</b> we’ve developed our products in software, from the ground up, to integrate easily. We think constantly about how our products can integrate to make each other better as they’re created and evolved.</p></li></ul>
    <div>
      <h3>Get started today</h3>
      <a href="#get-started-today">
        
      </a>
    </div>
    <p>Magic WAN is available in limited beta, and Magic Firewall is generally available for all <a href="https://www.cloudflare.com/magic-transit">Magic Transit</a> customers and included out of the box with Magic WAN. If you’re interested in testing out Magic WAN or want to learn more about how Cloudflare can help your organization replace legacy MPLS architecture, <a href="https://www.cloudflare.com/products/zero-trust/remote-workforces/">secure access for remote workers</a>, and deepen your security posture while reducing total cost of ownership, please <a href="https://www.cloudflare.com/magic-wan">get in touch</a>.</p> ]]></content:encoded>
            <category><![CDATA[Cloudflare One]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[Security Week]]></category>
            <category><![CDATA[Firewall]]></category>
            <category><![CDATA[Network]]></category>
            <guid isPermaLink="false">wjmDtPvG1qvzkh0fZl7mo</guid>
            <dc:creator>Achiel van der Mandele</dc:creator>
            <dc:creator>Annika Garbers</dc:creator>
        </item>
        <item>
            <title><![CDATA[One more (Zero Trust) thing: Cloudflare Intrusion Detection System]]></title>
            <link>https://blog.cloudflare.com/one-more-zero-trust-thing-cloudflare-intrusion-detection/</link>
            <pubDate>Sat, 17 Oct 2020 13:00:00 GMT</pubDate>
            <description><![CDATA[ We’re very excited to announce our plans for Cloudflare Intrusion Detection System, a new product that monitors your network and alerts when an attack is suspected.  ]]></description>
            <content:encoded><![CDATA[ <p></p><p>Today, we’re very excited to announce our plans for Cloudflare Intrusion Detection System, a new product that monitors your network and alerts when an attack is suspected. With deep integration into Cloudflare One, Cloudflare Intrusion Detection System gives you a bird’s eye view of your <i>entire</i> global network and inspects all traffic for bad behavior, regardless of whether it came from outside or inside your network.</p>
    <div>
      <h3>Analyze your network without doing the legwork</h3>
      <a href="#analyze-your-network-without-doing-the-legwork">
        
      </a>
    </div>
    <p>Enterprises build firewall rules to keep their networks safe from external and internal threats. When bad actors try to attack a network, those firewalls check if the attack matches a rule pattern. If it does, the firewall steps in and blocks the attack.</p><p>Teams used to configure those rules across physical firewall appliances, frequently of different makes and models, deployed to physical locations. Yesterday, we <a href="/introducing-magic-firewall/">announced Magic Firewall</a>, Cloudflare’s network-level firewall delivered in our data centers around the world. Your team can write a firewall rule once, <a href="/our-network-cloudflare-one/">deploy it to Cloudflare</a>, and our global network will protect your offices and data centers without the need for on-premises hardware.</p><p>This is great if you know where attacks are coming from. If you don’t have that level of certainty, finding those types of attacks becomes expensive guesswork. Sophisticated attackers can prod a network’s defenses to determine what rules do or do not exist. They can exploit that information to launch quieter attacks. Or even worse: compromise your employees and attack from the inside.</p><p>We’re excited to end Zero Trust week by announcing one more thing: Cloudflare Intrusion Detection System (IDS), a solution that analyzes your <b>entire</b> network simultaneously and alerts you to events that your rules might not catch.</p><p>Cloudflare IDS represents a critical piece of Cloudflare One. With WARP connecting your devices, and Magic Transit connecting your offices and data centers to Cloudflare, Cloudflare IDS sits on top of both, allowing you to examine and evaluate all traffic simultaneously.  This gives you a single view of what’s happening inside of your network and where breaches might have occurred. Cloudflare IDS is also constantly getting better at identifying threats and attacks. 
You can opt in to receive alerts and, with a single click, quickly and easily block intrusion attempts that sneak past static rules. Most importantly, your team benefits from the intelligence Cloudflare gathers from attacks in other regions or industries to flag events that impact you.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1n3WciYqFWcKC5TOfRTt3o/b79c51173068b0f1ab20f14387015320/IDS-diagram_3x.png" />
            
            </figure><p>So how does it work?</p>
    <div>
      <h3>Assume breach</h3>
      <a href="#assume-breach">
        
      </a>
    </div>
    <p>Legacy security models implicitly trusted any connection inside the network. That made them vulnerable to breaches and attacks from bad actors coming from within. The <a href="https://www.cloudflare.com/learning/security/glossary/what-is-zero-trust/">concept of Zero Trust</a> flips the model by assuming every connection is hazardous. Instead of waiting for evidence that a definite breach has occurred, the assumption is that one has already happened.</p><p>In order to <a href="https://www.cloudflare.com/learning/access-management/how-to-implement-zero-trust/">implement the Zero Trust model</a> effectively, you need two core components:</p><ul><li><p>A comprehensive view across your entire network, which is constantly analyzed to catch problems that static rules might have missed; and</p></li><li><p>An intrusion detection system (purchased or homegrown), which does the analyzing.</p></li></ul><p>Part of what drives Cloudflare IDS’s effectiveness is its deep integration with Cloudflare One. WARP and Magic Transit provide the first component, allowing you to connect your entire network and all devices to Cloudflare, giving you a bird’s eye view of every single packet and connection.</p><p>Cloudflare IDS then helps detect attacks coming from everywhere inside the network by actively looking at traffic and the contents of traffic. Cloudflare IDS will operate in two ways: traffic shape and traffic inspection. By looking at the behavior of traffic on your network, we can learn what normal behavior looks like: a user only logs into a single system each day, only accesses certain applications, etc. We would not expect someone to try to log into many systems at once or port scan the network: clear signs of bad intent.</p><p>The other form of intrusion detection we employ is traffic inspection: looking inside traffic that flows through your network to see if anyone is performing a very targeted attack. 
These styles of attacks can’t be detected using traditional methods because they actually look like normal traffic: only by looking inside can we see that the actor is trying something malicious.</p>
    <div>
      <h3>Herd immunity</h3>
      <a href="#herd-immunity">
        
      </a>
    </div>
    <p>Attackers tend to follow a pattern. Bad actors who try an attack on one enterprise will then repeat that same attack elsewhere. Unfortunately, we’ve seen this increase lately, as attacks like Fancy Bear’s DDoS campaign move from organization to organization and repeat the same playbook.</p><p>We think we’re safer together. Cloudflare IDS learns from attacks against our network and all our customers’ networks to constantly identify new types of attacks being launched. We can then give your team the benefit of lessons learned from keeping Cloudflare and other customers safe. The platform also incorporates external threat feeds, and allows you to bring your own.</p>
    <div>
      <h3>Offload CPU spend</h3>
      <a href="#offload-cpu-spend">
        
      </a>
    </div>
    <p>A constant source of complaint from customers who run their own IDS solution (whether built in-house or purchased) is that IDS solutions are notoriously CPU-hungry. They need to keep a lot of state in memory, and require a lot of computation to work effectively and accurately.</p><p>With Cloudflare IDS, you can offload that burden to our network. Cloudflare was built from the ground up to be infinitely scalable. Every edge data center runs the exact same software, allowing us to spread workload efficiently and at massive scale. With Cloudflare running your IDS, you can remove the computational resource burden of legacy solutions and stop worrying about capacity.</p>
    <div>
      <h3>Ridiculously easy</h3>
      <a href="#ridiculously-easy">
        
      </a>
    </div>
    <p>When your team deploys Cloudflare IDS, you’ll need to click one button and that’s it. We’ll begin analyzing patterns in your Magic Transit traffic and Magic Firewall events to check them against our threat feeds.</p><p>If we determine that something suspicious has happened, we’ll send an alert to notify your team. Your security team can then begin to review the attempt and drill down into the data to make a determination about what happened. You can gain more insights into the type of attack and where it occurred on the dashboard. Remediation is a click away: just set up a rule and push it out to the global Cloudflare network: we’ll stop the attack dead in its tracks.</p>
    <div>
      <h3>What’s next?</h3>
      <a href="#whats-next">
        
      </a>
    </div>
    <p>Cloudflare IDS will launch following the general availability of <a href="/introducing-magic-firewall/">Magic Firewall</a>. If you want to be among the first to adopt IDS, please reach out to your account team to learn more.</p>
            <category><![CDATA[Zero Trust]]></category>
            <category><![CDATA[Zero Trust Week]]></category>
            <category><![CDATA[Security]]></category>
            <guid isPermaLink="false">1xojPuwIlnht0LTdNYQlGU</guid>
            <dc:creator>Sam Rhea</dc:creator>
            <dc:creator>Achiel van der Mandele</dc:creator>
        </item>
        <item>
            <title><![CDATA[Introducing Magic Firewall]]></title>
            <link>https://blog.cloudflare.com/introducing-magic-firewall/</link>
            <pubDate>Fri, 16 Oct 2020 15:00:00 GMT</pubDate>
            <description><![CDATA[ Today we’re excited to announce Magic Firewall™, a network-level firewall delivered through Cloudflare to secure your enterprise. Magic Firewall covers your remote users, branch offices, data centers and cloud infrastructure. ]]></description>
            <content:encoded><![CDATA[ 
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/51DdJ5LPNE1lDkmqfaRebW/2c7fe91183eeb7ffa1d1680a8541c595/Image_20201016_072224.jpeg.jpeg" />
            
            </figure><p>Today we’re excited to announce Magic Firewall™, a network-level firewall delivered through Cloudflare to secure your enterprise. Magic Firewall covers your remote users, branch offices, data centers and cloud infrastructure. Best of all, it’s deeply integrated with Cloudflare One™, giving you a one-stop overview of everything that’s happening on your network.</p><p>Cloudflare Magic Transit™ secures IP subnets with the same DDoS protection technology that we built to keep our own global network secure. That helps ensure your network stays safe from attack and remains available, and it replaces capacity-limited physical appliances with Cloudflare’s network.</p><p>That still leaves some hardware onsite, though, for a different function: firewalls. Networks don’t just need protection from DDoS attacks; administrators need a way to set policies for all traffic entering and leaving the network. With Magic Firewall, we want to help your team deprecate those network firewall appliances and move that burden to the Cloudflare global network.</p>
    <div>
      <h2>Firewall boxes are miserable to manage</h2>
      <a href="#firewall-boxes-are-miserable-to-manage">
        
      </a>
    </div>
    <p>Network firewalls have always been clunky. Not only are they expensive, they are bound by their own hardware constraints. If you need more CPU or memory, you have to buy more boxes. If you lack capacity, the entire network suffers, directly impacting employees who are trying to do their work. To compensate, network operators and security teams are forced to buy more capacity than they need, resulting in paying more than necessary.</p><p>We’ve heard this problem from our Magic Transit customers who are constantly running into capacity challenges:</p><blockquote><p><i>“We’re constantly running out of memory and running into connection limits on our firewalls. It’s a huge problem.”</i></p></blockquote><p>Network operators find themselves piecing together solutions from different vendors, mixing and matching features, and worrying about keeping policies in sync across the network. The result is more headache and added cost.</p>
    <div>
      <h2>The solution isn’t more hardware</h2>
      <a href="#the-solution-isnt-more-hardware">
        
      </a>
    </div>
    <p>Some organizations then turn to even more vendors and purchase additional hardware to manage the patchwork firewall hardware they have deployed. Teams then have to balance refresh cycles, updates, and end of life management across even more platforms. These are band-aid solutions that do not solve the fundamental problem: how do we create a single view of the entire network that gives insights into what is happening (good and bad) and apply policy instantaneously, <i>globally?</i></p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2VpvWSMYDhXy8OFXWeZuzn/c88e8ef9269192a0f405edfdb69fb79b/image2-18.png" />
            
            </figure><p>Traditional Firewall Architecture</p>
    <div>
      <h2>Introducing Magic Firewall</h2>
      <a href="#introducing-magic-firewall">
        
      </a>
    </div>
    <p>Instead of more band-aids, we’re excited to launch Magic Firewall as a single, comprehensive, solution to network filtering. Unlike legacy appliances, Magic Firewall runs in the Cloudflare network. That network scales up or down with a customer’s needs at any given time.</p><p>Running in our network delivers an added benefit. Many customers backhaul network traffic to single chokepoints in order to perform firewalling operations, adding latency. Cloudflare operates data centers in 200 cities around the world and each of those points of presence is capable of delivering the same solution. Regional offices and data centers can instead rely on a Cloudflare Magic Firewall engine running within 100 milliseconds of their operation.</p>
    <div>
      <h2>Integrated with Cloudflare One</h2>
      <a href="#integrated-with-cloudflare-one">
        
      </a>
    </div>
    <p>Cloudflare One consists of products that allow you to apply a single filtering engine with consistent security controls to your entire network, not just part of it. The same types of controls that your organization wants to apply to traffic leaving your networks should be applied to traffic leaving your devices.</p><p>Magic Firewall will integrate with what you’re already using in Cloudflare. For example, traffic leaving endpoints outside of the network can reach Cloudflare using the Cloudflare WARP client where Gateway will apply the same rules your team configures for network level filtering. Branch offices and data centers can connect through Magic Transit with the same set of rules. This gives you a one-stop overview of your entire network instead of having to hunt down information across multiple devices and vendors.</p>
    <div>
      <h2>How does it work?</h2>
      <a href="#how-does-it-work">
        
      </a>
    </div>
    <p>So what <i>is</i> Magic Firewall? Magic Firewall is a way to replace your antiquated on-premises network firewall with an as-a-service solution, pushing your <a href="https://www.cloudflare.com/learning/access-management/what-is-the-network-perimeter/">perimeter</a> out to the edge. We already allow you to apply firewall rules at our edge with Magic Transit, but the process to add or change rules has previously involved working with your account team or Cloudflare support. Our first version, generally available in the next few months, will allow all our Magic Transit customers to apply static OSI Layer 3 &amp; 4 mitigations completely self-service, at Cloudflare scale.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7Ev46NEGa1xfrn1DWKTpbF/16a86bc3067f7d0812b2c7d44c2a03f5/image1-23.png" />
            
            </figure>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7uCJhkWr15tTnXdWXyZg2M/4f16d7c85a7c1d492c15a0d7fc19546e/image3-18.png" />
            
            </figure><p>Cloudflare applies firewall policies at every data center, meaning you have firewalls applying policies across the globe</p><p>Our first version of Magic Firewall will focus on static mitigations, allowing you to set a standard set of rules that apply to your entire network, whether devices or applications are sitting in the cloud, an employee's device or a branch office. You'll be able to express rules allowing or blocking based on:</p><ul><li><p>Protocol</p></li><li><p>Source or destination IP and port</p></li><li><p>Packet length</p></li><li><p>Bit field match</p></li></ul><p>Rules can be crafted in <a href="https://www.wireshark.org/docs/wsug_html_chunked/ChWorkBuildDisplayFilterSection.html">Wireshark syntax</a>, a domain-specific language common in the networking world and the same syntax we use across our other products. With this syntax, you can craft extremely powerful rules to precisely allow or deny any traffic in or out of your network. If you suspect there’s a bad actor inside or outside of your perimeter, simply log on to the dashboard and block that traffic. Rules are pushed out globally in seconds, shutting down threats at the edge.</p>
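<p>As a purely illustrative example (using documentation-range addresses; the exact set of fields Magic Firewall supports may differ from Wireshark's), rules in this syntax could look like the following. The first expression matches inbound SSH attempts from a suspicious subnet; the second matches abnormally large UDP packets sent to a single host:</p>

```
ip.src == 203.0.113.0/24 and tcp.dstport == 22

ip.dst == 198.51.100.7 and udp and ip.len > 1400
```

<p>Attached to a block action, either expression would drop the matching traffic at every Cloudflare data center at once.</p>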
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1ZYICN15ioMIF0Zg9aEQZk/5c133827252678013e3348f02f6168da/image4-13.png" />
            
            </figure><p>Configuring firewalls should be easy and powerful. With Magic Firewall, rules can be configured using an easy UI that allows for complex logic. Or, just type the filter rule manually using Wireshark filter syntax and configure that way. Don’t want to mess with a UI? Rules can be added just as easily through the API.</p>
    <div>
      <h2>What’s next?</h2>
      <a href="#whats-next">
        
      </a>
    </div>
    <p>Looking at packets is not enough… Even with firewall rules, teams still need visibility into what’s <i>actually</i> happening on their network: what’s happening inside of these datastreams? Is this legitimate traffic or do we have malicious actors either inside or outside of our network doing nefarious things? Deploying Cloudflare to sit between any two actors that interact with any of your assets (be they employee devices or services exposed to the Internet) allows us to enforce any policy, anywhere, either on where the traffic is coming from or what’s inside the traffic. Applying policies based on traffic type is just around the corner and we’re excited to announce that we’re planning to add additional capabilities to automatically detect intrusion events based on what’s happening inside datastreams in the near future.</p><p>We're excited about this new journey. With Cloudflare One, we’re reinventing what the network looks like for corporations. We integrate access management, security features and performance across the board: for your network’s visitors but also for anyone inside it. All of this built on top of a network that was <a href="/our-network-cloudflare-one">#BuiltForThis.</a></p><p>We’ll be opening up Magic Firewall in a limited beta, starting with existing Magic Transit customers. If you’re interested, please <a href="https://www.cloudflare.com/lp/magic-firewall-beta/">let us know</a>.</p> ]]></content:encoded>
            <category><![CDATA[Zero Trust Week]]></category>
            <category><![CDATA[Zero Trust]]></category>
            <category><![CDATA[Magic Firewall]]></category>
            <category><![CDATA[Cloudflare One]]></category>
            <category><![CDATA[Magic Transit]]></category>
            <category><![CDATA[Security]]></category>
            <guid isPermaLink="false">2aDewXWkm04UQhuPDozb94</guid>
            <dc:creator>Achiel van der Mandele</dc:creator>
        </item>
        <item>
            <title><![CDATA[Announcing support for gRPC]]></title>
            <link>https://blog.cloudflare.com/announcing-grpc/</link>
            <pubDate>Thu, 01 Oct 2020 13:00:00 GMT</pubDate>
            <description><![CDATA[ Today we're excited to announce beta support for proxying gRPC, a next-generation protocol that allows you to build APIs at scale. With gRPC on Cloudflare, you get access to the security, reliability and performance features that you're used to having at your fingertips for traditional APIs. ]]></description>
            <content:encoded><![CDATA[ <p>Today we're excited to announce beta support for proxying <a href="https://grpc.io/">gRPC</a>, a next-generation protocol that allows you to build <a href="https://www.cloudflare.com/learning/security/api/what-is-an-api/">APIs</a> at scale. With gRPC on Cloudflare, you get access to the security, reliability and performance features that you're used to having at your fingertips for traditional APIs. Sign up for the beta today in the Network tab of the Cloudflare dashboard.</p><p>gRPC has proven itself to be a popular new protocol for building APIs at scale: it’s more efficient and built to offer superior bi-directional streaming capabilities. However, because gRPC uses newer technology, like HTTP/2, under the covers, existing security and performance tools did not support gRPC traffic out of the box. This meant that customers adopting gRPC to power their APIs had to pick between modernity on one hand, and things like security, performance, and reliability on the other. Because supporting modern protocols and making sure people can operate them safely and performantly is in our DNA, we set out to fix this.</p><p>When you put your gRPC APIs on Cloudflare, you immediately gain the benefits that come with Cloudflare. Apprehensive of exposing your APIs to bad actors? Add security features such as WAF and Bot Management. Need more performance? Turn on Argo Smart Routing to decrease time to first byte. Or increase reliability by adding a Load Balancer.</p><p>And naturally, gRPC plugs in to <a href="/introducing-api-shield">API Shield</a>, allowing you to add more security by enforcing client authentication and schema validation at the edge.</p>
    <div>
      <h3>What is gRPC?</h3>
      <a href="#what-is-grpc">
        
      </a>
    </div>
    <p>Protocols like JSON-REST have been the bread and butter of Internet-facing APIs for several years. They're great in that they operate over HTTP, their payloads are human-readable, and a large body of tooling exists to quickly set up an API for another machine to talk to. However, the same things that make these protocols popular are also weaknesses; JSON, as an example, is inefficient to store and transmit, and expensive for computers to parse.</p><p>In 2015, Google introduced <a href="https://grpc.io/">gRPC</a>, a protocol designed to be fast and efficient, relying on binary protocol buffers to serialize messages before they are transferred over the wire. This prevents (normal) humans from reading them but results in much higher processing efficiency. gRPC has become increasingly popular in the era of microservices because it neatly addresses the shortfalls laid out above.</p><table><thead><tr><th>JSON</th><th>Protocol Buffers</th></tr></thead><tbody><tr><td>{ “foo”: “bar” }</td><td>0b111001001100001011000100000001100001010</td></tr></tbody></table><p>gRPC relies on HTTP/2 as a transport mechanism. This poses a problem for customers trying to deploy common security technologies like web application firewalls, as most reverse proxy solutions (including Cloudflare’s HTTP stack, until today) downgrade HTTP requests to HTTP/1.1 before sending them off to an origin.</p><p>Beyond microservices in a datacenter, the original use case for gRPC, adoption has grown in many other contexts. Many popular mobile apps have millions of users, all relying on messages being sent back and forth between mobile phones and servers. We've seen many customers wire up API connectivity for their mobile apps by using the same gRPC API endpoints they already have inside their data centers for communication with clients in the outside world.</p><p>While this solves the efficiency issues with running services at scale, it exposes critical parts of these customers' infrastructure to the Internet, introducing security and reliability issues. 
Today we are introducing support for gRPC at Cloudflare, to secure and improve the experience of running gRPC APIs on the Internet.</p>
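<p>To make the size difference concrete, here is a small Python sketch (our own illustration, not gRPC itself): it compares a JSON encoding of a toy message against a hand-packed binary encoding of the same fields, which is the spirit of what protocol buffers do with a schema:</p>

```python
import json
import struct

# A toy message: a user id and a latitude/longitude pair.
message = {"user_id": 42, "lat": 52.37, "lon": 4.89}

# JSON: human-readable, but every key name and digit costs bytes on the wire.
json_bytes = json.dumps(message).encode("utf-8")

# Binary: field names live in the schema, not on the wire, so we only pack
# the values (one unsigned int and two doubles, little-endian) -- 20 bytes.
binary_bytes = struct.pack("<Idd", message["user_id"], message["lat"], message["lon"])

print(len(json_bytes), len(binary_bytes))  # the binary form is much smaller
```

<p>Decoding the binary form is a fixed-layout read rather than a character-by-character parse, which is where the processing-efficiency win comes from.</p>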
    <div>
      <h3>How does gRPC + Cloudflare work?</h3>
      <a href="#how-does-grpc-cloudflare-work">
        
      </a>
    </div>
    <p>The engineering work our team had to do to add gRPC support is composed of a few pieces:</p><ol><li><p><b>Changes to the early stages of our request processing pipeline to identify gRPC traffic</b> coming down the wire.</p></li><li><p><b>Additional functionality in our WAF to “understand” gRPC traffic</b>, ensuring gRPC connections are handled correctly within the WAF, including inspecting all components of the initial gRPC connection request.</p></li><li><p><b>Adding support to establish HTTP/2 connections to customer origins</b> for gRPC traffic, allowing gRPC to be proxied through our edge. HTTP/2 to origin support is currently limited to gRPC traffic, though we expect to expand the scope of traffic proxied back to origin over HTTP/2 soon.</p></li></ol><p>What does this mean for you, a Cloudflare customer interested in using our <a href="https://www.cloudflare.com/application-services/solutions/api-security/">tools</a> to secure and accelerate your API? Because of the hard work we’ve done, enabling support for gRPC is a click of a button in the Cloudflare dashboard.</p>
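<p>For a feel of the first step, the gRPC-over-HTTP/2 protocol marks requests with a <code>content-type</code> of <code>application/grpc</code>, optionally suffixed as in <code>application/grpc+proto</code>. A heavily simplified sketch of identifying such traffic (an illustration only, not Cloudflare's actual pipeline) might look like:</p>

```python
def is_grpc_request(headers: dict) -> bool:
    """Classify an HTTP/2 request as gRPC by its content-type header.

    Per the gRPC-over-HTTP/2 wire protocol, gRPC requests carry
    "application/grpc", optionally with a "+<format>" suffix.
    """
    content_type = headers.get("content-type", "").lower()
    return content_type == "application/grpc" or content_type.startswith("application/grpc+")

# A typical gRPC request vs. a plain JSON API call:
print(is_grpc_request({"content-type": "application/grpc+proto"}))  # True
print(is_grpc_request({"content-type": "application/json"}))        # False
```

<p>Once a request is classified this way, the proxy knows to keep the connection on HTTP/2 end-to-end instead of downgrading it.</p>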
    <div>
      <h3>Using gRPC to build mobile apps at scale</h3>
      <a href="#using-grpc-to-build-mobile-apps-at-scale">
        
      </a>
    </div>
    <p>Why does Cloudflare supporting gRPC matter? To dig in on one use case, let’s look at mobile apps. Apps need quick, efficient ways of interacting with servers to get the information needed to show on your phone. There is no browser, so they rely on <i>APIs</i> to get the information. API stands for “application programming interface” and is a standardized way for machines (say, your phone and a server) to talk to each other.</p><p>Let's say we're a mobile app developer with thousands, or even millions of users. With this many users, using a modern protocol like gRPC allows us to run less compute infrastructure than would be necessary with older, less efficient protocols like JSON-REST. But exposing these endpoints, naked, on the Internet is really scary. Up until now there were very few, if any, options for protecting gRPC endpoints against application layer attacks with a <a href="https://www.cloudflare.com/learning/ddos/glossary/web-application-firewall-waf/">WAF</a> and guarding against volumetric attacks with DDoS mitigation tools. That changes today, with Cloudflare adding gRPC to its set of supported protocols.</p><p>With gRPC on Cloudflare, you get the benefits of our security, reliability and performance products:</p><ul><li><p>WAF for inspection of incoming gRPC requests. Use managed rules or craft your own.</p></li><li><p>Load Balancing to increase reliability: configure multiple gRPC backends to handle the load, let Cloudflare distribute the load across them. Backend selection can be done in round-robin fashion, based on health checks or load.</p></li><li><p>Argo Smart Routing to increase performance by transporting your gRPC messages faster than the Internet would be able to route them. 
Messages are routed around congestion on the Internet, resulting in an average reduction of time to first byte by 30%.</p></li></ul><p>And of course, all of this works with <a href="/introducing-api-shield">API Shield</a>, an easy way to add mTLS authentication to any API endpoint.</p>
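<p>The round-robin selection mentioned in the load balancing bullet can be sketched in a few lines of Python (a toy illustration with made-up backend addresses, not Cloudflare's implementation):</p>

```python
import itertools

class RoundRobinBalancer:
    """Minimal round-robin selection over a fixed list of backends."""

    def __init__(self, backends):
        self.backends = list(backends)
        self._cycle = itertools.cycle(self.backends)  # endless rotation

    def pick(self):
        # Each call hands back the next backend in order, wrapping around.
        return next(self._cycle)

lb = RoundRobinBalancer(["10.0.0.1:50051", "10.0.0.2:50051"])
print([lb.pick() for _ in range(4)])  # alternates between the two backends
```

<p>Health-check- or load-based selection would simply filter or reorder the backend list before picking.</p>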
    <div>
      <h3>Enabling gRPC support</h3>
      <a href="#enabling-grpc-support">
        
      </a>
    </div>
    <p>To enable gRPC support, head to the <a href="https://dash.cloudflare.com">Cloudflare dashboard</a> and go to the Network tab. From there you can sign up for the beta.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1dXJ7lGxzg9e6EPFqgBTD7/74f4d6167eb4b151a6186af624a5ed66/image1-1.png" />
            
            </figure><p>We have limited seats available at launch, but will open up more broadly over the next few weeks. After signing up and toggling gRPC support, you’ll have to enable Cloudflare proxying on your domain on the DNS tab to activate Cloudflare for your gRPC API.</p><p>We’re excited to bring gRPC support to the masses, allowing you to add the security, reliability and performance benefits that you’re used to getting with Cloudflare. Enabling is just a click away. Take it for a spin and let us know what you think!</p> ]]></content:encoded>
            <category><![CDATA[Birthday Week]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[gRPC]]></category>
            <guid isPermaLink="false">4AG91cetalX3slOwHJWxYz</guid>
            <dc:creator>Achiel van der Mandele</dc:creator>
        </item>
        <item>
            <title><![CDATA[Introducing Regional Services]]></title>
            <link>https://blog.cloudflare.com/introducing-regional-services/</link>
            <pubDate>Fri, 26 Jun 2020 11:00:00 GMT</pubDate>
            <description><![CDATA[ Cloudflare launches Regional Services, giving customers control over where their data is processed. ]]></description>
            <content:encoded><![CDATA[ <p>In a world where, increasingly, workloads shift to the cloud, it is often uncertain and unclear how data travels the Internet and in which countries data is processed. Today, Cloudflare is pleased to announce that we're giving our customers control. With Regional Services, we’re providing customers full control over exactly where their traffic is handled.</p><p>We operate a global network spanning more than 200 cities. Each data center runs servers with the exact same software stack. This has enabled Cloudflare to quickly and efficiently add capacity where needed. It also allows our engineers to ship features with ease: deploy once, and it's available globally.</p><p>The same benefit applies to our customers: configure once and that change is applied everywhere in seconds, regardless of whether they’re changing security features, adding a DNS record or deploying a Cloudflare Worker containing code.</p><p>Having a homogenous network is great from a routing point of view: whenever a user performs an HTTP request, the closest datacenter is found due to Cloudflare's Anycast network. BGP looks at the hops that would need to be traversed to find the closest data center. This means that someone near the Canadian border (let's say North Dakota) could easily find themselves routed to Winnipeg (inside Canada) instead of a data center in the United States. This is generally what our customers want and expect: find the fastest way to serve traffic, regardless of geographic location.</p><a href="https://cloudflare.tv/">
      </a><p>Some organizations, however, have expressed preferences for maintaining regional control over their data for a variety of reasons. For example, they may be bound by agreements with their own customers that include geographic restrictions on data flows or data processing. As a result, some customers have requested control over where their web traffic is serviced.</p><p>Regional Services gives our customers the ability to accommodate regional restrictions while still using Cloudflare’s global edge network. As of today, Enterprise customers can add Regional Services to their contracts. With Regional Services, customers can choose which subset of data centers are able to service traffic on the HTTP level. But we're not reducing network capacity to do this: that would not be the Cloudflare Way. Instead, we're allowing customers to use our entire network for <a href="https://www.cloudflare.com/ddos/">DDoS protection</a> but limiting the data centers that apply higher-level layer 7 security and performance features such as WAF, Workers, and Bot Management.</p><p>Traffic is ingested on our global Anycast network at the location closest to the client, as usual, and then passed to data centers inside the geographic region of the customer’s choice. TLS keys are only <a href="/geo-key-manager-how-it-works">stored</a> and used to actually handle traffic inside that region. This gives our customers the benefit of our huge, low-latency, high-throughput network, capable of withstanding even the <a href="/the-daily-ddos-ten-days-of-massive-attacks/">largest DDoS attacks</a>, while also giving them local control: only data centers inside a customer’s preferred geographic region will have the access necessary to apply security policies.</p><p>The diagram below shows how this process works. When users connect to Cloudflare, they hit the closest data center to them, by nature of our Anycast network. That data center detects and mitigates DDoS attacks. 
Legitimate traffic is passed through to a data center within the geographic region of the customer's choosing. Inside that data center, traffic is inspected at OSI layer 7 and HTTP products can work their magic:</p><ul><li><p>Content can be returned from and stored in cache</p></li><li><p>The WAF looks inside the HTTP payloads</p></li><li><p>Bot Management detects and blocks suspicious activity</p></li><li><p>Workers scripts run</p></li><li><p>Access policies are applied</p></li><li><p>Load Balancers look for the best origin to service traffic</p></li></ul>
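<p>The two-step routing described above can be modeled in a few lines of Python. This is a toy sketch only: the data centers, distances and selection logic are hypothetical simplifications of what anycast and Regional Services actually do:</p>

```python
# Hypothetical data centers with a region tag and a distance from the client.
DATA_CENTERS = {
    "winnipeg": {"region": "CA", "km": 100},
    "chicago":  {"region": "US", "km": 700},
    "dallas":   {"region": "US", "km": 1500},
}

def route(allowed_region):
    # Step 1 (anycast): the nearest data center ingests the traffic and
    # applies DDoS protection, regardless of region.
    ingest = min(DATA_CENTERS, key=lambda dc: DATA_CENTERS[dc]["km"])
    # Step 2: layer 7 processing only happens inside the customer's region.
    eligible = [dc for dc, info in DATA_CENTERS.items() if info["region"] == allowed_region]
    l7 = min(eligible, key=lambda dc: DATA_CENTERS[dc]["km"])
    return ingest, l7

print(route("US"))  # ('winnipeg', 'chicago')
```

<p>In this toy model, a client near the Canadian border is still scrubbed in Winnipeg, but for a US-restricted customer the HTTP-level processing happens in Chicago.</p>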
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7aaFSqiVx77rXsS2N3RT1f/d574a8616e54dd8246b68ee94a09837e/image2-9.png" />
            
            </figure><p>Today's launch includes preconfigured geographic regions, and we'll look to add more depending on customer demand. US and EU regions are available immediately, meaning layer 7 (HTTP) products can be configured to apply only within those regions and not outside of them.</p><p>The US and EU maps are depicted below. Purple dots represent data centers that apply DDoS protection and network acceleration. Orange dots represent data centers that process traffic.</p>
    <div>
      <h3>US</h3>
      <a href="#us">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/27QO1l8SD4U7w27OSYYPOp/33c4577ab859445c0f3fab1f515fbf72/image1-10.png" />
            
            </figure>
    <div>
      <h3>EU</h3>
      <a href="#eu">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/10lHcRerwTtYDamjx1u0HA/7f714e18362e0ad7a09caa8ea4447406/BDES-655-_-Slides-with-Cloudflare-PoPs-for-product-launch--1-.jpg" />
            
            </figure><p>We're very excited to provide new tools to our customers, allowing them to dictate which of our data centers employ HTTP features and which do not. If you're interested in learning more, contact <a href="mailto:sales@cloudflare.com">sales@cloudflare.com</a>.</p> ]]></content:encoded>
            <category><![CDATA[Data Center]]></category>
            <category><![CDATA[Europe]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[BGP]]></category>
            <category><![CDATA[Regional Services]]></category>
            <guid isPermaLink="false">6odmOeCIIEK47sVIlmcGt6</guid>
            <dc:creator>Achiel van der Mandele</dc:creator>
        </item>
        <item>
            <title><![CDATA[Test your home network performance]]></title>
            <link>https://blog.cloudflare.com/test-your-home-network-performance/</link>
            <pubDate>Tue, 26 May 2020 17:00:59 GMT</pubDate>
            <description><![CDATA[ Cloudflare launches speed.cloudflare.com, a tool that allows you to gain in-depth insights into the quality of your network uplink, including throughput, latency and jitter. ]]></description>
            <content:encoded><![CDATA[ <p>With many people being forced to work from home, there’s <a href="/recent-trends-in-internet-traffic/">increased load on consumer ISPs</a>. You may be asking yourself: how well is my ISP performing with even more traffic? Today we’re announcing the general availability of speed.cloudflare.com, a way to gain meaningful insights into exactly how well your network is performing.</p><p>We’ve seen a massive shift from users accessing the Internet from <a href="/covid-19-impacts-on-internet-traffic-seattle-italy-and-south-korea/">busy office districts to spread out urban areas</a>.</p><p>Although there are a slew of speed testing tools out there, none of them give you precise insights into how they came to those measurements and how they map to real-world performance. With <a href="https://speed.cloudflare.com">speed.cloudflare.com</a>, we give you insights into what we’re measuring and how exactly we calculate the scores for your network connection. Best of all, you can easily download the measurements from right inside the tool if you’d like to perform your own analysis.</p><p>We also know you care about privacy. We believe that you should know what happens with the results generated by this tool. Many other tools sell the data to third parties. Cloudflare does not sell your data. Performance data is collected and anonymized and is governed by the terms of our <a href="https://www.cloudflare.com/privacypolicy/">Privacy Policy</a>. The data is used anonymously to determine how we can improve our network, both in terms of capacity as well as to help us determine which Internet Service Providers to peer with.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2XmV7Z2R39XCeitSokM2tH/c14af6877c0e7dc4d87e96be78a43aef/image1-10.png" />
            
            </figure><p>The test has three main components: download, upload and a latency test. Each measures a different aspect of your network connection.</p>
    <div>
      <h3>Down</h3>
      <a href="#down">
        
      </a>
    </div>
    <p>For starters we run you through a basic download test. We start off downloading small files and progressively move up to larger and larger files until the test has saturated your Internet downlink. Small files (we start off with 10KB, then 100KB and so on) are a good representation of how websites will load, as these typically encompass many small files such as images, CSS stylesheets and JSON blobs.</p><p>For each file size, we show you the measurements inside a table, allowing you to drill down. Each dot in the bar graph represents one of the measurements, with the thin line delineating the range of speeds we've measured. The slightly thicker block represents the set of measurements between the 25th and 75th percentile.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4AHcHPWM661ZMzgT6vvSPK/a8ee84fa21f13641439a7bb0342ecb2b/image2-10.png" />
            
            </figure><p>Getting up to the larger file sizes, we can see true maximum throughput: how much bandwidth do you really have? You may be wondering why we have to use progressively larger files. The reason is that download speeds start off slow (this is aptly called <a href="https://en.wikipedia.org/wiki/TCP_congestion_control#Slow_start">slow start</a>) and then progressively get faster. If we were to use only small files we would never get to the maximum throughput that your network provider supports, which should be close to the Internet speed your ISP quoted you when you signed up for service.</p><p>The maximum throughput on larger files will be indicative of how fast you can download large files such as games (<a href="https://store.steampowered.com/app/271590/Grand_Theft_Auto_V/">GTA V</a> is almost 100 GB to download!) or the maximum quality that you can stream video on (lower download speed means you have to use a lower resolution to get continuous playback). We only increase download file sizes up to the absolute minimum required to get accurate measurements: no wasted bandwidth.</p>
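<p>A toy Python simulation makes the slow-start effect visible. The numbers here are illustrative assumptions (an initial window of 10 segments, a window that doubles every round trip, and no cap from loss or the receive window, all of which real TCP would impose):</p>

```python
# Toy TCP slow start: the congestion window roughly doubles each round trip.
MSS = 1460          # bytes per segment
INITIAL_CWND = 10   # segments; a common modern initial window (assumption)

def rtts_to_transfer(file_size_bytes):
    """Count round trips needed to send a file under idealized slow start."""
    cwnd, sent, rtts = INITIAL_CWND, 0, 0
    while sent < file_size_bytes:
        sent += cwnd * MSS  # one window's worth of data per round trip
        cwnd *= 2           # exponential growth during slow start
        rtts += 1
    return rtts

print(rtts_to_transfer(10_000))      # 1 RTT: done before the window ever grows
print(rtts_to_transfer(25_000_000))  # many RTTs: the window has room to open up
```

<p>The 10 KB transfer finishes inside the very first window, so it can never reveal the link's real capacity; only the large transfer runs long enough for throughput to ramp up.</p>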
    <div>
      <h3>Up</h3>
      <a href="#up">
        
      </a>
    </div>
    <p>Upload is the opposite of download: we send data <i>from</i> your browser <i>to</i> the Internet. This metric is more important nowadays with many people working from home: it directly affects <a href="https://www.cloudflare.com/developer-platform/solutions/live-streaming/">live video conferencing</a>. A faster upload speed means your microphone and video feed can be of higher quality, meaning people can see and hear you more clearly on videoconferences.</p><p>Measurements for upload operate in the same manner: we progressively try to upload larger and larger files up until the point we notice your connection is saturated.</p><p>Speed measurements are never 100% consistent, which is why we repeat them. An easy way for us to report your speed would be to simply report the fastest speed we see. The problem is that this would not be representative of your real-world experience: latency and packet loss constantly fluctuate, meaning you can't expect to see your maximum measured performance all the time.</p><p>To compensate for this, we take the 90th percentile of measurements, or p90, and report that instead of the absolute maximum speed we measured. Taking the 90th percentile is a more accurate representation in that it discounts peak outliers, giving a much closer approximation of what you can expect in terms of speeds in the real world.</p>
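<p>One common way to compute a 90th percentile is the nearest-rank method, sketched below in Python (the exact percentile method the tool uses may differ; the sample speeds are made up):</p>

```python
import math

def p90(samples):
    """Nearest-rank 90th percentile: one common definition."""
    ordered = sorted(samples)
    rank = math.ceil(0.9 * len(ordered))  # 1-based nearest rank
    return ordered[rank - 1]

# Ten download measurements in Mbps; one lucky burst to 95 Mbps.
measurements = [88, 90, 87, 95, 89, 91, 86, 90, 88, 89]
print(p90(measurements))  # 91 -- the 95 Mbps outlier is discounted
```

<p>Reporting 91 Mbps rather than the 95 Mbps peak is what keeps the number close to everyday experience.</p>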
    <div>
      <h3>Latency and Jitter</h3>
      <a href="#latency-and-jitter">
        
      </a>
    </div>
    <p>Download and upload are important metrics but don't paint the entire picture of the quality of your Internet connection. Many of us find ourselves interacting with work and friends over videoconferencing software more than ever. Although speeds matter, video is also very sensitive to the <i>latency</i> of your Internet connection. Latency represents the time an IP <i>packet</i> needs to travel from your device to the service you're using on the Internet and back. High latency means that when you're talking on a video conference, it will take longer for the other party to hear your voice.</p><p>But latency only paints half the picture. Imagine yourself in a conversation where you have some delay before you hear what the other person says. That may be annoying but after a while you get used to it. What would be even worse is if the delay <i>differed</i> constantly: sometimes the audio is almost in sync and sometimes it has a delay of a few seconds. You can imagine how often this would result in two people starting to talk at the same time. This is directly related to how <i>stable</i> your latency is and is represented by the jitter metric. Jitter is the average variation found in consecutive latency measurements. A lower number means that the latencies measured are more consistent, meaning your media streams will have the same delay throughout the session.</p>
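A minimal sketch of that definition of jitter, the average variation between consecutive latency measurements, with invented numbers for illustration:

```python
def jitter(latencies_ms: list[float]) -> float:
    """Average absolute difference between consecutive latency
    measurements: low when round-trip times are stable, high when the
    delay swings around."""
    deltas = [abs(b - a) for a, b in zip(latencies_ms, latencies_ms[1:])]
    return sum(deltas) / len(deltas)

# Invented numbers: a steady link vs. one with wildly swinging delay.
steady = [20.0, 21.0, 20.5, 21.5]    # jitter = (1 + 0.5 + 1) / 3 ~ 0.83 ms
unstable = [20.0, 80.0, 25.0, 90.0]  # jitter = (60 + 55 + 65) / 3 = 60 ms
```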
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4J8jZSYw2a59lXQZLn3Dtf/9704dd82d47bedfab3a8669b8c3f8a61/image4-5.png" />
            
            </figure><p>We've designed speed.cloudflare.com to be as transparent as possible: you can click into any of the measurements to see the average, median, minimum, maximum measurements, and more. If you're interested in playing around with the numbers, there's a download button that will give you the raw results we measured.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2AvTQZVVkOnGyXWDijbUkr/a138ef20812bfc813b8e3e7ee0333f91/image3-9.png" />
            
            </figure><p>The entire speed.cloudflare.com backend runs on Workers, meaning all logic runs entirely on the Cloudflare edge and your browser: no server necessary! If you're interested in seeing how the benchmarks work, we've open-sourced the code; feel free to take a peek at our <a href="https://github.com/cloudflare/worker-speedtest-template">GitHub</a> repository.</p><p>We hope you'll enjoy adding this tool to your set of network debugging tools. We love being transparent and our tools reflect this: your network performance is more than just one number. Give it a <a href="https://speed.cloudflare.com">whirl</a> and let us know what you think.</p> ]]></content:encoded>
            <category><![CDATA[Performance]]></category>
            <category><![CDATA[Speed]]></category>
            <category><![CDATA[Latency]]></category>
            <category><![CDATA[COVID-19]]></category>
            <category><![CDATA[Insights]]></category>
            <guid isPermaLink="false">HF2StgkKQVmu9H1vJZ3dp</guid>
            <dc:creator>Achiel van der Mandele</dc:creator>
        </item>
        <item>
            <title><![CDATA[Cloudflare for SSH, RDP and Minecraft]]></title>
            <link>https://blog.cloudflare.com/cloudflare-for-ssh-rdp-and-minecraft/</link>
            <pubDate>Mon, 13 Apr 2020 19:06:38 GMT</pubDate>
            <description><![CDATA[ Cloudflare now covers SSH, RDP and Minecraft, offering DDoS protection and increased network performance. Spectrum pay-as-you-go now available on all paid plans. ]]></description>
            <content:encoded><![CDATA[ <p></p><p>Almost exactly two years ago, we <a href="/spectrum/">launched Cloudflare Spectrum</a> for our Enterprise customers. Today, we’re thrilled to extend DDoS protection and traffic acceleration with Spectrum for <a href="https://www.cloudflare.com/products/cloudflare-spectrum/ssh">SSH</a>, <a href="https://www.cloudflare.com/products/cloudflare-spectrum/rdp/">RDP</a>, and <a href="https://www.cloudflare.com/products/cloudflare-spectrum/minecraft">Minecraft</a> to our Pro and Business plan customers.</p><p>When we think of Cloudflare, a lot of the time we think about protecting and improving the performance of websites. But the Internet is so much more, ranging from gaming, to managing servers, to cryptocurrencies. How do we make sure these applications are secure and performant?</p><p>With Spectrum, you can put Cloudflare in front of your SSH, RDP and Minecraft services, protecting them from DDoS attacks and improving network performance. This allows you to protect the management of your servers, not just your website. Better yet, by leveraging the Cloudflare network you also get increased reliability and increased performance: lower latency!</p>
    <div>
      <h3>Remote access to servers</h3>
      <a href="#remote-access-to-servers">
        
      </a>
    </div>
    <p>While access to websites from home is incredibly important, being able to remotely manage your servers can be equally critical. Losing access to your infrastructure can be disastrous: people need to know their infrastructure is safe and connectivity is good and performant. Usually, server management is done through SSH (Linux or Unix based servers) and RDP (Windows based servers). With these protocols, performance and reliability are <i>key</i>: you need to know you can always reliably manage your servers and that the bad people are kept out. What's more, low latency is really important. Every time you type a key in an SSH terminal or click a button in a remote desktop session, that key press or button click has to traverse the Internet to your origin before the server can process the input and send feedback. While increasing bandwidth can help, lowering latency can help even more in getting your sessions to feel like you're working on a local machine and not one half-way across the globe.</p>
    <div>
      <h3>All work and no play makes Jack Steve a dull boy</h3>
      <a href="#all-work-and-no-play-makes-jack-steve-a-dull-boy">
        
      </a>
    </div>
    <p>While we stay at home, many of us are also looking to play and not only work. Video games in particular have seen a huge <a href="https://en.wikipedia.org/wiki/Impact_of_the_2019%E2%80%9320_coronavirus_pandemic_on_the_video_game_industry">increase in popularity</a>. As personal interaction becomes more difficult to come by, Minecraft has become a popular social outlet. Many of us at Cloudflare are using it to stay in touch and have fun with friends and family in the current age of quarantine. And it’s not just employees at Cloudflare that feel this way, we’ve seen a big increase in Minecraft traffic flowing through our network. Traffic per week had remained steady for a while but has more than tripled since many countries have put their citizens in lockdown:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1AXaLJtErEqZ3dQb1wJguw/3453333c635763235c74f16688887d12/image3-10.png" />
            
            </figure><p>Minecraft is a particularly popular target for DDoS attacks: it's not uncommon for people to develop feuds whilst playing the game. When they do, some of the more tech-savvy players of this game opt to take matters into their own hands and launch a (D)DoS attack, rendering it unusable for the duration of the attacks. Our friends at Hypixel and Nodecraft have known this for many years, which is why they’ve chosen to protect their servers using Spectrum.</p><p>While we love recommending their services, we realize some of you prefer to run your own Minecraft server on a VPS (virtual private server like a DigitalOcean droplet) that you maintain. To help you protect your Minecraft server, we're providing Spectrum for Minecraft as well, available on Pro and Business plans. You'll be able to use the entire Cloudflare network to protect your server and increase network performance.</p>
    <div>
      <h3>How does it work?</h3>
      <a href="#how-does-it-work">
        
      </a>
    </div>
    <p>Configuring Spectrum is easy: just log in to your dashboard and head on over to the Spectrum tab. From there you can choose a protocol and configure the IP of your server:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2AVtgjvLEEdw2qPCABTjyg/a8ff0da389d7d9cc6defa51e899f98c4/image2-10.png" />
            
            </figure><p>After that all you have to do is use the subdomain you configured to connect instead of your IP. Traffic will be proxied using Spectrum on the Cloudflare network, keeping the bad people out and your services safe.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4tBzB1f6aFcCSNo9awsyXQ/6cb1e87075fef67a752f99248f8745ad/image1-13.png" />
            
            </figure><p>So how much does this cost? We're happy to announce that <a href="https://www.cloudflare.com/plans/">all paid plans</a> will get access to Spectrum for free, with a generous free data allowance. Pro plans will be able to use SSH and Minecraft, up to 5 gigabytes for free each month. Biz plans can go up to 10 gigabytes for free and also get access to RDP. After the free cap you will be billed on a per <a href="https://support.cloudflare.com/hc/en-us/articles/360041721872">gigabyte basis</a>.</p><p>Spectrum is complementary to Access: it offers DDoS protection and improved network performance as a 'drop-in' product, no configuration necessary on your origins. If you want more control over who has access to which services, we highly recommend taking a look at <a href="https://teams.cloudflare.com/access/">Cloudflare for Teams</a>.</p><p>We're very excited to extend Cloudflare's services to not just HTTP traffic, allowing you to protect your core management services and Minecraft gaming servers. In the future, we'll add support for more protocols. If you have a suggestion, let us know! In the meantime, if you have a Pro or Business account, head on over to the dashboard and enable Spectrum today!</p> ]]></content:encoded>
            <category><![CDATA[Spectrum]]></category>
            <category><![CDATA[SSH]]></category>
            <category><![CDATA[DDoS]]></category>
            <category><![CDATA[Speed & Reliability]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[Security]]></category>
            <guid isPermaLink="false">5BPg140vG7il7lQnjTUQr7</guid>
            <dc:creator>Achiel van der Mandele</dc:creator>
        </item>
        <item>
            <title><![CDATA[Migrating from VPN to Access]]></title>
            <link>https://blog.cloudflare.com/migrating-from-vpn-to-access/</link>
            <pubDate>Sat, 28 Mar 2020 12:00:00 GMT</pubDate>
            <description><![CDATA[ With so many people at Cloudflare now working remotely, it's worth stepping back and looking at the systems we use to get work done and how we protect them. Over the years we've migrated from a traditional "put it behind the VPN!" company to a modern zero-trust architecture.  ]]></description>
            <content:encoded><![CDATA[ 
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/HbcD2nz4oPww319S2cbBv/6b8fb8eb285b84cf4f18fa5de13ecc46/access-plus-Spectrum_2x-1.png" />
            
            </figure><p>With so many people at Cloudflare now working remotely, it's worth stepping back and looking at the systems we use to get work done and how we protect them. <a href="/dogfooding-from-home">Over the years we've migrated</a> from a traditional "put it behind the VPN!" company to a modern zero-trust architecture. Cloudflare hasn’t completed its journey yet, but we're pretty darn close. Our general strategy: protect every internal app we can with <a href="https://teams.cloudflare.com/access/index.html">Access</a> (our zero-trust access proxy), and simultaneously beef up our VPN’s security with <a href="https://www.cloudflare.com/products/cloudflare-spectrum/">Spectrum</a> (a product allowing the proxying of arbitrary TCP and UDP traffic, protecting it from DDoS).</p><p>Before Access, we had many services behind VPN (Cisco ASA running AnyConnect) to enforce strict authentication and authorization. But VPN always felt clunky: it's difficult to set up, maintain (securely), and scale on the server side. Each new employee we onboarded needed to learn how to configure their client. But migration takes time and involves many different teams. While we migrated services one by one, we focused on the high priority services first and worked our way down. Until the last service is moved to Access, we still maintain our VPN, keeping it protected with Spectrum.</p><p>Some of our services didn't run over HTTP or other Access-supported protocols, and still required the use of the VPN: source control (git+ssh) was a particular sore spot. If any of our developers needed to commit code they'd have to fire up the VPN to do so. 
To help in our new-found goal to destroy the <a href="/releasing-the-cloudflare-access-feature-that-let-us-smash-a-vpn-on-stage/">piñata</a>, we introduced support for <a href="https://www.cloudflare.com/learning/access-management/what-is-ssh/">SSH</a> over Access, which allowed us to replace the VPN as a protection layer for our source control systems.</p><p>Over the years, we've been whittling away at our services, one-by-one. We're nearly there, with only a few niche tools remaining behind the VPN and not behind Access. As of this year, we are no longer requiring new employees to set up VPN as part of their company onboarding! We can see this in our Access logs, with more users logging into more apps every month:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5V12QwR05YIkZ1PHUXvnw6/c5dd851801509785fa22233a454cf8b5/1-3.png" />
            
            </figure><p>During this transition period from VPN to Access, we've had to keep our VPN service up and running. As VPN is a key tool for people doing their work while remote, it's extremely important that this service is highly available and performant.</p><p>Enter Spectrum: our DDoS protection and performance product for any TCP and UDP-based protocol. We put Spectrum in front of our VPN very early on and saw immediate improvement in our security posture and availability, all without any changes in end-user experience.</p><p>With Spectrum sitting in front of our VPN, we now use the entire Cloudflare edge network to protect our VPN endpoints against DDoS and improve performance for VPN end-users.</p><p>Setup was a breeze, with only minimal configuration needed:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3ToKNn4OwfUURzIitlS0VV/39722c8c1b4634328e5180451d553cdf/2-3.png" />
            
            </figure><p>Cisco AnyConnect uses HTTPS (TCP) to authenticate, after which the actual data is tunneled using a DTLS-encrypted UDP protocol.</p><p>Although configuration and setup were a breeze, actually getting it to work was definitely not. Our early users quickly noted that although authenticating worked just fine, they couldn’t actually see any data flowing through the VPN. We quickly realized our arch-nemesis, the MTU (maximum transmission unit), was to <a href="/path-mtu-discovery-in-practice/">blame</a>. As some of our readers might remember, we have historically always set a very small MTU size for IPv6. We did this because there might be IPv6 to IPv4 tunnels in between eyeballs and our edge. By setting it very low we prevented PTB (packet too big) packets from ever getting sent back to us, which causes problems due to our ECMP routing inside our data centers. But with a VPN, you always increase the packet size due to the VPN header. This means that the 1280 MTU that we had set would never be enough to run a UDP-based VPN. We ultimately settled on an <a href="/increasing-ipv6-mtu/">MTU of 1420</a>, which we still run today and which allows us to protect our VPN entirely using Spectrum.</p><p>Over the past few years this has served us well, knowing that our VPN infrastructure is safe and people will be able to continue to work remotely no matter what happens. All in all this has been a very interesting journey, whittling down one service at a time, getting closer and closer to the day we can officially retire our VPN. To us, Access represents the future, with Spectrum + VPN to tide us over and protect our services until they’ve migrated over. In the meantime, as of the start of 2020, new employees no longer get a VPN account by default!</p> ]]></content:encoded>
            <category><![CDATA[Cloudflare Access]]></category>
            <category><![CDATA[Election Security]]></category>
            <category><![CDATA[VPN]]></category>
            <category><![CDATA[Cloudflare Zero Trust]]></category>
            <guid isPermaLink="false">UArmX2uXKhofwGIhZqNQ4</guid>
            <dc:creator>Achiel van der Mandele</dc:creator>
        </item>
        <item>
            <title><![CDATA[Spectrum for UDP: DDoS protection and firewalling for unreliable protocols]]></title>
            <link>https://blog.cloudflare.com/spectrum-for-udp-ddos-protection-and-firewalling-for-unreliable-protocols/</link>
            <pubDate>Wed, 20 Mar 2019 15:01:00 GMT</pubDate>
            <description><![CDATA[ Today, we're announcing Spectrum for UDP. Spectrum for UDP works the same as Spectrum for TCP: Spectrum sits between your clients and your origin. Incoming connections are proxied through, whilst applying our DDoS protection and IP Firewall rules.  ]]></description>
            <content:encoded><![CDATA[ <p>Today, we're announcing Spectrum for UDP. Spectrum for UDP works the same as Spectrum for TCP: Spectrum sits between your clients and your origin. Incoming connections are <i>proxied</i> through, whilst applying our DDoS protection and IP Firewall rules. This allows you to protect your services from all sorts of nasty attacks and completely hides your origin behind Cloudflare.</p><p>Last year, we launched <a href="/spectrum/">Spectrum</a>. Spectrum brought the power of our DDoS and firewall features to all TCP ports and services. Spectrum for TCP allows you to protect your SSH services, gaming protocols, and as of last month, even <a href="https://developers.cloudflare.com/spectrum/getting-started/ftp/">FTP servers</a>. We’ve seen customers running all sorts of applications behind Spectrum, such as <a href="https://www.bitfly.at">Bitfly</a>, <a href="https://nicehash.com">Nicehash</a>, and <a href="https://hypixel.net">Hypixel</a>.</p><p>This is great if you're running TCP services, but plenty of our customers also have workloads running over UDP. As an example, many multiplayer games prefer the low cost and lighter weight of UDP and don't care about whether packets arrive or not.</p><p>UDP applications have historically been hard to protect and secure, which is why we built Spectrum for UDP. Spectrum for UDP allows you to protect standard UDP services (such as RDP over UDP), but can also protect any custom protocol you come up with! The only requirement is that it uses UDP as an underlying protocol.</p>
    <div>
      <h3>Configuring a UDP application on Spectrum</h3>
      <a href="#configuring-a-udp-application-on-spectrum">
        
      </a>
    </div>
    <p>To configure on the dashboard, simply switch the application type from TCP to UDP:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6mq0LQqGOJk7r1M8rwwC8P/fb5cab669371da0048342fc1c00a018e/image1.png" />
            
            </figure>
    <div>
      <h3>Retrieving client information</h3>
      <a href="#retrieving-client-information">
        
      </a>
    </div>
    <p>With Spectrum, we terminate the connection and open a new one to your origin. But what if you still want to see who's actually connecting to you? For TCP, there's <a href="https://www.haproxy.org/download/1.8/doc/proxy-protocol.txt">Proxy Protocol</a>. Initially introduced by HAProxy, it has since been adopted by other parties, such as <a href="https://docs.nginx.com/nginx/admin-guide/load-balancer/using-proxy-protocol/">nginx</a>. We added <a href="https://developers.cloudflare.com/spectrum/getting-started/proxy-protocol/">support</a> in late 2018, allowing you to easily read the client's IP and port from a header that precedes each data stream.</p><p>Unfortunately, there is no equivalent for UDP, so we're rolling our own. Because UDP is connectionless, we can't use the TCP Proxy Protocol approach of prepending the entire stream with a single header. Instead, we are forced to prepend each packet with a small header that specifies:</p><ul><li><p>the original client IP</p></li><li><p>the Spectrum IP</p></li><li><p>the original client port</p></li><li><p>the Spectrum port</p></li></ul><p>The schema below represents a UDP packet prefaced with our <i>Simple Proxy Protocol</i> header.</p>
            <pre><code>0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|          Magic Number         |                               |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+                               +
|                                                               |
+                                                               +
|                                                               |
+                         Client Address                        +
|                                                               |
+                               +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                               |                               |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+                               +
|                                                               |
+                                                               +
|                                                               |
+                         Proxy Address                         +
|                                                               |
+                               +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                               |         Client Port           |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|           Proxy Port          |          Payload...           |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+</code></pre>
            <p>Simple Proxy Protocol is turned off by default, which means UDP packets will arrive at your origin as if they were sent from Spectrum. To turn it on, simply enable it on your Spectrum app.</p>
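Based on the byte layout above (a 2-byte magic number, two 16-byte addresses, and two 2-byte ports, all in network byte order), an origin could strip the header along these lines. This is a hypothetical sketch, not official client code, and it doesn't validate the magic number:

```python
import ipaddress
import struct

# magic (2 bytes), client address (16), proxy address (16),
# client port (2), proxy port (2) -- all network byte order, 38 bytes total.
HEADER = struct.Struct("!2s16s16sHH")

def parse_spp(packet: bytes):
    """Split a Simple Proxy Protocol datagram into its header fields and
    the original UDP payload, following the diagram above. The 16-byte
    address fields are interpreted as IPv6 addresses."""
    magic, client_raw, proxy_raw, client_port, proxy_port = HEADER.unpack_from(packet)
    payload = packet[HEADER.size:]
    return (ipaddress.ip_address(client_raw), client_port,
            ipaddress.ip_address(proxy_raw), proxy_port, payload)
```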
    <div>
      <h3>Getting access to Spectrum for UDP</h3>
      <a href="#getting-access-to-spectrum-for-udp">
        
      </a>
    </div>
    <p>We're excited about launching this and even more excited to see what you'll build and protect with it. In fact, what if you could build serverless services on Spectrum, without actually having an origin running? Stay tuned for some cool announcements in the near future.</p><p>Spectrum for UDP is currently an Enterprise-only feature. To get UDP enabled for your account, please reach out to your account team and we’ll get you set up.</p><p>One more thing... if you’re at <a href="https://gdconf.com">GDC</a> this year, say hello at booth <a href="https://www.expocad.com/host/fx/ubm/gdc19/exfx.html#floorplan">P1639</a>! We’d love to talk more and learn about what you’d like to do with Spectrum.</p> ]]></content:encoded>
            <category><![CDATA[DDoS]]></category>
            <category><![CDATA[Spectrum]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[TCP]]></category>
            <category><![CDATA[Speed & Reliability]]></category>
            <category><![CDATA[Firewall]]></category>
            <guid isPermaLink="false">5sYOiRAlrMZkxNKFcfX66T</guid>
            <dc:creator>Achiel van der Mandele</dc:creator>
        </item>
    </channel>
</rss>