
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/">
    <channel>
        <title><![CDATA[ The Cloudflare Blog ]]></title>
        <description><![CDATA[ Get the latest news on how products at Cloudflare are built and the technologies they use, and join the teams helping to build a better Internet. ]]></description>
        <link>https://blog.cloudflare.com</link>
        <atom:link href="https://blog.cloudflare.com/" rel="self" type="application/rss+xml"/>
        <language>en-us</language>
        <image>
            <url>https://blog.cloudflare.com/favicon.png</url>
            <title>The Cloudflare Blog</title>
            <link>https://blog.cloudflare.com</link>
        </image>
        <lastBuildDate>Sat, 04 Apr 2026 07:41:05 GMT</lastBuildDate>
        <item>
            <title><![CDATA[The backbone behind Cloudflare’s Connectivity Cloud]]></title>
            <link>https://blog.cloudflare.com/backbone2024/</link>
            <pubDate>Tue, 06 Aug 2024 14:00:00 GMT</pubDate>
            <description><![CDATA[ Read through the latest milestones and expansions of Cloudflare's global backbone and how it supports our Connectivity Cloud and our services ]]></description>
            <content:encoded><![CDATA[ <p>The modern use of "cloud" arguably traces its origins to the cloud icon, omnipresent in network diagrams for decades. A cloud was used to represent the vast and intricate infrastructure components required to deliver network or Internet services without going into depth about the underlying complexities. At Cloudflare, we embody this principle by providing critical infrastructure solutions in a user-friendly way. Our logo, featuring the cloud symbol, reflects our commitment to simplifying the complexities of Internet infrastructure for all our users.</p><p>This blog post provides an update about our infrastructure, focusing on our global backbone in 2024, and highlights its benefits for our customers, our competitive edge in the market, and the impact on our mission of helping build a better Internet. Since our last backbone-related <a href="http://blog.cloudflare.com/cloudflare-backbone-internet-fast-lane">blog post</a> in 2021, we have increased our backbone capacity (Tbps) by more than 500%, unlocking new use cases, as well as reliability and performance benefits for all our customers.</p>
    <div>
      <h3>A snapshot of Cloudflare’s infrastructure</h3>
      <a href="#a-snapshot-of-cloudflares-infrastructure">
        
      </a>
    </div>
    <p>As of July 2024, Cloudflare has data centers in 330 cities across more than 120 countries, each running Cloudflare equipment and services. The goal of delivering Cloudflare products and services everywhere remains consistent, although these data centers vary in the number of servers and amount of computational power.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/38RRu7BaumWFemL23JcFLW/fd1e4aced5095b1e04384984c88e48be/BLOG-2432-2.png" />
          </figure><p></p><p>These data centers are strategically positioned around the world to ensure our presence in all major regions and to help our customers comply with local regulations. Together they form a programmable smart network that routes your traffic to the best possible data center for processing. This programmability allows us to keep sensitive data regional, with our <a href="https://www.cloudflare.com/data-localization/">Data Localization Suite solutions</a>, and within the constraints that our customers impose. Connecting these sites, and exchanging data with customers, public clouds, partners, and the broader Internet, is the role of our network, which is managed by our infrastructure engineering and network strategy teams. This network forms the foundation that makes our products lightning fast, keeps them globally reliable, secures every customer request, and helps customers comply with <a href="https://www.cloudflare.com/the-net/building-cyber-resilience/challenges-data-sovereignty/">data sovereignty requirements</a>.</p>
    <div>
      <h3>Traffic exchange methods</h3>
      <a href="#traffic-exchange-methods">
        
      </a>
    </div>
    <p>The Internet is an interconnection of different networks and separate <a href="https://www.cloudflare.com/learning/network-layer/what-is-an-autonomous-system/">autonomous systems</a> that operate by exchanging data with each other. There are multiple ways to exchange data, but for simplicity, we'll focus on two key methods by which these networks communicate: Peering and IP Transit. To better understand the benefits of our global backbone, it helps to understand these basic connectivity solutions we use in our network.</p><ol><li><p><b>Peering</b>: The voluntary interconnection of administratively separate Internet networks that allows for traffic exchange between users of each network is known as “<a href="https://www.netnod.se/ix/what-is-peering">peering</a>”. Cloudflare is one of the <a href="https://bgp.he.net/report/exchanges#_participants">most peered networks</a> globally. We have peering agreements with ISPs and other networks in 330 cities and across all major <a href="https://www.cloudflare.com/learning/cdn/glossary/internet-exchange-point-ixp/">Internet Exchanges (IXs)</a>. Interested parties can register to <a href="https://www.cloudflare.com/partners/peering-portal/">peer with us</a> anytime, or connect directly to our network through a <a href="https://developers.cloudflare.com/network-interconnect/pni-and-peering/">private network interconnect (PNI)</a>.</p></li><li><p><b>IP transit</b>: A paid service that allows traffic to cross or "transit" somebody else's network, typically connecting a smaller Internet service provider (ISP) to the larger Internet. Think of it as paying a toll to access a private highway with your car.</p></li></ol><p>The backbone is a dedicated high-capacity optical fiber network that moves traffic between Cloudflare’s global data centers, where we interconnect with other networks using the traffic exchange methods mentioned above. 
It enables data transfers that are more reliable than over the public Internet. For connectivity within a city and for long-distance connections, we manage our own dark fiber or lease wavelengths using Dense Wavelength Division Multiplexing (DWDM). DWDM is a fiber optic technology that enhances network capacity by transmitting multiple data streams simultaneously on different wavelengths of light within the same fiber. It’s like adding more lanes to a highway so that more cars can travel on it at once. We buy and lease these services from carrier partners all around the world.</p>
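As a rough, back-of-the-envelope illustration of how DWDM multiplies the capacity of a single fiber, consider multiplying a channel count by a per-wavelength line rate (both figures below are hypothetical, not Cloudflare's actual configuration):

```python
# Back-of-the-envelope DWDM capacity: one fiber carries many
# wavelengths ("lanes"), each running at a given line rate.
# Both figures are illustrative assumptions, not Cloudflare specifics.
channels = 96              # e.g. a 96-channel C-band DWDM system
gbps_per_wavelength = 400  # e.g. 400G coherent optics per wavelength

fiber_capacity_tbps = channels * gbps_per_wavelength / 1000
print(f"{channels} channels x {gbps_per_wavelength}G = {fiber_capacity_tbps} Tbps per fiber")
```

With these assumed numbers, a single fiber carries 38.4 Tbps; leasing more wavelengths is how capacity grows without laying new fiber.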
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1RgjDtW5LehGZEYXey4AQH/cfef08965313f67c84a052e0541fc42b/BLOG-2432-3.png" />
          </figure><p></p>
    <div>
      <h3>Backbone operations and benefits</h3>
      <a href="#backbone-operations-and-benefits">
        
      </a>
    </div>
    <p>Operating a global backbone is challenging, which is why many competitors don’t do it. We take on this challenge for two key reasons: traffic routing control and cost-effectiveness.</p><p>With IP transit, we rely on our transit partners to carry traffic from Cloudflare to the ultimate destination network, introducing unnecessary third-party reliance. In contrast, our backbone gives us full control over routing of both internal and external traffic, allowing us to manage it more effectively. This control is crucial because it lets us optimize traffic routes, usually resulting in the lowest-latency paths. Furthermore, serving large traffic volumes through the backbone is, on average, more cost-effective than IP transit. This is why we are doubling down on backbone capacity in regions such as Frankfurt, London, Amsterdam, Paris, and Marseille, where we see continuous traffic growth and where connectivity solutions are widely available and competitively priced.</p><p>Our backbone serves both internal and external traffic. Internal traffic includes customer traffic using our security or performance products and traffic from Cloudflare's internal systems that shift data between our data centers. <a href="http://blog.cloudflare.com/introducing-regional-tiered-cache">Tiered caching</a>, for example, optimizes our caching delivery by dividing our data centers into a hierarchy of lower tiers and upper tiers. If lower-tier data centers don’t have the content, they request it from the upper tiers. If the upper tiers don’t have it either, they then request it from the origin server. This process reduces origin server requests and improves cache efficiency. Using our backbone to transport the cached content between lower and upper-tier data centers and the origin is often the most cost-effective method, considering the scale of our network. 
<a href="https://www.cloudflare.com/network-services/products/magic-transit/">Magic Transit</a> is another example: we attract traffic, by means of BGP anycast, to the Cloudflare data center closest to the end user and apply our DDoS mitigation there. Our backbone then transports the clean traffic to our customer’s data center, which they connect via a <a href="http://blog.cloudflare.com/cloudflare-network-interconnect">Cloudflare Network Interconnect (CNI)</a>.</p><p>External traffic that we carry on our backbone can come from origin providers like AWS, Oracle, Alibaba, Google Cloud Platform, or Azure, to name a few. The origin responses from these cloud providers are transported through peering points and our backbone to the Cloudflare data center closest to our customer. By leveraging our backbone, we have more control over how we backhaul this traffic throughout our network, which results in greater reliability, better performance, and less dependency on the public Internet.</p><p>This interconnection between public clouds, offices, and the Internet with a controlled layer of performance, security, programmability, and visibility running on our global backbone is our <a href="http://blog.cloudflare.com/welcome-to-connectivity-cloud">Connectivity Cloud</a>.</p>
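The tiered-cache lookup described above amounts to walking a hierarchy until the content is found; here is a minimal, illustrative sketch in Python (the tier names, keys, and contents are invented):

```python
# Tiered caching, sketched: a lower-tier data center asks its upper tier
# before falling back to the origin. All contents here are hypothetical.

def fetch(key, lower_cache, upper_cache, origin):
    if key in lower_cache:                  # hit in the lower-tier data center
        return lower_cache[key], "lower tier"
    if key in upper_cache:                  # miss: ask the upper tier
        lower_cache[key] = upper_cache[key]
        return lower_cache[key], "upper tier"
    value = origin[key]                     # miss everywhere: fetch from origin
    upper_cache[key] = value                # populate both tiers on the way back
    lower_cache[key] = value
    return value, "origin"

lower, upper = {}, {"/logo.png": b"png-bytes"}
origin = {"/logo.png": b"png-bytes", "/index.html": b"html-bytes"}

print(fetch("/logo.png", lower, upper, origin))    # served from the upper tier
print(fetch("/logo.png", lower, upper, origin))    # now cached in the lower tier
print(fetch("/index.html", lower, upper, origin))  # origin fetch
```

Each miss that is absorbed by an upper tier is one fewer request to the origin, which is where the backbone transport between tiers pays off.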
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1Fk6k5NOgfOM3qpK0z3wb0/2fe9631dbe6b2dfc6b3c3cd0156f293e/Screenshot_2024-08-28_at_3.21.50_PM.png" />
          </figure><p><sub><i>This map is a simplification of our current backbone network and does not show all paths</i></sub></p><p></p>
    <div>
      <h3>Expanding our network</h3>
      <a href="#expanding-our-network">
        
      </a>
    </div>
    <p>As mentioned in the introduction, we have increased our backbone capacity (Tbps) by more than 500% since 2021. With the addition of subsea cable capacity to Africa, we achieved a big milestone in 2023 by completing our global backbone ring. It now reaches six continents through terrestrial fiber and subsea cables.</p><p>Building out our backbone within regions where Internet infrastructure is less developed compared to markets like Central Europe or the US has been a key strategy for our latest network expansions. We have a shared goal with regional ISP partners to keep our data flow localized and as close as possible to the end user. Traffic often takes inefficient routes outside the region due to the lack of sufficient local peering and regional infrastructure. This phenomenon, known as traffic tromboning, occurs when data is routed through more cost-effective international routes and existing peering agreements.</p><p>Our regional backbone investments in countries like India or Turkey aim to reduce the need for such inefficient routing. With our own in-region backbone, traffic can be routed directly between in-country Cloudflare data centers, such as from Mumbai to New Delhi to Chennai, reducing latency, increasing reliability, and helping us provide the same level of service quality as in more developed markets. We can ensure that data stays local, supporting our Data Localization Suite (<a href="https://www.cloudflare.com/data-localization/">DLS</a>), which helps businesses comply with regional data privacy laws by controlling where their data is stored and processed.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4WCNB78y1jHHsid46pBZOo/e950ced1e510cb8caeea0961c43ea8a0/BLOG-2432-5.png" />
          </figure><p></p>
    <div>
      <h3>Improved latency and performance</h3>
      <a href="#improved-latency-and-performance">
        
      </a>
    </div>
    <p>This strategic expansion has not only extended our global reach but has also significantly improved our overall latency. One illustration: since the deployment of our backbone between Lisbon and Johannesburg, we have seen a major performance improvement for users in Johannesburg. Customers benefiting from this improved latency include, for example, a financial institution running its APIs through us for real-time trading, where milliseconds can impact trades, or our <a href="https://www.cloudflare.com/network-services/products/magic-wan/">Magic WAN</a> users, for whom we facilitate site-to-site connectivity between branch offices.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1o0H8BNLf5ca8BBx38Q5Ee/5b22f7c0ad1c5c49a67bc5149763e81d/BLOG-2432-6.png" />
          </figure><p></p><p>The table above shows an example where we measured the round-trip time (RTT) for an uncached origin fetch, from an end user in Johannesburg to various origin locations, comparing our backbone and the public Internet. By carrying the origin request over our backbone, as opposed to IP transit or peering, local users in Johannesburg get their content up to 22% faster. By using our own backbone to long-haul the traffic to its final destination, we are in complete control of the path and performance. This improvement in latency varies by location, but consistently demonstrates the superiority of our backbone infrastructure in delivering high-performance connectivity.</p>
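For readers who want to take measurements like these on a small scale, one rough approach is to time a TCP handshake to an origin and compare paths. The sketch below is illustrative, not Cloudflare's measurement methodology; the hostname is a placeholder and `improvement_pct` is a helper defined here for the arithmetic:

```python
import socket
import time

def tcp_rtt_ms(host, port=443, timeout=3.0):
    """Approximate network RTT by timing a TCP three-way handshake."""
    start = time.monotonic()
    with socket.create_connection((host, port), timeout=timeout):
        pass
    return (time.monotonic() - start) * 1000

def improvement_pct(baseline_ms, backbone_ms):
    """How much faster (in percent) the lower-RTT path is vs. the baseline."""
    return (baseline_ms - backbone_ms) / baseline_ms * 100

# Placeholder hostname; in practice you would fetch the same origin
# over different paths and compare.
# rtt = tcp_rtt_ms("origin.example.com")

# An RTT that drops from 100 ms to 78 ms is 22% faster:
print(f"{improvement_pct(100.0, 78.0):.0f}% faster")
```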
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4ZEEZJERWQ2UB1sdTjWUtM/f90b11507ab24edbf84e9b4cfb9b1155/BLOG-2432-7.png" />
          </figure><p></p>
    <div>
      <h3>Traffic control</h3>
      <a href="#traffic-control">
        
      </a>
    </div>
    <p>Consider a navigation system using 1) GPS to identify the route and 2) a highway toll pass that is valid until your final destination and allows you to drive straight through toll stations without stopping. Our backbone works quite similarly.</p><p>Our global backbone is built upon two key pillars. The first is BGP (<a href="https://www.cloudflare.com/learning/security/glossary/what-is-bgp/">Border Gateway Protocol</a>), the routing protocol for the Internet, and the second is Segment Routing MPLS (<a href="https://www.cloudflare.com/learning/network-layer/what-is-mpls/">Multiprotocol label switching</a>), a technique for steering traffic across predefined forwarding paths in an IP network. By default, Segment Routing provides end-to-end encapsulation from ingress to egress routers, where the intermediate nodes execute no route lookup. Instead, they forward traffic across an end-to-end virtual circuit, or tunnel, called a label-switched path. Once traffic is put on a label-switched path, it cannot detour onto the public Internet and must continue on the predetermined route across Cloudflare’s backbone. This is nothing new, as many networks run a “BGP Free Core” where all the route intelligence is carried at the edge of the network, and intermediate nodes only participate in forwarding from ingress to egress.</p><p>By leveraging Segment Routing Traffic Engineering (SR-TE) in our backbone, we can automatically select paths between our data centers that are optimized for latency and performance. Sometimes the “shortest path” in terms of routing protocol cost is not the lowest-latency or highest-performance path.</p>
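The SR-TE point above, that the shortest path by routing-protocol cost is not always the lowest-latency path, can be illustrated with a tiny Dijkstra run over one hypothetical topology weighted two different ways (the city codes, links, and numbers are invented for illustration):

```python
import heapq

def shortest_path(graph, src, dst):
    """Plain Dijkstra over weighted edges; returns (total_cost, path)."""
    heap, seen = [(0, src, [src])], set()
    while heap:
        cost, node, path = heapq.heappop(heap)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, w in graph.get(node, {}).items():
            if nxt not in seen:
                heapq.heappush(heap, (cost + w, nxt, path + [nxt]))
    return float("inf"), []

# Hypothetical topology: hop count favors AMS->FRA->SIN, but measured
# latency (ms) favors the longer-looking AMS->MRS->SEZ->SIN route.
hops = {"AMS": {"FRA": 1, "MRS": 1}, "FRA": {"SIN": 1},
        "MRS": {"SEZ": 1}, "SEZ": {"SIN": 1}}
latency = {"AMS": {"FRA": 8, "MRS": 12}, "FRA": {"SIN": 190},
           "MRS": {"SEZ": 60}, "SEZ": {"SIN": 70}}

print(shortest_path(hops, "AMS", "SIN"))     # fewest hops: via FRA
print(shortest_path(latency, "AMS", "SIN"))  # lowest latency: via MRS and SEZ
```

With these made-up weights, the two-hop route costs 198 ms while the three-hop route costs 142 ms; SR-TE lets the operator pin traffic to the latter even though the IGP would prefer the former.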
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6QettBytPdJxacwVLVHYFN/de95a8e5a67514e64931fbe4d26967b6/BLOG-2432-8.png" />
          </figure>
    <div>
      <h3>Supercharged: Argo and the global backbone</h3>
      <a href="#supercharged-argo-and-the-global-backbone">
        
      </a>
    </div>
    <p><a href="https://www.cloudflare.com/lp/pg-argo-smart-routing/">Argo Smart Routing</a> is a service that uses Cloudflare’s portfolio of backbone, transit, and peering connectivity to find the optimal path between the data center where a user’s request lands and your back-end origin server. Argo may forward a request from one Cloudflare data center to another on the way to an origin if the performance would improve by doing so. <a href="http://blog.cloudflare.com/orpheus-saves-internet-requests-while-maintaining-speed">Orpheus</a> is the counterpart to Argo, and routes around degraded paths for all customer origin requests free of charge. Orpheus analyzes network conditions in real time and actively avoids reachability failures. Customers with Argo enabled get optimal performance for requests from Cloudflare data centers to their origins, while Orpheus provides error self-healing for all customers universally. By mixing our global backbone using Segment Routing as an underlay with <a href="https://www.cloudflare.com/application-services/products/argo-smart-routing/">Argo Smart Routing</a> and Orpheus as our connectivity overlay, we are able to transport critical customer traffic along the best paths available to us.</p><p>So how exactly does our global backbone fit together with Argo Smart Routing? 
<a href="http://blog.cloudflare.com/argo-and-the-cloudflare-global-private-backbone">Argo Transit Selection</a> is an extension of Argo Smart Routing where the lowest-latency path between Cloudflare data center hops is explicitly selected and used to forward customer origin requests. The lowest-latency path will often be our global backbone, as it is a dedicated and private means of connectivity, as opposed to third-party transit networks.</p><p>Consider a multinational Dutch pharmaceutical company that relies on Cloudflare's network and services with our <a href="https://www.cloudflare.com/learning/access-management/what-is-sase/">SASE solution</a> to connect their global offices, research centers, and remote employees. Their Asian branch offices depend on Cloudflare's security solutions and network for secure access to important data in their central data centers. In case of a cable cut between regions, our network would automatically look for the best alternative route between them so that business impact is limited.</p><p>Argo measures every potential combination of the different provider paths, including our own backbone, as an option for reaching origins with smart routing. Because of our vast interconnection with so many networks, and our global private backbone, Argo is able to identify the most performant network path for requests. The backbone is consistently one of the lowest-latency paths for Argo to choose from.</p><p>In addition to high performance, we care greatly about network reliability for our customers. This means we need to be as resilient as possible to fiber cuts and third-party transit provider issues. 
During a disruption of the <a href="https://en.wikipedia.org/wiki/AAE-1">AAE-1</a> (<a href="https://www.submarinecablemap.com/submarine-cable/asia-africa-europe-1-aae-1">Asia Africa Europe-1</a>) submarine cable, this is what Argo saw between Singapore and Amsterdam across some of our transit provider paths vs. the backbone.</p>
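Conceptually, the per-path decision described above reduces to choosing the best-performing candidate among the measured paths; a toy sketch (the provider names and RTT figures are hypothetical, loosely mirroring the event below):

```python
# Hypothetical measured RTTs (ms) between a data-center pair during a
# subsea-cable event; a smart-routing system would prefer the
# best-performing candidate path at each decision point.
candidate_paths = {
    "transit-provider-a": 310.0,  # congested after the cable cut
    "transit-provider-b": 245.0,  # degraded, but less so
    "backbone": 200.0,            # unaffected, thanks to path diversity
}

best_path = min(candidate_paths, key=candidate_paths.get)
print(best_path, candidate_paths[best_path])
```

In practice the measurements are continuous, so the preferred path can change as conditions on each candidate evolve.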
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/66CGBePnLzuLRuTErvf8Cr/813b4b60a95935491e967214851e5a04/BLOG-2432-9.png" />
          </figure><p>The large spike (purple line) shows a latency increase on one of our third-party IP transit provider paths due to congestion, which was eventually resolved, likely following traffic engineering within the provider’s network. We saw a smaller latency increase (yellow line) over other transit networks, but still one that is noticeable. The bottom (green) line on the graph is our backbone, where round-trip time remains more or less flat throughout the event, thanks to our diverse backbone connectivity between Asia and Europe. Throughout the fiber cut, we remained stable at around 200ms between Amsterdam and Singapore. There was no noticeable network hiccup as was seen on the transit provider paths, so Argo actively leveraged the backbone for optimal performance.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1A8CdaGq8P2hF3DtIs9dQI/a10fdf3af9de917fb0036d38eace9905/BLOG-2432-10.png" />
          </figure>
    <div>
      <h3>Call to action</h3>
      <a href="#call-to-action">
        
      </a>
    </div>
    <p>As Argo improves performance in our network, Cloudflare Network Interconnects (<a href="https://developers.cloudflare.com/network-interconnect/">CNIs</a>) optimize getting onto it. We encourage our Enterprise customers to use our free CNIs as on-ramps onto our network whenever practical. In this way, you can fully leverage our network, including our robust backbone, and increase overall performance for every product within your Cloudflare Connectivity Cloud. In the end, our global network is our main product, and our backbone plays a critical role in it. This way we continue to help build a better Internet, by improving our services for everybody, everywhere.</p><p>If you want to be part of our mission, join us as a Cloudflare network on-ramp partner to offer secure and reliable connectivity to your customers by integrating directly with us. Learn more about our on-ramp partnerships and how they can benefit your business <a href="https://www.cloudflare.com/network-onramp-partners/">here</a>.</p> ]]></content:encoded>
            <category><![CDATA[Connectivity Cloud]]></category>
            <category><![CDATA[Anycast]]></category>
            <category><![CDATA[Argo Smart Routing]]></category>
            <category><![CDATA[Athenian Project]]></category>
            <category><![CDATA[BGP]]></category>
            <category><![CDATA[Better Internet]]></category>
            <category><![CDATA[Cloudflare Network]]></category>
            <category><![CDATA[Magic Transit]]></category>
            <category><![CDATA[Product News]]></category>
            <guid isPermaLink="false">WiHZr8Fb6WzdVjo0egsWW</guid>
            <dc:creator>Shozo Moritz Takaya</dc:creator>
            <dc:creator>Bryton Herdes</dc:creator>
        </item>
        <item>
            <title><![CDATA[Argo Smart Routing for UDP: speeding up gaming, real-time communications and more]]></title>
            <link>https://blog.cloudflare.com/turbo-charge-gaming-and-streaming-with-argo-for-udp/</link>
            <pubDate>Tue, 20 Jun 2023 13:00:40 GMT</pubDate>
            <description><![CDATA[ Today, Cloudflare is super excited to announce that we’re bringing traffic acceleration to customers’ UDP traffic. Now, you can improve the latency of UDP-based applications like video games, voice calls, and video meetings by up to 17% ]]></description>
            <content:encoded><![CDATA[ 
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/64tixskqgONiSTACdvMbMX/3502932df801cd9691f432892495f379/image1-14.png" />
            
            </figure><p>Today, Cloudflare is super excited to announce that we’re bringing traffic acceleration to customers’ UDP traffic. Now, you can improve the latency of UDP-based applications like video games, voice calls, and video meetings by up to 17%. Combining the power of Argo Smart Routing (our traffic acceleration product) with UDP gives you the ability to supercharge your UDP-based traffic.</p>
    <div>
      <h3>When applications use TCP vs. UDP</h3>
      <a href="#when-applications-use-tcp-vs-udp">
        
      </a>
    </div>
    <p>Typically when people talk about the Internet, they think of websites they visit in their browsers, or apps that allow them to order food. This type of traffic is sent across the Internet via <a href="https://www.cloudflare.com/learning/ddos/glossary/hypertext-transfer-protocol-http/">HTTP</a>, which is built on top of the <a href="https://www.cloudflare.com/learning/ddos/glossary/tcp-ip/">Transmission Control Protocol</a> (TCP). However, there’s a lot more to the Internet than just browsing websites and using apps. Gaming, <a href="https://www.cloudflare.com/developer-platform/solutions/live-streaming/">live video</a>, or tunneling traffic to different networks via a VPN are all common applications that don’t use HTTP or TCP. These popular applications leverage the <a href="https://www.cloudflare.com/learning/ddos/glossary/user-datagram-protocol-udp/">User Datagram Protocol</a> (or UDP for short). To understand why these applications use UDP instead of TCP, we’ll need to dig into how these different applications work.</p><p>When you load a web page, you generally want to see the <i>entire</i> web page; the website would be confusing if parts of it were missing. For this reason, HTTP uses TCP as a method of transferring website data. TCP ensures that if a packet ever gets lost as it crosses the Internet, that packet will be resent. Having a reliable protocol like TCP is generally a good idea when 100% of the information sent needs to be loaded. It’s worth noting that later HTTP versions like <a href="https://www.cloudflare.com/learning/performance/what-is-http3/">HTTP/3</a> actually deviated from TCP as a transmission protocol, but they still ensure packet delivery by handling packet retransmission using the <a href="/the-road-to-quic/">QUIC protocol</a>.</p><p>There are other applications that prioritize quickly sending real-time data and are less concerned about perfectly delivering 100% of the data. 
Let’s explore Real-Time Communications (RTC) like video meetings as an example. If two people are streaming video live, all they care about is what is happening <i>now</i>. If a few packets are lost during the initial transmission, retransmission is usually too slow to render the lost packet data in the current video frame. TCP doesn’t really make sense in this scenario.</p><p>Instead, RTC protocols are built on top of UDP. TCP is like a formal back and forth conversation where every sentence matters. UDP is more like listening to your friend's stream of consciousness: you don’t care about every single bit as long as you get the gist of it. UDP transfers packet data with speed and efficiency without guaranteeing the delivery of those packets. This is perfect for applications like RTC where reducing latency is more important than occasionally losing a packet here or there. The same applies to gaming traffic; you generally want the most up-to-date information, and you don’t really care about retransmitting lost packets.</p><p>Gaming and RTC applications <i>really</i> care about <a href="https://www.cloudflare.com/learning/performance/glossary/what-is-latency/">latency</a>. Latency is the length of time it takes a packet to be sent to a server plus the length of time to receive a response from the server (called <a href="https://www.cloudflare.com/learning/cdn/glossary/round-trip-time-rtt/">round-trip time or RTT</a>). In the case of video games, the higher the latency, the longer it will take for you to see other players move and the less time you’ll have to react to the game. With enough latency, games become unplayable: if the players on your screen are constantly blipping around it’s near impossible to interact with them. In RTC applications like video meetings, you’ll experience a delay between yourself and your counterpart. 
You may find yourselves accidentally talking over each other which isn’t a great experience.</p><p>Companies that host gaming or RTC infrastructure often try to reduce latency by spinning up servers that are geographically closer to their users. However, it’s common to have two users that are trying to have a video call between distant locations like Amsterdam and Los Angeles. No matter where you install your servers, that's still a long distance for that traffic to travel. The longer the path, the higher the chances are that you're going to run into congestion along the way. Congestion is just like a traffic jam on a highway, but for networks. Sometimes certain paths get overloaded with traffic. This causes delays and packets to get dropped. This is where Argo Smart Routing comes in.</p>
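The fire-and-forget nature of UDP described above is visible directly in the socket API: no handshake, no retransmission, no ordering. A minimal Python sketch sending one datagram over loopback (the payload and use of an OS-assigned port are arbitrary choices for the example):

```python
import socket

# UDP is connectionless: we just bind a receiver and fire a datagram at it.
recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv.bind(("127.0.0.1", 0))            # let the OS pick a free port
port = recv.getsockname()[1]

send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send.sendto(b"frame-42", ("127.0.0.1", port))  # no delivery guarantee

data, addr = recv.recvfrom(2048)       # on loopback this arrives reliably
print(data)

send.close()
recv.close()
```

On a real network that `sendto` may simply be lost, and nothing in the protocol will notice, which is exactly the trade-off gaming and RTC applications accept in exchange for low latency.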
    <div>
      <h3>Argo Smart Routing</h3>
      <a href="#argo-smart-routing">
        
      </a>
    </div>
    <p>Cloudflare customers that want the best cross-Internet application performance rely on Argo Smart Routing’s traffic acceleration to reduce latency. Argo Smart Routing is like the GPS of the Internet. It uses real-time global network performance measurements to accelerate traffic, actively route around Internet congestion, and increase your traffic’s stability by reducing packet loss and jitter.</p><p>Argo Smart Routing was launched in <a href="/argo/">May 2017</a>, and its first iteration focused on reducing website traffic latency. Since then, we’ve <a href="/argo-v2/">improved Argo Smart Routing</a> and also <a href="/argo-spectrum/">launched Argo Smart Routing for Spectrum TCP traffic</a>, which reduces latency for any TCP-based protocol. Today, we’re excited to bring the same Argo Smart Routing technology to customers’ UDP traffic, which will reduce latency, packet loss, and jitter in gaming and live audio/video applications.</p><p>Argo Smart Routing accelerates Internet traffic by sending millions of synthetic probes from every Cloudflare data center to the origin of every Cloudflare customer. These probes measure the latency of all possible routes between Cloudflare’s data centers and a customer’s origin. We then combine that with probes running between Cloudflare’s data centers to calculate possible routes. When an Internet user makes a request to an origin, Cloudflare consults our real-time global latency measurements, examines Internet congestion data, and calculates the optimal route for the customer’s traffic. To enable Argo Smart Routing for UDP traffic, Cloudflare extended the route computations typically used for HTTP and TCP traffic and applied them to UDP traffic.</p><p>We knew that Argo Smart Routing offered impressive benefits for HTTP traffic, reducing time to first byte by up to 30% on average for customers. 
But UDP can be treated differently by networks, so we were curious to see if we would see a similar reduction in round-trip time for UDP. To validate, we ran a set of tests. We set up an origin in Iowa, USA and had a client connect to it from Tokyo, Japan. Compared to a regular Spectrum setup, we saw a decrease in round-trip time of up to 17.3% on average. For the standard setup, Spectrum was able to proxy packets to Iowa in 173.3 milliseconds on average. Comparatively, turning on Argo Smart Routing reduced the average round-trip time to 143.3 milliseconds. The distance between those two cities is 6,074 miles (9,776 kilometers), meaning we've effectively moved the two closer to each other by over a thousand miles (or 1,609 km) just by turning on this feature.</p><p>We're incredibly excited about Argo Smart Routing for UDP and what our customers will use it for. If you're in gaming or real-time communications, or even have a different use case that you think would benefit from speeding up UDP traffic, please contact your account team today. We are currently in closed beta but are excited to accept applications.</p> ]]></content:encoded>
            <category><![CDATA[Speed Week]]></category>
            <category><![CDATA[Argo Smart Routing]]></category>
            <category><![CDATA[Spectrum]]></category>
            <category><![CDATA[UDP]]></category>
            <category><![CDATA[Performance]]></category>
            <guid isPermaLink="false">5qKIhJCi7nIZIQudfOBtgh</guid>
            <dc:creator>Achiel van der Mandele</dc:creator>
            <dc:creator>Chris Draper</dc:creator>
        </item>
        <item>
            <title><![CDATA[Argo for Packets is Generally Available]]></title>
            <link>https://blog.cloudflare.com/argo-for-packets-generally-available/</link>
            <pubDate>Fri, 10 Dec 2021 13:58:45 GMT</pubDate>
            <description><![CDATA[ Today, we’re announcing the general availability of Argo for Packets, which provides IP layer network optimizations to supercharge your Cloudflare network services products. ]]></description>
            <content:encoded><![CDATA[ 
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3Ze9T67g2Ut7CQGQ7soMNr/66a70e17c8d33f4b9e53400fe82d2d4e/image5-14.png" />
            
            </figure><p>What would you say if we told you your IP network can be faster by 10%, and all you have to do is reach out to your account team to make it happen?</p><p>Today, we’re announcing the general availability of Argo for Packets, which provides IP layer network optimizations to supercharge your Cloudflare network services products like <a href="https://www.cloudflare.com/magic-transit/">Magic Transit</a> (our Layer 3 DDoS protection service), <a href="https://www.cloudflare.com/magic-wan/">Magic WAN</a> (which lets you build your own <a href="https://www.cloudflare.com/learning/network-layer/what-is-an-sd-wan/">SD-WAN</a> on top of Cloudflare), and <a href="/cloudflare-for-offices/">Cloudflare for Offices</a> (our initiative to provide secure, performant connectivity into thousands of office buildings around the world).</p><p>If you’re not familiar with Argo, it’s a Cloudflare product that makes your traffic faster. Argo finds the fastest, most available path for your traffic on the Internet. Every day, Cloudflare carries trillions of requests, connections, and packets across our network and the Internet. Because our network, our customers, and their end users are well distributed globally, all of these requests flowing across our infrastructure paint a great picture of how different parts of the Internet are performing at any given time. Cloudflare leverages this picture to ensure that your traffic takes the fastest path through our infrastructure.</p><p>Previously, Argo optimized traffic at Layer 7 (the application layer) and Layer 4 (the transport layer). With the GA of Argo for Packets, we’re now optimizing the IP layer for your private network. During <a href="/argo-v2/">Speed Week, we announced</a> early access for Argo for Packets and how it can offer a 10% latency reduction. 
Today, to celebrate Argo for Packets reaching GA, we’re going to dive deeper into the latency reductions, show you examples, explain how you can see even greater optimizations, and talk about how Argo’s secure data plane gives you additional encryption even at Layer 3.</p><p>And if you’re interested in enabling Argo for Packets today, please reach out to your account team to get the process started!</p>
    <div>
      <h3>Better than BGP</h3>
      <a href="#better-than-bgp">
        
      </a>
    </div>
    <p>As we said during Speed Week, Argo for Packets provides an average 10% latency improvement across the world in our internal testing:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/ikoMLDL8TGw5uMACrwHg1/6834d63e080c9ebe7117e1f66993a7fc/image6-14.png" />
            
            </figure><p>As we moved towards GA, we found that our real-world numbers matched our internal testing, and we still see that 10% improvement. But it’s important to note that the 10% latency reduction is an average across all paths across the world. Different customers can see different latency gains depending on their setup.</p><p>Argo for Packets achieves these latency gains by dynamically choosing the best possible path throughout our network. Let’s talk a bit about what that means.</p><p>Normal packets on the network find their way to their destination using something called the <a href="https://www.cloudflare.com/learning/security/glossary/what-is-bgp/">Border Gateway Protocol (BGP)</a>, which routes packets along the “shortest” path to their destination. However, the shortest path in BGP terms isn’t measured in latency, but in network hops. For example, suppose a network has two possible paths to Cloudflare: 12345 - 54321 - 13335, and 12345 - 13335. Both paths start from network 12345 and end at Cloudflare, which is AS 13335. BGP logic dictates that traffic will always take the second path because it traverses fewer networks. But if the first path has lower network latency or lower packet loss, customers could see better performance on it and never know!</p><p>There are two ways to remedy this. The first is to invest in building out more connectivity with network 12345 and expanding your network so that it sits right next to every network you exchange traffic with. Customers can also build out their own networks or purchase expensive vendor MPLS networks. Either solution costs a lot of money and time to reach the levels of performance customers want.</p><p>Cloudflare improves customer performance by leveraging our existing global network and backbone, plus networking data from traffic that’s already flowing across it, to optimize routes back to you. This allows us to improve which paths are taken as traffic changes and Internet congestion occurs. 
Argo looks at every path from every Cloudflare data center back to your origin, down to the individual network path. Argo compares existing Layer 4 traffic and network analytics across all of these unique paths to determine the fastest, most available path.</p><p>To make Argo personalized to your private network, Cloudflare makes use of a data source that we already built for <a href="https://www.cloudflare.com/magic-transit/">Magic Transit</a>. That data source: health check probes. Cloudflare leverages existing health check probes from every single Cloudflare data center back to each customer’s origin. These probes determine the health of paths from Cloudflare back to a customer for Magic Transit, so that Cloudflare knows which paths back to origin are healthy. The probes also contain information that can be used to improve performance, such as packet loss and latency data. By examining health check probes and combining them with existing Layer 4 data, Cloudflare gets a better understanding of one-way latencies and can construct a map of all the interconnected data centers and how fast they are to each other. Cloudflare then finds the best path at Layer 3 back to the customer data center by picking an entry location where the packet entered our network, and an exit location that is directly connected back to the customer via a Cloudflare Network Interconnect.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7ml1iHvry9yunmX1ue1xbc/e0585642360129d42581d12d199fff33/image1-49.png" />
            
            </figure><p>Using this map, Cloudflare constructs dynamic routes for each customer based on where their traffic enters Cloudflare’s network and where they need to go.</p><p>Let’s dive into some examples of how your latency reductions manifest depending on your setup.</p>
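The map-and-route computation described above amounts to a shortest-path search over measured latencies. Below is a minimal sketch in Python using plain Dijkstra over a made-up latency map; the data center names and numbers are hypothetical, and Cloudflare's actual routing also weighs packet loss, congestion, and interconnect availability.

```python
import heapq

def best_path(latency_map, entry, exit_node):
    """Dijkstra over a latency map: latency_map[a][b] = measured latency in ms."""
    dist = {entry: 0.0}
    prev = {}
    heap = [(0.0, entry)]
    visited = set()
    while heap:
        d, node = heapq.heappop(heap)
        if node in visited:
            continue
        visited.add(node)
        if node == exit_node:
            break
        for neighbor, latency in latency_map.get(node, {}).items():
            candidate = d + latency
            if candidate < dist.get(neighbor, float("inf")):
                dist[neighbor] = candidate
                prev[neighbor] = node
                heapq.heappush(heap, (candidate, neighbor))
    # Walk back from the exit to recover the chosen path.
    path, node = [exit_node], exit_node
    while node != entry:
        node = prev[node]
        path.append(node)
    return list(reversed(path)), dist[exit_node]

# Hypothetical inter-data-center latencies (ms), not real measurements.
latency_map = {
    "SEA": {"LAX": 28, "ORD": 45},
    "LAX": {"ATL": 45, "ORD": 40},
    "ORD": {"ATL": 20},
    "ATL": {},
}
path, total_ms = best_path(latency_map, "SEA", "ATL")
# Prefers SEA -> ORD -> ATL (65 ms) over SEA -> LAX -> ATL (73 ms).
```

The entry is fixed by where a packet arrives on the network; the search then selects the intermediate hops and exit with the lowest end-to-end latency.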
    <div>
      <h3>Cloudflare’s Network is Your Network</h3>
      <a href="#cloudflares-network-is-your-network">
        
      </a>
    </div>
    <p>In our <a href="/magic-makes-your-network-faster/">Speed Week blog</a> outlining how Magic products make your network faster, we outlined several different network topology examples and showed the improvements Magic Transit and Magic WAN had on their networks. Let’s supercharge those numbers by adding Argo for Packets on top of that to see how we can improve performance even further.</p><p>The example from the blog outlined a company with locations in South Carolina, Oregon, and Los Angeles. In that blog, we showed the latency improvements that Magic Transit by itself provided for one leg of the trip. That network looks like this:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/Asq79awFbjcKjQEyM1YYj/3c7780faf0a954920c40facdfef73abc/image3-23.png" />
            
            </figure>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/vZPTaPL8yyUuuosFPL28r/09b17497953c525aafefba63ccb00783/image4-25.png" />
            
            </figure><p>Let’s break that out to show the latencies between all paths on that network. Let’s assume that South Carolina connects to Atlanta, and Oregon connects to Seattle, which is the most likely scenario:</p>
<div><table><thead>
  <tr>
    <th><span>Source Location</span></th>
    <th><span>Destination Location</span></th>
    <th><span>Magic WAN one-way latency (ms)</span></th>
    <th><span>Argo for Packets one-way latency (ms)</span></th>
    <th><span>Argo improvement (in ms)</span></th>
    <th><span>Latency percent improvement</span></th>
  </tr></thead>
<tbody>
  <tr>
    <td><span>Los Angeles</span></td>
    <td><span>Atlanta</span></td>
    <td><span>49.1</span></td>
    <td><span>45</span></td>
    <td><span>4.11</span></td>
    <td><span>8.36</span></td>
  </tr>
  <tr>
    <td><span>Los Angeles</span></td>
    <td><span>Seattle</span></td>
    <td><span>32.4</span></td>
    <td><span>27.2</span></td>
    <td><span>5.18</span></td>
    <td><span>16</span></td>
  </tr>
  <tr>
    <td><span>Atlanta</span></td>
    <td><span>Los Angeles</span></td>
    <td><span>49</span></td>
    <td><span>44.9</span></td>
    <td><span>4.09</span></td>
    <td><span>8.35</span></td>
  </tr>
  <tr>
    <td><span>Atlanta</span></td>
    <td><span>Seattle</span></td>
    <td><span>78.1</span></td>
    <td><span>56.9</span></td>
    <td><span>21.2</span></td>
    <td><span>27.1</span></td>
  </tr>
  <tr>
    <td><span>Seattle</span></td>
    <td><span>Los Angeles</span></td>
    <td><span>32.2</span></td>
    <td><span>27</span></td>
    <td><span>5.22</span></td>
    <td><span>16.2</span></td>
  </tr>
  <tr>
    <td><span>Seattle</span></td>
    <td><span>Atlanta</span></td>
    <td><span>77.7</span></td>
    <td><span>56.7</span></td>
    <td><span>20.9</span></td>
    <td><span>26.9</span></td>
  </tr>
</tbody></table></div><p>For this sample customer network, Argo for Packets improves latency on every possible path. As you can see, the average percent improvement is much higher for this particular network than the global average of 10%.</p><p>Let’s take another example of a customer with locations in Asia: South Korea, Philippines, Singapore, Osaka, and Hong Kong. For a network with those locations, Argo for Packets is able to create a 17% latency reduction by finding the optimal paths between the locations that were typically trickiest to navigate, like between South Korea, Osaka, and the Philippines. A customer with many locations will see huge benefits from Argo for Packets, because it optimizes the trickiest paths on the Internet and makes them just as fast as the other paths. It removes the latency incurred by your worst network paths, improving not only your average latency numbers but especially your 90th percentile numbers.</p><p>Reducing these long-tail latencies is especially critical as workers start returning to offices all around the world.</p>
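As a sanity check, the percent improvements in the table follow directly from the two latency columns, and a quick script reproduces the roughly 17% average for this sample network, well above the 10% global figure:

```python
# (Magic WAN latency, Argo for Packets latency) in ms, from the table above.
paths = {
    ("Los Angeles", "Atlanta"): (49.1, 45.0),
    ("Los Angeles", "Seattle"): (32.4, 27.2),
    ("Atlanta", "Los Angeles"): (49.0, 44.9),
    ("Atlanta", "Seattle"): (78.1, 56.9),
    ("Seattle", "Los Angeles"): (32.2, 27.0),
    ("Seattle", "Atlanta"): (77.7, 56.7),
}

# Percent improvement per path: (before - after) / before * 100.
improvements = {
    pair: (before - after) / before * 100
    for pair, (before, after) in paths.items()
}
average = sum(improvements.values()) / len(improvements)
# average comes out around 17%.
```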
    <div>
      <h3>Next Stop: Your Office</h3>
      <a href="#next-stop-your-office">
        
      </a>
    </div>
    <p>Argo for Packets pairs brilliantly with Magic WAN and Cloudflare for Offices to create a hyper-optimized, ultra-secure, private network that adapts to whatever you throw at it. If this is your first time hearing about <a href="/cloudflare-for-offices/">Cloudflare for Offices</a>, it’s our new initiative to provide private, secure, performant connectivity to thousands of new locations around the world. And that private connectivity provides a great foundation for Argo for Packets to speed up your network.</p><p>Taking the above example from the United States, if this company adds two new locations in Boston and Dallas, those locations also see significant latency reduction through Argo for Packets. Now, their network looks like this:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1Mtub5sA25DVwRXkxKLa6o/97053ec38ab4138ed1470b24f207ae76/image2-32.png" />
            
            </figure><p>Argo for Packets also ensures that those newly added offices immediately see great performance on the private network:</p>
<div><table><thead>
  <tr>
    <th><span>Source Location</span></th>
    <th><span>Destination Location</span></th>
    <th><span>Argo improvement (in ms)</span></th>
    <th><span>Latency percent improvement</span></th>
  </tr></thead>
<tbody>
  <tr>
    <td><span>Los Angeles</span></td>
    <td><span>Dallas</span></td>
    <td><span>9.89</span></td>
    <td><span>23.3</span></td>
  </tr>
  <tr>
    <td><span>Los Angeles</span></td>
    <td><span>Atlanta</span></td>
    <td><span>0.774</span></td>
    <td><span>1.58</span></td>
  </tr>
  <tr>
    <td><span>Los Angeles</span></td>
    <td><span>Seattle</span></td>
    <td><span>0.478</span></td>
    <td><span>1.51</span></td>
  </tr>
  <tr>
    <td><span>Los Angeles</span></td>
    <td><span>Boston</span></td>
    <td><span>13.3</span></td>
    <td><span>16.8</span></td>
  </tr>
  <tr>
    <td><span>Dallas</span></td>
    <td><span>Los Angeles</span></td>
    <td><span>9.66</span></td>
    <td><span>23</span></td>
  </tr>
  <tr>
    <td><span>Dallas</span></td>
    <td><span>Atlanta</span></td>
    <td><span>0</span></td>
    <td><span>0</span></td>
  </tr>
  <tr>
    <td><span>Dallas</span></td>
    <td><span>Seattle</span></td>
    <td><span>2.96</span></td>
    <td><span>5.2</span></td>
  </tr>
  <tr>
    <td><span>Dallas</span></td>
    <td><span>Boston</span></td>
    <td><span>0.43</span></td>
    <td><span>0.955</span></td>
  </tr>
  <tr>
    <td><span>Atlanta</span></td>
    <td><span>Los Angeles</span></td>
    <td><span>0.687</span></td>
    <td><span>1.4</span></td>
  </tr>
  <tr>
    <td><span>Atlanta</span></td>
    <td><span>Dallas</span></td>
    <td><span>0</span></td>
    <td><span>0</span></td>
  </tr>
  <tr>
    <td><span>Atlanta</span></td>
    <td><span>Seattle</span></td>
    <td><span>9.7</span></td>
    <td><span>12.4</span></td>
  </tr>
  <tr>
    <td><span>Atlanta</span></td>
    <td><span>Boston</span></td>
    <td><span>4.39</span></td>
    <td><span>15.2</span></td>
  </tr>
  <tr>
    <td><span>Seattle</span></td>
    <td><span>Los Angeles</span></td>
    <td><span>0.322</span></td>
    <td><span>1.02</span></td>
  </tr>
  <tr>
    <td><span>Seattle</span></td>
    <td><span>Dallas</span></td>
    <td><span>3.11</span></td>
    <td><span>5.43</span></td>
  </tr>
  <tr>
    <td><span>Seattle</span></td>
    <td><span>Atlanta</span></td>
    <td><span>9.81</span></td>
    <td><span>12.6</span></td>
  </tr>
  <tr>
    <td><span>Seattle</span></td>
    <td><span>Boston</span></td>
    <td><span>34.7</span></td>
    <td><span>30.3</span></td>
  </tr>
  <tr>
    <td><span>Boston</span></td>
    <td><span>Los Angeles</span></td>
    <td><span>13.3</span></td>
    <td><span>16.8</span></td>
  </tr>
  <tr>
    <td><span>Boston</span></td>
    <td><span>Dallas</span></td>
    <td><span>0.386</span></td>
    <td><span>0.85</span></td>
  </tr>
  <tr>
    <td><span>Boston</span></td>
    <td><span>Atlanta</span></td>
    <td><span>4.37</span></td>
    <td><span>15</span></td>
  </tr>
  <tr>
    <td><span>Boston</span></td>
    <td><span>Seattle</span></td>
    <td><span>33.7</span></td>
    <td><span>29.6</span></td>
  </tr>
</tbody></table></div><p>Cloudflare for Offices makes it so easy to get those offices set up because customers don’t have to bring their perimeter firewalls, WAN devices, or anything else — they can just plug into Cloudflare at their building, and the power of Cloudflare One allows them to get all their <a href="https://www.cloudflare.com/network-security/">network security services</a> over a private connection to Cloudflare, optimized by Argo for Packets.</p>
    <div>
      <h3>Your Network, but Faster</h3>
      <a href="#your-network-but-faster">
        
      </a>
    </div>
    <p>Argo for Packets is the perfect complement to any of our Cloudflare One solutions: providing faster bits through your network, built on Cloudflare. Now, your SD-WAN and Magic Transit solutions can be optimized to not just be secure, but performant as well.</p><p>If you’re interested in turning on Argo for Packets or onboarding your offices to a private and secure connectivity solution, reach out to your account team to get the process started.</p> ]]></content:encoded>
            <category><![CDATA[CIO Week]]></category>
            <category><![CDATA[Argo Smart Routing]]></category>
            <category><![CDATA[Speed & Reliability]]></category>
            <guid isPermaLink="false">4L25DX1Jk3p3hGuROawGnW</guid>
            <dc:creator>David Tuber</dc:creator>
        </item>
        <item>
            <title><![CDATA[Announcing Argo for Spectrum]]></title>
            <link>https://blog.cloudflare.com/argo-spectrum/</link>
            <pubDate>Tue, 23 Nov 2021 13:58:39 GMT</pubDate>
            <description><![CDATA[ Announcing general availability of Argo for Spectrum, a way to turbo-charge any TCP based application. ]]></description>
            <content:encoded><![CDATA[ <p></p><p>Today we're excited to announce the general availability of Argo for Spectrum, a way to turbo-charge any TCP-based application. With Argo for Spectrum, you can reduce latency and packet loss and improve connectivity for any TCP application, including common protocols like Minecraft, Remote Desktop Protocol, and SFTP.</p>
    <div>
      <h3>The Internet — more than just a browser</h3>
      <a href="#the-internet-more-than-just-a-browser">
        
      </a>
    </div>
    <p>When people think of the Internet, many of us think about using a browser to view websites. Of course, it’s so much more! We often use other ways to connect to each other and to the resources we need for work. For example, you may interact with servers for work using SSH File Transfer Protocol (SFTP), git or Remote Desktop software. At home, you might play a video game on the Internet with friends.</p><p>To help protect these services against DDoS attacks, we launched Spectrum in 2018, extending Cloudflare’s <a href="https://www.cloudflare.com/ddos/">DDoS protection</a> to any TCP or UDP based protocol. Customers use it for a wide variety of use cases, including protecting video streaming (RTMP), gaming, and internal IT systems. Spectrum also supports common VoIP protocols such as SIP and RTP, which have recently seen an <a href="/attacks-on-voip-providers/">increase in DDoS ransomware attacks</a>. Many of these applications are also highly sensitive to performance issues. No one likes waiting for a file to upload or dealing with a lagging video game.</p><p>Latency and throughput are the two metrics people generally discuss when talking about network performance. Latency refers to the amount of time a piece of data (a packet) takes to traverse between two systems. Throughput refers to the number of bits you can actually send per second. This blog will discuss how these two interact and how we improve both with Argo for Spectrum.</p>
    <div>
      <h3>Argo to the rescue</h3>
      <a href="#argo-to-the-rescue">
        
      </a>
    </div>
    <p>There are a number of factors that cause poor performance between two points on the Internet, including network congestion, the distance between the two points, and packet loss. This is a problem many of our customers have, even on web applications. To help, we launched <a href="/argo/">Argo Smart Routing</a> in 2017, a way to reduce latency (or <i>time to first byte</i>, to be precise) for any HTTP request that goes to an origin.</p><p>That’s great for folks who run websites, but what if you’re working on an application that doesn’t speak HTTP? Until now, people had limited options for improving performance for these applications. That changes today with the general availability of Argo for Spectrum. Argo for Spectrum offers the same benefits as Argo Smart Routing for any TCP-based protocol.</p><p>Argo for Spectrum takes the same smarts from our network traffic and applies it to Spectrum. At the time of writing, Cloudflare sits in front of approximately 20% of the Alexa top 10 million websites. That means that we see, in near real time, which networks are congested, which are slow, and which are dropping packets. We use that data to take action by provisioning faster routes, which send packets through the Internet faster than normal routing. Argo for Spectrum works the exact same way, using the same intelligence and routing plane but extending it to any TCP-based application.</p>
    <div>
      <h3>Performance</h3>
      <a href="#performance">
        
      </a>
    </div>
    <p>But what does this mean for real application performance? To find out, we ran a set of benchmarks on Catchpoint. Catchpoint is a service that allows you to set up <a href="https://www.cloudflare.com/application-services/solutions/app-performance-monitoring/">performance monitoring</a> from all over the world. Tests are repeated at intervals and aggregate results are reported. We wanted to use a third party such as Catchpoint to get objective results (as opposed to running the tests ourselves).</p><p>For our test case, we used a file server in the Netherlands as our origin. We provisioned various tests on Catchpoint to measure file transfer performance from various places in the world: Rabat, Tokyo, Los Angeles and Lima.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1Dmiv8f30ef7K9FQ6O1Nyi/131f81007fa1c71ecebb4237f1ad759e/image2-28.png" />
            
            </figure><p>Throughput of a 10MB file. Higher is better.</p><p>Depending on location, transfers saw increases of up to 108% (for locations such as Tokyo) and <b>85% on average</b>. Why is it <b>so</b> much faster? The answer is the <a href="https://en.wikipedia.org/wiki/Bandwidth-delay_product"><i>bandwidth-delay product</i></a>. In layman's terms, the bandwidth-delay product means that the higher the latency, the lower the throughput. This is because with transmission protocols such as TCP, we need to wait for the other party to acknowledge that they received data before we can send more.</p><p>As an analogy, let’s assume we’re operating a water cleaning facility. We send unprocessed water through a pipe to a cleaning facility, but we’re not sure how much capacity the facility has! To test, we send an amount of water through the pipe. Once the water has arrived, the facility will call us up and say, “we can easily handle this amount of water at a time, please send more.” If the pipe is short, the feedback loop is quick: the water will arrive, and we’ll immediately be able to send more without having to wait. If we have a very, very long pipe, we have to stop sending water for a while before we get confirmation that the water has arrived and there’s enough capacity.</p><p>The same happens with TCP: we send an amount of data onto the wire and wait for confirmation that it arrived. If the <i>latency</i> is high, it reduces the throughput because we’re constantly waiting for confirmation. If latency is low, we can sustain a high throughput. With Spectrum and Argo, we help in two ways: first, Spectrum terminates the TCP connection close to the user, meaning that latency for that link is low. Second, Argo reduces the latency between our edge and the origin. In concert, they create a set of low-latency connections, resulting in a low overall bandwidth-delay product between users and the origin. 
The result is a much higher throughput than you would otherwise get.</p><p>Argo for Spectrum supports any TCP-based protocol. This includes commonly used protocols like SFTP, git (over SSH), RDP, and SMTP, but also media streaming and gaming protocols such as RTMP and Minecraft. Setting up Argo for Spectrum is easy: when creating a Spectrum application, just hit the “Argo Smart Routing” toggle, and traffic will automatically be smart routed.</p>
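The bandwidth-delay effect described above is easy to put into numbers. The sketch below assumes a classic 64 KiB TCP window (no window scaling) and illustrative round-trip times, 150 ms for one long user-to-origin hop versus 20 ms for a user-to-edge hop; the exact figures are hypothetical, but they show why splitting one long connection into two short ones raises the throughput ceiling.

```python
def max_tcp_throughput_mbps(window_bytes, rtt_ms):
    """Ceiling on single-connection TCP throughput: one full window per RTT."""
    return window_bytes * 8 / (rtt_ms / 1000) / 1e6

WINDOW = 64 * 1024  # classic 64 KiB TCP window, no window scaling

one_long_hop = max_tcp_throughput_mbps(WINDOW, rtt_ms=150)  # user -> distant origin
edge_hop = max_tcp_throughput_mbps(WINDOW, rtt_ms=20)       # user -> nearby edge
# ~3.5 Mbps vs ~26.2 Mbps: the shorter feedback loop raises the ceiling.
```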
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3m5hR3BEdy6PqTp7jo7XyT/1ce3ff692d52b0fa677e27c79311dcf1/image3-35.png" />
            
            </figure><p>Argo for Spectrum covers much more than just these applications: we support any TCP-based protocol. If you're interested, reach out to your account team today to see what we can do for you.</p> ]]></content:encoded>
            <category><![CDATA[Argo Smart Routing]]></category>
            <category><![CDATA[Spectrum]]></category>
            <category><![CDATA[TCP]]></category>
            <category><![CDATA[Performance]]></category>
            <guid isPermaLink="false">7YylXseoJGsIrnn3GLNzq</guid>
            <dc:creator>Achiel van der Mandele</dc:creator>
        </item>
        <item>
            <title><![CDATA[Improving Origin Performance for Everyone with Orpheus and Tiered Cache]]></title>
            <link>https://blog.cloudflare.com/orpheus/</link>
            <pubDate>Tue, 14 Sep 2021 12:59:13 GMT</pubDate>
            <description><![CDATA[ Building a better Internet means helping build more reliable and efficient services that everyone can use.  ]]></description>
            <content:encoded><![CDATA[ <p></p><p>Cloudflare’s mission is to help build a better Internet for everyone. Building a better Internet means helping build more reliable and efficient services that everyone can use. To help realize this vision, we’re announcing the free distribution of two products, one old and one new:</p><ul><li><p>Tiered Caching is now available to all customers for free. Tiered Caching reduces origin data transfer and improves performance, making web properties cheaper and faster to operate. Tiered Cache was previously a paid addition to Free, Pro, and Business plans as part of Argo.</p></li><li><p>Orpheus is now available to all customers for free. Orpheus routes around problems on the Internet to ensure that customer origin servers are reachable from everywhere, reducing the number of errors your visitors see.</p></li></ul>
    <div>
      <h3>Tiered Caching: improving website performance and economics for everyone</h3>
      <a href="#tiered-caching-improving-website-performance-and-economics-for-everyone">
        
      </a>
    </div>
    <p>Tiered Cache uses the size of our network to reduce requests to customer origins by dramatically increasing cache hit ratios. With data centers around the world, Cloudflare caches content very close to end users, but if a piece of content is not in cache, the Cloudflare edge data centers must contact the origin server to receive the cacheable content. This can be slow and places load on an origin server compared to serving directly from cache.</p><p>Tiered Cache works by dividing Cloudflare’s data centers into a hierarchy of lower-tiers and upper-tiers. If content is not cached in lower-tier data centers (generally the ones closest to a visitor), the lower-tier must ask an upper-tier to see if it has the content. If the upper-tier does not have it, only the upper-tier can ask the origin for content. This practice improves bandwidth efficiency by limiting the number of data centers that can ask the origin for content, reduces origin load, and makes websites more cost-effective to operate.</p><p>Dividing data centers like this results in improved performance for visitors because distances and links traversed between Cloudflare data centers are generally shorter and faster than the links between data centers and origins. It also reduces load on origins, making web properties more economical to operate. Customers enabling Tiered Cache can achieve a <b>60% or greater reduction in their cache miss rate</b> as compared to Cloudflare’s traditional <a href="https://www.cloudflare.com/learning/cdn/what-is-a-cdn/">CDN service</a>.</p><p>Additionally, Tiered Cache concentrates connections to origin servers so they come from a small number of data centers rather than the full set of network locations. This results in fewer open connections using server resources.</p>
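The two-tier lookup described above can be sketched in a few lines of Python. The dicts standing in for data-center caches, the function names, and the status strings are all illustrative, not Cloudflare's implementation; the point is simply that only the upper tier ever contacts the origin.

```python
def fetch(key, lower_tier, upper_tier, fetch_origin):
    """Two-tier cache lookup: only the upper tier may contact the origin."""
    if key in lower_tier:
        return lower_tier[key], "lower-tier hit"
    if key in upper_tier:
        lower_tier[key] = upper_tier[key]  # fill the lower tier on the way out
        return lower_tier[key], "upper-tier hit"
    body = fetch_origin(key)               # the only code path that hits the origin
    upper_tier[key] = body
    lower_tier[key] = body
    return body, "origin fetch"

origin_calls = []
def origin(key):
    origin_calls.append(key)
    return "body:" + key

upper = {}
body, status = fetch("/a", {}, upper, origin)    # cold caches: origin fetch
body2, status2 = fetch("/a", {}, upper, origin)  # different lower tier: upper-tier hit
```

Even though the second request arrives at a data center with a cold lower-tier cache, the upper tier answers it, so the origin is contacted only once.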
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6tyJdM0Ardg2o6mCgracqP/781fa255d1a3bd7fbfc3438770323a55/pasted-image-0.png" />
            
            </figure>
    <div>
      <h4>Tiered Cache is simple to enable:</h4>
      <a href="#tiered-cache-is-simple-to-enable">
        
      </a>
    </div>
    <ul><li><p>Log into your Cloudflare account.</p></li><li><p>Navigate to <b>Caching</b> in the dashboard.</p></li><li><p>Under <b>Caching</b>, select <b>Tiered Cache</b>.</p></li><li><p>Enable Tiered Cache.</p></li></ul><p>From there, customers will automatically be enrolled in <a href="/introducing-smarter-tiered-cache-topology-generation/#:~:text=Tiered%20Cache%20is%20part%20of,visitor%20to%20content%20is%20at">Smart Tiered Cache Topology</a> without needing to make any additional changes. Enterprise customers can select from different prefab topologies or have a custom topology created for their unique needs.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/24qHPhnucDlLNf31ullV63/4326a33f02fb3142bf061f3d6018b40a/Screen-Shot-2021-09-13-at-10.14.33-AM.png" />
            
            </figure><p>Smart Tiered Cache dynamically selects the single best upper tier for each of your website’s origins with no configuration required. We will dynamically find the single best upper tier for an origin by using Cloudflare’s performance and routing data. Cloudflare collects latency data for each request to an origin. Using this latency data, we can determine how well any upper-tier data center is connected with an origin and can empirically select the best data center with the lowest latency to be the upper-tier for an origin.</p><p>Today, Smart Tiered Cache is being offered to <b>ALL Cloudflare customers for free</b>, in contrast to other CDNs who may charge exorbitant fees for similar or worse functionality. Current Argo customers will get additional benefits described <a href="/argo-v2/">here</a>. We think that this is a foundational improvement to the performance and economics of running a website.</p><p>But what happens if an upper-tier can’t reach an origin?</p>
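Conceptually, the selection reduces to picking the candidate upper tier with the lowest observed latency to the origin. A minimal sketch, with hypothetical data centers and latency samples (the real system also factors in routing data):

```python
from statistics import mean

def pick_upper_tier(latency_samples):
    """Return the candidate data center with the lowest mean latency to the origin."""
    return min(latency_samples, key=lambda dc: mean(latency_samples[dc]))

# Hypothetical per-request latencies (ms) from three candidate upper tiers.
samples = {
    "ORD": [18.2, 21.5, 19.9],
    "IAD": [9.8, 11.4, 10.1],
    "DFW": [25.0, 24.1, 26.3],
}
chosen = pick_upper_tier(samples)  # "IAD" has the lowest mean latency
```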
    <div>
      <h3>Orpheus: solving origin reachability problems for everyone</h3>
      <a href="#orpheus-solving-origin-reachability-problems-for-everyone">
        
      </a>
    </div>
    <p>Cloudflare is a reverse proxy that receives traffic from end users and proxies requests back to customer servers or origins. To be successful, Cloudflare needs to be reachable by end users while simultaneously being able to reach origins. With end users around the world, Cloudflare needs to be able to reach origins from multiple points around the world at the same time. This is easier said than done! The Internet is not homogeneous, and diverse Cloudflare network locations do not necessarily take the same paths to a given customer origin at any given time. A customer origin may be reachable from some networks but not from others.</p><p>Cloudflare developed Argo to be the Waze of the Internet, allowing our network to react to changes in Internet traffic conditions and route around congestion and breakages in real time, ensuring end users always have a good experience. Argo Smart Routing provides amazing performance and reliability improvements to our customers.</p><p>Enter Orpheus. Orpheus provides reachability benefits for customers by finding unreachable paths on the Internet in real time and guiding traffic away from those paths, ensuring that Cloudflare will always be able to reach an origin no matter what is happening on the Internet.</p><p>Today, we’re excited to announce that Orpheus is available to and being used by all our customers.</p>
    <div>
      <h3>Fewer 522s</h3>
      <a href="#fewer-522s">
        
      </a>
    </div>
    <p>You may have seen this error before at one time or another.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6599jQn6DEqybzjrQpQa60/0410df7eb2f542cbd89fde544ca3d67a/image2-11.png" />
            
            </figure><p>This error indicates that a user was unable to reach content because Cloudflare couldn’t reach the origin. Because of the unpredictability of the Internet described above, users may see this error even when an origin is up and able to receive traffic.</p><p>So why do you see this error? The 522 error occurs when network instability causes traffic sent by Cloudflare to fail either before it reaches the origin, or on the way back from the origin to Cloudflare. This is the equivalent of either Cloudflare or your origin sending a request and never getting a response. Both sides think that they’re fine, but the network path between them is not reachable at all. This causes customer pain.</p><p>Orpheus solves that pain, ensuring that no matter where users are or where the origin is, an Internet application will always be reachable from Cloudflare.</p>
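<p>Conceptually, a 522 is a connect timeout surfacing as an HTTP status: the proxy sends a request toward the origin and never hears back. A minimal sketch of that mapping (a hypothetical helper, not Cloudflare’s actual edge code):</p>

```python
# Sketch: surfacing an origin connect failure as a 522-style status.
# (Hypothetical helper; not Cloudflare's edge implementation.)
import socket

def probe_origin(host: str, port: int, timeout_s: float = 2.0) -> int:
    """Return 200 if a TCP connection to the origin succeeds, 522 otherwise."""
    try:
        with socket.create_connection((host, port), timeout=timeout_s):
            return 200
    except socket.timeout:
        # Neither endpoint saw an error; the path between them never answered.
        return 522
    except OSError:
        return 522  # e.g. host or network unreachable
```

<p>Both the origin and the proxy can be perfectly healthy here; a broken network path between them is enough to produce the timeout.</p>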
    <div>
      <h3>How it works</h3>
      <a href="#how-it-works">
        
      </a>
    </div>
    <p>Orpheus builds and provisions routes from Cloudflare to origins by analyzing data from users on every path from Cloudflare, ordering those paths per data center with the goal of eliminating connection errors and minimizing packet loss. If Orpheus detects errors on the current path from Cloudflare back to a customer origin, it steers subsequent traffic from the impacted network path to the healthiest path available.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6ay5HW825T1WjKsgpfxOSb/bd1229a73126b57aad5fd0653719296e/pasted-image-0--1-.png" />
            
            </figure><p>This is similar to how Argo works, with one key difference: Argo always steers traffic down the fastest path, whereas Orpheus is reactive, steering traffic down healthy (though not necessarily the fastest) paths only when needed.</p>
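<p>The reactive steering just described might be modeled like this. It is a toy sketch under assumed data shapes and thresholds, not the production system:</p>

```python
# Toy model of Orpheus-style reactive path steering: stay on the current
# path until its error rate crosses a threshold, then fail over to the
# healthiest alternative. (Hypothetical names and threshold.)

def steer(paths: dict[str, dict], current: str, max_error_rate: float = 0.05) -> str:
    """paths maps path name -> {"errors": int, "attempts": int}.
    Return the path traffic should use next."""
    def error_rate(p: str) -> float:
        s = paths[p]
        return s["errors"] / s["attempts"] if s["attempts"] else 0.0
    if error_rate(current) <= max_error_rate:
        return current  # current path is healthy; don't move traffic
    # React: shift traffic to the healthiest (lowest error rate) path.
    return min(paths, key=error_rate)

paths = {
    "direct":      {"errors": 40, "attempts": 100},  # lossy direct path
    "via-boston":  {"errors": 1,  "attempts": 100},
    "via-seattle": {"errors": 5,  "attempts": 100},
}
print(steer(paths, current="direct"))  # prints "via-boston"
```

<p>Note the asymmetry with Argo: this logic only moves traffic when the current path is unhealthy, rather than continuously chasing the fastest path.</p>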
    <div>
      <h3>Improving origin reachability for customers</h3>
      <a href="#improving-origin-reachability-for-customers">
        
      </a>
    </div>
    <p>Let’s look at an example.</p><p>Barry has an origin hosted on WordPress in Chicago for his son’s band. This zone primarily sees traffic from three locations: the location closest to his son in Seattle, the location closest to him in Boston, and the location closest to his parents in Tampa, who check in on their grandson’s site daily for updates.</p><p>One day, a link between Tampa and the Chicago origin gets cut by a wandering backhoe, and Tampa loses some connectivity back to the Chicago origin. As a result, Barry’s parents start to see failures when they connect to the site, and origin reachability decreases. Orpheus helps here by finding alternate paths for Barry’s parents, whether through Boston, Seattle, or any location in between that isn’t impacted by the fiber cut near Tampa.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7nGtu3bi6IFo4tRu23sjw8/ed581a22fd8bb7c63f79fd28b8ce36c6/pasted-image-0--2-.png" />
            
            </figure><p>So even though there is packet loss between one of Cloudflare’s data centers and Barry’s origin, traffic still succeeds: Orpheus steers it through a different Cloudflare data center whose path to the origin isn’t lossy.</p>
    <div>
      <h3>How much does Orpheus help my origin reachability?</h3>
      <a href="#how-much-does-orpheus-help-my-origin-reachability">
        
      </a>
    </div>
    <p>In our rollout of Orpheus for customers, we observed that Orpheus reduced origin unreachability by 23%, improving reachability from 99.87% to 99.90%. Here is a chart showing the improvement Orpheus provides (lower is better):</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7MQRci25WsBnmAzcM4Xlpi/866e96d193212605efe017da8f8157c5/imageLikeEmbed--3-.png" />
            
            </figure><p>We measure this improvement by tracking 522 rates for every data center-origin pair and comparing traffic that traversed Orpheus routes with traffic that went directly back to the origin. Orpheus was especially helpful on slightly lossy paths, which accumulate small numbers of failures over long periods of time when traffic goes directly to the origin.</p><p>Note that we’ll never get this number to 0% because, with or without Orpheus, some origins really are unreachable: they’re down!</p>
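<p>For the curious, the 23% figure is the relative drop in the failure rate, not a difference of the absolute reachability numbers; the arithmetic works out like this:</p>

```python
# Reachability went from 99.87% to 99.90%; the 23% improvement is the
# relative reduction in the unreachability (522) rate.
before = 1 - 0.9987           # 0.13% of requests failed without Orpheus
after = 1 - 0.9990            # 0.10% failed with Orpheus
relative_reduction = (before - after) / before
print(f"{relative_reduction:.0%}")  # prints "23%"
```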
    <div>
      <h3>Orpheus makes Cloudflare products better</h3>
      <a href="#orpheus-makes-cloudflare-products-better">
        
      </a>
    </div>
    <p>Orpheus pairs well with some of our products that are already designed to provide highly available services on an uncertain Internet. Let’s go over the interactions between Orpheus and three of our products: Load Balancing, Cloudflare Network Interconnect, and Tiered Cache.</p>
    <div>
      <h3>Load Balancing</h3>
      <a href="#load-balancing">
        
      </a>
    </div>
    <p>Orpheus and Load Balancing work together to provide high reachability for every origin endpoint. Load Balancing automatically selects endpoints based on health probes, ensuring that if one origin endpoint isn’t working, the customer’s application remains available and operational. Orpheus finds reachable paths from Cloudflare to every origin. In tandem, these two products provide a highly available and reachable experience for customers.</p>
    <div>
      <h3>Cloudflare Network Interconnect</h3>
      <a href="#cloudflare-network-interconnect">
        
      </a>
    </div>
    <p>Orpheus and Cloudflare Network Interconnect (CNI) combine to provide a highly reachable path at all times, no matter where in the world you are. Consider Acme, a company connected to the Internet by a single provider that suffers frequent outages. Orpheus will do its best to steer traffic around lossy paths, but if there’s only one path back to the customer, Orpheus can’t find a less-lossy alternative. Cloudflare Network Interconnect solves this problem by providing a path, separate from the transit provider, that any Cloudflare data center can reach. CNI gives Orpheus a viable path back to Acme’s origin to engage from any data center in the world when loss occurs.</p>
    <div>
      <h3>Shields for All</h3>
      <a href="#shields-for-all">
        
      </a>
    </div>
    <p>Orpheus and Tiered Cache combine to build an adaptive shield around an origin. Tiered Cache topologies let customers deflect much of their static traffic away from the origin to reduce load, and Orpheus helps ensure that any traffic that must go back to the origin traverses highly available links.</p>
    <div>
      <h3>Improving origin performance for everyone</h3>
      <a href="#improving-origin-performance-for-everyone">
        
      </a>
    </div>
    <p>The Internet is a growing, ever-changing ecosystem. With the release of Orpheus and Tiered Cache for everyone, we’ve given you the ability to navigate whatever the Internet has in store to provide the best possible experience to your customers.</p>
    <div>
      <h3>Watch on Cloudflare TV</h3>
      <a href="#watch-on-cloudflare-tv">
        
      </a>
    </div>
    <div></div> ]]></content:encoded>
            <category><![CDATA[Speed Week]]></category>
            <category><![CDATA[Speed & Reliability]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[Argo Smart Routing]]></category>
            <category><![CDATA[Tiered Cache]]></category>
            <guid isPermaLink="false">g3XxZ5z7j5WRWz41O2ZSX</guid>
            <dc:creator>David Tuber</dc:creator>
            <dc:creator>Alex Krivit</dc:creator>
        </item>
        <item>
            <title><![CDATA[Argo 2.0: Smart Routing Learns New Tricks]]></title>
            <link>https://blog.cloudflare.com/argo-v2/</link>
            <pubDate>Tue, 14 Sep 2021 12:59:10 GMT</pubDate>
            <description><![CDATA[ Starting today, all Free, Pro, and Business plan Argo customers will see improved performance with no additional configuration or charge.  ]]></description>
            <content:encoded><![CDATA[ 
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/TMGu5wPy8iJt7VUIUVChX/4802c8fdae0cefe9c84ad08059842664/Argo-2.0.png" />
            
            </figure><p>We launched Argo in 2017 to improve performance on the Internet. Argo uses real-time global network information to route around brownouts, cable cuts, packet loss, and other problems on the Internet. Argo makes the network that Cloudflare relies on—the Internet—faster, more reliable, and more secure on every hop around the world.</p><p>Without any complicated configuration, Argo is able to use real-time traffic data to pick the fastest path across the Internet, improving performance and delivering more satisfying experiences to your customers and users.</p><p>Today, Cloudflare is announcing several upgrades to Argo’s intelligent routing:</p><ul><li><p>When it launched, Argo was entirely focused on the “middle mile,” speeding up connections from Cloudflare to our customers’ servers. Argo now delivers optimal routes from clients and users <b><i>to</i></b> Cloudflare, further reducing end-to-end latency while still providing the impressive edge to origin performance that Argo is known for. These last-mile improvements reduce end user round trip times by up to 40%.</p></li><li><p>We’re also adding support for accelerating pure IP workloads, allowing Magic Transit and Magic WAN customers to build IP networks to enjoy the performance benefits of Argo.</p></li></ul><p>Starting today, all Free, Pro, and Business plan Argo customers will see improved performance with no additional configuration or charge. Enterprise customers have already enjoyed the last mile performance improvements described here for some time. Magic Transit and WAN customers can contact their account team to request Early Access to Argo Smart Routing for Packets.</p>
    <div>
      <h3>What’s Argo?</h3>
      <a href="#whats-argo">
        
      </a>
    </div>
    <p>Argo finds the best and fastest possible path for your traffic on the Internet. Every day, Cloudflare carries hundreds of billions of requests across our network and the Internet. Because our network, our customers, and their end users are well distributed globally, all of these requests flowing across our infrastructure paint a great picture of how different parts of the Internet are performing at any given time.</p><p>Just like Waze examines real data from real drivers to give you accurate, uncongested — and sometimes unorthodox — routes across town, Argo Smart Routing uses the timing data Cloudflare collects from each request to pick faster, more efficient routes across the Internet.</p><p>In practical terms, Cloudflare’s network is expansive in its reach. Some Internet links in a given region may be congested and cause poor performance (a literal traffic jam). By understanding this is happening and using alternative network locations and providers, Argo can put traffic on a less direct, but faster, route from its origin to its destination.</p><p>These benefits are not theoretical: enabling Argo Smart Routing shaves an average of 33% off HTTP time to first byte (TTFB).</p><p>One other thing we’re proud of: we’ve stayed super focused on making it easy to use. One click in the dashboard enables better, smarter routing, bringing the full weight of Cloudflare’s network, data, and engineering expertise to bear on making your traffic faster. Advanced analytics allow you to understand exactly how Argo is performing for you around the world.</p><p>You can read a lot more about how Argo works in our <a href="/argo">original launch blog post</a>.</p>
    <div>
      <h3>Even More Blazing Fast</h3>
      <a href="#even-more-blazing-fast">
        
      </a>
    </div>
    <p>We’ve continuously improved Argo since the day it was launched, making it faster, quicker to respond to changes on the Internet, and allowing more types of traffic to flow over smart routes.</p><p>Argo’s new performance optimizations improve last mile latencies and reduce time to first byte even further. Argo’s last mile optimizations can save up to 40% on last mile <a href="https://www.cloudflare.com/learning/cdn/glossary/round-trip-time-rtt/">round trip time (RTT)</a> with commensurate improvements to end-to-end latency.</p><p>Running benchmarks against an origin server in the central United States, with visitors coming from around the world, Argo delivered the following results:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5QhmKy3QxqX1bVS0ocSBEv/d99552b53695ddf774047a51b168b1c7/imageLikeEmbed.png" />
            
            </figure><p>The Argo improvements on the last mile reduced overall time to first byte by 39%, and reduced end-to-end latencies by 5% overall:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5KTqyWDQYEei9EdJEtcqyP/f3979e9eb5e4dd948fb71593787e20a2/Argo-Latency.png" />
            
            </figure>
    <div>
      <h3>Faster, better caching</h3>
      <a href="#faster-better-caching">
        
      </a>
    </div>
    <p>Argo customers don’t just see benefits to their dynamic traffic. Argo’s newfound skills benefit static traffic as well: because Argo now finds the best path to Cloudflare, client TTFB for cache hits sees the same last mile improvement as dynamic traffic.</p>
    <div>
      <h3>Getting access to faster Argo</h3>
      <a href="#getting-access-to-faster-argo">
        
      </a>
    </div>
    <p>The best part about all these improvements? They’re already deployed and enabled for all Argo customers! These optimizations have been live for Enterprise customers for some time and were enabled for Free, Pro, and Business plans this week.</p>
    <div>
      <h3>Moving Down the Stack: Argo Smart Routing for Packets</h3>
      <a href="#moving-down-the-stack-argo-smart-routing-for-packets">
        
      </a>
    </div>
    <p>Customers use Magic Transit and Magic WAN to create their own IP networks on top of Cloudflare’s network, with access to a full suite of network functions (firewalls, DDoS mitigation, and more) delivered as a service. This allows customers to build secure, private, global networks without purchasing specialized hardware. Now, Argo Smart Routing for Packets lets these customers build those IP networks with the performance benefits of Argo.</p><p>Consider a fictional gaming company, Golden Fleece Games. Golden Fleece deployed Magic Transit to mitigate attacks by malicious actors on the Internet. They want to provide a quality game to their users while staying up, but they also need their service to be as fast as possible: if the game sees additional latency, users won’t play it, and even if the service is technically up, the increased latency will drive users away. For Golden Fleece, being slow is just as bad as being down.</p><p>Finance customers have similar requirements for low-latency, high-security scenarios. Consider Jason Financial, a fictional Magic Transit customer using Argo Smart Routing for Packets. Jason Financial employees connect to Cloudflare in New York, and their requests are routed to their data center, which is connected through a Cloudflare Network Interconnect attached to Cloudflare in Singapore. For Jason Financial, reducing latency is extraordinarily important: if their network is slow, the latency penalties they incur can <a href="https://research.tabbgroup.com/report/v06-007-value-millisecond-finding-optimal-speed-trading-infrastructure">literally cost them millions of dollars</a> due to how fast the stock market moves. Jason wants Magic Transit and other Cloudflare One products to secure their network and prevent attacks, but improving performance matters just as much.</p><p>Argo Smart Routing for Packets gives these customers the security they need at speeds faster than before: the best of both worlds, security and performance. Now, let’s talk a bit about how it works.</p>
    <div>
      <h3>A bird’s eye view of the Internet</h3>
      <a href="#a-birds-eye-view-of-the-internet">
        
      </a>
    </div>
    <p>Argo Smart Routing for Packets picks the fastest possible path between two points. But how does Argo know that the chosen route is the fastest? As with all Argo products, the answer comes by analyzing a wealth of network data already available on the Cloudflare edge. In Argo for HTTP or Argo for TCP, Cloudflare is able to use existing timing data from traffic that’s already being sent over our edge to optimize routes. This allows us to improve which paths are taken as traffic changes and congestion on the Internet happens. However, to build Smart Routing for Packets, the game changed, and we needed to develop a new approach to collect latency data at the IP layer.</p><p>Let’s go back to the Jason Financial case. Before, Argo would understand that the number of paths that are available from Cloudflare’s data centers back to Jason’s data center is proportional to the number of data centers Cloudflare has multiplied by the number of distinct interconnections between each data center. By looking at the traffic to Singapore, Cloudflare can use existing Layer 4 traffic and network analytics to determine the best path. But Layer 4 is not Layer 3, and when you move down the stack, you lose some insight into things like round trip time (RTT), and other metrics that compose time to first byte because that data is only produced at higher levels of the application stack. It can become harder to figure out what the best path actually is.</p><p>Optimizing performance at the IP layer can be more difficult than at higher layers. This is because protocol and application layers have additional headers and stateful protocols that allow for further optimization. For example, connection reuse is a performance improvement that can only be realized at higher layers of the stack because HTTP requests can reuse existing TCP connections. 
IP layers don’t have the concept of connections or requests at all: it’s just packets flowing over the wire.</p><p>To help bridge the gap, Cloudflare makes use of a data source that already exists for every Magic Transit customer today: health check probes. Every Magic Transit customer leverages health check probes from every single Cloudflare data center back to the customer origin. These probes are used to determine tunnel health for Magic Transit, so that Cloudflare knows which paths back to the origin are healthy. The probes also contain a variety of information that can be used to improve performance. By combining health check probes with existing Layer 4 data, Cloudflare can build a better understanding of one-way latencies and construct a map of all the interconnected data centers and how fast they are relative to each other. Once a customer gets a Cloudflare Network Interconnect, Argo can use the data center-to-data center probes to create an alternate path for that customer, separate from the public Internet.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3kkxgXreHeVPEFAl9evYK3/f3f995b39d3ee24645c68228f398bcdc/Screen-Shot-2021-09-12-at-10.39.58-AM.png" />
            
            </figure><p>Using this map, Cloudflare can construct dynamic routes for each customer based on where their traffic enters Cloudflare’s network and where they need to go. This allows us to find the optimal route for Jason Financial and allows us to always pick the fastest path.</p>
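<p>Given such a map, finding the optimal route reduces to a shortest-path search over measured latencies. A minimal sketch using Dijkstra’s algorithm (the graph and latency numbers below are invented for illustration; this is not the production routing code):</p>

```python
# Sketch: picking the lowest-latency route over a data-center latency map
# built from probe data, via Dijkstra's algorithm. (Example latencies are
# made up; real values come from Cloudflare's probe measurements.)
import heapq

def fastest_route(latency_ms, src, dst):
    """latency_ms: {node: {neighbor: one_way_latency_ms}}.
    Returns (total_latency_ms, [path of data centers])."""
    pq = [(0.0, src, [src])]
    seen = set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nbr, ms in latency_ms.get(node, {}).items():
            if nbr not in seen:
                heapq.heappush(pq, (cost + ms, nbr, path + [nbr]))
    return float("inf"), []

latency_ms = {
    "NYC": {"SIN": 230.0, "LAX": 60.0},   # congested direct path to Singapore
    "LAX": {"SIN": 85.0},                 # intermediate hop
    "SIN": {},
}
print(fastest_route(latency_ms, "NYC", "SIN"))  # (145.0, ['NYC', 'LAX', 'SIN'])
```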
    <div>
      <h3>Packet-Level Latency Reductions</h3>
      <a href="#packet-level-latency-reductions">
        
      </a>
    </div>
    <p>We’ve kind of buried the lede here! We’ve talked about how hard it is to optimize performance for IP traffic. The important bit: despite all these difficulties, Argo Smart Routing for Packets is <b>able to provide a 10% average latency improvement worldwide</b> in our internal testing!</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7yXerXCL7gctXSGJNrllV9/88288281b1f29c98d5e1a03551f97cff/Argo-for-Packets.png" />
            
            </figure><p>Depending on your network topology, you may see latency reductions that are even higher!</p>
    <div>
      <h3>How do I get Argo Smart Routing for Packets?</h3>
      <a href="#how-do-i-get-argo-smart-routing-for-packets">
        
      </a>
    </div>
    <p>Argo Smart Routing for Packets is in closed beta and is available only for Magic Transit customers who have a Cloudflare Network Interconnect provisioned. If you are a Magic Transit customer interested in seeing the improved performance of Argo Smart Routing for Packets for yourself, reach out to your account team today! If you don’t have Magic Transit but want to take advantage of bigger performance gains while acquiring uncompromised levels of network security, begin your Magic Transit onboarding process today!</p>
    <div>
      <h3>What’s next for Argo</h3>
      <a href="#whats-next-for-argo">
        
      </a>
    </div>
    <p>Argo’s roadmap is simple: get ever faster, for any type of traffic.</p><p>Argo’s recent optimizations will help customers move data across the Internet at as close to the speed of light as possible. Internally, “how fast are we compared to the speed of light” is one of our engineering team’s key success metrics. We’re not done until we’re even.</p>
    <div>
      <h3>Watch on Cloudflare TV</h3>
      <a href="#watch-on-cloudflare-tv">
        
      </a>
    </div>
    <div></div> ]]></content:encoded>
            <category><![CDATA[Speed Week]]></category>
            <category><![CDATA[Argo Smart Routing]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[Speed & Reliability]]></category>
            <guid isPermaLink="false">5GRqhYUamcpuo7MNsWreFe</guid>
            <dc:creator>David Tuber</dc:creator>
        </item>
        <item>
            <title><![CDATA[Argo and the Cloudflare Global Private Backbone]]></title>
            <link>https://blog.cloudflare.com/argo-and-the-cloudflare-global-private-backbone/</link>
            <pubDate>Mon, 13 May 2019 13:00:00 GMT</pubDate>
            <description><![CDATA[ Today, we are announcing a faster, smarter Argo. One that leverages richer data sets, smarter routing algorithms, and under the hood advancements to deliver a faster-than-ever experience to end users.  ]]></description>
            <content:encoded><![CDATA[ <p></p><p>Welcome to Speed Week! Each day this week, we’re going to talk about something Cloudflare is doing to make the Internet meaningfully faster for everyone.</p><p>Cloudflare has built a massive network of data centers in 180 cities in 75 countries. One way to think of Cloudflare is a global system to transport bits securely, quickly, and reliably from any point A to any other point B on the planet.</p><p>To make that a reality, we built Argo. Argo uses real-time global network information to route around brownouts, cable cuts, packet loss, and other problems on the Internet. Argo makes the network that Cloudflare relies on—the Internet—faster, more reliable, and more secure on every hop around the world.</p><p>We launched Argo two years ago, and it now carries over 22% of Cloudflare’s traffic. On an average day, Argo cuts the amount of time Internet users spend waiting for content by 112 years!</p><p>As Cloudflare and our traffic volumes have grown, it now makes sense to build our own private backbone to add further security, reliability, and speed to key connections between Cloudflare locations.</p><p>Today, we’re introducing the Cloudflare Global Private Backbone. It’s been in operation for a while now and links Cloudflare locations with private fiber connections.</p><p>This private backbone benefits all Cloudflare customers, and it shines in combination with Argo. Argo can select the best available link across the Internet on a per data center-basis, and takes full advantage of the Cloudflare Global Private Backbone automatically.</p><p>Let’s open the hood on Argo and explain how our backbone network further improves performance for our customers.</p>
    <div>
      <h3>What’s Argo?</h3>
      <a href="#whats-argo">
        
      </a>
    </div>
    <p>Argo is like Waze for the Internet. Every day, Cloudflare carries hundreds of billions of requests across our network and the Internet. Because our network, our customers, and their end-users are well distributed globally, all of these requests flowing across our infrastructure paint a great picture of how different parts of the Internet are performing at any given time.</p><p>Just like Waze examines real data from real drivers to give you accurate, uncongested (and sometimes unorthodox) routes across town, Argo Smart Routing uses the timing data Cloudflare collects from each request to pick faster, more efficient routes across the Internet.</p><p>In practical terms, Cloudflare’s network is expansive in its reach. Some of the Internet links in a given region may be congested and cause poor performance (a literal traffic jam). By understanding this is happening and using alternative network locations and providers, Argo can put traffic on a less direct, but faster, route from its origin to its destination.</p><p>These benefits are not theoretical: <b>enabling Argo Smart Routing shaves an average of 33%</b> off HTTP time to first byte (TTFB).</p><p>One other thing we’re proud of: we’ve stayed super focused on making it easy to use. One click in the dashboard enables better, smarter routing, bringing the full weight of Cloudflare’s network, data, and engineering expertise to bear on making your traffic faster. Advanced analytics allow you to understand exactly how Argo is performing for you around the world.</p><p>You can read a lot more about how Argo works in our original <a href="/argo">launch blog post</a>.</p><p>So far, we’ve been talking about Argo at a functional level: you turn it on and it makes requests that traverse the Internet to your origin faster. How does it actually work? 
Argo is dependent on a few things to make its magic happen: Cloudflare’s network, up-to-the-second performance data on how traffic is moving on the Internet, and machine learning routing algorithms.</p>
    <div>
      <h3>Cloudflare’s Global Network</h3>
      <a href="#cloudflares-global-network">
        
      </a>
    </div>
    <p>Cloudflare maintains a network of data centers around the world, and our network continues to grow significantly. Today, we have <a href="https://www.cloudflare.com/network/">more than 180</a> data centers in 75 countries. That’s an additional 69 data centers since we launched Argo in May 2017.</p><p>In addition to adding new locations, Cloudflare is constantly working with network partners to add connectivity options to our network locations. A single Cloudflare data center may be peered with a dozen networks, connected to multiple Internet eXchanges (IXs), connected to multiple transit providers (e.g. Telia, GTT, etc), and now, connected to our own physical backbone. A given destination may be reachable over multiple different links from the same location; each of these links will have different performance and reliability characteristics.</p><p>This increased network footprint is important in making Argo faster. Additional network locations and providers mean Argo has more options at its disposal to route around network disruptions and congestion. Every time we add a new network location, we exponentially grow the number of routing options available to any given request.</p>
    <div>
      <h3>Better routing for improved performance</h3>
      <a href="#better-routing-for-improved-performance">
        
      </a>
    </div>
    <p>Argo requires the huge global network we’ve built to do its thing. But it wouldn’t do much of anything if it didn’t have the smarts to actually take advantage of all our data centers and cables between them to move traffic faster.</p><p>Argo combines multiple <a href="https://www.cloudflare.com/learning/ai/what-is-machine-learning/">machine learning techniques</a> to build routes, test them, and disqualify routes that are not performing as we expect.</p><p>The generation of routes is performed on data using “offline” optimization techniques: Argo’s route construction algorithms take an input data set (timing data) and fixed optimization target (“minimize TTFB”), outputting routes that it believes satisfy this constraint.</p><p>Route disqualification is performed by a separate pipeline that has no knowledge of the route construction algorithms. These two systems are intentionally designed to be adversarial, allowing Argo to be both aggressive in finding better routes across the Internet but also adaptive to rapidly changing network conditions.</p><p>One specific example of Argo’s smarts is its ability to distinguish between multiple potential connectivity options as it leaves a given data center. We call this “transit selection”.</p><p>As we discussed above, some of our data centers may have a dozen different, viable options for reaching a given destination IP address. It’s as if you subscribed to every available ISP at your house, and you could choose to use any one of them for each website you tried to access. 
Transit selection enables Cloudflare to pick the fastest available path in real-time at every hop to reach the destination.</p><p>With transit selection, Argo is able to specify both:</p><ol><li><p>Network location waypoints on the way to the origin.</p></li><li><p>The <i>specific transit provider or link</i> at each waypoint in the journey of the packet all the way from the source to the destination.</p></li></ol><p>To analogize this to Waze, Argo giving directions <i>without</i> transit selection is like telling someone to drive to a waypoint (go to New York from San Francisco, passing through Salt Lake City), without specifying the roads to actually take <i>to</i> Salt Lake City or New York. <i>With</i> transit selection, we’re able to give full turn-by-turn directions — take I-80 out of San Francisco, take a left here, enter the Salt Lake City area using SR-201 (because I-80 is congested around SLC), etc. This allows us to route around issues on the Internet with much greater precision.</p>
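<p>Transit selection as described can be modeled as choosing, at every hop along the route, the best of several available provider links. A toy sketch (hypothetical providers, hops, and latencies, not Cloudflare’s actual data plane):</p>

```python
# Toy model of transit selection: at each waypoint on a route, pick the
# lowest-latency provider link to the next hop. (Illustrative data only.)

def turn_by_turn(route, links):
    """route: ordered waypoints, e.g. ["SFO", "SLC", "NYC"].
    links: {(hop_a, hop_b): {provider: latency_ms}}.
    Returns [(hop_a, hop_b, chosen_provider, latency_ms), ...]."""
    directions = []
    for a, b in zip(route, route[1:]):
        providers = links[(a, b)]
        best = min(providers, key=providers.get)  # fastest link for this hop
        directions.append((a, b, best, providers[best]))
    return directions

links = {
    ("SFO", "SLC"): {"transit-a": 18.0, "transit-b": 14.5, "backbone": 12.0},
    ("SLC", "NYC"): {"transit-a": 45.0, "backbone": 38.0},
}
for hop in turn_by_turn(["SFO", "SLC", "NYC"], links):
    print(hop)
```

<p>The key point is that the choice is made per hop, not just per route: the same waypoint sequence can ride different providers on different legs.</p>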
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2x806c835ECNPmNDRr5fRT/7e9928cdc07f28c50732c68cd22d2ea1/Argo-Map_2x.png" />
            
            </figure><p>Transit selection requires logic in our inter-data center data plane (the components that actually move data across our network) to allow for differentiation between different providers and links available in each location. Some interesting network automation and advertisement techniques allow us to be much more discerning about which link actually gets picked to move traffic.</p><p>Without modifications to the Argo data plane, those options would be abstracted away by our edge routers, with the choice of transit left to BGP. We plan to talk more publicly about the routing techniques used in the future.</p><p>We are able to directly measure the impact transit selection has on Argo customer traffic. Looking at global average improvement, <b>transit selection gets customers an additional 16% TTFB latency benefit</b> over taking standard BGP-derived routes. That’s huge!</p><p>One thing we think about: Argo can itself change network conditions when moving traffic from one location or provider to another by <a href="https://www.citylab.com/transportation/2018/09/citylab-university-induced-demand/569455/">inducing demand</a> (adding additional data volume because of improved performance) and changing traffic profiles. With great power comes great intricacy.</p>
    <div>
      <h3>Adding the Cloudflare Global Private Backbone</h3>
      <a href="#adding-the-cloudflare-global-private-backbone">
        
      </a>
    </div>
    <p>Given our diversity of transit and connectivity options in each of our data centers, and the smarts that allow us to pick between them, why did we go through the time and trouble of building a backbone for ourselves? The short answer: operating our own private backbone allows us much more control over end-to-end performance and capacity management.</p><p>When we buy transit or use a partner for connectivity, we’re relying on that provider to manage the link’s health and ensure that it stays uncongested and available. Some networks are better than others, and conditions change all the time.</p><p>As an example, here’s a measurement of jitter (variance in round trip time) between two of our data centers, Chicago and Newark, over a transit provider’s network:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5j3Wu37eyBkXzcOEiaV1tH/70d3697a9a37bd3cbe77ed88ac2277c8/image4-1.png" />
            
            </figure><p>Average jitter over the pictured 6 hours is 4ms, with average round trip latency of 27ms. Some amount of latency is something we just need to learn to live with; the speed of light is a tough physical constant to do battle with, and network protocols are built to function over links with high or low latency.</p><p>Jitter, on the other hand, is “bad” because it is unpredictable and network protocols and applications built on them often degrade quickly when jitter rises. Jitter on a link is usually caused by more buffering, queuing, and general competition for resources in the routing hardware on either side of a connection. As an illustration, having a VoIP conversation over a network with high latency is annoying but manageable. Each party on a call will notice “lag”, but voice quality will not suffer. Jitter causes the conversation to garble, with packets arriving on top of each other and unpredictable glitches making the conversation unintelligible.</p><p>Here’s the same jitter chart between Chicago and Newark, except this time, transiting the Cloudflare Global Private Backbone:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5aHclF30Av4bIGTOkHFLg4/80e32ef8cb6de89cbafd2eaf3a449a2e/image3.png" />
            
            </figure><p>Much better! Here we see a jitter measurement of 536μs (microseconds), almost eight times better than the measurement over a transit provider between the same two sites.</p><p>The combination of fiber we control end-to-end and Argo Smart Routing allows us to unlock the full potential of Cloudflare’s backbone network. Argo’s routing system knows exactly how much capacity the backbone has available, and can manage how much additional data it tries to push through it. By controlling both ends of the pipe, and the pipe itself, we can guarantee certain performance characteristics and build those expectations into our routing models. The same principles do not apply to transit providers and networks we don’t control.</p>
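<p>For intuition about the two numbers quoted above: given a series of round-trip samples, average RTT is simply the mean, and jitter can be approximated as the mean absolute difference between consecutive samples (a simplified definition; the sample values below are invented, not the actual Chicago–Newark data):</p>

```python
from statistics import mean

def rtt_stats(samples_ms):
    """Return (average RTT, jitter), with jitter taken as the mean absolute
    difference between consecutive RTT samples (a simplified definition)."""
    avg = mean(samples_ms)
    jitter = mean(abs(b - a) for a, b in zip(samples_ms, samples_ms[1:]))
    return avg, jitter

# Invented samples: similar average latency, very different jitter.
transit_like  = [23.0, 31.0, 25.0, 29.0]   # swings of several ms
backbone_like = [26.9, 27.1, 27.0, 27.0]   # sub-millisecond swings
```

<p>Network protocols tolerate a stable 27 ms far better than a 27 ms average that swings by several milliseconds, which is why the two charts look so different despite similar average latency.</p>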
    <div>
      <h3>Latency, be gone!</h3>
      <a href="#latency-be-gone">
        
      </a>
    </div>
    <p>Our private backbone is another tool available to us to improve performance on the Internet. Combining Argo’s cutting-edge machine learning and direct fiber connectivity between points on our large network allows us to route customer traffic with predictable, excellent performance.</p><p>We’re excited to see the backbone and its impact continue to expand.</p><p>Speaking personally as a product manager, Argo is really fun to work on. We make customers happier by making their websites, APIs, and networks faster. Enabling Argo allows customers to do that with one click, and see immediate benefit. Under the covers, huge investments in physical and virtual infrastructure begin working to accelerate traffic as it flows from its source to destination.  </p><p>From an engineering perspective, our weekly goals and objectives are directly measurable — did we make our customers faster by doing additional engineering work? When we ship a new optimization to Argo and immediately see our charts move up and to the right, we know we’ve done our job.</p><p>Building our physical private backbone is the latest thing we’ve done in our need for speed.</p><p>Welcome to Speed Week!</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7DjjNeUjd2lQWKw6rPgbXR/726957024645198035c25b569b64d1fd/image1-2.png" />
            
            </figure><p><a href="https://dash.cloudflare.com/traffic">Activate Argo</a> now, or <a>contact sales</a> to learn more!</p> ]]></content:encoded>
            <category><![CDATA[Argo Smart Routing]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[Speed Week]]></category>
            <category><![CDATA[Speed & Reliability]]></category>
            <guid isPermaLink="false">4YEJmuSBsWUEo0ey4KPCQW</guid>
            <dc:creator>Rustam Lalkaka</dc:creator>
        </item>
        <item>
            <title><![CDATA[Argo Tunnel + DC/OS]]></title>
            <link>https://blog.cloudflare.com/argo-tunnel-and-dc-os/</link>
            <pubDate>Mon, 21 Jan 2019 13:00:00 GMT</pubDate>
            <description><![CDATA[ Cloudflare is proud to partner with Mesosphere on their new Argo Tunnel offering available within their DC/OS (Data Center / Operating System) catalogue! Before diving deeper into the offering itself, we’ll first do a quick overview of the Mesosphere platform, DC/OS. ]]></description>
            <content:encoded><![CDATA[ <p>Cloudflare is proud to partner with Mesosphere on their new Argo Tunnel offering available within their DC/OS (Data Center / Operating System) catalogue! Before diving deeper into the offering itself, we’ll first do a quick overview of the Mesosphere platform, DC/OS.</p>
    <div>
      <h2>What is Mesosphere and DC/OS?</h2>
      <a href="#what-is-mesosphere-and-dc-os">
        
      </a>
    </div>
    <p>Mesosphere DC/OS provides application developers and operators an easy way to consistently deploy and run applications and data services on cloud providers and on-premise infrastructure. The unified developer and operator experience across clouds makes it easy to realize use cases like global reach, resource expansion, and business continuity.</p><p>In this multi-cloud world, Cloudflare and Mesosphere DC/OS are great complements. Mesosphere DC/OS provides the same common services experience for developers and operators, and Cloudflare provides the same common service access experience across cloud providers. DC/OS helps tremendously in avoiding vendor lock-in to a single provider, while Cloudflare can load balance traffic intelligently (in addition to many other services) at the edge between providers. This new offering will allow you to load balance through the use of Argo Tunnel.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2G5ZOTv2xHKdlDqCus7HnU/bd4cf9e0d12ac9fbb93e6b3a9476b8c0/multicloud-neautral_2x.png" />
            
            </figure>
    <div>
      <h3>Quick Tunnel Refresh</h3>
      <a href="#quick-tunnel-refresh">
        
      </a>
    </div>
    <p>Cloudflare Argo Tunnel is a private connection between your services and Cloudflare. Tunnel makes it such that only traffic that routes through the Cloudflare network can reach your service.</p><p>Cloudflare’s lightweight Argo Tunnel daemon creates an encrypted Tunnel between your origin web server and Cloudflare’s nearest data center — all without opening any public inbound ports. In other words, it’s a private link. Only Cloudflare can see the service and communicate with it, and for the rest of the internet, the service is reachable only through the hostname configured on Cloudflare. Check this out if you’d like to learn more.</p><p>By using Argo Tunnel, DC/OS is able to load balance your traffic to any of your hosts, wherever they are running on Earth, in any cloud provider! Need more instances in Paris? Just launch them! Are instances more affordable in a specific provider? Just launch them there, and thanks to Argo Tunnel and DC/OS your traffic will be directed to exactly where it belongs.</p>
    <div>
      <h3>Requirements</h3>
      <a href="#requirements">
        
      </a>
    </div>
    <p>In order to use this application in DC/OS you must have a zone on Cloudflare with the <a href="/Argo/">Argo service enabled</a>. Argo can be enabled for any plan type and is billed on usage. Because this application requires the use of DC/OS ‘secrets’, the Enterprise version of DC/OS is required. To get started on Cloudflare please see <a href="https://www.cloudflare.com/plans/">here</a> and sign up for an account. To do the same with DC/OS, please see <a href="https://docs.mesosphere.com/1.12/installing/evaluation/">here</a>.</p>
    <div>
      <h3>Cloudflare Argo Tunnel Support for DC/OS</h3>
      <a href="#cloudflare-argo-tunnel-support-for-dc-os">
        
      </a>
    </div>
    <p>Argo Tunnel is the fast way to make services that run on DC/OS private agents (that are only bound to the DC/OS internal network) accessible over the public Internet.</p><p>When you launch the Tunnel for your service, it creates persistent outbound connections to the two closest Cloudflare PoPs, through which the entire Cloudflare network routes traffic to reach the service associated with the Tunnel. There is no need to configure DNS, update a NAT configuration, or modify firewall rules (connections are outbound). The Argo Tunnel exposed service gets all the benefits offered by the Cloudflare network (e.g. DDoS protection, CDN and performance, TLS, etc.).</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/59NfzsE9nLW3aaKhzrSR1I/f0eb0aaa4d0e97feb5dfa942e9919446/Argo-Tunnel-DC-OS.png" />
            
            </figure><p>The Cloudflare Argo Tunnel Service is available from Mesosphere DC/OS catalog.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1ILYxEsM8NriMNpWhNZrf7/fd0f075da7013e08e4d6a0c01e537636/Argo-2.png" />
            
            </figure><p>Configuration of the Argo Tunnel requires you to specify three things:</p><ul><li><p>Cloudflare Hostname - The DNS name of your service on the Cloudflare network. This is the address where you wish your service to be available from on the Internet. (Note: adding a zone to Cloudflare is extremely simple; you can get started at <a href="https://www.cloudflare.com/plans/">https://www.cloudflare.com/plans/</a>.)</p></li><li><p>Local Service URL - The local URL of the service that you want to make available, on the machines running Argo Tunnel.</p></li><li><p>Load Balancer Pool - The load balancer pool you want the service to be part of. Use any value you like, keeping the value consistent for tunnels you wish to load balance traffic onto as a unit. Inside Cloudflare you can manage how your traffic is balanced between and inside your pools.</p></li></ul>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5VsuRFMMuVxqBNba5cUjR0/0f06d0d22282b82610390f3c951bd3e6/LB-Configuration.png" />
            
            </figure><p>Assuming you do that setup for a service in a West Coast DC/OS cluster and an East Coast DC/OS cluster, with a respective us-west and us-east LB pool, then you end up with a Cloudflare load balancer globally balancing traffic between these clusters. The load balancer can be configured to do geosteering, which you can learn more about <a href="https://support.cloudflare.com/hc/en-us/articles/115000081911-Tutorial-How-to-Set-Up-Load-Balancing-Intelligent-Failover-on-Cloudflare">here</a>.</p>
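<p>Conceptually, geo-steering picks the pool closest to the visitor and fails over when that pool is unhealthy. Here is a toy sketch of that decision (not Cloudflare's implementation or API; the pool names and region codes are invented):</p>

```python
# Toy geo-steering: prefer a healthy pool in the client's region, otherwise
# fall back to any healthy pool. Real Cloudflare Load Balancing is configured
# in the dashboard; this just illustrates the routing decision it makes.

def pick_pool(client_region, pools):
    """pools maps pool name -> {"region": str, "healthy": bool}."""
    for name, p in pools.items():
        if p["healthy"] and p["region"] == client_region:
            return name
    for name, p in pools.items():  # no regional match: any healthy pool
        if p["healthy"]:
            return name
    return None  # no healthy pool anywhere

pools = {
    "us-west": {"region": "west", "healthy": True},
    "us-east": {"region": "east", "healthy": True},
}
```

<p>With the West Coast and East Coast clusters above, an East Coast visitor lands on us-east until that pool fails its health checks, at which point traffic falls back to us-west.</p>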
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5XLHj1d6PnXMgyoRrj7ICc/84715d0fc25f14045c0a36454d7d1f12/LB-Settings.png" />
            
            </figure><p>For more details see the DC/OS Argo Tunnel <a href="https://github.com/dcos/examples/tree/master/cloudflare-argotunnel">documentation</a>. We hope this partnership is a meaningful step towards a simple multi-cloud solution for DC/OS customers.</p><p>To sign up for Cloudflare click <a href="https://www.cloudflare.com/plans/">here</a> and to sign up for DC/OS click <a href="https://docs.mesosphere.com/1.12/installing/evaluation/">here</a>. We hope this partnership between Cloudflare and Mesosphere will help you drive private, secure, and performant multi-cloud deployments.</p> ]]></content:encoded>
            <category><![CDATA[Data Center]]></category>
            <category><![CDATA[Argo Smart Routing]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[Cloudflare Tunnel]]></category>
            <category><![CDATA[Connectivity Cloud]]></category>
            <guid isPermaLink="false">61gNuny6p1knOUnyxbpVzZ</guid>
            <dc:creator>Tom Brightbill</dc:creator>
        </item>
        <item>
            <title><![CDATA[Argo Tunnels: Spread the Load]]></title>
            <link>https://blog.cloudflare.com/argo-tunnels-spread-the-load/</link>
            <pubDate>Wed, 20 Jun 2018 23:39:27 GMT</pubDate>
            <description><![CDATA[ We recently announced Argo Tunnel which allows you to deploy your applications anywhere, even if your webserver is sitting behind a NAT or firewall. Now, with support for load balancing, you can ]]></description>
            <content:encoded><![CDATA[ <p>We recently announced <a href="https://www.cloudflare.com/products/argo-tunnel/">Argo Tunnel</a> which allows you to deploy your applications anywhere, even if your webserver is sitting behind a NAT or firewall. Now, with support for load balancing, you can spread the traffic across your tunnels.</p>
    <div>
      <h3>A Quick Argo Tunnel Recap</h3>
      <a href="#a-quick-argo-tunnel-recap">
        
      </a>
    </div>
    <p>Argo Tunnel allows you to expose your web server to the internet without having to open routes in your firewall or set up dedicated routes. Your servers stay safe inside your infrastructure. All you need to do is install <i>cloudflared</i> (our open source agent) and point it to your server. <i>cloudflared</i> will establish secure connections to our global network and securely forward requests to your service. Since <i>cloudflared</i> initializes the connection, you don't need to open a hole in your firewall or create a complex routing policy. Think of it as a lightweight GRE tunnel from Cloudflare to your server.</p>
    <div>
      <h3>Tunnels and Load Balancers</h3>
      <a href="#tunnels-and-load-balancers">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3fRrc22YYv0PJJ8LmwBkp0/aff5d8ac12f3c85e28e30e8551fe8727/Salt_Cars.jpg" />
            
            </figure><p><a href="https://creativecommons.org/licenses/by-nc-nd/2.0/">CC BY-NC-ND 2.0</a> <a href="https://commons.wikimedia.org/wiki/File:Salt_Cars.jpg">image</a> by Carey Lyons</p><p>If you are running a simple service as a proof of concept or for local development, a single Argo Tunnel can be enough. For real-world deployments though, you almost always want multiple instances of your service running on separate machines, availability zones, or even countries. Cloudflare’s distributed Load Balancing can now transparently balance traffic across however many Argo Tunnel instances you choose to create. Together this provides you with failure tolerance and, when combined with our geo-routing capabilities, improved performance around the world.</p><p>Want more performance in Australia? Just spin up more instances. Want to save money on the weekends? Just turn them off. Leave your firewalls closed and let Argo Tunnel handle the service discovery and routing for you.</p><p>On accounts with Load Balancing enabled, when you launch <i>cloudflared</i> to expose your web service, you can specify a load balancer you want to attach to, and we take care of the rest:</p>
            <pre><code>cloudflared --lb-pool my_lb_pool --hostname myshinyservice.example.com --url http://localhost:8080</code></pre>
            <p>In the example above we'll take care of:</p><ul><li><p>Creating the DNS entry for your new service (myshinyservice.example.com).</p></li><li><p>Creating the Load Balancer (myshinyservice), if it doesn't exist.</p></li><li><p>Creating the Load Balancer Pool (my_lb_pool), if it doesn't exist.</p></li><li><p>Opening a tunnel and adding it to the pool.</p></li><li><p>Proxying all traffic from myshinyservice.example.com all the way to your server running on your localhost on port 8080.</p></li><li><p>Removing the tunnels from the pool when you shutdown <i>cloudflared</i>.</p></li></ul><p>If you run the same command from another machine with another server it will automatically join the pool and start sharing the load across both. You're able to run a load balanced web service across multiple servers with a simple command. You don't even need to login to the Cloudflare UI.</p>
    <div>
      <h3>Load Balancer Features</h3>
      <a href="#load-balancer-features">
        
      </a>
    </div>
    <p>Now that you're running a resilient scalable web service, you'll probably want to delve into the other features the Cloudflare Load Balancing has to offer. Go to the traffic page and take a look at your newly minted Load Balancer. From there you can specify health checks, health check policy, routing policy and a fall-back pool in case your service is down.</p>
    <div>
      <h3>Try it Out</h3>
      <a href="#try-it-out">
        
      </a>
    </div>
    <p>Head over to your dashboard and make sure you have Argo (Traffic-&gt;Argo-&gt;Tiered Caching + Smart Routing) and Load Balancer (Traffic-&gt;Load Balancing) enabled. Start with the <a href="https://developers.cloudflare.com/argo-tunnel/quickstart/">Argo Tunnel Quickstart Guide</a> and run <i>cloudflared</i> with the --lb-pool option, just like we did in the example above. At the moment we limit our non-Enterprise customers to just a handful of origins, but expect that limitation to be removed in the near future. For now, play away!</p> ]]></content:encoded>
            <category><![CDATA[Argo Smart Routing]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[Speed & Reliability]]></category>
            <category><![CDATA[Reliability]]></category>
            <category><![CDATA[Load Balancing]]></category>
            <category><![CDATA[Cloudflare Tunnel]]></category>
            <guid isPermaLink="false">2jVs4W2uXMGJuUuy7wzzx5</guid>
            <dc:creator>Joaquin Madruga</dc:creator>
        </item>
        <item>
            <title><![CDATA[Cloudflare Argo Tunnel with Rust+Raspberry Pi]]></title>
            <link>https://blog.cloudflare.com/cloudflare-argo-tunnel-with-rust-and-raspberry-pi/</link>
            <pubDate>Fri, 06 Apr 2018 14:00:00 GMT</pubDate>
            <description><![CDATA[ Serving content from a Rust web server running on a Raspberry Pi from your home to the world, with a Cloudflare Argo Tunnel. ]]></description>
            <content:encoded><![CDATA[ <p>Yesterday Cloudflare launched <a href="https://developers.cloudflare.com/argo-tunnel/">Argo Tunnel</a>. In the words of the product team:</p><blockquote><p>Argo Tunnel exposes applications running on your local web server, on any network with an Internet connection, without adding DNS records or configuring a firewall or router. It just works.</p></blockquote><p>Once I grokked this, the first thing that came to mind was that I could actually use one of my Raspberry Pi's sitting around to serve a website, without:</p><ul><li><p>A flaky DDNS running on my router</p></li><li><p>Exposing my home network to the world</p></li><li><p>A cloud VM</p></li></ul><p>Ooooh... so exciting.</p>
    <div>
      <h3>The Rig</h3>
      <a href="#the-rig">
        
      </a>
    </div>
    <p>I'll assume you already have a Raspberry Pi with Raspbian on it.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/yhfAIyjzM1fchLSHmBBLg/f32e891d40339d5d66139573d9c0e17b/rig.JPG.jpeg" />
            
            </figure><p>Plug the Pi into your router. It should now have an IP address. Look that up in your router’s admin UI:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6nD8bAb7meAZ3X44R4o8YJ/632c6a588761b86e7c7be9f9e87f8c0e/devices.png" />
            
            </figure><p>OK, that's promising. Let's connect to that IP using the default pi/raspberry credentials:</p>
            <pre><code>$ ssh 192.168.8.26 -l pi
pi@192.168.8.26's password: 

The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
Last login: Sun Mar 18 23:24:11 2018 from stevens-air-2.lan
pi@raspberrypi:~ $ </code></pre>
            <p>We're in!</p><p><b>Pro tip: a quick way to figure out which Pi model you have:</b></p>
            <pre><code>pi@raspberrypi:~ $ cat /proc/cpuinfo | grep 'Revision' | awk '{print $3}' | sed 's/^1000//'
a22082</code></pre>
            <p>Then look up the value in the <a href="https://elinux.org/RPi_HardwareHistory">Raspberry Pi revision history</a>. I have a Raspberry Pi 3 Model B.</p>
    <div>
      <h3>Internet connectivity</h3>
      <a href="#internet-connectivity">
        
      </a>
    </div>
    <p>OK, so we have a Pi connected to our router. Let's make 100% sure it can connect to the Internet.</p>
            <pre><code>pi@raspberrypi:~ $ curl -I https://www.cloudflare.com
HTTP/2 200
date: Tue, 20 Mar 2018 22:54:20 GMT
content-type: text/html; charset=utf-8
set-cookie: __cfduid=dfb9c369ae12fe6eace48ed9b51aedbb01521586460; expires=Wed, 20-Mar-19 22:54:20 GMT; path=/; domain=.cloudflare.com; HttpOnly
x-powered-by: Express
cache-control: no-cache
x-xss-protection: 1; mode=block
strict-transport-security: max-age=15780000; includeSubDomains
x-content-type-options: nosniff
x-frame-options: SAMEORIGIN
served-in-seconds: 0.025
set-cookie: __cflb=3128081942; path=/; expires=Wed, 21-Mar-18 21:54:20 GMT
expect-ct: max-age=604800, report-uri="https://report-uri.cloudflare.com/cdn-cgi/beacon/expect-ct"
server: cloudflare
cf-ray: 3febc2914beb7f06-SFO-DOG</code></pre>
            <p>That first line HTTP/2 200 is the OK status code, which is enough to tell us we can connect out to the Internet. Normally this wouldn't be particularly exciting, as it's allowing connections <b>in</b> that causes problems. That's the promise of Argo Tunnel, however: it says on the tin that we don't need to poke any firewall holes or configure any DNS. Big claim, let's test it.</p>
    <div>
      <h3>Install the Agent</h3>
      <a href="#install-the-agent">
        
      </a>
    </div>
    <p>Go to <a href="https://developers.cloudflare.com/argo-tunnel/downloads/">https://developers.cloudflare.com/argo-tunnel/downloads/</a> to get the url for the ARM build for your Pi. At the time of writing it was <a href="https://bin.equinox.io/c/VdrWdbjqyF/cloudflared-stable-linux-arm.tgz">https://bin.equinox.io/c/VdrWdbjqyF/cloudflared-stable-linux-arm.tgz</a></p>
            <pre><code>$ wget https://bin.equinox.io/c/VdrWdbjqyF/cloudflared-stable-linux-arm.tgz
Resolving bin.equinox.io (bin.equinox.io)... 54.243.137.45, 107.22.233.132, 50.19.252.69, ...
Connecting to bin.equinox.io (bin.equinox.io)|54.243.137.45|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 5293773 (5.0M) [application/octet-stream]
Saving to: ‘cloudflared-stable-linux-arm.tgz’
...</code></pre>
            <p>Untar it</p>
            <pre><code>$ mkdir argo-tunnel
$ tar -xvzf cloudflared-stable-linux-arm.tgz -C ./argo-tunnel
cloudflared
$ cd argo-tunnel</code></pre>
            <p>Check you can execute it.</p>
            <pre><code>$ ./cloudflared --version
cloudflared version 2018.3.0 (built 2018-03-02-1820 UTC)</code></pre>
            <p>Looks OK. Now, we're hoping that the agent will magically connect from the Pi out to the nearest Cloudflare POP. We obviously want that to be secure. Furthermore, we're expecting that when a request comes inbound, it magically gets routed through Cloudflare's network and back to my Raspberry Pi.</p><p>Seems unlikely, but let’s have faith. Here is my mental model of what's happening:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2yhnPd0ciTQSUCgu03CHu7/b5360d9194e28944ed230e6cb5e19fc2/Argo-Tunnel-Diagram.png" />
            
            </figure><p>So let's create that secure tunnel. I guess we need some sort of certificate or credentials...</p>
            <pre><code>$ ./cloudflared login</code></pre>
            <p>You'll see output in the command window similar to this:</p>
            <pre><code>A browser window should have opened at the following URL:

https://www.cloudflare.com/a/warp?callback=&lt;some token&gt;

If the browser failed to open, open it yourself and visit the URL above.</code></pre>
            <p>Our headless Pi doesn't have a web browser, so let's copy the url from the console into the browser on our host dev machine.</p><p>This part assumes you already have a domain on Cloudflare. If you don't, go to the <a href="https://support.cloudflare.com/hc/en-us/articles/201720164">setup guide</a> to get started.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4NdfPejq5R5wFIBaw6fjmx/04998230fa9a9fae7cef05e0f282cc43/authorize-choose-domain.png" />
            
            </figure><p>We're being asked which domain we want this tunnel to sit behind. I've chosen <b>pacman.wiki</b>. Click Authorize.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2e3js21yw4OkiapEadWalz/bbe4e38f7dcffae2b6f3a98395d5ba9b/authorize-confirm.png" />
            
            </figure>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/49TShvgc3EHdZ9O1a8s1Be/2a346df5abf86d07f15e2f8f271a2619/authorize-complete.png" />
            
            </figure><p>You should now see this back on your pi:</p>
            <pre><code>You have successfully logged in.
If you wish to copy your credentials to a server, they have been saved to:
/home/pi/.cloudflared/cert.pem</code></pre>
            <p>Aha! That answers how the tunnel gets secured. The agent has created a certificate and will use that to secure the connection back to Cloudflare. Now let's create the tunnel and serve some content!</p><p><code>$ cloudflared --hostname [hostname] --hello-world</code></p><p><b>hostname</b> is a fully-qualified domain name under the domain you chose to Authorize for Argo Tunnels earlier. I'm going to use <b>tunnel.pacman.wiki</b></p>
            <pre><code>$ ./cloudflared --hostname tunnel.pacman.wiki --hello-world
INFO[0002] Proxying tunnel requests to https://127.0.0.1:46727 
INFO[0000] Starting Hello World server at 127.0.0.1:53030 
INFO[0000] Starting metrics server                       addr="127.0.0.1:53031"
INFO[0005] Connected to LAX                             
INFO[0010] Connected to SFO-DOG                         
INFO[0012] Connected to LAX                             
INFO[0012] Connected to SFO-DOG  </code></pre>
            <p>Huh, interesting. So, we've connected to my nearest POP(s). I'm in the San Francisco Bay Area, so SFO and LAX seem reasonable. What now though? Surely that's not it? If I'm reading this right, I can go to my browser, enter <a href="https://tunnel.pacman.wiki">https://tunnel.pacman.wiki</a> and I'll get a hello world page... surely not.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4HzTsm0DkHPzZSQ527zAYb/1dec3343ed9ec63465c60fb28dbe60ed/success-1.png" />
            
            </figure><p>And back on the Pi</p>
            <pre><code>INFO[0615] GET https://127.0.0.1:62627/ HTTP/1.1 CF-RAY=4067701b598e8184-LAX
INFO[0615] 200 OK  CF-RAY=4067701b598e8184-LAX</code></pre>
            <p>Mind. Blown. So what happened here exactly...</p><ol><li><p>The agent on the Pi created a secure tunnel (a persistent http2 connection) back to the nearest Cloudflare Argo Tunnels server</p></li><li><p>The tunnel was secured with the certificate generated by the agent.</p></li><li><p>A request for <a href="https://tunnel.pacman.wiki">https://tunnel.pacman.wiki</a> went from my browser out through the Internet and was routed to the nearest Cloudflare <a href="https://www.cloudflare.com/learning/cdn/glossary/anycast-network/">datacenter</a></p></li><li><p>Cloudflare received the request, saw the domain was Cloudflare managed and saw a tunnel set up to that hostname</p></li><li><p>The request got routed over that http2 connection back to my Pi</p></li></ol><p>I'm serving traffic over the Internet, from my Pi, with no ports opened on my home router. That is so cool.</p>
    <div>
      <h3>More than hello world</h3>
      <a href="#more-than-hello-world">
        
      </a>
    </div>
    <p>If you're reading this, I've won my battle with the Cloudflare blog editing team about long form vs short form content :p</p><p>Serving hello world is great, but I want to expose a real web server. If you're like me, if you can find any vaguely relevant reason to use Rust, then you use Rust. If you're also like me, you want to try one of these async web servers the cool kids talk about on <a href="https://www.reddit.com/r/rust/">/r/rust</a> like <a href="https://gotham.rs/">gotham</a>. Let's do it.</p><p>First, install rust using <a href="https://www.rustup.rs/">rustup</a>.</p><p><code>$ curl https://sh.rustup.rs -sSf | sh</code></p><p>When prompted, just hit enter</p>
            <pre><code>1) Proceed with installation (default)
2) Customize installation
3) Cancel installation
...
  stable installed - rustc 1.24.1 (d3ae9a9e0 2018-02-27)
...</code></pre>
            <p>OK, Rust is installed. Now clone Gotham and build the hello_world example:</p>
            <pre><code>$ git clone https://github.com/gotham-rs/gotham
$ cd gotham/examples/hello_world
$ cargo build</code></pre>
            <p><b>Pro tip:</b> if cargo is not found, run <code>source $HOME/.cargo/env</code>. It will be automatic in future sessions.</p><p>As cargo does its magic, you can think to yourself about how it's a great package manager, how there really are a lot of dependencies and how OSS really is standing on the shoulders of giants of giants of giants of giants—eventually you'll have the example built.</p>
            <pre><code>...
Compiling gotham_examples_hello_world v0.0.0 (file:///home/pi/argo-tunnel/gotham/examples/hello_world)
    Finished dev [unoptimized + debuginfo] target(s) in 502.83 secs
    
$ cd ../../target/debug
$ ./gotham_examples_hello_world 
Listening for requests at http://127.0.0.1:7878</code></pre>
            <p>We have a rust web server listening on a local port. Let's connect the tunnel to that.</p><p><code>./cloudflared --hostname gotham.pacman.wiki http://127.0.0.1:7878</code></p><p>Type <b>gotham.pacman.wiki</b> into your web browser and you'll see those glorious words, "Hello, world".</p>
    <div>
      <h2>Wait, this post was meant to be <i>more</i> than hello world.</h2>
      <a href="#wait-this-post-was-meant-to-be-more-than-hello-world">
        
      </a>
    </div>
    <p>OK, challenge accepted. Rust, being fancy and modern, is perfectly happy with Unicode. Let's serve some of that.</p>
            <pre><code>$ cd examples/hello_world
$ nano src/main.rs</code></pre>
            <p>Replace the hello world string:</p><p><code>Some((String::from("Hello World!").into_bytes(), mime::TEXT_PLAIN)),</code></p><p>with some Unicode and a content-type hint so the browser knows how to render it:</p><p><code>Some((String::from("&lt;html&gt;&lt;head&gt;&lt;meta http-equiv='Content-Type' content='text/html; charset=UTF-8'&gt;&lt;/head&gt;&lt;body&gt;&lt;marquee&gt;Pᗣᗧ•••MᗣN&lt;/marquee&gt;&lt;/body&gt;&lt;/html&gt;").into_bytes(), mime::TEXT_HTML)),</code></p><p>Build and run:</p>
            <pre><code>$ cargo build
...
$ ./gotham_examples_hello_world
Listening for requests at http://127.0.0.1:7878</code></pre>
            <p><code>$ ./cloudflared --hostname gotham.pacman.wiki http://127.0.0.1:7878</code></p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2O0v47vQuYvFKKPsCY134y/22c094cfb746e462c6433e6db493794d/pacman-2.gif" />
            
            </figure><p>And now we have some Unicode served from our Pi at home over the Internet by a highly asynchronous web server written in a fast, safe, high-level language. Cool.</p>
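            <p>Why does this work without any encoding ceremony? Rust's <code>String</code> is guaranteed to be valid UTF-8, so <code>into_bytes()</code> emits exactly the bytes that the <code>charset=UTF-8</code> hint promises the browser. A standalone sanity check (my own illustration, separate from the gotham example):</p>

```rust
fn main() {
    // Rust strings are always valid UTF-8, so into_bytes() yields bytes a
    // browser can decode once we declare charset=UTF-8.
    let body = String::from("Pᗣᗧ•••MᗣN");
    let bytes = body.clone().into_bytes();
    // 9 characters, but 21 bytes: the Canadian-syllabics ghosts and the
    // bullet points each take 3 bytes in UTF-8.
    assert_eq!(body.chars().count(), 9);
    assert_eq!(bytes.len(), 21);
    // And the encoding round-trips losslessly.
    assert_eq!(String::from_utf8(bytes).unwrap(), body);
}
```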
    <div>
      <h3>Are we done?</h3>
      <a href="#are-we-done">
        
      </a>
    </div>
    <p>We should probably auto-start both the agent and the web server on boot so they don't die when we end our SSH session.</p>
            <pre><code>$ sudo ./cloudflared service install
INFO[0000] Failed to copy user configuration. Before running the service, 
ensure that /etc/cloudflared contains two files, cert.pem and config.yml  
error="open cert.pem: no such file or directory"</code></pre>
            <p>Nice error! OK, the product team have helpfully documented what to put in that file <a href="https://developers.cloudflare.com/argo-tunnel/reference/config/">here</a>.</p>
            <pre><code>$ sudo cp ~/.cloudflared/cert.pem /etc/cloudflared
$ sudo nano /etc/cloudflared/config.yml</code></pre>
            
            <pre><code>#config.yml
hostname: gotham.pacman.wiki
url: http://127.0.0.1:7878</code></pre>
            
    <div>
      <h4>Autostart for the Agent</h4>
      <a href="#autostart-for-the-agent">
        
      </a>
    </div>
    
            <pre><code>$ sudo ./cloudflared service install
INFO[0000] Using Systemd                                
ERRO[0000] systemctl: Created symlink from /etc/systemd/system/multi-user.target.wants/cloudflared.service to /etc/systemd/system/cloudflared.service.
INFO[0000] systemctl daemon-reload       </code></pre>
            
    <div>
      <h4>Autostart for the Web Server</h4>
      <a href="#autostart-for-the-web-server">
        
      </a>
    </div>
    <p>Copy the web server executable somewhere outside the gotham source tree so you can play around with the source code. I copied mine to <code>/home/pi/argo-tunnel/server/bin/</code>.</p><p><code>nano /etc/rc.local</code></p><p>Add the line <code>/home/pi/argo-tunnel/server/bin/gotham_examples_hello_world &amp;</code> just before <code>exit 0</code>.</p><p><code>sudo reboot</code></p><p>On restart, SSH back in and check that both the agent and the web server are running.</p>
            <pre><code>$ sudo ps -aux | grep tunnel
root       501  0.1  0.2  37636  1976 ?        Sl   06:30   0:00 /home/pi/argo-tunnel/server/bin/gotham_examples_hello_world
root       977 15.7  1.4 801292 13972 ?        Ssl  06:30   0:01 /home/pi/argo-tunnel/cloudflared --config /etc/cloudflared/config.yml --origincert /etc/cloudflared/cert.pem --no-autoupdate</code></pre>
            <p>Profit.</p> ]]></content:encoded>
            <category><![CDATA[Rust]]></category>
            <category><![CDATA[Argo Smart Routing]]></category>
            <category><![CDATA[Raspberry Pi]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[Speed & Reliability]]></category>
            <category><![CDATA[Programming]]></category>
            <category><![CDATA[Cloudflare Tunnel]]></category>
            <guid isPermaLink="false">1fVcP30JWQOAu5llzgN6Yk</guid>
            <dc:creator>Steven Pack</dc:creator>
        </item>
        <item>
            <title><![CDATA[Argo Tunnel: A Private Link to the Public Internet]]></title>
            <link>https://blog.cloudflare.com/argo-tunnel/</link>
            <pubDate>Thu, 05 Apr 2018 13:00:00 GMT</pubDate>
            <description><![CDATA[ Argo Tunnel lets you deploy services that are hidden on the internet. In other words, Argo Tunnel is like a P.O. box: someone can send you packets without knowing your real address. Only Cloudflare can see the server and communicate with it. ]]></description>
            <content:encoded><![CDATA[ <p>Photo from <a href="https://commons.wikimedia.org/wiki/File:Argo-Tunnel-2009.jpg">Wikimedia Commons</a></p><p>Today we’re introducing <a href="https://www.cloudflare.com/products/argo-tunnel/">Argo Tunnel</a>, a private connection between your web server and Cloudflare. Tunnel makes it so that only traffic that routes through Cloudflare can reach your server.</p><p>You can think of Argo Tunnel as a virtual <a href="https://en.wikipedia.org/wiki/Post-office_box">P.O. box</a>. It lets someone send you packets without knowing your real address. In other words, it’s a private link. Only Cloudflare can see the server and communicate with it, and for the rest of the internet, it’s unroutable, as if the server is not even there.</p>
    <div>
      <h3><b>How this used to be done</b></h3>
      <a href="#how-this-used-to-be-done">
        
      </a>
    </div>
    <p>This type of private deployment used to be accomplished with GRE tunnels. But GRE tunnels are expensive and slow; they don’t really make sense on the 2018 internet.</p><p>GRE is a tunneling protocol for sending data between two servers by simulating a physical link. Configuring a GRE tunnel requires coordination between network administrators from both sides of the connection. It is an expensive service that is usually only available for large corporations with dedicated budgets. The GRE protocol encapsulates packets inside other packets, which means that you will have to either lower the MTU of your origin servers, or have your router do packet fragmentation, leading to slower responses.</p><p>We wanted to find a way to provide the same security as a GRE tunnel but without the expense or hassle. Ideally, it would even speed up connections instead of slowing them down. With that direction, the team started to build Tunnel.</p>
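    <p>To make the MTU point concrete, here is the arithmetic as a small sketch. It assumes a plain IPv4 GRE tunnel with no optional GRE fields; the header sizes come from the IPv4 and GRE specs, not from this post:</p>

```rust
// Why GRE forces a lower MTU: every inner packet gets wrapped in a new
// 20-byte outer IPv4 header plus a 4-byte base GRE header, and that
// overhead comes out of the link's MTU budget.
const OUTER_IPV4_HEADER: u32 = 20;
const GRE_BASE_HEADER: u32 = 4;

fn inner_mtu(link_mtu: u32) -> u32 {
    link_mtu - OUTER_IPV4_HEADER - GRE_BASE_HEADER
}

fn main() {
    // On a standard 1500-byte Ethernet link, the origin must drop its MTU
    // to 1476 or rely on fragmentation along the path.
    assert_eq!(inner_mtu(1500), 1476);
    println!("inner MTU on a 1500-byte link: {}", inner_mtu(1500));
}
```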
    <div>
      <h3><b>Deploy Quickly, Safely</b></h3>
      <a href="#deploy-quickly-safely">
        
      </a>
    </div>
    <p>Argo Tunnel is fast to install and run - it’s just <a href="https://developers.cloudflare.com/argo-tunnel/quickstart/quickstart/">three commands</a> to expose a locally running web application:</p>
            <pre><code>$ install cloudflared  # binaries available for Linux, Mac and Windows: https://developers.cloudflare.com/argo-tunnel/downloads/
$ cloudflared login
$ cloudflared --hostname example.com http://localhost:8080</code></pre>
            <p>This can be run on <a href="https://developers.cloudflare.com/argo-tunnel/downloads/">any device</a> from a Raspberry Pi, to a DigitalOcean droplet, to a hardware load balancer in your data center.</p><p>Netwrk is one of the companies using Argo Tunnel. Their Co-Founder and CTO Johan Bergström told us:</p><p>"I've been able to reduce the administrative overhead of firewalls, reduce the attack surface and get the added benefit of higher performance through the tunnel."</p>
    <div>
      <h3><b>Argo Tunnel is Powered by Argo</b></h3>
      <a href="#argo-tunnel-is-powered-by-argo">
        
      </a>
    </div>
    <p>One reason why traffic through Argo Tunnel gets a performance boost is that Tunnel is built on top of Argo, Cloudflare’s optimized smart routing (think <a href="https://www.waze.com/">Waze</a> for the internet).</p><p>Tunnel is included for free for anyone that has <a href="https://www.cloudflare.com/argo/">Argo</a> enabled.</p><p>In order for Tunnel to work we needed to get visitor traffic to reach one of the data centers closest to the origin. The right way to do this is by taking advantage of Argo. We decided it made sense to bundle Tunnel with Argo and include it at no additional cost. That way you get the best of both worlds: a secure, protected origin and the fastest path across the Internet to get to it.</p><p>Of course, we want you to one day be able to test out Tunnel without having to buy Argo, so we’re considering offering a free version of Tunnel on a Cloudflare domain. If you’re interested in testing out an early version in the future, <a href="https://goo.gl/forms/q2SNOLdqE68iH9nA2">sign up here</a>.</p>
    <div>
      <h3><b>What Happened to Warp</b></h3>
      <a href="#what-happened-to-warp">
        
      </a>
    </div>
    <p>During the beta period, Argo Tunnel went under a different name: <a href="/introducing-cloudflare-warp/">Warp</a>. While we liked Warp as a name, as soon as we realized that it made sense to bundle Warp with Argo, we wanted it to be under the Argo product name. Plus, a tunnel is what the product is, so the name is more descriptive.</p>
    <div>
      <h3><b>Get Started</b></h3>
      <a href="#get-started">
        
      </a>
    </div>
    <p>To get started, <a href="https://developers.cloudflare.com/argo-tunnel/downloads/">download</a> Argo Tunnel and follow our <a href="https://developers.cloudflare.com/argo-tunnel/quickstart">quickstart guide</a>. If you’re curious how it works, you can also <a href="https://github.com/cloudflare/cloudflared">check out the source</a>.</p> ]]></content:encoded>
            <category><![CDATA[Argo Smart Routing]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[Speed & Reliability]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[Cloudflare Tunnel]]></category>
            <guid isPermaLink="false">2UiUohoAsjm3Is0ezUlyI9</guid>
            <dc:creator>Dani Grant</dc:creator>
        </item>
        <item>
            <title><![CDATA[Introducing the Cloudflare Warp Ingress Controller for Kubernetes]]></title>
            <link>https://blog.cloudflare.com/cloudflare-ingress-controller/</link>
            <pubDate>Tue, 05 Dec 2017 14:00:00 GMT</pubDate>
            <description><![CDATA[ It’s ironic that the one thing most programmers would really rather not have to spend time dealing with is... a computer.  ]]></description>
            <content:encoded><![CDATA[ <p><i>NOTE: Prior to launch, this product was renamed Argo Tunnel. Read more in the </i><a href="/argo-tunnel/"><i>launch announcement</i></a><i>.</i></p><p>It’s ironic that the one thing most programmers would really rather not have to spend time dealing with is... a computer. When you write code it’s written in your head, transferred to a screen with your fingers and then it has to be run. On. A. Computer. Ugh.</p><p>Of course, code has to be run and typed on a computer so programmers spend hours configuring and optimizing shells, window managers, editors, build systems, IDEs, compilation times and more so they can minimize the friction all those things introduce. Optimizing your editor’s macros, fonts or colors is a battle to find the most efficient path to go from idea to running code.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/313EyDPITzKIECDYuj2qzq/a5eec59921a13d8aadfe7ed0e6ccc305/4532962327_c5a219d992_b.jpg" />
            
            </figure><p><a href="https://creativecommons.org/licenses/by/2.0/">CC BY 2.0</a> <a href="https://www.flickr.com/photos/ivyfield/4532962327/in/photolist-7UyBLT-87ERdK-cCZVyC-8E715f-dbdKCZ-nGKair-cQay4s-ebwmMy-nAuTVv-jw9hxd-nqxc9h-nH1hJw-cp4c1Q-8B3PLE-PUxit-6gY6pQ-4P2q52-cCZVWL-6eRJAH-kNHY-nj1peY-nqxyHa-iNw9jP-5boJ6P-J3KVad-nj1hAZ-7yYuBu-8PCwt2-aJptFP-b4WLoM-nysiQJ-b8kxAV-BtcWbK-7yKiEj-cABXZ1-b8RR72-9LbLum-a6n7fX-X3SERX-br1nSQ-qdLBYQ-4XJsbd-5zXtUQ-dWePHa-qAi9Jt-awuoCM-cACicL-cA43Y1-nGQWPs-dotR4Y">image</a> by <a href="https://www.flickr.com/photos/ivyfield/">Yutaka Tsutano</a></p><p>Once the developer is managing their own universe they can write code at the speed of their mind. But when it comes to putting their code into production (which necessarily requires running their programs on machines that they don’t control) things inevitably go wrong. Production machines are never the same as developer machines.</p><p>If you’re not a developer, here’s an analogy. Imagine carefully writing an essay on a subject dear to your heart and then publishing it only to be told “unfortunately, the word ‘the’ is not available in the version of English the publisher uses and so your essay is unreadable”. That’s the sort of problem developers face when putting their code into production.</p><p>Over time different technologies have tried to deal with this problem: dual booting, different sorts of isolation (e.g. 
virtualenv, chroot), totally static binaries, virtual machines running on a developer desktop, elastic computing resources in clouds, and more recently <a href="https://en.wikipedia.org/wiki/Operating-system-level_virtualization">containers</a>.</p><p>Ultimately, using containers is all about a developer being able to say “it ran on my machine” and be sure that it’ll run in production, because fighting incompatibilities between operating systems, libraries and runtimes that differ from development to production is a waste of time (in particular developer brain time).</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3w1Pjk0GrYgcVwz3LiliHI/9c14d99ad7b4366a4db6d961f6a5e472/14403331148_bf25864944_k-1.jpg" />
            
            </figure><p><a href="https://creativecommons.org/licenses/by/2.0/">CC BY 2.0</a> <a href="https://www.flickr.com/photos/jumilla/14403331148/in/photolist-nWLQxE-3eHezK-a85qp6-iEoK39-iYjss-9at6Z7-iYjsr-9gYg6a-bW3NeL-6AMx91-6rJaN3-28rg3L-TvqJmB-GjMFdK-2UrVHv-fs2WCj-4LGK2-2UrYz4-2Us2tK-Eeqeo-85HX8j-rF6SGG-o9rBXe-fWrkwA-dcGZAo-aoHuTF-SGpPT3-boaQy8-u8Bei-62JAWa-s9fFGo-61fNWq-fJYrjR-axYxm-2h42pU-2h42w7-rRNyES-fKfUKQ-6YXYGU-VjiSN1-4Xcg61-7YmaKY-WyF1oq-bE83qB-dvQoQw-CRQx6-82fwLo-fvJhXq-gmkcM-U3mP5E">image</a> by <a href="https://www.flickr.com/photos/jumilla/">Jumilla</a></p><p>In parallel, the rise of microservices is also a push to optimize developer brain time. The reality is that we all have limited brain power and ability to comprehend the complex systems that we build in their entirety and so we break them down into small parts that we can understand and test: functions, modules and services.</p><p>A microservice with a well-defined API and related tests running in a container is the ultimate developer fantasy. An entire program, known to operate correctly, that runs on their machine and in production.</p><p>Of course, no silver lining is without its cloud and containers beget a coordination problem: how do all these little programs find each other, scale, handle failure, log messages, communicate and remain secure. The answer, of course, is a coordination system like <a href="https://kubernetes.io/">Kubernetes</a>.</p><p>Kubernetes completes the developer fantasy by allowing them to write and deploy a service and have it take part in a whole.</p><p>Sadly, these little programs have one last hurdle before they turn into useful Internet services: they have to be connected to the brutish outside world. Services must be safely and scalably exposed to the Internet.</p><p>Recently, Cloudflare introduced a new service that can be used to connect a web server to Cloudflare without needing to have a public IP address for it. 
That service, <a href="https://warp.cloudflare.com">Cloudflare Warp</a>, maintains a connection from the server into the Cloudflare network. The server is then only exposed to the Internet through Cloudflare with no way for attackers to reach the server directly.</p><p>That means that any connection to it is protected and accelerated by Cloudflare’s service.</p>
    <div>
      <h3>Cloudflare Warp Ingress Controller and StackPointCloud</h3>
      <a href="#cloudflare-warp-ingress-controller-and-stackpointcloud">
        
      </a>
    </div>
    <p>Today, we are extending Warp’s reach by announcing the Cloudflare Warp <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/">Ingress Controller</a> for Kubernetes (it’s an open source project and can be found <a href="https://github.com/cloudflare/cloudflare-warp-ingress">here</a>). We worked closely with the team at <a href="https://stackpoint.io/">StackPointCloud</a> to integrate Warp, Kubernetes and their universal control plane for Kubernetes.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/P1ACOSWhcrBdZU7WlNcHi/798b6421302ef4c4c070607c2df97fb8/Screen-Shot-2017-12-04-at-5.28.18-PM-2.png" />
            
            </figure><p>Within Kubernetes, creating an Ingress with the annotation <code>kubernetes.io/ingress.class: cloudflare-warp</code> will automatically create secure Warp tunnels to Cloudflare for any service using that Ingress. The ingress controller transparently manages the entire lifecycle of the tunnels, making it trivially easy to expose Kubernetes-managed services securely via Cloudflare Warp.</p><p>The Warp Ingress Controller is responsible for finding Warp-enabled services and registering them with Cloudflare using the hostname(s) specified in the Ingress resource. It is added to a Kubernetes cluster by creating a file called warp-controller.yaml with the content below:</p>
            <pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  creationTimestamp: null
  generation: 1
  labels:
    run: warp-controller
  name: warp-controller
spec:
  replicas: 1
  selector:
    matchLabels:
      run: warp-controller
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        run: warp-controller
    spec:
      containers:
      - command:
        - /warp-controller
        - -v=6
        image: quay.io/stackpoint/warp-controller:beta
        imagePullPolicy: Always
        name: warp-controller
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - name: cloudflare-warp-cert
          mountPath: /etc/cloudflare-warp
          readOnly: true
      volumes:
        - name: cloudflare-warp-cert
          secret:
            secretName: cloudflare-warp-cert
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30</code></pre>
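            <p>With the controller deployed, an Ingress that routes through it might look like the sketch below. The hostname, Service name, and port here are illustrative placeholders (not from the original post); only the annotation is the documented hook:</p>

```yaml
# Hypothetical Ingress sketch -- names and ports are placeholders.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    # Tells the Warp Ingress Controller to manage this Ingress
    kubernetes.io/ingress.class: cloudflare-warp
spec:
  rules:
  - host: app.example.com        # hostname Warp registers with Cloudflare
    http:
      paths:
      - backend:
          serviceName: example-service
          servicePort: 80
```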
            <p>The full documentation is <a href="https://developers.cloudflare.com/argo-tunnel/reference/kubernetes/">here</a> and shows how to get up and running with Kubernetes and Cloudflare Warp on StackPointCloud, Google GKE, Amazon EKS or even <a href="https://kubernetes.io/docs/getting-started-guides/minikube/">minikube</a>.</p>
    <div>
      <h3>One Click with StackPointCloud</h3>
      <a href="#one-click-with-stackpointcloud">
        
      </a>
    </div>
    <p>Within StackPointCloud, adding the Cloudflare Warp Ingress Controller requires just <i>a single click</i>. And one more click and you've deployed a Kubernetes cluster.</p><p>The connection between the Kubernetes cluster and Cloudflare is made using a TLS tunnel, ensuring that all communication between the cluster and the outside world is secure.</p><p>Once connected, the cluster and its services benefit from Cloudflare's DDoS protection, WAF, global load balancing and health checks, and huge global network.</p><p>The combination of Kubernetes and Cloudflare makes managing, scaling, accelerating and protecting Internet-facing services simple and fast.</p>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[Kubernetes]]></category>
            <category><![CDATA[Argo Smart Routing]]></category>
            <category><![CDATA[Optimization]]></category>
            <category><![CDATA[Speed & Reliability]]></category>
            <category><![CDATA[Cloudflare Tunnel]]></category>
            <guid isPermaLink="false">3JzDlyg9g51wqTsZdMuHW3</guid>
            <dc:creator>John Graham-Cumming</dc:creator>
        </item>
        <item>
            <title><![CDATA[Want to try Warp? We just enabled the beta for you]]></title>
            <link>https://blog.cloudflare.com/get-started-with-warp/</link>
            <pubDate>Thu, 23 Nov 2017 02:00:00 GMT</pubDate>
            <description><![CDATA[ Tomorrow is Thanksgiving in the United States. It’s a holiday for getting together with family characterized by turkey dinner and whatever it is that happens in American football. ]]></description>
            <content:encoded><![CDATA[ <p><i>NOTE: Prior to launch, this product was renamed Argo Tunnel. Read more in the </i><a href="/argo-tunnel/"><i>launch announcement</i></a><i>.</i></p><p>Tomorrow is Thanksgiving in the United States. It’s a holiday for getting together with family characterized by turkey dinner and whatever it is that happens in American football. While celebrating with family is great, if you use a computer for your main line of work, sometimes the conversation turns to how to set up the home wifi or whether Russia can really use Facebook to hack the US election. Just in case you’re a geek who finds yourself in that position this week, we wanted to give you something to play with. To that end, we’re opening the <a href="http://warp.cloudflare.com">Warp</a> beta to all Cloudflare users. Feel free to tell your family there’s been an important technical development you need to attend to immediately and enjoy!</p>
    <div>
      <h3>Hello Warp! Getting Started</h3>
      <a href="#hello-warp-getting-started">
        
      </a>
    </div>
    <p>Warp allows you to expose a locally running web server to the internet without having to open up ports in the firewall or even needing a public IP address. Warp connects a web server directly to the Cloudflare network where Cloudflare acts as your web server’s network gateway. Every request reaching your origin must travel to the Cloudflare network where you can apply rate limits, access policies and authentication before the request hits your origin. Plus, because your origin is never exposed directly to the internet, attackers can’t bypass protections to reach your origin.</p><p>Warp is really easy to get started with. If you use homebrew (we also have <a href="https://warp.cloudflare.com/downloads/">packages for Linux and Windows</a>) you can do:</p>
            <pre><code>$ brew install cloudflare/cloudflare/warp
$ cloudflare-warp login
$ cloudflare-warp --hostname warp.example.com --hello-world</code></pre>
            <p>In this example, replace example.com with the domain you chose at the login command. The warp.example.com subdomain doesn’t need to exist yet in DNS; Warp will add it for you automatically.</p><p>That last command spins up a web server on your machine serving the hello warp world webpage. Then Warp starts up an encrypted virtual tunnel from that web server to the Cloudflare edge. When you visit warp.example.com (or whatever domain you chose), your request first hits a Cloudflare data center, then is routed back to your locally running hello world web server on your machine.</p><p>If someone far away visits warp.example.com, they connect to the Cloudflare data center closest to them, and then are routed to the Cloudflare data center your Warp instance is connected to, and then over the Warp tunnel back to your web server. If you want to make that connection between Cloudflare data centers really fast, <a href="https://www.cloudflare.com/a/traffic/">enable Argo</a>, which bypasses internet latencies and network congestion on optimized routes linking the Cloudflare data centers.</p><p>To point Warp at a real web server you are running instead of the hello world web server, replace the hello-world flag with the location of your locally running server:</p>
            <pre><code>$ cloudflare-warp --hostname warp.example.com http://localhost:8080</code></pre>
            
    <div>
      <h3>Using Warp for Load Balancing</h3>
      <a href="#using-warp-for-load-balancing">
        
      </a>
    </div>
    <p>Let’s say you have multiple instances of your application running and you want to balance load between them or always route to the closest one for any given visitor. As you spin up Warp, you can register the origins behind Warp to a load balancer. For example, I can run this on 2 different servers (e.g. one on a container in ECS and one on a container in GKE):</p>
            <pre><code>$ cloudflare-warp --hostname warp.example.com --lb-pool origin-pool-1 http://localhost:8080</code></pre>
            <p>And connections to warp.example.com will be routed seamlessly between the two servers. You can do this with an existing origin pool or a brand new one. If you visit the <a href="https://www.cloudflare.com/a/traffic/">load balancing dashboard</a> you will see the new pool created with your origins in it, or the origins added to an existing pool.</p><p>You can also <a href="https://www.cloudflare.com/a/traffic/">set up a health check</a> so that if one goes offline, it automatically gets deregistered from the load balancer pool and requests are only routed to the online pools.</p>
    <div>
      <h3>Automating Warp with Docker</h3>
      <a href="#automating-warp-with-docker">
        
      </a>
    </div>
    <p>You can add Warp to your Dockerfile so that as containers spin up or as you autoscale, containers automatically register themselves with Warp to connect to Cloudflare. This acts as a kind of service discovery.</p><p>A reference <a href="https://warp.cloudflare.com/docs/docker/">Dockerfile is available here</a>.</p>
    <div>
      <h3>Requiring User Authentication</h3>
      <a href="#requiring-user-authentication">
        
      </a>
    </div>
    <p>If you use Warp to expose dashboards, staging sites and other internal tools that you don’t want available to everyone, we have a new product in beta that allows you to quickly put up a login page in front of your Warp tunnel.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/24ItxFhwmPF9EcZc1qHE45/9028a584093a8597833b93318c7cc256/1Screen-Shot-2017-11-08-at-9.00.33-AM.png" />
            
            </figure><p>To get started, go to the <a href="https://www.cloudflare.com/a/access/">Access tab in the Cloudflare dashboard</a>.</p><p>There you can define which users should be able to login to use your applications. For example, if I wanted to limit access to warp.example.com to just people who work at Cloudflare, I can do:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/sGHDnCCZoCGhGRqM1GICF/ae667903ba8524e99308853be795b13a/Screen-Shot-2017-11-22-at-11.24.51-AM.png" />
            
            </figure>
    <div>
      <h3>Enjoy!</h3>
      <a href="#enjoy">
        
      </a>
    </div>
    <p>Enjoy the Warp beta! (But don't wander too deep into the Warp tunnel and forget to enjoy time with your family.) The whole <a href="https://community.cloudflare.com/t/cloudflare-warp-beta/5656">Warp team is following this thread</a> for comments, ideas, feedback and show and tell. We’re excited to see what you build.</p> ]]></content:encoded>
            <category><![CDATA[Beta]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[Argo Smart Routing]]></category>
            <category><![CDATA[Cloudflare Tunnel]]></category>
            <guid isPermaLink="false">4HMPtPqGBoeFZ65Yv3Tnf3</guid>
            <dc:creator>Dani Grant</dc:creator>
        </item>
        <item>
            <title><![CDATA[Introducing Cloudflare Warp: Hide Behind The Edge]]></title>
            <link>https://blog.cloudflare.com/introducing-cloudflare-warp/</link>
            <pubDate>Thu, 28 Sep 2017 13:01:00 GMT</pubDate>
            <description><![CDATA[ I work at a company whose job it is to be attacked. As I’m writing this, an automatic mitigation is fighting two ongoing DDoS attacks. Any machine that’s publicly routable on the internet today can be a vector for attack, and that’s a problem. ]]></description>
            <content:encoded><![CDATA[ <p><i>NOTE: Prior to launch, this product was renamed Argo Tunnel. Read more in the </i><a href="/argo-tunnel/"><i>launch announcement</i></a><i>.</i></p><p>I work at a company whose job it is to be attacked. As I’m writing this, an <a href="/meet-gatebot-a-bot-that-allows-us-to-sleep/">automatic mitigation</a> is fighting two ongoing DDoS attacks. Any machine that’s publicly routable on the internet today can be a vector for attack, and that’s a problem.</p><p>Today we want to turn the tables and give you a new way of exposing services to the internet without having them be directly, publicly routable. Meet Cloudflare Warp.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3bb2am85Di5godGargTcgE/8d8b3a6c8db786eb1256ced19bf1b787/5934405346_edd94956e8_b.jpg" />
            
            </figure><p><a href="https://creativecommons.org/licenses/by-sa/2.0/">CC BY-SA 2.0</a> <a href="https://c1.staticflickr.com/7/6004/5934405346_edd94956e8_b.jpg">image</a> by <a href="https://www.flickr.com/photos/39483037@N00/">Christian Ortiz</a></p>
    <div>
      <h3>Playing Hide and Seek with Bots and Hackers</h3>
      <a href="#playing-hide-and-seek-with-bots-and-hackers">
        
      </a>
    </div>
    <p>Cloudflare internally runs about 4,000 containers that make up about 1,500 services and applications. Some of these containers need to network with other local containers, and others need to accept connections over the wire.</p><p>Every devops engineer knows that bad things happen to good machines, and so our platform operations team tries to hide servers altogether from the internet. There are several ways to do this:</p><ul><li><p>Rotate IP addresses</p></li><li><p>Deploy proxies</p></li><li><p>Create firewall rules</p></li><li><p>Configure iptables</p></li><li><p>Limit connections by client certificate</p></li><li><p>Cross-connect with an upstream provider</p></li><li><p>Configure a GRE tunnel</p></li><li><p>Add authentication mechanisms like OAuth or OIDC</p></li></ul><p>These can be complicated or time-consuming, yet none of them are guarantees.</p><p>We knew we could make it easier. We started building an internal tool for ourselves - a safer way to expose services running on our own infrastructure (with some service discovery and automation benefits as well...more on that later) and after talking to developers and security engineers that use Cloudflare, we realized there was benefit in opening it up to everyone.</p>
    <div>
      <h3>Cloudflare Warp</h3>
      <a href="#cloudflare-warp">
        
      </a>
    </div>
    <p>Cloudflare Warp is a security-conscious tool for exposing web applications without needing to expose the server they run on. With Cloudflare Warp, traffic to your application runs over a private, encrypted virtual tunnel from the Cloudflare edge, and can only find and reach your server if it routes through Cloudflare.</p><p>Only Cloudflare knows how to dial back to the application through the virtual tunnel created between the application and Cloudflare. Traffic can never hit your origin directly because it can never find it: your origin isn’t on the internet; it’s only reachable through Cloudflare, via Warp. Instead, the client connects to the nearest Cloudflare data center, never directly to the application itself.</p><p>To start up Cloudflare Warp, it’s just one command. For example, if I want to run Cloudflare Warp to expose an application running locally on port 4000, I run:</p>
            <pre><code>cloudflare-warp --hostname example.com https://localhost:4000</code></pre>
            <p>Behind the scenes, Cloudflare Warp issues an <a href="https://www.cloudflare.com/application-services/products/ssl/">SSL certificate</a>, installs it on the application server and uses it to generate an encrypted, tunnelled connection back to Cloudflare. (The internal project name for Cloudflare Warp was E.T. because of this ‘phoning home’ behavior). Cloudflare Warp then sets up the corresponding DNS records for the application so that when a visitor next goes to your application, they will be connected through the virtual tunnel back to the application running locally at port 4000.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5RDYfy14A0lBORLEnBLr3t/e9fab4eb1eefde05bc224f72c4fa5d50/Screen-Shot-2017-09-27-at-7.54.05-PM.png" />
            
            </figure>
    <div>
      <h3>One Secure Gateway</h3>
      <a href="#one-secure-gateway">
        
      </a>
    </div>
    <p>With this setup, Cloudflare’s edge acts as a network shield in front of your infrastructure. At Cloudflare’s edge you can define policies (allow 50 connections per second, only to these routes, only from these IPs, and only if they are authenticated). Because traffic through Warp can only reach your servers after it has traveled through Cloudflare, you can drop unexpected traffic at the edge, receive only clean traffic on your server, and know that it has been validated by Cloudflare. As you connect more applications to Cloudflare using Warp, you only have to configure this once, and it applies holistically across all of your applications, protecting your entire infrastructure.</p>
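    <p>As a toy illustration of the kind of policy evaluated at the edge (the rule and numbers mirror the "allow 50 connections per second" example above; this is a sketch, not Cloudflare’s implementation):</p>

```shell
# A miniature version of the edge policy "allow 50 connections per second":
# given the count of connections seen in the current second, decide whether
# to accept the next one or drop it before it ever reaches the origin.
LIMIT=50
allow() { [ "$1" -le "$LIMIT" ]; }   # $1 = connections seen this second

allow 12 && echo "connection 12: accepted"
allow 51 || echo "connection 51: dropped at the edge"
```

    <p>The point is where the decision happens: the drop occurs at Cloudflare’s edge, so the rejected traffic never consumes origin resources.</p>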
    <div>
      <h3>Did we say service discovery?</h3>
      <a href="#did-we-say-service-discovery">
        
      </a>
    </div>
    <p>One of the side benefits of Cloudflare Warp is that as soon as you spin up the Cloudflare Warp agent, it registers DNS records for your application, making it an effective tool for service discovery.</p><p>We also let you tag tunnels the way you would label your Kubernetes pods, with key-value pairs like <code>release:stable</code> and <code>release:canary</code>. Soon you’ll also be able to configure routing based on these labels (send 90% of my traffic to the stable release and 10% to the canary release).</p>
    <div>
      <h3>What’s next?</h3>
      <a href="#whats-next">
        
      </a>
    </div>
    <p>The Cloudflare Warp beta is available today and we’re gradually adding people every day. Ready to get started? You can <a href="https://warp.cloudflare.com">jump in and read the docs</a> or <a href="https://cloudflare.com/products/cloudflare-warp">sign up for access to the beta</a>.</p> ]]></content:encoded>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[Speed & Reliability]]></category>
            <category><![CDATA[Argo Smart Routing]]></category>
            <category><![CDATA[Cloudflare Tunnel]]></category>
            <guid isPermaLink="false">1oFK3fbdqFnp5IZ5c9u0Pb</guid>
            <dc:creator>Dani Grant</dc:creator>
        </item>
        <item>
            <title><![CDATA[Down the Rabbit Hole: The Making of Cloudflare Warp]]></title>
            <link>https://blog.cloudflare.com/the-making-of-cloudflare-warp/</link>
            <pubDate>Thu, 28 Sep 2017 13:00:00 GMT</pubDate>
            <description><![CDATA[ In an abstract sense Cloudflare Warp is similar; its connection strategy punches a hole through firewalls and NAT, and provides easy and secure passage for HTTP traffic to your origin.  ]]></description>
            <content:encoded><![CDATA[ <p><i>NOTE: Prior to launch, this product was renamed Argo Tunnel. Read more in the </i><a href="/argo-tunnel/"><i>launch announcement</i></a><i>.</i></p><p>In the real world, tunnels are often carved out from the mass of something bigger - a hill, the ground, but also man-made structures.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6OsNZgEQM7Ug8rMo9Wotdb/8d136c38e13ea7d387816d847115be9e/11421362975_6f97cac5dc_o.jpg" />
            
            </figure><p><a href="https://creativecommons.org/licenses/by-sa/2.0/">CC BY-SA 2.0</a> <a href="https://www.flickr.com/photos/57868312@N00/11421362975">image</a> by <a href="https://www.flickr.com/photos/londonmatt/">Matt Brown</a></p><p>In an abstract sense Cloudflare Warp is similar; its connection strategy punches a hole through firewalls and NAT, and provides easy and secure passage for HTTP traffic to your origin. But the technical reality is a bit more interesting than this strained metaphor invoked by the name of similar predecessor technologies like GRE tunnels.</p>
    <div>
      <h3>Relics</h3>
      <a href="#relics">
        
      </a>
    </div>
    <p>Generic Routing Encapsulation or GRE is a well-supported <a href="https://tools.ietf.org/html/rfc1701">standard</a>, commonly used to join two networks together over the public Internet, and by some CDNs to shield an origin from DDoS attacks. It forms the basis of the legacy VPN protocol <a href="https://tools.ietf.org/html/rfc2637">PPTP</a>.</p><p>Establishing a GRE tunnel requires configuring both ends of the tunnel to accept the other end’s packets and deciding which IP ranges should be routed through the tunnel. With this in place, an IP packet destined for any address in the configured range will be encapsulated within a GRE packet. The GRE packet is delivered directly to the other end of the tunnel, which removes the encapsulation and forwards the original packet to its intended destination.</p><p>GRE is a simple and useful protocol suitable for encapsulating any network protocol, but this minimalism is not without its costs. When used over a network with a fixed maximum transmission unit (MTU) like the Internet, the overhead of encapsulation reduces the effective bandwidth and <a href="/path-mtu-discovery-in-practice/">there may be compatibility issues with software and hardware expecting a higher MTU</a>.</p><p>There is also no additional security. Unencrypted payloads like HTTP traffic can be read by anyone in the path of the tunneled packets. Even while using TLS, the routing data remains in the clear so anyone can discover who you are communicating with. Other tunneling protocols like IPsec ESP fix this but are hard to use in comparison.</p>
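    <p>To make the encapsulation overhead concrete, here is a back-of-envelope sketch, assuming a 1500-byte Internet MTU and classic GRE over IPv4 with no optional header fields (the numbers are illustrative, not from the original post):</p>

```shell
# GRE-over-IPv4 overhead: a 20-byte outer IPv4 header plus a 4-byte GRE
# header (no optional checksum/key/sequence fields) come out of every packet.
MTU=1500          # typical Internet path MTU
OUTER_IP=20       # outer IPv4 header added by encapsulation
GRE=4             # minimal GRE header
TUNNEL_MTU=$((MTU - OUTER_IP - GRE))    # MTU available inside the tunnel
TCP_MSS=$((TUNNEL_MTU - 20 - 20))       # minus inner IPv4 + TCP headers
echo "tunnel MTU: $TUNNEL_MTU, max TCP payload per segment: $TCP_MSS"
```

    <p>Software expecting a full 1500-byte MTU inside the tunnel will see fragmentation or, where Path MTU Discovery is blocked, silently dropped packets.</p>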
    <div>
      <h3>The Next Phase</h3>
      <a href="#the-next-phase">
        
      </a>
    </div>
    <p>For Cloudflare Warp, we wanted to build a better, easier way for you to control and secure connections between your origin and the Cloudflare network, optimised for everything that Cloudflare offers while accommodating a diverse set of needs.</p><p>To get started using Cloudflare Warp, you need only a Cloudflare account and a domain to try it on. Configuring the client is simple: with your account details, we will automatically configure your website’s DNS records to use an internal address corresponding to the established tunnel, and issue a certificate with <a href="/cloudflare-ca-encryption-origin/">Origin CA</a> to ensure that your tunnel’s traffic is secure and authenticated within the Cloudflare network. Traffic destined for a Cloudflare Warp-enabled origin uses the strictest SSL verification, regardless of your zone’s security settings.</p><p>The tunnelling protocol is based on <a href="https://http2.github.io/">HTTP/2</a> which powers the modern web. Its multiplexing support means you can receive multiple HTTP requests on a single connection simultaneously and never have to establish a new connection, with all of the latency that entails. A single multiplexed connection is also the most efficient way to support multiple streams of data while still being able to traverse NAT, for origins hosted within a home or office network (e.g. 
on a developer’s laptop) or for servers with egress-only traffic.</p><p>It also uses HPACK header compression to save bandwidth and reduce the time-to-first-byte; and since we provide the implementation for both ends of the connection, we can even add support for new compression schemes in the future, such as the one used by our dynamic content accelerator, <a href="/cacheing-the-uncacheable-cloudflares-railgun-73454/">Railgun</a>.</p><p>Thanks to Go’s cross-compilation support and well-engineered libraries, we can provide a downloadable tunnel agent for the most popular OSes and processor architectures.</p><p>Yet, the technology used to develop Cloudflare Warp isn’t the most impressive part of the story.</p>
    <div>
      <h3>The Best Of Both Worlds</h3>
      <a href="#the-best-of-both-worlds">
        
      </a>
    </div>
    <p>Cloudflare’s anycast network is great for users of the Internet; lower round-trip times mean faster TLS connections and cached content can be served at lightning speeds. But there was no corresponding benefit for the path to the unicast origin, until the introduction of <a href="/argo/">Argo</a>.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/o4GDzVjQLGQoWgfFfWDQb/aca5a8a5752048bc0b9dd2a5ee297818/Argo_horizontal.png" />
            
            </figure><p>Argo provides the “virtual backbone” necessary for our anycast network to work as effectively for customers’ origins as it does for their visitors. Using anycast, Warp connects to a nearby Cloudflare PoP. But depending on your server’s location, the route between your visitor’s closest Cloudflare PoP and the one Warp is connected to may not be as fast as if you had connected directly to the origin. Argo levels the playing field by optimising the route within the Cloudflare network. That’s why Argo is enabled for all requests to a Warp-enabled origin.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3aRZ64o1U0oipPLsCaL3BA/765c1dff494c55c2ae142d319a4b2334/argo_animation.gif" />
            
            </figure>
    <div>
      <h3>Unification</h3>
      <a href="#unification">
        
      </a>
    </div>
    <p>While there may be performance benefits to a single persistent connection to a nearby PoP, this also introduces a scary single point of failure. Warp introduces redundancy by connecting to another nearby PoP, using a special anycast addressing scheme designed to guarantee that the second PoP is different from the first. If anything happens to either connection, traffic can be routed through the other tunnel connection, either through standard DNS round-robin or using Load Balancing.</p>
    <div>
      <h3>Tapestry</h3>
      <a href="#tapestry">
        
      </a>
    </div>
    <p>The final piece of Cloudflare Warp is the integration with <a href="/introducing-load-balancing-intelligent-failover-with-cloudflare/">Load Balancing</a>. Warp will automatically add and remove origins from a load balancing pool, making it the ideal companion to cloud services. But in addition to the active and passive monitoring provided by Load Balancing, we constantly monitor the health and performance of tunnel connections. Whether they’re idling or saturated with data, we can detect an adverse network condition or a sudden failure with your server or cloud provider faster than ever before with Warp.</p><p>However, Warp’s health checks are a complement, not a replacement for Load Balancing’s monitors. Warp sees only network and agent health, whereas active monitoring can determine if a server is still responsive to requests.</p>
    <div>
      <h3>All Good Things…</h3>
      <a href="#all-good-things">
        
      </a>
    </div>
    <p>It is the combination of technologies that make Cloudflare Warp possible, and will make it even better in the future. We’re excited to see how you decide to integrate it into your existing systems and workflows.</p><p>Ready to try it out? <a href="https://www.cloudflare.com/products/cloudflare-warp/">Sign up for the beta</a> and set a course for the <a href="https://warp.cloudflare.com">docs</a> … engage! And if building HTTP/2 encrypted tunnels sounds like fun to you, you’re one of us and <a href="https://boards.greenhouse.io/cloudflare/jobs/865738#.Wcx5aRNSxE4">we’d like to meet you</a>.</p> ]]></content:encoded>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[Birthday Week]]></category>
            <category><![CDATA[Argo Smart Routing]]></category>
            <category><![CDATA[Speed & Reliability]]></category>
            <category><![CDATA[Cloudflare Tunnel]]></category>
            <guid isPermaLink="false">3MJ2wWdYIs5ZHco6XmBXAj</guid>
            <dc:creator>Chris Branch</dc:creator>
        </item>
        <item>
            <title><![CDATA[How to use Cloudflare for Service Discovery]]></title>
            <link>https://blog.cloudflare.com/service-discovery/</link>
            <pubDate>Fri, 21 Jul 2017 08:01:00 GMT</pubDate>
            <description><![CDATA[ Cloudflare runs 3,588 containers, making up 1,264 apps and services that all need to be able to find and discover each other in order to communicate -- a problem solved with service discovery. ]]></description>
            <content:encoded><![CDATA[ <p>Cloudflare runs 3,588 containers, making up 1,264 apps and services that all need to be able to find and discover each other in order to communicate -- a problem solved with service discovery.</p><p>You can use Cloudflare for service discovery. By deploying microservices behind Cloudflare, microservices’ origins are masked, secured from DDoS and L7 exploits and authenticated, and service discovery is natively built in. Cloudflare is also cloud platform agnostic, which means that if you have distributed infrastructure deployed across cloud platforms, you still get a holistic view of your services and the ability to manage your security and authentication policies in one place, independent of where services are actually deployed.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/461lV3kUQYG0CMPbfHrqTF/d31991eaa6a3d3fed698bddebbb74710/Service-Discovery-Diagram.png" />
            
            </figure>
    <div>
      <h3>How it works</h3>
      <a href="#how-it-works">
        
      </a>
    </div>
    <p>Service locations and metadata are stored in a distributed KV store deployed in all 100+ Cloudflare edge locations (the service registry).</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7IwFdoeuc1YrVKYY2rS39X/3351746e40fa6143fe8ab0d85259458a/ServiceRegistry.png" />
            
            </figure><p>Services register themselves to the service registry when they start up and deregister themselves when they spin down via a POST to Cloudflare’s API. Services provide data in the form of a DNS record, either by giving Cloudflare the address of the service in an A (IPv4) or AAAA (IPv6) record, or by providing more metadata like transport protocol and port in an SRV record.</p>
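    <p>An SRV answer packs priority, weight, port and target into a single record, which a client can turn into connection parameters with nothing beyond a DNS lookup and basic string handling. A minimal sketch, reusing the <code>staging.badtortilla.com</code> target from the registration example later in this post:</p>

```shell
# The RDATA of an SRV record is "priority weight port target", e.g. the
# record created by the registration call: SRV 1 1 80 staging.badtortilla.com.
SRV_RDATA="1 1 80 staging.badtortilla.com."
set -- $SRV_RDATA                    # split the RDATA on whitespace
PRIORITY=$1; WEIGHT=$2; PORT=$3; TARGET=$4
echo "connect to ${TARGET%.}:$PORT (priority $PRIORITY, weight $WEIGHT)"
```
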
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5XIaHZx9pShYjAUc6emZcq/df2b9766e24cb0cd62404439c147ee2c/SRV-POST.png" />
            
            </figure><p>Services are also automatically registered and deregistered by health check monitors so that only healthy nodes are sent traffic. Health checks run over HTTP and can be set up with custom configuration so that responses to the health check must return a specific response body and/or response code; otherwise the nodes are marked as unhealthy.</p>
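    <p>The monitor’s decision rule is easy to state in miniature: a node stays in rotation only if the response code matches the expected class and the body contains the expected string, mirroring the <code>expected_codes</code> and <code>expected_body</code> fields used in the monitor example later in this post:</p>

```shell
# Healthy iff: status code is 2xx AND the body contains "alive"
# (the same criteria as the example monitor's expected_codes/expected_body).
healthy() {
    code=$1; body=$2
    case "$code" in 2??) ;; *) return 1 ;; esac
    case "$body" in *alive*) return 0 ;; *) return 1 ;; esac
}

healthy 200 "alive"       && echo "node kept in rotation"
healthy 503 "alive"       || echo "bad status: node marked unhealthy"
healthy 200 "maintenance" || echo "bad body: node marked unhealthy"
```
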
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3NrIrLYeW04BqxhC7vXoBd/c7230db77ea6169e57cd9161d5cd410e/healthcheck.png" />
            
            </figure><p>Traffic is distributed evenly between redundant nodes using a load balancer. Clients of the service discovery query the load balancer directly over <a href="https://www.cloudflare.com/learning/dns/what-is-dns/">DNS</a>. The load balancer receives data from the service registry and returns the corresponding service address. If services are behind Cloudflare, the load balancer returns a Cloudflare IP address to route traffic to the service through Cloudflare’s L7 proxy.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4ZhYdGIkVKzDb6xlk9rYRR/baf664081d5ed6da310c41581efb1eb7/loadbalancer.png" />
            
            </figure><p>Traffic can also be sent to specific service nodes based on <a href="https://support.cloudflare.com/hc/en-us/articles/115000540888-Load-Balancing-Geographic-Regions">client geography</a>, so the data replication service in North America, for example, can talk to a specific North American version of the billing service, or European data can stay in Europe.</p><p>Clients query the service registry over DNS, and service location and metadata are packaged in A, AAAA, CNAME or SRV records. The benefit is that no additional client software needs to be installed on service nodes beyond a DNS client. Cloudflare works natively over DNS: if your services have a DNS client, there’s no extra software to install, manage, upgrade or patch.</p><p>Usually, TTLs in DNS mean that if a service location changes or deregisters, clients may still get stale information. Cloudflare DNS keeps low TTLs (it’s able to do this while maintaining <a href="https://www.dnsperf.com/">fast performance</a> because of its distributed network), and if you are using Cloudflare as a proxy, the DNS answers always point back to Cloudflare even when the IPs of services behind Cloudflare change, removing the effect of cache staleness.</p><p>If your services communicate over HTTP/S and websockets, you can additionally use Cloudflare as an L7 proxy for added security, authentication and optimization. Cloudflare <a href="https://www.cloudflare.com/learning/ddos/how-to-prevent-ddos-attacks/">prevents DDoS attacks</a> from hitting your infrastructure, masks your IPs behind its network, and routes traffic through an <a href="https://www.cloudflare.com/argo/">optimized edge PoP to edge PoP route</a> to shave latency off the Internet.</p><p>Once service &lt;--&gt; service traffic is going through Cloudflare, you can use TLS client certificates to <a href="/introducing-tls-client-auth/">authenticate traffic</a> between your services. Cloudflare can authenticate traffic at the edge by ensuring that the client certificate presented during the TLS handshake is signed by your root CA.</p>
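    <p>You can reproduce the edge-side check locally with <code>openssl</code>: a client certificate is accepted only if it chains to your root CA. A sketch with throwaway certificates (all file names illustrative):</p>

```shell
# 1. Create a throwaway root CA (self-signed).
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca.key -out ca.pem \
        -subj "/CN=Example Root CA" -days 1
# 2. Issue a client certificate signed by that CA.
openssl req -newkey rsa:2048 -nodes -keyout client.key -out client.csr \
        -subj "/CN=service-a"
openssl x509 -req -in client.csr -CA ca.pem -CAkey ca.key -CAcreateserial \
        -out client.pem -days 1
# 3. The check performed at the edge, in essence: does the presented
#    certificate chain to the trusted root?
openssl verify -CAfile ca.pem client.pem
```

    <p>A certificate signed by any other CA fails step 3, and the corresponding TLS handshake would be rejected before the request ever reaches your service.</p>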
    <div>
      <h3>Setting it up</h3>
      <a href="#setting-it-up">
        
      </a>
    </div>
    <p><a href="https://cloudflare.com/a/sign-up">Sign up for a Cloudflare account</a>.</p><p>During the signup process, add all your initial services as DNS records in the DNS editor.</p><p>To finish signing up, move DNS to Cloudflare by logging into your <a href="https://www.cloudflare.com/learning/dns/glossary/what-is-a-domain-name-registrar/">registrar</a> and changing your nameservers to the Cloudflare nameservers assigned to you when you signed up. If you want traffic to those services to be proxied through Cloudflare, click on the cloud next to each DNS record to make it orange.</p><p>Run a script on each node so that:</p><p>On startup, the node sends a <a href="https://api.cloudflare.com/#dns-records-for-a-zone-create-dns-record">POST to the DNS record API</a> to register itself and a <a href="https://api.cloudflare.com/#load-balancer-pools-modify-a-pool">PUT to the load balancing API</a> to add itself to the origin pool.</p><p>On shutdown, the node sends a <a href="https://api.cloudflare.com/#dns-records-for-a-zone-delete-dns-record">DELETE to the DNS record API</a> to deregister itself and a <a href="https://api.cloudflare.com/#load-balancer-pools-modify-a-pool">PUT to the load balancing API</a> to remove itself from the origin pool.</p><p>These can be accomplished via <a href="https://cloud.google.com/compute/docs/startupscript">startup and shutdown scripts on Google Compute Engine</a>, or via <a href="http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html#user-data-shell-scripts">user data scripts</a> or <a href="http://docs.aws.amazon.com/autoscaling/latest/userguide/lifecycle-hooks.html">auto scaling lifecycle hooks</a> on AWS.</p><p>Registration:</p>
            <pre><code>curl -X POST "https://api.cloudflare.com/client/v4/zones/023e105f4ecef8ad9ca31a8372d0c353/dns_records" \
     -H "X-Auth-Email: user@example.com" \
     -H "X-Auth-Key: c2547eb745079dac9320b638f5e225cf483cc5cfdda41" \
     -H "Content-Type: application/json" \
     --data '{"type":"SRV","data":{"service":"_http","proto":"_tcp","name":"name","priority":1,"weight":1,"port":80,"target":"staging.badtortilla.com"},"ttl":1,"zone_name":"badtortilla.com","name":"_http._tcp.name.","content":"SRV 1 1 80 staging.badtortilla.com.","proxied":false,"proxiable":false,"priority":1}'
</code></pre>
            <p>De-Registration:</p>
            <pre><code>curl -X DELETE "https://api.cloudflare.com/client/v4/zones/023e105f4ecef8ad9ca31a8372d0c353/dns_records/372e67954025e0ba6aaa6d586b9e0b59" \
     -H "X-Auth-Email: user@example.com" \
     -H "X-Auth-Key: c2547eb745079dac9320b638f5e225cf483cc5cfdda41" \
     -H "Content-Type: application/json"</code></pre>
            <p>Add or remove an origin from an origin pool (this should be a unique IP per node added to the pool):</p>
            <pre><code>curl -X PUT "https://api.cloudflare.com/client/v4/user/load_balancers/pools/17b5962d775c646f3f9725cbc7a53df4" \
     -H "X-Auth-Email: user@example.com" \
     -H "X-Auth-Key: c2547eb745079dac9320b638f5e225cf483cc5cfdda41" \
     -H "Content-Type: application/json" \
     --data '{"description":"Primary data center - Provider XYZ","name":"primary-dc-1","enabled":true,"monitor":"f1aba936b94213e5b8dca0c0dbf1f9cc","origins":[{"name":"app-server-1","address":"1.2.3.4","enabled":true}],"notification_email":"someone@example.com"}'</code></pre>
            <p>Create a health check. You can do this <a href="https://api.cloudflare.com/#load-balancer-monitors-create-a-monitor">in the API</a> or in the <a href="https://www.cloudflare.com/a/traffic/">Cloudflare dashboard</a> (in the Load Balancer card).</p>
            <pre><code>curl -X POST "https://api.cloudflare.com/client/v4/organizations/01a7362d577a6c3019a474fd6f485823/load_balancers/monitors" \
     -H "X-Auth-Email: user@example.com" \
     -H "X-Auth-Key: c2547eb745079dac9320b638f5e225cf483cc5cfdda41" \
     -H "Content-Type: application/json" \
     --data '{"type":"https","description":"Login page monitor","method":"GET","path":"/health","header":{"Host":["example.com"],"X-App-ID":["abc123"]},"timeout":3,"retries":0,"interval":90,"expected_body":"alive","expected_codes":"2xx"}'</code></pre>
            <p>Create an initial load balancer, either <a href="https://api.cloudflare.com/#load-balancers-create-a-load-balancer">through the API</a> or in the <a href="https://www.cloudflare.com/a/traffic/">Cloudflare dashboard</a>.</p>
            <pre><code>curl -X POST "https://api.cloudflare.com/client/v4/zones/699d98642c564d2e855e9661899b7252/load_balancers" \
     -H "X-Auth-Email: user@example.com" \
     -H "X-Auth-Key: c2547eb745079dac9320b638f5e225cf483cc5cfdda41" \
     -H "Content-Type: application/json" \
     --data '{"description":"Load Balancer for www.example.com","name":"www.example.com","ttl":30,"fallback_pool":"17b5962d775c646f3f9725cbc7a53df4","default_pools":["de90f38ced07c2e2f4df50b1f61d4194","9290f38c5d07c2e2f4df57b1f61d4196","00920f38ce07c2e2f4df50b1f61d4194"],"region_pools":{"WNAM":["de90f38ced07c2e2f4df50b1f61d4194","9290f38c5d07c2e2f4df57b1f61d4196"],"ENAM":["00920f38ce07c2e2f4df50b1f61d4194"]},"pop_pools":{"LAX":["de90f38ced07c2e2f4df50b1f61d4194","9290f38c5d07c2e2f4df57b1f61d4196"],"LHR":["abd90f38ced07c2e2f4df50b1f61d4194","f9138c5d07c2e2f4df57b1f61d4196"],"SJC":["00920f38ce07c2e2f4df50b1f61d4194"]},"proxied":true}'</code></pre>
            <p>(optional) Set up geographic routing rules. You can do this <a href="https://api.cloudflare.com/#load-balancers-modify-a-load-balancer">via the API</a> or in the <a href="https://www.cloudflare.com/a/traffic/">Cloudflare dashboard</a>.</p>
            <pre><code>curl -X POST "https://api.cloudflare.com/client/v4/zones/699d98642c564d2e855e9661899b7252/load_balancers" \
     -H "X-Auth-Email: user@example.com" \
     -H "X-Auth-Key: c2547eb745079dac9320b638f5e225cf483cc5cfdda41" \
     -H "Content-Type: application/json" \
     --data '{"description":"Load Balancer for www.example.com","name":"www.example.com","ttl":30,"fallback_pool":"17b5962d775c646f3f9725cbc7a53df4","default_pools":["de90f38ced07c2e2f4df50b1f61d4194","9290f38c5d07c2e2f4df57b1f61d4196","00920f38ce07c2e2f4df50b1f61d4194"],"region_pools":{"WNAM":["de90f38ced07c2e2f4df50b1f61d4194","9290f38c5d07c2e2f4df57b1f61d4196"],"ENAM":["00920f38ce07c2e2f4df50b1f61d4194"]},"pop_pools":{"LAX":["de90f38ced07c2e2f4df50b1f61d4194","9290f38c5d07c2e2f4df57b1f61d4196"],"LHR":["abd90f38ced07c2e2f4df50b1f61d4194","f9138c5d07c2e2f4df57b1f61d4196"],"SJC":["00920f38ce07c2e2f4df50b1f61d4194"]},"proxied":true}'</code></pre>
            <p>(optional) Set up Argo for faster PoP to PoP transit in the <a href="https://www.cloudflare.com/a/traffic/">Traffic app of the Cloudflare dashboard</a>.</p><p>(optional) Set up rate limiting <a href="https://api.cloudflare.com/#rate-limits-for-a-zone-create-a-ratelimit">via the API</a> or <a href="https://www.cloudflare.com/a/firewall/">in the dashboard</a>.</p>
            <pre><code>curl -X POST "https://api.cloudflare.com/client/v4/zones/023e105f4ecef8ad9ca31a8372d0c353/rate_limits" \
     -H "X-Auth-Email: user@example.com" \
     -H "X-Auth-Key: c2547eb745079dac9320b638f5e225cf483cc5cfdda41" \
     -H "Content-Type: application/json" \
     --data '{"id":"372e67954025e0ba6aaa6d586b9e0b59","disabled":false,"description":"Prevent multiple login failures to mitigate brute force attacks","match":{"request":{"methods":["GET","POST"],"schemes":["HTTP","HTTPS"],"url":"*.example.org/path*"},"response":{"status":[401,403],"origin_traffic":true}},"bypass":[{"name":"url","value":"api.example.com/*"}],"threshold":60,"period":900,"action":{"mode":"simulate","timeout":86400,"response":{"content_type":"text/xml","body":"&lt;error&gt;This request has been rate-limited.&lt;/error&gt;"}}}'</code></pre>
            <p>(optional) Set up TLS client authentication (Enterprise only). Send your account manager your root CA certificate and <a href="https://support.cloudflare.com/hc/en-us/articles/115000088491-Cloudflare-TLS-Client-Auth">which options you would like enabled</a>.</p> ]]></content:encoded>
            <category><![CDATA[DDoS]]></category>
            <category><![CDATA[Reliability]]></category>
            <category><![CDATA[Speed & Reliability]]></category>
            <category><![CDATA[IPv4]]></category>
            <category><![CDATA[DNS]]></category>
            <category><![CDATA[API]]></category>
            <category><![CDATA[Programming]]></category>
            <category><![CDATA[Argo Smart Routing]]></category>
            <category><![CDATA[TLS]]></category>
            <category><![CDATA[Security]]></category>
            <guid isPermaLink="false">6M0qCxRf8vx0pS0XWu7sMU</guid>
            <dc:creator>Dani Grant</dc:creator>
        </item>
        <item>
            <title><![CDATA[Introducing Argo — A faster, more reliable, more secure Internet for everyone]]></title>
            <link>https://blog.cloudflare.com/argo/</link>
            <pubDate>Thu, 18 May 2017 13:00:00 GMT</pubDate>
            <description><![CDATA[ The Internet is inherently unreliable, a collection of networks connected to each other with fiber optics, copper, microwaves and trust. ]]></description>
            <content:encoded><![CDATA[ <p>The Internet is inherently unreliable, a collection of networks connected to each other with fiber optics, copper, microwaves and trust. It’s a magical thing, but things on the Internet break all the time; cables get cut, bogus routes get advertised, routers crash. Most of the time, these failures are noticed but inexplicable to the average user — ”The Internet is slow today!” — frustrating user experiences as people go about their lives on the Internet.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6XfrNmoyuuVQ12stjKU1sO/a2601cdb4d5c874c634297f966008f1e/Argo_horizontal.png" />
            
            </figure><p>Today, to fix all of this, Cloudflare is launching <a href="https://cloudflare.com/argo">Argo</a>, a “virtual backbone” for the modern Internet. Argo analyzes and optimizes routing decisions across the global Internet in real-time. Think Waze, the automobile route optimization app, but for Internet traffic.</p><p>Just as Waze can tell you which route to take when driving by monitoring which roads are congested or blocked, Argo can route connections across the Internet efficiently by avoiding packet loss, congestion, and outages.</p><p>Cloudflare’s Argo is able to deliver content across our network with dramatically reduced latency, increased reliability, heightened encryption, and reduced cost vs. an equivalent path across the open Internet. The results are impressive: <b>an average 35% decrease in latency, a 27% decrease in connection errors, and a 60% decrease in cache misses</b>. Websites, APIs, and applications using Argo have seen bandwidth bills fall by more than half and speed improvements end users can feel.</p><p>Argo is a central nervous system for the Internet, processing information from every request we see to determine which routes are fast, which are slow, and what the optimum path from visitor to content is at that given moment. Through Cloudflare’s <a href="https://www.cloudflare.com/network/">115 PoPs</a> and 6 million domains, we see every ISP and every user of the Internet pass through our network. The intelligence from this gives us a billion eyes feeding information about brownouts, faults, and packet loss globally.</p><p>Today, Argo includes two core features: Smart Routing and Tiered Cache. All customers can enable Argo today in the <a href="https://www.cloudflare.com/a/traffic">Traffic app</a> in the dashboard. Argo is priced at $5/domain monthly, plus $0.10 per GB of transfer from Cloudflare to your visitors.</p>
    <div>
      <h3>Argo Smart Routing</h3>
      <a href="#argo-smart-routing">
        
      </a>
    </div>
    <p>Networks on the Internet rely on legacy technologies like <a href="https://en.wikipedia.org/wiki/Border_Gateway_Protocol">BGP</a> to propagate and calculate routes from network to network, ultimately getting you from laptop-on-couch to video-on-YouTube. BGP has been around for decades, and was not designed for a world with malicious or incompetent actors lurking at every network hop.</p><p>In one comical example from 2008, a Pakistani ISP turned a botched censorship order into a <a href="http://www.nytimes.com/2008/02/26/technology/26tube.html">global YouTube outage</a>, bringing the fragility of core Internet routing algorithms into the public eye. In the same situation, Argo Smart Routing would detect which transit providers had valid routes to YouTube and which did not, keeping end user experience fast, reliable, and secure.</p><p><a href="https://en.wikipedia.org/wiki/Metcalfe%27s_law">Metcalfe’s Law</a> states that the value of a network is defined by the square of the number of nodes that make up the network. The existing Internet is incredibly valuable because of the number and diversity of nodes connected to the network. Unfortunately, this makes it difficult to pick up and start over; no Internet started from scratch, with sounder routing and traffic management, would come close to delivering the value provided by the current incarnation without a similar network footprint.</p><p>Because of our physical and virtual presence around the world, Cloudflare is uniquely positioned to rebuild the core of the Internet. Every customer we bring on increases the size of our network and the value of that network to each of our customers. Argo is Metcalfe’s Law brought to life.</p><p>Argo Smart Routing uses latency and packet loss data collected from each request that traverses our network to pick optimal paths across the Internet. 
Using this latency data, we’re able to determine which of our transit providers are performing best between any two points on the planet. Cloudflare now sees about 10% of all HTTP/HTTPS requests on the Internet. With Argo, each of those requests is providing the insight necessary to speed up all of its peers.</p>
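<p>To make the idea concrete, path selection of this kind can be sketched as a shortest-path search over links weighted by measured latency, with lossy links penalized. This is an illustrative simplification, not Cloudflare's actual algorithm; the function and PoP names (<code>pick_path</code>, the latency and loss figures) are hypothetical.</p>

```python
import heapq

def pick_path(links, src, dst):
    """Pick the lowest-cost PoP-to-PoP path, where each link's cost is
    its measured latency inflated by its observed packet-loss rate.
    `links` maps a PoP to a list of (neighbor, latency_ms, loss_rate)."""
    best = {src: 0.0}
    queue = [(0.0, src, [src])]
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dst:
            return cost, path
        if cost > best.get(node, float("inf")):
            continue  # stale queue entry; a cheaper path was found
        for nbr, latency, loss in links.get(node, []):
            # A lossy link effectively costs more: retransmissions add delay.
            step = latency / max(1.0 - loss, 1e-6)
            nxt = cost + step
            if nxt < best.get(nbr, float("inf")):
                best[nbr] = nxt
                heapq.heappush(queue, (nxt, nbr, path + [nbr]))
    return float("inf"), []

# The direct transit link is 120 ms with 5% packet loss; relaying through
# another PoP is two clean 50 ms hops, so the relay wins.
links = {
    "SFO": [("LHR", 120.0, 0.05), ("EWR", 50.0, 0.0)],
    "EWR": [("LHR", 50.0, 0.0)],
}
cost, path = pick_path(links, "SFO", "LHR")
print(path)  # ['SFO', 'EWR', 'LHR']
```

<p>The key design point the sketch captures is that routing decisions are driven by continuously measured performance data rather than by BGP's static, policy-driven view of the network.</p>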
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1ni36rsDqDZXGjvJGN4mLY/2dfcfed39920719b2b5b2c50d8b5d8b0/916142_ddc2fd0140.jpg" />
            
            </figure><p><a href="https://creativecommons.org/licenses/by/2.0/">CC BY 2.0</a> <a href="https://www.flickr.com/photos/jurvetson/916142/in/photolist-5Gky-SWsDvP-hBvqn-fv6F5e-rUNDSQ-5cdENs-9W2TUo-4ziFqE-5CttRH-5pSrgB-8tbzvW-63k6cB-qoTfsN-qrQMkd-bjZX2J-9hCGJC-8QM8B1-2H7uc6-aygB5n-8B47Zf-4WF7HT-sMeQW-sMeNR-yP5Hfz-6DDNqT-sMeN8-aF9ufE-6UvUtt-7Tf66X-5CTbb4-2H7uw6-5ny4q-5B6pL4-sMeRJ-dMNYzx-35Hks2-d27sow-3xPib-6sgGGQ-5Rp8Fq-6gZLGF-bRZ4Wx-eVQmrb-9hzzzV-9hzsBz-SQ2H3n-6gZLyV-mJFXRy-9hCxZj-bcEVA">image</a> by <a href="https://www.flickr.com/photos/jurvetson/">Steve Jurvetson</a></p><p>Enabling Argo (and Smart Routing with it) results in breathtaking reductions in latency. As an example, <a href="https://www.okcupid.com/"><b>OKCupid</b></a><b> enabled Argo and immediately saw a 36% decrease in request latency</b>, as measured by TTFB (Time To First Byte). Without Argo, requests back to the origin from a Cloudflare PoP traverse the public Internet, subject to vagaries of routers, cables, and computers they will touch on their journey. With Argo, requests back to the origin are tunneled over our secure overlay network, on a path to the origin we've learned the performance of from all the requests that have traversed before it.</p><p>Transit over the public Internet is like driving with paper maps; it usually works, but using a modern navigation system that takes current traffic conditions into account will almost always be faster.</p><p>Routing over intelligently determined paths also results in significant reliability gains. Argo picks the fastest, most reliable route to the origin, which means routing around flapping links and routers that refuse to do their job. In a real-world illustration of these reliability gains, <b>OKCupid saw a 42% drop in the number of connection timeouts</b> on their site with Argo enabled.</p><p>It’s not just OKCupid that’s happy with Argo. 50,000 customers, large and small, have been beta testing Argo over the last 12 months. 
On average, these Argo Smart Routing beta customers saw a 35% decrease in latency and a 27% decrease in connection timeouts.</p>
    <div>
      <h3>Argo Tiered Cache</h3>
      <a href="#argo-tiered-cache">
        
      </a>
    </div>
    <p>Argo Tiered Cache uses the size of our network to reduce requests to customer origins by dramatically increasing cache hit ratios. By having 115 PoPs around the world, Cloudflare caches content very close to end users, but if a piece of content is not in cache, the Cloudflare edge PoP must contact the origin server to receive the cacheable content. This can be slow and places load on an origin server compared to serving directly from cache.</p><p>Argo Tiered Cache lowers origin load, increases cache hit ratios, and improves end user experience by first asking other Cloudflare PoPs if they have the requested content when a cache miss occurs. This results in improved performance for visitors, because distances and links traversed between Cloudflare PoPs are generally shorter and faster than the links between PoPs and origins. It also reduces load on origins, making web properties more economical to operate. Customers enabling Argo can expect to see a <b>60% reduction in their cache miss rate</b> as compared to Cloudflare’s traditional CDN service.</p><p>Argo Tiered Cache also concentrates connections to origin servers so they come from a small number of PoPs rather than the full set of 115 PoPs. This results in fewer open connections using server resources. In our testing, we've found many customers save more on their cloud hosting bills than Argo costs, because of reduced bandwidth usage and fewer requests to the origin. This makes the service a “no brainer” to enable.</p>
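<p>The lookup order described above — edge cache, then an upper-tier PoP, then origin — can be modeled in a few lines. This is a hedged sketch of the behavior, not Cloudflare's implementation; the class name <code>TieredCache</code> and its internals are hypothetical.</p>

```python
class TieredCache:
    """Illustrative tiered lookup: an edge PoP checks its own cache,
    then an upper-tier PoP's cache, and only on a true miss fetches
    from the origin, populating both tiers on the way back."""

    def __init__(self, fetch_origin):
        self.edge = {}          # per-PoP edge cache
        self.upper_tier = {}    # cache shared across edge PoPs
        self.fetch_origin = fetch_origin
        self.origin_hits = 0

    def get(self, key):
        if key in self.edge:            # edge hit: fastest path
            return self.edge[key]
        if key in self.upper_tier:      # tier hit: origin is spared
            self.edge[key] = self.upper_tier[key]
            return self.edge[key]
        self.origin_hits += 1           # true miss: contact the origin
        value = self.fetch_origin(key)
        self.upper_tier[key] = value
        self.edge[key] = value
        return value

cache = TieredCache(fetch_origin=lambda key: f"body-of-{key}")
cache.get("/logo.png")   # miss everywhere -> one origin fetch
cache.edge.clear()       # simulate the request landing on a different edge PoP
cache.get("/logo.png")   # served from the upper tier; origin untouched
print(cache.origin_hits)  # 1
```

<p>Even this toy model shows why the cache miss rate at the origin drops: a second edge PoP that has never seen the object can still be filled from the tier instead of generating another origin request.</p>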
    <div>
      <h3>Additional Benefits</h3>
      <a href="#additional-benefits">
        
      </a>
    </div>
    <p>In addition to performance and reliability gains, Argo also delivers a more secure online experience. All traffic between Cloudflare data centers is protected by mutually authenticated TLS, ensuring any traffic traversing the Argo backbone is protected from interception, tampering, and eavesdropping.</p><p>With Argo, we’ve rebuilt things at the very core of the Internet, the algorithms that figure out where traffic should flow and how. We’ve done all this without any disruption to how the Internet works or <a href="https://www.cloudflare.com/application-services/solutions/app-performance-monitoring/">how applications behave</a>.</p><p>Cloudflare has built a suite of products to address lots of pains on the Internet. Argo is our newest offering.</p><p>Go ahead and enable it — you’ll find it in the <a href="https://www.cloudflare.com/a/traffic">Traffic tab</a> in your dashboard.</p><p>PS. Interested in working on Argo? <a href="https://www.cloudflare.com/careers/">Drop us a line!</a></p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7hNDPxqURG6CNrMyZmDs1w/f7942f9dff0f8119310287c28e721a6f/Argo-infographic-1.jpg" />
            
            </figure> ]]></content:encoded>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[Speed & Reliability]]></category>
            <category><![CDATA[Speed]]></category>
            <category><![CDATA[Reliability]]></category>
            <category><![CDATA[Argo Smart Routing]]></category>
            <guid isPermaLink="false">2YdzvdlJkxHFF5TAkGPOy2</guid>
            <dc:creator>Rustam Lalkaka</dc:creator>
        </item>
    </channel>
</rss>