
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/">
    <channel>
        <title><![CDATA[ The Cloudflare Blog ]]></title>
        <description><![CDATA[ Get the latest news on how products at Cloudflare are built, technologies used, and join the teams helping to build a better Internet. ]]></description>
        <link>https://blog.cloudflare.com</link>
        <atom:link href="https://blog.cloudflare.com/" rel="self" type="application/rss+xml"/>
        <language>en-us</language>
        <image>
            <url>https://blog.cloudflare.com/favicon.png</url>
            <title>The Cloudflare Blog</title>
            <link>https://blog.cloudflare.com</link>
        </image>
        <lastBuildDate>Wed, 08 Apr 2026 10:05:43 GMT</lastBuildDate>
        <item>
            <title><![CDATA[How Cloudflare erroneously throttled a customer’s web traffic]]></title>
            <link>https://blog.cloudflare.com/how-cloudflare-erroneously-throttled-a-customers-web-traffic/</link>
            <pubDate>Tue, 07 Feb 2023 18:20:49 GMT</pubDate>
            <description><![CDATA[ Today’s post is a little different. It’s about a single customer’s website not working correctly because of incorrect action taken by Cloudflare. ]]></description>
            <content:encoded><![CDATA[ <p></p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6TlF3dN51Im2Zyzwy04oyr/8fbcd20d22abe9ba7b78dd673e9f04a1/BLOG-1707-header-1.png" />
            
            </figure><p>Over the years, when Cloudflare has had an <a href="/tag/outage/">outage</a> that affected our customers, we have very quickly blogged about what happened, why, and what we are doing to address the causes of the outage. Today’s post is a little different. It’s about a single customer’s website <a href="https://news.ycombinator.com/item?id=34639212">not working correctly</a> because of incorrect action taken by Cloudflare.</p><p>Although the customer was not in any way banned from Cloudflare, and did not lose access to their account, their website didn’t work. And it didn’t work because Cloudflare applied a bandwidth throttle between us and their origin server. The effect was that the website was unusable.</p><p>Because of this unusual throttle there was some internal confusion on our customer support team about what had happened. They incorrectly believed that the customer had been limited because of a breach of section 2.8 of our <a href="https://www.cloudflare.com/terms/">Self-Serve Subscription Agreement</a>, which prohibits use of our self-service CDN to serve excessive non-HTML content, such as images and video, without a paid plan that includes those services (this is, for example, designed to prevent someone building an image-hosting service on Cloudflare and consuming a huge amount of bandwidth; for that sort of use case we have paid <a href="https://www.cloudflare.com/products/cloudflare-images/">image</a> and <a href="https://www.cloudflare.com/products/cloudflare-stream/">video</a> plans).</p><p>However, this customer wasn’t breaking section 2.8, and they were a paying customer, both of Cloudflare generally and of Cloudflare Workers, through which the throttled traffic was passing. This throttle should not have happened. 
In addition, there is and was no need for the customer to upgrade to some other plan level.</p><p>This incident has set off a number of workstreams inside Cloudflare to ensure better communication between teams, prevent such an incident happening, and to ensure that communications between Cloudflare and our customers are much clearer.</p><p>Before we explain our own mistake and how it came to be, we’d like to apologize to the customer. We realize the serious impact this had, and how we fell short of expectations. In this blog post, we want to explain what happened, and more importantly what we’re going to change to make sure it does not happen again.</p>
    <div>
      <h3>Background</h3>
      <a href="#background">
        
      </a>
    </div>
    <p>On February 2, an on-call network engineer received an alert for a congesting interface with Equinix IX in our Ashburn data center. While this is not an unusual alert, this one stood out for two reasons. First, it was the second day in a row that it happened, and second, the congestion was due to a sudden and extreme spike of traffic.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2BHPQTMGbXizjZHfUEwf7S/7b9665371ed4e291c5df992066bc3c82/image2-1.png" />
            
            </figure><p>The engineer in charge identified the customer’s domain, tardis.dev, as being responsible for this sudden spike of traffic between Cloudflare and their origin network, a storage provider. Because this congestion happened on a physical interface connected to external peers, there was an immediate impact on many of our customers and peers. Port congestion like this typically incurs packet loss, slow throughput, and higher than usual latency. While we have automatic mitigation in place for congesting interfaces, in this case the mitigation was unable to resolve the impact completely.</p><p>The customer’s traffic suddenly jumped from an average of 1,500 requests per second with a 0.5 MB payload per request to 3,000 requests per second (2x) with a payload of more than 12 MB per request (25x).</p>
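To see why this spike could congest a physical interface, it helps to convert the request rates into bandwidth. A back-of-the-envelope sketch, using only the averages quoted above:

```python
# Back-of-the-envelope bandwidth from the averages quoted in the post.
def gbps(requests_per_second: float, mb_per_request: float) -> float:
    """Request rate x payload size, converted to gigabits per second."""
    return requests_per_second * mb_per_request * 8 / 1000  # MB/s -> Gbps

before = gbps(1_500, 0.5)  # ~6 Gbps on average
after = gbps(3_000, 12)    # ~288 Gbps, a ~48x increase
print(f"before: {before:.0f} Gbps, after: {after:.0f} Gbps")
```

Hundreds of gigabits per second pulled from a single origin is more than enough to saturate a peering port.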
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3ROLTRXsuoeYEpw0ewWDFX/c4af0bfaf7ef4c3c3966058153416209/image1-4.png" />
            
            </figure><p>The congestion happened between Cloudflare and the origin network. Caching did not help because the requests were all for unique URLs going to the origin, and therefore we had no ability to serve them from cache.</p><p><b>A Cloudflare engineer decided to apply a throttling mechanism to prevent the zone from pulling so much traffic from their origin. Let's be very clear on this action: Cloudflare does not have an established process to throttle customers that consume large amounts of bandwidth, and does not intend to have one. This remediation was a mistake; it was not sanctioned, and we deeply regret it.</b></p><p>We lifted the throttle through internal escalation 12 hours and 53 minutes after having set it up.</p>
    <div>
      <h3>What's next</h3>
      <a href="#whats-next">
        
      </a>
    </div>
    <p>To make sure a similar incident does not happen again, we are establishing clear rules to mitigate issues like this one. Any action taken against a customer domain, paying or not, will require multiple levels of approval and clear communication to the customer. Our tooling will be improved to reflect this. We have many traffic-shaping tools for situations where a huge spike of traffic affects a link, and we could have applied a different mitigation in this instance.</p><p>We are in the process of rewriting our terms of service to better reflect the type of services that our customers deliver on our platform today. We are also committed to explaining to our users in plain language what is permitted under self-service plans. As a developer-first company with transparency as one of its core principles, we know we can do better here. We will follow up with a blog post dedicated to these changes later.</p><p>Once again, we apologize to the customer for this action and for the confusion it created for other Cloudflare customers.</p> ]]></content:encoded>
            <category><![CDATA[Customers]]></category>
            <category><![CDATA[Transparency]]></category>
            <guid isPermaLink="false">5Ulx28kIpVehkdG8jDUoLB</guid>
            <dc:creator>Jeremy Hartman</dc:creator>
            <dc:creator>Jérôme Fleury</dc:creator>
        </item>
        <item>
            <title><![CDATA[RPKI and BGP: our path to securing Internet Routing]]></title>
            <link>https://blog.cloudflare.com/rpki-details/</link>
            <pubDate>Wed, 19 Sep 2018 12:01:00 GMT</pubDate>
            <description><![CDATA[ This article will talk about our approach to network security using technologies like RPKI to sign Internet routes and protect our users and customers from route hijacks and misconfigurations. ]]></description>
            <content:encoded><![CDATA[ <p></p><p>This article will talk about our approach to <a href="https://www.cloudflare.com/network-security/">network security</a> using technologies like RPKI to sign Internet routes and protect our users and customers from route hijacks and misconfigurations. We are proud to announce we have started deploying active filtering by using RPKI for routing decisions and signing our routes.</p><p>Back in April, news stories, including our blog post on <a href="/bgp-leaks-and-crypto-currencies/">BGP and route-leaks</a>, highlighted how IP addresses can be redirected maliciously or by mistake. Despite its enormous scale, the underlying routing infrastructure, the bedrock of the Internet, has remained mostly unsecured.</p><p>At Cloudflare, we decided to secure our part of the Internet by protecting our customers and everyone using our services, including our recursive resolver <a href="https://www.cloudflare.com/learning/dns/what-is-1.1.1.1/">1.1.1.1</a>.</p>
    <div>
      <h3>From BGP to RPKI, how do we Internet?</h3>
      <a href="#from-bgp-to-rpki-how-do-we-internet">
        
      </a>
    </div>
    <p>A prefix is a range of IP addresses, for instance, <code>10.0.0.0/24</code>, whose first address is <code>10.0.0.0</code> and whose last is <code>10.0.0.255</code>. A computer or a server usually has one. A router creates a list of reachable prefixes called a routing table and uses this routing table to transport packets from a source to a destination.</p><p>On the Internet, network devices exchange routes via a protocol called <a href="https://www.cloudflare.com/learning/security/glossary/what-is-bgp/">BGP</a> (Border Gateway Protocol). BGP enables a map of the interconnections on the Internet so that packets can be sent across different networks to reach their final destination. BGP binds the separate networks together into the Internet.</p><p>This dynamic protocol is also what makes the Internet so resilient, by providing multiple paths in case a router along the way fails. A BGP announcement is usually composed of a <i>prefix</i> that can be reached at a <i>destination</i> and was originated by an <i>Autonomous System Number</i> (ASN).</p><p>IP addresses and Autonomous System Numbers are allocated by five Regional Internet Registries (RIR): <a href="https://afrinic.net/">Afrinic</a> for Africa, <a href="https://www.apnic.net/">APNIC</a> for Asia-Pacific, <a href="https://www.arin.net">ARIN</a> for North America, <a href="https://www.lacnic.net">LACNIC</a> for Central and South America and <a href="https://www.ripe.net">RIPE</a> for Europe, the Middle East and Russia. Each one operates independently.</p>
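The prefix arithmetic above can be checked with Python's standard `ipaddress` module; the longest-prefix-match loop below is a deliberately simplified model of what a router's forwarding table does:

```python
import ipaddress

# A /24 prefix covers 256 addresses, from .0 to .255.
prefix = ipaddress.ip_network("10.0.0.0/24")
first = prefix.network_address       # 10.0.0.0
last = prefix.broadcast_address      # 10.0.0.255
print(prefix.num_addresses, first, last)

# A routing table is, conceptually, a list of such prefixes;
# longest-prefix match decides which entry forwards a packet.
table = [ipaddress.ip_network(p) for p in ("10.0.0.0/8", "10.0.0.0/24")]
dst = ipaddress.ip_address("10.0.0.42")
best = max((n for n in table if dst in n), key=lambda n: n.prefixlen)
print(best)  # the /24 wins over the /8: it is the more specific route
```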
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3fnGjueGh1e3tInixUT2zc/49f63762c184aacac85079da9e50b744/rirs-01.png" />
            
            </figure><p>With more than 700,000 IPv4 routes and 40,000 IPv6 routes announced by all Internet actors, it is difficult to know who owns which resource.</p><p>There is no simple relationship between the entity that has a prefix assigned, the one that announces it with an ASN and the ones that receive or send packets with these IP addresses. An entity owning <code>10.0.0.0/8</code> may be delegating a subset <code>10.1.0.0/24</code> of that space to another operator while being announced through the AS of another entity.</p><p>Thus, a route leak or a route hijack is defined as the illegitimate advertisement of an IP space. The term <i>route hijack</i> implies a malicious purpose while a route leak usually happens because of a misconfiguration.</p><p>A change in route will cause the traffic to be redirected via other networks. Unencrypted traffic can be read and modified. HTTP webpages and DNS without DNSSEC are vulnerable to these exploits.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/wdvXA5fV7m7z6e5Xdtqx2/adce2d2dc0b79010b7c4a3e353fec50a/bgp-hijacking-technical-flow-1.png" />
            
            </figure><p>You can learn more about BGP Hijacking in our <a href="https://www.cloudflare.com/learning/security/glossary/bgp-hijacking/">Learning Center</a>.</p><p>When an illegitimate announcement is detected by a peer, they usually notify the origin and reconfigure their network to reject the invalid route. Unfortunately, detecting and acting on the problem may take from a few minutes up to a few days, more than enough time to steal cryptocurrencies, <a href="https://en.wikipedia.org/wiki/DNS_spoofing">poison a DNS</a> cache or make a website unavailable.</p><p>A few systems exist to document and prevent illegitimate BGP announcements.</p><p><b>The Internet Routing Registries (IRR)</b> are semi-public databases used by network operators to register their assigned Internet resources. Some database maintainers do not check whether the entry was actually made by the owner, nor whether the prefix has been transferred to somebody else. This makes them prone to error and not completely reliable.</p><p><b>Resource Public Key Infrastructure (RPKI)</b> is similar to the IRR “route” objects, but with cryptographic authentication added.</p><p>Here’s how it works: each RIR has a root certificate. They can generate a signed certificate for a Local Internet Registry (LIR, a.k.a. a network operator) with all the resources they are assigned (IPs and ASNs). The LIR then signs the prefix containing the origin AS that they intend to use: a ROA (Route Origin Authorization) is created. ROAs are just simple X.509 certificates.</p><p>If you are used to <a href="https://www.cloudflare.com/application-services/products/ssl/">SSL/TLS certificates</a> used by browsers to authenticate the holder of a website, then ROAs are the equivalent in the Internet routing world.</p>
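Stripped of its X.509 wrapping, the authorization a ROA expresses is just three fields: a prefix, a maximum prefix length, and the origin ASN allowed to announce it. A minimal sketch of that payload, with illustrative field names rather than the actual certificate encoding:

```python
import ipaddress
from dataclasses import dataclass

@dataclass(frozen=True)
class Roa:
    """The information a ROA attests to (illustrative, not the wire format)."""
    prefix: ipaddress.IPv4Network
    max_length: int   # longest more-specific announcement allowed
    origin_asn: int

# Cloudflare's 1.1.1.0/24 is originated by AS13335.
roa = Roa(ipaddress.ip_network("1.1.1.0/24"), max_length=24, origin_asn=13335)

def authorizes(roa: Roa, prefix: ipaddress.IPv4Network, asn: int) -> bool:
    """Does this ROA authorize `asn` to originate `prefix`?"""
    return (
        asn == roa.origin_asn
        and prefix.subnet_of(roa.prefix)
        and prefix.prefixlen <= roa.max_length
    )

print(authorizes(roa, ipaddress.ip_network("1.1.1.0/24"), 13335))  # True
print(authorizes(roa, ipaddress.ip_network("1.1.1.0/24"), 64512))  # False: wrong origin
```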
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/790gd4F1LviExZpxP0HAFk/f1430645a7c769788129fb47c3b9dca6/roas_3x-1.png" />
            
            </figure>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4tQYkBINEbYbCSl7cM5Jvw/568c706d6256d0f44740aecb28297cd6/routing-rpki-2-01.png" />
            
            </figure>
    <div>
      <h3>Signing prefixes</h3>
      <a href="#signing-prefixes">
        
      </a>
    </div>
    <p>Each network operator owning and managing Internet resources (IP addresses, Autonomous System Numbers) has access to their Regional Internet Registry portal. Signing their prefixes through the portal or the API of their RIR is the easiest way to start with RPKI.</p><p>Because of our global presence, Cloudflare has resources in all five RIR regions. With more than 800 prefix announcements across different ASNs, the first step was to ensure the prefixes we were going to sign were correctly announced.</p><p>We started by signing our less used prefixes, checked if the traffic levels remained the same and then signed more prefixes. Today about 25% of Cloudflare prefixes are signed. This includes our critical DNS servers and our <a href="https://one.one.one.one">public 1.1.1.1 resolver</a>.</p>
    <div>
      <h3>Enforcing validated prefixes</h3>
      <a href="#enforcing-validated-prefixes">
        
      </a>
    </div>
    <p>Signing the prefixes is one thing. But ensuring that the prefixes we receive from our peers match their certificates is another.</p><p>The first part is validating the certificate chain. It is done by synchronizing the RIR databases of ROAs through rsync (although there are some new proposals regarding <a href="https://tools.ietf.org/html/rfc8182">distribution over HTTPS</a>), then checking the signature of every ROA against the RIR certificate’s public key. Once the valid records are known, this information is sent to the routers.</p><p>Major vendors support a protocol called <a href="https://tools.ietf.org/html/rfc6810">RPKI to Router Protocol</a> (abbreviated as RTR). This is a simple protocol for passing a list of valid prefixes with their origin ASN and expected mask length. However, while the RFC defines four different secure transport methods, vendors have only implemented the insecure one. Routes sent in clear text over TCP can be tampered with.</p>
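Once a router holds the validated ROA payloads, every received announcement falls into one of three origin-validation states: valid, invalid, or not-found (RFC 6811). A simplified sketch of that decision, mixing made-up ASNs with Cloudflare's real AS13335:

```python
import ipaddress

# Validated ROA payloads as (prefix, max_length, origin_asn) tuples --
# the same three fields RTR carries. AS64500 is a documentation ASN.
VRPS = [
    (ipaddress.ip_network("1.1.1.0/24"), 24, 13335),
    (ipaddress.ip_network("10.0.0.0/8"), 16, 64500),
]

def origin_validation(prefix: ipaddress.IPv4Network, origin_asn: int) -> str:
    """RFC 6811-style route origin validation against a list of VRPs."""
    covering = [(p, ml, asn) for p, ml, asn in VRPS if prefix.subnet_of(p)]
    if not covering:
        return "not-found"   # no ROA covers this prefix at all
    for _, max_len, asn in covering:
        if origin_asn == asn and prefix.prefixlen <= max_len:
            return "valid"
    return "invalid"         # covered, but wrong origin or too specific

print(origin_validation(ipaddress.ip_network("1.1.1.0/24"), 13335))   # valid
print(origin_validation(ipaddress.ip_network("10.0.1.0/24"), 64500))  # invalid: /24 exceeds maxLength /16
print(origin_validation(ipaddress.ip_network("192.0.2.0/24"), 64501)) # not-found
```

A router enforcing strict validation would drop the "invalid" announcement and typically accept the other two.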
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7ahyNsczbXfzaYtuvvbJ6g/2bf47a4a24b03da433a29a456725cce3/RPKI-diagram-_3x-2.png" />
            
            </figure><p>With more than 150 routers across the globe, it would be unsafe to rely on these cleartext TCP sessions running over the insecure and lossy Internet to our validator. We needed local distribution on a link we know to be secure and reliable.</p><p>One option we considered was to install an RPKI RTR server and a validator in each of our 150+ datacenters, but doing so would cause a significant increase in operational cost and reduce debugging capabilities.</p>
    <div>
      <h4>Introducing GoRTR</h4>
      <a href="#introducing-gortr">
        
      </a>
    </div>
    <p>We needed an easier way of passing an RPKI cache securely. After some system design sessions, we settled on building the list of valid routes on a central validator, distributing it securely via our own <a href="https://www.cloudflare.com/learning/cdn/what-is-a-cdn/">Content Delivery Network</a>, and using a lightweight local RTR server. This server fetches the cache file over HTTPS and passes the routes over RTR.</p><p>Rolling out this system on all our PoPs using automation was straightforward and we are progressively moving towards enforcing strict validation of RPKI signed routes everywhere.</p>
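The cache the local RTR server consumes is simply a list of validated ROA payloads. As a rough illustration, here is how such a JSON cache file might be parsed; the field names follow the shape of the published rpki.json format but should be treated as an assumption, with the GoRTR repository as the authoritative reference:

```python
import json

# Hypothetical excerpt of a GoRTR-style cache file; field names are assumed
# from the published rpki.json shape (verify against the GoRTR README).
cache = json.loads("""
{
  "roas": [
    {"prefix": "1.1.1.0/24", "maxLength": 24, "asn": "AS13335", "ta": "arin"},
    {"prefix": "2606:4700::/32", "maxLength": 32, "asn": "AS13335", "ta": "arin"}
  ]
}
""")

# In production the RTR server fetches this over HTTPS, verifies it, and
# streams the entries to routers as RTR prefix PDUs.
for roa in cache["roas"]:
    asn = int(roa["asn"].removeprefix("AS"))
    print(roa["prefix"], roa["maxLength"], asn)
```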
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/ztRweRuwqRgBnTauHI0P9/215d2f4b6ee60a54a26956d55e3f3839/gortr-2-01.png" />
            
            </figure><p>To encourage adoption of Route Origin Validation on the Internet, we also want to provide this service to everyone, for free. You can already download our <a href="https://github.com/cloudflare/gortr">RTR server</a> with the cache behind Cloudflare. Just configure your <a href="https://www.juniper.net/documentation/en_US/junos/topics/topic-map/bgp-origin-as-validation.html">Juniper</a> or <a href="https://www.cisco.com/c/en/us/td/docs/routers/asr9000/software/asr9k_r6-1/routing/configuration/guide/b-routing-cg-asr9k-61x/b-routing-cg-asr9k-61x_chapter_010.html#concept_A84818AD41744DFFBD094DA7FCD7FE8B">Cisco</a> router. And if you do not want to use our file of prefixes, it is compatible with the RIPE RPKI Validator Export format.</p><p>We are also working on providing a public RTR server using our own <a href="https://www.cloudflare.com/products/cloudflare-spectrum/">Spectrum service</a> so that you will not have to install anything, just make sure you peer with us! Cloudflare is present on many Internet Exchange Points so we are one hop away from most routers.</p>
    <div>
      <h3>Certificate transparency</h3>
      <a href="#certificate-transparency">
        
      </a>
    </div>
    <p>A few months ago, <a href="/author/nick-sullivan/">Nick Sullivan</a> introduced our new <a href="/introducing-certificate-transparency-and-nimbus/">Nimbus Certificate Transparency Log</a>.</p><p>In order to track the certificates issued in the RPKI, our Crypto team created a new Certificate Transparency Log called <a href="https://ct.cloudflare.com/logs/cirrus">Cirrus</a> which includes the five RIRs’ root certificates as trust anchors. Certificate transparency is a great tool for detecting bad behavior in the RPKI because it keeps a permanent record of all valid certificates that are submitted to it in an append-only database that can’t be modified without detection. It also enables users to download the entire set of certificates via an HTTP API.</p>
    <div>
      <h3>Being aware of route leaks</h3>
      <a href="#being-aware-of-route-leaks">
        
      </a>
    </div>
    <p>We use services like <a href="https://www.bgpmon.net">BGPmon</a> and other public observation services extensively to ensure quick action if some of our prefixes are leaked. We also have internal BGP and BMP collectors, aggregating more than 60 million routes and processing live updates.</p><p>Our filters make use of this live feed to ensure we are alerted when a suspicious route appears.</p>
    <div>
      <h3>The future</h3>
      <a href="#the-future">
        
      </a>
    </div>
    <p>The <a href="https://blog.benjojo.co.uk/post/are-bgps-security-features-working-yet-rpki">latest statistics</a> suggest that around 8.7% of the IPv4 Internet routes are signed with RPKI, but only 0.5% of all the networks apply strict RPKI validation. Even with RPKI validation enforced, a BGP actor could still impersonate your origin AS and advertise your BGP route through a malicious router configuration.</p><p>However, that can be partially mitigated by denser interconnection, which Cloudflare already has through an extensive network of private and public interconnections. To be fully effective, RPKI must be deployed by multiple major network operators.</p><p>As said by <a href="http://instituut.net/~job/">Job Snijders</a> from NTT Communications, who’s been at the forefront of efforts to secure Internet routing:</p><blockquote><p>If the Internet's major content providers use RPKI and validate routes, the impact of BGP attacks is greatly reduced because protected paths are formed back and forth. It'll only take a small specific group of densely connected organizations to positively impact the Internet experience for billions of end users.</p></blockquote><p>RPKI is not a bullet-proof solution to securing all routing on the Internet, but it represents the first milestone in moving from trust-based to authentication-based routing. Our intention is to demonstrate that it can be done simply and cost-efficiently. We are inviting operators of critical Internet infrastructure to follow us in a large-scale deployment.</p><p><a href="/subscribe/"><i>Subscribe to the blog</i></a><i> for daily updates on our announcements.</i></p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1H1JDnSWx28sdAjdF3sd0Z/b5425cf515b60c2c3a9be6b2420d8a3b/CRYPTO-WEEK-banner_2x.png" />
            
            </figure><p></p> ]]></content:encoded>
            <category><![CDATA[Crypto Week]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[RPKI]]></category>
            <category><![CDATA[BGP]]></category>
            <category><![CDATA[Cryptography]]></category>
            <guid isPermaLink="false">2D6tCrWBtiucUXYsoEFWJZ</guid>
            <dc:creator>Jérôme Fleury</dc:creator>
            <dc:creator>Louis Poinsignon</dc:creator>
        </item>
        <item>
            <title><![CDATA[Hurricane Irma]]></title>
            <link>https://blog.cloudflare.com/irma/</link>
            <pubDate>Sat, 09 Sep 2017 15:10:00 GMT</pubDate>
            <description><![CDATA[ Yesterday, we described how Hurricane Irma impacted several Caribbean islands, with the damage including a significant disruption to Internet access. ]]></description>
            <content:encoded><![CDATA[ <p>Yesterday, we described how Hurricane Irma impacted several Caribbean islands, with the damage including a significant <a href="/the-story-of-two-outages/">disruption to Internet access</a>.</p>
            <figure>
            <a href="https://www.accuweather.com/en/weather-news/major-hurricane-irma-likely-to-deliver-destructive-blow-to-florida-this-weekend/70002657">
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/15C0Sv2OYS3PGzBuEuNwzR/8de3d21e2fa83117eb3d07ce4f8e216c/irma.jpeg.jpeg" />
            </a>
            </figure><p>As Irma is now forecast to hit southern Florida as a Category 5 storm this weekend, with gusts of up to 155 mph, it is also expected that Internet infrastructure in the region will suffer.</p><p>At the time of writing, we haven’t noticed any decrease in traffic in the region of Miami despite calls to evacuate.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2mOeFVhbXloT1EwcHBx0PD/ee4777b8d1534db18090ddbe48310e65/Screen-Shot-2017-09-09-at-8.23.20-AM.png" />
            
            </figure>
    <div>
      <h3>Resilient Data Centers</h3>
      <a href="#resilient-data-centers">
        
      </a>
    </div>
    <p>Contrary to popular belief, <a href="https://en.wikipedia.org/wiki/ARPANET#Debate_on_design_goals">the Internet wasn't built for the purpose of resisting a nuclear attack</a>. That doesn't mean that datacenters aren't built to resist catastrophic events.</p><p>The Miami datacenter housing servers for Cloudflare and other Internet operators is classified as Tier IV. What does this tiering mean? As defined by the ANSI (American National Standards Institute), a <a href="https://journal.uptimeinstitute.com/explaining-uptime-institutes-tier-classification-system/">Tier IV datacenter</a> is the most stringent classification in terms of redundancy of the critical components of a datacenter: power and cooling. It guarantees 99.995% uptime per year, that is, only about 26 minutes of unavailability. Tier IV datacenters provide this level of uptime by being connected to separate power grids, allowing their customers to connect their devices to both of these grids. They also provide fuel-powered backup generators, which can themselves be made redundant, for up to 96 hours of autonomy.</p>
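The 26-minute figure follows directly from the 99.995% number; a quick sanity check:

```python
# Downtime budget implied by an availability figure, in minutes per year.
def downtime_minutes_per_year(availability: float) -> float:
    return (1 - availability) * 365 * 24 * 60

# Tier IV: 99.995% uptime -> roughly 26 minutes of unavailability per year.
print(round(downtime_minutes_per_year(0.99995), 1))
```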
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6TRBQ9ONCW5zs2KdjTJZ38/0baac386d083c01e9667b3ecb0ac80a1/tier-4-data-center.jpg" />
            
            </figure><p>Data center facilities have already taken precautionary measures over the last few days, one of them contacting their customers with the following (excerpt):</p><ul><li><p><i>The generators have been tested, fuel tanks are full, levels verified.</i></p></li><li><p><i>Special arrangements have been made to make sure staff is available on site as necessary to maintain operability standards.</i></p></li><li><p><i>We will secure hotel rooms near the sites, and have cots, MREs, and other emergency supplies on site should the situation become extreme.</i></p></li></ul><p>Due to their importance in Internet infrastructure, Tier IV datacenters also have the most available connectivity to the Internet. This is the case for our Miami data center, which is connected to multiple Tier 1 transit providers and Internet Exchanges, and will provide backup routes in case of an outage affecting any particular provider.</p><p>As a last resort, in the event our Miami datacenter were taken offline, our Anycast routing will smoothly reroute packets to our nearest data centers in the United States: Tampa, Atlanta and Ashburn (Washington DC).</p><p>Our technical teams will take all the necessary steps to ensure our services stay online during these unfortunate events. We’d like to remind our users to follow all precautions, and <a href="http://www.floridadisaster.org/info/knowyourzone.htm">evacuate the regions as advised by the local authorities</a>.</p> ]]></content:encoded>
            <category><![CDATA[Hurricane]]></category>
            <category><![CDATA[Community]]></category>
            <category><![CDATA[Data Center]]></category>
            <guid isPermaLink="false">6klluLhocSBvzmubjdTE5Q</guid>
            <dc:creator>Jérôme Fleury</dc:creator>
        </item>
        <item>
            <title><![CDATA[The Internet is Hostile: Building a More Resilient Network]]></title>
            <link>https://blog.cloudflare.com/the-internet-is-hostile-building-a-more-resilient-network/</link>
            <pubDate>Tue, 08 Nov 2016 18:56:49 GMT</pubDate>
            <description><![CDATA[ The strength of the Internet is its ability to interconnect all sorts of networks — big data centers, e-commerce websites at small hosting companies, Internet Service Providers (ISP), and Content Delivery Networks (CDN) — just to name a few.  ]]></description>
            <content:encoded><![CDATA[ <p>In a recent <a href="/a-post-mortem-on-this-mornings-incident/">post</a> we discussed how we have been adding resilience to our network.</p><p>The strength of the Internet is its ability to interconnect all sorts of networks — big data centers, <a href="https://www.cloudflare.com/ecommerce/">e-commerce websites</a> at small hosting companies, Internet Service Providers (ISP), and <a href="https://www.cloudflare.com/learning/cdn/what-is-a-cdn/">Content Delivery Networks (CDN)</a> — just to name a few. These networks are either interconnected with each other directly using a dedicated physical fiber cable, through a common interconnection platform called an Internet Exchange (IXP), or they can even talk to each other by simply being on the Internet connected through intermediaries called transit providers.</p><p>The Internet is like the network of roads across a country and navigating roads means answering questions like “How do I get from Atlanta to Boise?” The Internet equivalent of that question is asking how to reach one network from another. For example, as you are reading this on the Cloudflare blog, your web browser is connected to your ISP and packets from your computer found their way across the Internet to Cloudflare’s blog server.</p><p>Figuring out the route between networks is accomplished through a protocol designed 25 years ago (on <a href="http://www.computerhistory.org/atchm/the-two-napkin-protocol/">two napkins</a>) called <a href="https://en.wikipedia.org/wiki/Border_Gateway_Protocol">BGP</a>.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4hf3aJRfh6KcsuPEJfYTSe/27f3c58f9490b6f6d135dafc287ee486/BGP.jpg" />
            
            </figure><p>BGP allows interconnections between networks to change dynamically. It provides an administrative protocol to exchange routes between networks, and allows for withdrawals in case a path is no longer viable (when some route no longer works).</p><p>The Internet has become such a complex set of tangled fibers, neighboring routers, and millions of servers that you can be certain there is a server failing or an optical fiber being damaged at any moment, whether it’s in a datacenter, a trench next to a railroad, or <a href="https://en.wikipedia.org/wiki/2008_submarine_cable_disruption">at the bottom of the ocean</a>. The reality is that the Internet is in a constant state of flux as connections break and are fixed; its incredible strength is that it operates in the face of a real world where conditions constantly change.</p><p>While BGP is the cornerstone of Internet routing, it does not provide first-class mechanisms to automatically deal with these events, nor does it provide tools to manage quality of service in general.</p><p>Although BGP is able to handle the coming and going of networks with grace, it wasn’t designed to deal with Internet brownouts. One common problem is that a connection enters a state where it hasn’t failed, but isn’t working correctly either. This usually presents itself as packet loss: packets enter a connection and never arrive at their destination. The only solution to these brownouts is active, continuous monitoring of the health of the Internet.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2HIfOhD0jflahkGrgaVNZA/3165e9cdb056961f56364bc86568a508/916142_ddc2fd0140_o.gif" />
            
            </figure><p><a href="https://creativecommons.org/licenses/by/2.0/">CC BY 2.0</a> <a href="https://www.flickr.com/photos/jurvetson/916142/in/photolist-5Gky-6yWE3-e2fQKB-eWnwZ-6wHvD2-dgZcgm-6KosGR-e3Hopo-px8hdd-7ZC1ZE-6Kpc5g-mwSyiS-mwSmbA-9cEASh-jXe2g-mDdk7X-far2ZD-ajTJc1-jVhjV4-fsq7m3-p7tksy-6Dfpax-7mbpjF-8m3K8i-ryoZoC-7wCB5-687rPk-njcKr4-7wzXXn-4EnS8a-2kafd3-tcCu6-tcCti-6V5RaY-pGCHzT-4yzuYg-9uwrFi-d9CeFw-7BfKzq-7Bc244-7Bc15c-7BfQRh-6JRaSd-7Bc1zp-7BfRgJ-k17mn-6JM5pM-q4R4Kw-aBAv3F-7BfNCY">image</a> by <a href="https://www.flickr.com/photos/jurvetson/">Steve Jurvetson</a></p><p>Again, the metaphor of a system of roads is useful. A printed map may tell you the route from one city to another, but it won't tell you where there's a traffic jam. However, modern GPS applications such as Waze can tell you which roads are congested and which are clear. Similarly, Internet monitoring shows which parts of the Internet are blocked or losing packets and which are working well.</p><p>At Cloudflare we decided to deploy our own mechanisms to react to unpredictable events causing these brownouts. While most events do not fall under our jurisdiction — they are “external” to the Cloudflare network — we have to operate a reliable service by minimizing the impact of external events.</p><p>This is a journey of continual improvement, and it can be deconstructed into a few simple components:</p><ul><li><p>Building an exhaustive and consistent view of the quality of the Internet</p></li><li><p>Building a detection and alerting mechanism on top of this view</p></li><li><p>Building the automatic mitigation mechanisms to ensure the best reaction time</p></li></ul>
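<p>The first component, building a consistent view of Internet quality, starts with probe results. A minimal sketch of the bookkeeping behind one probe target, in plain Python (illustrative only; this is not Cloudflare's actual tooling):</p>

```python
from collections import deque

class ProbeWindow:
    """Rolling packet-loss estimate over the last `size` probe results.

    Each probe result is True (a reply came back) or False (it timed out).
    A sliding window smooths out one-off timeouts while still reacting
    quickly to a sustained brownout.
    """
    def __init__(self, size=100):
        self.results = deque(maxlen=size)

    def record(self, received: bool):
        self.results.append(received)

    def loss_rate(self) -> float:
        if not self.results:
            return 0.0
        lost = sum(1 for r in self.results if not r)
        return lost / len(self.results)

window = ProbeWindow(size=10)
for outcome in [True] * 9 + [False]:   # 9 replies, 1 timeout
    window.record(outcome)
print(window.loss_rate())  # 0.1 -> 10% loss over the window
```

<p>Thousands of such windows, one per (location, provider) pair, are what turn raw probes into the real-time map described below.</p>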
    <div>
      <h3>Monitoring the Internet</h3>
      <a href="#monitoring-the-internet">
        
      </a>
    </div>
    <p>Having deployed our network in <a href="/amsterdam-to-zhuzhou-cloudflare-global-network/">a hundred locations</a> worldwide, we are in a unique position to monitor the quality of the Internet from a wide variety of locations. To do this, we are leveraging the probing capabilities of our network hardware and have added some extra tools that we’ve built.</p><p>By collecting data from thousands of automatically deployed probes, we have a real-time view of the Internet’s infrastructure: packet loss in any of our transit provider’s backbones, packet loss on Internet Exchanges, or packet loss between continents. It is salutary to watch this real-time view over time and realize how often parts of the Internet fail and how resilient the overall network is.</p><p>Our monitoring data is stored in real-time in our metrics pipeline powered by a mix of open-source software: <a href="http://zeromq.org">ZeroMQ</a>, <a href="https://prometheus.io">Prometheus</a> and <a href="http://opentsdb.net/">OpenTSDB</a>. The data can then be queried and filtered on a single dashboard to give us a clear view of the state of a specific transit provider, or one specific PoP.</p>
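<p>The query-and-filter step the dashboard performs can be sketched as a tiny tagged time-series store. This is a stand-in, in plain Python with illustrative names, for what the real ZeroMQ/Prometheus/OpenTSDB pipeline does at scale:</p>

```python
import time
from collections import defaultdict

class LossMetrics:
    """Minimal in-memory time series keyed by (pop, provider) tags."""
    def __init__(self):
        # (pop, provider) -> list of (timestamp, loss %)
        self.series = defaultdict(list)

    def record(self, pop, provider, loss_pct, ts=None):
        self.series[(pop, provider)].append((ts or time.time(), loss_pct))

    def query(self, pop=None, provider=None):
        """Filter series by PoP and/or provider, like the dashboard view."""
        return {
            (p, prov): points
            for (p, prov), points in self.series.items()
            if (pop is None or p == pop) and (provider is None or prov == provider)
        }

m = LossMetrics()
m.record("SJC", "TransitA", 3.2, ts=1)   # loss seen from San Jose via one transit
m.record("SJC", "TransitB", 0.1, ts=1)
m.record("AMS", "TransitA", 0.0, ts=1)
print(len(m.query(provider="TransitA")))  # two PoPs report on this provider
```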
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/451Jz0B6feZP4SHDMdUm0q/85f6ccc1e0ca94684c2abc4bef3c30d1/loss_1.gif" />
            
            </figure><p>Above we can see a time-lapse of a transit provider having some packet loss issues.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3g8KEnve4KdnOmZqvTWLrD/0666a47829dce22097cbdf4aa68bfdf5/Screenshot-2016-10-30-16.34.45.png" />
            
            </figure><p>Here we see a transit provider having some trouble on the US West Coast on October 28, 2016.</p>
    <div>
      <h3>Building a Detection Mechanism</h3>
      <a href="#building-a-detection-mechanism">
        
      </a>
    </div>
    <p>We didn’t want to stop here. Having a real-time map of Internet quality puts us in a great position to detect problems and create alerts as they unfold. We have defined a set of triggers that we know are indicative of a network issue, which allow us to quickly analyze and repair problems.</p><p>For example, 3% packet loss from Latin America to Asia is expected under normal Internet conditions and not something that would trigger an alert. However, 3% packet loss between two countries in Europe usually indicates a bigger and potentially more impactful problem, and thus will immediately trigger alerts for our Systems Reliability Engineering and Network Engineering teams to look into the issue.</p><p>Sitting between eyeball networks and content networks, it is easy for us to correlate this packet loss with various other metrics in our system, such as difficulty connecting to customer origin servers (which manifests as Cloudflare error 522) or a sudden decrease of traffic from a local ISP.</p>
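<p>The key idea above is that the alert threshold depends on the region pair, not just the loss figure. A sketch of that logic (thresholds and region names are illustrative, not Cloudflare's real values):</p>

```python
# Region-pair specific loss thresholds: some background loss is routine on
# long intercontinental paths, while the same loss within Europe warrants
# an immediate page. All values here are illustrative.
THRESHOLDS = {
    ("latam", "asia"): 5.0,     # long path: tolerate more background loss
    ("europe", "europe"): 1.0,  # short intra-continental path: nearly clean
}
DEFAULT_THRESHOLD = 2.0

def should_alert(src_region, dst_region, loss_pct):
    limit = THRESHOLDS.get((src_region, dst_region), DEFAULT_THRESHOLD)
    return loss_pct > limit

print(should_alert("latam", "asia", 3.0))     # False: expected background loss
print(should_alert("europe", "europe", 3.0))  # True: page the on-call teams
```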
    <div>
      <h3>Automatic Mitigation</h3>
      <a href="#automatic-mitigation">
        
      </a>
    </div>
    <p>Receiving valuable and actionable alerts is great; however, we were still facing a hard-to-compress time-to-reaction delay. Thankfully, in our early years we learned a lot from DDoS attacks: how to detect and auto-mitigate most of them with our <a href="/introducing-the-bpf-tools/">efficient automated DDoS mitigation pipeline</a>. So naturally we wondered if we could apply what we’d learned from DDoS mitigation to these generic Internet events. After all, they exhibit the same characteristics: they’re unpredictable, they’re external to our network, and they can impact our service.</p><p>The next step was to correlate these alerts with automated actions. The actions should reflect what an on-call network engineer would have done given the same information. This includes running some important checks: is the packet loss really external to our network? Is the packet loss correlated to an actual impact? Do we currently have enough capacity to reroute the traffic? When all the stars align, we know we have a case to perform some action.</p><p>All that said, automating actions on network devices turns out to be more complicated than one would imagine. Without going into too much detail, we struggled to find a common language to talk to our equipment because we’re a multi-vendor network. We decided to contribute to the brilliant open-source project <a href="https://github.com/napalm-automation/napalm">Napalm</a>, coupled it with the automation framework <a href="https://saltstack.com/">Salt</a>, and <a href="http://nanog.org/meetings/abstract?id=2951">improved it to bring us the features we needed</a>.</p><p>We wanted to be able to perform actions such as configuring probes, retrieving their data, and managing complex BGP neighbor configuration regardless of the network device a given PoP was using. With all these features put together into an automated system, we can see the impact of actions it has taken:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7BUEy4XCIg71TuqUQvJqo1/3b49d0cee7acff31949542ad2e3f3665/Screenshot-2016-10-31-11.04.44.png" />
            
            </figure><p>Here you can see one of our transit providers having a sudden problem in Hong Kong. Our system automatically detects the fault and takes the necessary action, which is to disable this link in our routing.</p><p>Our system keeps improving, and it is already making immediate adjustments across our network to <a href="https://www.cloudflare.com/solutions/ecommerce/optimization/">optimize performance</a> every single day.</p>
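<p>The checklist described earlier (is the loss external? is there real impact? is there spare capacity?) can be sketched as a simple decision function. Field names and the capacity floor are illustrative assumptions, not Cloudflare's actual rules:</p>

```python
def plan_mitigation(link):
    """Mirror the on-call engineer's checklist before disabling a lossy
    transit link. `link` is a dict of observations; names illustrative."""
    if not link["loss_is_external"]:
        return "investigate-internal"    # our own gear may be at fault
    if not link["customer_impact"]:
        return "monitor-only"            # loss without impact: just watch
    if link["spare_capacity_pct"] < 20:  # illustrative headroom floor
        return "escalate-to-human"       # rerouting could overload other links
    return "disable-link"                # all stars align: take it out of routing

decision = plan_mitigation({
    "loss_is_external": True,
    "customer_impact": True,
    "spare_capacity_pct": 45,
})
print(decision)  # disable-link
```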
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6d2bX0eRLMj91oM9LrN2Ny/21f516b784e01bcdb7326cf3037e3962/Screenshot-2016-10-31-14.31.19-2.png" />
            
            </figure><p>Here we can see the actions taken by our mitigation bot over 90 days.</p><p>The impact of this is that we’ve managed to make the Internet perform better for our customers and reduce the number of errors that they'd see if they weren't using Cloudflare. One way to measure this is how often we're unable to reach a customer's origin. Sometimes origins are completely offline. However, we are increasingly at a point where, if an origin is reachable, we'll find a path to it. You can see the effects of our improvements over the last year in the graph below.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1Xhgv1pZz3h5ql0xp26nv9/d6493cdf3157ec25d3fcf3708cd6c561/522_year-1.png" />
            
            </figure>
    <div>
      <h3>The Future</h3>
      <a href="#the-future">
        
      </a>
    </div>
    <p>While we keep improving this resiliency pipeline every day, we are looking forward to deploying some new technologies to streamline it further: streaming <a href="http://movingpackets.net/2016/01/11/99-problems-and-configuration-and-telemetry-aint-two/">telemetry</a> will enable more real-time collection of our data by moving from a pull model to a push model, and vendor-neutral data models like <a href="http://www.openconfig.net/">OpenConfig</a> will unify and simplify how we talk to network devices. We look forward to deploying these improvements as soon as they are mature enough for us to release.</p><p>At Cloudflare our mission is to help build a better Internet. The Internet, though, by its nature and size, is in constant flux — breaking down, being added to, and being repaired at almost any given moment — meaning services are often interrupted and traffic is slowed without warning. By enhancing the reliability and resiliency of this complex network of networks we think we are one step closer to fulfilling our mission and building a better Internet.</p> ]]></content:encoded>
            <category><![CDATA[DDoS]]></category>
            <category><![CDATA[Mitigation]]></category>
            <category><![CDATA[Attacks]]></category>
            <category><![CDATA[Salt]]></category>
            <category><![CDATA[Network]]></category>
            <guid isPermaLink="false">1YLXNdbFOfJePGpwFKQuXa</guid>
            <dc:creator>Jérôme Fleury</dc:creator>
        </item>
        <item>
            <title><![CDATA[A Post Mortem on this Morning's Incident]]></title>
            <link>https://blog.cloudflare.com/a-post-mortem-on-this-mornings-incident/</link>
            <pubDate>Tue, 21 Jun 2016 06:03:30 GMT</pubDate>
            <description><![CDATA[ We would like to share more details with our customers and readers on the internet outages that occurred this morning and earlier in the week, and what we are doing to prevent these from happening again. ]]></description>
            <content:encoded><![CDATA[ <p>We would like to share more details with our customers and readers on the internet outages that occurred this morning and earlier in the week, and what we are doing to prevent these from happening again.</p>
    <div>
      <h3>June 17 incident</h3>
      <a href="#june-17-incident">
        
      </a>
    </div>
    <p>On June 17, at 08:32 UTC, our systems detected significant packet loss between multiple destinations on the backbone network of one of our major transit providers, <a href="http://www.teliacarrier.com">Telia Carrier</a>. While our engineers were analysing the incident, the loss became intermittent and finally disappeared.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7rEfi2Ivn585mb9vkdDIr2/d17ee4d712e40f77a6e70ab27fc9996e/Screenshot-2016-06-20-14-16-27.png" />
            
            </figure><p><i>Packet loss on Telia Carrier (AS1299)</i></p>
    <div>
      <h3>Today’s incident</h3>
      <a href="#todays-incident">
        
      </a>
    </div>
    <p>Today, June 20, at 12:10 UTC, our systems again detected massive packet loss on the backbone of one of our major transit providers: Telia Carrier.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6oh5nRVKJXMRIQPqKCguNE/5e6d09d61d915c795eaafc816e487779/Screenshot-2016-06-20-13-07-33.png" />
            
            </figure><p><i>Packet loss on Telia Carrier (AS1299)</i></p><p>Typically, transit providers are very reliable and transport all of our packets from one point of the globe to the other without loss - that’s what we pay them for. In this case, our packets (and those of other Telia customers) were being dropped.</p><p>While Internet users usually take it for granted that they can reach any destination in the world from their homes and businesses, the reality is harsher than that. Our planet is big, and the Internet pipes are not always reliable. Fortunately, the Internet is mostly built on the <a href="https://en.wikipedia.org/wiki/Transmission_Control_Protocol">TCP protocol</a> which allows lost packets to be retransmitted. That is especially useful on lossy links. In most cases, you won’t notice these packets being lost and retransmitted; however, when the loss is too significant, as was the case this morning, your browser can’t do much.</p><p>Our systems caught the problem instantly and recorded it. Here is an animated map of the packet loss being detected during the event:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5t6Age3Kz1x46V9DhMR6VK/2ce8890115631935d749a75ca4a43d0a/telia-outage-6-20-2016--1-.gif" />
            
            </figure><p><i>CloudFlare detects packet loss (denoted by thickness)</i></p><p>Because transit providers are usually reliable, they tend to fix their problems rather quickly. In this case, that did not happen, and at 12:30 UTC we had to take down our ports with Telia. Because we are interconnected with most Tier 1 providers, we are able to shift traffic away from one problematic provider and let others, who are performing better, take care of transporting our packets.</p>
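<p>The effect of loss on TCP described above can be roughly quantified with the well-known Mathis et al. model, which says a single flow's steady-state throughput scales with MSS/(RTT·√p), where p is the loss rate. A quick back-of-the-envelope sketch (constant factors omitted; numbers are illustrative, not measurements from this incident):</p>

```python
import math

def mathis_throughput_mbps(mss_bytes, rtt_s, loss_rate):
    # Rough steady-state TCP throughput under random loss, after
    # Mathis et al.: rate ~ (MSS / RTT) * (1 / sqrt(p)).
    # The ~1.22 constant for periodic loss is omitted for simplicity.
    return (mss_bytes * 8 / 1e6) / (rtt_s * math.sqrt(loss_rate))

clean = mathis_throughput_mbps(1460, 0.100, 0.0001)  # 0.01% loss, 100 ms RTT
lossy = mathis_throughput_mbps(1460, 0.100, 0.04)    # 4% loss, same path
print(round(clean, 2), round(lossy, 2))
```

<p>Going from 0.01% to 4% loss cuts a single flow's throughput by a factor of √400 = 20, which is why even "partial" loss can make a site feel completely broken.</p>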
    <div>
      <h3>Impact on our customers</h3>
      <a href="#impact-on-our-customers">
        
      </a>
    </div>
    <p>We saw a big increase in 522 errors. An HTTP 522 error indicates that our servers are unable to reach our customers' origin servers. You can see the spike and the breakdown here:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3UWZh0zq7omcnrnrm3zOdI/f8eb778a679ad8129834a2fa4053fe13/Screen-Shot-2016-06-20-at-15-36-32.png" />
            
            </figure><p><i>Spike in 522 errors across PoPs (in reaching origin servers)</i></p>
    <div>
      <h3>On our communication</h3>
      <a href="#on-our-communication">
        
      </a>
    </div>
    <p>Communicating during this kind of incident is crucial and difficult at the same time. Our customers understandably expect prompt, accurate information and want the impact to stop as soon as possible. In today’s incident, we identified weaknesses in our communication: the scope of the incident was incorrectly identified as Europe only, and our response time was not adequate. We want to reassure you that we are taking all the necessary steps to improve our communication, including implementing automated detection and mitigation systems that can react much more quickly than any human operator. We already have such systems in place for our smaller data centers and are actively testing their accuracy and efficacy before turning them on for larger PoPs.</p><p>Taking down an important transit provider globally is not an easy decision, and many cautious steps must be taken before doing it. The Internet and its associated protocols form a community based on mutual trust; any weak link can cause the entire chain to fail, and keeping it healthy requires collaboration and cooperation from all parties.</p><p>We know how important it is to communicate on our status page. We heard from our customers and took the necessary steps to improve our communication. Our support team is working on improvements in how we update our status page and reviewing the content for accuracy as well as transparency.</p>
    <div>
      <h3>Building a resilient network</h3>
      <a href="#building-a-resilient-network">
        
      </a>
    </div>
    <p>Even as CloudFlare has grown to become critical Internet infrastructure sitting in front of four million Internet-facing applications, investing in greater resiliency continues to be a work in progress. This is achieved through a combination of greater interconnection, automated mitigation, and increased failover capacity.</p><p>We fill our cache from the public Internet using multiple transit providers (such as Telia), and deliver traffic to local eyeball networks using transit providers and multiple peering relationships. To the extent possible, we maintain buffer capacity with our providers so that the remaining providers can bear the traffic if one or more of the others fail. Spreading traffic across providers gives us diversity and reduces the impact of an outage at any one upstream provider. Even so, today’s incident impacted a significant fraction of traffic that relied on the Telia backbone.</p>
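<p>The buffer-capacity idea above is classic N-1 planning: after any single transit link fails, the remaining links must be able to carry all the traffic. A minimal sketch of that check (capacities and names are made-up illustrative figures):</p>

```python
def survives_single_failure(links):
    """N-1 capacity check: if any one transit link fails, can the
    remaining links absorb the total traffic? Figures illustrative."""
    total = sum(l["traffic_gbps"] for l in links)
    for failed in links:
        remaining_cap = sum(l["capacity_gbps"] for l in links if l is not failed)
        if total > remaining_cap:
            return False
    return True

links = [
    {"name": "transitA", "capacity_gbps": 100, "traffic_gbps": 60},
    {"name": "transitB", "capacity_gbps": 100, "traffic_gbps": 40},
    {"name": "transitC", "capacity_gbps": 100, "traffic_gbps": 30},
]
print(survives_single_failure(links))  # True: 130 Gbps fits on any two links
```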
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/46TEFOaWNqhcniJqK8W80J/83a0cd7620a024c0bc3aa9151057b4df/Screenshot-2016-06-20-22-28-05.png" />
            
            </figure><p><i>Traffic switching from one provider to the other after we reroute</i></p><p>Where possible, we try to fail over traffic to a redundant provider or data center while keeping traffic within the same country.</p><p>BGP is the protocol used to route packets between autonomous networks on the Internet. While it is doing a great job at keeping interconnections alive, it has no built-in mechanism to detect packet loss and performance issues on a path.</p><p>We have been working on building a mechanism (which augments BGP) to proactively detect packet loss and move traffic away from providers experiencing packet loss. Because this system is currently activated only for our most remote and smallest locations, it didn't trigger in this morning’s incident. We plan to extend the capability in the next two weeks to switch from a manual reaction to an automatic one in all our PoPs. For example, in this screenshot, you can see our PoP in Johannesburg being automatically removed from our network because of problems detected when connecting to origin servers:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5k9ZgDYRcM1DDlK8Pl9Mpa/a55d73164c5bab3ab0c5b46d366feef7/Screenshot-2016-06-20-13-50-03.png" />
            
            </figure><p></p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/35qNsGHp1t145O9KH3uimH/f7409776a680dad20c0adc733781e036/Screenshot-2016-06-20-14-09-57.png" />
            
            </figure><p><i>Johannesburg PoP gracefully fails over to nearest PoP</i></p>
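<p>The withdrawal decision illustrated above can be sketched as a simple rule: if a PoP's origin-fetch error rate (HTTP 522) climbs past a threshold, stop announcing from it so anycast routing shifts users to the next-closest PoP. The threshold and field names here are illustrative assumptions, not our production logic:</p>

```python
def pops_to_withdraw(pops, max_522_rate=0.05):
    """Return PoPs whose 522 (origin unreachable) rate is high enough that
    withdrawing their anycast announcement beats continuing to serve.
    Threshold and field names are illustrative."""
    return [p["name"] for p in pops if p["rate_522"] > max_522_rate]

pops = [
    {"name": "JNB", "rate_522": 0.31},   # struggling to reach origins
    {"name": "AMS", "rate_522": 0.001},  # healthy
]
print(pops_to_withdraw(pops))  # ['JNB']
```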
    <div>
      <h3>Summary</h3>
      <a href="#summary">
        
      </a>
    </div>
    <p>We understand how critical our infrastructure is for our customers’ businesses, and so we will continue to move towards completely automated systems to deal with this type of incident. Our goal is to minimize disruptions and outages for our customers regardless of the origin of the issue.</p> ]]></content:encoded>
            <category><![CDATA[Post Mortem]]></category>
            <category><![CDATA[Reliability]]></category>
            <category><![CDATA[Traffic]]></category>
            <category><![CDATA[BGP]]></category>
            <category><![CDATA[Vulnerabilities]]></category>
            <category><![CDATA[Security]]></category>
            <guid isPermaLink="false">daxS35A8ZzqAB5LsXbt8H</guid>
            <dc:creator>Jérôme Fleury</dc:creator>
        </item>
        <item>
            <title><![CDATA[Good News: Vulnerable NTP Servers Closing Down]]></title>
            <link>https://blog.cloudflare.com/good-news-vulnerable-ntp-servers-closing-down/</link>
            <pubDate>Sun, 23 Feb 2014 11:00:00 GMT</pubDate>
            <description><![CDATA[ On Monday, February 10th, CloudFlare experienced a large DDoS attack, with nearly 400Gbps of NTP attack traffic hitting our network.  ]]></description>
            <content:encoded><![CDATA[ <p>On Monday, February 10th, CloudFlare experienced a large DDoS attack, with nearly <a href="/technical-details-behind-a-400gbps-ntp-amplification-ddos-attack">400Gbps of NTP attack traffic hitting our network</a>. We were not the only network being hit by massive NTP attacks. Around the same time, OVH reported a similarly sized attack. Since the attack we’ve heard from a number of other networks that have seen large NTP-based attacks over the last few weeks.</p><blockquote><p>We see today lot of new DDoS attacks from Internet to our network. Type: NTP AMP Size: &gt;350Gbps. No issue. VAC is great :) — Oles (@olesovhcom) <a href="https://twitter.com/olesovhcom/statuses/433631778620702721">February 12, 2014</a></p></blockquote><p>John Graham-Cumming on our team wrote a blog post before the attack describing how such an attack is possible <a href="/understanding-and-mitigating-ntp-based-ddos-attacks">using a combination of spoofed UDP packets and vulnerable NTP servers</a>.</p><p>During the 400Gbps attack we saw 4,259 vulnerable servers (unique IPv4 addresses) sending attack traffic to our network. These servers were not controlled by the attacker directly; instead, they were running Network Time Protocol (NTP) daemons that responded to certain commands with very large responses, making them a good amplification vector. Specifically, all of these servers were abused to send large replies in response to the "monlist" command.</p>
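<p>What makes monlist such a good amplification vector is the ratio between the tiny request and the huge reply. A quick sketch of the arithmetic, using commonly cited ballpark packet sizes (illustrative, not measurements from this specific attack):</p>

```python
def amplification_factor(request_bytes, reply_packets, reply_packet_bytes):
    # Bandwidth amplification: bytes reflected toward the victim
    # per byte the attacker sends to the open NTP server.
    return (reply_packets * reply_packet_bytes) / request_bytes

# Ballpark monlist figures: a ~234-byte request can elicit up to
# 100 reply packets of ~482 bytes each.
factor = amplification_factor(234, 100, 482)
print(round(factor))  # roughly 206x
```

<p>At roughly 200x amplification, an attacker with a few Gbps of spoofed request traffic, spread across thousands of open servers, can reflect hundreds of Gbps at a victim.</p>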
    <div>
      <h3>Some Good News</h3>
      <a href="#some-good-news">
        
      </a>
    </div>
    <p>In the aftermath of this massive attack, we decided to publish the list of networks originating these attacks hoping to have them fix the problem. Since the blog post we’ve been monitoring the networks to see whether attention to this problem has helped close the vulnerable NTP servers. The results are encouraging:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4ZKy4WIyO90QOic4tInJcl/6b2a24482821d990ee1b4b0dbe4ebbe0/Screen_Shot_2014-02-23_at_3.23.56_PM.png" />
            
            </figure><p>After a week and a half, more than 75% of the vulnerable servers involved in the attack are no longer vulnerable. While in some cases the servers might be temporarily unreachable, the trend is clear: network administrators have gotten the message and are closing vulnerable NTP servers.</p><p>The people behind the <a href="http://www.openntpproject.org/">Open NTP Project</a> have also noticed a massive improvement in the situation worldwide:</p><blockquote><p>NTP MONLIST Amplifiers down from 490k -&gt; 349k in the last week. <a href="http://t.co/35vLsj3DZJ">http://t.co/35vLsj3DZJ</a> to check your network. — jared mauch (@jaredmauch) <a href="https://twitter.com/jaredmauch/statuses/434353900670300161">February 14, 2014</a></p></blockquote><p>Notably, we’ve seen a huge decrease from OVH, which has taken significant <a href="http://travaux.ovh.net/?do=details&amp;id=10222">measures</a> to prevent NTP attacks coming from its large installed base of servers. This is an encouraging achievement from the community, which is deploying tremendous effort to solve a real problem.</p> ]]></content:encoded>
            <category><![CDATA[Attacks]]></category>
            <category><![CDATA[Reliability]]></category>
            <guid isPermaLink="false">1nCSJHjvs7fALk6jm9mGAT</guid>
            <dc:creator>Jérôme Fleury</dc:creator>
        </item>
    </channel>
</rss>