
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/">
    <channel>
        <title><![CDATA[ The Cloudflare Blog ]]></title>
        <description><![CDATA[ Get the latest news on how products at Cloudflare are built and the technologies we use, and join the teams helping to build a better Internet. ]]></description>
        <link>https://blog.cloudflare.com</link>
        <atom:link href="https://blog.cloudflare.com/rss/" rel="self" type="application/rss+xml"/>
        <language>en-us</language>
        <image>
            <url>https://blog.cloudflare.com/favicon.png</url>
            <title>The Cloudflare Blog</title>
            <link>https://blog.cloudflare.com</link>
        </image>
        <lastBuildDate>Sun, 05 Apr 2026 03:50:59 GMT</lastBuildDate>
        <item>
            <title><![CDATA[What happened on the Internet during the Facebook outage]]></title>
            <link>https://blog.cloudflare.com/during-the-facebook-outage/</link>
            <pubDate>Fri, 08 Oct 2021 15:16:00 GMT</pubDate>
            <description><![CDATA[ Today, we're going to show you how the downtime of Facebook and its affiliated sites affected us, and what we can see in our data. ]]></description>
            <content:encoded><![CDATA[ <p>It's been a few days now since Facebook, Instagram, and WhatsApp went AWOL and experienced one of the longest and roughest downtime periods in their existence.</p><p>When that happened, we reported our bird's-eye view of the event and published the blog post <a href="/october-2021-facebook-outage/">Understanding How Facebook Disappeared from the Internet</a>, where we tried to explain what we saw and how <a href="https://www.cloudflare.com/learning/dns/what-is-dns/">DNS</a> and BGP, two of the technologies at the center of the outage, played a role in the event.</p><p>In the meantime, more information has surfaced, and Facebook has <a href="https://engineering.fb.com/2021/10/05/networking-traffic/outage-details/">published a blog post</a> giving more details of what happened internally.</p><p>As we said before, these events are a gentle reminder that the Internet is a vast network of networks, and we, as industry players and end-users, are part of it and should work together.</p><p>In the aftermath of an event of this size, we don't waste much time debating how peers handled the situation. We do, however, ask ourselves the more important questions: "How did this affect us?" and "What if this had happened to us?" Asking and answering these questions whenever something like this happens is a great and healthy exercise that helps us improve our own resilience.</p><p>Today, we're going to show you how the downtime of Facebook and its affiliated sites affected us, and what we can see in our data.</p>
    <div>
      <h3>1.1.1.1</h3>
      <a href="#1-1-1-1">
        
      </a>
    </div>
    <p>1.1.1.1 is a fast and privacy-centric public DNS resolver operated by Cloudflare, used by millions of users, browsers, and devices worldwide. Let's look at our telemetry and see what we find.</p><p>First, the obvious. If we look at the response rate, there was a massive spike in the number of SERVFAIL codes. SERVFAILs can happen for several reasons; we have an excellent blog post called <a href="/unwrap-the-servfail/">Unwrap the SERVFAIL</a> that you should read if you're curious.</p><p>In this case, we started serving SERVFAIL responses to all facebook.com and whatsapp.com DNS queries because our resolver couldn't access the upstream Facebook authoritative servers, at about 60 times the average rate on a typical day.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7BKM7m3fUz3fCRrVuCN3jh/4cb1ccccd0cbe22f2fed10cf10360084/image16.png" />
            
            </figure><p>If we look at all the queries, not specific to Facebook or WhatsApp domains, and we split them by IPv4 and IPv6 clients, we can see that our load increased too.</p><p>As explained before, this is due to a snowball effect associated with applications and users retrying after the errors and generating even more traffic. In this case, 1.1.1.1 had to handle more than the expected rate for A and AAAA queries.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7a67lujBcgIkqVusTr4R1t/6f5bd1971964e4b3721d6096b50bc8d7/image3-12.png" />
            
            </figure><p>Here's another fun one.</p><p>DNS vs. DoT and DoH. Typically, DNS queries and responses are <a href="https://datatracker.ietf.org/doc/html/rfc1035#section-4.2">sent in plaintext over UDP</a> (or TCP sometimes), and that's been the case for decades now. Naturally, this poses security and privacy risks to end-users as it allows in-transit attacks or traffic snooping.</p><p>With DNS over TLS (DoT) and DNS over HTTPS (DoH), clients can talk DNS using well-known, well-supported encryption and authentication protocols.</p><p>Our learning center has a good article on "<a href="https://www.cloudflare.com/learning/dns/dns-over-tls/">DNS over TLS vs. DNS over HTTPS</a>" that you can read. Browsers like Chrome, Firefox, and Edge have supported DoH for some time now, WARP uses DoH too, and you can even configure your operating system to use the new protocols.</p><p>When Facebook went offline, we saw the number of DoT+DoH SERVFAIL responses grow to over 300x the average rate.</p>
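For illustration, here's a minimal sketch of interpreting the kind of DoH JSON answer 1.1.1.1 serves at `https://cloudflare-dns.com/dns-query` when asked with the `accept: application/dns-json` header. The response body below is canned rather than fetched, and the `Status` field carries the DNS RCODE, where 2 means SERVFAIL:

```python
import json

# Canned example of an application/dns-json response body, in the shape
# the 1.1.1.1 DoH JSON API returns; Status carries the DNS RCODE.
CANNED_RESPONSE = json.dumps({
    "Status": 2,                      # RCODE 2 = SERVFAIL
    "TC": False, "RD": True, "RA": True,
    "Question": [{"name": "facebook.com", "type": 1}],
})

# RCODE names from RFC 1035 (subset)
RCODES = {0: "NOERROR", 1: "FORMERR", 2: "SERVFAIL", 3: "NXDOMAIN"}

def rcode_name(body: str) -> str:
    """Map the Status field of a DNS JSON answer to its RCODE name."""
    return RCODES.get(json.loads(body)["Status"], "UNKNOWN")

print(rcode_name(CANNED_RESPONSE))  # SERVFAIL
```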
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/TNHkfPvSSHUxljyh89S5a/4de08b52d1a9cd23862d56a90943677b/image14.png" />
            
            </figure>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2pdogcCvQMRd10yEPUV0hu/c2ce7f4eeabd871727af41e2fc574524/image11-1.png" />
            
            </figure>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6GJdW0PO7zIHBlnkKTyGa2/59fd778bee5c0a837877535399455210/image4-13.png" />
            
            </figure><p>So, we got hammered with lots of requests and errors, causing traffic spikes to our 1.1.1.1 resolver and an unexpected load on our edge network and systems. How did we perform during this stressful period?</p><p>Quite well. 1.1.1.1 kept its cool and continued serving the vast majority of requests around the <a href="https://www.dnsperf.com/#!dns-resolvers">famous 10ms mark</a>. Only a small fraction of requests, at the p95 and p99 percentiles, saw increased response times, probably due to timeouts trying to reach Facebook’s nameservers.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/ZzJKGMnii2ghnbRxI8UoT/8c282529bb5efc0f834728c14047b463/image6-11.png" />
            
            </figure><p>Another interesting perspective is the distribution of the ratio between SERVFAIL and good DNS answers, by country. In theory, the higher this ratio is, the more the country uses Facebook. Here's the map with the countries that suffered the most:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3XJYvy2Ore7FARCv4oBbiD/d1700bca70c1c85c2fe01d3e49019fdb/image18.png" />
            
            </figure><p>Here’s the top twelve country list, ordered by those that apparently use Facebook, WhatsApp and Instagram the most:</p><table><tr><td><p><b>Country</b></p></td><td><p><b>SERVFAIL/Good Answers ratio</b></p></td></tr><tr><td><p>Turkey</p></td><td><p>7.34</p></td></tr><tr><td><p>Grenada</p></td><td><p>4.84</p></td></tr><tr><td><p>Congo</p></td><td><p>4.44</p></td></tr><tr><td><p>Lesotho</p></td><td><p>3.94</p></td></tr><tr><td><p>Nicaragua</p></td><td><p>3.57</p></td></tr><tr><td><p>South Sudan</p></td><td><p>3.47</p></td></tr><tr><td><p>Syrian Arab Republic</p></td><td><p>3.41</p></td></tr><tr><td><p>Serbia</p></td><td><p>3.25</p></td></tr><tr><td><p>Turkmenistan</p></td><td><p>3.23</p></td></tr><tr><td><p>United Arab Emirates</p></td><td><p>3.17</p></td></tr><tr><td><p>Togo</p></td><td><p>3.14</p></td></tr><tr><td><p>French Guiana</p></td><td><p>3.00</p></td></tr></table>
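The ratio in the table is straightforward to compute; a sketch with made-up per-country answer counts (the real numbers come from 1.1.1.1 telemetry):

```python
# Hypothetical per-country answer counts; real values come from 1.1.1.1 telemetry.
answers = {
    "TR": {"servfail": 734, "good": 100},
    "GD": {"servfail": 484, "good": 100},
    "PT": {"servfail": 120, "good": 100},
}

def servfail_ratio(counts: dict) -> float:
    """SERVFAIL answers divided by good answers; higher suggests heavier Facebook use."""
    return counts["servfail"] / counts["good"]

# Order countries by how hard the outage apparently hit them.
ranked = sorted(answers, key=lambda cc: servfail_ratio(answers[cc]), reverse=True)
print(ranked)  # ['TR', 'GD', 'PT']
```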
    <div>
      <h3>Impact on other sites</h3>
      <a href="#impact-on-other-sites">
        
      </a>
    </div>
    <p>When Facebook, Instagram, and WhatsApp aren't around, the world turns to other places: to look for information on what's going on, for other forms of entertainment, or for other applications to communicate with friends and family. Our data shows us those shifts. While Facebook was going down, other services and platforms were going up.</p><p>To get an idea of the changing traffic patterns, we look at DNS queries as an indicator of increased traffic to specific sites or types of sites.</p><p>Here are a few examples.</p><p>Other social media platforms saw a slight increase in use, compared to normal.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7uFhqEr4MIl6HxpBI9GG1M/a995ce00e0ff2b2b96d9aa9fb3341fa6/image17.png" />
            
            </figure><p>Traffic to messaging platforms like Telegram, Signal, Discord and Slack got a little push too.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/16AyoRRD7YSUSd3MuHNNGO/c587c9a318702877248025d86f921c79/image9-6.png" />
            
            </figure><p>Nothing like a little gaming time when Instagram is down, we guess, judging by the traffic to sites like Steam, Xbox, Minecraft, and others.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2v3JZdLK2rSbGPlhgicDJn/e8f2bb509432c9b5fa2837588bd1f927/image8-10.png" />
            
            </figure><p>And yes, people want to know what’s going on and fall back on news sites like CNN, New York Times, The Guardian, Wall Street Journal, Washington Post, Huffington Post, BBC, and others:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4JR6ipTzivZfWYOS5rmnlX/c9b363c7babfaa94b096ebd657780b5a/image5-12.png" />
            
            </figure>
    <div>
      <h3>Attacks</h3>
      <a href="#attacks">
        
      </a>
    </div>
    <p>One could speculate that the Internet was under attack from malicious hackers. Our Firewall doesn't agree; nothing out of the ordinary stands out.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6lBrIk5Tx5b64G2SGtfFXZ/b899bd50829087a48ef38b31b97cfd7a/image13.png" />
            
            </figure>
    <div>
      <h3>Network Error Logs</h3>
      <a href="#network-error-logs">
        
      </a>
    </div>
    <p><a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Network_Error_Logging">Network Error Logging</a>, NEL for short, is an experimental technology supported in Chrome. A website can issue a Report-To header and ask the browser to send reports about network problems, like bad requests or <a href="https://www.cloudflare.com/learning/dns/common-dns-issues/">DNS issues</a>, to a specific endpoint.</p><p>Cloudflare uses NEL data to quickly help triage end-user connectivity issues when end-users reach our network. You can learn more about this feature in our <a href="https://support.cloudflare.com/hc/en-us/articles/360050691831-Understanding-Network-Error-Logging">help center</a>.</p><p>If Facebook is down and their DNS isn't responding, Chrome will start reporting NEL events every time one of the pages in our zones fails to load Facebook comments, posts, ads, or authentication buttons. This chart shows it clearly.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/56Sp2huqEtyGiEOmzBrsiU/6ce9a0f9affe20691bf8588d9f4d6a8f/image7-8.png" />
            
            </figure>
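A site opting into NEL does so with two response headers; a hedged sketch (the endpoint URL and values are illustrative, not Cloudflare's actual configuration):

```http
Report-To: {"group": "nel", "max_age": 86400, "endpoints": [{"url": "https://reports.example.com/nel"}]}
NEL: {"report_to": "nel", "max_age": 86400}
```

The browser then POSTs JSON reports about failed fetches, including DNS failure types such as `dns.unreachable`, to the configured reporting endpoint.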
    <div>
      <h3>WARP</h3>
      <a href="#warp">
        
      </a>
    </div>
    <p>Cloudflare announced <a href="https://1.1.1.1/">WARP</a> in 2019, calling it "<a href="/1111-warp-better-vpn/">A VPN for People Who Don't Know What V.P.N. Stands For</a>", and offered it for free to its customers. Today WARP is used by millions of people worldwide to securely and privately access the Internet on their desktop and mobile devices. Here's what we saw during the outage by looking at traffic volume between WARP and Facebook’s network:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4UIMVg1PpKr27RRkKagcdp/979d1e3ef2dda3107f4db2799bb2f3f6/WARP-graph-Facebook-outage-Oct-2021.png" />
            
            </figure><p>You can see how the steep drop in Facebook ASN traffic coincides with the start of the incident and how it compares to the same period the day before.</p>
    <div>
      <h3>Our own traffic</h3>
      <a href="#our-own-traffic">
        
      </a>
    </div>
    <p>People tend to think of Facebook as a place to visit. We log in, we access Facebook, we post. It turns out that Facebook likes to visit us too, quite a lot. Like Google and other platforms, Facebook uses an army of crawlers to constantly check websites for data and updates. Those robots gather information about a website's content, such as its titles, descriptions, thumbnail images, and metadata. You can learn more about this on "<a href="https://developers.facebook.com/docs/sharing/webmasters/crawler/">The Facebook Crawler</a>" page and the <a href="https://ogp.me/">Open Graph</a> website.</p><p>Here's what we see when traffic is coming from the Facebook ASN, supposedly from crawlers, to our CDN sites:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5bWgF42TihmULktktsg6TX/669f1a605301c7be4f8983d45c42ffd3/image10-3.png" />
            
            </figure><p>The robots went silent.</p><p>What about the traffic coming to our CDN sites from Facebook <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/User-Agent">User-Agents</a>? The gap is indisputable.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/WCcPk5XJu2bmFxhfNjJq3/825ee59fe0ac3b20f14fe249e18f701d/image1-16.png" />
            
            </figure><p>We see about 30% of a typical request rate hitting us. But it's not zero; why is that?</p><p>We'll let you in on a little secret. Never trust User-Agent information; it's broken. User-Agent spoofing is everywhere. Browsers, apps, and other clients deliberately change the User-Agent string when they fetch pages from the Internet to hide, obtain access to certain features, or bypass paywalls (because pay-walled sites want sites like Facebook to index their content, so that they then get more traffic from links).</p><p>Fortunately, newer, privacy-centric standards are emerging, like <a href="https://developer.mozilla.org/en-US/docs/Web/API/User-Agent_Client_Hints_API">User-Agent Client Hints</a>.</p>
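The gap between the two charts can be reasoned about with a toy model: trust the source ASN, not the User-Agent string. Everything below (the sample log records, the private-range ASNs) is illustrative, except AS32934, which is Facebook's:

```python
# AS32934 is Facebook's ASN; the sample "requests" below are made up.
FACEBOOK_ASN = 32934

requests = [
    {"user_agent": "facebookexternalhit/1.1", "asn": 32934},  # genuine crawler
    {"user_agent": "facebookexternalhit/1.1", "asn": 64512},  # spoofed UA
    {"user_agent": "Mozilla/5.0", "asn": 64496},              # ordinary browser
]

def looks_like_facebook(req: dict) -> bool:
    """UA-based check: trivially spoofable."""
    return "facebookexternalhit" in req["user_agent"]

def is_facebook(req: dict) -> bool:
    """ASN-based check: tied to the network the packets actually came from."""
    return req["asn"] == FACEBOOK_ASN

by_ua = sum(map(looks_like_facebook, requests))   # counts the spoofed one too
by_asn = sum(map(is_facebook, requests))
print(by_ua, by_asn)  # 2 1
```

The difference between the two counts is exactly the residual "Facebook" traffic we kept seeing while the real crawlers were silent.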
    <div>
      <h3>Core Web Vitals</h3>
      <a href="#core-web-vitals">
        
      </a>
    </div>
    <p>Core Web Vitals are a subset of <a href="https://web.dev/vitals/">Web Vitals</a>, an initiative by Google to provide a unified interface to measure real-world quality signals when a user visits a web page. Such signals include Largest Contentful Paint (LCP), First Input Delay (FID), and Cumulative Layout Shift (CLS).</p><p>We <a href="/web-analytics-vitals-explorer/">use Core Web Vitals</a> with our privacy-centric Web Analytics product and collect anonymized data on how end-users experience the websites that enable this feature.</p><p>One of the metrics we can calculate using these signals is the page load time. Our theory is that if a page includes scripts coming from external sites (for example, Facebook "like" buttons, comments, ads), and they are unreachable, its total load time gets affected.</p><p>We used a list of about 400 domains that we know embed Facebook scripts in their pages and looked at the data.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/HZbQOhcV4ki3H7NVMD6Yq/117d2298e9d0e06521083d5176a34f77/image12.png" />
            
            </figure><p>Now let's look at the Largest Contentful Paint. <a href="https://web.dev/lcp/">LCP</a> marks the point in the page load timeline when the page's main content has likely loaded. The faster the LCP is, the better the end-user experience.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/QJiUkTvEWln7dpvLkVKiV/eb6595b2c781e5ccc68c03b4bd233e3b/image15.png" />
            
            </figure><p>Again, the page load experience got visibly degraded.</p><p>The outcome seems clear. The sites that use Facebook scripts in their pages took 1.5x more time to load their pages during the outage, with some of them taking more than 2x the usual time. Facebook's outage dragged the performance of some other sites down.</p>
    <div>
      <h3>Conclusion</h3>
      <a href="#conclusion">
        
      </a>
    </div>
    <p>When Facebook, Instagram, and WhatsApp went down, the Web felt it. Some websites got slower or lost traffic, other services and platforms got unexpected load, and people lost the ability to communicate or do business normally.</p> ]]></content:encoded>
            <category><![CDATA[Outage]]></category>
            <category><![CDATA[BGP]]></category>
            <category><![CDATA[DNS]]></category>
            <category><![CDATA[Trends]]></category>
            <category><![CDATA[Facebook]]></category>
            <guid isPermaLink="false">4sF0eFLy72giKT8ZsHadhg</guid>
            <dc:creator>Celso Martinho</dc:creator>
            <dc:creator>Sabina Zejnilovic</dc:creator>
        </item>
        <item>
            <title><![CDATA[Understanding how Facebook disappeared from the Internet]]></title>
            <link>https://blog.cloudflare.com/october-2021-facebook-outage/</link>
            <pubDate>Mon, 04 Oct 2021 21:08:52 GMT</pubDate>
            <description><![CDATA[ Today at 15:51 UTC, we opened an internal incident entitled "Facebook DNS lookup returning SERVFAIL" because we were worried that something was wrong with our DNS resolver 1.1.1.1. But as we were about to post on our public status page, we realized something else, more serious, was going on. ]]></description>
            <content:encoded><![CDATA[ <p>“<i>Facebook can't be down, can it?</i>”, we thought, for a second.</p><p>Today at 15:51 UTC, we opened an internal incident entitled "Facebook DNS lookup returning SERVFAIL" because we were worried that something was wrong with our DNS resolver <a href="https://developers.cloudflare.com/warp-client/">1.1.1.1</a>. But as we were about to post on our <a href="https://www.cloudflarestatus.com/">public status</a> page, we realized something else, more serious, was going on.</p><p>Social media quickly burst into flames, reporting what our engineers rapidly confirmed too. Facebook and its affiliated services WhatsApp and Instagram were, in fact, all down. Their DNS names stopped resolving, and their infrastructure IPs were unreachable. It was as if someone had "pulled the cables" from their data centers all at once and disconnected them from the Internet.</p><p>This wasn't a <a href="https://www.cloudflare.com/learning/dns/common-dns-issues/">DNS issue</a> itself, but failing DNS was the first symptom we'd seen of a larger Facebook outage.</p><p>How's that even possible?</p>
    <div>
      <h3>Update from Facebook</h3>
      <a href="#update-from-facebook">
        
      </a>
    </div>
    <p>Facebook has now <a href="https://engineering.fb.com/2021/10/04/networking-traffic/outage/">published a blog post</a> giving some details of what happened internally. Externally, we saw the BGP and DNS problems outlined in this post, but the problem actually began with a configuration change that affected the entire internal backbone. That change cascaded: Facebook and its other properties disappeared from the Internet, and Facebook's own staff had difficulty getting service going again.</p><p>Facebook posted <a href="https://engineering.fb.com/2021/10/05/networking-traffic/outage-details/">a further blog post</a> with a lot more detail about what happened. You can read that post for the inside view and this post for the outside view.</p><p>Now on to what we saw from the outside.</p>
    <div>
      <h3>Meet BGP</h3>
      <a href="#meet-bgp">
        
      </a>
    </div>
    <p><a href="https://www.cloudflare.com/learning/security/glossary/what-is-bgp/">BGP</a> stands for Border Gateway Protocol. It's a mechanism to exchange routing information between autonomous systems (AS) on the Internet. The big routers that make the Internet work have huge, constantly updated lists of the possible routes that can be used to deliver every network packet to their final destinations. Without BGP, the Internet routers wouldn't know what to do, and the Internet wouldn't work.</p><p>The Internet is literally a network of networks, and it’s bound together by BGP. BGP allows one network (say Facebook) to advertise its presence to other networks that form the Internet. As we write, Facebook is not advertising its presence, so ISPs and other networks can’t find Facebook’s network, and it is unavailable.</p><p>The individual networks each have an ASN: an Autonomous System Number. An Autonomous System (AS) is an individual network with a unified internal routing policy. An AS can originate prefixes (say that they control a group of IP addresses), as well as transit prefixes (say they know how to reach specific groups of IP addresses).</p><p>Cloudflare's ASN is <a href="https://www.peeringdb.com/asn/13335">AS13335</a>. Every ASN needs to announce its prefix routes to the Internet using BGP; otherwise, no one will know how to connect and where to find us.</p><p>Our <a href="https://www.cloudflare.com/learning/">learning center</a> has a good overview of what <a href="https://www.cloudflare.com/learning/security/glossary/what-is-bgp/">BGP</a> and <a href="https://www.cloudflare.com/learning/network-layer/what-is-an-autonomous-system/">ASNs</a> are and how they work.</p><p>In this simplified diagram, you can see six autonomous systems on the Internet and two possible routes that one packet can use to go from Start to End: AS1 → AS2 → AS3 is the fastest, and AS1 → AS6 → AS5 → AS4 → AS3 the slowest, but it can be used if the first fails.</p>
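The fallback behaviour in this simplified diagram can be sketched as a toy version of BGP's preference for shorter AS paths (the real BGP best-path selection weighs many more attributes, so this is only a sketch):

```python
def best_route(candidates):
    """Pick the route with the shortest AS path, a simplified stand-in
    for the full BGP best-path selection process."""
    return min(candidates, key=len) if candidates else None

# The two routes from the diagram: the fast path and the longer fallback.
fast = ["AS1", "AS2", "AS3"]
slow = ["AS1", "AS6", "AS5", "AS4", "AS3"]

print(best_route([fast, slow]))  # the fast path wins while both exist
print(best_route([slow]))        # if the fast path is withdrawn, fall back
print(best_route([]))            # no routes at all: unreachable (None)
```

The last case is what happened to Facebook's DNS prefixes: with every route withdrawn, there was no fallback left to choose.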
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/32OFXCXQN61ABya2ei1YR2/b8dc0f86b97079926dd6de75533f0912/image5-10.png" />
            
            </figure><p>At 15:58 UTC, we noticed that Facebook had stopped announcing the routes to their DNS prefixes. That meant that, at least, Facebook’s DNS servers were unavailable. Because of this, Cloudflare’s 1.1.1.1 DNS resolver could no longer respond to queries asking for the IP address of facebook.com.</p>
            <pre><code>route-views&gt;show ip bgp 185.89.218.0/23
% Network not in table
route-views&gt;

route-views&gt;show ip bgp 129.134.30.0/23
% Network not in table
route-views&gt;</code></pre>
            <p>Meanwhile, other Facebook IP addresses remained routed but weren’t particularly useful since without DNS Facebook and related services were effectively unavailable:</p>
            <pre><code>route-views&gt;show ip bgp 129.134.30.0   
BGP routing table entry for 129.134.0.0/17, version 1025798334
Paths: (24 available, best #14, table default)
  Not advertised to any peer
  Refresh Epoch 2
  3303 6453 32934
    217.192.89.50 from 217.192.89.50 (138.187.128.158)
      Origin IGP, localpref 100, valid, external
      Community: 3303:1004 3303:1006 3303:3075 6453:3000 6453:3400 6453:3402
      path 7FE1408ED9C8 RPKI State not found
      rx pathid: 0, tx pathid: 0
  Refresh Epoch 1
route-views&gt;</code></pre>
            <p>We keep track of all the BGP updates and announcements we see in our global network. At our scale, the data we collect gives us a view of how the Internet is connected and where the traffic is meant to flow from and to everywhere on the planet.</p><p>A BGP UPDATE message informs a router of any changes you’ve made to a prefix advertisement or entirely withdraws the prefix. We can clearly see this in the number of updates we received from Facebook when checking our time-series BGP database. Normally this chart is fairly quiet: Facebook doesn’t make a lot of changes to its network minute to minute.</p><p>But at around 15:40 UTC we saw a peak of routing changes from Facebook. That’s when the trouble began.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5y7pAAyjmIsSJXessKptMg/0f842af9af2a9c2550ff64f43ea7c365/image4-11.png" />
            
            </figure><p>If we split this view by routes announcements and withdrawals, we get an even better idea of what happened. Routes were withdrawn, Facebook’s DNS servers went offline, and one minute after the problem occurred, Cloudflare engineers were in a room wondering why 1.1.1.1 couldn’t resolve facebook.com and worrying that it was somehow a fault with our systems.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/71IU6JSuaRY843UulhFiKW/cb6d58af732594976beb1a78c0f8d308/image3-9.png" />
            
            </figure><p>With those withdrawals, Facebook and its sites had effectively disconnected themselves from the Internet.</p>
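What our time-series BGP database does at scale can be sketched as bucketing UPDATE messages into announcements and withdrawals (the record shapes below are simplified for illustration, not a real BGP wire format):

```python
from collections import Counter

# Simplified UPDATE records; real messages carry path attributes, NLRI, etc.
updates = [
    {"prefix": "185.89.218.0/23", "type": "withdraw"},
    {"prefix": "129.134.30.0/23", "type": "withdraw"},
    {"prefix": "129.134.0.0/17",  "type": "announce"},
]

def tally(msgs):
    """Count announcements vs withdrawals, as our per-minute BGP charts do."""
    return Counter(m["type"] for m in msgs)

counts = tally(updates)
print(counts["withdraw"], counts["announce"])  # 2 1
```

A sudden spike in the "withdraw" bucket for a single origin AS, like the one at around 15:40 UTC, is exactly the signature of a network removing itself from the routing table.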
    <div>
      <h3>DNS gets affected</h3>
      <a href="#dns-gets-affected">
        
      </a>
    </div>
    <p>As a direct consequence of this, DNS resolvers all over the world stopped resolving their domain names.</p>
            <pre><code>➜  ~ dig @1.1.1.1 facebook.com
;; -&gt;&gt;HEADER&lt;&lt;- opcode: QUERY, status: SERVFAIL, id: 31322
;facebook.com.			IN	A
➜  ~ dig @1.1.1.1 whatsapp.com
;; -&gt;&gt;HEADER&lt;&lt;- opcode: QUERY, status: SERVFAIL, id: 31322
;whatsapp.com.			IN	A
➜  ~ dig @8.8.8.8 facebook.com
;; -&gt;&gt;HEADER&lt;&lt;- opcode: QUERY, status: SERVFAIL, id: 31322
;facebook.com.			IN	A
➜  ~ dig @8.8.8.8 whatsapp.com
;; -&gt;&gt;HEADER&lt;&lt;- opcode: QUERY, status: SERVFAIL, id: 31322
;whatsapp.com.			IN	A</code></pre>
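The SERVFAIL status in the dig output above is RCODE 2, carried in the low four bits of the second flags byte of the DNS header (RFC 1035 §4.1.1). A minimal sketch of extracting it from raw response bytes; the 12-byte header below is hand-built for illustration, not captured traffic:

```python
import struct

def rcode(message: bytes) -> int:
    """Return the RCODE from the low four bits of the DNS header flags."""
    _id, flags, qd, an, ns, ar = struct.unpack("!6H", message[:12])
    return flags & 0x000F

# Hand-built header: ID 0x7a5a, flags 0x8182 = QR|RD|RA with RCODE 2
# (SERVFAIL), one question, no answer/authority/additional records.
servfail_header = struct.pack("!6H", 0x7A5A, 0x8182, 1, 0, 0, 0)
print(rcode(servfail_header))  # 2
```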
            <p>This happens because DNS, like many other systems on the Internet, also has its own routing mechanism. When someone types the <a href="https://facebook.com">https://facebook.com</a> URL into the browser, the DNS resolver, responsible for translating <a href="https://www.cloudflare.com/learning/dns/glossary/what-is-a-domain-name/">domain names</a> into actual IP addresses to connect to, first checks if it has something in its cache and uses it if so. If not, it tries to grab the answer from the domain's nameservers, typically hosted by the entity that owns the domain.</p><p>If the nameservers are unreachable or fail to respond because of some other reason, then a SERVFAIL is returned, and the browser issues an error to the user.</p><p>Again, our learning center provides a <a href="https://www.cloudflare.com/learning/dns/what-is-dns/">good explanation</a> on how DNS works.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/aYyNe0TkIR47XpI3RjaN4/179f32b67057dc440e1ad3af2a0061e2/image8-8.png" />
            
            </figure><p>Because Facebook stopped announcing its DNS prefix routes through BGP, our DNS resolvers, and everyone else's, had no way to connect to their nameservers. Consequently, 1.1.1.1, 8.8.8.8, and other major public DNS resolvers started issuing (and caching) SERVFAIL responses.</p><p>But that's not all. Now human behavior and application logic kick in and cause another exponential effect. A tsunami of additional DNS traffic follows.</p><p>This happened in part because apps won't accept an error for an answer and start retrying, sometimes aggressively, and in part because end-users also won't take an error for an answer and start reloading the pages, or killing and relaunching their apps, sometimes also aggressively.</p><p>This is the traffic increase (in number of requests) that we saw on 1.1.1.1:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/JgzupnGlQqx0cZjijoRfb/bce5744c0d0a655e77c9b72f9d297496/image6-9.png" />
            
            </figure><p>So now, because Facebook and their sites are so big, we have DNS resolvers worldwide handling 30x more queries than usual and potentially causing latency and timeout issues for other platforms.</p><p>Fortunately, 1.1.1.1 was built to be Free, Private, Fast (as the independent DNS monitor <a href="https://www.dnsperf.com/#!dns-resolvers">DNSPerf</a> can attest), and scalable, and we were able to keep servicing our users with minimal impact.</p><p>The vast majority of our DNS requests kept resolving in under 10ms. At the same time, a small fraction of requests, at the p95 and p99 percentiles, saw increased response times, probably due to expired TTLs forcing queries to the unreachable Facebook nameservers, which then timed out. The 10-second DNS timeout limit is well known amongst engineers.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7HcKMYPFQWk5QlVyMY4dxH/c51ddb3df72b0baec1a0ff99b1a208fc/image2-11.png" />
            
            </figure>
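The retry snowball described above can be sketched as a toy model: each failing client retries, so the query volume a resolver sees is a multiple of the organic demand. The client and retry counts below are illustrative, not measured:

```python
def observed_queries(clients: int, retries_on_failure: int, failing: bool) -> int:
    """Queries a resolver sees: one per client normally, 1 + retries when
    answers come back SERVFAIL and apps/users keep retrying."""
    per_client = 1 + (retries_on_failure if failing else 0)
    return clients * per_client

normal = observed_queries(1_000_000, 4, failing=False)   # organic demand
outage = observed_queries(1_000_000, 4, failing=True)    # retry storm
print(outage // normal)  # 5x amplification from retries alone
```

Layer on users reloading pages and relaunching apps, and multiples like the 30x we observed stop being surprising.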
    <div>
      <h3>Impacting other services</h3>
      <a href="#impacting-other-services">
        
      </a>
    </div>
    <p>People look for alternatives and want to know more or discuss what’s going on. When Facebook became unreachable, we started seeing increased DNS queries to Twitter, Signal and other messaging and social media platforms.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4LNsXe0n3q2kZgSwbiTgD8/ddc52a824634bf383fc2aaf8fbadc25e/image1-12.png" />
            
            </figure><p>We can also see another side effect of this unreachability in our WARP traffic to and from Facebook's affected ASN 32934. This chart shows how traffic changed from 15:45 UTC to 16:45 UTC compared with three hours before in each country. All over the world WARP traffic to and from Facebook’s network simply disappeared.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6pe57CevhAkyCi0MYDHurW/fc424989247ad4faa11f3b4c64781502/image7-6.png" />
            
            </figure>
    <div>
      <h3>The Internet</h3>
      <a href="#the-internet">
        
      </a>
    </div>
    <p>Today's events are a gentle reminder that the Internet is a very complex and interdependent system of millions of systems and protocols working together. Trust, standardization, and cooperation between entities are at the center of making it work for almost five billion active users worldwide.</p>
    <div>
      <h3>Update</h3>
      <a href="#update">
        
      </a>
    </div>
    <p>At around 21:00 UTC we saw renewed BGP activity from Facebook's network which peaked at 21:17 UTC.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/47HV9JWOs1yaUtXV1g269v/1effede60e4dc9e1eab042bb1fc1b4fe/unnamed-3-3.png" />
            
            </figure><p>This chart shows the availability of the DNS name 'facebook.com' on Cloudflare's DNS resolver 1.1.1.1. It stopped being available at around 15:50 UTC and returned at 21:20 UTC.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5y4sNviiduLRAipw6I6UHs/770e842b223754d4aa61fffe3ee8d7c6/unnamed-4.png" />
            
            </figure><p>Undoubtedly the Facebook, WhatsApp, and Instagram services will take further time to come fully online, but as of 21:28 UTC Facebook appears to be reconnected to the global Internet, with DNS working again.</p> ]]></content:encoded>
            <category><![CDATA[Trends]]></category>
            <category><![CDATA[Outage]]></category>
            <category><![CDATA[BGP]]></category>
            <category><![CDATA[DNS]]></category>
            <category><![CDATA[Facebook]]></category>
            <guid isPermaLink="false">7jh9UDGts4LJU26IAOLsgK</guid>
            <dc:creator>Celso Martinho</dc:creator>
            <dc:creator>Tom Strickx</dc:creator>
        </item>
    </channel>
</rss>