
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/">
    <channel>
        <title><![CDATA[ The Cloudflare Blog ]]></title>
        <description><![CDATA[ Get the latest news on how Cloudflare products are built and the technologies we use, and join the teams helping to build a better Internet. ]]></description>
        <link>https://blog.cloudflare.com</link>
        <atom:link href="https://blog.cloudflare.com/" rel="self" type="application/rss+xml"/>
        <language>en-us</language>
        <image>
            <url>https://blog.cloudflare.com/favicon.png</url>
            <title>The Cloudflare Blog</title>
            <link>https://blog.cloudflare.com</link>
        </image>
        <lastBuildDate>Sat, 04 Apr 2026 06:18:34 GMT</lastBuildDate>
        <item>
            <title><![CDATA[Cloudflare's global network grows to 300 cities and ever closer to end users with connections to 12,000 networks]]></title>
            <link>https://blog.cloudflare.com/cloudflare-connected-in-over-300-cities/</link>
            <pubDate>Mon, 19 Jun 2023 13:00:16 GMT</pubDate>
            <description><![CDATA[ We are pleased to announce that Cloudflare is now connected to over 12,000 Internet networks in over 300 cities around the world ]]></description>
            <content:encoded><![CDATA[ <p></p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4PkXmBAp3jn8r0gnWqIEAx/15007c52bdd3178d13352edb92914e97/12-000-networks-1.png" />
            
            </figure><p>We make no secret about how passionate we are about building a world-class global <a href="https://www.cloudflare.com/network/">network</a> to deliver the best possible experience for our customers. This means an unwavering and continual dedication to improving both the breadth (number of cities) and depth (number of interconnects) of our network.</p><p><b>This is why we are pleased to announce that Cloudflare is now connected to over 12,000 Internet networks in over 300 cities around the world!</b></p><p>The Cloudflare global network runs every service in every data center, so your users have a consistent experience everywhere—whether you are in <a href="/reykjavik-cloudflares-northernmost-location/">Reykjavík</a>, <a href="/cloudflare-deployment-in-guam/">Guam</a> or in the vicinity of any of the 300 cities where Cloudflare lives. This means all customer traffic is processed at the data center closest to its source, with no backhauling or performance tradeoffs.</p><p>Having Cloudflare’s network present in hundreds of cities globally is critical to providing new and more convenient ways to serve our customers and their customers. However, the breadth of our network also serves other critical purposes. Let’s take a closer look at the reasons we build, and the real-world impact we’ve seen on customer experience:</p>
    <div>
      <h3>Reduce latency</h3>
      <a href="#reduce-latency">
        
      </a>
    </div>
    <p>Our network allows us to sit within approximately 50 ms of 95% of the Internet-connected population globally. Nevertheless, we are constantly reviewing network performance metrics and working with local and regional Internet service providers to ensure we focus on growing underserved markets where we can add value and improve performance. So far in 2023, we’ve already added 12 new cities, bringing our network to over 300 cities spanning 122 unique countries!</p>
<table>
<thead>
  <tr>
    <th><span>City</span></th>
  </tr>
</thead>
<tbody>
  <tr>
    <td><span>Albuquerque, New Mexico, US</span></td>
  </tr>
  <tr>
    <td><span>Austin, Texas, US</span></td>
  </tr>
  <tr>
    <td><span>Bangor, Maine, US</span></td>
  </tr>
  <tr>
    <td><span>Campos dos Goytacazes, Brazil</span></td>
  </tr>
  <tr>
    <td><span>Fukuoka, Japan</span></td>
  </tr>
  <tr>
    <td><span>Kingston, Jamaica</span></td>
  </tr>
  <tr>
    <td><span>Kinshasa, Democratic Republic of the Congo</span></td>
  </tr>
  <tr>
    <td><span>Lyon, France</span></td>
  </tr>
  <tr>
    <td><span>Oran, Algeria</span></td>
  </tr>
  <tr>
    <td><span>São José dos Campos, Brazil</span></td>
  </tr>
  <tr>
    <td><span>Stuttgart, Germany</span></td>
  </tr>
  <tr>
    <td><span>Vitoria, Brazil</span></td>
  </tr>
</tbody>
</table><p>In May, we activated a new data center in Campos dos Goytacazes, Brazil, where we interconnected with a regional network provider serving 100+ local ISPs. While it's not far from Rio de Janeiro (270 km), the new location still cut our 50th and 75th percentile latency, measured from the TCP handshake between Cloudflare's servers and the user's device, in half, providing a noticeable performance improvement!</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1CETPT4paJnZPdfob5xoWw/868652ca9f3643e7d1affa1f908b758d/image1-8.png" />
            
            </figure>
    <div>
      <h3>Improve interconnections</h3>
      <a href="#improve-interconnections">
        
      </a>
    </div>
    <p>A larger number of local interconnections facilitates direct connections between network providers, <a href="https://www.cloudflare.com/learning/cdn/what-is-a-cdn/">content delivery networks</a>, and regional Internet Service Providers. These interconnections enable faster and more efficient data exchange, content delivery, and collaboration between networks.</p><p>Currently, there are approximately 74,000<sup>1</sup> AS numbers routed globally. An <a href="https://www.cloudflare.com/learning/network-layer/what-is-an-autonomous-system/">Autonomous System</a> (AS) number is a unique number allocated per ISP, enterprise, cloud, or similar network that maintains Internet routing capabilities using BGP. Of these approximately 74,000 ASNs, 43,000<sup>2</sup> are stub ASNs, connected to only one other network. These are often enterprise, or internal use, ASNs that connect only to their own ISP or internal network, but not with other networks.</p><p>It’s mind-blowing to consider that Cloudflare is directly connected to 12,372 unique Internet networks, or approximately one third of the networks it is possible to connect to globally! This direct connectivity builds resilience and enables performance, making sure there are multiple places to connect between networks, ISPs, and enterprises, but also making sure those connections are as fast as possible.</p><p>We saw an earlier example of this as we started connecting more locally. As described in this <a href="/30-more-traffic-in-less-than-a-blink-of-an-ey/">blog post</a>, local connections even increased how much our network was used: better performance drives further usage!</p><p>At Cloudflare we ensure that infrastructure expansion strategically aligns with building in markets where we can interconnect deeper, because increasing our network breadth is only as valuable as the number of local interconnections that it enables. For example, we recently connected to a local ISP (representing a new ASN connection) in Pakistan, where the 50th percentile latency improved from ~90ms to 5ms!</p>
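<p>The stub-ASN share discussed above can be estimated from public BGP table dumps by checking how many distinct upstreams each origin ASN is ever seen behind. The sketch below is illustrative only: the input format is hypothetical (real analyses parse full route-collector dumps), and the ASNs in the sample paths are made up apart from the well-known ones.</p>

```python
from collections import defaultdict

def count_stub_asns(as_paths):
    """Count origin ASNs that are only ever seen behind a single upstream ASN.

    as_paths: iterable of AS paths, each a list of ASNs with the origin last
    (hypothetical input; a real analysis would parse route-collector dumps).
    """
    upstreams = defaultdict(set)  # origin ASN -> set of directly preceding ASNs
    for path in as_paths:
        # collapse AS-path prepending (consecutive duplicate ASNs)
        dedup = [asn for i, asn in enumerate(path) if i == 0 or asn != path[i - 1]]
        if len(dedup) < 2:
            continue
        origin, upstream = dedup[-1], dedup[-2]
        upstreams[origin].add(upstream)
    return sum(1 for ups in upstreams.values() if len(ups) == 1)

paths = [
    [174, 6453, 15169],         # Google via Cogent and Tata
    [3356, 15169],              # Google via a second upstream -> not a stub
    [174, 1299, 64512, 64512],  # prepended path; AS64512 has one upstream -> stub
]
print(count_stub_asns(paths))  # 1
```

A production version would also account for paths observed at many collectors; a stub classification is only as good as the visibility of the vantage points.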
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6LklYvBqVmhxoxrOmREPqr/047aa3b950c377ea6894dde7b9fa4cc3/image2-7.png" />
            
            </figure>
    <div>
      <h3>Build resilience</h3>
      <a href="#build-resilience">
        
      </a>
    </div>
    <p>Network expansion may be driven by reducing latency and improving interconnections, but it’s equally valuable to our existing network infrastructure. Increasing our geographic reach strengthens our redundancy, localizes failover, and helps further distribute compute workload, resulting in more effective capacity management. This improved resilience reduces the risk of service disruptions and ensures network availability even in the event of hardware failures, natural disasters, or other unforeseen circumstances. It enhances reliability and prevents single points of failure in the network architecture.</p><p>Ultimately, our commitment to strategically expanding the breadth and depth of our network delivers improved latency, stronger interconnections, and a more resilient architecture: all critical components of a better Internet! If you’re a network operator and are interested in how, together, we can deliver an improved user experience, we’re here to help! Please check out our <a href="https://www.cloudflare.com/partners/peering-portal/">Edge Partner Program</a> and let’s get connected.</p><p>........</p><p><sup>1</sup><a href="https://www.cidr-report.org/as2.0/">CIDR Report</a></p><p><sup>2</sup><a href="https://bgp.potaroo.net/cgi-bin/plota?file=%2fvar%2fdata%2fbgp%2frva%2dmrt%2f6447%2fbgp%2das%2done%2etxt&amp;descr=Origin%20ASs%20announced%20via%20a%20single%20AS%20path&amp;ylabel=Origin%20ASs%20announced%20via%20a%20single%20AS%20path&amp;with=step">Origin ASs announced via a single AS path</a></p> ]]></content:encoded>
            <category><![CDATA[Speed Week]]></category>
            <category><![CDATA[Network]]></category>
            <category><![CDATA[Network Interconnect]]></category>
            <guid isPermaLink="false">1NlDmm0M6PYgsQlYzkeBLz</guid>
            <dc:creator>Damian Matacz</dc:creator>
            <dc:creator>Marcelo Affonso</dc:creator>
            <dc:creator>Tom Paseka</dc:creator>
            <dc:creator>Joanne Liew</dc:creator>
        </item>
        <item>
            <title><![CDATA[How The Gambia lost access to the Internet for more than 8 hours]]></title>
            <link>https://blog.cloudflare.com/the-gambia-without-internet/</link>
            <pubDate>Wed, 05 Jan 2022 16:14:04 GMT</pubDate>
            <description><![CDATA[ On the morning of January 4, 2022, citizens of The Gambia woke up to a country-wide Internet outage ]]></description>
            <content:encoded><![CDATA[ 
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5XkB4Alnn5RpAXiR6nooQ7/8e337cc29a67e274a94ad453b33b5b81/The-Gambia-Internet-Outage-JAN-2022.png" />
            
            </figure><p>Internet outages are more common than most people think, and may be caused by misconfigurations, power outages, extreme weather, or infrastructure damage. Note that such outages are distinct from state-imposed <a href="/tag/internet-shutdown/">shutdowns</a> that also happen all too frequently, generally used to deal with situations of <a href="/unrest-in-gabon-leads-to-internet-shutdown/">unrest</a>, <a href="/uganda-january-13-2021-internet-shut-down/">elections</a> or even <a href="/sudans-exam-related-internet-shutdowns/">exams</a>.</p><p>On the morning of January 4, 2022, citizens of The Gambia woke up to a country-wide Internet <a href="https://twitter.com/dbelson/status/1478347956944310274">outage</a>. <a href="https://en.wikipedia.org/wiki/Gamtel">Gamtel</a> (the main state-owned telecommunications company of the West African country) <a href="https://twitter.com/Gamtel/status/1478310096639770625">announced</a> that it happened due to "technical issues on the backup links" — we elaborate more on this below.</p><p>Cloudflare Radar shows that the outage had a significant impact on Internet traffic in the country and started after 01:00 UTC (which is the same local time), lasting until ~09:45 — a disruption of over 8 hours.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/22KA345Bo9T1RUp9dTr13B/3bdc8388af3f2f4aa3104278eecdbb61/image4-1.png" />
            
            </figure><p>Looking at <a href="https://www.cloudflare.com/learning/security/glossary/what-is-bgp/">BGP</a> (Border Gateway Protocol) updates from Gambian ASNs around the time of the outage, we see a clear spike at 01:10 UTC. These BGP update messages signal that the Gambian ASNs are no longer routable.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7CAXydcGjnQwf0vKoaFKF2/1698a32da1dfced0588e317368487f54/image5-1.png" />
            
            </figure><p>It is important to know that BGP is a mechanism to exchange routing information between autonomous systems (networks) on the Internet. The routers that make the Internet work have huge, constantly updated lists of the possible routes that can be used to deliver every network packet to its final destination. Without BGP, the Internet routers wouldn't know what to do, and the Internet wouldn't work. As we saw in our blog post in 2021 about <a href="/october-2021-facebook-outage/">how Facebook disappeared from the Internet</a>, the Internet is literally a network of networks, and it’s bound together by BGP.</p><p>The Gambia’s Internet access is solely dependent on a single provider, Gamtel. Because The Gambia’s international Internet connectivity via the ACE submarine cable was unavailable, it was reliant on the “backup links” referenced above: terrestrial connectivity via Senegal and the provider Sonatel. This is visible in BGP data. If we look at the ASNs that are allocated to networks in The Gambia (AS25250, AS37309, AS37503, AS37552, AS37524, AS37323, AS328488, AS328140), and put those into a regular expression on BGP routing tools like <a href="http://www.routeviews.org/routeviews/">route-views</a>, like so:</p>
            <pre><code>route-views&gt;show ip bgp regexp .*_(25250|37309|37503|37552|37524|37323|328488|328140)</code></pre>
            <p>We are able to see all the possible upstream ASN paths from these networks to the rest of the Internet.</p><p>Looking at the “Path” results, we see that AS8346 (Sonatel) and AS25250 (Gamtel) are in the path for all the Gambian networks.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7AelkVnU67WSv5qkhLt2VW/1cfb56ec61006e69a028f6ee598efdfa/image1-1.png" />
            
            </figure><p>Visualized, you can see the dependency on this network path for The Gambia’s Internet access.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5q3QiELxPvQxzebO9OdrtH/3855bc7959f6f832b88213affd229a44/image3-2.png" />
            
            </figure><p>No interruptions were seen in Sonatel (AS8346), so this indicates that the single network path between Sonatel and Gamtel (AS25250) is a critical point for connectivity. A failure in either of these networks could result in The Gambia going offline again.</p><p>Yesterday’s outage in The Gambia illustrates something we frequently reference here on the blog: the Internet is literally a network of networks. A significant amount of Internet traffic is carried by a complex network of <a href="https://en.wikipedia.org/wiki/Submarine_communications_cable">undersea fiber-optic cables</a> that connect countries and continents — all the <a href="https://en.wikipedia.org/wiki/List_of_international_submarine_communications_cables">cable systems</a> used have landing points in two or more countries. So a problem in one country can easily affect others.</p><p>Going back to The Gambia, Gamtel explained in a January 5, 2022, <a href="https://twitter.com/Gamtel/status/1478670244067651587/photo/1">press release</a> that there was “a primary link failure at <a href="https://en.wikipedia.org/wiki/Africa_Coast_to_Europe_(cable_system)">ACE</a>” — the cable system that serves 24 countries, from Europe to Africa. “The ACE cable repair is expected to be completed in mid-January, 2022,” explained the company.</p>
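<p>The route-views filter shown earlier can be reproduced offline against a list of AS-path strings, for example from a route-collector dump. Below is a minimal sketch with made-up paths; the <code>[ _]</code> character class stands in for Cisco's <code>_</code> AS-path metacharacter, which matches a boundary between ASNs.</p>

```python
import re

# Offline equivalent of the route-views regexp filter; the ASN list is the
# set of Gambian networks named in the post.
GAMBIAN = re.compile(r".*[ _](25250|37309|37503|37552|37524|37323|328488|328140)\b")

def gambian_paths(paths):
    """Return the AS-path strings that traverse a Gambian ASN."""
    return [p for p in paths if GAMBIAN.match(p)]

paths = [  # made-up example paths, not actual route-views output
    "3356 8346 25250",        # via Sonatel (AS8346) to Gamtel (AS25250)
    "174 6453 15169",         # unrelated path to Google
    "1299 8346 25250 37309",  # a Gambian network reached behind Gamtel
]
print(gambian_paths(paths))
```

Running a check like this over the matching paths would show the same dependency the post describes: every Gambian path transits the single Sonatel-to-Gamtel hop.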
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2DV29RysbNePQPo9JVmNaW/649dcc3f160cb049e2e591582a924bae/image2-1.png" />
            
            </figure><p>The full <a href="https://en.wikipedia.org/wiki/Africa_Coast_to_Europe_(cable_system)">ACE</a> (Africa Coast to Europe) submarine cable system. From <a href="https://afterfibre.nsrc.org/">NSRC</a></p><p>The “backup failure” here was “due to a faulty card at Toubakota, in Senegal”. That problem affects “both the Karang and Seleti links [points of cable connections from Senegal to The Gambia] as both North and South links converges there”. “Thus, the reason for the complete isolation on the Sonatel link”, concludes Gamtel.</p><p>Recognizing the critical importance of reliable Internet connectivity, The Gambia Public Utilities Regulatory Authority also <a href="https://pura.gm/pura-explains-frequent-fibre-cuts-and-plans-to-explore-securing-second-fibre-cable-and-extra-backup-facilities/">issued a statement</a> noting “The Authority, operators, MOICI, and the Government are exploring other options of making sure that the Gambia has a second fibre cable backup considering the impact that these failures are having on our national security, economy, and social activities.”</p> ]]></content:encoded>
            <category><![CDATA[Radar]]></category>
            <category><![CDATA[Internet Shutdown]]></category>
            <guid isPermaLink="false">3A7dlgEg828TSSCYCIyffW</guid>
            <dc:creator>David Belson</dc:creator>
            <dc:creator>Tom Paseka</dc:creator>
            <dc:creator>João Tomé</dc:creator>
        </item>
        <item>
            <title><![CDATA[Cloudflare Network Interconnection partnerships launch]]></title>
            <link>https://blog.cloudflare.com/cloudflare-network-interconnect-partner-program/</link>
            <pubDate>Tue, 04 Aug 2020 13:00:00 GMT</pubDate>
            <description><![CDATA[ Today we’re excited to announce Cloudflare’s Network Interconnection Partner Program, in support of our new CNI product. As ever more enterprises turn to Cloudflare to secure and accelerate their branch and core networks, the ability to connect privately and securely becomes increasingly important. ]]></description>
            <content:encoded><![CDATA[ <p>Today we’re excited to announce Cloudflare’s Network Interconnection <a href="https://www.cloudflare.com/network-interconnect-partnerships/">Partner Program</a>, in support of our new CNI <a href="/cloudflare-network-interconnect">product</a>. As ever more enterprises turn to Cloudflare to <a href="https://www.cloudflare.com/learning/network-layer/network-security/">secure</a> and accelerate their branch and core networks, the ability to connect privately and securely becomes increasingly important. Today's announcement significantly increases the interconnection options for our customers, allowing them to connect with us in the location of their choice using the method or vendors they prefer.</p><p>In addition to our <a href="https://www.peeringdb.com/net/4224">physical locations</a>, our customers can now interconnect with us at any of 23 metro areas across five continents using <b>software-defined layer 2 networking technology</b>. Following the recent release of CNI (which includes PNI support for Magic Transit), customers can now order layer 3 DDoS protection in any of the markets below, without requiring physical cross connects, providing <b>private and secure</b> links, with <b>simpler setup</b>.</p>
    <div>
      <h3>Launch Partners</h3>
      <a href="#launch-partners">
        
      </a>
    </div>
    <p>We’re very excited to announce that five of the world's premier interconnect platforms are available at launch. <a href="http://www.consoleconnect.com/"><b>Console Connect by PCCW Global</b></a> in 14 locations, <a href="https://www.megaport.com/"><b>Megaport</b></a> in 14 locations, <a href="https://packetfabric.com/"><b>PacketFabric</b></a> in 15 locations, <a href="https://www.equinix.com/interconnection-services/cloud-exchange-fabric/"><b>Equinix ECX Fabric</b>™</a> in 8 locations and <a href="http://zayo.com/"><b>Zayo Tranzact</b></a> in 3 locations, spanning North America, Europe, Asia, Oceania and Africa.</p>
    <div>
      <h3>What is an Interconnection Platform?</h3>
      <a href="#what-is-an-interconnection-platform">
        
      </a>
    </div>
    <p>Like much of the networking world, there are many terms in the interconnection space for the same thing: Cloud Exchange, Virtual Cross Connect Platform and Interconnection Platform are all synonyms. They are platforms that allow two networks to interconnect privately at layer 2, without requiring additional physical cabling. Instead, the customer can order a port and a virtual connection on a dashboard, and the interconnection ‘fabric’ will establish the connection. Since many large customers are already connected to these fabrics for their connections to traditional Cloud providers, it is a very convenient method to establish private connectivity with Cloudflare.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2ewYi6iIQ3UJQsuCYVax4V/8cb32d608702df2a0ddbf03a858e8bc4/BDES-687_Hero_Image_for_Web_Page_of_New_Partner_Program.svg" />
            
            </figure>
    <div>
      <h3>Why interconnect virtually?</h3>
      <a href="#why-interconnect-virtually">
        
      </a>
    </div>
    <p>Cloudflare has an extensive <a href="/cloudflare-peering-portal-beta/">peering</a> infrastructure and already has private links to thousands of other networks. Virtual private interconnection is particularly attractive to customers with strict security postures and demanding performance requirements, but without the added burden of ordering and managing additional physical cross connects and expanding their physical infrastructure.</p>
    <div>
      <h3>Key Benefits of Interconnection Platforms</h3>
      <a href="#key-benefits-of-interconnection-platforms">
        
      </a>
    </div>
    <p><b>Secure:</b> Similar to physical PNI, traffic does not pass across the Internet. Rather, it flows from the customer router, to the Interconnection Platform’s network, and ultimately to Cloudflare. So while there is still some element of shared infrastructure, it’s not over the public Internet.</p><p><b>Efficient:</b> Modern PNIs are typically a minimum of 1Gbps, but if you have the security motivation without the sustained 1Gbps data transfer rates, then you will have idle capacity. Virtual connections provide for “sub-rate” speeds, which means less than 1Gbps, such as 100Mbps, meaning you only pay for what you use. Most providers also allow some level of “burstiness”, which is to say you can exceed that 100Mbps limit for short periods.</p><p><b>Performance:</b> By avoiding the public Internet, virtual links avoid Internet congestion.</p><p><b>Price:</b> The major cloud providers typically have different pricing for egressing data to the Internet compared to an Interconnect Platform. By connecting to your cloud via an Interconnect Partner, you can benefit from those reduced egress fees between your cloud and the Interconnection Platform. This builds on our <a href="https://www.cloudflare.com/bandwidth-alliance/">Bandwidth Alliance</a> to give customers more options to continue to drive down their network costs.</p><p><b>Less Overhead:</b> By virtualizing, you reduce physical cable management to just one connection into the Interconnection Platform. From there, everything is defined and managed in software. For example, ordering a 100Mbps link to Cloudflare can be a few clicks in a Dashboard, as would be a 100Mbps link into Salesforce.</p><p><b>Data Center Independence:</b> Is your infrastructure in the same metro, but in a different facility than Cloudflare? An Interconnection Platform can bring us together without the need for additional physical links.</p>
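<p>The post doesn't specify how providers meter bursts, but a common model in the industry is 95th-percentile billing: periodic throughput samples are sorted, the top 5% are discarded, and you are billed at the highest remaining rate, so short bursts above a committed 100Mbps go uncharged. A rough sketch of that calculation (the billing model here is an assumption, not something the post states):</p>

```python
def billable_rate_mbps(samples_mbps):
    """95th-percentile metering: discard the top 5% of samples and
    bill at the highest remaining rate (a common, but not universal, model)."""
    ordered = sorted(samples_mbps)
    # 0-based index of the 95th-percentile sample
    idx = max(0, int(len(ordered) * 0.95) - 1)
    return ordered[idx]

# 100 five-minute samples: steady ~40 Mbps with five short bursts above 100 Mbps
samples = [40] * 95 + [120, 150, 180, 200, 250]
print(billable_rate_mbps(samples))  # 40 -- the bursts fall in the top 5%
```

Under this model, a sub-rate 100Mbps connection that bursts occasionally is still billed at its sustained rate, which is the "pay for what you use" behavior described above.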
    <div>
      <h3>Where can I connect?</h3>
      <a href="#where-can-i-connect">
        
      </a>
    </div>
    <ol><li><p>In any of our <a href="https://www.peeringdb.com/net/4224">physical facilities</a></p></li><li><p>In any of the 23 metro areas where we are currently connected to an Interconnection Platform (see below)</p></li><li><p>If you’d like to connect virtually in a location not yet listed below, simply <a href="https://cloudflare.com/network-interconnect-partner-program">get in touch</a> via our interconnection page and we’ll work out the best way to connect.</p></li></ol>
    <div>
      <h3>Metro Areas</h3>
      <a href="#metro-areas">
        
      </a>
    </div>
    <p>The metro areas below have currently active connections. New providers and locations can be turned up on request.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1Mtk1onQbwTCMBLkicYX3e/b8bb990a2b365c9c23d223569b9545d1/Screen-Shot-2020-08-04-at-8.55.25-AM-1.png" />
            
            </figure>
    <div>
      <h3>What’s next?</h3>
      <a href="#whats-next">
        
      </a>
    </div>
    <p>Our customers have been asking for direct on-ramps to our global network for a long time, and we’re excited to deliver that today with both physical and virtual connectivity via the world’s leading Interconnection Platforms.</p><p>Already a Cloudflare customer and connected with one of our Interconnection partners? Then <a href="https://www.cloudflare.com/network-interconnect/">contact your account team</a> today to get connected and benefit from the improved reliability, security and privacy of Cloudflare Network Interconnect via our interconnection partners.</p><p>Are you an Interconnection Platform with customers demanding direct connectivity to Cloudflare? Head to our <a href="https://www.cloudflare.com/network-interconnect-partnerships/">partner program page</a> and click “Become a partner”. We’ll continue to add platforms and partners according to customer demand.</p><p><i>"Equinix and Cloudflare share the vision of software-defined, virtualized and API-driven network connections. The availability of Cloudflare on the Equinix Cloud Exchange Fabric demonstrates that shared vision and we’re excited to offer it to our joint customers today."</i> – <b>Joseph Harding</b>, Equinix, Vice President, Global Product &amp; Platform Marketing</p><p><i>"Cloudflare and Megaport are driven to offer greater flexibility to our customers. In addition to accessing Cloudflare’s platform on Megaport’s global internet exchange service, customers can now provision on-demand, secure connections through our Software Defined Network directly to Cloudflare Network Interconnect on-ramps globally. With over 700 enabled data centres in 23 countries, Megaport extends the reach of CNI onramps to the locations where enterprises house their critical IT infrastructure. Because Cloudflare is interconnected with our SDN, customers can point, click, and connect in real time. We’re delighted to grow our partnership with Cloudflare and bring CNI to our services ecosystem — allowing customers to build multi-service, securely-connected IT architectures in a matter of minutes."</i> – <b>Matt Simpson</b>, Megaport, VP of Cloud Services</p><p><i>“The ability to self-provision direct connections to Cloudflare’s network from Console Connect is a powerful tool for enterprises as they come to terms with new demands on their networks. We are really excited to bring together Cloudflare’s industry-leading solutions with PCCW Global’s high-performance network on the Console Connect platform, which will deliver much higher levels of network security and performance to businesses worldwide.”</i> – <b>Michael Glynn</b>, PCCW Global, VP of Digital Automated Innovation</p><p><i>"Our customers can now connect to Cloudflare via a private, secure, and dedicated connection via the PacketFabric Marketplace. PacketFabric is proud to be the launch partner for Cloudflare's Interconnection program. Our large U.S. footprint provides the reach and density that Cloudflare customers need."</i> – <b>Dave Ward</b>, PacketFabric CEO</p> ]]></content:encoded>
            <category><![CDATA[Better Internet]]></category>
            <category><![CDATA[Internet Performance]]></category>
            <category><![CDATA[Magic Transit]]></category>
            <category><![CDATA[Bandwidth Alliance]]></category>
            <category><![CDATA[Partners]]></category>
            <guid isPermaLink="false">5hgzfJ4XidiGTvNVzN5VCq</guid>
            <dc:creator>Steven Pack</dc:creator>
            <dc:creator>Tom Paseka</dc:creator>
        </item>
        <item>
            <title><![CDATA[How a Nigerian ISP Accidentally Knocked Google Offline]]></title>
            <link>https://blog.cloudflare.com/how-a-nigerian-isp-knocked-google-offline/</link>
            <pubDate>Thu, 15 Nov 2018 17:22:35 GMT</pubDate>
            <description><![CDATA[ Last Monday evening — 12 November 2018 — Google and a number of other services experienced a 74 minute outage. Incidents like this only serve to demonstrate just how much frailty is involved in how packets get from one point on the Internet to another. ]]></description>
            <content:encoded><![CDATA[ <p>Last Monday evening — 12 November 2018 — Google and a number of other services experienced a 74 minute outage. <a href="/why-google-went-offline-today-and-a-bit-about/">It’s not the first time this has happened</a>; and while there might be a temptation to assume that bad actors are at work, incidents like this only serve to demonstrate just how much frailty is involved in how packets get from one point on the Internet to another.</p><p>Our logs show that at 21:12 UTC on Monday, a Nigerian ISP, MainOne, accidentally misconfigured part of their network, causing a "route leak". This resulted in Google and a number of other networks being routed over unusual network paths. Incidents like this actually happen quite frequently, but in this case, the traffic flows generated by Google users were so great that they overwhelmed the intermediary networks — rendering numerous services (but predominantly Google) unreachable.</p><p>You might be surprised to learn that an error by an ISP somewhere in the world could result in Google and other services going offline. This blog post explains how that can happen and what the Internet community is doing to try to fix this fragility.</p>
    <div>
      <h3>What Is A Route Leak, And How Does One Happen?</h3>
      <a href="#what-is-a-route-leak-and-how-does-one-happen">
        
      </a>
    </div>
    <p>When traffic is routed outside of regular and optimal routing paths, this is known as a “route leak”. An explanation of how they happen requires a little bit more context.</p><p>Every network and network provider on the Internet has their own Autonomous System (AS) number. This number is unique and indicates the part of the Internet that that organization controls. Of note for the following explanation: Google’s primary AS number is 15169. That's Google's corner of the Internet and where Google traffic should end up... by the fastest path.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3WDp9ktd4pBlSeb2mt4kZH/4dcf099b010ba95396818a15413d2af8/image-1.png" />
            
            </figure><p>A typical view of how Google/AS15169’s routes are propagated to Tier-1 networks.</p><p>As seen above, Google is directly connected to most of the Tier-1 networks (the largest networks, which link large swathes of the Internet). When everything is working as it should be, Google’s AS Path, the route packets take from network to network to reach their destination, is actually very simple. For example, in the diagram above, if you were a customer of Cogent and you were trying to get to Google, the AS Path that you would see is “174 6453 15169”. That string of numbers is like a sequence of waypoints: start on AS 174 (Cogent), go to Tata (AS 6453), then go to Google (AS 15169). So, Cogent subscribers reach Google via Tata, a huge Tier-1 provider.</p><p>During the incident, MainOne misconfigured their routing, as reflected in the AS Path: “20485 4809 37282 15169”. As a result of this misconfiguration, any networks that MainOne peered with (i.e. were directly connected to) potentially had their routes leaked through this erroneous path. For example, the Cogent customer in the paragraph above (AS 174) wouldn’t have gone via Tata (AS 6453) as they should have. Instead, they were routed first through TransTelecom (a Russian carrier, AS 20485), then to China Telecom CN2 (a cross-border Chinese carrier, AS 4809), then on to MainOne (the Nigerian ISP that misconfigured everything, AS 37282), and only then were they finally handed off to Google (AS 15169). In other words, a user in London could have had their traffic go from Russia to China to Nigeria — and only then get to Google.</p>
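<p>A detour like this is easy to spot mechanically: if the hop immediately before AS 15169 is not one of Google's known direct upstreams, the path deserves scrutiny. A toy sketch of that check follows; the upstream set is illustrative only, not an authoritative list of Google's peers.</p>

```python
# Hypothetical route-leak heuristic: Google (AS15169) is normally reached
# directly from Tier-1 networks, so an unexpected penultimate hop is suspicious.
EXPECTED_GOOGLE_UPSTREAMS = {174, 1299, 2914, 3257, 3356, 6453}  # illustrative set

def looks_leaked(as_path, origin=15169, expected=EXPECTED_GOOGLE_UPSTREAMS):
    """Flag a path to `origin` whose last hop before it is not a known upstream."""
    if not as_path or as_path[-1] != origin:
        return False
    return len(as_path) >= 2 and as_path[-2] not in expected

print(looks_leaked([174, 6453, 15169]))           # False: Cogent -> Tata -> Google
print(looks_leaked([20485, 4809, 37282, 15169]))  # True: ... -> MainOne -> Google
```

Real leak detection (as done by monitoring services) uses far richer signals, such as historical path baselines and inferred AS relationships, but the core idea is the same comparison against expected adjacencies.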
    <div>
      <h3>But… Why Did This Impact So Many People?</h3>
      <a href="#but-why-did-this-impact-so-many-people">
        
      </a>
    </div>
    <p>The root cause of this was MainOne misconfiguring their routing. As mentioned earlier, incidents like this actually happen quite frequently. The impact of this misconfiguration should have been limited to MainOne and its customers.</p><p>However, what turned this from a relatively isolated incident into a much broader one is that CN2, China Telecom’s premium cross-border carrier, was not filtering the routes that MainOne provided to it. In other words, MainOne told CN2 that it had authority to route Google’s IP addresses. Most networks verify such claims and, if they are incorrect, filter them out. CN2 did not; it simply trusted MainOne. As a result, MainOne’s misconfiguration propagated to a substantially larger network. Compounding this, it is likely that the Russian network TransTelecom behaved towards CN2 as CN2 had behaved towards MainOne: it trusted the routing paths CN2 gave it without any verification.</p><p>This demonstrates how much trust is involved in the underlying connections that make up the Internet. It's a network of networks (an internet!) that works by cooperation between different entities (countries and companies).</p><p>This is how a routing mistake made in Nigeria propagated through China and then through Russia. Given the amount of traffic involved, the networks were overwhelmed and Google was unreachable.</p><p>It is worth stating explicitly: the fact that Google traffic was routed through Russia and China to Nigeria before finally reaching the correct destination made it appear to some people that the misconfiguration was nefarious. We do not believe this to be the case. Instead, this incident reflects a mistake that was not caught by appropriate network filtering. There was too much trust and not enough verification across a number of networks: this is a systemic problem that makes the Internet more vulnerable to mistakes than it should be.</p>
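<p>The verification step that was skipped can be sketched as a per-neighbor prefix filter: a provider accepts from a neighbor only the prefixes that neighbor is registered to announce. This is a minimal sketch with hypothetical data (RFC 5737 documentation prefixes stand in for real address space), not CN2's actual configuration; real filters are built from IRR or RPKI data:</p>

```python
# Minimal sketch of per-neighbor prefix filtering. The prefix list below is
# hypothetical (RFC 5737 documentation ranges), not real routing data.
import ipaddress

# Prefixes each neighbor ASN is allowed to announce to us.
NEIGHBOR_PREFIX_LISTS = {
    37282: [ipaddress.ip_network("198.51.100.0/24")],  # placeholder for MainOne's space
}

def accept_from_neighbor(neighbor_asn, prefix):
    """Accept an announcement only if it falls inside the neighbor's allowed prefixes."""
    net = ipaddress.ip_network(prefix)
    allowed = NEIGHBOR_PREFIX_LISTS.get(neighbor_asn, [])
    return any(net.subnet_of(allowed_net) for allowed_net in allowed)

print(accept_from_neighbor(37282, "198.51.100.0/25"))  # True: inside registered space
print(accept_from_neighbor(37282, "203.0.113.0/24"))   # False: a leaked route is dropped
```

<p>Had a filter of this shape been applied, the leaked Google prefixes would have failed the check at CN2's edge and never propagated further.</p>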
    <div>
      <h3>How to Mitigate Route Leaks Like These</h3>
      <a href="#how-to-mitigate-route-leaks-like-these">
        
      </a>
    </div>
    <p>Along with Cloudflare’s internal systems, <a href="https://bgpmon.net/">BGPMon.net</a> and ThousandEyes detected the incident. BGPMon has several methods to detect abnormalities; in this case, they were able to detect that the providers in the paths to reach Google were irregular.</p><p>Cloudflare’s systems immediately detected this and took auto-remediation action.</p><blockquote><p>Thankfully our systems detected it and mitigated it! <a href="https://t.co/qFiDkrn2Kd">pic.twitter.com/qFiDkrn2Kd</a></p>— Jerome Fleury (@Jerome_UZ) <a href="https://twitter.com/Jerome_UZ/status/1062185732544950272?ref_src=twsrc%5Etfw">November 13, 2018</a></blockquote>

<p>Some more information about Cloudflare’s Auto Remediation system: <a href="/the-internet-is-hostile-building-a-more-resilient-network/">https://blog.cloudflare.com/the-internet-is-hostile-building-a-more-resilient-network/</a></p>
    <div>
      <h3>Looking Forward</h3>
      <a href="#looking-forward">
        
      </a>
    </div>
    <p>Cloudflare is committed to working with others to drive more secure and stable routing. We recently wrote about <a href="/rpki/">RPKI</a> and how we’ll start enforcing <a href="/rpki-details/">“Strict” RPKI validation</a>, and we will continue to strive for secure Internet routing. We hope that all networks providing transit services will ensure proper filtering of their customers, put an end to hijacks and route leaks, and implement best practices like <a href="https://tools.ietf.org/html/bcp38">BCP-38</a>.</p> ]]></content:encoded>
            <category><![CDATA[BGP]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[Peering]]></category>
            <category><![CDATA[Outage]]></category>
            <guid isPermaLink="false">4owruqjpxEweZ1yvYun6Pn</guid>
            <dc:creator>Tom Paseka</dc:creator>
        </item>
        <item>
            <title><![CDATA[Norfolk and Richmond, Virginia: Cloudflare's 152nd and 153rd cities]]></title>
            <link>https://blog.cloudflare.com/norfolk-and-richmond/</link>
            <pubDate>Wed, 12 Sep 2018 15:37:00 GMT</pubDate>
            <description><![CDATA[ Virginia has a very important place in Internet history, as well as the history of Cloudflare’s network. Northern Virginia, in the area around Ashburn VA, has for a long time been core to Internet infrastructure. ]]></description>
            <content:encoded><![CDATA[ <p></p><p>Virginia has a very important place in Internet history, as well as in the history of Cloudflare’s network.</p><p>Northern Virginia, the area around Ashburn, VA, has long been core to Internet infrastructure. In the early 1990s, MAE-East (Metropolitan Area Exchange, East), an Internet Exchange Point (IXP), was established. <a href="https://en.wikipedia.org/wiki/MAE-East">MAE-East</a> and MAE-West were some of the earliest <a href="https://en.wikipedia.org/wiki/Internet_exchange_point">IXPs</a>. Internet Exchange Points are crucial interconnection points where ISPs and other Internet networks interconnect and exchange traffic. Ecosystems have grown around them through new data center offerings and new Internet platforms. Like many pieces of the Internet, MAE-East had <a href="https://www.parking.org/2017/09/05/internet-began-parking-garage/">a humble beginning</a>, though not many humble beginnings grow to handle around 50% of Internet traffic exchange.</p><p>Cloudflare’s second data center, and one that still plays a critical role in our <a href="https://cloudflare.com/network-map">Global Network</a>, was Ashburn, Virginia. As with many organizations, the Northern Virginia area has become a data center mecca for Cloudflare. Many of the largest clouds have a substantial amount of their footprint in Northern Virginia. Although MAE-East no longer exists, other Internet Exchange Points have come and grown in its place.</p><p>Cloudflare’s network has grown beyond traditional interconnection points, like Ashburn and Northern Virginia, to a new edge of the network. Cloudflare will continue to grow its edge closer to every end user, so today we’re announcing our Richmond and Norfolk data centers. These two data centers will cover much more of Virginia and neighboring regions.</p> ]]></content:encoded>
            <category><![CDATA[Cloudflare Network]]></category>
            <category><![CDATA[Data Center]]></category>
            <category><![CDATA[USA]]></category>
            <guid isPermaLink="false">5FnOq3Sgwd4jzDkoqx8TyX</guid>
            <dc:creator>Tom Paseka</dc:creator>
        </item>
        <item>
            <title><![CDATA[IPv6 in China]]></title>
            <link>https://blog.cloudflare.com/ipv6-in-china/</link>
            <pubDate>Thu, 19 Jul 2018 00:03:37 GMT</pubDate>
            <description><![CDATA[ At the end of 2017, Xinhua reported that there will be 200 million IPv6 users inside Mainland China by the end of this year. Halfway into the year, we’re seeing rapid growth in IPv6 users and traffic originating from Mainland China. ]]></description>
            <content:encoded><![CDATA[ <p></p><p>Photo by <a href="https://unsplash.com/@chuttersnap?utm_source=ghost&amp;utm_medium=referral&amp;utm_campaign=api-credit">chuttersnap</a> / <a href="https://unsplash.com/?utm_source=ghost&amp;utm_medium=referral&amp;utm_campaign=api-credit">Unsplash</a></p><p>At the end of 2017, Xinhua reported that there will be 200 Million IPv6 users inside Mainland China <a href="http://www.xinhuanet.com/english/2017-11/26/c_136780735.htm">by the end of this year</a>. Halfway into the year, we’re seeing a rapid growth in IPv6 users and traffic originating from Mainland China.</p>
    <div>
      <h3>Why does this matter?</h3>
      <a href="#why-does-this-matter">
        
      </a>
    </div>
    <p>IPv6 is often referred to as the next generation of IP addressing. The reality is, IPv6 is what is needed for addressing today. Take the largest mobile network in China today: China Mobile has over 900 million mobile subscribers and over <a href="https://www.chinamobileltd.com/en/ir/operation_m.php">670 million 4G/LTE subscribers</a>. To provide service to their users, they need to give an IP address to each subscriber’s device. This means close to a billion IP addresses would be required, which is far more than what is available in IPv4, especially as the available address pools have been <a href="https://en.wikipedia.org/wiki/IPv4_address_exhaustion">exhausted</a>.</p>
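<p>The arithmetic behind that claim is easy to check. A back-of-envelope sketch, using the subscriber figure quoted above:</p>

```python
# Back-of-envelope check: one carrier's subscriber base against the entire
# IPv4 address space (before reserved ranges are even subtracted).
ipv4_total = 2 ** 32           # every possible IPv4 address
ipv6_total = 2 ** 128          # every possible IPv6 address
subscribers = 900_000_000      # China Mobile subscribers, per the figure above

print(ipv4_total)                          # 4294967296
print(round(subscribers / ipv4_total, 2))  # 0.21 -> ~21% of all IPv4 for one network
```

<p>One network alone would consume roughly a fifth of the entire theoretical IPv4 space, while the IPv6 space is large enough that the comparison isn't even interesting.</p>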
    <div>
      <h3>What is the solution?</h3>
      <a href="#what-is-the-solution">
        
      </a>
    </div>
    <p>To keep clients addressable, many networks, especially mobile networks, use <a href="https://en.wikipedia.org/wiki/Carrier-grade_NAT">Carrier Grade NAT (CGN)</a>. This allows thousands, possibly even hundreds of thousands, of devices to share a single public IP address. CGN equipment can be very expensive to scale and, given the scale of these networks, operators might need to layer CGNs behind other CGNs. This increases cost per subscriber, can reduce performance, and makes scaling very challenging. A further solution, <a href="https://en.wikipedia.org/wiki/NAT64">NAT64</a>, allows subscribers to be given IPv6 addresses, with their traffic to IPv4 destinations translated at the network edge, similar to other NATs. This lets networks and ISPs begin deploying IPv6 to subscribers, a first step in the transition to IPv6.</p>
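<p>As a sketch of how NAT64 addressing works: per RFC 6052, an IPv4 destination can be embedded in the last 32 bits of the well-known prefix 64:ff9b::/96, which is what lets an IPv6-only subscriber address an IPv4 server (the translator later rewrites the packets). A minimal illustration:</p>

```python
# Sketch of RFC 6052 NAT64 address mapping: embed an IPv4 address in the
# last 32 bits of the well-known prefix 64:ff9b::/96.
import ipaddress

NAT64_PREFIX = ipaddress.ip_network("64:ff9b::/96")

def to_nat64(ipv4):
    """Return the IPv6 address an IPv6-only client would use for this IPv4 host."""
    v4 = ipaddress.IPv4Address(ipv4)
    return ipaddress.IPv6Address(int(NAT64_PREFIX.network_address) + int(v4))

print(to_nat64("192.0.2.1"))  # 64:ff9b::c000:201  (192.0.2.1 is a documentation address)
```

<p>The DNS64 companion service performs this same embedding automatically when an IPv6-only client looks up a name that only has an IPv4 record.</p>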
    <div>
      <h3>IPv6 IPv6 IPv6!</h3>
      <a href="#ipv6-ipv6-ipv6">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1iY3FD5e6ymhDdlae7sTwV/b509d4cb412452cba3ab40c467fba9bb/AS9808-BGP-Announcements.png" />
            
            </figure><p>Announcements of IPv6 address blocks from China Mobile. Source: <a href="https://bgp.he.net/AS9808#_asinfo">Hurricane Electric</a></p><p>On June 7, China Mobile started announcing IPv6 address blocks to the Internet at large. At the same time, Cloudflare started seeing traffic being exchanged with China Mobile users over IPv6 connections.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/17A2wJdccc2D06GJEFPm2h/22fba6672a36787088cfb24a4f9bd632/AS9808-IPv6-Stats.png" />
            
            </figure><p>IPv4 to IPv6 percentage of traffic as seen from Cloudflare to AS9808, China Mobile’s Guangdong network.</p><p>Throughout the past 45 days, we’ve seen more and more IPv6 address blocks being announced to the Internet, along with very rapid growth in usage. Interestingly, this all started on or around June 8, 2018 (seven years to the day from <a href="https://en.wikipedia.org/wiki/World_IPv6_Day_and_World_IPv6_Launch_Day">World IPv6 Day</a>).</p><p>It’s natural to see traffic graphs like this go up, then down after a while. This could indicate there’s still some testing going on with the deployment. We fully expect the traffic percentage to climb back up once this is fully rolled out.</p><p>It’s fantastic to see IPv6 being enabled at this scale! We congratulate China Mobile on a successful rollout.</p> ]]></content:encoded>
            <category><![CDATA[IPv6]]></category>
            <category><![CDATA[China]]></category>
            <guid isPermaLink="false">1SakFiXhHjQEdWaLINxBIt</guid>
            <dc:creator>Tom Paseka</dc:creator>
        </item>
        <item>
            <title><![CDATA[Cebu City, Philippines: Cloudflare's 138th Data Center]]></title>
            <link>https://blog.cloudflare.com/cebu/</link>
            <pubDate>Fri, 23 Mar 2018 02:07:08 GMT</pubDate>
            <description><![CDATA[ Cebu City, the oldest city in the Philippines and its second most populous metro area, is the home of Cloudflare’s newest data center. Located centrally in the Philippines, Cebu has a long-standing tradition of trade and business activity; the word “Cebu” itself means trade. ]]></description>
            <content:encoded><![CDATA[ <p></p><p>Cebu City, the oldest city in the Philippines and its second most populous metro area, is the home of Cloudflare’s newest data center.</p><p>Located centrally in the Philippines, Cebu has a long-standing tradition of trade and business activity; the word “Cebu” itself means trade. Its central location brings excellent coverage to the central and southern Philippines, while our existing location in Manila serves Metro Manila and the northern Philippines.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6rrg9Kawiw1ynAqVJLUnA4/42d6ebf1b9fa04dea52e5bd15a2fa619/photo-1505261476952-32e25cbfc755" />
            
            </figure><p>Photo by <a href="https://unsplash.com/@jenrielzany?utm_source=ghost&amp;utm_medium=referral&amp;utm_campaign=api-credit">Zany Jadraque</a> / <a href="https://unsplash.com/?utm_source=ghost&amp;utm_medium=referral&amp;utm_campaign=api-credit">Unsplash</a></p><p>Cebu’s history covers hundreds of years, with rich local culture and international influence dating from the first Spanish visitors to modern trade and shipping. One of its more popular dishes is lechon.</p><p>Cebu is known for its white sand beaches. In between making millions of websites and applications faster and more secure for Philippine Internet users, we hope our servers can get some excellent R&amp;R on the famous beaches.</p>
    <div>
      <h3>The Cloudflare Global Anycast Network</h3>
      <a href="#the-cloudflare-global-anycast-network">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4QdAtlcJB8W706nEB1BspK/8be3e771a5761692f54ada867d09935a/Cebu.png" />
            
            </figure><p>This map reflects the network as of the publish date of this blog post. For the most up to date directory of locations please refer to our <a href="https://www.cloudflare.com/network/">Network Map on the Cloudflare site</a>.</p> ]]></content:encoded>
            <category><![CDATA[March of Cloudflare]]></category>
            <category><![CDATA[Data Center]]></category>
            <category><![CDATA[Cloudflare Network]]></category>
            <guid isPermaLink="false">0FCIe0gaurUlYNKr6oJKy</guid>
            <dc:creator>Tom Paseka</dc:creator>
        </item>
        <item>
            <title><![CDATA[Macau: Cloudflare Data Center 127]]></title>
            <link>https://blog.cloudflare.com/macau/</link>
            <pubDate>Fri, 09 Mar 2018 01:00:00 GMT</pubDate>
            <description><![CDATA[ Cloudflare's 127th data center is now live in Macau, helping make over 7 million Internet facing applications safer and faster. This is our 44th data center in Asia.

 ]]></description>
            <content:encoded><![CDATA[ <p></p><p>Cloudflare's 127th data center is now live in Macau, helping make over 7 million Internet facing applications safer and faster. This is our 44th data center in Asia.</p><p><i>Cloudflare 將在澳門啟用全球第127個數據中心, 幫助超過 7,000,000 客戶的互聯網資產運行得更快、更安全。澳門也是我們在亞洲的第44個數據中心。</i></p><p><i>O 127º centro de dados da Cloudflare agora está em funcionamento em Macau, ajudando a tornar mais de 7 milhões de aplicações voltadas para a Internet de forma mais segura e rápida. Estamos felizes em compartilhar que este é o nosso 44º centro de dados na Ásia.</i></p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3HAXEwpbEXcL4NRh4OmqQO/2baacfd188e70a1bf71ad99b655dcfbc/15403760996_ac504f0cbd_z.jpg" />
            
            </figure><p><i><a href="https://creativecommons.org/licenses/by-nc-nd/2.0/">CC BY-NC-ND 2.0</a> <a href="https://flic.kr/p/ptbiu9">image</a> by <a href="https://www.flickr.com/photos/kidchen915/">kidchen915</a></i></p><p>Blending Chinese and Portuguese culture, just last year, Macau welcomed over 30 million visitors. Visit Macau to experience its unique and extravagant entertainment scene, see scenic spots such as the <a href="https://en.wikipedia.org/wiki/Ruins_of_St._Paul%27s">Ruins of St Paul</a> and <a href="https://en.wikipedia.org/wiki/Senado_Square">Senado Square</a>, attempt the world's <a href="https://www.macautower.com.mo/tower-adventure/tower-adventure/bungy-jump/">highest bungy jump</a> from Macau Tower, or enjoy the foodie paradise Macau delivers!</p><p><i>有著與眾不同的中國及葡萄牙文化融合景觀，澳門至去年為止已經吸引了三千萬遊客來一睹她的風采。你可以拜訪著名的娛樂景觀，像是聖保祿大教堂遺址，議事亭廣場，挑戰澳門的美食，亦可以選擇從全世界最高的高空彈跳地點ㄧ澳門旅遊塔上一躍而下。</i></p><p><i>Combinando cultura chinesa e portuguesa, o ano passado, Macau recebeu mais de 30 milhões de visitantes. Recomendamos que visite Macau para experimentar a cena de entretenimento única e extravagante, explore os pontos cénica, como ruínas de São Paulo, Praça do Senado, e ainda, tente o "bungee jumping" mais alto do mundo deste da Torre de Macau, ou aproveite o paraíso da gastronomia que Macau oferece!</i></p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/q1YXbRzF1GntQqmWekMOe/a4b159f78e74d01e89772d3402aa2367/egg-tarts.jpg" />
            
            </figure><p>While Macau and its bigger sister across the Pearl River, <a href="/hong-kong-data-center-now-online/">Hong Kong</a>, share many similarities, one big difference is egg tarts. Egg tarts are thought to have been introduced to Hong Kong in the 1940s, while Macau favors the Portuguese style, which dates back hundreds of years. Both have been adapted to local flavors and both are wonderful! Which do you like best?</p><p><i>雖然澳門以及香港有著許多相似之處，他們對於蛋塔的喜好卻有相當大的不同。話說蛋塔大約於1940年代左右才被引入香港，而澳門當地人偏好的葡式蛋塔卻已有百年歷史了。兩地都將他們對於蛋塔的喜好昇華成不可或缺的美食。您喜歡哪種多一點呢？</i></p><p><i>Enquanto Macau e a sua irmã do outro lado do rio das pérolas, Hong Kong, compartilham muitas semelhanças, uma grande diferença seriam algumas especialidades culinárias. Os deliciosos pastéis de nata, foram introduzidos em Hong Kong na década de 1940, enquanto Macau favorece o estilo português que remonta a centenas de anos. Ambos foram adaptados para sabores locais e ambos são maravilhosos! Qual é o seu favorito (pastel de nata)?</i></p><p>Next week, we'll announce even more cities around the globe added to the Cloudflare global network.</p><p><i>澳門只是個起點！從下週起，我們將會逐漸宣佈更多城市加入 Cloudflare 的全球網路。</i></p><p><i>Na próxima semana, anunciaremos ainda mais localidades ao redor do mundo a serem adicionadas à rede global Anycast da Cloudflare.</i></p> ]]></content:encoded>
            <category><![CDATA[March of Cloudflare]]></category>
            <category><![CDATA[Data Center]]></category>
            <category><![CDATA[Cloudflare Network]]></category>
            <guid isPermaLink="false">3WdoGoVw4kW97vBGbPrw82</guid>
            <dc:creator>Tom Paseka</dc:creator>
        </item>
        <item>
            <title><![CDATA[Power outage hits the island of Taiwan. Here’s what we learned.]]></title>
            <link>https://blog.cloudflare.com/power-outage-taiwan/</link>
            <pubDate>Wed, 16 Aug 2017 00:15:00 GMT</pubDate>
            <description><![CDATA[ At approximately 4:50pm local time (8:50am UTC) August 15, a major unexpected power outage hit the island of Taiwan with a significant amount of its power generation facilities going down. ]]></description>
            <content:encoded><![CDATA[ <p>At approximately 4:50pm local time (8:50am UTC) August 15, a major unexpected power outage hit the island of Taiwan with a significant amount of its power generation facilities going down.</p>
    <div>
      <h3>Blackout!</h3>
      <a href="#blackout">
        
      </a>
    </div>
    <p>Most of the island was hit with power outages, shortages, and rolling blackouts: street lights stopped working, many of Taipei’s shopping malls lost power, and much other infrastructure went dark.</p><p>Blackouts of this scale are very rare. During such an outage, you would usually expect Internet traffic to drop sharply, as houses and businesses lose power and are unable to connect to the Internet. I’ve experienced this in the past, working at consumer ISPs. As households and businesses lose power, so do the modems and routers that connect them to the Internet.</p><p>However, during yesterday's outage, something different happened. I'd like to share some insights from it.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/s1UDADeYSD7EjAu5VsriO/83860663ec82ea91b1b1308773032d62/Taipei101.jpg" />
            
            </figure><p>Photo: Taipei 101 Dark during the Blackout -Source: <a href="https://www.upi.com/Top_News/World-News/2017/08/15/Taiwans-economic-minister-resigns-amid-nationwide-blackout/9031502816439/">David Chang/EPA</a></p>
    <div>
      <h3>Even when the power is out, the Internet still operates</h3>
      <a href="#even-when-the-power-is-out-the-internet-still-operates">
        
      </a>
    </div>
    <p>Most telecom and data center facilities are built with redundancy in mind and have backup power generation. Our data center partner, <a href="http://www.chief.com.tw">Chief</a>, was able to switch to backup power generation without any service interruption, allowing our service to operate continuously.</p><p>The lack of interruption was also reflected in the many users still accessing the Internet. From our statistics, the number of requests didn’t drop, as illustrated by the graph below. At the beginning of the power outage, there was actually a spike in requests, likely as more people turned to the Internet for details of what was happening. The graph below shows a timeline of requests per second seen in our Taipei data center, with a red line marking the beginning of the power outage.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/FQ8QP7ZGaD4LBnVFvcSuj/13466fdbdeee510d868c51154329536c/tpe-requests-per-second-1.jpg" />
            
            </figure><p>Breaking down traffic between mobile and desktop clients, approximately 10% of clients shifted from desktop to mobile devices at the beginning of the outage. The graph also shows a daily spike around lunchtime, as many clients switch to their mobile phones during lunch.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/QxDRdybNveRbNZJg6vTpg/551dd2f17c06e109e064d6f666591277/mobile-v-desktop-rqs.jpg" />
            
            </figure><p>The shift to mobile devices did, however, cause a drop in bandwidth used, of approximately 25%. The following graph, showing our bandwidth usage to HiNet, the largest ISP in Taiwan, demonstrates this sharp drop.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6PZFIXkTbguc4OIrSG9mMl/2ec87b9b33461c416c130b5cf138c4d9/hinet-bps.jpg" />
            
            </figure><p>Power was fully restored around 21:40 local time (13:40 UTC); many users regained access earlier under power rationing, and Internet usage grew to reach its usual night-time peaks.</p><p>This power outage taught us that, in 2017, Internet usage does not necessarily decrease during a power outage. The number of requests can actually increase, while bandwidth usage drops, reflecting a shift to mobile devices.</p><p>Whilst the entire city lived in darkness, the Internet shined bright!</p> ]]></content:encoded>
            <category><![CDATA[Outage]]></category>
            <category><![CDATA[Asia]]></category>
            <guid isPermaLink="false">26pRgiF3kAGnXXl9HlT4y6</guid>
            <dc:creator>Tom Paseka</dc:creator>
        </item>
        <item>
            <title><![CDATA[Why Google Went Offline Today and a Bit about How the Internet Works]]></title>
            <link>https://blog.cloudflare.com/why-google-went-offline-today-and-a-bit-about/</link>
            <pubDate>Tue, 06 Nov 2012 09:09:00 GMT</pubDate>
            <description><![CDATA[ Today, Google's services experienced a limited outage for about 27 minutes over some portions of the Internet. The reason this happened dives into the deep, dark corners of networking.  ]]></description>
            <content:encoded><![CDATA[ <p>Today, Google's services experienced a limited outage for about 27 minutes over some portions of the Internet. The reason this happened dives into the deep, dark corners of networking. I'm a network engineer at CloudFlare, and I played a small part in helping ensure Google came back online. Here's a bit about what happened.</p><p>At around 6:24pm PST / 02:24 UTC (5 Nov. 2012 PST / 6 Nov. 2012 UTC), CloudFlare employees noticed that Google's services were offline. We use Google Apps for things like email, so when we can't reach their servers, the office notices quickly. I'm on the Network Engineering team, so I jumped online to figure out whether the problem was local to us or global.</p>
    <div>
      <h3>Troubleshooting</h3>
      <a href="#troubleshooting">
        
      </a>
    </div>
    <p>I quickly realised that we were unable to resolve any of Google's services, or even reach 8.8.8.8, Google's public DNS server, so I started troubleshooting DNS.</p><pre><code>$ dig +trace google.com</code></pre><p>Here's the response I got when I tried to reach any of google.com's name servers:</p><pre><code>google.com.        172800    IN    NS    ns2.google.com.
google.com.        172800    IN    NS    ns1.google.com.
google.com.        172800    IN    NS    ns3.google.com.
google.com.        172800    IN    NS    ns4.google.com.
;; Received 164 bytes from 192.12.94.30#53(e.gtld-servers.net) in 152 ms
;; connection timed out; no servers could be reached</code></pre><p>The fact that no servers could be reached meant something was wrong. Specifically, it meant that from our office network we were unable to reach any of Google's DNS servers.</p><p>I started to look at the network layer, to see if that's where the problem lay.</p><pre><code>PING 216.239.32.10 (216.239.32.10): 56 data bytes
Request timeout for icmp_seq 0
92 bytes from 1-1-15.edge2-eqx-sin.moratelindo.co.id (202.43.176.217): Time to live exceeded</code></pre><p>That was curious. Normally, we shouldn't see an Indonesian ISP (Moratel) in the path to Google. I jumped on one of CloudFlare's routers to check what was going on. Meanwhile, reports from around the globe on Twitter suggested we weren't the only ones seeing the problem.</p>
    <div>
      <h3>Internet Routing</h3>
      <a href="#internet-routing">
        
      </a>
    </div>
    <p>To understand what went wrong you need to understand a bit about how networking on the Internet works. The Internet is a collection of networks known as "Autonomous Systems" (AS). Each network has a unique number to identify it, known as an AS number. CloudFlare's AS number is 13335; Google's is 15169. The networks are connected together by the Border Gateway Protocol (BGP). BGP is the glue of the Internet, announcing which IP addresses belong to each network and establishing the routes from one AS to another. An Internet "route" is exactly what it sounds like: a path from an IP address on one AS to an IP address on another AS.</p><p>BGP is largely a trust-based system. Networks trust each other to say which IP addresses and other networks are behind them. When you send a packet or make a request across the network, your ISP connects to its upstream providers or peers and finds the shortest path from your ISP to the destination network.</p><p>Unfortunately, if a network announces that a particular IP address or network is behind it when in fact it is not, and that network is trusted by its upstreams and peers, then packets can end up misrouted. That is what was happening here.</p><p>I looked at the BGP routes for a Google IP address. The route traversed Moratel (AS 23947), an Indonesian ISP. Given that I was looking at the routing from California, and Google operates data centers not far from our office, packets should never be routed via Indonesia. The most likely cause was that Moratel was announcing a network that wasn't actually behind them.</p><p>The BGP route I saw at the time was:</p><pre><code>tom@edge01.sfo01&gt; show route 216.239.34.10

inet.0: 422168 destinations, 422168 routes (422154 active, 0 holddown, 14 hidden)
+ = Active Route, - = Last Active, * = Both

216.239.34.0/24    *[BGP/170] 00:15:47, MED 18, localpref 100
                      AS path: 4436 3491 23947 15169 I
                    &gt; to 69.22.153.1 via ge-1/0/9.0</code></pre><p>Looking at other routes, for example to Google's public DNS, traffic was stuck routing down the same (incorrect) path:</p><pre><code>tom@edge01.sfo01&gt; show route 8.8.8.8

inet.0: 422196 destinations, 422196 routes (422182 active, 0 holddown, 14 hidden)
+ = Active Route, - = Last Active, * = Both

8.8.8.0/24         *[BGP/170] 00:27:02, MED 18, localpref 100
                      AS path: 4436 3491 23947 15169 I
                    &gt; to 69.22.153.1 via ge-1/0/9.0</code></pre>
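<p>The eyeball check being applied to those AS paths can be sketched in a few lines. This is illustrative only: the "expected" upstream set is hypothetical, and real monitoring systems baseline paths from route collectors and history rather than a hard-coded list:</p>

```python
# Illustrative sketch: flag a route to Google whose AS_PATH contains an
# unexpected transit network. The expected-upstream set is hypothetical;
# real systems baseline this from historical BGP data.
GOOGLE_ASN = 15169
EXPECTED_TRANSITS = {4436, 3491}   # upstreams seen on the normal path

def path_is_suspicious(as_path):
    hops = [int(asn) for asn in as_path.split()]
    if hops[-1] != GOOGLE_ASN:     # the rightmost ASN should be the originator
        return True
    return not set(hops[:-1]) <= EXPECTED_TRANSITS

print(path_is_suspicious("4436 3491 15169"))        # False: the normal path
print(path_is_suspicious("4436 3491 23947 15169"))  # True: Moratel (23947) in the path
```

<p>This is exactly the anomaly visible in the router output: AS 23947 sitting in a path where it has no business being.</p>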
    <div>
      <h3>Route Leakage</h3>
      <a href="#route-leakage">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/22fmryxOS3Tm9oxu2HxywQ/316632dbcb404c1c21d9df929a7fe5fb/fingersyouhaveusedtodial.png.scaled500.png" />
            
            </figure><p>(Image Credit: The Simpsons)</p><p>Situations like this are referred to in the industry as "route leakage", as the route has "leaked" past its normal paths. This isn't an unprecedented event. Google previously suffered a <a href="http://www.renesys.com/blog/2008/02/pakistan-hijacks-youtube-1.shtml">similar outage</a> when Pakistan was allegedly trying to censor a video on YouTube and the national ISP of Pakistan null routed the service's IP addresses. Unfortunately, it leaked the null route externally. Pakistan Telecom's upstream provider, PCCW, trusted what Pakistan Telecom was sending them, and the routes spread across the Internet. The effect was that YouTube was knocked offline for around two hours.</p><p>The case today was similar. Someone at Moratel likely "fat fingered" an Internet route. PCCW, Moratel's upstream provider, trusted the routes Moratel was sending them. And, quickly, the bad routes spread. It is unlikely this was malicious; rather, it was a misconfiguration or an error evidencing some of the failings in the BGP trust model.</p>
    <div>
      <h3>The Fix</h3>
      <a href="#the-fix">
        
      </a>
    </div>
    <p>The solution was to get Moratel to stop announcing the routes they shouldn't be. A large part of being a network engineer, especially working at a large network like CloudFlare's, is having relationships with other network engineers around the world. When I figured out the problem, I contacted a colleague at Moratel to let him know what was going on. He was able to fix the problem at around 2:50 UTC / 6:50pm PST. Around 3 minutes later, routing returned to normal and Google's services came back online.</p><p>Looking at peering maps, I'd estimate the outage impacted around 3–5% of the Internet's population. The heaviest impact will have been felt in Hong Kong, where PCCW is the incumbent provider. If you were in the area and unable to reach Google's services around that time, now you know why.</p>
    <div>
      <h3>Building a Better Internet</h3>
      <a href="#building-a-better-internet">
        
      </a>
    </div>
    <p>This all is a reminder of how the Internet is a system built on trust. Today's incident shows that, even if you're as big as Google, factors outside of your direct control can impact your customers' ability to reach your site, so it's important to have a network engineering team that watches routes and manages your connectivity around the clock. CloudFlare works every day to ensure our customers get the best possible routes. We look out for all the websites on our network to ensure that their traffic is always delivered as fast as possible. Just another day in our ongoing efforts to <a href="https://twitter.com/search?q=%23savetheweb">#savetheweb</a>.</p>
    <div>
      <h4>Update: Tuesday, November 6 11:00am PST</h4>
      <a href="#update-tuesday-november-6-11-00am-pst">
        
      </a>
    </div>
    <p>Moratel says the issue was caused by an unexpected hardware failure that led to this abnormal condition; it was not a malicious attempt. Once contact was made, Moratel immediately shut down its BGP peering with Google while the hardware failure was investigated.</p><hr /><p><i>Thanks for reading all the way to the end. If you enjoyed this post, take a second to </i><a href="http://www.cloudflare.com/overview"><i>learn more about CloudFlare</i></a><i> or </i><a href="http://crunchies2012.techcrunch.com/nominate/?MTpDbG91ZEZsYXJl"><i>nominate us for the 2012 Crunchie Award for Best Technical Innovation</i></a><i>.</i></p> ]]></content:encoded>
            <category><![CDATA[Network]]></category>
            <category><![CDATA[BGP]]></category>
            <category><![CDATA[Outage]]></category>
            <category><![CDATA[Google]]></category>
            <category><![CDATA[Reliability]]></category>
            <category><![CDATA[Tech Talks]]></category>
            <category><![CDATA[Deep Dive]]></category>
            <guid isPermaLink="false">4faLB1sIG2FgoyFXgvQ6Qr</guid>
            <dc:creator>Tom Paseka</dc:creator>
        </item>
        <item>
            <title><![CDATA[CloudFlare is Now Part of the Hong Kong Internet Exchange (HKIX)]]></title>
            <link>https://blog.cloudflare.com/cloudflare-is-now-part-of-the-hong-kong-inter/</link>
            <pubDate>Wed, 28 Mar 2012 17:14:00 GMT</pubDate>
            <description><![CDATA[ CloudFlare is now connected to HKIX (Hong Kong Internet Exchange). HKIX is the largest Internet Exchange in Asia transferring around 200Gbit to its 160 ISP, Carrier and Content networks.  ]]></description>
            <content:encoded><![CDATA[ <p></p><p>CloudFlare is now connected to <a href="http://www.hkix.net">HKIX (Hong Kong Internet Exchange)</a>. HKIX is the largest Internet Exchange in Asia transferring around 200Gbit to its 160 ISP, Carrier and Content networks. HKIX is fundamental to the local Hong Kong ISP Market, with every provider in Hong Kong connected, as well as excellent regional coverage with networks from Thailand to Japan to Australia.</p><p>So, what does a connection to HKIX mean for CloudFlare users? Faster performance for your Hong Kong web surfers. Being a part of the HKIX allows us to deliver traffic to all Hong Kong web surfers within Hong Kong. This will improve web browsing performance by keeping the traffic local and the latency low.</p><p>Connecting to HKIX is one of the steps CloudFlare is working on to bring the Internet closer to you. Watch for more news to come over 2012!</p> ]]></content:encoded>
            <category><![CDATA[Hong Kong]]></category>
            <category><![CDATA[Peering]]></category>
            <category><![CDATA[Speed & Reliability]]></category>
            <category><![CDATA[Speed]]></category>
            <category><![CDATA[Cloudflare Network]]></category>
            <guid isPermaLink="false">IpcXLrpp6M65atwmnRELY</guid>
            <dc:creator>Tom Paseka</dc:creator>
        </item>
    </channel>
</rss>